How can artificial intelligence improve human life and help solve intractable world problems such as infant mortality and flooding? How can society guard against human obsolescence caused by AI, and how can it prevent a technological backlash?
Stanford University plans to tackle these questions and more through its new Institute for Human-Centered Artificial Intelligence, which was formally ushered in Monday with a daylong symposium that brought together leading artificial-intelligence experts from academia and business. Attendees heard from two keynote speakers: Microsoft founder Bill Gates and Gov. Gavin Newsom.
The institute is launching with 200 participating faculty members from the university's seven schools and will add at least 20 new faculty members in the humanities, engineering, medicine, the arts and basic sciences. It will also work with companies in finance, technology, health care and manufacturing. The initiative has already supported about 55 interdisciplinary research teams including a project to assist the resettlement of refugees; a system to improve health care delivery in hospital intensive-care units; and a study of the impact of autonomous vehicles on social governance and infrastructure.
The institute is led by philosophy professor and former provost John Etchemendy and Fei-Fei Li, a computer science professor and former director of the Stanford AI Lab. On Monday, Li said that about five years ago she became concerned that a very narrow group of people was developing technology and most AI developers were "guys in hoodies."
"There is a lack of (diverse) representation and a need for a more human mission," she said.
Gates said that AI has tremendous potential to impact health outcomes in developing countries, places where he has a particular interest and has made many contributions through the Bill and Melinda Gates Foundation.
"Up to 20 percent of children in very poor countries die before the age of 5 and 40 percent of the remainder will never develop physically or mentally to their full capacity. ... Their ability to learn and contribute is permanently damaged," he said.
AI's ability to examine data at the microscopic level has yielded valuable information that researchers could not have deduced with the naked eye, so to speak. Gates said that in one research study they found that azithromycin, an antibiotic costing only pennies a dose, could save 100,000 lives, but the drug disappears from patients' systems within a few days.
"So there's something about their microbiome (in the gut) that is having a profound effect, and I don't believe that without machine-learning techniques that we will ever be able to take the dimensionality of this problem and be able to find a solution about what is going on there."
Gates believes that in the next 10 years new medicines will be discovered at rapid paces because of AI.
For example, Gates provided funding for scientists to take DNA data from genetic-testing company 23andMe in Mountain View to study what's causing women in Africa to have premature births. They found a correlation with a malfunction in genes that process the mineral selenium, and women whose diets lacked the mineral were given selenium pills. In 18 months, researchers will learn how the pills have impacted the women's health. Based on preliminary data, Gates said they estimate a 15 percent reduction in premature births. For Africa as a whole, that would amount to 80,000 lives saved per year.
Gates said AI is a particularly useful opportunity to learn about solving issues related to education, such as understanding why dropout rates have not improved; why some students are motivated and others are not; how socioeconomic factors play a role; what makes some teachers so effective; and what interventions really work.
"That would be a very profound thing," he said. "I think it is a chance, given the incredibly general-purpose nature of these technologies, to find patterns and insights. It's a chance to do something in terms of social science policy — particularly education policy — also health care quality, health care costs. ... It's a chance to supercharge the social sciences with the most important by far being education itself," he said.
With AI, "we have a chance to supercharge the social sciences," but the development of human-centered AI requires responsible management, he said.
"The world hasn't had too many technologies that are both promising and dangerous. We have nuclear energy and nuclear weapons -- and so far, so good," he said.
"With AI, the power of it is so incredible that it will change society in some very deep ways. The fact that the technology is moving so quickly (as are) the policies and understanding around it -- even something as simple as face recognition -- what sort of awareness and use case should there be for that?"
"These are not issues that confine themselves to nation-state boundaries in a simple way like a lot of previous technologies," he said.
Other panelists agreed, raising questions over how we as a society talk about AI and power, AI's effect on social institutions and the true cost of an AI system.
Human skills change more slowly than technology, so there could be many more unintended consequences of AI unless there is more investment in skills training, said Erik Brynjolfsson, Schussel Family Professor of Management at the Massachusetts Institute of Technology. It's possible that some people, or even the majority, will be left behind. If these shifts in the labor force are left unaddressed, "there will be a technology backlash," he said.
For Kate Crawford, founder of the AI Now Institute at New York University, AI stirs up questions over who has power and who is experiencing the downside of these systems.
"You have to put power at the center of the analysis of how it will affect social institutions," she said.
Tracing everything required to make one of Amazon's Alexa smart assistants, institute researchers found that many of the environmental and labor costs are hidden, she said.
There is also a profound cost to civic life. The institute just published a yearlong study that looked at 13 jurisdictions in the U.S. currently covered by judicial orders because of illegal, biased or unconstitutional policing.
"What we found is that in many cases that 'dirty data' is being imported directly into predictive policing systems. So that means that those systems are actually directing police resources based on illegal data. So that has to make us think differently about different structures in our history, particularly about structural racism informing the AI tools of the future," she said.
And while we think of AI as inevitable, Crawford said she would pose another question: How do technologies serve our vision of the kind of world we want to live in rather than drive it?
Newsom said the country and the state are only contributing a pittance toward scaling up training for the AI revolution.
"We are not prepared for it as a state and certainly not prepared for it as a nation," he said. "We have an industrial age mindset in an information age ... If there's a tweetable moment, it's to make everyone smarter. ... It requires an order-of-magnitude change."
He noted that Singapore is offering every citizen a rebate and tax break, scaled by age, to improve their skills in response to the changing reality in every industry affected by AI. Singapore's road map is measured not in decades but in three to four years, a stark contrast to the lack of a national training program in the U.S. California's state budget has dedicated $10 million to increase AI training at community colleges, Newsom said.
"There is an empathy gap in technology," Newsom said, applauding the institute's work. "It's about growth and inclusion. It's the second part of that equation that we've got to wake up to," he said.
Stanford plans to build a 200,000-square-foot building that will house the Institute for Human-Centered Artificial Intelligence and a new Data Science Institute.
CORRECTION: This story previously attributed statements by Erik Brynjolfsson to Eric Horvitz.