New Stanford artificial intelligence institute focuses on humanitarianism

University marks initiative with symposium featuring Bill Gates, Gov. Gavin Newsom

How can artificial intelligence help improve human life and solve entrenched world problems such as infant mortality and flooding? How does society protect against human obsolescence brought on by AI, and how does it prevent a technological backlash?

Stanford University plans to tackle these questions and more through its new Institute for Human-Centered Artificial Intelligence, which was formally ushered in through a daylong symposium on Monday that brought together leading artificial-intelligence experts from academia and business who heard from two keynote speakers: Microsoft founder Bill Gates and Gov. Gavin Newsom.

The institute is launching with 200 participating faculty members from the university's seven schools and will add at least 20 new faculty members in the humanities, engineering, medicine, the arts and basic sciences. It will also work with companies in finance, technology, health care and manufacturing. The initiative has already supported about 55 interdisciplinary research teams including a project to assist the resettlement of refugees; a system to improve health care delivery in hospital intensive-care units; and a study of the impact of autonomous vehicles on social governance and infrastructure.

The institute is led by philosophy professor and former provost John Etchemendy and Fei-Fei Li, a computer science professor and former director of the Stanford AI Lab. On Monday, Li said that about five years ago she became concerned that a very narrow group of people was developing technology and most AI developers were "guys in hoodies."

"There is a lack of (diverse) representation and a need for a more human mission," she said.

Gates said that AI has tremendous potential to impact health outcomes in developing countries, places where he has a particular interest and has made many contributions through the Bill and Melinda Gates Foundation.

"Up to 20 percent of children in very poor countries die before the age of 5 and 40 percent of the remainder will never develop physically or mentally to their full capacity. ... Their ability to learn and contribute is permanently damaged," he said.

AI's ability to analyze data at a microscopic level has yielded valuable information that researchers could not have deduced with the naked eye, so to speak. Gates said that in one research study, they found that azithromycin, an antibiotic costing only pennies a dose, could save 100,000 lives, but that it disappears from patients' systems within a few days.

"So there's something about their microbiome (in the gut) that is having a profound effect, and I don't believe that without machine-learning techniques that we will ever be able to take the dimensionality of this problem and be able to find a solution about what is going on there."

Gates believes that in the next 10 years new medicines will be discovered at a rapid pace because of AI.

For example, Gates provided funding for scientists to use DNA data from the Mountain View genetic-testing company 23andMe to study what's causing premature births among women in Africa. They found a correlation with a malfunction in genes that process the mineral selenium, and women whose diets lacked the mineral were given selenium pills. In 18 months, researchers will learn how the pills have affected their health. Based on preliminary data, Gates said they estimate a 15 percent reduction in premature births. For Africa as a whole, that would amount to 80,000 lives saved per year.

Gates said AI presents a particularly useful opportunity for solving issues in education, such as understanding why dropout rates have not really improved; why some students are motivated and others are not; how socioeconomic factors play in; what makes some teachers so effective; and which interventions really work.

"That would be a very profound thing," he said. "I think it is a chance, given the incredibly general-purpose nature of these technologies, to find patterns and insights. It's a chance to do something in terms of social science policy — particularly education policy — also health care quality, health care costs. ... It's a chance to supercharge the social sciences with the most important by far being education itself," he said.

The development of human-centered AI, however, requires responsible management, he said.

"The world hasn't had too many technologies that are both promising and dangerous. We have nuclear energy and nuclear weapons -- and so far, so good," he said.

"With AI, the power of it is so incredible that it will change society in some very deep ways. The fact that the technology is moving so quickly (as are) the policies and understanding around it -- even something as simple as face recognition -- what sort of awareness and use case should there be for that?"

"These are not issues that confine themselves to nation-state boundaries in a simple way like a lot of previous technologies," he said.

Other panelists agreed, raising questions over how we as a society talk about AI and power, AI's effect on social institutions and the true cost of an AI system.

Human skills change more slowly than technology, so there could be many more unintended consequences of AI unless there is more investment in skills training, said Erik Brynjolfsson, Schussel Family Professor of Management at the Massachusetts Institute of Technology. Some people, perhaps even a majority, could be left behind. If these shifts in the labor force are left unaddressed, "there will be a technology backlash," he said.

For Kate Crawford, founder of the AI Now Institute at New York University, AI stirs up questions over who has power and who is experiencing the downside of these systems.

"You have to put power at the center of the analysis of how it will affect social institutions," she said.

Tracing everything that goes into making one of Amazon's Alexa smart assistants, institute researchers found that many of the environmental and labor costs are hidden, she said.

There is also a profound cost to civic life. The institute just published a yearlong study that looked at 13 jurisdictions in the U.S. currently covered by judicial orders because of illegal, biased or unconstitutional policing.

"What we found is that in many cases that 'dirty data' is being imported directly into predictive policing systems. So that means that those systems are actually directing police resources based on illegal data. So that has to make us think differently about different structures in our history, particularly about structural racism informing the AI tools of the future," she said.
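The "dirty data" problem Crawford describes can be illustrated with a toy sketch (hypothetical numbers and a deliberately naive model, not any real policing system): if historical arrest records reflect patrol intensity as much as actual crime, a model trained on those records will direct more resources to the historically over-policed area even when the underlying crime rates are identical.

```python
# Toy illustration (purely hypothetical numbers, not any real system):
# two neighborhoods have the SAME true crime rate, but neighborhood A
# was historically patrolled twice as heavily, so twice as many of its
# incidents appear in the arrest records.

TRUE_CRIME_RATE = {"A": 10, "B": 10}       # actual incidents per week
HISTORICAL_PATROLS = {"A": 2.0, "B": 1.0}  # relative patrol intensity

# Recorded arrests are proportional to patrols, not to crime alone.
recorded = {n: TRUE_CRIME_RATE[n] * HISTORICAL_PATROLS[n]
            for n in TRUE_CRIME_RATE}      # A: 20.0, B: 10.0

# A naive "predictive" model: forecast future crime from recorded arrests.
total = sum(recorded.values())
predicted_share = {n: recorded[n] / total for n in recorded}

# The model sends roughly two-thirds of resources to A,
# although A and B are identical in reality.
print(predicted_share)
```

Real systems are far more complex, but the core failure mode (forecasting from records that encode past enforcement decisions rather than crime itself) is the same, and sending more patrols to A would generate still more arrest records there, closing the feedback loop.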

And while we think of AI as inevitable, Crawford said she would pose another question: How do technologies serve our vision of the kind of world we want to live in rather than drive it?

Newsom said the country and the state are only contributing a pittance toward scaling up training for the AI revolution.

"We are not prepared for it as a state and certainly not prepared for it as a nation," he said. "We have an industrial age mindset in an information age ... If there's a tweetable moment, it's to make everyone smarter. ... It requires an order-of-magnitude change."

He noted that Singapore is offering every citizen a rebate and tax break, scaled by age, to improve their skills and address the changing reality in every industry affected by AI. Its road map is measured not in decades but in three to four years, a stark contrast to the lack of a national training program in the U.S. California's state budget has dedicated $10 million to increase AI training at community colleges, Newsom said.

"There is an empathy gap in technology," Newsom said, applauding the institute's work. "It's about growth and inclusion. It's the second part of that equation that we've got to wake up to," he said.

Stanford plans to build a 200,000-square-foot building that will house the Institute for Human-Centered Artificial Intelligence and a new Data Science Institute.

CORRECTION: This story previously attributed statements by Erik Brynjolfsson to Eric Horvitz.





Posted by Here's an Idea
a resident of Crescent Park
on Mar 19, 2019 at 8:15 am

How about using AI to search through recent college applications to spot fraudulent test scores, bogus athletic participation, doctored photos, and things like that?

AI is great at this. It can find hidden patterns, such as cases of poor high school performance coupled with stellar SAT scores from applicants with high-income parents.

And Stanford is the ideal place for such research, since the university already has easy access to a huge collection of applications that apparently includes these exact types of problems.

I look forward to the new institute's published findings on this!

Posted by AnthroMan
a resident of Stanford
on Mar 19, 2019 at 9:29 am

The excessive use of 'metrics' (computerized statistical analysis) runs the risk of dehumanizing rather than further humanizing mankind.

A sense of morality & ethics cannot be programmed into a computer when it cannot even be reliably instilled in humans.

This is just another high-tech effort to circumvent & interfere with human life in general. It's OK for predicting hurricanes & the outcome of football games but as a means of making key decisions regarding humanity, a computer should not even be a part of the picture.

Computerized fascism will not be a pretty sight & advocates of allowing even more AI to monitor & control our lives represent the new Fuhrers of the Millennium.

Posted by Anon
a resident of Another Palo Alto neighborhood
on Mar 19, 2019 at 9:33 am

Posted by AnthroMan, a resident of Stanford

>> Computerized fascism will not be a pretty sight
-is not-

-- "Anon"

Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 19, 2019 at 10:10 am

I think this is a great step, but if they truly want a humanitarian and human-focused approach to AI, they also need what I call "temporal ergonomics" — understanding and improving how technology intersects with the functioning and autonomy of humans who have finite time.

This effort is still far too top-down — to get “a more human mission,” they must bring ordinary people who need artificial intelligence solutions in their daily lives into the problem-solving arena. Rather than just studying temporal ergonomics, or giving people a little money to cope with their technology (as in Singapore), or bringing in a few young women in hoodies, ordinary people need to be empowered to develop temporally ergonomic technology that works for individuals and continually makes them better.

Everyone should read MIT Professor Eric von Hippel’s book Democratizing Innovation (free on the internet). His group’s research found that people who innovate — do something new and unexpected that solves a problem — have certain characteristics. They experience a problem themselves and expect to benefit from solving it, and are willing to be the first to do so. Necessity is indeed the mother of invention. Big sports companies looking to innovate in bicycles didn’t create the mountain bike (now the face of the industry); it was created by enthusiasts who needed something the big companies would never have developed from all the focus groups in the world.

Closer to home, today I am faced yet again with the mind-numbing task of slogging through medical paperwork, replete with tricks and “mistakes” generated by my multi-billion-dollar artificial-intelligence-enabled insurance company. I will, yet again, have to use my limited cognitive, temporal, and financial resources — instead of working on writing about an actual solution to a serious problem affecting a lot of people — to avoid being bankrupted by my healthcare and to try yet again to force yet another corporate behemoth to honor their contract. The technological-age version of your money or your life. When I am done, maybe later next week, after sacrificing resources I could have spent doing something productive, I will then slog through the new tax rules, including yet hours and hours more of mind-numbing paperwork, with all its attendant, complicated side tasks.

Having personal AI could spare me and my family so much. Having an artificially intelligent assistant who could competently scan and sort the paperwork and keep track throughout the year (without me being the robot assistant doing all the interface, technical support, secretarial support, and backstop tasks), and discuss the tasks and issues at hand, taking direction and even doing tasks for me while allowing me to manage from an executive level, and solving the various technological problems that crop up, would free up so much of my life for my family.

Something as simple as having an artificial intelligence assistant instantly review End User License Agreements and privacy policies when I need it, given my own values, and then suggest alternatives to me to accomplish what I am after, would not only help me; if millions of other people had such assistance in their lives, it would redirect the incentives of the industry in a more positive direction in a million ways (such as not trying to insert traps in EULAs because no ordinary person has the time to read and evaluate all of them). It just seems like any technological need of today results in a cascade of nested technologically-related tasks of indeterminate (and uncontrollable) time drain. Temporally ergonomic artificial intelligence assistants could help level the playing field and allow ordinary people to be more effective with technology with less of all the burdens we have come to expect.

“Autonomy” is key, because the promise of technology, even the “solutions” discussed above, too often becomes a burden for ordinary people. For too much of my life, in too many ways, I have become the “robot” who has to spend my time, money, and mental energies serving the technology. For example, when things are “upgraded”, too often the upgrade serves a purpose for the technology company and requires more time, energy, and money from me while either adding nothing to my functionality, disrupting my workflow and requiring new tasks of me when the former technology was working fine, or worse, taking away my functionality altogether.

Instead of the Six-Million-Dollar-Man model of technology — making me better than I was, stronger, faster, which artificial intelligence could do now — the technology keeps hitting the reset button on MY life. It is a situation crying out for an artificial intelligence solution to make ordinary people like me far more functional, allowing us as creative humans the ability and autonomy to solve the problems (using artificial intelligence where we might). There is a big difference between a company presuming to solve a problem for millions of people, and empowering millions of people to solve the problems (often created by technology) in their own lives.

I went to an educational conference last year and in pretty much every session, regardless of what it was about, someone had a question about how they could solve yet another problem with how technology was seriously INTERFERING with the educational situation at hand. They need the technology to do what they are doing, yet the technology is practically booby-trapped for utter lack of temporal ergonomics.

Fei-Fei Li hit the nail on the head when she brought up the lack of diversity in technology development. The biggest problem with that from a humanitarian problem-solving standpoint is that the people developing the technology typically have no experience with the problems they are creating for everyone else. Young energetic males who have never experienced the burden of chronic health problems, never dealt with a confluence of crushing life circumstances like losing a home in a disaster while caretaking for an adult with Alzheimer’s, never had to sort through a crush of papers created by a hostile entity or unjust legal situation to save their business have no appreciation, for example, for how damaging an “attention merchant” economic model — employing brain science to essentially addict people to their technological devices and steal their time and autonomy — is in the lives of real people.

Fei-Fei Li: a bunch of fellow parents and I have been talking for a long time about writing a letter to Carnegie Mellon professors to request exactly this: help developing personal AI assistants WITH US, so that instead of technologists creating yet another burden (or way to replace humans), technology does things to make us better, in ways that level the playing field and take over tasks that currently only humans can do. Is there a place in this effort for us?

Posted by Annette
a resident of College Terrace
on Mar 19, 2019 at 11:27 am


We need to keep the human in humanity, so it is reassuring that Stanford has created this institute. A recent WSJ article, "The Autocrat's New Tool Kit," focused on how AI can be used to build a dystopian world. Truly scary potential; this institute's work is much needed.

Posted by CrescentParkAnon.
a resident of Crescent Park
on Mar 19, 2019 at 11:29 am

Reading about the Mafia boss that was assassinated in NY the other day I wonder with all this AI and NSA surveillance we have ... why is there even such a thing as a Mafia boss anymore?

If all this technology is not taking care of these criminals and other systems of corruption, what is it good for? ... because sooner or later all this technology and power will be used by the criminals against us - if it is not already. We seem to have a "boss" of some sort as our leader today, and a lot of the people who support him act like thugs.

Posted by Guiseppe
a resident of Greenmeadow
on Mar 19, 2019 at 1:22 pm

Too much focus on artificial intelligence...not enough on expanding actual human intelligence.

Posted by AnthroMan
a resident of Stanford
on Mar 19, 2019 at 8:14 pm

> Having an artificially intelligent assistant who could competently scan and sort the paperwork and keep track throughout the year (without me being the robot assistant doing all the interface, technical support, secretarial support, and backstop tasks), and discuss the tasks and issues at hand, taking direction and even doing tasks for me while allowing me to manage from an executive level, and solving the various technological problems that crop up, would free up so much of my life for my family.

It's called cybernetics & the concept/science has been around since the beginning of the Industrial Revolution. It replaces people with machines/robots & in theory, leads to higher production/efficiency/QA and...Human unemployment or the necessity for retraining individuals with new or different job skills.

AI does have its potential. Japan is creating very human-like robots with programmed personalities to serve as surrogate companions & lovers. This in turn could alleviate personal loneliness and perhaps even reduce domestic violence as destroying one's robot would be akin to tossing a chair through a television screen. No crime involved...just go out & buy a new robot mate.

Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 11:54 am

Ah, but AnthroMan, you have missed my point. My main point is not really about a specific tangible implementation of technology (e.g., cybernetics versus software), but about “temporally ergonomic” technology — technology focused always on making me more than I was in the whole context of my human finiteness. I want upgrades that always take me from where I am as a finite human being, already enabled by technology in the context of my existing life, that are designed to make me even better without requiring me in any way to backtrack or buy (and spend time patching together) new things to keep doing what I was already doing just fine.

What I am discussing is the opposite of replacing humans with technology — I am asking for technology that does what I do in my life as a human BUT THAT NO ONE ELSE CAN OR WILL (even if I could afford it), like making mincemeat out of my paperwork in the course and context of my life. I am then able to do more as a human because having that gives me back my time and autonomy. Human-focused technology should enable ME, without burdening my time, attention, finances, or life — it should be temporally ergonomic. It shouldn’t require me to be the tech support, secretary, repairperson, backstop for all interface tasks, amateur lawyer, just to use it.

The problem is that the entire thrust of the technological age has completely missed the point about temporal ergonomics. The focus has been, as you aptly pointed out, on replacing people, not on enabling them to be better than they were.

Let me give you a simple, current example: addiction to videogames.

There is a huge industry that makes great entertainment, with a dark side that I’m not even going to spend time describing, because it affects millions of people but doesn’t affect everyone equally, for many reasons. There are beneficial sides, too, which I think Jane McGonigal is a great evangelist for in her books, and which also don’t affect everyone equally.

Regardless, the beneficial side isn’t offered without the dark side. In some ways (but not all), they are inextricable: video games take advantage of our human brains in a similar way to the way movies take advantage of our human brains to draw us through a story. There is a reason it is more difficult for (most) people to get up and leave a movie at certain dramatic markers in the middle of a good movie than when the dramatic arc has resolved at the end. Human stories hack our brains, just a little.

But video games don’t design in the same kind of resolution that a half-hour sitcom or a 90-minute movie do. They (and the platforms they function on) are designed to just keep us there. I’m always surprised that so many people don’t know this, but a good place to start in understanding why the industry is so NOT temporally ergonomic is Anderson Cooper’s 60 Minutes stories on brain hacking: Web Link

Parents dealing with what I call the “Dementor on the desk” (nods to Harry Potter) that seems to suck their children’s consciousness and attention away can’t create their own family boundaries where their children get the benefits (as required from school) without also inviting in the dark sides. Because of the industry’s “brain hacking” and the “attention-merchant” economic model, as Tristan Harris says, it’s not fair to make it about the willpower of a child when there are a thousand people on the other side of that screen trying to keep them there. I do remember a time, before the graphical interface, when computers really weren’t addicting. The technology alone isn’t the problem.

The segment of our society most affected by this is families. Parents struggle with videogame addiction in a spectrum of ways, but what they don’t have is the choice to accept the good without the bad.

If I had the kind of artificially intelligent assistant in my home that I would wish for, I would be able to get help solving the problem, for example, to allow my children to play video games (even get the benefits of gaming that McGonigal describes), but I could ask the assistant to fix the addiction problem. For example, if my children wanted to play a specific videogame for 90 minutes, I could ask the AI assistant to write code to draw them through an arc of resolution, the same way a movie ending does, so that my kids could transition seamlessly to that and leave the game after a predetermined amount of time, entertained, happy, and ready to move on to something else. If my AI assistant and I came up with something really good, maybe we could even create whole original videogames together that created a satisfying experience from start to satisfying finish, over a predetermined amount of time decided on by the user, and sell it.

Even without writing a whole new videogame experience, I could envision an AI assistant that could monitor a person/child otherwise using a technological device and create agreeable countermeasures when attention-merchant tactics make it difficult for the child to optimally use the technology for education (with a MINIMUM of screen time) and LEAVE. Such an AI assistant could ONLY be designed to be temporally ergonomic in the real context of lives like mine, not in isolated labs staffed by a whole industry of people who have never experienced such problems.

I always point out to people that Steve Jobs had an assistant (person) whose only job was to make sure his technology worked the way he needed it to work so he could USE it for what he wanted, when he wanted, and not have to fiddle with all the problems the rest of us do, all those nested tasks of indeterminate time drain (from dealing with malfunctioning routers to choosing whether to spend time reading an EULA for privacy concerns). For the rest of us, an AI assistant could achieve the same thing. It wouldn’t replace a human being, since the rest of us mortals could never hire an assistant like Jobs did.

My point is that such technology would empower ME as a human being. My point is that I can wait until the cows (never) come home for technologists to create those solutions for me. But they won’t, they don’t have an incentive to and haven’t for most of the technological age, they don’t even remotely understand the problem, or have a population in the industry even capable of understanding it.

I once called in to a radio show to put these points in front of an AI expert, asking that AI be used to make ME more effective in my life, and he answered by saying the technology already does that, completely and utterly missing the point (and missing that he was WRONG). I’m afraid this is a pretty consistent reaction from people in the industry. They just. don’t. get it. But not long after, I called in to speak with a doctor who wrote a book about where technology was not meeting its promise and sometimes impeding medicine, and I brought up temporal ergonomics and how just creating technology to do a given task, or to replace a doctor doing it, doesn’t necessarily help the doctor become a better doctor. It can even create complexity that impedes the doctor’s effectiveness (which is what her book was in part about). The technologist completely missed my point, whereas the doctor totally got it, immediately. And now AI technologists are looking for where they can most easily replace doctors, instead of figuring out how to make existing doctors far, far more effective through temporally ergonomic technology. It’s not nearly the same thing.

If someone truly has a mission to humanize AI, then it must be first and foremost to democratize and distribute it, in a way that allows individuals to solve problems in their own lives — to give humans full control of their time and efforts. Temporal ergonomics is essential. The very first problems we ordinary humans will solve are those we face in ordinary life that technology has created, and AI in the hands of much bigger, more powerful, wealthier entities (like insurance companies and attention merchants) make far worse.

Posted by The Best Of Both Worlds
a resident of Portola Valley
on Mar 20, 2019 at 12:41 pm

> Japan is creating very human-like robots with programmed personalities to serve as surrogate companions & lovers.

This is the key...a personal assistant to handle all of AJL's administrative priorities while providing comfort sans any sexual harassment allegations.

Just remove the batteries when things get out of hand.

Posted by CrescentParkAnon.
a resident of Crescent Park
on Mar 20, 2019 at 2:30 pm

> I always point out to people that Steve Jobs had an assistant (person) whose only job was to make sure his technology worked the way he needed it to work so he could USE it for what he wanted, when he wanted,

That seems to be an unequivocal argument that he was just motivated by selling a fantasy device that really did not work. That he never really used his own devices well enough to know how to set them up and fix them. Technology has been a boondoggle for so long that we barely even perceive it any more.

A comment was made about GMOs in Steven Druker's book "Altered Genes, Twisted Truth," where a GMO industry spokesperson stated that if Americans want to be first in technology, they need to accept being guinea pigs.

Software products are routinely brought to market without testing, letting users complain about the things they do not like and fixing them on an as-needed basis. If you think that works, consider Boeing's latest crash, which early indications suggest was a software and training issue.

Posted by Looking for Owls
a resident of East Palo Alto
on Mar 20, 2019 at 3:33 pm

Great!, I thought to myself as the title of the article caught my eye.
Wisdom, not just narrow intelligence to counteract what Isaac Asimov formulated as: "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom."

That excitement turned into a cry-or-laugh reaction. I'm a medical doctor. When this group of Stanford interdisciplinary stars starts thinking that a low dose of antibiotics is the solution to child mortality in developing countries (to correct their microflora), or that DNA studies on women in Africa who give birth prematurely are the way to customize their diets (essentially), maybe we should just let AI take over?

"There are some ideas so absurd that only an intellectual could believe them."
- George Orwell

This is absolutely outrageous. Resources are limited. We have to do a better job at choosing and formulating problems, and then solving them in an equally disciplined way, and yes, AI might very well play its part.
And I'm still waiting for wisdom to catch up with science (which I highly respect).

Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 3:34 pm

"That seems to be unequivocal argument that he was just motivated by selling a fantasy device that really did not work."

Or, it's an indication of just how little respect technologists have for the rest of us (and our time), when the company that did the most for "humanizing" computers never developed a sense that other people value their time as much as Jobs did his.

I believe the next "killer app" of technology will be democratizing AI to allow people to never be burdened by their technology the way users/consumers have been for the last 30 years, but instead to be freed and allowed to become always better and more effective per their own goals.

The big revolutions in computers (aside from the usual obvious) are: going from no screen to a screen, and going from line input to a graphical interface. The next should/will be to free us from the constraints of rapidly obsoleted arcana that technology imposes on us in myriad ways. That is what Steve Jobs was buying his way out of with a human to do those tasks for him, so that he was free to just be effective with the technology.

Posted by Don
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 7:55 pm

This long conversation reminds me of a more succinct notion, a split in what is now loosely called AI. A little more than 50 years ago, there emerged from John McCarthy and others the kind of AI that imitates human activity. It proceeded for some decades but its approach was complex and ultimately not very successful. AI faded from view and underwent name changes like "machine intelligence". The other approach taken at the outset was called augmented intelligence, and its principal advocate was Douglas Engelbart. He simply wanted computing to enhance one's (or, more favorably, a group's) ability to tackle problems otherwise unsolvable. That has certainly come to pass in examples like the human genome. Imitating humans or augmenting them was the bifurcation a half century ago. The boundaries are now murky and AI has come roaring back with substantially new approaches. Just maybe the distinction is getting increasingly academic.

Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 10:37 pm

Douglas Engelbart was ahead of his time and had so much heart. If only that were the face of technological development!

Posted by Deng Zhao
a resident of Charleston Meadows
on Mar 21, 2019 at 3:17 pm

> Japan is creating very human-like robots with programmed personalities to serve as surrogate companions & lovers.

Same in China but not as friends and lovers. As soldiers.

Raw materials from recycling and technology from US and Japan + our own.

