DARPA wants AI with common sense

The Machine Common Sense program aims to help intelligent systems understand their world, communicate with people and learn from new experiences.

The Defense Advanced Research Projects Agency is planning to release a broad agency announcement for the Machine Common Sense program, part of its $2 billion AI Next campaign.

Common sense “has been a big problem in AI for decades,” according to Dave Gunning, a program manager within the Information Innovation Office at DARPA. “This is one of the biggest barriers between narrow AI, which is what we have plenty of today, and kind of more-general AI we’d like to have more of in the future.”

Common sense is, well, common among people. It’s always operating in the background, helping to fill in gaps in everyday conversations and experiences and helping humans relate to the world around them.

“If I ask you if an elephant fits through a doorway, you immediately say no,” Gunning told GCN. “You don’t have to calculate the size and volume of the elephant, you just know that automatically.”

Without the insights provided by common sense, an intelligent system may not understand its world, communicate clearly with people, behave reasonably in unforeseen situations or learn from new experiences, Gunning said in an agency release.

DARPA plans to research three main areas of common sense over the course of the four-year project. First, intuitive physics -- the knowledge of spaces, objects and places that explains why an elephant won’t fit through a doorway. Second, intuitive psychology -- a general understanding of people and their goals that explains why two people yelling at each other are probably arguing or that people walking into a restaurant are likely hungry. Finally, basic facts -- the information an average adult should know.

The research agency is tackling this problem by advancing machine learning and compiling a large crowdsourced repository of common sense knowledge that machines can plug into. But it also plans to look at the latest research in developmental psychology to get a better idea of how humans learn at a young age.

A one-year-old child has a basic understanding of people, object permanence, change, causation and spatial reasoning. “And that actually develops at some point, they learn that,” Gunning said. “There’s some foundation there that we really need to do a better job of capturing.”

But how will we know when a machine actually has common sense? That’s something that the Allen Institute for AI is already exploring. The Institute developed a test with 113,000 multiple-choice questions about situations that an AI model with common sense should be able to answer, such as this example from an Allen Institute research paper:

On stage, a woman takes a seat at the piano. She

  a) sits on a bench as her sister plays with the doll.
  b) smiles with someone as the music plays.
  c) is in the crowd, watching the dancers.
  d) nervously sets her fingers on the keys.

Humans can easily infer that when a woman sits down to play the piano on stage, she's probably nervous when she sets her fingers on the keys. AI models, however, struggle to get the correct answer, especially when the other choices are stylistically and contextually similar to the correct answer. The researchers recruited humans to take the test and used those responses as a benchmark for machine performance.
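To give a rough sense of how such a benchmark is scored, the sketch below shows a minimal evaluation loop in Python. The item format and the choose_ending() stand-in are illustrative assumptions, not the Allen Institute's actual code or data schema.

    import random

    # Hypothetical sketch of scoring a model on a multiple-choice
    # common-sense benchmark like the one described above.
    questions = [
        {
            "context": "On stage, a woman takes a seat at the piano. She",
            "choices": [
                "sits on a bench as her sister plays with the doll.",
                "smiles with someone as the music plays.",
                "is in the crowd, watching the dancers.",
                "nervously sets her fingers on the keys.",
            ],
            "answer": 3,  # index of the ending human annotators chose (d)
        },
        # ... roughly 113,000 such items in the full benchmark
    ]

    def choose_ending(context, choices):
        # Stand-in for a model: a real system would score each
        # (context, choice) pair and pick the most plausible ending.
        # Random guessing lands near 25 percent on four-way questions.
        return random.randrange(len(choices))

    def accuracy(items):
        correct = sum(
            1 for q in items
            if choose_ending(q["context"], q["choices"]) == q["answer"]
        )
        return correct / len(items)

    print(f"accuracy: {accuracy(questions):.0%}")

The benchmark's headline number is simply this accuracy over the full question set, with human test-takers providing the reference score.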

"Despite the recent AI successes, common sense -- which is trivially easy for people  --  is remarkably difficult for AI," Oren Etzioni, the CEO of the Allen Institute for AI, said in a statement earlier this year.  "No AI system currently deployed can reliably answer a broad range of simple questions such as: 'If I put my socks in a drawer, will they still be in there tomorrow?' or 'How can you tell if a milk carton is full?'"

So far, the Institute has run the best machine learning models against the test, and they correctly answer about 55 percent of the questions. Around 90 percent would be a desirable rate, Gunning said.

“I don’t know if we will get to 90 percent, to tell you the truth, because I think that would be pretty hard. I would hope by the end of the program we’re halfway there,” or around 70 to 75 percent, he said.

“I’ve worked in AI for more years than I can count … and have at different times worked on this problem in different variations of the technology,” Gunning said. “My deep belief is the magic answer is somehow buried in what human children know at one year old.”