
DARPA Wants AI to Learn Language as Human Babies Do

The Pentagon’s research wing is funding efforts to build AI language systems that learn more like people and less like machines.

The latest artificial intelligence project at the Pentagon’s research office is shedding new light on the phrase “mean what you say.”

The Defense Advanced Research Projects Agency on Thursday announced it would begin funding research to reshape the way AI language systems like Alexa and Siri learn to speak. Instead of crunching gargantuan datasets to learn the ins and outs of language, the agency wants the tech to teach itself by observing the world, as human babies do.

Using this approach, the Grounded Artificial Intelligence Language Acquisition, or GAILA, program aims to build AI tools that understand the meaning of what they’re saying instead of stringing together words based on statistics.

“Children learn to decipher which aspects of an observed scenario relate to the different words in the message from a tiny fraction of the examples that [machine-learning] systems require,” DARPA officials wrote in the solicitation. “ML technology is also brittle, incapable of dealing with new data sources, topics, media, and vocabularies. These weaknesses of ML as applied to natural language are due to exclusive reliance on the statistical aspects of language, with no regard for its meaning.”

Teams participating in the program will be expected to build a theoretical model for training AI systems to speak by associating audible sounds with visual cues—videos, images and live demonstrations. Unlike today’s systems, which rely on carefully labeled datasets and explicit training, the AI developed under GAILA would use logic and inference. While today’s tools only “know” what they’re explicitly told by developers and datasets, GAILA’s tech would be able to correctly interpret situations it’s never seen before.
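For readers curious what learning from observation rather than labeled data can look like, here is a deliberately tiny sketch of cross-situational word learning, a mechanism often used to model how children ground words in what they see. The episodes, words, and reliability threshold are invented for illustration; none of this reflects GAILA’s actual design or any team’s proposal.

```python
# Toy cross-situational word learner: it is never told which word labels which
# object. It only sees utterances paired with scenes and infers word meanings
# from how consistently a word and an object appear together.
# All data and names below are hypothetical, for illustration only.
from collections import defaultdict

# Each episode pairs an utterance (words heard) with a scene (objects visible).
episodes = [
    ({"the", "red", "ball"},   {"ball", "cup"}),
    ({"a", "blue", "cup"},     {"cup", "dog"}),
    ({"the", "ball", "rolls"}, {"ball", "dog"}),
    ({"the", "dog", "runs"},   {"dog", "cup"}),
    ({"a", "red", "cup"},      {"cup", "ball"}),
    ({"the", "dog", "barks"},  {"dog", "ball"}),
]

# Count how often each word is heard while each object is in view.
cooccur = defaultdict(lambda: defaultdict(int))
word_count = defaultdict(int)

for words, scene in episodes:
    for word in words:
        word_count[word] += 1
        for obj in scene:
            cooccur[word][obj] += 1

def guess_meaning(word, min_reliability=0.8):
    """Guess the object a word refers to, or None if no object is reliable.

    A candidate object must be present in at least min_reliability of the
    situations in which the word was heard.
    """
    if word not in cooccur:
        return None
    obj, hits = max(cooccur[word].items(), key=lambda kv: kv[1])
    return obj if hits / word_count[word] >= min_reliability else None

for w in ["ball", "cup", "dog", "the"]:
    print(w, "->", guess_meaning(w))
```

Running this, the concrete nouns that reliably co-occur with a single object get grounded (“ball” maps to the ball, “dog” to the dog), while a function word like “the” resolves to nothing, loosely mirroring the article’s point that meaning has to come from the observed world rather than from word statistics alone.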

Ultimately, teams will build a working prototype that learns and understands English text and speech from scratch, according to the solicitation.

“As the vocabulary, concepts, and linguistic constructions are augmented, the machine should gain the ability to describe increasingly complex events and relations,” officials wrote.

Projects will be divided into two phases—a feasibility study and a proof of concept—and be eligible for up to $1 million in funding. Proposals are due April 26, and research will officially kick off June 25.

The program comes as part of DARPA’s AI Exploration initiative, which provides rapid bursts of funding for a slew of high-risk, high-reward AI research projects. Over the next five years, the agency will pour some $2 billion into developing so-called “third-wave” artificial intelligence capable of logical reasoning and human-like communication.