Can AI Systems Learn How to Learn?

Their limitations make AI systems akin to a cartoon genius who can solve the riddles of the universe but can’t remember how to make toast.

Artificial intelligence machines are good at what they do, but how smart are they really?

A supercomputer toppling a grandmaster at chess is old hat, with IBM’s Deep Blue beating Garry Kasparov at the end of the last millennium. DeepMind’s artificial intelligence system AlphaGo beat the world champion at the even more complex game of Go in 2016, and then AlphaGo Zero, which taught itself the game entirely through self-play rather than by studying human games, wiped the floor with AlphaGo.

In recent years, AI systems, whether used in games, medical research or self-driving cars, have shown an extraordinary ability to learn and learn fast — AlphaGo Zero defeated the version of AlphaGo that had beaten the world champ just three days after it started learning the game.

Big whoop. As much as machine learning’s neural networks have accomplished, they still, as Morpheus told Neo in “The Matrix,” live in a world based on rules. Neural networks are adaptable within a selected domain (hence their ability to learn) but not outside it. They tend to focus on one particular area, whether identifying signs of cancer or recognizing patterns of financial fraud, which leaves them limited in a savant-like way: a cartoon genius who can solve the riddles of the universe but can’t remember how to make toast.

An AI robot trained to play pool, for instance, most likely would quickly learn all the angles, “see the table,” plan out shots in advance and clean up. But if you changed the rules — say, added an odd wrinkle by playing without hitting the cue ball — it wouldn’t adapt the way human players would. It would make the same mistakes repeatedly and wind up buying drinks all night. To succeed under the new “rules,” it would have to be taken from the game and reprogrammed.

Adding a Human Touch

The Pentagon’s research arm is looking to add a human element to the learning mix. The Defense Advanced Research Projects Agency earlier this year launched the Lifelong Learning Machines, or L2M, program with the goal of enabling machines to employ a level of biological intelligence, learning on the fly and on the job.

“Life is by definition unpredictable. It is impossible for programmers to anticipate every problematic or surprising situation that might arise, which means existing ML systems remain susceptible to failures as they encounter the irregularities and unpredictability of real-world circumstances,” said Hava Siegelmann, DARPA’s L2M program manager. “Today, if you want to extend an ML system’s ability to perform in a new kind of situation, you have to take the system out of service and retrain it with additional data sets relevant to that new situation. This approach is just not scalable.”

When a system tries to learn new information outside its original training, the new learning disrupts everything the machine has already absorbed, resulting in what Siegelmann called “catastrophic forgetting,” as IEEE Spectrum reported.
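For readers who want to see the effect rather than take it on faith, here is a minimal sketch of catastrophic forgetting. It assumes PyTorch, and the two toy “tasks” are invented purely for illustration; nothing here comes from the L2M program. A small network is trained on one task and then on a second task with different rules, with no replay of the old data, and its accuracy on the first task slides back toward chance.

```python
# A toy illustration of catastrophic forgetting (assumes PyTorch is installed;
# the tasks and network are invented for demonstration, not drawn from L2M).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(axis):
    # Binary labels depend on a single input coordinate: axis 0 for Task A,
    # axis 1 for Task B -- two sets of "rules" that contradict each other.
    x = torch.rand(512, 2)
    y = (x[:, axis] > 0.5).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

task_a = make_task(axis=0)  # the original game
task_b = make_task(axis=1)  # the rules change

for x, y in (task_a, task_b):  # train sequentially, never revisiting old data
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    print("Task A accuracy:", round(accuracy(model, *task_a), 3))

# Typical result: Task A accuracy is near 1.0 after the first loop and falls
# toward 0.5 (chance) after training on Task B, because the new gradients
# overwrite the weights that encoded Task A.
```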

It’s why learning machines can excel at chess and Go but don’t play open-ended role-playing games such as “Dungeons & Dragons.” AI systems have generated some interesting D&D spells, albeit ones easy to spot as computer generated, but they don’t play the game itself.

L2M is pursuing two technical areas: the first seeks to develop a framework for applying lessons already learned to fresh data or new circumstances; the second looks to examples of how biological systems learn and adapt.

“Life has had billions of years to develop approaches for learning from experience,” Siegelmann said. “There are almost certainly some secrets there that can be applied to machines so they can be not just computational tools to help us solve problems but responsive and adaptive collaborators.”

The four-year program is spreading $65 million across a variety of projects, and the research agency notes it isn’t expected, at this point, to produce marketable products, but to make progress. Sixteen groups have been chosen, though DARPA says it’s still open to other proposals.