Show a child a pencil and a crayon and they’ll instantly see the difference and forever remember which is which. You won’t have to go over it a second time. Show the same two writing instruments to an artificial intelligence machine, and the machine will eventually get it, but it might take tens of thousands of “lessons” before the poor thing can reliably tell the two apart.
That’s one of the current challenges AI researchers face. Machine learning systems acquire knowledge so quickly because their processors and software are fast, but the method of learning involves a lot of repetition, computing power, input from human programmers and trainers, and often (once you get past pencils and crayons) large data sets to work from.
The Defense Department’s Project Maven, for instance, last year released its first AI tool to analysts, a data-tagging technology that autonomously recognizes features (a type of truck, a person with a weapon) in still images. It works, but training the machines is still a time-consuming process that requires tagging each object 100,000 times before the process is put into practice.
Finding ways for machines to learn more efficiently would cut down on the computing power and the size of the data sets required, while possibly opening the door to more fluid and brain-like ways to learn.
Watch and Learn
The Army Research Laboratory and the University of Texas at Austin have developed techniques to improve machine learning using a very human tactic — providing feedback to a robot in real time. Using a deep-learning algorithm called Deep TAMER (Training an Agent Manually via Evaluative Reinforcement), a robot learns a task with a human trainer, watching video streams or observing a person perform the task, and then practicing it with the help of some simple feedback like “good job” and “bad job,” according to an Army report.
In one example, it took 15 minutes of human feedback with Deep TAMER to teach an AI program to beat even expert human players at Atari bowling — a game that has proved difficult for state-of-the-art AI programs to master. Adding a feedback loop enhances the learning process for robots or computer programs that work by seeing images, researchers said. The process also doesn’t require an expert programmer to provide the feedback, just someone who understands what the robot is trying to do.
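The core idea — an agent learning a model of the trainer’s “good job”/“bad job” signals and acting to maximize predicted praise — can be sketched in a few lines. This is a simplified, hypothetical illustration of TAMER-style learning, not the Army/UT implementation: Deep TAMER uses a deep network over video frames, while the table-based `TamerAgent` class and its state/action names here are stand-ins chosen to keep the example self-contained.

```python
class TamerAgent:
    """Toy sketch of TAMER-style learning: the agent learns a model
    H(state, action) of the human trainer's evaluative feedback and
    acts greedily with respect to it. (Deep TAMER replaces this lookup
    table with a deep neural network.)"""

    def __init__(self, actions, lr=0.5):
        self.actions = actions
        self.lr = lr
        self.h = {}  # predicted human feedback for each (state, action)

    def act(self, state):
        # Choose the action the trainer is predicted to rate highest.
        return max(self.actions, key=lambda a: self.h.get((state, a), 0.0))

    def give_feedback(self, state, action, signal):
        # signal: +1 for "good job", -1 for "bad job".
        key = (state, action)
        old = self.h.get(key, 0.0)
        self.h[key] = old + self.lr * (signal - old)


agent = TamerAgent(actions=["left", "right"])
# A trainer who praises "right" and critiques "left" in state "s0":
for _ in range(10):
    a = agent.act("s0")
    agent.give_feedback("s0", a, +1 if a == "right" else -1)

print(agent.act("s0"))  # the agent now favors the praised action
```

Note that the trainer never programs the agent — they only critique its behavior, which is why no programming expertise is needed.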
In the Army’s case, it could help support DOD’s plans for human-machine teams that will work together on surveillance, search and rescue, and other tasks in which a robot will work in an environment for extended periods of time.
"If we want these teams to be successful, we need new ways for humans to be able to quickly teach their autonomous teammates how to behave in new environments," said Dr. Garrett Warnell, an ARL researcher. "We want this instruction to be as easy and natural as possible. Deep TAMER, which requires only critique feedback from the human, shows that this type of real-time instruction can be successful in certain, more-realistic scenarios."
Taking a Shot
ARL and UT researchers aren’t alone in devising faster ways for AI systems to learn.
Companies such as Qualcomm and Nvidia are hoping to develop “one-shot learning” for neural networks that would allow systems to learn from small data sets, taking much of the pressure off the processing power required now. That would “upend the whole paradigm,” Charles Bergan, vice president of engineering at Qualcomm, told MIT Technology Review’s EmTech China conference earlier this year.
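One common way researchers approach one-shot learning — not necessarily Qualcomm’s or Nvidia’s method, which the talk did not detail — is nearest-prototype classification: a single labeled example per class is stored as a reference vector, and new inputs are assigned to whichever reference they are closest to. The `embed` function below is a placeholder for the pretrained feature extractor such systems rely on, and the pencil/crayon vectors are invented for illustration.

```python
import math


def embed(features):
    # Stand-in: in practice a pretrained network maps raw input
    # (e.g. an image) to a vector; here inputs are already vectors.
    return features


def one_shot_classify(query, support):
    """Nearest-prototype one-shot classification: `support` maps each
    class label to a single labeled example vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(support, key=lambda label: dist(embed(query), embed(support[label])))


# One labeled example per class is enough to classify new inputs.
support = {"pencil": [1.0, 0.0], "crayon": [0.0, 1.0]}
print(one_shot_classify([0.9, 0.2], support))  # -> pencil
```

The heavy lifting happens once, in the pretrained embedding; classifying a new category then needs only one example instead of thousands.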
Nvidia is working on creating smaller, more efficient algorithms and what it calls network pruning to make neural networks with fewer simulated neurons, leaving only those that contribute directly to the result.
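A simple form of pruning — magnitude pruning, shown here as an illustrative sketch rather than Nvidia’s specific technique — zeroes out the weights that contribute least to the result. Production systems prune whole neurons or connections and usually fine-tune the network afterward; the function name and numbers below are invented for the example.

```python
def prune_by_magnitude(weights, keep_fraction):
    """Keep only the largest-magnitude weights, zeroing the rest.
    Zeroed connections can then be skipped at inference time,
    shrinking the network's compute cost."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]


weights = [0.02, -0.8, 0.4, -0.05, 0.6]
print(prune_by_magnitude(weights, 0.4))  # -> [0.0, -0.8, 0.0, 0.0, 0.6]
```

Keeping 40 percent of the weights here preserves the two strongest connections while the near-zero ones — which barely affect the output — are dropped.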
“There are ways of training that can reduce the complexity of training by huge amounts,” Nvidia chief scientist Bill Dally told the conference.
IBM, meanwhile, is working the other side of the problem, “training” large data sets so the most important information is given to an algorithm first — as opposed to a variable, nonuniform feed — which speeds up the learning process for a machine. The company said it has produced a tenfold speed increase over existing training methods.
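The idea of feeding the most important data first — a curriculum-style ordering, sketched here generically since IBM’s specific method isn’t described in the source — amounts to scoring training examples and sorting them before they reach the algorithm. The scoring function and toy data below are hypothetical stand-ins.

```python
def order_by_importance(dataset, importance):
    """Present the examples an importance score ranks highest before
    the rest, instead of an arbitrary, nonuniform feed."""
    return sorted(dataset, key=importance, reverse=True)


# Toy dataset: (example name, importance score, e.g. current model loss)
data = [("easy", 0.1), ("hard", 0.9), ("medium", 0.5)]
ordered = order_by_importance(data, importance=lambda item: item[1])
print([name for name, _ in ordered])  # -> ['hard', 'medium', 'easy']
```

Front-loading the informative examples means the model makes its biggest updates early, which is one way an ordering scheme can speed up training.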
AI systems can’t learn like a 2-year-old yet, but new approaches to simplifying the training process are getting the machines closer to learning more like humans do, and as a result could make human-machine teaming more viable.