Machine learning systems today are limited by their inability to continuously learn or adapt as they encounter new situations; their programs are frozen after training, leaving them unable to react to new, unexpected conditions once they are fielded. Further, training on new information to cover gaps in the original programming overwrites what the system has already learned. Overcoming this stumbling block requires taking the system offline and retraining it on a dataset that incorporates the new information, a long and grueling process that DARPA’s Lifelong Learning Machines (L2M) program is working to eliminate, DARPA reported in a release.
“The L2M program’s prime objective is to develop systems that can learn continuously during execution and become increasingly expert while performing tasks, are subject to safety limits, and capable of applying previous skills and knowledge to new situations, without forgetting previous learning,” said Dr. Hava Siegelmann, program manager in DARPA’s Information Innovation Office (I2O). “Though complex, it is an area where we are making significant progress.”
DARPA first announced L2M in 2017, and research and development is well underway on next-generation AI systems and their components, as well as on learning mechanisms in biological organisms that could be translated into computational processes. L2M supports a large base of 30 performer groups through grants and contracts of varying duration and size, DARPA said.
Today, L2M researcher Francisco J. Valero-Cuevas, professor of biomedical engineering and biokinesiology at the USC Viterbi School of Engineering, along with doctoral students Ali Marjaninejad, Dario Urbina-Melendez, and Brian Cohn, published results from their exploration of bio-inspired AI algorithms. In an article featured on the cover of the March issue of Nature Machine Intelligence, Valero-Cuevas’ team details its creation of an AI-controlled robotic limb, driven by animal-like tendons, that taught itself a walking task and even recovered automatically when its balance was disrupted.
Behind the USC researchers’ robotic limb is a bio-inspired algorithm that can learn a walking task on its own after only five minutes of “unstructured play” – random movements that enable the robot to learn about its own structure as well as its surrounding environment. The robot’s ability to learn by doing is a significant advance toward lifelong learning in machines. Current machine learning approaches rely on pre-programming a system for all potential scenarios, which is complex, labor-intensive, and inefficient, DARPA said. What the USC researchers have accomplished shows that AI systems can learn from relevant experience, finding and adapting solutions to challenges over time.
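The idea behind “unstructured play” can be sketched in a toy form. The snippet below is a hypothetical illustration, not the team’s actual algorithm: the “robot” issues random motor commands, observes the resulting motion, fits a simple internal model of its own body from that babbling data, and then uses the model to hit a target motion. The linear body model, the number of babbling trials, and all variable names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown body dynamics: how 3 motor commands map to 2D limb motion.
# (A stand-in for the real tendon-driven limb; the robot cannot see this.)
true_map = rng.normal(size=(3, 2))

# Phase 1: unstructured play - random motor commands and observed outcomes.
commands = rng.uniform(-1.0, 1.0, size=(50, 3))
outcomes = commands @ true_map  # the robot only records these pairs

# Phase 2: learn an internal model from the babbling data (least squares).
learned_map, *_ = np.linalg.lstsq(commands, outcomes, rcond=None)

# Phase 3: exploit the learned model - find a command that should
# produce a desired limb motion, without ever touching true_map.
target = np.array([0.5, -0.2])
command, *_ = np.linalg.lstsq(learned_map.T, target, rcond=None)

achieved = command @ learned_map  # predicted motion for that command
```

After only the random-play phase, the learned model is accurate enough to invert for goal-directed movement, which is the essence of learning about one’s own structure by doing rather than by pre-programming.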
Siegelmann noted, “We’re at a major moment of transition in the field of AI. Current fixed methods underlying today’s smart systems will quickly give way to systems capable of learning in the field. The missing ingredients to safer, more flexible, and more useful AI are the abilities to both learn while in operation and to apply learning to new circumstances for which the system was not previously trained. These abilities are necessary, for instance, for complex systems like self-driving cars to become truly functional. Incorporating L2M technologies will allow them to become increasingly expert as they drive in different conditions and will make them safer than human-driven cars. Professor Valero-Cuevas and his team have successfully taken us closer to that goal; that’s what the L2M project is about.”
To read the full article, please visit: Nature Machine Intelligence