By Sandra I. Erwin
The next frontier for the robotics industry is to build machines that think like humans. Scientists have pursued that elusive goal for decades, and they believe they are now just inches away from the finish line.
A Pentagon-funded team of researchers has constructed a tiny machine that would allow robots to act independently. Unlike traditional artificial intelligence systems that rely on conventional computer programming, this one “looks and ‘thinks’ like a human brain,” said James K. Gimzewski, professor of chemistry at the University of California, Los Angeles.
Gimzewski is a member of the team that has been working, under the sponsorship of the Defense Advanced Research Projects Agency, on a program called “physical intelligence.” This technology could be the secret to making robots that are truly autonomous, Gimzewski said during a conference call hosted by Technolink, a Los Angeles-based industry group.
This project does not use standard robot hardware with integrated circuitry, he said. The device his team constructed can perform human-like actions without being programmed the way a traditional robot is, Gimzewski said.
Participants in this project include Malibu-based HRL Laboratories (formerly Hughes Research Laboratories) and the University of California, Berkeley’s Freeman Laboratory for Nonlinear Neurodynamics. The latter is named after Walter J. Freeman, who has been working for 50 years on a mathematical model of the brain based on electroencephalography data. EEG is the recording of electrical activity in the brain.
What sets this device apart is its nanoscale interconnected wires, which form billions of connections like those in a human brain and allow the device to remember information, Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because the brain’s synaptic structure is so complex, most artificial intelligence projects so far have been unable to replicate it.
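The idea of a synthetic synapse that both carries signals and stores memory can be illustrated with a toy model. This is a hypothetical sketch for readers, not the team’s actual device: it assumes a memristor-like link whose conductance strengthens with use, so that memory and processing live in the same physical connection.

```python
# Illustrative sketch only -- NOT the DARPA/UCLA device. Models a synthetic
# synapse as a connection whose strength (conductance) changes as signals
# pass through it, so the link itself "remembers" its history.

class SyntheticSynapse:
    """A toy memristor-like link: carrying a signal strengthens the link."""

    def __init__(self, conductance=0.1, learning_rate=0.05):
        self.conductance = conductance      # current strength of the link
        self.learning_rate = learning_rate  # how fast use reinforces it

    def transmit(self, signal):
        out = signal * self.conductance
        # Transmitting reinforces the connection: computation and memory
        # happen in the same place, unlike a conventional CPU that shuttles
        # data between separate memory and processor.
        self.conductance = min(1.0, self.conductance
                               + self.learning_rate * abs(signal))
        return out

syn = SyntheticSynapse()
first = syn.transmit(1.0)
later = syn.transmit(1.0)  # same input, stronger response: the link remembers
assert later > first
```

In a real device, billions of such links self-organize into a network; the point of the sketch is only that the connection, not a separate program, holds the learned state.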
A “physical intelligence” device would not require a human controller the way a robot does, said Gimzewski. The applications of this technology for the military would be far reaching, he said. An aircraft, for example, would be able to learn and explore the terrain and work its way through the environment without human intervention, he said. These machines would be able to process information in ways that would be unimaginable with current computers.
Artificial intelligence research over the past five decades has not been able to generate human-like reasoning or cognitive functions, said Gimzewski. DARPA’s program is the most ambitious he has seen to date. “It’s an off-the-wall approach,” he added.
Studies of the brain have shown that one of its key traits is self-organization. “That seems to be a prerequisite for autonomous behavior,” he said. “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” This could represent a revolutionary breakthrough in robotic systems, said Gimzewski.
It is not clear, however, that the Pentagon is ready to adopt this technology for weapon systems. The Obama administration’s use of drones in “targeted killings” of terrorist suspects has provoked a backlash and prompted the Pentagon to issue new rules for the use of robotic weapons. “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” said a November 2012 Defense Department policy statement. Autonomous weapons, the document said, must “complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, [must] terminate engagements or seek additional human operator input before continuing the engagement.”