Everyday objects can run artificial intelligence programs

Imagine using any object around you—a frying pan, a glass paperweight—as the central processor in a neural network, a type of artificial intelligence that loosely mimics the brain to perform complex tasks. That’s the promise of new research that, in theory, could be used to recognize images or speech faster and more efficiently than computer programs that rely on silicon microchips.

“Everything can be a computer,” says Logan Wright, a physicist at Cornell University who co-led the study. “We’re just finding a way to make the hardware physics do what we want.”

Current neural networks usually run on graphics processing chips. The largest ones perform millions or billions of calculations just to, say, make a chess move or compose a word of prose. Even on specialized chips, that can take lots of time and electricity. But Wright and his colleagues realized that physical objects also compute passively, merely by responding to stimuli. Canyons, for example, add echoes to voices without the use of soundboards.

To demonstrate the concept, the researchers built neural networks in three types of physical systems, each of which contained up to five processing layers. In each layer of the mechanical system, they used a speaker to vibrate a small metal plate and recorded its output with a microphone. In the optical system, they passed light through crystals. And in the analog-electronic system, they ran current through tiny circuits.
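Conceptually, each layer is just a physical transformation applied to an encoded signal, with trainable parameters injected alongside the data. The Python sketch below mimics that structure; the `simulated_physical_layer` function and its fixed random mixing are stand-ins invented for illustration, not the measured responses of the plates, crystals, or circuits in the study.

```python
import numpy as np

def simulated_physical_layer(signal, params, seed):
    """Stand-in for one physical transformation (vibrating plate, crystal,
    or analog circuit). A real PNN layer is the hardware's measured response;
    here a fixed random nonlinear map plays that role."""
    rng = np.random.default_rng(seed)             # fixed "physics" per layer
    combined = np.concatenate([signal, params])   # data and parameters enter together
    mixing = rng.standard_normal((combined.size, signal.size))
    return np.tanh(combined @ mixing / np.sqrt(combined.size))

def pnn_forward(inputs, layer_params):
    """Send the encoded input through each physical layer in sequence."""
    signal = np.asarray(inputs, dtype=float)
    for seed, params in enumerate(layer_params):
        signal = simulated_physical_layer(signal, params, seed)
    return signal
```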

In each case, the researchers encoded input data, such as unlabeled images, in sound, light, or voltage. For each processing layer, they also encoded numerical parameters telling the physical system how to manipulate the data. To train the system, they adjusted the parameters to reduce errors between the system’s predicted image labels and the actual labels.
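The paper reports training these systems with backpropagation through a differentiable digital model of each apparatus. Since that model isn't reproduced here, the sketch below continues the stand-in code above with a simpler, gradient-free update (SPSA-style): perturb all parameters at once, compare the resulting losses, and step against the estimated slope. It illustrates only the idea of adjusting parameters to reduce the error between predicted and actual labels, not the authors' method.

```python
def train_step(layer_params, inputs, label_onehot, lr=0.01, eps=1e-3):
    """One SPSA-style update: an illustration, not the authors' training scheme."""
    def loss(params):
        # Read the first few outputs as class scores (assumes input dim >= classes).
        logits = pnn_forward(inputs, params)[: len(label_onehot)]
        probs = np.exp(logits) / np.exp(logits).sum()    # softmax
        return -np.log(probs[np.argmax(label_onehot)])   # cross-entropy vs. true label

    rng = np.random.default_rng()
    deltas = [rng.choice([-1.0, 1.0], size=p.shape) for p in layer_params]
    plus  = [p + eps * d for p, d in zip(layer_params, deltas)]
    minus = [p - eps * d for p, d in zip(layer_params, deltas)]
    slope = (loss(plus) - loss(minus)) / (2 * eps)
    return [p - lr * slope * d for p, d in zip(layer_params, deltas)]

# Hypothetical usage: an 8-value input, 2 classes, 3 layers of 4 parameters each
rng = np.random.default_rng(0)
params = [rng.standard_normal(4) for _ in range(3)]
x, y = rng.standard_normal(8), np.array([1.0, 0.0])
params = train_step(params, x, y)                        # one training step
```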

In one task, they trained the systems, which they call physical neural networks (PNNs), to recognize handwritten digits. In another, the PNNs recognized seven vowel sounds. Accuracy on these tasks ranged from 87% to 97%, they report in this week’s issue of Nature. In the future, Wright says, researchers might tune a system not by digitally tweaking its input parameters, but by adjusting the physical objects—warping the metal plate, say.

Lenka Zdeborová, a physicist and computer scientist at the Swiss Federal Institute of Technology Lausanne who was not involved in the work, says the study is “exciting,” although she would like to see demonstrations on more difficult tasks.

“They did a good job of demonstrating the idea in different contexts,” adds Damien Querlioz, a physicist at CNRS, the French national research agency. “I think it’s going to be quite influential.”

Wright is most excited about PNNs’ potential as smart sensors that can perform computation on the fly. A microscope’s optics might help detect cancerous cells before the light even hits a digital sensor, or a smartphone’s microphone membrane might listen for wake words. These “are applications in which you really don’t think about them as performing a machine-learning computation,” he says, but instead as being “functional machines.”