The archerfish is one of Southeast Asia’s deadliest sharpshooters. When a tasty insect or spider catches its eye on the banks of a mangrove forest, the small, striped swimmer fires a precise, high-powered jet of water from its mouth, knocking the bug into the river, where it can be gobbled up.
To do so, the fish must distinguish insects and spiders from a variety of flowers, twigs, and other objects—no easy feat for a creature separated from mammals by about 450 million years of evolution. “Their brain processing power, we presume, is a lot smaller due to their brain size,” says Cait Newport, a visual ecologist at the University of Oxford. “So how are they solving these really complicated problems?”
To find out, neuroscientist Ronen Segev and colleagues trained five archerfish to shoot at images on a computer screen hanging above tanks in Segev's lab at Ben-Gurion University of the Negev. Archerfish don't need extra incentive to snipe bugs in the wild, but they required a bit of coaxing to spit at the motionless, gray-scale facsimiles on screen. The researchers rewarded the fish with food pellets when they shot jets of water at blinking squares on the screen. Once the fish were content to spit at digital targets, the researchers began to show them photos of objects on a white background. The fish only received rewards when they shot at insects and spiders, not nonanimal objects such as leaves or flowers.
Then, the scientists showed the archerfish 800 pairs of images the fish had never seen before, each pair containing one insect or spider and one nonanimal object. The fish shot at the animals about 70% of the time, the team reports this month in the Journal of Experimental Biology. A different set of fish that had been trained to shoot at leaves and flowers rather than bugs was just as accurate at categorizing the images; those fish shot at nonanimals about 70% of the time. In both cases, the fish were able to apply what they knew about bugs they’d already seen to distinguish plant from prey. “The fact that they can actually generalize in such an easy way was a surprise,” Segev says.
To figure out how the fish discriminate, the researchers turned to a computer model. The model assessed which visual properties—such as roundness, symmetry, and texture—it needed to think like an archerfish. The animals rely mostly on an object’s shape to classify it, the model suggested; texture was far less important. And indeed, when the team showed the archerfish images of bugs and plants containing only shape information (the object’s silhouette) or only texture (a circle depicting the surface of the object), the fish had far more success with the silhouettes.
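The idea that a single shape descriptor can separate compact, plant-like silhouettes from irregular, prey-like ones can be illustrated with a toy sketch. Everything here is hypothetical: the paper's actual model, its features, and its threshold are not described in this article, and the `roundness` metric, the `classify` rule, and the 0.5 cutoff are illustrative choices, not the researchers' method.

```python
import math

def roundness(silhouette):
    """Isoperimetric roundness 4*pi*A / P^2 for a binary grid:
    ~0.79 for a filled square, lower for elongated, leggy shapes.
    (Illustrative descriptor, not the feature used in the study.)"""
    h, w = len(silhouette), len(silhouette[0])
    area = sum(row.count(1) for row in silhouette)
    # Perimeter: count exposed edges of filled cells.
    perim = 0
    for y in range(h):
        for x in range(w):
            if silhouette[y][x]:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or not silhouette[ny][nx]:
                        perim += 1
    return 4 * math.pi * area / perim ** 2

def classify(silhouette, threshold=0.5):
    """Toy rule: compact, round blobs -> 'plant' (e.g., a flower head);
    irregular outlines -> 'animal'. The threshold is an assumption."""
    return "plant" if roundness(silhouette) >= threshold else "animal"

# A filled square reads as compact; a thin strip reads as irregular.
flower_like = [[1] * 4 for _ in range(4)]   # roundness = pi/4 ~ 0.79
bug_like = [[1] * 8]                        # roundness ~ 0.31
print(classify(flower_like))  # plant
print(classify(bug_like))     # animal
```

Texture would require a second channel of information entirely (surface patterning within the outline), which is consistent with the study's finding that silhouettes alone carried most of the signal.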
“It’s a very good paper,” says Newport, who was not involved with the research.
Segev notes that the study only used motionless, gray-scale images. That means the archerfish were missing information about color and movement, which they would have when hunting in the wild. It’s likely that fish rely on more than just shapes to distinguish plant from animal, he says.
The relatively simple solutions fish use to recognize objects could help engineers design computer visual systems, like the ones needed for autonomous cars, Segev adds. But he also says it’s important to study archerfish and other piscine swimmers for their own sake, as they make up the majority of vertebrates alive today. “Fish are smart,” he says. “They are amazing.”