Fake faces created by AI look more trustworthy than real people

Synthetic human faces are so convincing they can fool even trained observers, and they may be highly effective for use in scams


14 February 2022


A collage of fake faces generated by an AI

Anatolii Babii / Alamy

Artificial intelligence can create such realistic human faces that people can’t distinguish them from real faces – and they actually trust the fake faces more.

Fictional, computer-generated human faces are so convincing they can fool even trained observers. They can be easily downloaded online and used for internet scams and fake social media profiles.

“We should be concerned because these synthetic faces are incredibly effective for nefarious purposes, for things like revenge porn or fraud, for example,” says Sophie Nightingale at Lancaster University in the UK.


AI programs called generative adversarial networks, or GANs, learn to create fake images that are less and less distinguishable from real ones by pitting two neural networks against each other: one network generates candidate images while the other tries to tell them apart from genuine photographs, and each improves in response to the other.
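That adversarial loop can be sketched in a few lines. The following is a deliberately tiny, illustrative example (the one-dimensional "data" and all parameter names are invented for this sketch, not taken from the study): a two-parameter generator tries to mimic samples from a fixed normal distribution, while a logistic-regression discriminator tries to tell real samples from generated ones, and each is updated against the other by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from a normal distribution N(4, 1.25).
def real_samples(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator: an affine map of uniform noise, parameters (a, b) -> a*z + b.
# Discriminator: logistic regression on a sample, parameters (w, c).
gen = np.array([1.0, 0.0])   # a, b
disc = np.array([0.0, 0.0])  # w, c

def generate(n):
    z = rng.uniform(-1.0, 1.0, size=n)
    return gen[0] * z + gen[1], z

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(disc[0] * x + disc[1])))

lr = 0.05
for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr = real_samples(32)
    xf, _ = generate(32)
    pr, pf = discriminate(xr), discriminate(xf)
    # Gradients of the binary cross-entropy loss with respect to (w, c).
    gw = -np.mean((1.0 - pr) * xr) + np.mean(pf * xf)
    gc = -np.mean(1.0 - pr) + np.mean(pf)
    disc -= lr * np.array([gw, gc])

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    xf, z = generate(32)
    pf = discriminate(xf)
    gx = -(1.0 - pf) * disc[0]   # dLoss/dx for each fake sample
    gen -= lr * np.array([np.mean(gx * z), np.mean(gx)])

fake, _ = generate(5000)
```

Real face-synthesis GANs follow the same loop, but with deep convolutional networks and image data in place of these two-parameter toy models.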

Nightingale and her colleague Hany Farid at the University of California, Berkeley, asked 315 untrained and 219 trained participants, recruited through a crowdsourcing website, to distinguish a selection of 400 fake photographs from 400 photographs of real people. Each set of 400 consisted of 100 people from each of four ethnic groups: white, Black, East Asian and South Asian. They then asked a further 223 participants to rate a selection of the same faces for trustworthiness on a scale of 1 to 7.

Untrained participants had an accuracy rate of 48.2 per cent – slightly worse than chance, says Nightingale. Those given training to recognise computer-generated faces did slightly better, with an accuracy rate of 59 per cent, but this difference is negligible, she says. White faces were the hardest to classify correctly as real or fake, perhaps because the synthesis software was trained on disproportionately more white faces.

The participants rated the fake faces as 8 per cent more trustworthy, on average, than the real faces – a small yet significant difference, according to Nightingale. That might be because synthetic faces look more like “average” human faces, and people are more likely to trust typical-looking faces, she says.

There was little difference between ethnic groups apart from a slight tendency for people to rate Black faces as more trustworthy than South Asian faces.

Looking at the extremes, the four faces rated most untrustworthy were real, whereas the three most trustworthy faces were fake.

“We need stricter ethical guidelines and more legal frameworks in place because, inevitably, there are going to be people out there who want to use [these images] to do harm, and that’s worrying,” says Nightingale.

To reduce these risks, developers could add watermarks to their images to flag them as fake, she says. “In my opinion, this is bad enough. It’s just going to get worse if we don’t do something to stop it.”

Journal reference: PNAS, DOI: 10.1073/pnas.2120481119
