
Study Finds Most People Struggle to Tell AI Faces From Real Ones

New research indicates that many people cannot reliably tell whether a face was generated by artificial intelligence or belongs to a real person. In controlled tests, participants frequently mistook synthetic faces for real ones and often expressed high confidence in their incorrect answers.

Researchers found that AI-generated faces have become increasingly realistic thanks to improvements in training data, image resolution, and facial symmetry. In some cases, participants were more likely to trust AI-generated faces because they appeared more “average” or visually consistent than real photographs.

The findings raise concerns about the potential misuse of synthetic images, including misinformation, identity fraud, and social engineering. Experts warn that as AI-generated imagery becomes more widespread, the ability to verify authenticity will become increasingly important.

At the same time, researchers note that awareness and training can improve people’s ability to spot synthetic faces. They argue that public education, combined with technical safeguards such as watermarking and detection tools, will be essential for addressing the growing challenges posed by realistic AI-generated content.