Quinn Edmonds
4/10/25
Professor Yalpi
CYSE201s
Article Review #2: Deepfakes
‘Deepfakes’ are AI-generated videos or images that place a real person’s face onto someone else’s body. All that is needed is a large set of photos of the target and a source subject whose body and face resemble the person being replicated. The process is especially easy for celebrities because so many photos of them exist, but it can be done to ordinary people as well. This topic relates to the social sciences because impersonation is a significant issue.
The research question is whether human beings can distinguish AI-generated deepfake faces from real human faces. The researchers recruited 280 participants and divided them into four groups, with one group serving as the control. The other three groups were shown 50 fake and 50 real faces and asked to judge each one while reporting a level of confidence. The study followed standard scientific-method procedures and was a well-designed experiment. Average accuracy was 65%, while average reported confidence was generally high, offering insight into how confidence compares to accuracy.
The topic relates to the particular challenges celebrities face: people literally want to be them, which makes impersonation much more likely. Celebrities can also be considered a marginalized group in this context, since there are relatively few of them. This is a serious problem for misinformation, because deepfakes do not come out of nowhere; each one is a man-made artifact that must be personally created.
The study found that people detect deepfakes better than random chance, which is encouraging, but their detection remains imperfect, and this is still a prevalent issue we must address as a society. Deepfake techniques also extend to audio mimicry, which adds to the confusion people feel when viewing deepfakes. It can be fun to make silly videos with deepfakes, but as the technology progresses it will become very dangerous to rely on our ability to identify fakes. The research showed that participants used the eyes and mouth as the most telling features when making their judgments, and they viewed the chin and neck as giveaway areas for spotting fakes.
In conclusion, deepfakes are dangerous for our cyber society: they can be used for evil purposes and are hard to distinguish from genuine videos. This is especially true when a viewer is primed with a leading statement before watching, such as “this is [blank] celebrity,” which makes for a devastating combination with misinformation.
Journal of Cybersecurity, Oxford Academic, academic.oup.com/cybersecurity. Accessed 11 Apr. 2025.