Category Archives: Article Reviews

Article Review #2: Testing human ability to detect ‘deepfake’ images of human faces 

The article “Testing human ability to detect ‘deepfake’ images of human faces” by Bray, Johnson, and Kleinberg explores how well people can tell the difference between real and deepfake faces. Using a study of 280 participants across different testing groups, the authors asked three questions: first, how accurately people can detect deepfakes; second, whether small interventions like giving advice or showing examples beforehand improve detection; and third, whether people remain confident in their decisions even when they are wrong.

The study used a randomized controlled trial in which participants viewed 20 face images, half real and half AI-generated, and labeled each one. Some groups received brief advice or example images before starting, and all participants rated how confident they were in each choice. The results showed that overall accuracy was not much better than chance, and surprisingly, the extra help did not meaningfully improve performance. People were often very confident in their answers even when they were wrong, revealing a disconnect between perception and reality.

I think this connects well with social science concepts discussed throughout the course. We learned about symbolic interactionism, the idea that meaning is created through social interaction. People think they “see” the truth in a photo, but that meaning can be manipulated, especially by deepfakes. Social cybersecurity also ties in: the article studies more than just technology, analyzing how people process visual information, how confident they feel in those judgments, and how attackers might exploit that overconfidence.

From a cybersecurity culture standpoint, this article also reveals a problem: many people still lack the media literacy needed in today’s digital world. That’s especially concerning for marginalized groups, who are often the targets of manipulated media like deepfakes used for scams, harassment, or misinformation. This makes the research even more important from a social angle. 

Overall, the study shows that deepfake detection isn’t just a technological issue; it’s also a human one. There is no quick fix, and efforts to build awareness, improve education, and shape smarter policies will be essential in helping society deal with this kind of threat.

Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1), tyad011. https://doi.org/10.1093/cybsec/tyad011

Article Review #1: Investigating the Intersection of AI and Cybercrime

The article I chose to review is a study published in the International Journal of Cybersecurity Intelligence & Cybercrime. It explores artificial intelligence as both a revolutionary tool and a growing cybercrime threat. Cybercriminals are misusing and exploiting the new technology to generate malware, launch phishing campaigns, and conduct various dark-web activities.

The study and its concepts touch on several topics we have covered in lecture, specifically some of the social science principles. The authors demonstrate the principle of empiricism by collecting data from hacking forums and expert interviews. Conducting studies in cybersecurity is often difficult because data typically cannot be collected until after an event has occurred; drawing on the sources this article uses ensures that its findings are grounded in observed behavior.

The study explores several questions and hypotheses: How is malicious AI-generated content distributed on the web and dark web? Do media and social media play a role in spreading that content? How can cybersecurity professionals improve their defenses against AI-based threats? The study’s purpose is to understand how AI-generated security threats spread, how the technology is being misused, and how to address these issues.

With AI technology developing quickly and becoming very convincing, it can be used to target the elderly, who may not understand the technology, and other digitally illiterate groups. Victims in these digitally marginalized groups can easily fall prey to generated misinformation and deepfake scams. The study highlights these groups as being at heightened risk of being targeted and addresses their lack of quality access to cybersecurity education and resources.

To summarize, from a societal perspective, the study raises awareness of the growing role of artificial intelligence in cybercrime. The threat is growing as generating malicious content becomes increasingly easy and accessible. The study names regulation and cybersecurity awareness as essential countermeasures and calls for future research to explore long-term solutions to AI-based cyber threats.

Citation:
Shetty, S., Choi, K., & Park, I. (2024). Investigating the Intersection of AI and Cybercrime: Risks, Trends, and Countermeasures. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2). https://doi.org/10.52306/2578-3289.1187


Available at: https://vc.bridgew.edu/ijcic/vol7/iss2/3