Quinn Edmonds
2/26/25
CYSE 201S
Professor Yalpi
Article Review #1: Investigating the Intersection of AI and Cybercrime: Risks, Trends, and
Countermeasures
By Shetty, S., Choi, K., & Park, I.
This article examines how AI is being used to create materials for committing cybercrime, such as scam and fraud messages, and whether this has become an established way AI is misused (Shetty, S., Choi, K., & Park, I. (2024). Investigating the Intersection of AI and Cybercrime: Risks, Trends, and Countermeasures. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2). https://doi.org/10.52306/2578-3289.1187). The authors write, "Foremost among these concerns is the potential misuse of AI, notably in cybercrime. With AI technology advancing and growing in complexity, it has become an enticing instrument for cybercriminals to exploit in their operations." This quote confirms that AI misuse is a pressing issue. Many algorithms can be exploited to create scam emails and other fraudulent messages through highly advanced, freely available AI tools. This relates to the reinforcement sensitivity theory (RST) of cybercrime from module 5's lecture. Using AI in this manner is quick and easy, which supports the impulsivity component, and it is goal driven, since the criminal is trying to swindle money or information out of the person they are attacking. Because the approach is effective and rewarding for the criminal, RST offers a probable explanation for why AI is used this way. Some victims may display certain Big Five personality traits, such as openness, treating these AI-generated messages as if they were real without scrutiny, while agreeableness can lead them to interact with the criminal in a trusting way. The embarrassment of being scammed can also keep victims, especially the elderly, from reporting the criminals who scammed them, as discussed in module 5.
The authors describe where their data came from: "FlowGPT, Respostas Ocultas, Reddit, Dread, Legal RC, Hidden Answers, Dark Net Army, and YouTube. Most of the discussion was written in English, reflecting widespread use of English as the lingua franca in online communities. However, some posts were in other languages, including Russian and Portuguese, highlighting the diverse linguistic landscape of the online forums where AI-generated prompts for malicious activities are exchanged and discussed" (Shetty, Choi, & Park, 2024). This part of the article shows how they gathered their data, which I will now summarize. They searched both the surface web and the dark web and found 113 examples of AI-enabled cybercrime, then contacted 13 professionals, six of whom responded. After an in-depth review of all the collected materials, they concluded that AI is being used harmfully. The experts' insight helped bring the article to fruition and helps readers like me understand these concepts in more depth.
In conclusion, AI can be and is being used maliciously worldwide, and it is very effective because of the powerful deep learning models behind it. These scams can be carried out easily and on impulse, which lines up well with the psychological theories discussed in class: impulsive criminals prey on individuals with less cybersecurity education, confirming AI's role in cybercrime.