Article by Annalise Kempen; Photos by FreePik
ChatGPT has been hailed as a miracle tool that can make life much easier for its users – whether you want to write, learn, create or experience something, all you need to do is ask ChatGPT. Some people, including writers, artists and students, misuse this "artificial intelligence (AI) chatbot that uses natural language processing to create humanlike conversational dialogue" (Hetler, nd) by claiming the work of ChatGPT as their own, but that is not the focus of this article. Instead, this article will focus on how those with malicious motives have turned the positive potential of large language models (LLMs) and everyday AI systems such as ChatGPT into an ugly and dangerous tool in the hands of cybercriminals.
Cybercriminals believe that rules MUST be broken
Malicious cyberthreat actors are known for developing hacking tools that enable criminal activity. In a world where ChatGPT has gained enormous popularity, these malicious actors have launched copycat hacker tools that "offer similar chatbot services to the real generative AI-based app but is aimed specifically at promoting malicious activity" (SecureOps, 2023).
****************************
[This is only an extract of an article that is published in Servamus: November 2024. This article is available for purchase.]