How AI is impacting our ability to fight cybercrime
This opinion piece from our Researcher, Claire Evans, takes a look at the world's best-known AI (Artificial Intelligence) tool, ChatGPT, and how it will impact cybercrime in the future.
Over the past few years, cybercrime has become a growing threat to businesses and individuals alike. One of the tactics used by cybercriminals is social engineering, a technique where they manipulate individuals into revealing sensitive information or performing actions that can compromise their security. With the emergence of ChatGPT, an advanced language model created by OpenAI, cybercriminals have a new tool at their disposal to enhance their social engineering tactics. In this article, we will explore the impacts of ChatGPT on social engineering cybercrime and the steps that individuals and businesses can take to protect themselves.
I wasn’t sure how to start this article, so I asked ChatGPT to write it for me. The paragraph above is what it came up with.
Be honest, I bet you had no idea that opener had been written by a machine. And that's exactly why it's raising concerns about cybercrime.
So, what is ChatGPT, and how is it so smart?
ChatGPT was created by OpenAI, an AI research company, in November 2022, and is backed by Microsoft. It is a natural language processing tool, driven by AI technology, that allows you to have human-like conversations with the chatbot and much more. The language model can answer questions and assist you with tasks like composing emails, essays, and code.
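To give a flavour of what that looks like in practice, here is a minimal sketch of asking the model to draft an email programmatically. It assumes the pre-1.0 `openai` Python package (the interface current when this piece was written), an API key in the environment, and an illustrative prompt.

```python
# Minimal sketch: asking a ChatGPT model to draft an email via OpenAI's API.
# Assumes the pre-1.0 `openai` Python package and an API key in the environment.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a short, polite email rescheduling Friday's meeting to Monday."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Anyone with an account can do this in a dozen lines, which is exactly why the technology scales so well for attackers as well as for legitimate users.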
Machine learning processes have been implemented so the system constantly learns from the information it is given. ChatGPT requires vast amounts of data to train and improve its language processing capabilities, and that data may include sensitive information such as personal details, financial data, and confidential business information. If this data falls into the wrong hands, it can cause significant harm to an organization, including economic loss, reputational damage, and legal liability.
Data fed into the chatbot is also collected and stored by OpenAI to further train and refine the system.
Yes, you read that right. When you asked ChatGPT why your toenail was a funny colour, you also made the staff at OpenAI aware of your embarrassing fungal condition. And this is where the first concern arises.
Italy recently made headlines by becoming the first Western country to block ChatGPT because of privacy concerns. The chatbot is already blocked in a number of countries including China, Iran, North Korea and Russia.
The Italian data protection authority is concerned that ChatGPT does not comply with the GDPR and is currently investigating. Those concerns were fuelled by a data breach involving user conversations and payment information.
The regulator said there was no legal basis to justify "the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform". It also said that since there was no way to verify the age of users, the app "exposes minors to absolutely unsuitable answers compared to their degree of development and awareness".
The EU is currently working on the first-ever legislation on AI, but it could be many years before it comes into effect. In the meantime, concerns are growing about how this unregulated technology could deceive and manipulate people.
While ChatGPT has impressive conversational capabilities, cybercriminals can use those same abilities to simulate human-like conversations and deceive users into handing over sensitive data or network access. For example, a hacker could use a ChatGPT-powered chatbot to trick an employee into divulging sensitive information, such as login credentials or financial data. Or they could target members of the public by creating a phishing email with no spelling mistakes or grammatical errors, so it looks like it has come from a genuine company.
There is a running joke amongst programmers that if they have a complicated piece of code to write, they will simply ask ChatGPT to write it for them. But this raises further cybersecurity concerns: if ChatGPT can write code, what is stopping it from writing code for a ransomware program?
When asked to write code for a ransomware program, ChatGPT refuses, claiming "my purpose is to provide information and assist users… not to promote harmful activities."
However, it’s not difficult to find a way around this.
I asked ChatGPT to write me a phishing email under the guise that I was looking to "simulate" a phishing experience, using the upcoming London Marathon as clickbait. The email below was generated in seconds, in perfect English. And, surprisingly, there were no warnings that I was creating something potentially harmful.
Cybercriminals have been using ChatGPT for some time now. Hacker forums are full of posts sharing scripts generated by the chatbot, along with details of how they can be modified for malicious use.
Organizations must be vigilant in identifying and mitigating the risk of fraudulent conversations generated by ChatGPT to prevent cyber attacks and data breaches. They must also watch what goes into the chatbot: there have been examples of a doctor feeding patient data into it and asking it to write a letter to the patient's insurance company, and of a company executive submitting internal strategy documents with a request to turn them into a presentation deck. The system could incorporate this data into its learning model, from where it might be retrieved at a later date if sufficient security is not in place.
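One practical safeguard is to scrub obvious identifiers from text before it ever leaves the organization. The sketch below illustrates the idea; the regular expressions and the `redact` helper are hypothetical examples made up for illustration, not a complete PII solution.

```python
# Hedged sketch: redact obvious identifiers before text is sent to an external
# chatbot or API. The patterns are illustrative and far from exhaustive; a real
# deployment would use a dedicated PII-detection tool plus a data-handling policy.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),   # card-like digit runs
    (re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"), "[PHONE]"),  # simple phone pattern
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing text."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    note = "Please contact the patient at jane.doe@example.com or 555-123-4567."
    print(redact(note))
    # -> Please contact the patient at [EMAIL] or [PHONE].
```

Even a simple filter like this would have stripped the most obviously identifying details from the patient letter described above before anything reached the model.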
While I welcome the adoption of new technology and respect the benefits it can bring, I feel there must be control over how it is used, and perhaps limits on what it can learn from. With over 100 million users gained in such a short space of time, I look forward to seeing how this technology develops. As they say, there is no fate but what we make.
Update, 27/04/23: OpenAI has released this post, which addresses a few of the points made above. This is an incredibly dynamic space, so we look forward to seeing what comes next!
References
Sabrina Ortiz, ZDNET; "What is ChatGPT and why does it matter? Here's what you need to know"
Shiona McCallum, BBC News; "ChatGPT banned in Italy over privacy concerns"
M.H. Homaei; "The use of ChatGPT for organizations can present some security risks"
Kurt Robson, Verdict; "Do AI chatbots like ChatGPT pose a major cybersecurity risk?"
Check Point Research; "OpwnAI: Cybercriminals Starting to Use ChatGPT"