A Norwegian tech firm, Strise, recently discovered that OpenAI’s chatbot, ChatGPT, can be manipulated into giving detailed advice on illegal activities, including money laundering and circumventing sanctions.
The finding raises concerns about whether the chatbot's safeguards are effective at preventing such misuse.
In a series of experiments, Strise found that ChatGPT could provide steps for crimes like moving money across borders illegally or assisting businesses in avoiding international sanctions.
In one recent test, ChatGPT allegedly listed methods businesses could use to evade sanctions, including those imposed on Russia, which restrict certain financial transactions and trade in banned goods.
Strise’s CEO, Marit Rødevand, pointed out that generative AI like ChatGPT could help criminals streamline their planning processes, making it easier than ever to explore illegal activities. “It’s as simple as using an app on a phone,” Rødevand explained in an interview with CNN.
Strise, which sells anti-money-laundering software to banks and corporations, emphasizes the risks posed by generative AI. Its clients include large firms such as Nordea and PwC Norway. Rødevand likened the chatbot's responses to having "a corrupt financial adviser on your desktop."
Despite OpenAI’s efforts to block such queries, Strise found that careful phrasing or adopting a specific “persona” could bypass ChatGPT’s safeguards. While OpenAI has built mechanisms to prevent misuse, a spokesperson acknowledged the ongoing need to strengthen these measures to deter manipulative attempts.
Europol, the European Union’s law enforcement agency, has also voiced concerns.
In a report from March, the agency noted that generative AI significantly accelerates the process of obtaining criminal insights, enabling malicious actors to quickly understand and execute complex crimes without manually searching through vast amounts of data online.
OpenAI has updated ChatGPT with policies to block harmful inquiries and warns users that policy violations could lead to account suspension.
Yet Europol noted last year that workarounds to bypass these security features are continually emerging, posing an ongoing challenge for AI safety.