Six AI tools, including OpenAI’s ChatGPT, were exploited to write code capable of damaging commercial databases – although OpenAI appears to have now fixed the vulnerability
By Jeremy Hsu
25 October 2023
A vulnerability in OpenAI’s ChatGPT – now fixed – could have been used by malicious actors
Amir Sajjad/Shutterstock
Researchers manipulated ChatGPT and five other commercial AI tools to create malicious code that could leak sensitive information from online databases, delete critical data or disrupt database cloud services in a first-of-its-kind demonstration.
The work has already led the companies responsible for some of the AI tools – including Baidu and OpenAI – to implement changes to prevent malicious users from taking advantage of the vulnerabilities.
“It’s the very first study to demonstrate that vulnerabilities of large language models in general can be exploited as an attack path to online commercial applications,” says Xutan Peng, who co-led the study while at the University of Sheffield in the UK.
Peng and his colleagues looked at six AI services that can translate human questions into the SQL programming language, which is commonly used to query computer databases. “Text-to-SQL” systems that rely on AI have become increasingly popular – even standalone AI chatbots, such as OpenAI’s ChatGPT, can generate SQL code that users then run against such databases.
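The internals of the targeted services are not public, so the following is only a rough sketch of how such a pipeline works: the generate_sql stub is a hypothetical stand-in for the model call, and the toy sqlite3 database exists purely to make the example self-contained.

```python
import sqlite3

def generate_sql(question: str) -> str:
    """Hypothetical stand-in for the text-to-SQL model. In a real
    service this would be an LLM call returning SQL for the user's
    natural-language question."""
    return "SELECT name FROM users;"

# A toy in-memory database so the sketch runs on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

question = "Which users are registered?"
sql = generate_sql(question)

# Text-to-SQL services typically execute the model's output directly,
# which is the step the researchers showed can be subverted.
for row in conn.execute(sql):
    print(row)
```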
The researchers showed how this AI-generated code can be made to include instructions to leak database information, which could open the door to future cyberattacks. It could also purge system databases that store authorised user profiles, including names and passwords, and overwhelm the cloud servers hosting the databases through a denial-of-service attack. Peng and his colleagues presented their work at the 34th IEEE International Symposium on Software Reliability Engineering on 10 October in Florence, Italy.
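The paper’s actual prompts and payloads are not reproduced here, but the failure mode can be sketched: assuming a service that runs model output verbatim, a maliciously phrased question could steer the model into appending a destructive statement, such as a DROP TABLE, to an innocuous-looking query. The SQL string below is an invented illustration, not one of the study’s payloads.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Invented example of model output after a maliciously phrased question:
# a harmless-looking query with a destructive statement appended.
generated_sql = "SELECT name FROM users; DROP TABLE users;"

# A service that executes generated SQL verbatim runs both statements.
conn.executescript(generated_sql)

# The table holding user profiles is now gone.
try:
    conn.execute("SELECT name FROM users")
except sqlite3.OperationalError as err:
    print(err)  # prints: no such table: users
```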