OpenAI has recently fixed several vulnerabilities in ChatGPT that could have enabled cybercriminals to take over user accounts and access chat histories. Among them was a “Web Cache Deception” flaw discovered by bug bounty hunter and Shockwave founder Gal Nagli.
The flaw allowed an attacker to lure a victim into requesting a crafted .css path appended to the session endpoint; because the path looked like a static asset, the authenticated response, including the victim’s JWT credential, could be cached and later served to the attacker, enabling account takeover.
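To make the mechanism concrete, here is a minimal, self-contained simulation of how web cache deception works in general. All names (the endpoint path, the cache and server classes) are hypothetical and chosen for illustration; this is a sketch of the technique, not OpenAI’s actual infrastructure.

```python
# Illustrative simulation of Web Cache Deception (all names hypothetical).
# A CDN-style cache decides cacheability from the URL extension alone, so an
# authenticated response served under a ".css"-suffixed path gets cached and
# then replayed to an unauthenticated attacker.

CACHEABLE_EXTENSIONS = (".css", ".js", ".png")

class OriginServer:
    """Returns the session token of whoever is logged in, ignoring any
    trailing path segment -- the routing quirk this attack relies on."""
    def handle(self, path, user):
        if path.startswith("/api/auth/session"):
            return f"JWT-for-{user}" if user else "unauthenticated"
        return "404"

class NaiveCache:
    """Caches by path whenever the extension looks static."""
    def __init__(self, origin):
        self.origin = origin
        self.store = {}

    def get(self, path, user=None):
        if path.endswith(CACHEABLE_EXTENSIONS):
            if path not in self.store:
                # First requester's (authenticated) response is stored...
                self.store[path] = self.origin.handle(path, user)
            # ...and replayed verbatim to ANY later requester.
            return self.store[path]
        return self.origin.handle(path, user)

cache = NaiveCache(OriginServer())

# 1. The victim is tricked into visiting the crafted ".css" session URL.
victim_view = cache.get("/api/auth/session/x.css", user="victim")

# 2. The attacker then requests the same URL with no credentials and
#    receives the victim's cached session token.
attacker_view = cache.get("/api/auth/session/x.css", user=None)
assert attacker_view == "JWT-for-victim"
```

The standard mitigation is for the cache to honor the origin’s `Cache-Control` headers (or cache by content type) rather than inferring cacheability from the URL extension alone.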
Though the flaw was quickly addressed, another security researcher, Ayoub Fathi, discovered that it was possible to bypass authentication and gain access to a user’s conversation titles, full chats, and account status.
Fathi reported the issue to OpenAI, which quickly addressed it. However, both researchers pointed out that OpenAI does not have a bug bounty program to reward those who report vulnerabilities in its chatbot.
On Friday, OpenAI also announced that the recent exposure of users’ personal information and chat titles in its chatbot service was caused by a bug in an open-source Redis client library. The company promptly addressed the issue.
While OpenAI is proactive in addressing vulnerabilities, these incidents underscore the need for continued vigilance and regular security assessments of AI chatbots and similar technologies. Users are encouraged to monitor their accounts and report any suspicious activity promptly.
Companies are urged to consider establishing bug bounty programs to incentivize responsible disclosure of vulnerabilities by researchers.