How ChatGPT User Data Leaks Are Framing Users

At the end of May, a large-scale data breach of ChatGPT users was uncovered, which could potentially compromise the confidential and sensitive information that users trust this advanced chatbot. Logs containing more than 100,000 ChatGPT accounts have appeared on the exchanges of stolen data on the dark web. According to The Hacker News and Singapore-based cybersecurity company Group-IB, the credentials of users who logged into ChatGPT from its launch in June 2022 to May 2023, when information about the leak appeared, became publicly available, which means that it may well continue. The largest number of locations for leaked accounts are the USA, France, Morocco, Indonesia, Pakistan and Brazil.

“The number of available logs containing compromised ChatGPT accounts peaked at 26,802 in May 2023,” Group-IB. “Over the past year, the Asia-Pacific region has seen the highest concentration of ChatGPT credentials for sale.”

It should be understood that we are talking about logs with ChatGPT accounts, and not their owners or ChatGPT itself. With the explosion of chatbots and interest in AI in general since the end of last year, it goes without saying that more recent logs will contain more ChatGPT accounts than those offered a few months ago. While the company investigates the issue, ChatGPT continues to adhere to industry-standard security standards, according to OpenAI, the parent organization of ChatGPT.

“Group-IB Threat Intelligence report points to the effects of massive malware on people’s devices, not an OpenAI hack,” said an OpenAI spokesperson. published by Tom’s Hardware. “We are currently investigating the exposed accounts. OpenAI uses industry-leading user authentication and authorization methods in its services, including ChatGPT, and we encourage our users to use strong passwords and install only verified and reliable software on personal computers.”

The 26,802 available logs mentioned above mean that the leaked data is already actively traded on the black market of the Internet: “Logs with compromised data are actively traded on darknet marketplaces,” Group-IB said in a statement. “Additional information about the logs available in such markets includes lists of domains found in the log, as well as information about the IP address of the compromised host.”

Most of the reset credentials were found in logs associated with several families of information-stealing malware. The popular malware Raccoon Infostealer (aka Racealer) was used to hack exactly 78,348 accounts. (Exact numbers are easy to ascertain once you know what to look for for each type of malware.)

Raccoon Infostealer is a perfect example of how the darknet is a parallel world – a kind of looking glass of the regular Internet: users can purchase access to Raccoon on a subscription model; no coding or special skills required. Its ease of deployment is one of the reasons for the rise in cybercrime-related crimes. Raccoon, like other similar programs, comes with various additional features. These subscription-based information thieves don’t just steal credentials; they also allow attackers to automate subsequent attacks.

Of course, other blackhat tools were also used to steal user credentials: Vidar was second only to Raccoon, used to access 12,984 accounts, and third was 6,773 credentials captured using RedLine malware.

That these credentials give access to ChatGPT accounts should give anyone who uses it a second thought.

Data leaks are one of the many threats to user security and the adequacy of information on the Internet that have emerged along with new advanced AI models that have been described so far:

  1. Hacking and “hijacking” AI chatbots to gain access to their underlying code and data would allow them to be used to create malicious chatbots that can impersonate regular ones.

  2. Facilitating digital attacks. AI chatbots can be used to aid in fraud and phishing attacks by creating persuasive messages that trick users into revealing sensitive information or doing things they shouldn’t. For example, all that is needed for an attack called covert prompt injection is to hide the request (prompt) for the bot on the web page with zero or invisible white font on a white background. By doing this, the attacker can tell the AI ​​to do whatever it wants, such as sniffing out the user’s bank card details.

  3. Digital assistant of criminals. The latest capabilities of neural networks are already being adopted by scammers of all sorts, blurring the line between digital and offline crime. In April, a case had already thundered when extortionists demanded a ransom of a million dollars from a woman for the return of an allegedly kidnapped child, using a deepfake of her daughter’s voice. Believable deepfakes of audio, video, realistic pictures and texts created by neural networks together create a powerful tool for deceiving and coercing people.

  4. Data poisoning. AI chatbots can be trained on infected datasets containing malicious content, which can then be used to create malicious content, such as phishing emails.

  5. AI hallucinations. The term is used to describe fictional chatbot responses. Many users have already encountered this phenomenon, but there is still no explanation for it. ChatGPT is different in that it invents non-existent books, quotes, studies and people, and provides them with detailed tables of contents, lists of sources, saturates the biographies of fictional people with events – and rattles it with such persuasiveness, as if he were retelling a Wikipedia article, but all this – completely fabricated from scratch on the go. And although there is (most likely) no malicious intent here – at least for now – it’s hard to even imagine what clogging of the Internet with products of AI hallucinations this will lead to. But there is no doubt that it will happen: quotes on the Internet were a problem even before AI.

  6. Data leaks compromising information entrusted to chatbots. It’s not just about access to your personal information. “Employees give the bot secret information or use the bot to optimize proprietary code. Given that the default configuration of ChatGPT saves all conversations, this can give credentialed attackers valuable information.” Since most users store their chats in the OpenAI application, leaked accounts give access to them – and, therefore, to everything that happened and is happening in these chats: business planning, application development (including malware development, by the way) and everything that was written in these chats. You can find both personal and professional content in a ChatGPT account, from company trade secrets that shouldn’t be there to personal diaries – maybe even secret documents.

In general, this is a pretty serious information leak. So remember: all passwords matter. But perhaps the security of your ChatGPT tab (both at home and at work) is even more important than the rest.

  • Keep track of which plugins to ChatGPT you install.

  • Choose how to use ChatGPT anonymously. Now there are already many options for accessing it: Telegram bots are quite popular, but they, in turn, get the Telegram user base. You can use ChatGPT without identifying traces in an anonymous messenger Utopiawhich does not require a phone or mail, and which is called Utopia AI built-in ChatGPT.

  • Use strong passwords.

  • enable two-factor authentication (2FA).

  • remember about advanced standards cyber security that will reduce the likelihood of you being successfully attacked.

Similar Posts

Leave a Reply

Your email address will not be published. Required fields are marked *