Thousands of ChatGPT users could be at risk of fraud, scams or cyberattacks after they were targeted by info-stealing malware, experts have revealed.
More than 101,000 stealer-infected devices with saved ChatGPT login details have been identified by Singapore-based cybersecurity firm Group-IB.
The company’s Threat Intelligence platform found 101,134 ChatGPT credentials tucked away within the logs of info-stealing malware traded on dark web marketplaces over the past 12 months, with more than a quarter of them coming from May 2023 alone.
Your ChatGPT login details could be up for sale
Geographically, Group-IB says that the Asia-Pacific region was most affected, accounting for more than two in five cases.
The Raccoon info stealer proved the most prevalent, accounting for more than 78,000 cases. Vidar, at just under 13,000, and Redline, at nearly 7,000, round out the top three.
Typically, malware like this harvests credentials saved in browsers, bank card details, cryptocurrency wallets, cookies, and browsing history, sending them back to the operator. Instant messengers and email clients have also become increasingly common targets for info stealers.
India led the way with 12,632 compromised credentials, with Pakistan and Brazil rounding out the top three. The US ranked sixth, with 2,995 credentials compromised.
Group-IB head of Threat Intelligence, Dmitry Shestakov, said: “Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
While many have raised concerns over the security of generative AI tools like ChatGPT and Bard, a large number of businesses continue to use them, and many employees are likely to keep using them against their employer's wishes. With conversations potentially containing company insider information and code, the damage could be huge if such information falls into the wrong hands.
In an effort to protect themselves, Group-IB recommends that users change their passwords regularly, as well as use security measures like two-factor authentication (2FA). More broadly, these types of measures should be taken across the Internet wherever unauthorized access could cause damage.
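For readers curious how the 2FA codes recommended above are generated under the hood, most authenticator apps implement the TOTP standard (RFC 6238), which derives a short-lived numeric code from a shared secret and the current time. The sketch below is a minimal, illustrative Python implementation using only the standard library; it is not the code of any particular authenticator app, and a real deployment would use a vetted library and secure secret storage.

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)
```

Because the code changes every 30 seconds, a stolen password alone is not enough to log in; the attacker would also need the shared secret, which never leaves the user's device.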
“People may not realise that their ChatGPT accounts may in fact hold a great amount of sensitive information that is sought after by cybercriminals,” added Jake Moore, Global Cyber Security Advisor at ESET.
“It stores all input requests by default and can be viewed by those with access to the account. Furthermore, info stealers are becoming more prominent in ChatGPT compromises and even used in malware-as-a-service attacks. Info stealers focus on stealing digital assets stored on a compromised system looking for essential information such as cryptocurrency wallet records, access credentials and passwords as well as saved browser logins.”
“It might be a wise idea to therefore disable the chat saving feature unless absolutely necessary. The more data that chatbots are fed, the more they will be attractive to threat actors so it is also advised to think carefully about what information you input into cloud based chatbots and other services.”