A hacker claims to have stolen personal details from millions of OpenAI accounts, but security researchers are skeptical, and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have swiped login credentials for 20 million of the AI company's user accounts and put them up for sale on a dark web forum.
The pseudonymous breacher posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being marketed "for just a couple of dollars."
"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to an equated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."
If genuine, this would be the third major security incident for the AI company since the release of ChatGPT to the general public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack happened. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the purported sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025

OpenAI takes it 'seriously'

In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the representative said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns due to OpenAI's enormous user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, corporate projects, and other sensitive data.
Until there's a final report, some precautionary measures are always recommended:
- Go to the "Configurations" tab, log out from all connected devices, and enable two-factor authentication or 2FA. This makes it practically impossible for a hacker to gain access to the account, even if the login and passwords are jeopardized.
- If your bank supports it, create a virtual card number to handle OpenAI subscriptions. This way, it is easier to detect and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and be wary of any phishing attempts. OpenAI does not ask for any personal information, and any payment update is always handled through the official OpenAI.com link.
