OpenAI Blocks ChatGPT Accounts Linked to Chinese Hackers Developing Malware
OpenAI has taken decisive action to stop misuse of its ChatGPT models by banning accounts tied to a group of Chinese hackers.
This move reflects OpenAI’s core aim of ensuring artificial general intelligence benefits everyone. By setting clear rules and acting swiftly on policy violations, OpenAI hopes to keep AI tools safe and accessible for legitimate users.
Since launching public threat reporting in February 2024, OpenAI has tracked and disrupted more than 40 networks misusing its services.
In its latest quarterly update, the company revealed that it had identified a cluster of hackers in China who were using ChatGPT to write and refine malware code.
These hackers combined AI-generated suggestions with established exploit techniques to speed up their attacks. OpenAI’s threat intelligence team spotted unusual query patterns and repeated prompts about malicious payloads.
