OpenAI has disrupted (more) Chinese accounts using ChatGPT to create social media surveillance tools

OpenAI says that a now-banned account originating in China was using ChatGPT to help design promotional materials and project plans for a social media listening tool, work that was purportedly done for a government client. The tool was a “probe” that could crawl social media sites like X, Facebook, Instagram, Reddit, TikTok and YouTube for specific political, ethnic or religious content as defined by the operator. The company said it cannot independently verify whether the tool was used by a Chinese government entity. The ban follows similar Chinese accounts OpenAI disrupted earlier this year.
The company also says it banned an account that was using ChatGPT to develop a proposal for a tool described as a “High-Risk Uyghur-Related Inflow Warning Model,” which would aid in tracking the movements of “Uyghur-related” individuals. China has long been accused of human rights abuses against Uyghur Muslims in the country.
OpenAI began publishing threat reports in 2024, raising awareness of state-affiliated actors using large language models to debug malicious code, develop phishing scams and more. The company’s latest report serves as a roundup of notable threats and banned accounts over the last quarter.
The company also caught Russian-, Korean- and Chinese-speaking developers using ChatGPT to refine malware, as well as entire networks in Cambodia, Myanmar and Nigeria using the chatbot to help create scams in an attempt to defraud people. According to OpenAI’s own estimates, ChatGPT is being used to detect scams three times as often as it is to create them.
This summer, OpenAI banned accounts in Iran, Russia and China that were using ChatGPT to create posts and comments meant to drive engagement and division as part of online influence campaigns. The AI-generated content appeared on various social media platforms, both in the originating nations and internationally.