Threat actors, some likely based in China and Iran, are finding new ways to hijack and use American artificial intelligence (AI) models for malicious intent, including covert influence operations, according to a new OpenAI report.

The February report details two disruptions involving threat actors that appear to have originated in China. According to the report, these actors used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts claiming to be people based in India and the U.S. However, these posts did not appear to attract substantial online engagement.

That same actor also used ChatGPT to generate long-form news articles in Spanish that "denigrated" the United States, which were subsequently published by mainstream news outlets in Latin America. The bylines of these stories were attributed to an individual and, in some cases, to a Chinese company.
Threat actors worldwide, including those based in China and Iran, are finding new ways to use American AI models for malicious purposes. (Bill Hinton/Philip Fong/AFP/Maksim Konstantinov/SOPA Images/LightRocket via Getty Images)
During a recent briefing that included Fox News Digital, Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said that on at least one occasion a translated article was labeled as sponsored content, suggesting someone had paid for it.

OpenAI says this is the first instance in which a Chinese actor successfully planted long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives.

"Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles," Nimmo said.

He added that threat actors sometimes give OpenAI a glimpse of what they are doing elsewhere on the internet because of how they use its models.

"This is a pretty troubling glimpse into the way a non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the materials they were generating themselves," he continued.
The flag of China flies behind a pair of surveillance cameras outside the Central Government Offices in Hong Kong, China, on Tuesday, July 7, 2020. Hong Kong leader Carrie Lam defended the national security legislation imposed on the city by China the previous week, hours after her government asserted new police powers, including warrantless searches, online surveillance and property seizures. (Roy Liu/Bloomberg via Getty Images)
The company also banned a ChatGPT account that generated tweets and articles that were then posted on third-party assets publicly linked to known Iranian influence operations (IOs). An influence operation, in this context, is a covert campaign that uses planted content and inauthentic accounts to shape public opinion.
These two operations had previously been reported as separate efforts.
"The discovery of a potential overlap between these operations, albeit small and isolated, raises a question about whether there is a nexus between these Iranian IOs, where one operator may be working on behalf of what appear to be distinct networks," the threat report states.
In another example, OpenAI banned a set of ChatGPT accounts that were using OpenAI models to translate and generate comments for a romance-baiting network, also known as "pig butchering," on platforms such as X, Facebook and Instagram. After these findings were reported, Meta indicated that the activity appeared to originate from a newly established scam compound in Cambodia.
The OpenAI ChatGPT logo is seen on a mobile phone in this photo illustration on May 30, 2023, in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)
Last year, OpenAI became the first AI research lab to publish reports on its efforts to prevent abuse by adversaries and other malicious actors, in support of the U.S. and allied governments, industry partners and other stakeholders.
OpenAI says it has greatly expanded its investigative capabilities and its understanding of new types of abuse since its first report was published, and it has disrupted a wide range of malicious uses.
Among other disruption techniques, the company believes AI companies can glean substantial insight into threat actors if that information is shared with upstream providers, such as hosting and software providers, as well as downstream distribution platforms (social media companies and open-source researchers).
OpenAI emphasizes that its investigations also benefit greatly from work shared by industry peers.
"We know threat actors will keep testing our defenses. We are determined to keep identifying, preventing, disrupting and exposing attempts to abuse our models for harmful purposes," OpenAI said in the report.