OpenAI is quietly revising its policy document to remove reference to “politically neutral” artificial intelligence


OpenAI has quietly removed language supporting "politically neutral" AI from a recently published policy document.

In the original draft of its "economic blueprint" for the US AI industry, OpenAI said AI models should "aim to be politically neutral by default." A new draft released Monday removes that phrase.

Reached for comment, an OpenAI spokesperson said the edit was part of an effort to streamline the document, and that other OpenAI documents, including OpenAI's Model Spec, emphasize objectivity. The Model Spec, released by OpenAI in May, aims to shed light on how the company's various AI systems should behave.

But the revision also points to the political minefield that the discourse around "biased AI" has become.

Many of President-elect Donald Trump's allies, including Elon Musk and crypto and AI "czar" David Sacks, have accused AI chatbots of censoring conservative views. Sacks has singled out OpenAI's ChatGPT in particular as "programmed to be woke" and untruthful about politically sensitive topics.

Musk has blamed both the data on which AI models are trained and the "wokeness" of San Francisco Bay Area firms.

"Many of the AIs that are being trained in the San Francisco Bay Area take on the philosophy of the people around them," Musk said last October at an event backed by the government of Saudi Arabia. "So you have a woke, nihilistic philosophy built into these AIs."

In fact, bias in artificial intelligence is an intractable technical problem. Musk's own AI company, xAI, has struggled to create a chatbot that doesn't endorse some political views over others.

A paper by UK-based researchers published in August suggested that ChatGPT has a liberal bias on topics such as immigration, climate change, and same-sex marriage. OpenAI has claimed that any biases seen in ChatGPT are "bugs, not features."


