The internet may be increasingly drowning in fake imagery, but we can at least take some solace in humanity's ability to smell BS when it matters. A number of recent studies indicate that AI-generated disinformation had no significant impact on this year's elections around the world, largely because it is not very good yet.
For years, there has been concern that increasingly realistic but synthetic content could manipulate audiences in harmful ways. The rise of generative AI has reignited those fears, since the technology makes it easy for anyone to produce fake visual and audio media that looks real. In August, a political consultant used artificial intelligence to spoof President Biden's voice in a robocall telling voters in New Hampshire to stay home during the state's Democratic primary.
Tools like ElevenLabs let a user upload a short recording of someone speaking and then generate a cloned version of that voice saying whatever the user types. Although many commercial AI tools include safeguards against this kind of misuse, open-source models are freely available.
Despite these advances, the Financial Times, in a new story looking back over the year, found that very little synthetic political content went viral anywhere in the world.
The piece cites research from the Alan Turing Institute, which found that a total of 27 pieces of AI-generated content went viral during the summer's European elections. The report concluded there was no evidence the elections were affected by AI disinformation, because "most exposure has been concentrated among the minority of users whose political beliefs are already aligned with the ideological narratives embedded in such content." In other words, among the few who saw the content (presumably before it was flagged) and were inclined to believe it, the material reinforced beliefs they already held about a candidate, even when they knew the content itself was AI-generated. The report cited the example of AI-generated footage showing Kamala Harris addressing a rally in front of Soviet flags.
In the U.S., the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6% of them were made with AI. On X, mentions of terms like "deepfake" or "AI-generated" in Community Notes tended to spike when new image-generation models were released, not around elections.
Interestingly, social media users were more likely to misidentify real images as AI-generated than the other way around, but in general they exhibited a healthy dose of skepticism. Fake media can still be debunked through official communications channels or other means, such as a Google reverse image search.
If the findings are accurate, they would make a lot of sense. AI-generated imagery is everywhere these days, but it still tends to have tell-tale qualities that give it away: an arm may be unusually long, or a face may fail to reflect properly in a mirrored surface. There are many small signs that an image is synthetic. Photoshop can be used to create far more convincing forgeries, but doing so takes skill.
AI proponents should not necessarily cheer this news, since it means generated imagery still has a ways to go. Anyone who has checked out OpenAI's Sora model knows the video it produces is just not very good; it looks almost like something rendered by a video game graphics engine (speculation is that it was trained on video game footage), one that clearly does not understand properties like physics.
All that said, there are still reasons for concern. The Alan Turing Institute's report did, after all, conclude that disinformation-based deepfakes can reinforce beliefs even when the audience knows the media is not real; that confusion over whether a piece of media is real undermines trust in online sources; and that AI imagery has already been used to target female politicians with pornographic deepfakes, which can be psychologically damaging and harmful to their professional reputations, as it reinforces sexist beliefs.
The technology will undoubtedly continue to improve, so it is worth keeping an eye on.