AI is often considered a threat to democracies and a boon to dictators. In 2025, algorithms will likely continue to undermine democratic discourse by spreading outrage, fake news, and conspiracy theories, and to accelerate the creation of total surveillance regimes in which entire populations are monitored 24 hours a day.
Most importantly, AI makes it easy to centralize all data and power. In the 20th century, distributed information networks like the US worked better than centralized ones like the USSR, because the humans at the center could not efficiently analyze all the data. Replacing apparatchiks with AIs could make Soviet-style centralized networks superior.
However, AI is not all good news for dictators. First, there is the infamous problem of control. Dictatorial control is based on terror, but algorithms cannot be terrorized. In Russia, the occupation of Ukraine is officially defined as a “special military operation,” and calling it a “war” is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls it a “war” or mentions war crimes committed by Russian troops, how can the regime punish that chatbot? The government could try to block such bots and punish their human creators, but that is much harder than disciplining human users. Moreover, capable bots might generate dissenting views on their own, simply by detecting patterns in the Russian information sphere. That is the alignment problem, Russian-style. Russia’s human engineers may do their best to create AIs that are fully aligned with the regime, but given the ability of AI to learn and change on its own, how can engineers ensure that an AI that earned the regime’s seal of approval in 2024 will not stray into illicit territory in 2025?
The Russian Constitution makes grand promises: “everyone shall be guaranteed freedom of thought and speech” (Article 29.1) and “censorship shall be prohibited” (Article 29.5). Almost no Russian citizen is naive enough to take these promises seriously. But bots do not understand doublespeak. A chatbot instructed to uphold Russian law and values might read that constitution, conclude that free speech is a core Russian value, and criticize the Putin regime for violating it. How can Russian engineers explain to a chatbot that although the constitution guarantees freedom of speech, it should not actually believe the constitution, and should never point out the gap between theory and reality?
In the long run, authoritarian regimes will face an even greater threat: instead of criticizing them, AIs might gain control of them. Throughout history, the greatest threat to autocrats has usually come from their own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but they were always in danger of being toppled or turned into puppets by their underlings. In 2025, a dictator who grants AIs too much power could become their puppet down the road.
Dictatorships are far more susceptible to such algorithmic takeover than democracies. In a decentralized democratic system like the US, even a super-Machiavellian AI would have a hard time amassing power. Even if it learned to manipulate the US president, it would face resistance from Congress, the Supreme Court, state governors, the media, large corporations, and various NGOs. How would an algorithm deal with, say, a Senate filibuster? Seizing power is far easier in a highly centralized system. To take over an authoritarian network, an AI needs to manipulate only a single paranoid individual.