
We need to rethink what the ‘A’ in AI means


This month, artificial intelligence bots descended on Santa's workshop. For one thing, AI-enabled gifts are on the rise – as I know myself, having recently been given an impressive AI dictation tool.

Meanwhile, retailers such as Walmart are deploying AI tools to give frugal shoppers holiday assistance. Think of these, if you will, as the equivalent of a personal elf, offering shopping and gifting shortcuts. And judging by recent reviews, they seem to be doing well.

But here’s the irony: even as AI creeps into our lives – and our Christmas stockings – public wariness remains high. Earlier this month, for example, a British government survey found that only four out of ten people expect AI to deliver benefits, while three out of ten anticipate significant risks, citing breaches of “data protection”, the “spread of false information” and “job displacement”.

That is not surprising, perhaps. The risks are real and well publicised. However, as we enter 2025, it is worth reflecting on three often-overlooked points about AI that can help frame this debate in a more constructive way.

First, we need to rethink what the “A” in “AI” stands for today. Yes, machine learning methods are “artificial”. However, bots do not always – or even often – replace our brains as a substitute for flesh and blood. Instead, they usually enable us to work faster and move through tasks more efficiently. Shopping is just one telling example.

So maybe we should rebrand AI as “augmented” or “accelerated” intelligence – or “agentic” intelligence, to use the buzzword that Nvidia’s latest blog calls the “next frontier” of AI. This means bots acting as semi-autonomous agents, performing tasks for humans at their behest. It will be a big theme in 2025. Or as Google declared when it recently unveiled its latest version of Gemini AI: “The era of AI agents has arrived.”

Second, we need to think outside the traditional frame of Silicon Valley. Until now, “anglophone actors” have “dominated the conversation” around AI worldwide, as the academics Stephen Cave and Kanta Dihal note in the introduction to their book, Imagining AI. That reflects US technological dominance.

However, other cultures view AI rather differently. Attitudes in developing countries tend to be more positive than those in developed ones, as James Manyika, co-chair of the UN’s advisory body on AI and a senior Google executive, recently observed at Chatham House.

Countries like Japan are different too. Most notably, the Japanese public has long shown a more positive attitude toward robots than its anglophone counterparts – and this is now reflected in attitudes toward AI systems.

Why is that so? One reason is Japan’s labour shortage (and the fact that many Japanese are wary of having immigrants fill this gap, making robots easier to embrace). Another is popular culture. In the second half of the 20th century, while Hollywood films such as The Terminator and 2001: A Space Odyssey were spreading fear of intelligent machines among anglophone audiences, the Japanese public was enthralled by the Astro Boy saga, which portrayed robots in a positive light.

Its creator, Osamu Tezuka, attributed this to the influence of the Shinto religion, which does not draw strict boundaries between living and non-living things, unlike the Jewish and Christian traditions. “The Japanese do not make a distinction between man, a superior being, and the world around him,” he once noted. “We accept robots easily, along with the world around us, the insects, the rocks – it is all one.”

And that is visible today in the way that companies such as Sony and SoftBank design AI products. As one of the essays in Imagining AI notes, these firms are trying to make “robots with heart” – in a way that American consumers might find unnerving.

Third, this cultural variation shows that our attitudes towards AI need not be set in stone; they can shift as technologies change and different cultural influences emerge. Consider facial recognition. Back in 2017, Ken Anderson, an anthropologist at Intel, and his colleagues studied Chinese and American consumer attitudes toward facial-recognition tools, and found that while the former embraced the technology for everyday tasks such as banking, the latter were reluctant to accept it.

The difference seemed to reflect American concerns about privacy. But in the very year the study was published, Apple introduced facial-recognition features on the iPhone, and US consumers quickly adopted them. Attitudes changed. The important point, then, is that “cultures” are not like Tupperware boxes, sealed and fixed. They are more like slow-moving rivers with muddy banks, into which new streams constantly flow.

So whatever else 2025 brings, one safe prediction is that our attitudes towards AI will keep shifting subtly as the technology becomes more mainstream. That may scare some people, but it can also help us frame the technology debate more constructively, and focus on ensuring that humans control their digital “agents” – not the other way around. Marketers may be rushing to embrace AI, but they need to ask which “A” they want in the AI tag.

gillian.tett@ft.com


