Yoshua Bengio (L) and Max Tegmark (R) discuss the development of artificial general intelligence during a live podcast recording of CNBC's "Beyond the Valley" in Davos, Switzerland, in January 2025.
CNBC
Artificial general intelligence built as "agents" could be dangerous because its creators could lose control of the system, two of the world's most prominent AI scientists told CNBC.
In the latest episode of CNBC's "Beyond the Valley" podcast, published on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, and Yoshua Bengio, a professor at the University of Montreal who is often called one of the "godfathers of AI," spoke about their concerns about artificial general intelligence, or AGI. The term broadly refers to AI systems that are smarter than humans.
Their fears stem from the world's largest companies now talking about "AI agents" or "agentic AI," which the companies claim will allow chatbots to act as assistants or agents and help with work and everyday life. Industry estimates vary on when AGI will come into existence.
With that concept comes the idea that AI systems could have "agency" and thoughts of their own, according to Bengio.
"AI researchers have been inspired by human intelligence to build artificial intelligence, and, in humans, there's a combination of both the ability to understand the world, which is like pure intelligence, and agentic behavior, meaning … using your knowledge to achieve goals," Bengio told CNBC's "Beyond the Valley."
"Right now, this is how we're building AGI: we are trying to make them agents that understand a lot about the world, and then can act accordingly. But this is actually a very dangerous proposition."
Bengio added that pursuing this approach would be like "creating a new species or a new intelligent entity on this planet" without "knowing if they're going to behave in ways that agree with our needs."
"So, on the other hand, we can consider: what are the scenarios in which things go wrong, and they all depend on there being agency? In other words, it is because the AI has its own goals that we could be in trouble."
The drive for self-preservation could also kick in as AI becomes even smarter, Bengio said.
"Do we want to be in competition with entities that are smarter than us? It's not a very reassuring gamble, right? So we have to understand how self-preservation can emerge as a goal in AI."
For MIT's Tegmark, the key lies in so-called "tool AI": systems that are created for a specific, narrow purpose, but that don't have to be agents.
Tegmark said a tool AI could be a system that tells you how … "to be able to control it."
"I think, on an optimistic note here, we can have almost everything we're excited about with AI … if we simply insist on having some basic safety standards before people can sell powerful AI systems," Tegmark said.
"They have to demonstrate that we can keep them under control. Then the industry will innovate quickly to figure out how to do that better."
Tegmark's Future of Life Institute in 2023 called for a pause in the development of AI systems that can compete with human-level intelligence. While that has not happened, Tegmark said, people are now talking about the issue, and the time has come to take action and figure out how to put guardrails in place to control AGI.
"So at least now a lot of people are talking the talk. We have to see if we can get them to walk the walk," Tegmark told CNBC's "Beyond the Valley."
"It's crazy for us to build something much smarter than us before we figure out how to control it."
There are several views about when AGI will arrive, partly driven by different definitions.
OpenAI CEO Sam Altman said his company knows how to build AGI and that it will arrive sooner than people think, though he downplayed the impact of the technology.
"My guess is we will hit AGI sooner than most people in the world think, and it will matter much less," Altman said in December.