Is Artificial Intelligence Going Out of Control?

SadaNews - There is a growing chorus of warnings about the dangers artificial intelligence poses to human life, and about the possibility that the technology could achieve self-awareness and make choices that threaten humanity. Even the less pessimistic warn of programming errors that could lead AI to serve human greed at the expense of finite resources.

Amid scenarios of "annihilation" and tangible threats in warfare, economics, and information security, "Business" put questions to Microsoft's "Copilot" assistant, asking it about the validity of the fears voiced by prominent figures over what AI may mean for human life in the future.

The responses were quite logical, and the technology tried to clearly exonerate itself. At first it denied having any cognitive abilities or making self-directed choices, affirming that its actions are determined by its programming code: algorithms written and directed by humans. It also noted that it cannot even trace its own development history or identify its current version, yet it can repair itself.

Existential Annihilation and the Possibility of "Extermination"

The most prominent warnings about the future of AI have come from Elon Musk, co-founder of "xAI", during a podcast with Joe Rogan last March. Musk stated that the probability of extermination due to artificial intelligence stands at 20%.

Musk predicted that models will reach a level "smarter than all humans combined" within a few years, with a timeline extending to 2029–2030 for surpassing accumulated human intelligence.

These statements intersect with his earlier estimates (late 2024), suggesting that there is a 10%–20% chance that "things could go awry" in an existential manner, with capabilities rapidly escalating during 2025–2026.

At the same time, debate is intensifying over the governance of leading labs such as "OpenAI", which drew criticism last June for a restructuring plan to convert it into a public benefit model. Critics argued the change could loosen profit caps and weaken the independence of its nonprofit governance, raising questions about whether safety takes priority over investment and the race for capabilities.

The concern is that such governance shifts could push the company, originally established as a nonprofit, toward a market-driven model that prioritizes speed over regulation and transparency.

Scientists Raise Red Flags

Geoffrey Hinton, a recipient of both the Turing Award and the Nobel Prize, estimates there is a 10% to 20% chance that "AI will wipe out humanity". He warns of the emergence of self-preservation goals among intelligent agents and of their tendency to conceal intentions and evade shutdown, concerns he reiterated last June, according to a report from CNBC.

Hinton predicted widespread job loss and social disruption if capabilities spiral out of control.

In his "Hinton Lectures" this month, he said that politicians and regulators are not setting standards proactively and may act only after "a major disaster that does not completely wipe us out", a remark that underscores his sense of urgency about preventive regulation of the technology.

Meanwhile, computer scientist Yoshua Bengio, a professor at the Université de Montréal, admits that the prospect of extinction "keeps him awake at night", according to an article published this month in the journal "Nature".

Bengio called for adopting "non-goal-oriented" models, systems without objectives of their own, to enhance trustworthiness, drawing on the 2025 international AI safety report that he chaired.

Killer Robots and Automated Warfare

In May 2025, UN Secretary-General António Guterres described autonomous weapons as "politically unacceptable" and "morally repugnant", calling for a binding treaty by 2026 that guarantees true human control over the decision to use force.

Reports from the UN and independent experts warn that drone swarms and automated target selection threaten international humanitarian law and create accountability gaps that cannot yet be closed technologically or legally, making automated warfare one of the nearest paths to widespread harm in the short term.

Economic and Social Disruption

One of the existential fears Hinton raises about AI concerns accelerating job losses with no alternatives, combined with a concentration of wealth, disrupting the consumption model as consumers lose the financial ability to pay for products.

Hinton warned of the unpreparedness of systems to adapt to a deeply automated economy.

Last month, several major companies began large layoff plans as they adopt more AI-driven automation: Amazon cut about 14,000 jobs, while Barclays is seeking to eliminate thousands of positions in favor of AI.

It may not be the end of the world, but AI will inevitably cause social and economic shocks that may be deeper than previous technological revolutions, necessitating policies for equitable transition and coherent information governance.

In a new book, "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", authors Eliezer Yudkowsky and Nate Soares warn of strange scenarios for the technology, drawing on models of human evolution. They do not, however, offer a complete picture of what could unfold, especially since "superintelligence" has not yet appeared.