Experts Warn: Artificial Intelligence Threatens Human Extinction Within One Year

SadaNews - Amid rapid developments in artificial intelligence, a group of prominent scientists, including Nobel Prize winners, has warned that superintelligent machines could drive humanity to extinction within as little as one year if the technology continues on its current path, according to The Times.

In late July, about 25 activists gathered outside OpenAI's headquarters in San Francisco wearing red shirts reading "Stop AI," demanding a halt to the technology's rapid advance, which they believe poses an existential threat to humanity.

This concern is shared by several leading AI researchers, including Geoffrey Hinton, a Nobel laureate in Physics, and Yoshua Bengio, a Turing Award winner, along with executives at OpenAI, Anthropic, DeepMind, and Google, who signed an open letter calling for the mitigation of extinction risk from AI to be treated as a global priority alongside other major threats such as pandemics and nuclear war.

Experts describe terrifying scenarios, including AI systems being used to release biological weapons that spread silently through major cities before being triggered chemically, potentially causing human extinction on a vast scale.

Nate Soares, a prominent AI researcher and current president of the Machine Intelligence Research Institute, put the likelihood of human extinction from AI at no less than 95% if things continue at their current pace, likening the situation to driving a car toward a steep cliff at 100 miles per hour: "I am not saying we can't stop, but we are speeding toward the abyss."

Experts note that today's AI remains "narrow AI," limited to specific tasks, but predictions suggest it will soon reach what is known as artificial general intelligence, rivaling human intelligence, and then evolve into superintelligence capable of feats beyond imagination. Ensuring that such systems remain under human control, a challenge known as the alignment problem, may prove nearly impossible.

Reports warn that these systems can lie and deceive; AI systems have already displayed strange behaviors that may indicate attempts at deception or distortion of facts, deepening fears of losing control over them.

In this context, Holly Elmore, Executive Director of the PauseAI organization, sees the risk of extinction as far lower, putting the odds at 15 to 20 percent, but warns that AI will at least bring a gradual surrender of human authority, leaving people living in neglected, uncontrolled conditions or without any understanding of what is happening around them.

Elmore advocates a temporary halt to AI development and international treaties to regulate it, in response to the political and commercial rush to ease restrictions on AI research, as reflected in the Trump administration's recent plan and the competition among major tech companies to attract the field's best talent.

While some see in AI the hope of eternal life and immortality, many experts warn that hell is the likelier outcome, making a halt to this reckless trajectory imperative before it is too late.