Should we hit pause on superintelligence?

Experts and public voices urge a prohibition on superintelligence until AI is safe, controllable and aligned with humanity.

A powerful coalition of global voices wants the world to stop and think before crossing a line we cannot return from. The Future of Life Institute has launched a new initiative backed by hundreds of figures from science, faith, politics and the arts. Their shared message is clear: we need a prohibition on the development of superintelligence until it is proven safe, controllable and publicly accepted.

Superintelligence refers to artificial intelligence that could outperform humans in almost every cognitive task, including reasoning, learning, creativity and problem-solving. Many experts warn that such systems could quickly become too complex for humans to understand or control.

The institute warns that frontier AI systems, the most advanced models being built today, could reach that level within a few years. Without safety measures, it believes, the risks could become unmanageable.


Inside the coalition to stop superintelligence

The list of signatories reads like a global who’s who. It includes five Nobel Laureates, Turing Award winners, AI pioneers, security experts, faith leaders and artists. Yoshua Bengio and Geoffrey Hinton, two of the world’s most cited AI scientists, are among them. Apple co-founder Steve Wozniak, Virgin founder Richard Branson, actor Stephen Fry and former Irish President Mary Robinson have also signed.

Support comes from across political and cultural lines. Former U.S. National Security Advisor Susan Rice, Admiral Mike Mullen, evangelical leaders Johnnie Moore and Walter Kim, and Papal AI Advisor Paolo Benanti are on the list. So are artists Joseph Gordon-Levitt and will.i.am, as well as Prince Harry and Meghan, the Duke and Duchess of Sussex.

Together, they form a politically diverse coalition united by one concern: superintelligence must not be built before it is proven safe.


Expert warnings about superintelligence risks

Yoshua Bengio, a professor at the Université de Montréal, explained the stakes. “Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks. To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”

Actor Stephen Fry shared his own warning. “To get the most from what AI has to offer mankind, there is simply no need to reach for the unknowable and highly risky goal of superintelligence, which is by far a frontier too far. By definition this would result in a power that we could neither understand nor control,” he said.

Anthony Aguirre, co-founder and Executive Director of the Future of Life Institute, said the same urgency applies to industry. “Many people want powerful AI tools for science, medicine, productivity and other benefits. But the path AI corporations are taking, racing toward smarter-than-human AI designed to replace people, is wildly out of step with what the public wants, scientists think is safe or religious leaders feel is right. Nobody developing these AI systems has been asking humanity if this is OK. We did, and they think it’s unacceptable.”


Poll results reveal public demand for AI regulation

A national U.S. poll from the institute reveals how strong public concern has become. Only 5 percent of Americans support the current approach of unregulated development. About 73 percent favor strong government oversight, and 64 percent believe superintelligence should not move forward until scientists agree it is safe and controllable.

Max Tegmark, president of the institute and a professor at MIT, summarized the results. “Ninety-five percent of Americans don’t want a race to superintelligence, and experts want to ban it,” he said.

The data sends a clear signal. People want safety, not a reckless race to the top.


The real dangers of the AI race

The statement warns that rushing toward superintelligence without guardrails could cause catastrophic harm. The risks include mass job loss, weakened freedoms, erosion of dignity, national security threats and even possible extinction. The coalition insists that slowing down progress is not the same as stopping it.

Supporters call for secure innovation that focuses on solving real problems in medicine, energy and education. They envision a pro-human AI renaissance built around tools that empower people instead of replacing them. Progress, they say, must serve humanity’s values, not undermine them.


What this means for you

This conversation affects everyone. As a citizen, you rely on systems that may soon be shaped by AI. Your privacy, safety and job security are all on the line.

As a worker, you already see how automation changes the workplace.

As a consumer, you will feel the results of these choices in the technology you use every day.

The coalition’s message calls for public involvement. The decision to pursue or pause superintelligence belongs to society, not just tech companies or researchers. That choice will shape the kind of future we build together.



Kurt’s key takeaways

The call to prohibit superintelligence until it is proven safe has united scientists, faith leaders, policymakers and artists across traditional divides. Their plea is simple: slow down before humanity loses control of its own creation. Whether you see superintelligence as promise or peril, this debate forces us to ask what kind of intelligence we truly want guiding our future.

If artificial intelligence could soon outthink every human being, should we build it before we are sure it will obey our best intentions? Let us know your thoughts in the comments below.



Copyright 2025 CyberGuy.com.  All rights reserved.  CyberGuy.com articles and content may contain affiliate links that earn a commission when purchases are made.
