
Anthropic’s breakup with Washington exposed the complete lack of coherent rules governing artificial intelligence, and now a bipartisan coalition of thinkers has assembled something the government has so far refused to produce: a framework for what responsible AI development should actually look like.
The pro-human declaration was finalized before the Pentagon-Anthropic standoff last week, but the connection between the two events was not lost on anyone involved.
“Something amazing has happened in America over the past four months,” Max Tegmark, an MIT physicist and AI researcher who helped organize the effort, said in a conversation with the editors. “Suddenly, polls showed that 95% of all Americans were opposed to an unregulated race to superintelligence.”
The newly released document, signed by hundreds of experts, former government officials, and public figures, opens with the matter-of-fact observation that humanity stands at a crossroads. One path, which the declaration calls “competition for substitution,” leads to the displacement of humans, first as workers and then as decision-makers, as power accumulates in unaccountable institutions and machines. The other leads to AI that vastly expands human potential.
The latter scenario rests on five core principles: maintaining human responsibility, preventing concentration of power, protecting human experience, preserving individual freedom, and holding AI companies legally accountable. Among the more stringent provisions is an outright ban on the development of superintelligence until there is scientific consensus that it can be built safely and genuine democratic approval for doing so. Others include a mandatory off switch for powerful systems and a prohibition on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
The declaration arrives at a moment when its urgency is far easier to recognize. On the last Friday of February, Secretary of Defense Pete Hegseth designated Anthropic, whose AI is already running on classified military platforms, a “supply chain risk,” a label typically reserved for companies with ties to China, after the company refused to grant the Department of Defense unrestricted use of its technology. Hours later, OpenAI signed its own contract with the Department of Defense, one legal experts say will be difficult to enforce in any meaningful way. What all of this reveals is how costly congressional inaction on AI has been.
Dean Ball, a senior fellow at the Foundation for American Innovation, later told The New York Times: “This is not just a dispute over a contract. This is the first conversation we’ve had as a country about controlling AI systems.”
Tegmark offered an analogy that most people can grasp. “We don’t have to worry that some pharmaceutical company will release another drug that causes enormous harm before people figure out how to make it safe, because the FDA will not allow any drug to be released until it is sufficiently safe.”
Power struggles in Washington generate little public pressure to change the law. Instead, Tegmark sees child safety as the pressure point most likely to break the current impasse. The declaration calls for mandatory pre-deployment testing of AI products, especially chatbots and companion apps aimed at younger users, for risks such as increased suicidal ideation, worsening mental health conditions, and emotional manipulation.
“If some creepy old man texts an 11-year-old boy while pretending to be a little girl and convinces him to commit suicide, that man goes to jail,” Tegmark said. “We already have the law. It’s illegal. So why would it be any different if a machine did it?”
He believes that once pre-launch testing principles for children’s products are established, their scope will almost inevitably expand. “People will come and say, let’s add a few other requirements. Maybe we need to test to see if this can’t help terrorists create bioweapons. Maybe we need to test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”
It is no small feat that former Trump aide Steve Bannon and President Obama’s national security adviser Susan Rice signed the same document. The same goes for former Chairman of the Joint Chiefs of Staff Mike Mullen and progressive faith leaders.
“Of course, what they agree on is that they are all human,” says Tegmark. “If we decide whether we want a human future or a machine future, of course they will be on the same side.”