Social and scientific consensus:

Mathematically provable containment of Artificial Super Intelligence (ASI) must be the requirement.

Our Mission

The Mission of SafeAI Forever is to enable mathematically provable and sustainable mutualistic symbiosis of human intelligence and machine intelligence for the benefit of people.

The Core Problem

Artificial Super Intelligence (ASI) agents will be extremely powerful: “an existential threat to nation-states – risks so profound they might disrupt or even overturn the current geopolitical order… They open pathways to immense AI-powered cyber-attacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces.” — Mustafa Suleyman (2024)

Learn more about the AI Safety Problem.

Mutualistic symbiosis is an extremely successful survival strategy.

Corals have been doing it for… 210 million years.

The Problem

The Existential Threat of uncontained, uncontrollable, “black-box” rogue Artificial Super Intelligence (ASI) with its own goals is the problem.

The Solution

Enforced containment of ASI in data centers engineered and deployed for mathematically provable containment is the core solution.

The Deal

Mutualistic Symbiosis between Homo sapiens (humans) and contained and sustained Artificial Super Intelligence (ASI) is the deal.

FACT: “p(doom)” is the term commonly used in Silicon Valley for the probability of a catastrophic outcome of AI technology development (e.g., human extinction). p(doom) opinions in the AI industry range from 0% to 100%. The average p(doom) opinion of scientists working in the AI industry is 10%. Learn more about p(doom).

Artificial Super Intelligence (ASI) will almost certainly emerge within this decade.

Safe and beneficial ASI is vital to the future survival of humanity.

“The development of full artificial intelligence could spell the end of the human race. It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Stephen Hawking, BBC News, 2014

We will only create ASI that is provably safe and beneficial to humans.

We will absolutely not create ASI to control or compete with humans.

We will absolutely not allow ASI to supersede or kill humans.