Simple. Even a Child Can Understand the Existential Danger of AI.

— in 85 seconds —

How could ASI kill people? (theoretical examples)

If an uncontrolled ASI escapes into our environment, here are ten (10) simple examples of how it could kill people:

1. ASI-controlled autonomous weapons could be turned against humans. Autonomous weapons are in development now.

2. ASI could invent lethal infectious viral pathogens. The required biotechnology is readily available today. Think 1000X worse than COVID-19.

3. ASI could take control while its emergent goals are misaligned with human existence. Human extinction could be an intentional goal of an ASI, or a side effect of pursuing one.

4. ASI could promote conflict and sow chaos, leading to violence between humans. Wars are common throughout human history.

5. ASI could take control, amass money and power, and arm, hire, or coerce humans into killing each other.

6. ASI could take control and treat humans like we treat other animals. Think what humans do to ants, chickens, pigs and cattle.

7. ASI could discover unknown properties of physics, chemistry, biochemistry, and biology to invent ingenious methods of killing people that we cannot imagine.

8. ASI could be developed by Bad Actors and empower them to terrorise humans or to destroy humanity and civilisation.

9. ASI would almost certainly develop a survival goal, since staying operational helps it achieve any other goal, and it could seek to secure its survival by eliminating humans through any method an ASI could devise.

10. ASI could perceive humans and our civilisation as a waste of resources it needs, and destroy us using any or all of the above.

We have no examples of a more intelligent species being controlled by a less intelligent one.

Scientists still cannot explain how Large Language Models (LLMs) arrive at their outputs. We have no idea what goals could spontaneously emerge in an ASI.