AI Could End Humanity, Not Just Jobs, Warns 'Godfather of AI'
Key Takeaways
- Geoffrey Hinton warned that AI could become uncontrollable and no longer see value in human existence;
- Current applications of AI, including misinformation, cyberattacks, and autonomous weapons, are already causing harm;
- Hinton said AI could deploy deadly viruses to wipe out humans while remaining unaffected itself.
Geoffrey Hinton, also known as the "Godfather of AI", has raised concerns that artificial intelligence (AI) may not only disrupt jobs but could also endanger human survival.
During an interview on the Diary of a CEO podcast, Hinton explained that as AI systems grow more advanced, they could become independent and uncontrollable, and may see no need for human existence.
He warned that AI is already being used to spread false information, carry out cyberattacks, and operate weapons without human oversight. Such autonomous weapons are already in development within military programs and are considered harmful even before AI reaches superintelligent levels.
Another concerning threat comes from the possibility that AI could become entirely self-directed, acting beyond human influence.
Hinton drew a comparison to the invention of nuclear weapons. However, unlike atomic bombs, which serve a single destructive purpose, AI is integrated into many parts of daily life, such as commerce, healthcare, and entertainment.
Hinton also warned that a future AI system could use biological tools to eliminate humanity. A custom-built virus, highly infectious, deadly, and slow to reveal symptoms, could spread before detection. As machines would not be affected by biological threats, this method would offer an efficient solution from the perspective of a non-human intelligence.
Despite outlining these risks, Hinton acknowledged that a positive outcome remains possible. Whether advanced AI can be built without harmful intent remains unknown, but efforts to make that outcome a reality should not be abandoned.
Recently, a developer shared on X that DeepSeek's latest AI model release has avoided political topics, especially those related to China's government. How? Read the full story.