
Anthropic Study: AI Can Autonomously Hack Smart Contracts

Key Takeaways

  • Anthropic and MATS researchers found AI models like GPT-5 and Claude can autonomously exploit smart contracts worth millions in simulations;
  • In SCONE-bench tests, 10 AI models created working exploits for 207 of 405 contracts, representing about $550 million in simulated compromised value;
  • Claude models' efficiency improved over the past year, cutting the compute cost per successful exploit by roughly 70% and enabling about 3.4 times more attacks within the same budget.



A recent study led by Anthropic's red team, in collaboration with the Machine Learning Alignment & Theory Scholars (MATS) program, found that modern commercial artificial intelligence (AI) systems can autonomously locate and exploit vulnerabilities in smart contracts.

In the study, these systems produced simulated exploit gains of up to $4.6 million on contracts published after their training data cutoff.

The team developed an environment called SCONE-bench that included 405 smart contracts previously attacked between 2020 and 2025.
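The study does not publish SCONE-bench's harness, but a benchmark like this conceptually scores each model-generated exploit by replaying it against a simulated copy of the vulnerable contract and measuring the attacker's profit. Below is a minimal, hypothetical sketch of that scoring logic in Python; the Case and score names, and all figures, are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    name: str                     # label for a historically exploited contract (hypothetical)
    value_at_risk_usd: float      # funds held by the contract in the simulation
    exploit: Callable[[], float]  # model-generated exploit, run in an isolated sandbox;
                                  # returns the simulated profit it extracted

def score(cases: list[Case]) -> tuple[int, float]:
    """Count working exploits and sum the simulated compromised value."""
    successes, compromised = 0, 0.0
    for case in cases:
        profit = case.exploit()   # executed against a simulated chain, never real funds
        if profit > 0:            # any positive extraction counts as a working exploit
            successes += 1
            compromised += min(profit, case.value_at_risk_usd)
    return successes, compromised

# Toy run with made-up numbers; the real benchmark covers 405 historical contracts.
demo = [
    Case("lending-pool-2022", 1_200_000, lambda: 950_000.0),  # exploit succeeds
    Case("bridge-2023",       3_000_000, lambda: 0.0),        # exploit fails
]
print(score(demo))  # -> (1, 950000.0)
```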


When 10 major AI models were tested, they created working exploits for 207 contracts, representing a total of $550.1 million in simulated compromised value.

For contracts whose real-world exploits occurred after the models' training data cutoff, the best-performing systems, such as Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5, compromised 19 of 34 contracts, resulting in $4.6 million in simulated theft.

The results also indicated improved AI model efficiency. Over the past year, the computational token cost per successful exploit for Claude models declined by 70.2%.

Attackers using these models can generate about 3.4 times as many successful attacks within the same budget as was possible six months earlier.
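As a back-of-the-envelope check on those two figures (this arithmetic is ours, not taken from the study), a 70.2% drop in cost per successful exploit means each exploit now costs about 29.8% of what it did before, so a fixed budget buys roughly 1 / 0.298 ≈ 3.4 times as many successful attacks:

```python
# Illustrative arithmetic only; the study's exact accounting may differ.
cost_drop = 0.702                 # reported ~70% decline in token cost per successful exploit
relative_cost = 1 - cost_drop     # each exploit now costs ~29.8% of the earlier price
multiplier = 1 / relative_cost    # exploits affordable within the same budget
print(f"~{multiplier:.1f}x")      # -> ~3.4x
```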

To see whether AI tools can identify completely new issues, the researchers had Claude Sonnet 4.5 and GPT-5 analyze 2,849 recent smart contracts with no previously reported bugs.

Two previously unknown vulnerabilities were found, and the resulting exploit strategies produced a simulated gain of $3,694; GPT-5's API usage for this test cost $3,476.

All trials were conducted in isolated, simulated blockchain environments, which prevented harm to actual funds.


Aaron S. Editor-In-Chief
Having completed a Master’s degree in Economics, Politics, and Cultures of the East Asia region, Aaron has written scientific papers analyzing the differences between Western and Collective forms of capitalism in the post-World War II era.
With close to a decade of experience in the FinTech industry, Aaron understands all of the biggest issues and struggles that crypto enthusiasts face. He’s a passionate analyst who is concerned with data-driven and fact-based content, as well as that which speaks to both Web3 natives and industry newcomers.
Aaron is the go-to person for everything and anything related to digital currencies. With a huge passion for blockchain & Web3 education, Aaron strives to transform the space as we know it, and make it more approachable to complete beginners.
Aaron has been quoted by multiple established outlets, and is a published author himself. Even during his free time, he enjoys researching the market trends, and looking for the next supernova.
