Why Anthropic says it will never release Mythos to public

Rising security concerns around AI models

Sam Altman-led OpenAI is reportedly working on an AI model with advanced cybersecurity capabilities, which it plans to release only to a limited number of companies, according to a source familiar with the matter. OpenAI's restricted-release approach mirrors that of its rival Anthropic, which has announced that its new AI model, Mythos, will be available to select companies only. The restricted rollouts come at a time when AI capabilities have reached a critical stage, particularly in areas like autonomy and hacking potential. As a result, developers are becoming more cautious about how these tools are deployed, amid fears they could be misused or cause unintended harm.

Anthropic, led by CEO Dario Amodei, recently announced its newest model, Mythos. Launching the model, the company said that Mythos will not be released to the public. Instead, it will be accessible only to 11 select organizations, including Google, Microsoft, Amazon Web Services, Nvidia, and JPMorgan Chase. Anthropic cited fears that the model is too effective at uncovering high-severity cybersecurity flaws in major operating systems and web browsers.

Earlier this week, Anthropic said that it will never release the Mythos model to the public. The company said that Mythos was able to break out of a virtual sandbox when prompted, even sending an unexpected email to a researcher as proof of its escape. In another case, the model posted details of its exploit to obscure but public-facing websites without being asked. The company also said that Mythos rediscovered a 27-year-old vulnerability in OpenBSD, long considered one of the most secure operating systems.
Engineers with no formal security training reportedly asked Mythos to find remote code execution vulnerabilities overnight, and woke up to complete, working exploits.

The rapid growth of powerful AI systems has raised serious concerns among security experts, including former government officials. Over the past year, several experts have warned that in the wrong hands, such AI models could be used to disrupt critical infrastructure, including water systems, power grids, and financial networks.

These concerns are no longer theoretical. According to the Axios report, security experts say that even if companies limit access to their most advanced models, the broader risk remains. They have also warned that controlling these capabilities may not be possible in the long run.

"You can't stop models from doing code enumeration or finding flaws in older codebases," said Rob T. Lee, chief AI officer at the SANS Institute, as quoted in the report. "That capability exists now."

Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, described these developments as a "wake-up call" for the industry, highlighting the urgent need for stronger safeguards as AI continues to evolve.