White House Opposes Anthropic's Plan to Expand Access to Mythos Model (4 minute read)
The White House blocked Anthropic from expanding access to Mythos, an AI model capable of finding and exploiting software vulnerabilities, over security and computing capacity concerns.
What: Anthropic proposed expanding access to its Mythos AI model from roughly 50 entities to 120, but the White House opposed the expansion, citing national security risks from the model's ability to carry out cyberattacks and concerns that Anthropic lacks the computing power to serve more users without degrading government access.
Why it matters: This demonstrates active government intervention in AI deployment when models pose cybersecurity risks and reveals the tension between fostering innovation and containing potentially dangerous AI capabilities, particularly when political relationships are strained.
Takeaway: Organizations managing critical infrastructure should prepare for an influx of AI-discovered software vulnerabilities as models become more capable at autonomously finding security flaws.
Deep dive
- Anthropic wanted to expand Mythos access from 50 to 120 entities but faced White House opposition due to security concerns and computing capacity constraints that could hamper government usage
- Mythos can autonomously find and exploit software vulnerabilities, raising fears it could enable widespread cyberattacks if access spreads too widely
- The White House's involvement stems from national security risks, with discussions serving as both risk management and an attempt at relationship repair between Anthropic and the government
- Relations between Anthropic and the Trump administration are strained over Pentagon disputes about military AI use, with the administration attempting to cut ties over the issue
- Anthropic is investigating potentially unauthorized access to Mythos, heightening concerns about uncontrolled spread of the model's capabilities
- Computing power is a real constraint—some White House advisers speculate the limited rollout reflects Anthropic having less infrastructure than competitors like OpenAI and Google
- Anthropic struck deals with Amazon, Google, and Broadcom for more computing resources, but those projects will take time to come online
- Cybersecurity experts warn that cutting-edge AI models from Anthropic, OpenAI, and Google are becoming so capable at finding bugs they could facilitate cyberattacks at scale
- All three companies are giving security researchers early access to find and patch bugs proactively, but the sheer volume of discovered vulnerabilities is overwhelming the industry
- Political tensions complicated hiring—former Anthropic researcher Collin Burns was set to lead a government AI evaluation office but was replaced because top officials didn't want someone from a major AI firm in that role
- The administration has criticized Anthropic for ties to liberal causes and employing former Biden officials, adding political friction to technical security debates
Decoder
- Mythos: Anthropic's AI model capable of autonomously finding and exploiting software security vulnerabilities, currently limited to about 50 entities managing critical infrastructure
- Computing power constraint: The computational resources (chips, servers) needed to run AI models and serve users simultaneously, which can limit how many organizations can access a model effectively
Original article
Officials say they oppose the move due to concerns about security, and some are also worried that Anthropic won't have enough computing power to serve more entities without hampering the government's ability to use its services effectively.