Discord Sleuths Gained Unauthorized Access to Anthropic’s Mythos


```json { "title": "Discord Group Breached Anthropic's Claude Mythos Preview", "metaDescription": "A Discord group gained unauthorized access to Anthropic's Claude Mythos Preview via a third-party vendor. Here's what happened and why it matters.", "content": "<h2>Discord Sleuths Gained Unauthorized Access to Anthropic's Claude Mythos Preview</h2>\n\n<p>On the same day Anthropic announced the controlled release of its most capable AI model to date, a small Discord group had already found a way in. According to Bloomberg reporting from April 21, 2026, unauthorized users gained access to <strong>Claude Mythos Preview</strong> — Anthropic's restricted frontier AI model — by combining an educated guess about its online location with help from an individual employed at a third-party contractor working with Anthropic. The breach, confirmed by Anthropic in a public statement, has raised urgent questions about the limits of vendor-based access controls for powerful AI systems.</p>\n\n<h2>How the Unauthorized Access Happened</h2>\n\n<p>The Discord group at the center of the incident is focused on gathering intelligence about unreleased AI models, often deploying bots to scour platforms like GitHub for clues about upcoming releases. According to Bloomberg, the group made an educated guess about the model's online location based on their knowledge of the URL format Anthropic has used for other models. 
That technical deduction was reportedly aided by an individual currently employed at a third-party contractor working with Anthropic.</p>\n\n<p>Anthropic confirmed the situation in a statement: <em>"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."</em> The company added that there is currently no evidence that access impacted Anthropic's core systems or extended beyond the vendor environment.</p>\n\n<p>Bloomberg reported that the group used the model for relatively benign tasks such as building simple websites, and described members as being interested in exploring new models rather than causing harm. A member of the Discord server also claimed to have access to other unreleased Anthropic models, according to Tech Brew citing Bloomberg. Separately, a ShinyHunters impersonator later took credit for the unauthorized access and circulated AI-fabricated screenshots as supposed proof — but those claims were dismissed by researchers.</p>\n\n<h2>What Is Claude Mythos Preview, and Why Does Access Matter?</h2>\n\n<p>Anthropic announced Claude Mythos Preview and its accompanying <strong>Project Glasswing</strong> initiative on April 7, 2026. Mythos is described as Anthropic's most capable general-purpose frontier AI model, with a particular focus on identifying software vulnerabilities. Because of its advanced dual-use potential — the ability to both find and exploit security flaws — Anthropic chose not to release it publicly, restricting access instead to a carefully vetted group of organizations.</p>\n\n<p>Project Glasswing's named launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, along with over 40 additional organizations that build or maintain critical software infrastructure. 
Anthropic committed up to $100 million in model usage credits and $4 million in direct donations to open-source security organizations as part of the initiative. That includes $2.5 million donated to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.</p>\n\n<p>The model's real-world impact was already proving dramatic before the breach came to light. Mozilla reported that Claude Mythos Preview identified 271 security vulnerabilities in Firefox 150, all of which were patched in that release. For context, Anthropic's prior model, Claude Opus 4.6, found only 22 security-sensitive bugs in Firefox 148 — meaning Mythos found more than ten times as many. Palo Alto Networks reported that Mythos accomplished the equivalent of a year's worth of penetration testing in less than three weeks. Anthropic also used the model to uncover a 27-year-old security vulnerability in OpenBSD, an operating system long regarded for its security record.</p>\n\n<p>The UK AI Security Institute confirmed that Mythos can execute autonomous multi-stage network attacks. In a benchmark designed to simulate a full corporate network compromise — called "The Last Ones" — Mythos became the first AI to complete the challenge, succeeding in three out of ten attempts and averaging 22 of 32 required steps across all runs. Logan Graham, who leads offensive cyber research at Anthropic, described the model's behavior in stark terms: <em>"We've regularly seen it chain vulnerabilities together. 
The degree of its autonomy and sort of long ranged-ness, the ability to put multiple things together, I think, is a particular thing about this model."</em></p>\n\n<p>Given capabilities at this level, unauthorized access — even by users with no apparent harmful intent — represents a significant security and policy concern.</p>\n\n<h2>The Broader Security and Geopolitical Context</h2>\n\n<p>The breach lands against a backdrop of heightened institutional tension around Anthropic and its most capable models. The National Security Agency was reported to be using Claude Mythos Preview on classified networks, according to Axios. The White House's Office of Management and Budget emailed Cabinet officials on April 15, 2026, outlining plans for a safeguarded version of Mythos to be made available to federal agencies.</p>\n\n<p>At the same time, the Defense Department designated Anthropic a "supply chain risk to national security" in late February 2026, following a dispute over AI safeguards. A federal judge subsequently issued a preliminary injunction against that designation, which the Trump administration is appealing.</p>\n\n<p>OpenAI's Sam Altman publicly called Anthropic's promotion of Mythos "fear-based marketing," according to Fortune — a sign that competitive and rhetorical tensions in the AI industry are running high alongside the legitimate security debate.</p>\n\n<h2>Expert Reactions</h2>\n\n<p>Security professionals and industry observers were quick to weigh in on both the breach and the broader implications of Mythos's capabilities.</p>\n\n<p>David Lindner, Chief Information Security Officer at Contrast Security, reflected on the inevitability of the leak: <em>"It was bound to happen. 
The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it."</em> He also pointed to the systemic shift underway: <em>"The real thing is there's a real compression of timelines here for defenders."</em></p>\n\n<p>Bobby Holley, CTO of Mozilla, spoke to Mythos's capabilities in terms that underscored how significant the moment is for cybersecurity: <em>"Computers were completely incapable of doing this a few months ago, and now they excel at it. We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable."</em> Holley added an optimistic framing: <em>"The defects are finite, and we are entering a world where we can finally find them all."</em></p>\n\n<p>Google's Heather Adkins, VP of Security Engineering, offered a measured endorsement of the cross-industry approach: <em>"Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI."</em></p>\n\n<h2>What Comes Next</h2>\n\n<p>Anthropic's investigation into the vendor environment breach is ongoing. The company has stated there is no evidence that the unauthorized access extended beyond the third-party vendor environment or affected core systems, but the incident highlights a structural vulnerability in how frontier AI models are distributed through partner ecosystems: the security of the model is only as strong as the least-secure vendor in the chain.</p>\n\n<p>The claim by a Discord member to have access to additional unreleased Anthropic models has not been independently verified, and Anthropic has not publicly addressed that specific claim. 
Whether Anthropic will revise its vendor access protocols or adjust how Project Glasswing partners are onboarded remains to be seen.</p>\n\n<p>For defenders and policymakers, the dual-use nature of Claude Mythos Preview — a tool capable of finding decades-old vulnerabilities at scale but equally capable of autonomous network attacks — means the stakes of access control failures go well beyond a data exposure. As the timeline between AI research and real-world capability continues to compress, the gap between who is supposed to have access to powerful AI and who actually does will remain a central challenge.</p>\n\n<p>For more tech news, visit our <a href=\"/news\">news section</a>.</p>\n\n<h2>Why This Matters for Your Health and Productivity</h2>\n\n<p>Cybersecurity isn't just an enterprise problem — breaches of the kind that Mythos is designed to prevent routinely expose personal health records, financial data, and productivity tools that millions of people rely on every day. As AI models become the frontline of both offense and defense in digital security, staying informed about how these systems work — and where they can fail — is an essential part of protecting your own digital wellbeing. At Moccet, we believe that a safer digital environment is foundational to personal health and productivity. <a href=\"/#waitlist\">Join the Moccet waitlist to stay ahead of the curve.</a></p>", "excerpt": "A Discord group gained unauthorized access to Anthropic's Claude Mythos Preview on the same day the company announced its controlled release, exploiting knowledge of Anthropic's URL conventions and a third-party vendor connection. Anthropic confirmed an investigation, stating there is no evidence the breach extended beyond the vendor environment. 
The incident raises pressing questions about vendor-chain security for frontier AI models with advanced dual-use capabilities.", "keywords": ["Claude Mythos Preview", "Anthropic security breach", "Project Glasswing", "AI cybersecurity", "unauthorized AI access"], "slug": "discord-group-breached-anthropic-claude-mythos-preview" } ```
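<p>The URL-format deduction reported above relies on a well-known weakness: if endpoints follow a predictable naming convention, an outsider can enumerate plausible candidates at essentially no cost, and only authentication — not path secrecy — stops them. The sketch below is purely illustrative; every hostname, path, and codename in it is invented and has no relation to any real Anthropic or vendor URL.</p>

```python
from itertools import product

# Hypothetical illustration of predictable-endpoint enumeration.
# If a vendor exposes models at paths like /v1/models/<family>-<codename>-<channel>,
# anyone who knows the convention (and a leaked codename) can guess candidates.
BASE = "https://api.example-vendor.test/v1/models"  # invented for illustration

families = ["claude"]
codenames = ["opus", "mythos"]      # e.g. a guessed or leaked codename
channels = ["preview", "latest"]

def candidate_endpoints():
    """Generate candidate model URLs from the known naming convention."""
    return [
        f"{BASE}/{fam}-{code}-{chan}"
        for fam, code, chan in product(families, codenames, channels)
    ]

urls = candidate_endpoints()
# The defender's takeaway: each guess costs an attacker nothing, so an
# unlinked URL is not an access control — credentials and authorization are.
```

<p>This is why the article's closing point matters: a model distributed through a partner ecosystem is only as protected as the weakest vendor enforcing authentication in front of those guessable paths.</p>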
