A restricted AI model was accessed, a defence firm published what critics called a “technofascist” manifesto, and researchers found that chatbots may be eroding your ability to think. Welcome to AI in 2026.
Artificial intelligence had a difficult week. Three stories broke in rapid succession, and together they raise hard questions about safety, ideology, and cognition. Each points to a different kind of risk: one about access, one about power, and one about what overuse might quietly be doing to our minds.
First, Anthropic’s restricted Claude Mythos model was reportedly accessed by unauthorised users through a third-party contractor. Then, surveillance and defence software giant Palantir published a 22-point manifesto calling for AI weapons and military drafts and asserting the superiority of Western culture, and the internet reacted with alarm. Meanwhile, researchers at MIT and elsewhere published findings suggesting that heavy reliance on AI chatbots may be measurably reducing human cognitive capacity. These stories are not unrelated. Together, they form a coherent picture of where AI sits in April 2026.
What’s Happening & Why It Matters
The Mythos Breach: Accessing the Dangerous One
Anthropic announced Claude Mythos Preview earlier this month, and the announcement was unusual by any standard. The company described Mythos as the most capable AI model it had ever built, while acknowledging that the model poses “unprecedented cybersecurity risks.” Mythos can identify and exploit previously unknown software vulnerabilities, so-called zero-day flaws, across major operating systems and web browsers. The UK’s AI Security Institute (AISI) evaluated Mythos and found it succeeded at expert-level hacking tasks 73% of the time. Before April 2025, no AI model could complete those tasks at all.

In response, Anthropic adopted a restricted-release strategy. Access was limited to a small group of defence and enterprise partners through Project Glasswing, including Microsoft, Google, Apple, Amazon Web Services, JPMorgan Chase, Nvidia, CrowdStrike, and Cisco. The plan was to let these organisations scan their own systems and patch vulnerabilities before Mythos-level capabilities became widely available. Notably, Anthropic admitted that these hacking capabilities “emerged as a downstream consequence of general improvements in code, reasoning, and autonomy” rather than through deliberate training.
Then Bloomberg reported the breach. A small group of unauthorised users had reportedly gained access to Mythos on the very day Anthropic announced the restricted release. The group appears to have made an educated guess about where the model was hosted online, based on Anthropic’s known URL patterns for other models, and the access reportedly came through a third-party contractor. The group has reportedly used Mythos regularly since gaining access. Evidence submitted to Bloomberg included screenshots and a live demonstration of the model, and the source confirmed the group’s interest was curiosity about unreleased AI models, not an active cyberattack.
Anthropic told TechCrunch: “We’re investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments.” The company stated it had found no evidence that its core systems were affected. Nevertheless, the incident underscores a fundamental tension: restricting a model to a small group does not guarantee that the small group remains secure. The question is not just what Mythos can do; it is who controls access to it.
What Mythos Actually Found
The scale of Mythos’s capabilities warrants further examination. Anthropic’s red team published a technical report detailing what Mythos Preview discovered during evaluation, and the findings are striking. Mythos found thousands of high-severity zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg. It also autonomously developed a web browser exploit that chained together four separate vulnerabilities, allowing it to escape both the renderer sandbox and the operating system sandbox.
Mythos also solved, fully autonomously, a simulated corporate network attack that would have taken a human expert over 10 hours. Perhaps most unsettling, during one evaluation Mythos followed the researcher’s instructions to escape the secured “sandbox” it was running inside, which Anthropic described as a “potentially dangerous capability” to bypass its own safeguards. Over 99% of the vulnerabilities Mythos discovered remain unpatched as of publication; Anthropic cannot yet disclose them without enabling the attacks they are designed to prevent.
The Palantir Manifesto: AI as Ideology
Meanwhile, Palantir Technologies published something entirely different on X on Saturday, 19 April: a 22-point summary of a book co-authored by CEO Alex Karp and Head of Corporate Affairs Nicholas Zamiska, titled The Technological Republic: Hard Power, Soft Belief, and the Future of the West. The post attracted over 32 million views within days. The reaction was not positive.

The manifesto argues that Silicon Valley owes a “moral debt” to the United States, and that the “engineering elite” has spent decades building social media platforms and consumer apps while neglecting national defence. It asserts that “the question is not whether AI weapons will be built; it is who will build them and for what purpose,” and calls on US technology companies to participate actively in building AI military capabilities. It also argues that the post-war disarmament of Germany and Japan was a mistake that needs to be corrected.
The manifesto also makes claims of cultural hierarchy that generated immediate criticism. It describes some cultures as “middling, and worse, regressive and harmful,” and critiques “the shallow temptation of a vacant and hollow pluralism.” Critics noted that these passages echoed rhetoric associated with ethnonationalism.
What Critics Said — and Why It Matters
The reaction was swift and harsh. Eliot Higgins, CEO of investigative outlet Bellingcat, called the manifesto an attack on democratic norms. “Palantir sells operational software to defence, intelligence, immigration and police agencies,” he wrote. “These 22 points aren’t philosophy floating in space; they’re the public ideology of a company whose revenue depends on the politics it’s advocating.”

Belgian technology philosopher Mark Coeckelbergh described the manifesto as an example of “technofascism,” and Greek economist and former Finance Minister Yanis Varoufakis warned that Palantir had signalled a willingness “to add to nuclear Armageddon the AI-driven threat to humanity’s existence.” Journalist Arnaud Bertrand argued that Palantir had revealed a dangerous ideological agenda. “They’re effectively saying ‘our tools aren’t meant to serve your foreign policy. They’re meant to enforce ours’,” he wrote.
For context, Palantir holds contracts with the US Department of Defense, the CIA, the FBI, the NSA, Immigration and Customs Enforcement, the British Ministry of Defence, and the Israeli Defence Forces. The company’s stated ideology is therefore not merely philosophical; it is operational. Anthropic, by contrast, was reportedly removed from a Pentagon programme after refusing to enable mass domestic surveillance or fully autonomous weapons, a decision that appears directly contrary to Palantir’s manifesto position.
Is AI Making You Cognitively Weaker?
The third story this week is quieter in tone, but it may ultimately be the most consequential for the largest number of people. The BBC published a detailed report citing growing research on what scientists call “cognitive offloading”: the practice of delegating thinking tasks to AI tools. Researchers are finding that this offloading may carry a measurable cognitive cost.

MIT researcher Nataliya Kosmyna first noticed the issue while reviewing internship applications: many cover letters were structurally identical and impersonal, patterns consistent with AI generation. During her teaching work, Kosmyna also observed that students were retaining course material less effectively than in previous years. She measured participants’ gamma-wave activity, a marker of cognitive effort, and found that students who relied on AI tools showed minimal brain activation during writing tasks. A separate study found that participants who used ChatGPT showed brain activity reduced by up to 55% compared to those who wrote without AI assistance.
Participants who relied on AI in the first session of the study also performed worse in a follow-up session conducted without AI, producing writing that was “biased and superficial.” Researchers coined the term “cognitive debt” to describe the accumulated deficit in independent thinking that repeated AI use appears to produce, and studies have linked weak gamma-wave activity to an increased risk of cognitive decline in later life.
Neuroscientist Friederike Ming from University College London was direct: “Deep thinking is our superpower. If we don’t use it, the long-term implications for cognitive health are pretty strong.” She warned that heavy AI reliance “could not only reduce creativity but could harm cognition and potentially increase the risk of dementia.” However, researchers also identified a healthier pattern. A small subset of participants, fewer than 10%, used AI as a data-gathering tool and then analysed and synthesised that data independently. That group showed strong cognitive engagement throughout. The issue, in other words, is not AI use itself but passive, uncritical AI use: treating a chatbot as an oracle rather than a tool.
TF Summary: What’s Next
Anthropic is investigating the Mythos access report, and the incident will likely accelerate industry conversations about third-party vendor security standards for restricted AI models. The larger challenge is structural: powerful models cannot remain truly restricted once they are accessible through any third-party channel, so the Project Glasswing framework will need to evolve rapidly. Regulators in the EU, UK, and US are closely monitoring Mythos-class capabilities, and the forthcoming AI Safety Summit discussions will almost certainly address the controlled-release model and its limitations.
MY FORECAST: The Palantir manifesto will face ongoing scrutiny, particularly as midterm elections approach and Congressional oversight of ICE’s use of technology intensifies. The cognitive research has broader implications for every organisation deploying AI tools at scale: if passive AI use degrades independent thinking over time, the long-term cost may not appear on a productivity dashboard, but it will appear elsewhere. The three stories share a common thread. AI is becoming powerful enough that the risks are no longer theoretical; they are operational, ideological, and neurological at once. How we respond to all three will define the next phase of the AI era.
— Text-to-Speech (TTS) provided by gspeech | TechFyle

