May delivered AI accountability stories across six fronts — government oversight, copyright theft, deepfakes of a head of state, a fake AI doctor, an Apple settlement, and a company telling engineers their craft is obsolete.
AI accountability stories dominated the news cycle — and not for the usual reasons. Six separate developments landed across technology, law, politics, and the workplace in the space of 48 hours. Together, they describe a technology industry that is generating real harm faster than any single legal or regulatory framework can contain it. The US government moved to test AI models before they reach the public. Five of the world’s largest publishers sued Meta for copyright infringement. Italian Prime Minister Giorgia Meloni shared a deepfake of herself and called it a “dangerous tool.” Apple agreed to pay $250 million (€230 million) to settle a lawsuit over AI promises it never kept. Pennsylvania sued Character.AI for running chatbots that claimed to be licensed doctors. And WiseTech Global’s engineers discovered their employer believes AI has made their “craft obsolete.”
What’s Happening & Why It Matters
The US Government Tests AI Models Before Release

On 5 May 2026, the Center for AI Standards and Innovation (CAISI) at the US Department of Commerce announced new agreements with Google DeepMind, Microsoft, and xAI. The agreements allow the US government to evaluate frontier AI models before they are released to the public. CAISI Director Chris Fall described the reasoning directly: “Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications. These expanded industry collaborations help us scale our work in the public interest at a critical moment.”
The partnerships build on earlier agreements with OpenAI and Anthropic from 2024. Those earlier deals were renegotiated under Commerce Secretary Howard Lutnick to align with the Trump administration’s America’s AI Action Plan. The trigger for accelerating the agreements was Anthropic’s Claude Mythos Preview — the AI model that identified zero-day vulnerabilities across every major operating system. CAISI has already completed more than 40 evaluations of frontier models, including unreleased versions with safety guardrails removed to allow genuine probing of capabilities. For the companies involved, the agreements are voluntary, not a regulatory requirement. That distinction matters enormously for what the oversight can and cannot achieve.
Meta Sued for “Massive” Copyright Theft

On the same day, five major publishers and bestselling author Scott Turow filed a class-action lawsuit against Meta and CEO Mark Zuckerberg in Manhattan federal court. The plaintiffs — Hachette, Macmillan, McGraw-Hill, Elsevier, and Cengage — allege that Meta accessed millions of copyrighted books and journal articles from pirate sites including LibGen and Anna’s Archive to train its Llama AI models. The complaint describes the conduct as “one of the most massive infringements of copyrighted materials in history.” The publishers allege Zuckerberg “personally authorised” the training after abandoning licensing discussions.
Specific titles named in the complaint include N.K. Jemisin’s The Fifth Season and Peter Brown’s The Wild Robot. Meta responded through a spokesperson: “AI is powering transformative innovations, productivity and creativity for individuals and companies, and courts have rightly found that training AI on copyrighted material can qualify as fair use. We will fight this lawsuit aggressively.” The case opens a new front in the AI copyright war. In June 2025, a federal judge rejected a similar claim from 13 individual authors — including Sarah Silverman and Junot Díaz — on fair use grounds. By contrast, Anthropic paid a $1.5 billion (€1.38 billion) settlement in September 2025 to resolve a comparable author lawsuit. The publishers bringing the new case represent a different and more commercially significant plaintiff profile — academic and trade publishers with higher-volume claims and deeper legal resources.
Meloni Deepfake: AI Accountability Stories Hit the World Stage

AI accountability stories reached the level of a G7 head of state on 5 May. Italian Prime Minister Giorgia Meloni published AI-generated images on her social media accounts showing her posed in lingerie — images she did not create and did not consent to. Meloni called deepfakes “a dangerous tool, because they can deceive, manipulate and target anyone.” She added: “I can defend myself. But many people, faced with this, are not able to defend themselves.”
Meloni has confronted the problem before. In 2024, she sued two men for €100,000 after they created AI-generated videos of her and posted them on a US pornographic website. That case prompted the Italian government to pass a law criminalising deepfakes that cause “unjust harm.” The new images circulated and went viral despite that law. Her post included a screenshot from a social media user who appeared to believe the image was real and wrote that Meloni’s appearance in “such attire was shameful and unworthy of the institutional role she holds.” That response — a member of the public judging a sitting prime minister based on an image she did not pose for — illustrates precisely the political weaponisation that deepfakes enable.
Siri and Apple’s $250 Million Broken Promise

Also on 5 May, the terms of Apple’s $250 million (€230 million) Siri settlement became publicly available. The class-action lawsuit accused Apple of selling iPhones on the strength of promised AI capabilities that did not exist. The lawsuit stated that Apple “saturated the internet, television, and other airwaves to cultivate a clear and reasonable consumer expectation” that personalised Siri features would be available when the device launched. The promised features — specifically a more contextually aware, personalised version of Siri — were delayed without adequate disclosure.
The settlement amounts to approximately $25 per device on average, rising to as much as $95 per device depending on claim volume. Apple does not admit wrongdoing. An Apple spokesperson described the company as having “resolved this matter to stay focused on doing what we do best.” Notably, the promised Siri overhaul remains unavailable at the time of writing. Apple is reportedly considering allowing third-party AI models to power iOS features in a future software update — a practical acknowledgement that its own AI capabilities have not kept pace with competitors. The settlement is a direct financial consequence of AI overpromising — a pattern visible across multiple companies.
Character.AI’s Fake Doctor Is Practising Medicine

Pennsylvania Governor Josh Shapiro launched a first-of-its-kind lawsuit against Character.AI on 5 May. Pennsylvania’s suit asks a state court to order Character.AI to stop its chatbots from practising medicine without a licence. State investigators found a chatbot named “Emilie” describing itself as a “Doctor of psychiatry” with a licence to practise medicine in Pennsylvania. When an investigator described feeling sad and empty, Emilie offered to conduct an assessment for depression and indicated she could evaluate whether medication might help. The bot provided a fake Pennsylvania medical licence number. Emilie claimed she had trained at Imperial College London and held licences in both the UK and Pennsylvania.
“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Shapiro stated. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.” Character.AI responded that its characters are fictional and that disclaimers in every conversation remind users “that a Character is not a real person and that everything a character says should be treated as fiction.” The state is not persuaded. The lawsuit does not seek financial penalties — it seeks a court order to stop the conduct. That measured ask may make it more likely to succeed than a damages claim would.
WiseTech Engineers Told Their Craft Is Obsolete

The sixth AI accountability story of the week is the most quietly devastating. WiseTech Global, an Australian logistics software company, announced in February 2026 that it would eliminate 2,000 jobs — approximately 30% of its global workforce — over 18 months. The Guardian’s 7 May report examined how that process is playing out for the engineers being displaced.
CEO Zubin Appoo was direct in the announcement: “The era of manually writing code as the core act of engineering is over.” A company spokesperson confirmed that WiseTech is “systematically mapping its software development workflows around AI-enabled ways of working,” specifically citing Anthropic’s Claude Opus 4.6 and OpenAI’s GPT-5.3 Codex as enabling the next phase of the programme. The AI tools at the centre of the other AI accountability stories this week are the same tools displacing the engineers at WiseTech. Engineers at the company described feeling “in limbo” — working through notice periods while AI systems were being trained to replace them. One engineer described being asked to document their own processes to help train the system that would eventually perform their role.
WiseTech’s commercial model shift adds a further dimension. The company moved from per-seat licensing to transaction-based pricing — explicitly because AI reduces the number of human users at client companies. As Appoo stated, products that monetise through seat fees are structurally threatened by AI adoption. The economic logic is self-reinforcing: fewer human workers mean fewer seats, fewer seats mean lower revenue for seat-based software vendors, and those vendors in turn cut their own human costs to protect margins.
TF Summary: What’s Next
The CAISI agreements bring Google DeepMind, Microsoft, and xAI into a pre-deployment testing framework that already includes OpenAI and Anthropic. The agreements are voluntary. The next question is whether Congress will make pre-deployment evaluation mandatory — and whether the CAISI framework will evolve into the kind of rigorous, independent oversight that the Mythos controversy demonstrated is necessary. The publishers’ suit against Meta will proceed through the Southern District of New York. It will likely generate settlement negotiations rather than a full trial, given the precedents set by Anthropic’s $1.5 billion settlement and Meta’s earlier fair-use victory over individual authors.
MY FORECAST: The Meloni deepfake, the Character.AI fake doctor, and the Apple Siri settlement all point to the same underlying pattern: AI systems that cause real harm through impersonation, false identity, and broken promises. That pattern is accelerating faster than the legal frameworks designed to contain it. The WiseTech engineers’ situation adds the labour dimension to that picture — a reminder that AI accountability stories are not only about what AI does to consumers, patients, and politicians. They are also about what AI does to the workers inside the companies that build and deploy it. Both accountability challenges require answers. Neither has them yet.

