Authors’ ‘Empty Book’ Protests AI Content Training

When writers publish nothing, they’re still telling AI companies to keep their hands off the books.

Z Patel


The publishing world has found a clever way to scream without writing a single new chapter.

Thousands of authors have joined an unusual protest against AI training on copyrighted books by releasing an “empty” book titled Don’t Steal This Book. The work contains no story, no essays, no poems, no secret literary twist. It contains names. Roughly 10,000 of them. That is the point.

The protest comes at a tense juncture in the United Kingdom. The government is weighing changes to copyright law that would make it easier for AI companies to use protected creative works unless rights holders actively opt out. Writers, composers, publishers, and other creatives are fighting back. They argue that generative AI companies have built products using material taken without permission or payment.

The symbolism is deliciously sharp. An “empty” book is being used to protest what authors see as the emptying of authorship itself — creative labour turned into raw material for machines.

That is why this story matters. It is not only about publishing. It is about whether the next phase of AI development treats human culture like a free buffet.

What’s Happening & Why This Matters

10,000 Authors Publish a Protest Book With Almost No Content

The protest book, Don’t Steal This Book, is not trying to entertain anyone. It is trying to embarrass policymakers and pressure them before the UK government releases an economic impact assessment and a progress update on its copyright reform consultation.


Copies are being distributed at the London Book Fair, one of the publishing industry’s most visible trade gatherings. That placement is strategic. It puts the protest right in front of the people who buy rights, sell rights, license content, and increasingly worry about whether their business model is being quietly fed into a model-training pipeline.

The organiser, Ed Newton-Rex, is a composer and campaigner for artists’ copyright. He does not soften the message. He says the AI industry is “built on stolen work … taken without permission or payment.” He adds that this is “not a victimless crime” because generative AI competes with the very people whose work trains it, “robbing them of their livelihoods.”

That language matters because it moves the debate beyond abstract legal theory and into labour politics. This is not only about rights. It is about income, bargaining power, and whether human creative work still holds market value when a machine can remix it at scale.

Big Literary Names Join the Protest

This is not a fringe campaign by a few angry midlist writers yelling into the algorithmic void.

The book's signatories include major authors such as Kazuo Ishiguro, Philippa Gregory, Richard Osman, Mick Herron, Marian Keyes, David Olusoga, and Malorie Blackman, all of whom contributed their names to its pages.

Blackman’s quote cuts to the bone:

“It is not in any way unreasonable to expect AI companies to pay for the use of authors’ books.”

That sentence is so obvious that it almost sounds ridiculous to say. Yet here we are. The core of the dispute is whether AI firms should have to license creative work the same way other commercial users often do, or whether training should sit inside a looser exception regime.

Once famous authors start using public protest tactics instead of quiet lobbying, you know the reservoir of trust is running low.

The protest is aimed at a specific policy fight in the UK.

The main government proposal under consultation would let AI firms use copyright-protected works without the owner's permission unless the owner has signalled a desire to opt out.


That sounds procedural. It is actually a massive philosophical shift.

An opt-out system starts from the assumption that AI companies may use a work unless the creator has clearly signalled otherwise, in a way that machines and companies will actually honour. Critics argue that this unfairly flips the burden. Instead of asking AI firms to obtain permission, it asks creators to defend themselves from being scraped.

For many writers and artists, that feels backwards to the point of absurdity.
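The UK proposal has not specified what a machine-readable opt-out would look like in practice. The closest existing convention is robots.txt, which some publishers already use to block named AI crawlers. Below is a minimal sketch of that approach using Python's standard-library parser; GPTBot, ClaudeBot, and Google-Extended are real, publicly documented AI crawler user agents, while the publisher site and paths are invented for illustration.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt a publisher might serve to opt out of AI
# training crawlers while leaving ordinary web crawlers unaffected.
robots_txt = """\
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

url = "https://publisher.example/books/novel.html"
print(rp.can_fetch("GPTBot", url))       # AI training crawler: disallowed
print(rp.can_fetch("Mozilla/5.0", url))  # ordinary user agent: allowed
```

Note the asymmetry this illustrates: the default answer is "yes" for any crawler the publisher has not thought to name, which is exactly the burden-shifting critics object to.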

Ministers are also said to be considering other options: leaving the current situation unchanged, requiring AI companies to seek licenses, or allowing unrestricted use of copyrighted work with no opt-out at all. The government has also refused to rule out a copyright waiver for "commercial research," which creators fear could be a wide-open side door for AI firms.

That last point is especially spicy. “Commercial research” sounds polite and academic. In practice, it could become a lovely legal trench coat for a highly commercial training activity.

The Writers’ Protest: Permission & Power

At one level, the authors are demanding the obvious: permission and payment.

At another level, they are fighting a power imbalance. AI firms can ingest vast amounts of data cheaply, then generate outputs that compete with human creators in publishing, journalism, music, design, and advertising. Newton-Rex makes that point directly when he says generative AI competes with the people whose work it is trained on.

That is why creators are so angry. This is not a normal licensing dispute between two industries. To many artists, it seems like one industry is extracting value from another while presenting the whole exercise as inevitable.

That tends to irritate people. Especially writers, who can turn irritation into memorable copy.

Publishers Are Building a Licensing Response

The protest is not only symbolic. The industry is also trying to build an alternative.


Publishers’ Licensing Services, a nonprofit body, is launching a collective licensing scheme at the London Book Fair. The idea is to create a framework that gives AI firms legal access to published works through organised licensing rather than vague scraping and post-hoc lawsuits.

This matters because it suggests the creative industries are not simply saying “no.” They are saying “pay.”

That is a more durable position. Absolute refusal is hard to scale in a data-hungry AI economy. Licensing offers a middle path where creators are compensated and AI companies can operate with legal clarity.

Of course, middle paths only work if both sides want one. Some AI firms may prefer the murkier route, where data gets collected first and defended later.

A major recent example shows the stakes: Anthropic reportedly agreed to pay $1.5 billion (£1.1 billion) to settle a class-action lawsuit brought by book authors who said the company used pirated copies of their works to train its flagship chatbot.

That number matters for two reasons.

First, it signals the scale of risk. Copyright disputes over training data are no longer tiny compliance headaches. They can become staggering financial liabilities.

Second, it reinforces the authors’ argument that this is not some speculative fear. Major AI firms are already facing legal blows over how they sourced training material.

The publishing protest, then, is not a tantrum. It is a warning flare.

The UK is not the only place where the ground is shifting.

The European Parliament has adopted recommendations calling for a “permanent” solution to protect copyright from AI use. Among the proposals is a European register at the EUIPO listing copyrighted works used to train AI models, plus artists who have opted out. The report also suggests companies disclose which websites they scraped for training data. Parliamentarians warn that failure to comply with these transparency requirements could amount to copyright infringement.

MEP Axel Voss frames the principle clearly:

“Generative AI must not operate outside of the rule of law. If copyrighted works are used to train AI systems, creators are entitled to transparency, legal certainty, and fair compensation.”


That sounds like Brussels doing what Brussels does best: turning vibes into registries, oversight, and paperwork. In this case, the paperwork may be exactly the point.

Creative groups such as GESAC welcomed the vote as a strong signal for creators’ rights and backed the idea of a licensing market to ensure creators get paid. Meanwhile, industry groups such as the Computer & Communications Industry Association warn that requiring prior authorisation could become a “compliance tax” that hurts European competitiveness.


There’s the whole fight in miniature. One side says “pay creators.” The other says “don’t slow innovation.” The uncomfortable truth is that both things matter. The political struggle is deciding which one gets first claim.

Is the “Empty Book” Protest Effective?

The protest works because it compresses a complicated legal argument into one visual punch.

A blank book filled only with authors’ names says: you can’t have the work without the workers. It also says: if you hollow out the value of writing, what is left is the shell.

That symbolism travels well. Policymakers understand it. Publishers understand it. Readers understand it. Even AI companies probably understand it, though they may pretend not to.

A good protest is memorable because it turns a policy fight into an image. This one does that neatly.

TF Summary: What’s Next

The authors’ “empty book” protest is not a gimmick. It is a concentrated response to a policy shift that many creators fear will normalise the use of copyrighted books for AI training without meaningful permission or compensation. Roughly 10,000 writers have signed onto Don’t Steal This Book, while publishers are also building a collective licensing scheme and EU lawmakers are pressing for greater transparency, creator opt-outs, and fair remuneration.

MY FORECAST: The next phase will move away from vague moral arguments and into hard commercial structures. Expect more lawsuits, more licensing markets, more demands for training-data disclosure, and more pressure on governments to reject opt-out frameworks that make creators do all the defensive work. The companies that adapt fastest will be the ones willing to pay for clean training pipelines instead of treating legal ambiguity as a business model.
