When “inspired by” turns into “you used my name without asking,” the lawyers usually wake up fast.
Grammarly has spent years selling itself as the polite helper in the corner. It fixes grammar, smooths sentences, and tries not to embarrass you in front of your boss. Then it launched a feature that borrowed the names and reputations of real writers without consent, and the whole thing blew up in its face.
The company suspended its controversial Expert Review feature after backlash from journalists, authors, and academics whose identities were used as AI personas. The tool offered writing feedback “inspired by” well-known figures, including Stephen King, Carl Sagan, and investigative journalist Julia Angwin. Critics said the feature crossed a line from inspiration into impersonation, and a class-action lawsuit followed.
AI keeps wandering into the same trap. Companies tell themselves they are surfacing expertise, democratising insight, or creating a fun shortcut. Then, creators look closer and say, “You used my name, my credibility, and my life’s work as a product feature.” That is not a tiny product bug. That is a trust problem with legal teeth.
What’s Happening & Why This Matters
Pulling the Feature After Public Backlash

Grammarly turned off Expert Review, a feature that provided writing suggestions modelled on the styles or viewpoints of real journalists, authors, and academics. According to the reporting, the company did not get consent from the people whose names and identities appeared in the tool. Those named included figures such as Neil deGrasse Tyson and Stephen King, along with multiple journalists from major tech and news outlets.
The backlash escalated quickly. Writers discovered that their names were being used as AI personas inside a paid product, and they were not amused. Grammarly initially tried a softer approach by offering an opt-out option, but that move backfired. Critics argued that consent should never be inverted into a “tell us if you don’t want to be used” model. Gaming journalist Wes Fenlon called the opt-out option “laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility.”
That criticism hits the centre of the issue. The company appears to have treated human expertise as raw branding material. It borrowed authority first, then worried about permission later. That is exactly the sort of move that keeps getting AI firms into public trouble.
Julia Angwin’s Lawsuit: From Product Misstep to Legal Fight
The loudest consequence so far is legal. Julia Angwin, an investigative journalist and contributing opinion writer for The New York Times, is the lead plaintiff in a class-action lawsuit filed in the Southern District of New York against Grammarly and its parent company, Superhuman. The filing alleges that the companies misappropriated the identities of “hundreds” of writers to drive profits for a paid subscription product. It argues that using someone’s name for commercial purposes without consent violates laws in New York and California.

The suit asks the court to ban Grammarly from using people’s names and identities without consent and seeks damages. The filing says damages exceed $5 million (€4.6 million). However, Angwin’s lawyer, Peter Romer-Friedman, said that figure is only the minimum jurisdictional threshold, and the true total would depend on how much money the company made from the feature.
Angwin’s comments make the injury feel more human and less abstract. She said she was “stunned” to see her professional identity marketed as a product. She also said she had never thought of editing as something that could be stolen, the way a deepfake image or video might be.
That quote matters because it shows how AI is redrawing the boundaries of appropriation. People already understood image deepfakes. Voice cloning is becoming familiar too. But “editorial identity theft” is newer terrain. Grammarly managed to stumble directly into it.
The Real Problem: Credibility Theft
One of the sharpest complaints in the coverage is not merely that Grammarly used names without asking. It is that the company used names to lend the illusion of expert authority to weak AI output.
Angwin described the generated edits as poor and even called the imitation a “slopperganger,” a mocking term associated with the idea of AI slop. She said the edits attributed to her often made sentences worse and more complex, and she found the idea of her name being attached to terrible advice “really appalling.”
Here is where the story gets extra nasty. If an AI tool performs badly under its own name, users can judge it accordingly. If the same weak output is wrapped in the reputation of a respected journalist, scientist, or author, the damage spreads in two directions. Users may get worse advice. The expert’s credibility is dragged through the mud.
That is why this episode is worse than a gimmicky product experiment gone wrong. Grammarly was not merely trying to simulate “style.” It was monetising borrowed trust.
And trust, inconveniently, still belongs to the humans who earned it.
Grammarly Apologised
Grammarly’s parent company, Superhuman, has apologised. A spokesperson said the company built the agent to help users tap into the insights of thought leaders and experts and to give experts new ways to share knowledge and reach audiences. Then came the admission: “Based on the feedback we’ve received, we clearly missed the mark. We are sorry and will do things differently going forward.”
Chief executive Shishir Mehrotra acknowledged that the tool had “misrepresented” the voices of experts. He wrote that the company received valid criticism. Recognising that it fell short, Grammarly announced that the feature would be taken down for redesign. He added that the tool had relied on publicly available information from third-party large language models to surface suggestions inspired by influential published work.

That apology is notable, but it also exposes the original product logic. Grammarly seems to have believed that “publicly available” plus “inspired by” was enough cover to turn real people into branded AI personas. That logic is exactly what creators, journalists, and authors are challenging across the industry.
Publicly available does not mean freely exploitable. Influence does not mean consent. Inspiration does not mean permission to borrow a living person’s name and sell access to it.
The Wider War Over Copyright and Consent
The Grammarly fight is not happening in a vacuum. It fits into a creative backlash against AI systems that train on, imitate, or repackage human work without clear permission. A parallel example is the growing authors’ revolt in the UK over AI firms using books for model training without consent, including the “empty” protest book, Don’t Steal This Book, backed by around 10,000 writers.
That parallel matters because it shows the same underlying conflict playing out across different creative sectors. Writers, journalists, and other experts are not objecting only to style mimicry. They are objecting to a deeper business model in which AI firms treat published human work and established human reputations as cheap input material.
The common thread is not technology. It is power.
A company with scale can ingest public material, remix it, attach a familiar name, and launch a premium feature before the affected people even know it exists. By the time the public notices, the company can offer apologies, redesigns, and carefully worded blog posts about learning and listening.
Creators do not want to be cast as unpaid beta testers in that cycle.
TF Summary: What’s Next
Grammarly has suspended its Expert Review feature after strong backlash and a class-action lawsuit over its use of real writers’ names and identities without consent. The company apologised, admitted it had fallen short, and said it would rethink its approach. But the more serious damage is already visible: the episode has turned a writing-assistance tool into another case study in how AI companies borrow human authority too casually and then act surprised when the humans object.
MY FORECAST: Grammarly will eventually return with a much more controlled expert feature, likely based on opt-in partnerships, licensing, and direct compensation instead of implied consent and public-data hand-waving. Expect AI product teams everywhere to start treating identity rights and credibility rights as serious legal exposure, not fuzzy ethics chatter. The companies that survive this phase will be the ones that stop trying to reverse-engineer trust from other people’s names and start paying for the privilege properly.

