Another busy AI week: one lawsuit, two fresh product plays, and one nasty hardware warning.
Artificial intelligence keeps charging ahead, yet the latest headlines show a market under stress from every angle at once. Penguin Random House is suing OpenAI in Germany over an allegedly near-copy version of a beloved children’s book. Security researchers have shown new Rowhammer attacks that can hand an attacker full control of machines running certain Nvidia GPUs. Google has upgraded Vids with Veo, Lyria, and directable AI avatars, while Gemma 4 has arrived with a friendlier Apache 2.0 license and a clear pitch toward local and agentic development.
That pile of news says more than any single announcement. The AI race is no longer only about who has the smartest chatbot. Copyright pressure is rising. Hardware risk is getting uglier. Creative tools are becoming more synthetic. Open-weight competition is heating up. The whole sector is more useful, more fragile, and more litigious at the same time.
What’s Happening & Why This Matters
Penguin Random House Is Taking OpenAI to Court in Germany
One of the sharper stories of the week comes from Germany, where Penguin Random House has filed suit against OpenAI over what the publisher says was a near-copy of Ingo Siegner’s hugely popular Coconut the Little Dragon books. The case says ChatGPT produced a Mars-themed children’s book with text, cover art, and supporting publishing material that were “virtually indistinguishable” from the original series after a prompt asking for a Coconut-style story on Mars.

That claim has teeth because the case does not revolve around vague training-data theory alone. The publisher is pointing to output that allegedly looked too close to the source material for comfort. Penguin Random House says the result is clear evidence of unlawful “memorisation,” meaning the large language model stored enough of Siegner’s work to reproduce substantial creative elements later.
Carina Mathern, the publisher for children’s and young-adult books at Penguin Random House Verlagsgruppe, put the argument plainly: “Human creativity is and remains at the heart of our work as publishers.” She added that the group is open to AI opportunities, but protecting intellectual property comes first. OpenAI responded that the company is reviewing the allegations and respects creators and content owners.
The bigger problem for OpenAI is in the pattern. Germany already handed down one copyright blow last year in the music space. Another high-profile case in publishing adds more pressure and makes “memorisation” sound less like an academic edge case and more like a courtroom word with teeth.
GPU Rowhammer Attacks Have Moved From Ugly to Nasty

The security side of the week carries a darker tone. New reporting around GDDRHammer and GeForce Hammer shows attackers can use fresh Rowhammer methods against shared Nvidia GPUs to gain root control of host machines. That is a nasty jump because the attack chain no longer stops at corrupting GPU memory. According to the reporting, the new methods can cross the line from GPU abuse into full machine takeover.
Rowhammer has haunted memory security for years. Rapid access to certain DRAM rows can create electrical interference that flips bits in nearby rows. Earlier attacks focused on CPU-side memory. Researchers have already shown GPUHammer can induce bit flips in GDDR6 memory on an Nvidia A6000, degrading AI model accuracy from 80% to 0.1% with a single flipped bit.
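The damage a single flipped bit can do is easy to underestimate. Here is a minimal Python sketch (purely illustrative, not the actual attack, which works at the DRAM level) showing how flipping one bit in the IEEE-754 encoding of a float32 model weight can turn a small value into an astronomically large one:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit  # simulate a Rowhammer-style single-bit flip
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

weight = 0.125
corrupted = flip_bit(weight, 30)  # bit 30 is the top exponent bit
print(weight, corrupted)  # 0.125 becomes roughly 4.25e37
```

A weight blown up to ~4e37 saturates every activation it touches, which is why one well-placed flip is enough to collapse a model’s accuracy rather than merely nudge it.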

The latest reporting raises the stakes further by showing how similar tactics can help compromise the host machine in shared cloud-style environments where expensive GPUs are time-sliced among many users. Shared GPUs are normal in AI infrastructure. A single hostile tenant gaining control over a host would turn a hardware quirk into a cloud nightmare.
Nvidia has already urged users to enable system-level ECC where available, especially on affected GDDR6 systems, and researchers say newer protections reduce some risk. Yet the message is grim. AI infrastructure leans heavily on costly shared accelerators. Shared accelerators make attractive targets. A market drunk on model benchmarks may have spent too little time staring at the plumbing underneath.
Google Vids Has Turned Into a Full AI Video Toybox

While lawyers and security researchers were busy, Google spent the week building a shinier creative machine. Google Vids has gained support for Veo 3.1, Lyria music models, directable avatars, a new recording extension, and easier YouTube publishing. The package gives Vids a much stronger creation stack for business clips, greetings, internal explainers, and lightweight marketing work.
The Veo side is the flashy part. Users can generate 8-second, 720p video clips right inside Vids. Free users get only 10 video generations per month. AI Pro users get 50, while AI Ultra users can reach 1,000. Google’s latest Lyria music tools join the stack too, letting users generate music with vibe-based prompts instead of manually writing lyrics.

The directable-avatar feature may carry more practical weight. Google says users can place realistic or cartoony avatars into scenes, customize their appearance, and tell them what to say or do through prompts. The avatars can even interact with objects in generated scenes. Google clearly wants Vids to cover a range of small-scale video needs without forcing users into a traditional production workflow.
Google is trying to turn corporate video into a prompt box. For busy teams, that sounds useful. For anyone who already fears a flood of bland AI-generated office video, the future just got a little louder and a little more synthetic.
Gemma 4 Gives Google a Better Open Story

Google had another AI announcement worth more respect than the usual product glitter. Gemma 4 has arrived as the first major update to Google’s open-weight model family in over a year, and Google has dropped the old custom license in favor of Apache 2.0. That license switch may be the most important part of the whole release. Developers hated the older licensing terms because the custom rules felt restrictive and too easy for Google to reinterpret later. Apache 2.0 is familiar, permissive, and much less creepy.
The model family includes four sizes. The larger options include a 26B Mixture of Experts model and a 31B Dense model aimed at stronger local use on serious hardware, including a single 80GB Nvidia H100 in unquantized form. Smaller Effective 2B and Effective 4B variants target mobile and edge devices, with Google saying the Pixel team worked closely with Qualcomm and MediaTek for optimization.
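The claim that a 31B dense model runs unquantized on a single 80GB H100 checks out on the back of an envelope, assuming 16-bit (bf16) weights. This sketch counts weight memory only and ignores activations and KV cache, so real headroom is tighter:

```python
def model_memory_gb(params_billion: float, bytes_per_param: int) -> float:
    """Back-of-envelope weight memory for a dense model, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

print(model_memory_gb(31, 2))  # bf16: 62.0 GB, fits under 80 GB
print(model_memory_gb(31, 1))  # int8: 31.0 GB
print(model_memory_gb(31, 4))  # fp32: 124.0 GB, would not fit
```

The same arithmetic explains the Effective 2B and 4B variants: at one or two bytes per parameter they land in the single-digit-gigabyte range that mobile and edge hardware can actually hold.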

Google says Gemma 4 improves reasoning, math, instruction following, tool use, structured JSON output, code generation, visual understanding, and speech handling. Context windows reach 128k on edge models and 256k on the larger ones. Google says Gemma 31B debuts near the top of the open-model ranking tables while staying far smaller than some larger open rivals.
The deeper significance is strategic. Google wants a stronger position in open-weight AI without forcing developers through the tighter commercial lane around Gemini. Gemma 4 gives developers more freedom, more local options, and better licensing. That combination could help Google in exactly the part of the AI market where goodwill still counts.
One Week, Four Stories, A Brutal Pattern
Put all four items in one room, and a rougher picture appears. AI is maturing, but the maturing process is messy. Copyright claims are moving beyond theory and into more concrete allegations around outputs. Hardware security is showing fresh weak spots at the exact moment shared GPUs are central to AI economics. Google is giving offices easier ways to mass-produce synthetic video. Open-weight model competition is friendlier and more permissive at the same time.
That mix says the AI race has entered a nastier phase. The first wave was about spectacle. The second wave is about systems. Who owns the outputs? Who secures the accelerators? Who ships the better creative workflow? And can the developer ecosystem win with fewer strings attached?

A year ago, much of the industry still talked as if raw model intelligence would decide the winners. That idea looks thinner by the week. Legal durability, hardware trust, licensing sanity, and product packaging all carry a heavier weight.
That is probably healthy. A market that only chases one benchmark usually ends up learning discipline the expensive way.
TF Summary: What’s Next
The snapshot of weekly AI news delivered a sharp reminder that the sector’s future will not be written by labs alone. Penguin Random House has taken OpenAI to court in Germany over alleged copyright mimicry involving Coconut the Little Dragon. Fresh Rowhammer-style attacks have raised uglier concerns around shared Nvidia GPUs and host compromise. Google Vids has moved deeper into AI-generated business video with Veo, Lyria, and prompt-driven avatars. Gemma 4 is here, with stronger local-model ambitions and a far more developer-friendly Apache 2.0 license.
MY FORECAST: More lawsuits will target outputs, not only training. GPU-security research will scare cloud providers into harder safeguards. Office software will flood workplaces with synthetic media dressed up as efficiency. More developers will reward open models that come with lighter legal chains. The next AI winners will need more than clever models. The next winners will need sturdier legal footing, safer hardware, and fewer ways to annoy the people building with the stack.
— Text-to-Speech (TTS) provided by gspeech | TechFyle

