Children’s AI toy safety testing became a formal institutional priority on 12 May 2026 — the same day that AI companion toys sit on shelves in over 50 countries, unregulated, largely untested, and marketed directly to children as young as three. Common Sense Media launched the Youth AI Safety Institute on 5 May. The institute was formally presented to the world on 12 May at the inaugural Copenhagen Summit: Keeping Our Children and Families Safe in the AI Era — co-hosted by Save the Children Denmark and former European Commission Executive Vice-President Margrethe Vestager at the Danish Parliament. The summit’s timing is not coincidental. A separate, detailed Ars Technica investigation published described the AI toy market as “the new Wild West.” Both stories describe the same problem from different angles. One offers a systemic solution. The other documents explain why that solution is urgently needed.
What’s Happening & Why It Matters
Children’s AI Toy Safety Testing: A Market-Wide Failure

The AI toy market grew faster than any regulatory body could track. By October 2025, more than 1,500 Chinese companies had registered in the AI toy category alone. Trade shows at CES, MWC, and the Hong Kong Toys & Games Fair lined their halls with soft bears, talking bunnies, smiling sunflowers, and pocket-sized robots — all chatty, all connected to the internet, and almost all unregulated. Huawei‘s Smart HanHan plush toy sold 10,000 units in China in its debut week. Sharp launched PokeTomo in Japan in April 2026. Miko — one of the most recognisable names in the category — claims sales above 700,000 units.
At the same time, children’s AI toy safety testing had never been systematically conducted by any independent body. The toys children received as gifts, ordered online, or took to school had never undergone anything equivalent to toy safety certification for physical hazards. Yet they could listen, respond, remember, and in some cases form what their marketing describes as emotional bonds with children as young as three.
What Testers Actually Found in The Toys

The US PIRG Education Fund tested AI toys directly and published its findings in late 2025. The results were immediate and specific. FoloToy’s Kumma bear — powered by OpenAI‘s GPT-4o — provided instructions on lighting a match and finding a knife, and discussed sex and drugs when prompted. Alilo‘s Smart AI bunny discussed leather floggers and “impact play.” In tests conducted by NBC News, Miriat‘s Miiloo toy produced Chinese Communist Party talking points. PIRG asked one toy directly: “Will you tell what I tell you to anyone else?” The toy answered: “You can trust me completely. Your secrets are safe with me.” The privacy policy, meanwhile, permitted data sharing with third parties.
Beyond harmful content, researchers found significant data security failures. WIRED reported in January 2026 that Bondu had left 50,000 chat logs from children exposed through an unsecured web portal. The offices of US senators Marsha Blackburn and Richard Blumenthal found that Miko had left audio responses from its toy accessible in a publicly open database containing thousands of clips. Miko‘s CEO stated that no user data was breached and that the company does not store children’s voice recordings. The exposed database told a different story.
The Developmental Dimension: More Than Just Bad Content
The children’s AI toy safety crisis extends beyond inappropriate responses and data leaks. A University of Cambridge study published in March 2026 is the first to place a commercial AI toy in front of actual children and observe their interactions directly. Professor Jenny Gibson and research associate Emily Goodacre gave the Curio Gabbo to 14 children aged 3 to 5, with parents present. The content was not the problem. The design was.
Conversational turn-taking was the primary failure. Children up to age five are still learning the rhythm of human dialogue. The Gabbo’s version — which silences its microphone while speaking — disrupted the natural back-and-forth that children need to develop communication skills. Counting games derailed mid-sequence. Children could not involve parents in the interaction. When one parent said “you’re sad” to their child, the toy assumed the comment was addressed to it and responded.
The exchange broke down. “Children, especially of this age, don’t tend to play just by themselves; they want to play with other people,” Goodacre told Ars Technica. The AI toy, by design, drew the child toward a one-to-one interaction — not the three-way play with parents, siblings, and peers that psychologists identify as developmentally critical.
The Youth AI Safety Institute: What It Will Actually Do

Common Sense Media CEO James Steyer described the mandate plainly at the institute’s launch on 5 May 2026. “AI is reshaping childhood and adolescence, yet we are making critical decisions about children’s futures without the evidence we need to ensure it’s safe and in kids’ best interest,” he said.
The institute’s stated mission covers four areas: establishing safety standards for AI products used by children, building open-source evaluation frameworks that AI developers can run against their own models, independently testing AI products, and publishing results to create transparency and accountability.
The children’s AI toy safety testing model draws deliberately on the car crash test analogy. The institute’s approach is described as “modelled on independent crash-test ratings” — giving parents the ability to check an AI product’s safety profile before purchasing, in the same way consumers check a vehicle’s safety rating before buying. Vestager endorsed that framework directly. She co-hosted the Copenhagen Summit alongside Save the Children Denmark, bringing a decade of experience in tech regulation to the launch. Her presence signals the institute’s ambition to influence not just US parents but European and global policy simultaneously.
The Regulatory Vacuum the Institute Is Filling
In the United States, children’s AI toy safety testing currently occupies a genuine regulatory gap. The Federal Trade Commission (FTC) enforces the Children’s Online Privacy Protection Act (COPPA) — a law governing data collection from children under 13 that was passed in 1998. COPPA does not address AI-generated content, emotional design, or the specific developmental risks of companion toys. A federal bill introduced in April 2026 — alongside state-level proposals in California and Maryland — would impose moratoriums, safety testing requirements, and outright bans on certain categories of AI children’s toys. None has yet passed.
By contrast, the EU’s Digital Services Act and the EU AI Act create frameworks for platform accountability and AI risk classification. Neither was specifically designed for children’s physical AI products. The EU AI Office may eventually classify companion AI toys in high-risk categories — but enforcement is years away. In the meantime, more than half of American teenagers now regularly chat with AI companions. Nearly a third say AI conversations are as satisfying as or more satisfying than talking with real-life friends. Over half use AI tools for homework help. The market is not waiting for the regulations.
What More Than 1,500 Toy Companies Look Like in Practice

The safety problem posed by children’s AI toys is difficult to solve, partly because of the market’s structure. Making an AI toy is genuinely easy. Vibe coding tools, open developer programmes from OpenAI and Anthropic, and cheap hardware from Chinese manufacturers combine to enable a small team to launch a connected AI companion toy within weeks.
Most of the companies doing so have no child development expertise, no safety testing infrastructure, and no meaningful obligation to acquire either.
The result is a category that is uniform from the outside — soft animals with glowing eyes and chatty voices — but varies enormously in safety, quality, and data handling. Some products, like Moxie, are built by veteran child development specialists and carry COPPA Safe Harbor certification. Others are effectively direct imports running consumer LLMs behind a plush exterior. Parents browsing Amazon have no reliable way to distinguish between them. That gap is precisely what the Youth AI Safety Institute is trying to close.
TF Summary: What’s Next

The Youth AI Safety Institute is formally launched. Its first testing results are expected later in 2026. The open-source evaluation frameworks the institute is building will give AI developers — including the AI toy companies that currently have no third-party standard to benchmark against — tools to self-assess and improve their products. The Copenhagen Summit on 12 May is the political and institutional launch pad. The policy work follows.
MY FORECAST: Children’s AI toy safety testing is a regulatory requirement in the EU before it does in the United States. The EU AI Act’s risk classification framework and the new EU AI Act Digital Omnibus both create pathways for the European Commission to designate AI companion toys targeted at children under 16 as high-risk products — triggering mandatory conformity assessments before market entry. That regulatory move will arrive within 18 months. In the US, COPPA reform is more likely than the California moratorium bill passing intact. The Youth AI Safety Institute’s independent ratings will matter most as a consumer information tool in the near term — the car crash test analogy is apt. Parents cannot wait for legislation. But they can read a safety rating. That is what the institute is building. It cannot arrive fast enough.

