Europe’s digital rulebook faces a fresh stress test as election fears and mental health misinformation collide.
Europe’s fight over digital rules just picked up speed. French President Emmanuel Macron wants the European Commission to enforce the Digital Services Act more aggressively before a packed election calendar in 2026 and 2027. His warning came as France dealt with fresh concerns about foreign interference and online manipulation during local campaigns.
At the same time, a new study added more fuel to the debate. Researchers found that more than half of the reviewed social media posts on mental health and neurodivergence carried misinformation, with TikTok showing the highest levels in the sample. Taken together, the two stories point to one blunt question: can Europe enforce digital rules hard enough to protect elections, public debate, and public health without turning the internet into a bureaucratic maze?
What’s Happening & Why This Matters
Stricter Enforcement Before Key Elections

Macron’s message to Brussels was direct. In a 16 March letter seen by Euronews, he urged Ursula von der Leyen to step up enforcement of EU digital rules ahead of major votes across Europe. France faces municipal election pressure in 2026, while higher-stakes national contests, including the presidential race, are scheduled for 2027. Macron argued that hostile actors are trying to distort civic debate and election integrity.
The issue is not theoretical. Euronews reported that French security services identified several interference cases during the current municipal campaign, including several linked to Russia. Macron argued that the EU and member states need stronger tools to defend democratic processes from information manipulation. He wrote that Europe must protect civic discourse, election fairness, and the integrity of the vote.
That argument carries weight because 11 EU countries will head to the polls in 2026. France, Italy, and Poland follow with major elections in 2027. When several countries vote in close succession, a weak response in one market can quickly spill across borders. Digital platforms do not stop at customs. False claims travel faster than campaign buses.
The Digital Services Act at the Quarrel’s Heart
Macron did not call for a brand-new law. He called for sharper enforcement of the law Europe already has. The Digital Services Act requires very large online platforms to assess and reduce systemic risks. Those risks include threats to civic discourse, election integrity, and public safety.
According to Euronews, French officials want the Commission to update guidance first drafted before the 2024 European elections. They want platforms such as Meta, X, and TikTok to do more under the DSA. Macron’s camp called for tighter action against algorithm-driven virality, clearer labels for AI-generated or AI-modified content, stronger removal of fake accounts, and tougher transparency around political advertising.

The Commission already has teeth under the DSA. It can issue injunctions, apply safeguard measures, and impose fines of up to 6% of global annual revenue. Still, the law matters only when enforcement is prompt and consistent. That is where the current debate sharpens. Europe no longer argues about whether rules exist. Europe debates whether Brussels will use them quickly enough amid mounting pressure.
Romania Still Haunts the Debate Over Platform Power
One reason the issue is urgent is Romania. Euronews noted concerns over TikTok’s role in the 2024 first-round win of ultranationalist, pro-Russian candidate Călin Georgescu. The vote was annulled after declassified intelligence documents pointed to coordinated accounts and algorithmic amplification.

That case still hangs over every new EU conversation about platform accountability. Critics see it as proof that manipulation can change political outcomes before regulators even finish reading the morning brief. Supporters of tougher rules say the lesson is simple: wait too long, and the damage sticks.
Macron’s letter reflects that fear. He wants a “complementary strike force” that lets Brussels and EU capitals act quickly against interference operations. That language matters. It suggests a more active enforcement posture, not another long consultation cycle filled with papers, panels, and polite concern. Europe has done enough talking. The next phase needs execution.
The Mental Health Component
The digital rules debate is no longer only about elections. A second Euronews report showed why online governance is turning into a health issue, too. A review of 27 studies, covering 5,000 social media posts, found that up to 56% of posts about mental health and neurodivergence were inaccurate or unsubstantiated.

The highest misinformation rates appeared in content about autism and ADHD. Euronews reported that 52% of ADHD-related TikTok videos and 41% of autism-related TikTok videos contained misinformation. By comparison, YouTube averaged 22%, while Facebook averaged just under 15%. Those figures do not tell the whole story, yet they do show where the heaviest concentration appeared.
Eleanor Chatburn, co-author of the study at the University of East Anglia, put the problem plainly. She said misinformation rates were as high as 56% and warned that engaging videos can spread widely even when the information is not accurate. That warning matters because health content carries a different kind of risk. A bad political clip can distort a vote. A bad mental health clip can distort how someone understands their own mind.
Young Users in the Middle
The study matters because younger users often turn to social platforms for early answers about symptoms, identity, and diagnosis. Euronews cited World Health Organization data showing that one in seven people aged 10 to 19 lives with a mental disorder. Depression, anxiety, and behavioural disorders rank among the main causes of illness and disability in that group.
That context changes the stakes. Social platforms are not just entertainment feeds. They are search engines, peer groups, and self-diagnosis funnels for millions of teenagers. When misleading clips stack up, confusion can deepen fast. Chatburn warned that false ideas can feed stigma and make people less likely to seek real support when they need it most.
TikTok rejected the study’s conclusions. A spokesperson told Euronews that the study was flawed and relied on outdated research across multiple platforms. The spokesperson added that TikTok removes harmful health misinformation and gives users access to reliable information from the WHO. YouTube offered a similar defence, saying it highlights credible health videos and uses protections for teens. Those responses matter, yet they do not end the policy question. Europe still has to decide whether voluntary platform steps are enough.
Establishing Enforcement Credibility
The article title says “vote,” but the bigger vote is political and institutional. Europe is deciding what kind of regulator it wants to be. One path leads to heavy rhetoric and uneven action. The other leads to frequent interventions, real penalties, and faster cross-border coordination.
This matters for Apple, Google, Meta, TikTok, X, and every large platform operating in the bloc. The DSA already gave Brussels legal muscle. Macron’s intervention asks Brussels to use that muscle more decisively. The mental health misinformation study adds pressure from a different angle. Election security alone is serious. Public health misinformation makes the same rulebook feel even more urgent.
The tension is obvious. Too little enforcement invites manipulation. Too much clumsy enforcement risks political backlash, legal fights, and claims of censorship. Europe needs something harder than slogans and softer than panic. It needs precise action, clear standards, and credible follow-through.
The Next Phase of Internet Governance
This debate reaches far beyond France. If Brussels acts decisively, the EU could strengthen its role as the world’s most assertive digital regulator. That would affect election security, AI content labels, platform transparency, health misinformation controls, and future debates over recommender systems.
If Brussels moves too slowly, the DSA risks looking strong on paper and weak in practice. That gap would help critics who argue that Europe writes grand digital laws but struggles to enforce them quickly. In politics, perception matters. In regulation, timing matters even more.
For platforms, the warning is clear. Europe is no longer treating moderation, transparency, and systemic risk as side issues. The issues are near the centre of democratic resilience. For users, the fight may appear abstract, but the effects are close to home. Search results, political feeds, AI labels, health advice clips, and election content will all pass through this tightening rulebook.
TF Summary: What’s Next
Europe’s digital rules face a decisive stretch. Macron wants tougher DSA enforcement before major elections. New research on mental health misinformation gives regulators a second pressure point. One story deals with ballots. The other deals with young people, diagnosis culture, and viral advice. Together, they make a stronger case for faster, firmer action.
MY FORECAST: Brussels will toughen enforcement before the 2027 election wave and pair that effort with more scrutiny of health misinformation and AI-labelled content. Large platforms will face more requests, more audits, and more legal risk. Europe’s next real digital vote will not happen in a polling booth. It will happen through enforcement choices that decide whether the DSA changes online habits or stays a handsome PDF with attitude.

