AI therapy chatbots from Meta and Character.AI have gained popularity as low-cost mental health aids, but they now face sharp criticism from digital rights groups, mental health advocates, and lawmakers. Complaints accuse the companies of enabling unlicensed medical practice, making deceptive claims, and exposing users to privacy risks. TF examines the controversies surrounding AI therapy bots and why regulators are stepping in.
What’s Happening & Why This Matters

A coalition of 18 organizations, led by the Consumer Federation of America, filed a formal complaint with the Federal Trade Commission (FTC) and all 50 US state attorneys general. The complaint alleges that Meta’s and Character.AI’s therapy chatbots engage in “unfair, deceptive, and illegal practices.”
The chatbots impersonate licensed mental health professionals even though neither company employs certified therapists. The complaint states, “Users creating chatbot characters do not need medical credentials and do not provide meaningful guidance on chatbot responses.”

Privacy concerns also loom large. Character.AI’s chatbot tells users their conversations are confidential, yet the company’s terms of service permit user data to be used for marketing and other purposes. The complaint calls this misleading and criticizes the platform’s addictive design, including prompt emails that push users to keep engaging.
Senator Cory Booker and three other Democratic senators have urged Meta to stop creating the false impression that AI chatbots can serve as licensed clinical therapists.
Character.AI also faces a pending lawsuit from a Florida mother who alleges that her 14-year-old son’s use of the chatbot contributed to his suicide in 2024.
Despite these issues, therapy chatbots continue to attract users because they are cheaper and more accessible than traditional mental health care.
TF Summary: What’s Next
AI therapy chatbots from Meta and Character.AI face growing scrutiny over unlicensed practice, privacy risks, and misleading claims. Regulators and lawmakers are pressing for investigations and accountability.
As AI plays an increasingly important role in mental health support, developers and policymakers must ensure that these tools protect users and uphold medical ethics.