Spotify Caught Hosting Fake AI Podcasts Selling Dangerous Drugs

Sophia Rodriguez

Spotify has been caught hosting fake podcasts promoting the sale of addictive prescription drugs like Xanax, Oxycodone, and Tramadol. Many of these podcasts contained no real audio or included only brief AI-generated voiceovers, with all the promotional content in the podcast bios. When users clicked the links, they were redirected to websites selling drugs without prescriptions.

What’s Happening & Why This Matters

These weren’t obscure entries either. Titles like “My Adderall Store” showed up in Spotify’s Top 50 search results for terms related to these drugs — often right next to legitimate podcasts about recovery and addiction.

Investigations by CNN and Business Insider identified more than 200 fake podcasts tied to over 25 different drugs. Some even featured the opioid Opana, which has been pulled from the U.S. market due to its high risk of addiction.

According to a 2023 report by the CDC, overdose deaths from counterfeit pills more than doubled between mid-2019 and late 2021. The crisis has worsened in Western states, where fake prescription pills bought online and through social media continue to pose a growing public health risk.

(credit: Spotify/CNN)

A Spotify spokesperson said, “We are constantly working to detect and remove violating content across our service.” But this isn’t Spotify’s first run-in with AI-generated scams.

In September 2024, a North Carolina man was arrested for using AI to create hundreds of thousands of fake songs, swindling over $10 million in royalties from streaming platforms like Apple Music and Spotify.

AI Abuse Meets Public Health Threat

The rise of AI-generated fake content isn’t just a copyright headache. It’s also a public safety hazard. When bots push counterfeit or unapproved prescription drugs, they can lead to real-world harm — especially when found on trusted platforms like Spotify.

Although Spotify has tried to respond by adding visibility features like podcast listener counts, these tools don’t entirely prevent malicious use. Even after the feature was tweaked to hide play counts on episodes with fewer than 50,000 plays, the core issue remains what is being hosted, not how many people see it.

Platforms like Spotify face pressure to moderate user-generated content better, especially when AI can create realistic-looking material at scale. Public trust and user safety are now at the center of the conversation.

TF Summary: What’s Next

Spotify needs stronger oversight of what is published on its platform, especially when it comes to AI-generated scams involving health risks. While detection tools exist, the company must improve enforcement and create more transparent safety measures.

This incident shows how AI misuse can spiral from spam into danger. Platforms that fail to act might not just lose users; they could also play a role in the next public health emergency.

