Artificial intelligence is changing how workplaces function, but not always for the better. A growing problem, dubbed “workslop,” is now disrupting teams and workflows across industries. This low-quality, AI-generated work looks legitimate on the surface but lacks the depth and accuracy needed to push projects forward. A recent study from Stanford’s Social Media Lab and BetterUp Labs reveals just how widespread the issue has become in American workplaces.
What’s Happening & Why This Matters

The term “workslop” was coined by researchers at the Stanford Social Media Lab and BetterUp Labs, and the findings were published in the Harvard Business Review. According to their survey, 40% of U.S. full-time employees reported receiving AI-generated content that qualifies as workslop in the past month. On average, 15.4% of the work shared among colleagues falls into this category.
The problem primarily occurs between peers, with 40% of workslop exchanged laterally between colleagues. Managers are not immune either: 18% of workslop flows upward to them from direct reports. Technology and professional services are the industries most affected, reflecting their heavy reliance on digital tools and automation.
Declining Trust and Team Dynamics
Workslop doesn’t just hurt productivity; it damages workplace relationships. Survey participants said that when colleagues send workslop, they come to see those colleagues as less creative, less capable, and less reliable.
- 42% of respondents considered these colleagues less trustworthy.
- 37% viewed them as less intelligent.
This perception problem makes AI misuse a serious HR and leadership challenge. A tool meant to boost efficiency has instead become a source of mistrust and wasted time.
AI Efficiency vs. Reality

AI was supposed to streamline workflows and free workers from tedious tasks. However, studies suggest the opposite may be happening. A joint paper from the University of Chicago and the University of Copenhagen found that while AI can save workers a few hours per month on specific tasks, it also creates additional work that offsets those gains.
For instance, a teacher might use AI to create lesson plans, only to spend the hours saved reviewing student assignments for AI-generated content. Similarly, a study of programmers found that AI coding tools actually slowed them down: developers spent so much time prompting the tools and verifying their output that complex projects took longer to complete.
Why It Matters Now
As companies race to integrate AI into their workflows, many lack clear policies for oversight and quality control. This absence of structure allows poor AI-generated work to spread unchecked, creating inefficiencies. The findings point to a need for AI literacy training and strict quality checks to keep workslop from undermining productivity.
The challenge isn’t just about fixing bad content—it’s about reshaping how teams collaborate and ensuring AI serves as a complement, not a crutch. Without accountability, AI could erode trust within teams and negatively impact workplace culture.
TF Summary: What’s Next
The rise of workslop highlights the need for companies to treat AI integration as a cultural and operational issue, not just a technological one. Businesses must develop clear usage guidelines, implement review processes, and train employees to assess AI-generated output critically. Leaders also need to address the interpersonal fallout by encouraging transparency and collaboration.
MY FORECAST: If organizations fail to address workslop, it could create a productivity paradox where AI-driven inefficiencies outweigh the promised benefits, leaving workplaces more fractured than ever.