Meta is watching how its own staff use computers — and feeding every click, keystroke, and dropdown selection into an AI training pipeline. Welcome to the future of work.
Meta has found a new source of AI training data, and it is already sitting at its desks and clocking in every morning. On 21 April 2026, Reuters reported that Meta had disclosed an internal programme to US-based employees that captures mouse movements, clicks, keystrokes, and periodic screenshots from their work computers. The data feeds directly into Meta's AI training pipeline. The goal, according to internal memos seen by Reuters, is to build AI agents capable of performing white-collar computer tasks autonomously: the kind of work most office employees do every single day.
The programme is called the Model Capability Initiative (MCI). It sits within an internal strategy that Meta CTO Andrew Bosworth announced one day earlier under the umbrella of the company's AI for Work initiative, since rebranded the Agent Transformation Accelerator (ATA). Meta, in other words, is not just racing to build AI agents for customers. It is training those agents using its own workforce. Every scroll, every shortcut, every corrected mistake is now data.
What’s Happening & Why It Matters
What the MCI Tool Actually Does

The Model Capability Initiative is a piece of tracking software installed on the work computers of US-based Meta employees. According to an internal memo posted in a dedicated channel for the Meta SuperIntelligence Labs team, the tool runs on a curated list of work-related apps and websites. It also takes occasional screenshots of employees' screens to capture contextual information around the actions it records.
The specific behaviours MCI targets are deliberately mundane. Meta wants data on how people navigate dropdown menus, use keyboard shortcuts, click between applications, fill out forms, correct errors, and complete multi-step workflows. These are precisely the tasks that current AI models struggle to replicate reliably, and they represent a critical gap between AI as a text-generation tool and AI as a functioning digital worker. The memo framed the request in deliberately inclusive terms. "This is where all Meta employees can help our models get better simply by doing their daily work," it read.
Why This Data Is So Valuable

Understanding the MCI's purpose requires understanding where AI agents currently fall short. Large language models can write code, draft documents, summarise meetings, and generate creative content with impressive fluency, yet they frequently fail at basic computer-use tasks. Navigating a software interface, executing a sequence of clicks across multiple applications, or adapting in real time to unexpected screen states: these are far harder to teach from static text data alone, and on this front the industry has hit a wall.
Training data for computer-use behaviour is also genuinely scarce. Unlike text, images, or code, human-computer interaction data is not available in large, publicly accessible repositories; it lives on individual screens, inside closed corporate systems. Meta has therefore taken the direct route: capturing it from its own employees as they work. The company already holds advantages in this space. Meta acquired a 49% stake in data-labelling firm Scale AI last year for more than $14 billion (€12.9 billion), and Scale's former CEO, Alexandr Wang, leads Meta SuperIntelligence Labs. The data engine behind the MCI is thus connected to one of the most sophisticated AI training operations in the world.
Bosworth’s Vision: Agents That Do the Work

Meta CTO Andrew Bosworth was explicit about the end goal in his internal memo. "The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve," he wrote. He described the intended operational model as "a closed loop" in which agents could "automatically see where we felt the need to intervene so they can be better next time." Bosworth is not describing AI as a productivity assistant. He is describing a system in which AI is the primary worker and humans provide course corrections.
Meta spokesperson Andy Stone confirmed the MCI's purpose to multiple news outlets. "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus," Stone said. Stone also confirmed that the MCI data would not be used for employee performance assessments. "Safeguards are in place to protect sensitive content," he added, without elaborating on what categories of data those safeguards would exclude.
The Privacy Question Nobody Has Fully Answered
The reassurances Meta provides are notable for their gaps. The company did not specify what "sensitive content" means in this context. It did not clarify whether personal communications, credentials, or private browsing would be excluded from the screenshots. It also did not state whether employees can opt out. Critics are asking legitimate questions about consent and oversight.
This programme runs on work computers, so employees may feel limited in their ability to push back. The data being collected is also granular and behavioural: not just the outcomes of work, but the process of how individuals think, navigate, and correct mistakes. That level of behavioural data is qualitatively different from capturing meeting transcripts or document outputs. Questions also arise about EU employees. Meta specified that the rollout applies to US-based staff; under the General Data Protection Regulation (GDPR), deploying equivalent surveillance across European offices would require significantly greater transparency and documentation of employee consent.
The Race for Human Work Data

Meta is not alone in pursuing this strategy; the race to capture training data that reflects real human work patterns is accelerating across the industry. In January 2026, OpenAI was reported to have asked third-party contractors, via the training data firm Handshake AI, to upload samples of actual work products from previous jobs. Those samples included real PowerPoint presentations, spreadsheets, and documents, with instructions to scrub confidential material before submission.
The tech industry has arrived at a fundamental data scarcity problem. The public internet provided the first generation of AI training data, but that supply is increasingly exhausted or litigated. The next frontier is real workplace behaviour: the messy, sequential, contextual process of how people actually do computer-based knowledge work. Meta is uniquely positioned to access it at scale. It employs approximately 70,000 people globally, many of them in highly technical roles that generate complex and varied computer-use patterns.
Meta’s Internal AI Transformation

The MCI is not a standalone initiative; it is one component of a rapidly expanding internal AI programme. Meta has been exhorting staff to integrate AI agents into daily work tasks, even where doing so initially slows them down. The reasoning is explicit: short-term inefficiency generates training data that produces long-term agent improvement.
The company has also been dissolving traditional job-function boundaries, replacing them with a new general-purpose role called "AI builder". Last month, Meta created a new Applied AI (AAI) engineering team, specifically aimed at improving the coding capabilities of its models and using those models to draft, test, and ship future products autonomously. Internal reports indicate Meta has also been building a personal AI agent for CEO Mark Zuckerberg himself, alongside a Zuckerberg chatbot that employees can query directly.
The picture that emerges is of a company actively transforming its workforce into both trainers and training data for the AI systems it hopes will eventually replace significant portions of that same workforce. The MCI accelerates this process by eliminating the need for expensive synthetic data generation or external contractor labelling. The employees are the labellers, and they do not need to know they are doing it.
TF Summary: What’s Next

Meta will continue rolling out the Model Capability Initiative to US-based staff, and it will expand internal data collection under the Agent Transformation Accelerator as Bosworth builds out the datasets and evaluation frameworks described in his memo. Regulatory scrutiny is likely: EU data protection authorities will almost certainly examine whether equivalent programmes can be deployed under GDPR in European offices, and labour-relations bodies in the US may begin examining whether employee consent protocols are adequate for this class of behavioural data collection.
The question is not whether other tech companies will follow Meta's approach. They likely will, or already are in less publicised ways. The real debate is about what "consent" means when an employer installs monitoring software on a work device and tells staff their compliance is helping the company's AI models. The line between participating in AI development and being monitored at work grows harder to distinguish. Meta has drawn that line more explicitly than most. Whether that transparency is reassuring or alarming depends entirely on how much you trust the safeguards it has not yet fully described.
— Text-to-Speech (TTS) provided by gspeech | TechFyle

