Microsoft, Google Fight Back Against Deepfake Nudes in Search

Z Patel

Microsoft and Google have taken steps to combat the growing problem of deepfake nudes and non-consensual intimate images appearing in search results. These tech giants are leveraging new technologies and partnerships to ensure that users have more control over their digital presence, particularly when it comes to sensitive or explicit content.

What’s Happening & Why This Matters

Since March, Microsoft has removed nearly 300,000 intimate images from its Bing image search results. These images were posted online without consent, highlighting the ongoing battle against non-consensual explicit content on the internet. Microsoft has also provided users with a way to request the removal of any nude or sexually explicit images or videos of themselves from Bing’s search results.

To further strengthen these efforts, Microsoft partnered with the Stop Non-Consensual Intimate Image Abuse (StopNCII) organization. This partnership has integrated an updated version of Microsoft’s PhotoDNA technology into the StopNCII platform, allowing users over the age of 18 to create a digital fingerprint, or hash, of images they do not want shared online. This digital hash is then shared with StopNCII partners, including platforms like Instagram, Facebook, TikTok, Threads, OnlyFans, Bumble, and Reddit. If content matching the hash appears on these sites, it is flagged for potential removal.
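The hash-matching flow described above can be sketched in a few lines. Note the hedges: PhotoDNA's actual algorithm is a proprietary *perceptual* hash that also matches resized or lightly edited copies, and the `HashRegistry` class here is purely hypothetical. This sketch substitutes a plain cryptographic hash just to illustrate the submit-then-match workflow, in which only the fingerprint, never the image itself, is shared with partner platforms.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # Illustrative stand-in: a cryptographic hash of the raw bytes.
    # PhotoDNA's real perceptual hash tolerates resizing and minor edits;
    # SHA-256 only matches byte-for-byte identical copies.
    return hashlib.sha256(image_bytes).hexdigest()


class HashRegistry:
    """Hypothetical model of the StopNCII hash-sharing flow."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def submit(self, image_bytes: bytes) -> str:
        # The user fingerprints the image locally; only the hash is stored.
        h = fingerprint(image_bytes)
        self._hashes.add(h)
        return h

    def matches(self, uploaded_bytes: bytes) -> bool:
        # Partner platforms check new uploads against the shared hash list
        # and flag matches for potential removal.
        return fingerprint(uploaded_bytes) in self._hashes


registry = HashRegistry()
registry.submit(b"private-photo-bytes")
print(registry.matches(b"private-photo-bytes"))  # exact copy -> True
print(registry.matches(b"different-photo"))      # no match  -> False
```

Because this sketch uses an exact-match hash, any change to the file would defeat it; the value of a perceptual hash like PhotoDNA's is precisely that it survives such transformations.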

Deepfake content statistics (Dec 2023). Credit: AI Content Detector

Courtney Gregoire, Microsoft’s Chief Digital Safety Officer, noted in a blog post that the company has been testing the StopNCII database to prevent such content from appearing in Bing’s image search results. Through the end of August, Microsoft had acted on 268,899 images, and it plans to expand the partnership further. Gregoire encouraged adults worried about their images being released to report them to StopNCII.

Microsoft has also launched its own reporting portal to support adults dealing with AI-generated revenge porn or deepfake nudes. The company stated that images of minors should be reported as child sexual exploitation and abuse imagery.

Image credit: Sint/India Today

Google is similarly stepping up its efforts to address this issue by enhancing its search algorithms and working with advocacy groups to prevent the proliferation of deepfake content. These actions are part of a broader commitment from both companies to make the internet safer and more respectful of individuals’ privacy.

TF Summary: What’s Next

The actions taken by Microsoft and Google enlist the search leaders in combating the abuse of deepfake technology. As both companies strengthen their tools and collaborate with organizations like StopNCII, the effectiveness of their measures is likely to improve. Tech companies, watchdogs, regulators, and individuals all must remain vigilant — adapting as threats evolve and prioritizing the safety and privacy of all users in the digital space.

By Z Patel “TF AI Specialist”
Background:
Zara ‘Z’ Patel stands as a beacon of expertise in the field of digital innovation and Artificial Intelligence. Holding a Ph.D. in Computer Science with a specialization in Machine Learning, Z has worked extensively in AI research and development. Her career includes tenure at leading tech firms where she contributed to breakthrough innovations in AI applications. Z is passionate about the ethical and practical implications of AI in everyday life and is an advocate for responsible and innovative AI use.