The rise of generative AI has introduced a new and deeply troubling challenge to the digital world: the spread of synthetic nude images that closely resemble real individuals. On Thursday, Microsoft made a significant move to combat this growing issue by equipping victims of revenge porn with a powerful tool to prevent these harmful images from appearing in Bing search results.
In a groundbreaking initiative, Microsoft has partnered with StopNCII (Stop Non-Consensual Intimate Images), an organization dedicated to helping victims of revenge porn. The collaboration allows individuals to create a digital fingerprint, known as a “hash,” of explicit images, whether real or AI-generated, directly on their own devices; the image itself never leaves the device, only the hash is shared. StopNCII’s partners, including Microsoft, then use that hash to find and remove matching images from their platforms, halting their spread. Bing now joins Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, Pornhub, and OnlyFans in using StopNCII’s technology to combat the dissemination of non-consensual intimate content.
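For readers curious how hash-based matching works in principle, here is a minimal sketch in Python. It is emphatically not StopNCII’s actual algorithm: production systems rely on far more robust perceptual hashing, and the filenames, the simple 64-bit average hash, and the matching threshold below are all illustrative assumptions. What the sketch does show is the privacy model, in which only a compact fingerprint, never the image itself, leaves the victim’s device.

```python
# A toy illustration of hash-based image matching, NOT StopNCII's actual
# algorithm. It uses a simple 64-bit "average hash"; real systems use far
# more robust perceptual hashes, but the privacy property is the same:
# only the hash is ever shared, never the image.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit perceptual hash of the image at `path`."""
    # Shrink to an 8x8 grayscale thumbnail so the hash reflects overall
    # structure rather than resolution, format, or minor edits.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits


def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(h1 ^ h2).count("1")


# On the victim's device: hash the image locally, submit only the hash.
reported_hash = average_hash("private_image.jpg")  # hypothetical filename

# On a platform's side: hash candidate uploads and compare. A small
# Hamming distance suggests a visual match even after minor alterations.
candidate_hash = average_hash("uploaded_image.jpg")  # hypothetical filename
if hamming_distance(reported_hash, candidate_hash) <= 5:  # illustrative threshold
    print("Likely match: flag for review and removal")
```

Because near-duplicate copies of an image produce hashes that differ in only a few bits, comparing fingerprints rather than exact files lets platforms catch recirculated copies without ever holding the original image.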
In a recent blog post, Microsoft shared the early impact of the partnership: during a pilot of StopNCII’s database that ran through the end of August, the company took action on 268,000 explicit images surfacing through Bing’s image search. Although Microsoft had previously offered a direct reporting tool for such content, the company acknowledged that this approach alone was insufficient.
“We have heard concerns from victims, experts, and other stakeholders that user reporting alone may not scale effectively for impact or adequately address the risk that imagery can be accessed via search,” Microsoft stated in its blog post.
While Microsoft’s efforts represent a significant step forward, the problem is far larger on Google, the world’s most popular search engine. Google Search offers its own tools for reporting and removing explicit images, but the company has faced criticism for not partnering with StopNCII. A recent Wired investigation found that Google has drawn substantial backlash from both former employees and victims over its handling of such content; since 2020, Google users in South Korea alone have reported 170,000 search and YouTube links containing unwanted sexual content, according to Wired.
AI-generated deepfake pornography is already widespread, with devastating consequences, and StopNCII’s tools only work for people over the age of 18. “Undressing” sites, platforms that generate synthetic nude images, are already causing significant distress for high school students across the United States. Compounding the problem, the U.S. has no comprehensive federal law addressing AI deepfake pornography, leaving the country to rely on a patchwork of state and local laws.
In August, prosecutors in San Francisco filed a lawsuit targeting 16 of the most notorious “undressing” sites. According to a tracker created by Wired, 23 U.S. states have enacted laws to address non-consensual deepfakes, while proposals in nine states have been struck down.
As the landscape of online content continues to evolve, partnerships like the one between Microsoft and StopNCII are critical in safeguarding individuals from the harm caused by AI-generated and non-consensual explicit content. While there is still much work to be done, Microsoft’s new tool represents a meaningful step toward a safer digital future.