Google is withdrawing from voluntary disinformation commitments that are still set to be formalized into law.
Google has informed the European Union (EU) that it will not incorporate fact-checking into its flagship platforms, Search and YouTube, just as the bloc prepares to tighten its grip on digital disinformation. The move marks a significant pushback against the EU’s effort to convert voluntary commitments made under its 2022 Code of Practice on Disinformation into a binding code of conduct under the Digital Services Act (DSA).
A Shift in Cooperation
Google’s decision to back away from fact-checking commitments signals a notable departure from its previous stance. While the tech giant was an initial signatory of the voluntary Code of Practice, which aims to curb the spread of online disinformation, it now argues that fact-checking measures are neither “appropriate” nor “effective” for its services. According to Kent Walker, Google’s president of global affairs, the company will withdraw from all fact-checking commitments in the code before those measures become binding under the DSA.
In a letter to Renate Nikolay, the European Commission’s deputy director-general for content and technology, Walker emphasized the challenges posed by the fact-checking requirements. “Search and YouTube will endeavor to reach agreements with fact-checking organizations in line with this measure, but services will not have complete control over this process,” Walker wrote. The phrasing suggests that Google views these obligations as unworkable given the scale and operational complexity of its platforms.
What the Code of Practice Entails
The EU’s Code of Practice on Disinformation sets ambitious goals for curbing false information online. Signatories pledge to:
- Collaborate with fact-checkers across all EU member states.
- Provide fact-checking content in every EU language.
- Reduce financial incentives for spreading disinformation.
- Make disinformation easier for users to identify, understand, and report.
- Label political advertisements and analyze malicious activities, such as fake accounts, bots, and deepfakes.
The voluntary code has attracted roughly 40 signatories, including Microsoft, TikTok, Twitch, and Meta. However, compliance has varied significantly among signatories, with critics pointing to lax implementation. Notably, Twitter (now X) withdrew from the code entirely after Elon Musk’s acquisition of the platform, and Meta recently discontinued its fact-checking program in the United States.
A Broader Regulatory Context
Google’s rejection of fact-checking commitments comes amid growing tensions between U.S. tech companies and EU regulators. The EU has been a global frontrunner in holding digital platforms accountable, introducing stringent rules around data privacy (GDPR) and now tackling disinformation. However, this regulatory assertiveness has met resistance from U.S. tech giants, who often argue that these measures are overly prescriptive and impractical to implement at scale.
The political landscape adds further complexity. Leaders of major U.S. tech firms, including Google CEO Sundar Pichai, Apple CEO Tim Cook, and Meta’s Mark Zuckerberg, have been lobbying President-elect Donald Trump to counter EU regulatory efforts. Their concerns center on the potential for these regulations to set a global precedent, influencing other regions to adopt similar measures.
Fact-Checking in Google’s Ecosystem
Unlike some of its peers, Google has never fully embraced fact-checking as a core component of its content moderation practices. The company’s approach to tackling disinformation has primarily relied on algorithmic solutions, user education, and partnerships with trusted organizations. However, critics argue that these measures fall short of addressing the nuanced and rapidly evolving nature of online disinformation.
Google’s decision to opt out of the EU’s fact-checking requirements raises questions about the efficacy of voluntary codes in driving meaningful change. If even a major player like Google is unwilling to commit, what does this mean for the future of collective efforts to combat disinformation?
Uncertain Future for the DSA Code of Conduct
As the EU prepares to convert the Code of Practice into a formal code of conduct under the DSA, significant uncertainties remain. Lawmakers are still negotiating with signatories to determine which commitments will become legally binding, and the European Commission has yet to announce an official timeline for the DSA’s disinformation provisions, though they are not expected to take effect before January 2025.
For now, the effectiveness of the EU’s regulatory push remains in limbo. The European Fact-Checking Standards Network, a coalition of European fact-checking organizations, has criticized platforms’ inconsistent implementation of the code. If these voluntary measures are not robustly enforced, they risk becoming symbolic gestures rather than substantive tools for combating disinformation.
Implications and Broader Questions
Google’s stance raises critical questions about the balance between regulatory ambition and practical feasibility. While the EU’s efforts to combat disinformation are laudable, the withdrawal of a key player like Google underscores the challenges of achieving widespread compliance. As other platforms continue to navigate their commitments under the code, the debate over how best to tackle online disinformation is far from settled.
With the DSA’s implementation on the horizon, the stakes are high for both regulators and tech companies. The outcome of this standoff could shape the future of digital governance, not just in Europe but around the world: will the EU’s regulatory framework succeed in holding platforms accountable, or will pushback from industry giants force a rethink of its approach?