OpenAI argues it needs access to copyrighted content to ‘avoid forfeiting’ the lead in AI to China.
In a move that could reshape the landscape of artificial intelligence, OpenAI and Google are urging the U.S. government to grant them broader access to copyrighted content for AI training. Both companies recently submitted official proposals arguing that training AI models on copyrighted works should qualify as fair use, with OpenAI framing the issue as a matter of national security.
The Push for Fair Use in AI Training
The companies’ proposals were submitted in response to a request from the White House, which sought input from industry leaders, private sector organizations, and policymakers on President Donald Trump’s “AI Action Plan.” This initiative aims to reinforce America’s dominance in artificial intelligence while avoiding regulatory roadblocks that could stifle innovation.
OpenAI’s argument is particularly striking. The company warns that barring U.S. AI firms from training on copyrighted content could place them at a serious disadvantage against Chinese competitors. It explicitly names DeepSeek, a fast-rising Chinese AI lab, as a potential threat to U.S. AI leadership.
“There’s little doubt that the PRC’s [People’s Republic of China] AI developers will enjoy unfettered access to data — including copyrighted data — that will improve their models. If the PRC’s developers have unrestricted access and American companies do not, the race for AI is effectively over.” — OpenAI’s official response
Google’s Take on Copyright Barriers
Google, echoing OpenAI’s stance, contends that existing copyright, privacy, and patent laws hinder access to the data needed to train cutting-edge AI models. The tech giant argues that fair use policies and text-and-data-mining exceptions have historically been crucial to AI development, allowing researchers to use publicly available information without lengthy and often unpredictable licensing negotiations.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders. They also help avoid highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.” — Google’s official response
Anthropic’s National Security-Centric Approach
Anthropic, the AI company behind the chatbot Claude, also submitted a proposal, but it took a different approach. Unlike OpenAI and Google, Anthropic did not mention copyright at all. Instead, it focused on national security, calling for a robust system to evaluate AI-related risks. The company also proposed strengthening export controls on AI chips and improving the country’s energy infrastructure to support the expanding demands of artificial intelligence.
The Legal and Ethical Controversy
The debate over AI training on copyrighted material is far from new. Numerous AI companies, including OpenAI, have been accused of scraping copyrighted content without permission to train their models. OpenAI is currently facing multiple lawsuits from major news organizations, including The New York Times. High-profile figures such as Sarah Silverman and George R.R. Martin have also taken legal action, alleging that AI models have been trained using their copyrighted works without consent.
Beyond OpenAI, Apple, Anthropic, and Nvidia have faced accusations of scraping YouTube subtitles for AI training—a practice that YouTube has explicitly condemned as a violation of its terms of service.
The Implications: Balancing AI Innovation and Copyright Protection
As AI continues to evolve, the conflict between technological advancement and intellectual property rights is intensifying. Supporters of OpenAI and Google’s stance argue that enabling AI companies to train on copyrighted content is essential for maintaining the U.S.’s global AI leadership. Critics, however, warn that such practices could undermine the rights of creators, publishers, and copyright holders, potentially leading to significant legal and ethical ramifications.
The U.S. government now faces a critical decision: Should it loosen copyright restrictions to empower AI innovation, or should it impose stricter safeguards to protect intellectual property? The outcome of this debate could shape the future of AI for decades to come.
As the discussion unfolds, one thing remains clear—AI companies will continue pushing the boundaries of what’s possible, and the legal frameworks surrounding intellectual property will have to adapt to keep pace.