Back in May 2024, OpenAI announced an ambitious tool called Media Manager, promising to empower creators with the ability to decide how their works could be used in training AI models. Touted as a solution to mounting criticisms and looming legal battles, this tool aimed to let creators specify whether their text, images, audio, and video could be included in OpenAI’s datasets. Yet, as 2025 begins, Media Manager has yet to materialize, leaving many creators frustrated and skeptical of OpenAI’s intentions.
A Promising Vision, but Little Action
The idea behind Media Manager was groundbreaking. OpenAI described it as a machine learning-powered tool capable of identifying copyrighted materials and respecting creators’ preferences across multiple platforms. It was meant to be a comprehensive upgrade to the existing opt-out mechanisms, which many creators criticized as cumbersome and incomplete.
In OpenAI’s words, Media Manager was designed to “set a standard across the AI industry” and address concerns over intellectual property (IP) rights. By doing so, the company hoped to fend off critics and avoid the mounting legal challenges it faced from creators whose works had been used without permission.
However, despite these lofty promises, the tool has yet to see the light of day. “I don’t think it was a priority,” said a former OpenAI employee. “To be honest, I don’t remember anyone working on it.”
Another individual who coordinates projects with OpenAI confirmed that discussions about Media Manager took place earlier in the year but noted there had been no updates since. Even Fred von Lohmann, a key legal figure at OpenAI who worked on the project, transitioned to a part-time consultant role in October 2024. OpenAI’s PR team has confirmed von Lohmann’s reduced involvement but has provided no updates on Media Manager’s development.
Missing Deadlines and Creator Frustration
In its May announcement, OpenAI promised to have Media Manager ready “by 2025.” While this phrasing could technically cover the entirety of 2025, many interpreted it to mean the tool would launch by January 1, 2025. With that date now passed and no sign of Media Manager, creators and critics alike are questioning OpenAI’s commitment to the project.
For creators, the lack of progress is particularly frustrating. OpenAI’s current opt-out solutions, including a submission form for flagging images and web-crawling blocks, have been described as inadequate and burdensome. For example, the image opt-out process requires creators to submit individual copies of each image they wish to exclude, along with detailed descriptions. These methods fail to address the needs of writers, musicians, and video creators, leaving them without effective tools to protect their works.
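For context, the web-crawling block mentioned above works through the standard robots.txt mechanism: a site owner can disallow OpenAI’s documented GPTBot crawler. A minimal sketch of such a file (the site-wide Disallow rule is illustrative; owners can scope it to specific paths):

```
# robots.txt, served from the site root
# Asks OpenAI's GPTBot crawler not to fetch any page on this site
User-agent: GPTBot
Disallow: /
```

Note that this only helps creators who control the site hosting their work, which is part of why critics describe the current mechanisms as incomplete.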
A Wave of Legal Challenges
The absence of Media Manager is even more significant in light of the growing number of legal challenges OpenAI faces. The company is currently battling class-action lawsuits from artists, authors, and media organizations, including prominent figures like Sarah Silverman and Ta-Nehisi Coates. These lawsuits allege that OpenAI illegally trained its models on copyrighted materials without obtaining proper permissions.
The stakes are high. AI models like OpenAI’s rely on massive datasets to function effectively. These datasets often include publicly available materials, but many of these are copyrighted works not intended for such use. For instance, OpenAI’s video generator, Sora, has been shown to produce clips featuring TikTok’s logo and characters from popular video games. Similarly, ChatGPT has quoted copyrighted articles verbatim under certain prompts, behavior that OpenAI attributed to a “hack.”
OpenAI has pursued licensing deals with select partners to mitigate these issues, but many creators argue that these deals are neither fair nor inclusive. Meanwhile, competitors like Anthropic and Google are implementing their own copyright protection measures, adding pressure on OpenAI to deliver.
The Ambitions and Limitations of Media Manager
Even if Media Manager eventually launches, experts are skeptical about its ability to resolve the complex legal and ethical issues surrounding AI training. Adrian Cyhan, an intellectual property attorney, notes that building a robust content identification and opt-out system is a Herculean task.
“Even platforms as large as YouTube and TikTok struggle with content ID at scale,” Cyhan said. “Ensuring compliance with creator protections and compensation requirements is especially challenging given the rapidly evolving legal landscape.”
Others argue that Media Manager, as described, places an unfair burden on creators. Ed Newton-Rex, founder of Fairly Trained, a nonprofit advocating for creator rights, believes the tool risks becoming a scapegoat for OpenAI. “Most creators will never even hear about it, let alone use it,” he said. “But OpenAI could still use its existence to justify widespread use of creative works without explicit permission.”
Additionally, Media Manager may not address scenarios where copyrighted works appear on third-party platforms without the creator’s knowledge. “Creators often don’t even know where their works are hosted,” said Joshua Weigensberg, an IP and media lawyer. “Even if they opt out, companies may still train on copies of their works found elsewhere.”
What’s at Stake for OpenAI?
OpenAI’s legal battles hinge on the principle of “fair use,” which allows limited use of copyrighted materials without permission under certain conditions. The company argues that its models create transformative works rather than direct copies, a claim that courts may ultimately need to evaluate.
If courts rule in OpenAI’s favor, Media Manager could become largely irrelevant from a legal standpoint. However, the tool could still serve as a valuable PR move, positioning OpenAI as an ethical leader in the AI industry. “This feature may be more about optics than practical impact,” said copyright lawyer Evan Everist. “But it’s not a substitute for adhering to copyright law.”
The Road Ahead
In the absence of Media Manager, OpenAI has implemented filters to prevent its models from reproducing training examples verbatim. However, these filters are far from perfect, and creators remain skeptical of the company’s intentions. As legal and public scrutiny intensifies, OpenAI faces a reckoning that could shape the future of AI and intellectual property rights.
Whether Media Manager eventually arrives or not, one thing is clear: OpenAI’s promises alone won’t be enough to quell the growing discontent among creators. Only meaningful action can address the concerns of those whose works have been swept up in the AI revolution.