Tech

Not even Spotify is safe from AI slop

Usama
Last updated: November 16, 2024 10:24 am

How fake music targets real artists.

The first sign something was off came through my Spotify Release Radar. A new HEALTH album popped up, but something about it felt wrong. The cover art was off-brand for the band, and when I played it, the music didn’t sound like HEALTH at all. It turned out to be one of three fake albums uploaded to their artist page that weekend.

Contents
  • The Broken Honor System of Music Distribution
  • Fraud as a Business Model
  • The AI Factor: Accelerating the Problem
  • A System Struggling to Keep Up
  • The Bigger Picture: A Crisis of Trust
  • A Way Forward?

The band’s account on X made light of the situation, joking about the imposters. The fraudulent albums were eventually removed, and I chalked it up as a one-off mishap. But the following weekend, it happened again—this time to Annie. The fake album appeared plausible at first since Annie had recently dropped a single, “The Sky Is Blue.” However, when I clicked in, the tracklist didn’t match up. I played it and was greeted by a mishmash of bird sounds and New Age instrumentals. It wasn’t Annie.

“That was upsetting to me because if you have ears, you can definitely hear it’s not our music,” one artist later told me.

It turned out Annie and HEALTH weren’t alone. This was part of a broader issue affecting countless musicians on Spotify, many of whom discovered their pages hijacked by AI-generated slop. Metalcore bands like Caliban, Northlane, and Silent Planet were among the early targets, as were a suspicious number of artists with single-word names like Swans, Asia, and Gong. Some of these fakes disappeared within days, while others lingered for weeks or months, unchecked and unchallenged.

[Image: Can you spot the fake?]

The Broken Honor System of Music Distribution

To understand how this happens, you need to understand how music distribution works. Artists don't upload albums directly to Spotify the way they'd post to a Facebook page. Instead, they go through distributors that handle metadata, licensing, and royalty payments. These distributors upload vast amounts of music in bulk, trusting the metadata to ensure everything ends up in the right place. But this process largely operates on the honor system.
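The honor system above boils down to name-based matching: a release lands on whatever artist page its metadata names. Here is a minimal sketch of why that is exploitable. The field names and matching logic are illustrative assumptions, not Spotify's actual ingestion code:

```python
# Illustrative sketch of a naive ingestion step that trusts distributor
# metadata at face value. Field names and logic are assumptions only.

artist_pages = {"HEALTH": [], "Annie": [], "Standards": []}

def ingest_release(metadata):
    """Map an uploaded release to an artist page purely by the name string
    the distributor supplied. Nothing here verifies that the uploader
    actually represents that artist."""
    name = metadata["artist_name"]
    artist_pages.setdefault(name, []).append(metadata["album_title"])

# A fraudster only needs to put a real band's name in the metadata:
ingest_release({"artist_name": "HEALTH", "album_title": "Not a real HEALTH album"})
print(artist_pages["HEALTH"])  # the fake lands on the real band's page
```

Because the check is a bare string match, nothing distinguishes the band's own distributor from an impostor's.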

Fraudsters exploit this loophole by falsely claiming to represent legitimate artists. A distributor might unknowingly upload fake content, which Spotify then accepts at face value. For instance, when Standards, an instrumental math-rock band, saw a fake album appear on their page in late September, their lead guitarist Marcos Mena assumed it was a simple mistake. But when he contacted Spotify, he received a disappointing automated response:

“It looks like the content is mapped correctly to the artist’s page. If you require further assistance, please contact your music provider. Please do not reply to this message.”

The fake album remained on Standards’ page for nearly two months, despite Mena’s efforts to have it removed. It wasn’t until early November—long after the band’s real 2023 album had been released—that someone finally took it down. “That was upsetting to me,” Mena said. “It’s definitely a bummer because we did have a new album come out this year, and I feel like [the fake album] is detracting from that.”

[Image: Can you spot the fake?]

Fraud as a Business Model

Why would anyone go to the trouble of uploading fake albums? The answer, unsurprisingly, is money. Spotify pays royalties based on streams, with payouts going through the distributor. When someone streams a fake album, the royalties are funneled not to the artist but to whoever uploaded the counterfeit.

Consider the case of “Gupta Music,” the supposed label behind the fake Standards album. A search for Gupta Music on platforms like Every Noise at Once reveals over 700 releases, many with AI-generated cover art and generic single-word band names like Rany, Bedroom, and Culture. Each of these albums has the potential to generate royalties, which—when aggregated across hundreds or thousands of fraudulent tracks—can add up to significant payouts.
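As a back-of-envelope illustration of how tiny per-stream payouts add up across a large fake catalog: per-stream rates are not public and vary by market, so the rate below is a commonly cited ballpark, and the track and stream counts are assumptions, not figures from this article:

```python
# Rough arithmetic only. $0.004/stream is a commonly cited ballpark figure;
# tracks-per-release and streams-per-track are assumed for illustration.
PER_STREAM_RATE = 0.004     # USD, assumed
releases = 700              # releases attributed to "Gupta Music"
tracks_per_release = 10     # assumed
streams_per_track = 1_000   # assumed, e.g. bot-driven

payout = releases * tracks_per_release * streams_per_track * PER_STREAM_RATE
print(f"${payout:,.0f}")    # roughly $28,000 under these assumptions
```

Even modest bot traffic per track turns into real money once it is multiplied across hundreds of releases, which is exactly the aggregation the scam depends on.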

[Image: Screenshot of Every Noise at Once, before the albums were removed.]

It’s not just Spotify, either. This type of scam plays out across more than 100 streaming platforms worldwide. And while Spotify recently cut ties with Ameritz Music, the licensor responsible for this wave of fakes, the problem is far from solved.

The AI Factor: Accelerating the Problem

AI tools have turbocharged these scams, making it easier than ever to produce low-effort content. Fraudsters no longer need to remix or plagiarize existing tracks—they can generate entirely new songs with minimal effort. They also exploit typos, uploading tracks under names like Kendrik Laamar, Arriana Gramde, and Llady Gaga to siphon off streams from careless listeners.

Earlier this year, a Danish man was sentenced to 18 months in prison for using bots to artificially inflate streams on Spotify, earning him $300,000 in royalties. Another fraudster, Michael Smith, allegedly made $10 million over seven years by using AI to generate a massive catalog of fake songs, which he then streamed using bots.

“People upload massive amounts of albums that are intended to be streaming fraud albums,” says Andrew Batey, CEO of Beatdapp, a company focused on preventing streaming fraud. He estimates that $2–3 billion is stolen annually from legitimate artists due to schemes like these.

A System Struggling to Keep Up

Spotify claims to “invest heavily in automated and manual reviews” to combat fraud. However, the sheer volume of uploads and the complexity of the scam make it a challenging battle. Fraudsters often use multiple fake labels and distributors to obscure their tracks. Some even hack real users’ accounts to disguise fraudulent streams among legitimate activity.

Meanwhile, the distribution industry faces its own challenges. Distributors make money by taking a cut of artists’ royalties, so they have little financial incentive to crack down on fraud unless forced. Lawsuits, like the one Universal Music Group (UMG) recently filed against Believe and TuneCore, might change that calculus. UMG’s lawsuit alleges that Believe knowingly distributed infringing content, including tracks attributed to fake artists with names designed to trick users.

The Bigger Picture: A Crisis of Trust

For Spotify, the rise of AI-generated fraud poses an existential threat. The platform’s value lies in its ability to connect users with the music they love. If listeners regularly encounter garbage masquerading as their favorite artists, trust in the platform will erode.

AI is just the latest accelerant in a problem that has existed for years, but its impact is undeniable. Fraudsters now have the tools to create fake songs at scale, flooding platforms like Spotify with low-quality content. The result is a degraded user experience, not unlike what’s happened to other platforms like Facebook, Instagram, and even Google, as they’ve become overrun with spam and low-value AI content.

A Way Forward?

Fixing this problem requires action at multiple levels. Distributors need better vetting processes, and streaming platforms like Spotify need to implement more robust safeguards. For instance, Spotify could flag uploads from unfamiliar distributors or labels that don’t match an artist’s existing catalog. Improved AI detection tools could also help identify when a new release doesn’t sound like the artist’s established style.
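One such safeguard could be a simple catalog-consistency check: flag any release whose distributor and label have never appeared in the artist's existing catalog. This is a hypothetical heuristic sketched for illustration, not a description of any platform's real system, and the field names are assumptions:

```python
# Hypothetical heuristic: flag a release whose distributor AND label are
# both unseen in the artist's prior catalog. Not any platform's real logic.

def flag_suspicious(new_release, catalog):
    """Return True if neither the release's distributor nor its label
    appears in any prior release by this artist."""
    known_distributors = {r["distributor"] for r in catalog}
    known_labels = {r["label"] for r in catalog}
    return (new_release["distributor"] not in known_distributors
            and new_release["label"] not in known_labels)

catalog = [{"distributor": "Real Distributor", "label": "Real Label"}]
fake = {"distributor": "Unknown Dist", "label": "Gupta Music"}
print(flag_suspicious(fake, catalog))  # True: neither matches the catalog
```

A heuristic this crude would misfire on legitimate label changes, which is precisely the balancing act the next paragraph describes: any automated gate strict enough to catch impostors will also inconvenience some real artists.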

But the industry faces a delicate balancing act. Over-policing uploads risks alienating legitimate artists, especially indie musicians who often operate outside the mainstream system. Yet failing to act could see the platform further exploited by fraudsters, ultimately driving away listeners and artists alike.

As UMG’s lawsuit against Believe unfolds in the coming months, its outcome could reshape how distributors handle uploads and enforce quality control. For now, though, Spotify remains a battleground where legitimate musicians are fighting to reclaim their pages—and their royalties—from the rising tide of AI slop.
