Streaming platforms have settled on a position toward AI music. It just isn't the one they're announcing.
The official posture is detect, label, stay neutral. The real posture seems to be protect future optionality at the expense of working musicians today.
Deezer sees 75,000 AI uploads daily. Spotify removed 75 million spam tracks in twelve months. Public sentiment is hostile: 66% of listeners say they never knowingly consume AI music, and 52% wouldn't listen to their favorite artist if they knew AI was involved.
Nobody built a filter switch. Nobody will. The tell is the language. Spotify calls AI music "a spectrum, not a binary." Apple defers to content providers. YouTube cites "evolving standards." That's not caution. That's coordination around a future where AI becomes a normalized production input and no platform wants an old policy in the way.
Here's the actual damage: AI tracks account for 1% of streams on Deezer, but 85% of those streams are fraudulent. The threat to working musicians was never listener appetite. Nobody wants this music. The threat is royalty dilution at industrial scale: fraud-driven payouts draining the pool before legitimate artists get to it.
Bandcamp banned it outright. Every other major platform chose labels over limits.
The pattern is familiar. Platforms accumulated scale by staying neutral on content questions until neutrality itself became the economic position. AI music is just the current version. The upload flood continues, the fraud percentage climbs, and the royalty math gets worse for everyone who makes music with their hands.
75,000 uploads a day and 1% of streams. Someone is getting paid. It isn't the artists.
theverge.com/column/921…