A survey published last week suggested 97% of respondents could not spot an AI-generated song. But there are some telltale signs - if you know where to look.
Here’s a quick guide …
- No live performances or social media presence
- ‘A mashup of rock hits in a blender’
A song with a formulaic feel - sweet but without much substance or emotional weight - can be a sign of AI, says the musician and technology speaker, as can vocals that feel breathless.
- ‘AI hasn’t felt heartbreak yet’
“AI hasn’t felt heartbreak yet… It knows patterns,” he explains. “What makes music human is not just sound but the stories behind it.”
- Steps toward transparency
In January, the streaming platform Deezer launched an AI detection tool, followed this summer by a system which tags AI-generated music.

AI imitates an overall sound, but doesn’t care much about “instruments” individually. For simple, minimal segments it can easily lay down a clear beat or melody, but the more that gets added, the muddier and more generic the sound becomes. And if you’re familiar enough with a given instrument, it can often just sound “wrong” - again, because the AI is imitating a sound, not playing an instrument.
But yeah. The other points stand. Social media presence and output are great indicators.
Midnight Darkwave is one I’m highly suspicious of. Super generic name. Not much presence beyond the streaming sites. I like the overall sound, but it often gets muddy and kind of droning - and not in the coldwave sort of way. Something a bit more inhuman, over-processed, and mechanical.
I have used Suno quite extensively just for fun. I insert my own lyrics and let it create different styles and beats, and you have to push out like 30 before it does something actually decent - but some of them are fucking bangers. I consider it like watching visualizations in WinAmp.
I am not stating a moral proposition in either direction, just an observation.
I like to ask it to generate lyrics based on funny prompts. For example, I asked it to write a song from Darth Vader’s perspective about the fact that he never actually said “Luke, I am your father”. The result was just savage.
That… Is… Not very good.
BUT! Let’s say that you took that and used it as a scaffolding for an idea. Let’s say you kept some parts, rewrote some other parts, in the end coming up with something much better than the original.
Did you write it, or did the robot?
How much would you have to change for it to be “yours”? Where’s the cutoff point?
I’m just posing this as a general metaphysical conundrum, I don’t take a position other than deconstructing underlying arguments.
Fucking shit. I am starting to sound like a god damned AI.
I just thought it was funny that it decided to brutalize Luke all on its own.
Also, the incest thing was all the AI. I just asked for a song about the fact that Vader never said the line “Luke, I am your father”.
Well then it is not only bad but wildly off topic, because what you got there is a cheap diss track missing a lot of opportunities.
“Hey Luke I heard you kissed your sister on Hoth”
I mean right there you’ve got something rhyming with “hot”. But anyway.
I think that if you don’t know literature and art, dumb literature and art seems genius. And that’s where we’re at as a society.
I can see a creative use for Suno Studio where you can feed it a clip of a chord progression you recorded, have the AI generate a few extrapolations, then arrange bits and pieces of it within Suno Studio to create the basic song structure and finally export the midi to your DAW. Basically, you can use it as a fancy sketchpad.
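To illustrate that sketchpad idea, here’s a minimal Python sketch of the “keep the bits you like” step, assuming the generated part has been exported as MIDI (the filename extrapolation.mid and the time window are made up), using pretty_midi before dragging the result into your DAW:

```python
# Minimal sketch: keep only a chosen time window from an AI-generated
# MIDI export, then write it back out for the DAW.
# Assumes: pip install pretty_midi, and an exported "extrapolation.mid"
# (the filename is hypothetical).
import pretty_midi

KEEP_START, KEEP_END = 8.0, 16.0  # seconds: the bit that's actually good

pm = pretty_midi.PrettyMIDI("extrapolation.mid")
out = pretty_midi.PrettyMIDI()

for inst in pm.instruments:
    kept = pretty_midi.Instrument(program=inst.program, is_drum=inst.is_drum)
    for note in inst.notes:
        # Keep notes that start inside the window, shifted back to t=0
        if KEEP_START <= note.start < KEEP_END:
            kept.notes.append(pretty_midi.Note(
                velocity=note.velocity,
                pitch=note.pitch,
                start=note.start - KEEP_START,
                end=min(note.end, KEEP_END) - KEEP_START,
            ))
    if kept.notes:
        out.instruments.append(kept)

out.write("sketch_for_daw.mid")
```

Then you just pull sketch_for_daw.mid into the DAW and build the actual song around it.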
The problem I can’t get past is the environmental impact.
Right? I used to use… I don’t remember what the app was on Android, but it was a fun little beat studio that could generate random patterns according to styles, and randomize instruments too. So you’d get a loop, then you’d tweak it and swap out instruments and sounds and whatnot, and when I found a nice rhythm, I’d recreate it in Ableton or FL Studio or whatever.
So, let’s say you use Suno to make a good beat. You import, slice, and dissect the beat and sounds, and I fail to see how that’s qualitatively any different from using sample or loop packs, which basically every fucking musician on the planet does.
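To make that concrete, here’s a quick sketch of the sample-pack treatment, assuming you’ve rendered the beat to a WAV (the filename suno_loop.wav is hypothetical) - librosa finds the transients and each hit gets written out as its own one-shot:

```python
# Rough sketch: chop a rendered AI loop into individual one-shots,
# the same way you'd dig through a sample pack.
# Assumes: pip install librosa soundfile, and a rendered "suno_loop.wav"
# (the filename is hypothetical).
import librosa
import soundfile as sf

y, sr = librosa.load("suno_loop.wav", sr=None)

# Find transient onsets (kicks, snares, hats...) as sample positions
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

# Slice between consecutive onsets and write each hit as its own file
bounds = list(onsets) + [len(y)]
for i, (start, end) in enumerate(zip(bounds[:-1], bounds[1:])):
    sf.write(f"slice_{i:02d}.wav", y[start:end], sr)
```

At that point the slices sit in your DAW’s sampler exactly like anything you’d have bought in a pack.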
“Are we so different, you and I”? :)
Absolutely, AI is just a tool. It can be used for good things and for bad things. And there are technologies being worked on right now, outside the circle jerk Ponzi scheme of the tech oligarchs, to make it less environmentally impactful. They just don’t care, and are rushing to make every last buck they can before the bubble pops.
Huh. I wonder which it’s gonna be!
Right now, mostly bad. But there are definitely some good things. Lots of new exciting science and such being enabled by it.