When the Machine Becomes a Collaborator
AI music, ghost producers, and the thin line between authorship and abdication.
Every new technology arrives with an argument. That’s the pattern. Something disruptive appears, and our first instinct is suspicion. Then resistance. Then, slowly, adaptation. Eventually we forget there was ever a debate.
Photography went through this cycle. When cameras appeared, critics claimed the machine was doing the work. You didn’t make the picture; the shutter did. How could that possibly count as art?
And yet, over time, photography became one of the most expressive art forms humans have ever invented. Because the artistry wasn’t in the mechanics of capturing light. It was in the framing, the timing, the perspective. A camera without a photographer is just a box. A photographer with a camera can change the way we see the world.
Now AI is stepping into the same argument.
First Contact
For a long time, I ignored the new wave of generative AI platforms. Not the invisible AI that has been in music studios for years: EQ assistants, pitch correction, denoising, mastering algorithms. Those have quietly become standard. I mean the ones that promise to write songs from a prompt, or spin a complete track out of a sentence.
I blanked them out of suspicion, maybe pride. In certain circles, “AI music” is a dirty phrase, shorthand for a flood of derivative tracks clogging up streaming services.
And then, inevitably, curiosity cracked me.
I tried one of the platforms that can take an existing demo and re-imagine it, like making a cover version of your own half-finished song. Suddenly I was digging through my old hard drives, feeding in sketches just to see what would come back.
It’s addictive.
The Ghost Producer
AI has a sound. You can hear its averageness, the way it smooths towards the centre of pop. But it’s also unnervingly sharp. It knows when the hook should land, when the chorus should arrive, how long to hold attention before shifting.
It feels like having a ghost producer in the room, leaning over your shoulder:
“Bring the vocals in earlier. Don’t lose them here. Add a middle eight there.”
Used this way, it’s less about finished songs and more about perspective. A reference point. A way of jolting your ear, making you hear your own work differently. Sometimes it suggests something you’d never have imagined yourself.
That’s when it feels like collaboration.
Hot, Cold, Repeat
But it doesn’t last. After a burst of fascination, I find myself going cold. The same AI voice, the same polish, the same “familiar but not quite” quality starts to grate.
It reminds me of overusing a plug-in. The first time you load a shiny new piano library or a lush string patch, it feels like magic. After the fiftieth time, you hear the repetition, the ceiling of expression. What was inspiring starts to feel like a cage.
That’s where you notice what the machine can’t do: variation, imperfection, the accidents that give music life.
Is It Cheating?
And here’s the tension: if I make a track, feed it into AI, then reshape my song in response, whose work is it? Where does authorship live?
At one extreme, you type a single line, “write me a pop song about the moon”, and hit generate. Nobody would call that authorship.
But what about a twenty-page prompt refined over a thousand iterations? Is that writing? Producing? Supervising? Should there be such a thing as an “AI prompt credit”?
This is where the photography analogy starts to wobble. Photographers never just pressed a button and walked away. They chose the lens, the frame, the light, the moment. With AI, it’s possible to outsource even the act of choosing.
So maybe the real line isn’t between human and machine, but between attention and abdication. Between those who frame, and those who don’t.
The Detection Problem
Then there’s the mess of detection.
I tested one of the new AI-spotting tools on my work. A hybrid track with some AI-assisted drums and vocals came back as “90% human.” Fair enough.
Then I fed in a completely human-made orchestral piece. It flagged as “90% AI.”
That’s when I realised the danger. If platforms start relying on flawed detectors, human music could be stripped out while AI tracks slip through unchallenged. A strange dystopia where the robots keep their place and the people get pushed aside.
The Ethics of Scraping
None of this would feel as thorny if the foundations were solid. But they’re not.
The truth is, Silicon Valley built these systems by hoovering up the world’s creative output. Every song, every performance, every recording. They had the money to license, to credit, to pay. They chose not to.
I don’t think they’ll get away with it forever. My optimistic side believes we’ll enter a phase where people simply won’t accept unlicensed AI. Already, in sync and commercial music, most companies avoid it. The risk is too high.
But there’s also opportunity here. Imagine a system where musicians license their style, their playing, their voice. Where labels bid for access to specific likenesses. Where artists share in the revenue of the models built on their work. That would be fairer, and sustainable.
Tech That Complicates
One of the promises of AI is efficiency. Faster music, faster workflow.
But in practice, it often adds friction.
I once fed a rough demo into an AI platform, then stripped out the stems and rebuilt almost the whole track myself: my own drums, keys, bass, textures. In the end, only fragments of the AI version remained. And it took me longer than if I’d just started fresh.
That’s the irony: the tech that promises simplicity often makes life more complicated.
It happens everywhere. Ordering food now means logging into an app, scanning QR codes, navigating badly designed menus. Schools send endless messages through multiple portals and calendars, where once a single letter sufficed. In theory, it’s streamlined. In practice, it’s scattershot.
AI risks being the same: an extra layer that sometimes clarifies, sometimes just clutters.
Where It Gets Interesting
For me, the real excitement isn’t in pushing a button and releasing the output. It’s in AI as an instrument.
Tools that transform sound, process audio in new ways, or generate playable virtual instruments. Imagine models trained not just on genres but on performance nuances: bending, sliding, breathing. Imagine immersive or interactive formats that respond in real time.
That’s where entirely new genres could appear. Music that couldn’t have been made before.
Out with the Old, In with the… What Exactly?
We’ve been here before. New technologies always displace old roles, but they also create new ones. Jobs that once seemed permanent vanish, while others emerge that we couldn’t have imagined.
Right now, AI feels like it’s in that awkward middle ground: too new to be stable, too messy to be trusted, but too powerful to ignore.
So I treat it as experimentation. If it sparks an idea, if it reframes a track, if it pushes me into a different angle, that’s a net positive. But I wouldn’t release a track that was purely AI-generated. That feels hollow.
Because in the end, the spark still matters. The thing only people bring: variation, imperfection, meaning.
Back to the Frame
Which brings me back to photography.
Once upon a time, people argued a photograph wasn’t art because the camera did the work. Eventually, we learned to see the artistry in what was chosen, not just what was captured.
AI may find the same place. But it won’t happen by accident. It will depend on how we use it: whether we abdicate our choices to the machine, or whether we use it as a lens, a mirror, an extension.
Because the frame still matters.
And for now, the frame is still ours.
I think it’s too early to form a final opinion about AI. For now, if one doesn’t like the production or the product, one can choose not to listen to it. If everything became AI, we wouldn’t have that choice. We’ll see where it goes.