No, I definitely see why people hate on AI music. I appreciate that you had fun, but these songs suuuuuck.
Claude is excellent at a few things and decent at quite a few more. Art and music are not among them.
Are they good as tools to aid the creative process, if you know how to use them and have some restraint? Oh absolutely. As replacements for actual art? Oh absolutely not.
Same goes for the entire genre of tools.
ramon156
> Recently I was listening to music and doing some late night vibe coding when I had an idea. I love art and music, but unfortunately have no artistic talent whatsoever. So I wondered, maybe Claude Code does?
Do I need to read further? Seriously, everyone has talent. If you're not ready to create things yourself, just don't do it at all. Claude will not help you here. Be prepared to spend >400 hrs just fiddling around, and be prepared to fail a lot. There is no shortcut.
While the author explicitly wanted Claude to be in the creative lead here, I recently also thought about how LLMs could mirror their coding abilities in music production workflows, leaving the human as the composer and the LLM as the tool-caller.
Especially with Ableton and something like ableton-mcp-extended[1] this can go quite far. After adapting it a bit to use fewer tokens in tool call outputs, I could get decent performance from a local model telling me what the current device settings on a given track were. Imagine this with a more powerful machine, where requests like "make the lead less harsh" or "make the bass bounce" set off a chain of automatically added devices with new and interesting parameter combinations to adjust to your taste.
In a way this becomes a bit like the inspiration-inducing setting of listening to a song which is playing in another room with closed doors: by being muffled, certain aspects of the track get highlighted which normally wouldn’t be perceived as prominently.
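The human-as-composer, LLM-as-tool-caller pattern described above can be sketched generically. This is a toy illustration only: the tool names, the session layout, and the parameter names are hypothetical, not the actual ableton-mcp-extended API.

```python
# Toy sketch of the LLM-as-tool-caller pattern for a DAW session.
# Tool names and the parameter layout are hypothetical illustrations,
# NOT the real ableton-mcp-extended API.

# In-memory stand-in for a live set: track -> device -> parameters.
session = {
    "Lead": {"Operator": {"filter_cutoff": 18000, "resonance": 0.2}},
    "Bass": {"Drift": {"glide": 0.0, "drive": 0.5}},
}

def get_device_params(track: str, device: str) -> dict:
    """Tool the model would call to inspect current device settings."""
    return session[track][device]

def set_device_param(track: str, device: str, name: str, value: float) -> None:
    """Tool the model would call to adjust a single parameter."""
    session[track][device][name] = value

# A request like "make the lead less harsh" might translate into a
# short chain of tool calls chosen by the model:
set_device_param("Lead", "Operator", "filter_cutoff", 9000)
set_device_param("Lead", "Operator", "resonance", 0.1)

print(get_device_params("Lead", "Operator"))
# {'filter_cutoff': 9000, 'resonance': 0.1}
```

The point of the pattern is that the model never touches audio; it only reads and writes named parameters, which is exactly the shape of work LLMs already do well in coding agents.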
Related: ChatGPT Canvas apps can send/receive MIDI in desktop Chrome. A little easter egg. You can use it to quickly whip up an app that controls GarageBand or Ableton or your op-1 or whatever.
It can also just make sounds with tone.js directly.
shortformblog
Curious to see how this worked, I tried this on Deepseek using Claude Code Router, following the author’s guide, with two small changes: make it an emo song that uses acoustic guitar (or, obviously, an equivalent), and let it install one text-to-speech tool using Python.
It double-tracked the vocals like freaking Elliott Smith, which cracked me up.
tuhgdetzhh
We already had Cursor Composer last year, so this sounds like a step back.
bgirard
I like how the author shared the prompt + conversation transcripts. I wish OAI / Anthropic would do that when they share content demos.
JamesSwift
Oh man I love this so much. The prompts made me laugh so hard. Great experiment.
Marha01
Very interesting experiment! I tried something related half a year ago (LLMs writing midi files, musical notation or guitar tabs), but directly creating audio with Python and sine waves is a pretty original approach.
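The sine-wave approach can be reproduced with stdlib-only Python in a few lines. This is a minimal sketch of the general technique (an A-major triad written to a 16-bit WAV), not the author's actual script.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sine_chord(freqs, seconds, amplitude=0.25):
    """Sum equal-weight sine waves into one mono float signal in [-1, 1]."""
    n = int(SAMPLE_RATE * seconds)
    return [
        amplitude
        * sum(math.sin(2 * math.pi * f * t / SAMPLE_RATE) for f in freqs)
        / len(freqs)
        for t in range(n)
    ]

def write_wav(path, samples):
    """Write float samples in [-1, 1] as 16-bit mono PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# One second of an A-major triad (A4, C#5, E5).
write_wav("chord.wav", sine_chord([440.0, 554.37, 659.25], 1.0))
```

Everything past this (envelopes, percussion from noise bursts, mixing tracks by summing sample lists) is just more arithmetic on the same float lists before they hit `write_wav`.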
>I love art and music, but unfortunately have no artistic talent whatsoever.
Then go pay someone to teach you to play <instrument>, and you'll get a life skill that will be satisfying to watch grow, instead of whatever this soulless crap is.
edit: Oh god after listening to those samples, send Claude to the same music teacher you choose...
I can't believe AI music hasn't hit the mainstream yet. It's the most amazing thing I've seen since my original ChatGPT 3.5 wtf experience. https://suno.com/playlist/fe6b642c-f4a8-4402-b775-806348640e...
This song was generated from my 2-sentence prompt about a botched trash pickup: https://suno.com/s/Bdo9jzngQ4rvQko9
[1]: https://github.com/uisato/ableton-mcp-extended
https://strudel.cc/
_Neon Dreams_ is ELO × Daft Punk.