> 25K parameters is about 70 million times smaller than GPT-4. It will produce broken sentences. That's the point - the architecture works at this scale.
Since it seems to just produce broken and nonsensical sentences (at least based on the one example given), I'm not sure it does work at this scale.
Anyway, as written this passage doesn't really make a whole lot of sense (the point is that it produces broken sentences?), and given that it was almost certainly written by an AI, it demonstrates that the architecture doesn't work especially well at any scale (I kid, I kid).
It (v3) mostly only says hello and bye, but I guess for 25k parameters you can't complain. (I think the rather exuberant copy is probably the product of Claude et al.)
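For what it's worth, the "70 million times smaller" figure roughly checks out if you take the widely repeated (but unconfirmed) ~1.8 trillion parameter estimate for GPT-4. A quick sanity check, with that estimate as an explicit assumption:

```python
# Back-of-envelope check of the "70 million times smaller" claim.
# GPT-4's parameter count is not public; ~1.8 trillion is a commonly
# repeated estimate, used here purely as an assumption.
gpt4_params = 1.8e12       # assumed estimate, not an official figure
c64_model_params = 25_000  # the C64 model's stated size

ratio = gpt4_params / c64_model_params
print(f"{ratio:.1e}")  # ~7.2e7, i.e. roughly 70 million times smaller
```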
mixmastamyk
Just reminded me of the random sentence generator program on my Vic-20. I had changed most of the words to all the bad words a preteen could think up. So many laughs with the neighborhood kids.
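Those random sentence generators were just template-filling from word lists. A minimal sketch of the idea (word lists here are placeholders, not the VIC-20 original):

```python
import random

# Classic template-based random sentence generator: pick words from
# hand-made lists and slot them into a fixed grammatical frame.
nouns = ["dog", "wizard", "toaster"]
verbs = ["eats", "launches", "befriends"]
adjectives = ["sleepy", "gigantic", "suspicious"]

def sentence():
    return (f"The {random.choice(adjectives)} {random.choice(nouns)} "
            f"{random.choice(verbs)} the {random.choice(adjectives)} "
            f"{random.choice(nouns)}.")

print(sentence())
```

Swapping in a preteen's vocabulary is left as an exercise for the reader.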
arketyp
I love these counterfactual creations on old hardware. It highlights the magical creative freedom of software.
borsch_not_soup
Interesting, I’ve always thought neural network progress was primarily bottlenecked by compute.
If it turns out that LLM-like models can produce genuinely useful outputs on something as constrained as a Commodore 64—or even more convincingly, if someone manages to train a capable model within the limits of hardware from that era—it would suggest we may have left a lot of progress on the table. Not just in terms of efficiency, but in how we framed the problem space for decades.
classichasclass
If you're running this in VICE, run it under the SuperCPU with warp mode on.
rahen
A little disappointed to see PyTorch + Claude here. I was hoping for some "demo-scene" hand-crafted 6502 assembly, and hopefully training on the C64.
anyfoo
This would have blown me away back in the late 80s/early 90s.
(Or maybe not, if it doesn't perform better than random, I haven't actually tried it out yet. Some more examples would have been nice!)
I wonder how far you could push this while still staying period correct, e.g. by adding a REU (RAM Expansion Unit), or even a GeoRAM (basically a REU on steroids).
SuperCPU would also be an option, but for me it's always blurring the line of "what is a C64" a bit too much, and it likely just makes it faster anyway.
djmips
Disappointed - there was no 6502 code in the GitHub repo.
brcmthrowaway
How does this compare to ELIZA?
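The key difference: ELIZA had no learned parameters at all, just hand-written pattern/response rules. A minimal sketch of the style (these rules are illustrative placeholders, not Weizenbaum's original DOCTOR script):

```python
import re
import random

# ELIZA-style rule matcher: hand-written regex patterns, no learning.
# Rules here are made-up examples, not the original DOCTOR script.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"(.*)\bmother\b(.*)", ["Tell me more about your family."]),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, responses in RULES:
        m = re.match(pattern, text)
        if m:
            return random.choice(responses).format(*m.groups())
    return "Please go on."

print(respond("I feel tired"))  # e.g. "Why do you feel tired?"
```

A 25K-parameter transformer learns its (broken) sentences from data; ELIZA only ever echoes its author's templates.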
harel
Eliza called, and asked if we saw her grandkids...
Vaslo
LOAD"*",8,1
Brings back memories
Lerc
Ok now we need 1541 flash attention.
I'm not sure what the Venn diagram of knowledge needed to understand what that sentence is suggesting looks like, but the intersection is probably more crowded than one might think.
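For anyone outside that intersection: FlashAttention computes exact attention over key/value tiles with an online softmax, so the full N x N score matrix never has to exist in memory at once, which is why streaming it from a slow 1541 disk drive is (almost) a coherent joke. A minimal NumPy sketch of the tiling idea (educational only; function and block size are my own choices):

```python
import numpy as np

def tiled_attention(Q, K, V, block=4):
    """Exact softmax attention computed one key/value tile at a time,
    using the online-softmax trick behind FlashAttention. The full
    score matrix is never materialized."""
    n, d = Q.shape
    out = np.zeros((n, d))
    row_max = np.full(n, -np.inf)   # running max per query row
    row_sum = np.zeros(n)           # running softmax denominator
    scale = 1.0 / np.sqrt(d)
    for j in range(0, K.shape[0], block):
        Kj, Vj = K[j:j+block], V[j:j+block]
        S = (Q @ Kj.T) * scale                  # scores for this tile only
        new_max = np.maximum(row_max, S.max(axis=1))
        correction = np.exp(row_max - new_max)  # rescale stats from old max
        P = np.exp(S - new_max[:, None])
        row_sum = row_sum * correction + P.sum(axis=1)
        out = out * correction[:, None] + P @ Vj
        row_max = new_max
    return out / row_sum[:, None]
```

(A real 1541 disk holds about 170 KB and transfers a few hundred bytes per second, so "1541 flash attention" remains firmly a joke.)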
You can chat with the model on the project page: https://indiepixel.de/meful/index.html
i hate ai, and i love the c64, but i'll allow it.
but can you make mac keyboards feel like a c64c?