djoldman

I would love for the standard to be to ALWAYS report the memory required to load and run a model, in bytes of RAM, alongside any other metrics. I'd love to see time to first token, token throughput, and token latency as well, but I'd settle for memory size as described above.

Essentially, many people want to know what the minimum amount of memory is to run a particular model.

Parameter count obscures important details: what are the sizes of the parameters? A "parameter" isn't rigorously defined. This also gets folks into trouble because a 4B-param model with FP16 params is very different from a 4B-param model with INT4 params. The former obviously should be a LOT better than the latter.
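To make the gap concrete, here's a back-of-the-envelope sketch (my own illustration; it counts raw weight bytes only and ignores activations, KV cache, and runtime overhead):

```python
def weight_bytes(n_params: int, bits_per_param: int) -> int:
    """Lower bound on model memory: raw weight storage only,
    ignoring activations, KV cache, and framework overhead."""
    return n_params * bits_per_param // 8

GIB = 1024 ** 3
print(weight_bytes(4_000_000_000, 16) / GIB)  # FP16: ~7.45 GiB
print(weight_bytes(4_000_000_000, 4) / GIB)   # INT4: ~1.86 GiB
```

Same parameter count, a 4x difference in the minimum RAM just to load the weights.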

This would also help with MoE models: if memory is my constraint, it doesn't matter that the (much larger RAM required) MoE version is faster or has better evals.

I'm waiting for someone to ship, in anger, the 1-parameter model where, according to PyTorch, the model has a single parameter of size 4 GB.
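For what it's worth, that degenerate model is a few lines of PyTorch (a deliberately silly sketch; note it really does allocate 4 GiB):

```python
import torch
import torch.nn as nn

class OneParamModel(nn.Module):
    """One parameter tensor by count, 4 GiB by weight:
    2**30 FP32 elements * 4 bytes = 4 GiB."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(2 ** 30))

model = OneParamModel()
print(len(list(model.parameters())))               # 1
print(sum(p.numel() for p in model.parameters()))  # 1073741824
```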

usernametaken29

> δ-mem compresses past information into a fixed-size state matrix updated by delta-rule learning

This doesn't solve the capacity problem of memory. You can cram more into one context window, but you still need to associate it with input queries. That's very hard because slight variations in input create hugely different activations, so it doesn't really improve caching. This paper might do a thing or two toward approximating the compression limit of a context window, but there's a fundamental limit on how much information can go into it. What you really need is contextual search: different events and objects with the same abstractions and semantics should lead to the same response, so you can cache effectively. On that front the paper does little to improve "memory" in a meaningful way.
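For reference, the delta-rule update the quote refers to looks roughly like this (a minimal sketch; the state shape, the write strength beta, and the read convention are my assumptions, not the paper's):

```python
import torch

def delta_rule_update(S, k, v, beta=1.0):
    """One delta-rule write to a fixed-size state matrix S (d_v x d_k):
    read the current prediction for key k, then correct S by the error."""
    pred = S @ k                                # what S currently returns for k
    return S + beta * torch.outer(v - pred, k)  # error-correcting outer-product write

d_k, d_v = 64, 64
S = torch.zeros(d_v, d_k)                   # fixed-size memory state
k = torch.randn(d_k); k = k / k.norm()      # unit-norm key
v = torch.randn(d_v)
S = delta_rule_update(S, k, v)
print(torch.allclose(S @ k, v, atol=1e-5))  # True: reading with k recovers v
```

The capacity objection above is visible here: S holds only d_v * d_k numbers, so once you write more (near-)independent key/value pairs than the state can represent, old associations get overwritten.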

jmward01

The future is a fixed-size state with a massive token history that the model can look back at like reading a journal. Reframing the model this way opens up a new kind of agent: one with essentially unlimited context, that packs perfectly on a GPU, can be stored and retrieved fairly effortlessly, and can essentially run forever. Fixed size means Θ(1) cost per token. A model that can look around also means essentially unlimited memory can be bolted on, with the model learning to look around memory the way it looks around its journal of past tokens. Guided windows of attention can do most of this; some other tricks can do the rest.
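A rough sketch of what "guided windows of attention over a journal" might look like (everything here, from the mean-pooled window scoring to the shapes, is my assumption rather than an established design):

```python
import torch

def retrieve_windows(journal_emb, query, window=128, top_k=2):
    """Score fixed-size windows of the token journal against the current
    query and return the best ones -- a crude stand-in for guided
    attention over an append-only history.

    journal_emb: (n_tokens, d) embeddings of past tokens (assumed)."""
    starts = list(range(0, max(journal_emb.shape[0] - window + 1, 1), window))
    # one mean-pooled embedding per window, dotted with the query
    scores = torch.stack([journal_emb[s:s + window].mean(dim=0) @ query
                          for s in starts])
    best = scores.topk(min(top_k, len(starts))).indices.tolist()
    return [journal_emb[starts[i]:starts[i] + window] for i in best]

# toy usage: 1,000 past tokens, 32-dim embeddings
journal = torch.randn(1000, 32)
query = torch.randn(32)
windows = retrieve_windows(journal, query)
print([w.shape for w in windows])  # two (128, 32) windows
```

The fixed-size state covers the recent past; the journal lookup is how arbitrarily long history gets bolted on.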

maxignol

Is there some kind of memory enabling, for instance, an agent to remember the guidelines of a repo without having to feed it 4 markdown files at the beginning of each session and spend the corresponding tokens each time?

3form

Interesting points:

- the fixed size of the memory seems like a good idea for overcoming the current limitations

- skimming through the thing, I can't find any mention of the cost?

- I would need more time to read it in depth to see if this is legitimate and not just a fancy form of overfitting or training on test data

in-silico

They basically just added DeltaNet hypernetworks to existing LLMs.

Nothing super novel or groundbreaking, but a moderately interesting read.

raverbashing

Interesting that the headline shows Δ-Mem while the paper uses δ-mem.

Is there a lowercase-to-uppercase conversion going on here?

DeathArrow

I see lots of techniques proposed to give LLMs the capacity to recall things. I've even seen a lot of memory plugins for AI coding agents and tried some myself.

What I want to see is something that was tested and proved in practice to be genuinely useful, especially for coding agents.

ktallett

The obvious energy-saving step would be to utilise previous searches by others. Many of the tasks people do are rather similar; it is such a waste of energy to start again each time.

(Obviously ignoring the huge energy saver, which is to observe if you even need to bother doing the task at all.)

semiquaver

Hmm, this is a case where HN's title mangling changed the meaning of the title. The lowercase delta (δ) is used intentionally. I don't think HN should automatically modify the casing of non-ASCII characters.

cubefox

Which papers get voted up on Hacker News is usually uncorrelated with their actual importance. It's basically a lottery. There are regularly more interesting papers going semi-viral on Twitter.
