Native Sparse Attention

132 points | 30 comments | 1 day ago
noosphr

DeepSeek papers are a must-read for anyone who wants to understand how to make LLMs operate at hyperscale. All Western labs hide their best results, or at most release summaries that are about as meaningful as the answers Cleo used to give on Stack Exchange: https://math.stackexchange.com/questions/562694/integral-int...

I have a suspicion, given how quiet all the major players got in the two weeks after DeepSeek R1 was released, that they were reading and implementing everything in the accompanying papers as fast as humanly possible.

CalmStorm

For the first time, NSA introduces native sparse attention into the full training process, achieving up to an 11× inference speedup while maintaining model performance.
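
For anyone curious what "sparse" means mechanically: each query attends to only a small, query-dependent subset of the keys instead of all of them. Here is a minimal sketch of block-wise top-k selection (the toy shapes, block size, and budget are my own assumptions; the actual NSA design also has compressed-token and sliding-window branches with a learned gate, and is implemented as hardware-aligned kernels):

```python
# Minimal sketch of block-sparse attention with top-k block selection,
# loosely in the spirit of NSA's "selected" branch. Shapes, block size,
# and top_k are illustrative assumptions, not values from the paper.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """q: (d,), k and v: (n, d). Attend only to the top_k key blocks
    whose mean key scores highest against the query."""
    n, d = k.shape
    n_blocks = n // block_size
    kb = k[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    vb = v[: n_blocks * block_size].reshape(n_blocks, block_size, d)

    # Cheap block-level scores: query against each block's mean key.
    block_scores = kb.mean(axis=1) @ q          # (n_blocks,)
    chosen = np.argsort(block_scores)[-top_k:]  # indices of kept blocks

    k_sel = kb[chosen].reshape(-1, d)           # (top_k * block_size, d)
    v_sel = vb[chosen].reshape(-1, d)

    # Standard softmax attention, but only over the selected tokens.
    attn = softmax(k_sel @ q / np.sqrt(d))
    return attn @ v_sel

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
k = rng.standard_normal((32, 8))
v = rng.standard_normal((32, 8))
print(block_sparse_attention(q, k, v).shape)  # (8,)
```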

sabakhoj

> Despite being sparse, NSA surpasses Full Attention baseline on average across general benchmarks, long-context tasks, and reasoning evaluation.

Isn't it very notable that the latency improvement didn't come with a performance loss? I'm not super familiar with all the technical aspects, but that seems like it should be one of the main focuses of the paper.
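
The speedup side is easier to see with some back-of-the-envelope arithmetic: per query, attention cost scales with the number of keys touched, so a fixed sparse budget turns a cost that grows with context length into a roughly constant one. The numbers below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope arithmetic (illustrative numbers, not from the paper):
# per-query attention cost scales with how many keys each query touches.
seq_len = 64_000        # long-context decoding length (assumption)
attended = 4_096        # tokens actually attended under a sparse budget (assumption)

full_cost = seq_len     # full attention: every query scores every key
sparse_cost = attended  # sparse attention: fixed token budget per query

print(f"theoretical speedup ~ {full_cost / sparse_cost:.1f}x")  # ~15.6x
# Real-world gains are smaller and depend on hardware-aligned kernels,
# which is a core point of the NSA paper.
```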

tony_borlini

DeepSeek and the Sparse Attention Revolution: How a Research Paper is Redefining AI Efficiency

https://deep.liveblog365.com/en/index-en.html?post=50

visarga

I am always skeptical of RNN approaches, but this paper is just sparsifying the input; it is not compressing arbitrary-length input into a fixed-size memory. I am hopeful this could be a big break: an 11× inference speedup with no degradation, from a purely algorithmic improvement. Is it really that good? It seems almost too good to be true. Adoption over the next six months will tell us the truth.
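
To make that distinction concrete, here is a toy sketch (the shapes and update rule are my own assumptions, not from the paper): an RNN-style model folds the whole history into a fixed-size state, whereas sparse attention keeps the full KV cache and each query merely reads a small subset of it, so nothing is discarded up front:

```python
# Toy contrast between fixed-memory recurrence and sparse selection.
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 1000
history = rng.standard_normal((n, d))

# Recurrent-style: lossy, fixed-size summary regardless of n.
state = np.zeros(d)
for x in history:
    state = np.tanh(0.9 * state + 0.1 * x)  # toy update rule (assumption)
print(state.shape)                # (8,) -- constant memory, history compressed away

# Sparse-attention-style: everything is retained; each query only *reads*
# a small, query-dependent subset, so no information is discarded up front.
q = rng.standard_normal(d)
scores = history @ q
top_idx = np.argsort(scores)[-64:]  # sparse read of 64 tokens (assumption)
print(history[top_idx].shape)       # (64, 8) -- memory still grows with n
```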

pyuser583

I'd say the award for best title is a tie between: "Dehumanizing Machines: Mitigating Anthropomorphic Behaviors in Text Generation Systems"; "Finding Needles in Images: Can Multi-modal LLMs Locate Fine Details?"; and "Steering off Course: Reliability Challenges in Steering Language Models."

israrkhan

Well deserved

gnabgib

Title: Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

The awards page for ACL seems to disagree with this editorialized title: https://2025.aclweb.org/program/awards/

ninjin

Link to the published paper rather than the preprint (update link?):

https://aclanthology.org/2025.acl-long.1126