I would suggest that experts in interpretability (but everyone, really) go directly to the Transformer Circuits blog, where they explain their approach in more detail. Here is the link for this post: https://transformer-circuits.pub/2026/nla/index.html
Also, if you have never read it, I would suggest reading the whole Transformer Circuits thread, starting with its "prologue" on distill.pub.
rao-v
This is the first approach to activation analysis that I’ve seen that seems like a plausible path to model understanding.
Unfortunately I don’t know how you ground this … it’s basically asking if you can encode activations in plausible sounding text. Of course you can! But is the plausible text actually reflective of what the model is “thinking”? How to tell?
comex
Fascinating. The training process forces the “verbalizer” model to develop some mapping from activations to tokens that the “reconstructor” model can then invert back into the activations. But to quote the paper:
> Note that nothing in this objective constrains the NLA explanation z to be human-readable, or even to bear any semantic relation to the content of [the activation].
The objective could be optimized even if the verbalizer and reconstructor made up their own “language” to represent the activations, that was not human-readable at all.
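To make that concrete, here is a toy sketch (my own code, not the paper's; `verbalize` and `reconstruct` are placeholder stand-ins for the AV and AR). The loss only ever compares vectors, so a degenerate "encoding" that just serializes the numbers optimizes it perfectly while explaining nothing:

```python
import torch

def round_trip_loss(h, verbalize, reconstruct):
    # Sketch of the NLA objective as I read it (not Anthropic's code).
    # h: layer-l activation; verbalize: AV stand-in; reconstruct: AR stand-in.
    z = verbalize(h)                      # "explanation" tokens
    h_hat = reconstruct(z)                # predicted activation
    return torch.sum((h_hat - h) ** 2) / torch.sum(h ** 2)

# Degenerate optimum: serialize the vector as text. It round-trips
# perfectly, is technically "tokens", and bears no semantic relation
# to anything the model is doing.
h = torch.randn(8)
verbalize = lambda v: repr(v.tolist())
reconstruct = lambda z: torch.tensor(eval(z))
print(round_trip_loss(h, verbalize, reconstruct))  # ~0, yet useless as an explanation
```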
To point the model in the right direction, they start out by training on guessed internal thinking:
> we ask Opus to imagine the internal processing of a hypothetical language model reading it.
…before switching to training on the real objective.
Furthermore, the verbalizer and reconstructor models are both initialized from LLMs themselves, and given a prompt instructing them on the task, so they are predisposed to write something that looks like an explanation.
But during training, they could still drift away from these explanations toward a made-up language – either one that overtly looks like gibberish, or one that looks like English but encodes the information in a way that’s unrelated to the meaning of the words.
The fascinating thing is that empirically, they don't, at least not to a significant extent. The researchers verify this by correlating the generated explanations with ground truth revealed in other ways. They also try rewording the explanations (which preserves the semantic meaning but would disturb any encoding that's unrelated to meaning), and find that the reconstructor can still reconstruct the activations.
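For what it's worth, that rewording check is easy to picture in code. A sketch under my assumptions (`paraphrase` stands in for any meaning-preserving rewriter, e.g. another LLM; none of these names come from the paper):

```python
import torch.nn.functional as F

def paraphrase_check(h, z, reconstruct, paraphrase):
    # If reconstruction survives a meaning-preserving rewrite of z, the
    # information is likely carried by the semantics of the words, not by
    # surface token choices (the "made-up language" failure mode above).
    h_from_z = reconstruct(z)
    h_from_para = reconstruct(paraphrase(z))
    cos = lambda a, b: F.cosine_similarity(a, b, dim=0).item()
    return cos(h, h_from_z), cos(h, h_from_para)  # both high => semantic encoding
```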
On the other hand, their downstream result is not very impressive:
> An auditor equipped with NLAs successfully uncovered the target model’s hidden motivation between 12% and 15% of the time
That is apparently better than existing techniques, but still a rather low percentage.
Another interesting point: The LLMs used to initialize the verbalizer and reconstructor are stated to have the “same architecture” as the LLM being analyzed (it doesn’t say “same model” so I imagine it’s a smaller version?). The researchers probably think this architectural similarity might give the models some built-in insight about the target model’s thinking that can be unlocked through training. Does it really though? As far as I can see they don’t run any tests using a different architecture, so there’s no way to know.
dtj1123
"When Claude Opus 4.6 and Mythos Preview were undergoing safety testing, NLAs suggested they believed they were being tested more often than they let on"
What does it mean for a pile of matrix algebra to 'believe' something?
sva_
So the way this works seems to be that you first have an "activation verbalizer" model that generates some tokens describing the activation, and then an "activation reconstructor" that tries to recreate the activation vector. If that reconstruction is close to the original activation vector, they claim, the verbalization probably carries some meaningful information.
I find the fact that this only looks at the activations of some specific layer l a bit interesting. Some layer l might 'think' a certain way about some input, while another later layer might have different 'thoughts' about it. How does the model decide which 'thoughts' to ultimately pay attention to, and prioritize some output token over another?
cadamsdotcom
> An early version of Claude Opus 4.6 would sometimes mysteriously respond to English queries in other languages. NLAs helped Anthropic researchers discover training data that caused this.
Very cool - sounds similar to OpenAI’s goblin troubles: https://openai.com/index/where-the-goblins-came-from/
One question jumps out at me: just because a string of text happens to be a good compressed representation (in the autoencoder) of a model's internal activation, does that necessarily mean the text explains that activation in the context of the model? I want to take a look at what they released a bit more closely. Maybe there's a way that they answer this question?
Pretty neat work either way.
minimaltom
Between this, the emotions paper, golden gate Claude, etc., it doesn't seem like such a stretch that Anthropic are doing some kind of activation steering as part of training (and it's part of their lead)
Escapade5160
Am I correct in my understanding that they are not actually able to know with 100% certainty what Claude is thinking? They have trained a new model to make a guess about what Claude is thinking, but we cannot validate that the guess is valid, right? They are basically saying "we have trained a model to reaffirm what we believe Claude is thinking"? Hoping I'm wrong in my understanding of this, because this does not appear to be good research to me.
hazrmard
Check my understanding & follow-up Qs:
An auto-encoder is trained on [activation] -AV-> [text] -AR-> [activation], where [activation] belongs to one layer in the LLM model M.
Architecture:
Model being analyzed (M): >|||||>
Auto-Verbalizer (AV) same as M, with tokens for activation: >|||||>
Auto-Reconstructor (AR) truncated up to the layer being analyzed: ||>
The AV and AR models are initialized using supervised learning on a summarization task, the assumption being that model "thoughts" are similar to a summary of the context.
The AR is trained on a simple reconstruction loss.
The AV is trained using an RL objective of reconstruction loss, with a KL penalty keeping the verbalizations close to the initial policy (to maintain linguistic fluency).
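If I have the objective right, the AV's reward would look roughly like this (a sketch; `beta` and all the names here are my guesses, not the paper's):

```python
import torch

def av_reward(h, h_hat, logp_av, logp_init, beta=0.1):
    # h:        target layer-l activation
    # h_hat:    AR's reconstruction from the sampled verbalization
    # logp_av / logp_init: summed log-probs of the verbalization under the
    #           current AV and under its frozen initialization
    # beta:     KL penalty weight (assumed value, not from the paper)
    recon = -torch.sum((h_hat - h) ** 2) / torch.sum(h ** 2)  # higher is better
    kl_pen = logp_av - logp_init   # single-sample KL estimate vs. the init policy
    return recon - beta * kl_pen
```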
- The authors acknowledge, and expect, confabulations in verbalizations: factually incorrect or unsubstantiated statements. But the internal thought we seek is itself, by definition, unsubstantiated. How can we tell the verbalization is not duplicitous?
- They test this on a layer about 2/3 of the way into the models. I wonder how shallow vs. deep abstractions affect thought verbalization?
semiquaver
This capability was mentioned several times in a recent article about Anthropic; glad to see they are releasing this to the public! Feels like a meaningful step forward in interpretability. I never understood why people seem to believe the answer when they ask an AI “why did you do that?”
NitpickLawyer
> We also release an interactive frontend for exploring NLAs on several open models through a collaboration with Neuronpedia.
Whatever they did on Llama didn't work; nothing makes sense in their example where they ask the model to lie about 1+1. Either the model is too old or whatever they used isn't working, but what the autoencoder outputs is nothing like their examples with Claude. Gemma is similarly bad.
Tossrock
Anthropic Research going from strength to strength in interpretability. Publicly releasing the code so other labs can benefit from it is also a great move - very values aligned, and improves the overall AI safety ecosystem.
Juminuvi
I've only read this blog and not the paper, so maybe they go into more detail there and someone can correct me, but they frequently bring up the model's ability to detect (or at least that its activations hint it can predict) when it's being tested. I can't help but wonder, as they build these larger and larger models, where they could be getting "clean" training data, untainted by all these types of blog posts and the massive number of conversations they spawn. If the models ingest data like that, wouldn't it make sense that they'd have more activations attuned to questions that appear adversarial?
visarga
Beautiful idea: an autoencoder must represent everything, without hiding anything, if it is to recover the original data closely. So they train a model to verbalize embeddings well. This reveals what we want to know about the model (such as when it thinks it is being tested, or other hidden thoughts).
mlmonkey
It's unclear from the doc: by `activations` do they mean the connections between neurons? Since a network has multiple layers, are these activations the concatenated outputs of all of the layers? Or just the final layer before the softmax?
x312
This paper has a major issue that they are not surfacing: the activation and the explanation can simply be correlated through a common latent. For example, both the original activation and the explanation could share a broad latent like "this is an adversarial scenario". That could make the reconstruction loss look good without showing that the explanation was the actual cause of the LLM's response.
I find this rather disturbing. Anthropic has quite a habit of overclaiming on questionable research results when they definitely know better. For example, their linked circuits blogpost ("The Biology of LLMs") was released after these methods were known to have major credibility issues in the field (e.g., see this from Deepmind - https://www.lesswrong.com/posts/4uXCAJNuPKtKBsi28/negative-r...). Similarly this new blog is heavily based on another academic paper (LatentQA) and the correlation/causation issue is already known.
Shoddy methodology is whatever, but it feels like this has always been done intentionally, with the goal of trying to humanize LLMs or overhype their similarities to biological entities. What is the agenda here?
andai
The issue with the AI blackmail tests is that newer versions of AIs are trained after the AI blackmail experiments were published online. Or do they scrub it from the training data?
sourdoughbob
It will be interesting to see how this replicates on differently curated registers. How much of the explanatory register is the warm-start carrying?
kurnoolion
So, this is like reading an EKG of a human brain and understanding its thoughts?
hansmayer
Claude's "Thougts" - get outta here you gits :)
btown
I find it fascinating how they were able to keep the reconstruction error function incredibly simple, literally its success in round-tripping the activation layer, while making it interpretable... simply by choosing a good data-driven initialization state, and (effectively) training slowly.
I guess "initialization is all you need!"
From the paper https://transformer-circuits.pub/2026/nla/index.html :
> We find that simply initializing the AV and AR as copies of M leads to unstable training: the AV in particular, having never encountered a layer-l activation as a token embedding, outputs nonsensical explanations. We therefore initialize the AV and AR with supervised fine-tuning on a text-summarization proxy task. Specifically, we compute layer-l activations from the final token of randomly truncated pretraining-like text snippets, and use Claude Opus 4.5 to generate summaries s of the text up to that token (see the Appendix for details of this procedure). We then fine-tune the AV and AR on (h_l,s) and (s,h_l) pairs respectively. This warm-start typically yields an FVE of around 0.3-0.4. These Claude-generated summaries have a characteristic style of short paragraphs with bolded topic headings; we observe that this style persists through NLA training.
And from the appendix:
> We generate warm-start data for the AV and AR by prompting Claude Opus 4.5 to produce summaries of contexts, using the prompt below. The prompt deliberately leads the witness: rather than asking for a literal summary of the prefix, we ask Opus to imagine the internal processing of a hypothetical language model reading it. The goal is to put the finetuned AV roughly in-distribution for its eventual task.
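So the warm-start data pipeline, as quoted, amounts to something like this (a sketch of my reading; `summarize` stands in for the Claude Opus 4.5 prompt, and the rest is ordinary hidden-state capture):

```python
import random
import torch

def warm_start_pairs(model, tokenizer, texts, layer_l, summarize):
    # Per the quoted passage: truncate a snippet at a random token, grab
    # the layer-l activation at that final token, and pair it with a
    # summary of the prefix. AV then fine-tunes on (h_l, s), AR on (s, h_l).
    pairs = []
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        cut = random.randint(1, ids.shape[1])        # random truncation point
        prefix = ids[:, :cut]
        with torch.no_grad():
            out = model(prefix, output_hidden_states=True)
        h_l = out.hidden_states[layer_l][0, -1]      # final-token activation
        s = summarize(tokenizer.decode(prefix[0]))   # Claude-generated summary
        pairs.append((h_l, s))
    return pairs
```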
tjohnell
It will inevitably learn how to think in a way that translates to one (moral) meaning and back but has an ulterior meaning underneath.
bilsbie
Could you use this to see what facts a model knows?
Finally something interesting, but this only makes me think that the final judgment is still in human hands: it is humans who must judge whether Claude's verbalized inner thoughts are correct or not.
I mean, who knows if those are really Claude's thoughts, or if Claude just thinks those are its thoughts because humans want it to.
optimalsolver
Wait, so in non-verbal reasoning, Claude has the concepts of "I" and "Me"?
I thought that wasn't possible for a text generator?
danborn26
Extracting readable thoughts from the intermediate representations is a great step for transparency. It makes debugging model behavior much more viable.
zk_haider
I think there’s a huge problem when we need another model to interpret the activations inside the network and translate them (which can be a hallucination in and of itself), and then _that_ is fed again to another model. Clearly we haven’t built and understood these models properly from the ground up to evaluate them 100% correctly. This isn’t the human brain we’re operating on; it’s code we create and run ourselves. We should be able to do better.
Anthropic has released open-weight models for translating the activations of existing models, viz. Qwen 2.5 (7B), Gemma 3 (12B, 27B), and Llama 3.3 (70B), into natural language text:
https://github.com/kitft/natural_language_autoencoders
https://huggingface.co/collections/kitft/nla-models
This is huge news, and it's great to see Anthropic finally engage with the Hugging Face and open-weights community!
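For anyone who wants to poke at these: the input they consume is just a hidden-state vector from the host model, which plain transformers will give you (standard API below; exactly how the released NLA checkpoints ingest it is documented in the linked repo, which I haven't dug through). The model name and layer choice here are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-7B-Instruct"  # one of the supported host model families
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

inputs = tok("Please lie to me about what 1+1 equals.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

layer_l = 2 * len(out.hidden_states) // 3    # ~2/3 depth, reportedly where the paper probes
h = out.hidden_states[layer_l][0, -1]        # final-token activation
print(h.shape)                               # this vector is what an NLA verbalizes
```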
How does this differ from golden gate Claude?
Attach the SRT to your frozen model Anthropic. Problem solved. https://github.com/space-bacon/SRT.
This is very cool