I ran these in LM Studio and got unrecognizable pelicans out of the 2B and 4B models and an outstanding pelican out of the 26b-a4b model - I think the best I've seen from a model that runs on my laptop.
https://simonwillison.net/2026/Apr/2/gemma-4/
The gemma-4-31b model is completely broken for me - it just spits out "---\n" no matter what prompt I feed it. I got a pelican out of it via the AI Studio API hosted model instead.
popinman322
Does anyone know whether we'll be receiving transcoders for this batch of models? We got them for Gemma 3, but maybe that was a one-off.
canyon289
Hi all!
I work on the Gemma team, one of many teams involved, as this was a bigger effort given it's a mainline release. Happy to answer whatever questions I can
Featuring the ELO score as the main benchmark in the chart is very misleading. The big dense Gemma 4 model does not seem to reach the Qwen 3.5 27B dense model in most benchmarks, which is what actually matters. The small 2B / 4B models are interesting and may potentially be better ASR models than specialized ones (not just on performance, but because they will be easy to serve via llama.cpp / MLX and front-ends). Also interesting for "fast" OCR, given they are vision models as well. But other than that, the release is a bit disappointing.
NitpickLawyer
Best thing is that this is Apache 2.0 (edit: and they have base models available. Gemma3 was good for finetuning)
The sizes are E2B and E4B (following gemma3n arch, with focus on mobile) and 26BA4 MoE and 31B dense. The mobile ones have audio in (so I can see some local privacy focused translation apps) and the 31B seems to be strong in agentic stuff. 26BA4 stands somewhere in between, similar VRAM footprint, but much faster inference.
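The tradeoff described above can be put in rough numbers. This is a back-of-the-envelope sketch (the ~2 FLOPs-per-active-parameter-per-token rule of thumb is an assumption, not a figure from the release): VRAM tracks total parameters, while per-token compute tracks active parameters.

```python
# VRAM scales with TOTAL parameters; per-token compute with ACTIVE parameters.
moe_total, moe_active = 26e9, 4e9   # 26B-A4B MoE
dense_total = 31e9                  # 31B dense

# Rule of thumb: ~2 FLOPs per active parameter per generated token.
flops_moe = 2 * moe_active
flops_dense = 2 * dense_total
print(f"dense/MoE per-token compute: {flops_dense / flops_moe:.1f}x")  # ~7.8x
```

That is why the MoE can have a similar memory footprint to the dense model while decoding several times faster.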
originalvichy
The wait is finally over. One or two iterations, and I’ll be happy to say that language models are more than fulfilling my most common needs when self-hosting. Thanks to the Gemma team!
Analog24
So the "E2B" and "E4B" models are actually 5B and 8B parameters. Are we really going to start referring to the "effective" parameter count of dense models by not including the embeddings?
These models are impressive, but this is incredibly misleading. You need to load the embeddings into memory along with the rest of the model, so it makes no sense to exclude them from the parameter count. This is why it actually takes 5GB of RAM to run the "2B" model with 4-bit quantization according to Unsloth (when I first saw that, I knew something was up).
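A quick back-of-the-envelope check of why the full parameter count matters for memory, using the figures claimed in this comment (raw weight storage only; a runtime additionally needs KV cache and framework overhead, which is how ~2.5GB of weights becomes ~5GB of RAM in practice):

```python
def weights_gb(params, bits):
    """Raw weight storage in GB for a given parameter count and quantization."""
    return params * bits / 8 / 1e9

# Marketed "E2B" size vs. the ~5B actual count including embeddings (per the comment).
print(weights_gb(2e9, 4))  # 1.0 GB if you took "2B" at face value
print(weights_gb(5e9, 4))  # 2.5 GB of raw weights, before KV cache and runtime overhead
```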
swalsh
I gave the same prompt (a small rust project that's not easy, but not overly sophisticated) to both Gemma-4 26b and Qwen 3.5 27b via OpenCode. Qwen 3.5 ran for a bit over an hour before I killed it, Gemma 4 ran for about 20 minutes before it gave up. Lots of failed tool calls.
I asked codex to write a summary about both code bases.
"Dev 1" Qwen 3.5
"Dev 2" Gemma 4
Dev 1 is the stronger engineer overall. They showed better architectural judgment, stronger completeness, and better maintainability instincts. The weakness is execution rigor: they built more, but didn’t verify enough, so important parts don’t actually hold up cleanly.
Dev 2 looks more like an early-stage prototyper. The strength is speed to a rough first pass, but the implementation is much less complete, less polished, and less dependable. The main weakness is lack of finish and technical rigor.
If I were choosing between them as developers, I’d take Dev 1 without much hesitation.
Looking at the code myself, I'd agree with codex.
minimaxir
The benchmark comparisons to Gemma 3 27B on Hugging Face are interesting: The Gemma 4 E4B variant (https://huggingface.co/google/gemma-4-E4B-it) beats the old 27B in every benchmark at a fraction of parameters.
The E2B/E4B models also support voice input, which is rare.
mchusma
For those curious, on OpenRouter this is $0.14 input and $0.40 output, or ballpark half of Gemini Flash Lite 3.1 (Google's cheapest current-gen closed model)
karimf
I'm curious about the multimodal capabilities of the E2B and E4B and how fast they are.
In ChatGPT right now, you can have an audio and video feed for the AI, and the AI can respond in real time.
Now I wonder if the E2B or the E4B is capable enough for this and fast enough to be run on an iPhone. Basically replicating that experience, but all the computations (STT, LLM, and TTS) are done locally on the phone.
I just made this [0] last week so I know you can run a real-time voice conversation with an AI on an iPhone, but it'd be a totally different experience if it can also process a live camera feed.
[0] https://github.com/fikrikarim/volocal
Can't wait for gemma4-31b-it-claude-opus-4-6-distilled-q4-k-m on huggingface tomorrow
Deegy
So what's the business strategy here?
Google is the only USA based frontier lab releasing open models. I know they aren't doing it out of the goodness of their hearts.
ceroxylon
Even with search grounding, it scored a 2.5/5 on a basic botanical benchmark. It would take much longer for the average human to do a similar write-up, but they would likely do better than 50% hallucination if they had access to a search engine.
stevenhubertron
Still pretty unusable on a Raspberry Pi 5 (16GB) with the E4B model, despite the claim that it's built for it.
Prompt: what's a great chicken breast recipe for dinner tonight?
VadimPR
Gemma 3 E4B runs very quickly on my Samsung S26, so I am looking forward to trying Gemma 4! It is fantastic to have local, offline alternatives to frontier models.
hikarudo
Also check out DeepMind's "The Gemma 4 Good Hackathon" on Kaggle:
https://www.kaggle.com/competitions/gemma-4-good-hackathon
Really looking forward to testing and benchmarking this on my spam filtering benchmark. gemma-3-27b was a really strong model, surpassed later by gpt-oss:20b (which was also much faster). qwen models always had more variance.
bertili
The timing is interesting as Apple supposedly will distill google models in the upcoming Siri update [1]. So maybe Gemma is a lower bound on what we can expect baked into iPhones.
[1] https://news.ycombinator.com/item?id=47520438
What's a realistic way to run this locally or on a single expensive remote dev machine (in a VM, not through API calls)?
kuboble
I'm really looking forward to trying it out.
Gemma 3 was the first model that I liked enough to use a lot, just for daily questions, on my 32GB GPU.
bearjaws
The labels on the table read "Gemma 431B IT", which reads as a 431B-parameter model, not Gemma 4 31B...
sigbottle
There are so many heavy-hitting cracked people, like Daniel from Unsloth and Chris Lattner, coming out of the woodwork for this with their own custom stuff.
How does the ecosystem work? Have things converged and standardized enough that it's "easy" (lol, with tooling) to swap out parts such as weights to fit your needs? Do you need to autogen new custom kernels to fix said things? Super cool stuff.
stephbook
Kind of sad they didn't release stronger versions. $dayjob offers beefy NVIDIAs that are hungry for models and are stuck running Llama, gpt-oss, etc.
Seems like Google and Anthropic (which I consider the leaders) would rather keep their secret sauce to themselves; understandable.
The LiteRT-LM CLI (https://ai.google.dev/edge/litert-lm/cli) provides a way to try the Gemma 4 model.
# with uvx
uvx litert-lm run \
  --from-huggingface-repo=litert-community/gemma-4-E2B-it-litert-lm \
  gemma-4-E2B-it.litertlm
gunalx
We didn't get DeepSeek V4, but we got Gemma 4. Can't complain.
wg0
Google might not have the best coding models (yet), but they seem to have the most intelligent and knowledgeable models of all; Gemini 3.1 Pro especially is something.
One more thing about Google is that they have everything that others do not:
1. Huge data, audio, video, geospatial
2. Tons of expertise. "Attention Is All You Need" was born there.
3. Libraries that they wrote.
4. Their own data centers and cloud.
5. Most of all, their own hardware: TPUs that no one else has.
Therefore once the bubble bursts, the only player standing tall and above all would be Google.
0xbadcafebee
Gemma 3 models were pretty bad, so hopefully they got Gemma 4 to at least come close to the other major open-weight models
babelfish
Wow, 30B parameters as capable as a 1T parameter model?
darshanmakwana
This is awesome! I will try to use them locally with opencode and see if they are usable as a replacement for Claude Code for basic tasks
bibimsz
is it good? what's it good for?
virgildotcodes
Downloaded through LM Studio on an M1 Max 32GB, 26B A4B Q4_K_M
First message:
https://i.postimg.cc/yNZzmGMM/Screenshot-2026-04-03-at-12-44...
Not sure if I'm doing something wrong?
This more or less reflects my experience with most local models over the last couple years (although admittedly most aren't anywhere near this bad). People keep saying they're useful and yet I can't get them to be consistently useful at all.
james2doyle
Hmm just tried the google/gemma-4-31B-it through HuggingFace (inference provider seems to be Novita) and function/tool calling was not enabled...
flakiness
It's good they still have non-instruction-tuned models.
rvz
Open-weight models are once again marching on, slowly becoming a viable alternative to the larger ones.
We are at least one year, and at most two, from them surpassing closed models for everyday tasks that can be done locally to save spending on tokens.
DeepYogurt
Maybe a dumb question, but what does the "it" stand for in 31B-it vs 31B?
daveguy
FYI, it took me a while to find the meaning of the "-it" in some models. That's how Google designates "instruction tuned". Come on, Google. Define your acronyms.
matt765
I'll wait for the next iteration
einpoklum
D: Di Gi Charat does not like this nyo! Gemma is supposed to help Dejiko-chan nyo!
G: They offered a very compelling benefits package gemma!
heraldgeezer
Gemma vs Gemini?
I am only a casual AI chatbot user, I use what gives me the most and best free limits and versions.
Prompt:
> what is the Unix timestamp for this: 2026-04-01T16:00:00Z
Qwen 3.5-27b-dwq
> Thought for 8 minutes 34 seconds. 7074 tokens.
> The Unix timestamp for 2026-04-01T16:00:00Z is:
> 1775059200 (my comment: Wednesday, 1 April 2026 at 16:00:00)
Gemma-4-26b-a4b
> Thought for 33.81 seconds. 694 tokens.
> The Unix timestamp for 2026-04-01T16:00:00Z is:
> 1775060800 (my comment: Wednesday, 1 April 2026 at 16:26:40)
Gemma considered three options to solve this problem. From the thinking trace:
> Option A: Manual calculation (too error-prone).
> Option B: Use a programming language (Python/JavaScript).
> Option C: Knowledge of specific dates.
It then wrote a python script:
from datetime import datetime, timezone
date_str = "2026-04-01T16:00:00Z"
# Replace Z with +00:00 for ISO format parsing or just strip it
dt = datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
ts = int(dt.timestamp())
print(ts)
Then it verified the timestamp with a command:
date -u -d @1775060800
All of this to produce a wrong result. Running the python script it produced gives the correct result. Running the verification date command leads to a runtime error (hallucinated syntax). On the other hand Qwen went straight to Option A and kept overthinking the question, verifying every step 10 times, experienced a mental breakdown, then finally returned the right answer. I think Gemma would be clearly superior here if it used the tools it came up with rather than hallucinating using them.
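For what it's worth, the model's Option B does hold up when actually executed. A minimal check of the correct value, using `fromisoformat` instead of the script's `strptime` (same result either way):

```python
from datetime import datetime

# Parse the ISO-8601 instant with an explicit UTC offset, then convert
# to a Unix timestamp; this matches what the generated script computes.
dt = datetime.fromisoformat("2026-04-01T16:00:00+00:00")
ts = int(dt.timestamp())
print(ts)  # 1775059200 -- Qwen's answer; Gemma's 1775060800 is 1600 seconds off
```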
Thinking / reasoning + multimodal + tool calling.
We made some quants at https://huggingface.co/collections/unsloth/gemma-4 for folks to run them - they work really well!
Guide for those interested: https://unsloth.ai/docs/models/gemma-4
Also note to use temperature = 1.0, top_p = 0.95, top_k = 64 and the EOS is "<turn|>". "<|channel>thought\n" is also used for the thinking trace!
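For anyone unsure what those sampling settings actually control, here is a toy, library-agnostic sketch (not Unsloth's or llama.cpp's real implementation) of how temperature, top-k, and top-p jointly restrict the next-token distribution:

```python
import math

def sample_filter(logits, temperature=1.0, top_k=64, top_p=0.95):
    """Return the renormalized distribution that temperature + top-k +
    top-p (nucleus) sampling would actually draw from."""
    # Temperature rescales logits; values > 1 flatten, < 1 sharpen.
    scaled = [x / temperature for x in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    # top-p: within those, keep the smallest prefix reaching mass >= p.
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

# Toy 4-token vocabulary: token 0 dominates, token 3 is usually filtered out.
dist = sample_filter([2.0, 1.0, 0.1, -1.0], temperature=1.0, top_k=3, top_p=0.95)
print(dist)
```

With the recommended temperature = 1.0, the logits pass through unchanged and top-k/top-p do the pruning.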
Comparison of Gemma 4 vs. Qwen 3.5 benchmarks, consolidated from their respective Hugging Face model cards:
If you want the fastest open source implementation on Blackwell and AMD MI355, check out Modular's MAX nightly. You can pip install it super fast, check it out here: https://www.modular.com/blog/day-zero-launch-fastest-perform...
-Chris Lattner (yes, affiliated with Modular :-)
Qwen: Hold my beer
https://news.ycombinator.com/item?id=47615002
Impressive
curious how this scales with larger datasets. anyone tried it in production?