They will be, and that moment is not that far off. The progression is already in place: first, only large data centers could run performant LLMs; we are now firmly in "a bunch of servers with a couple of H100s each" territory, slowly moving toward "128 GB of VRAM on a MacBook Pro or a Strix Halo". Within the next year, the pattern of "expensive remote LLM for planning, local slow-but-faster-than-human LLM for execution" will become the norm for companies, slowly shifting to "using a local LLM for everything is good enough". And then we'll have the equilibrium we already have with the "classic cloud": you either self-host or pay for flexibility and speed. The open question is how much of the current compute capacity craze local hosting will give the kiss of death to, and what that means for the market.
0xbadcafebee
Here are some things you can do right now with local models on a consumer device:
- text-to-speech
- speech-to-text
- dictionary
- encyclopedia
- help troubleshooting errors
- generate common recipes and nutritional facts
- proofread emails, blog posts
- search a large trove of documents, find information, summarize it (RAG)
- manipulate your terminal/browser/etc
- analyze a picture or video
- generate a picture or video
- generate PDFs, documents, etc (code exec)
- simple programming
- financial analysis/planning
- math and science analysis
- find simple first aid/medical information
- "rubber ducking" but the duck talks back
A quarter of those don't need more than a gig of RAM, the rest benefit from more RAM. Technically you don't even need a GPU, it just makes it faster. I do half that stuff on my laptop with local models every day.
That said, it really doesn't need to be local. I like the idea that I can do all that stuff offline if I'm traveling, but I usually have cell service, and the total token cost is pretty cheap (like $2/month for all my non-coding AI use).
andychiare
> “AI everywhere” is not the goal. Useful software is the goal.
Great observation! Often the excitement of novelty makes us lose sight of the real goal
adamtaylor_13
Cool, well let me know when Opus 4.5 level performance is available locally, at speeds that serve everyday use, and 100% I'm right there with you.
Until then, I'm going to keep sending my JSON to the server farm in Virginia because it's the only place that can serve me a model that actually works for my uses.
gkcnlr
It seems like everybody is focused on "LLM"s, a.k.a. Large Language Models. One interesting addition to that is fine-tuned, small-parameter, distilled, context-dependent small language models that:
1- Do a particular task with great capability (due to their constrained, limited scope)
2- Do it in such a way that it integrates gracefully into your workflow without ever requiring you to know you are using an LM.
There is a difference between outsourcing your workflow to AI and actually utilizing it.
Check this: https://www.distillabs.ai/blog/we-benchmarked-12-small-langu...
People want local AI, but only if UX is good. Tooling/harness quality may matter as much as model quality.
I think the future will probably be a hybrid of:
1. local AI for simple, private, everyday tasks
2. online AI for very hard or long tasks
rmunn
For image generation, this has already happened. To what degree, I can't tell, as I don't do image generation much so I don't have numbers on Midjourney subscriptions or any other image-AI-as-a-service sites. But civitai.com has become a place where people share their models, based off of Stable Diffusion or other similar bases, with various fine-tunings to achieve desired results. You name it, you can find a model for it at Civitai, and people are doing some very creative things with them. (And also a lot of the obvious things, but it's the Internet, what did you expect?)
I haven't seen a text-based model sharing site spring up yet (perhaps they already have and I don't know about it yet). Civitai, being focused on image-generation, has the obvious advantage that it's easy to show off impressive results from the model on the front page of the website, and judging what someone's home-grown fine-tuned LLM will produce is a lot harder. But at some point I expect a Civitai equivalent site for text models, especially code-based ones, to become popular. That will seriously undercut Anthropic, OpenAI, et al, and will probably force them to find a price equilibrium.
Because once you're competing with "I spend $2,500 up front on a powerful video card, download an open-source model for free, and then I get pretty much everything I need for free" (additional power cost of running that video card isn't nothing, but probably not noticeable in your power bill compared to what you're already using)... then suddenly $200/month means your customers are thinking "after one year I would have been better off with the homegrown solution". The only way they'll continue to pay $200/month is if Claude/GPT/Gemini/whoever is truly head-and-shoulders above the "pay upfront once for hardware then use it for free afterwards" models available. And that's going to be doable, perhaps, but tough.
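A rough back-of-envelope on that break-even, as a minimal sketch: the GPU price and subscription are from the comment above, while the power draw, hours of use, and electricity price are illustrative assumptions.

```
# Rough break-even estimate: one-time GPU purchase vs. a $200/month subscription.
# The power figures below are illustrative assumptions, not measured values.
gpu_cost = 2500.0          # up-front video card cost (USD)
subscription = 200.0       # monthly cloud plan (USD)

watts = 300                # assumed average draw while generating
hours_per_day = 4          # assumed daily usage
price_per_kwh = 0.30       # assumed electricity price (USD)

monthly_power = watts / 1000 * hours_per_day * 30 * price_per_kwh
months_to_break_even = gpu_cost / (subscription - monthly_power)

print(f"~${monthly_power:.2f}/month in electricity")
print(f"break-even after ~{months_to_break_even:.1f} months")
# With these assumptions: ~$10.80/month in power, break-even after ~13 months.
```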
acidhousemcnab
We need better GUI and OS integrations with sandboxed local LLMs before this is thrust on everyone and rolled out as the default in commercial OSes. Here in Berlin, I was effectively surrounded and hounded out of a local meetup after a confrontation over the naive pushing of agentic AI with OS-level and network access, presented in the mode of mystical powers and artistic possibilities. After recent experiences, that comes off as string-pulling: manufacturing a threat or danger that then has to be observed and kept tabs on, per Goodhart's Law.
TheJCDenton
For the mainstream audience, the sentiment around local AI today is the same as the sentiment around open source a few decades ago. For a few products, some paid solutions were so much more advanced that open source was very often completely overlooked. Why bother? And the like. Then we got captive SaaS and other platforms, and now that attitude is obviously wrong for most of us.
The dependency we have on Anthropic and OpenAI for coding, for instance, is insane. Most accept it because they either don't care, or they just hope the Chinese labs will never stop releasing open weights. The business model of open weights is very new, includes some power play between countries and labs, and moves an absurd amount of money without any concrete oversight from most people.
It's a very dangerous gamble. Today incredible value is available to nearly everyone. But it may stop without any warning, for reasons outside our control.
Guillaume86
I think we should separate the private AI discussion from the local AI discussion.
The pragmatic choice to run big LLMs is one/several big servers online, but that doesn't mean private companies should be the only ones to run them.
A self-hosted inference solution that offers good tenant isolation guarantees (ideally zero trust) and is easy enough to deploy and maintain (think Plex for AI) would be my choice for privacy. Now, to be honest, I have done zero research about this and have zero idea how feasible that is; maybe it already exists and there are some Discord servers I should join?
Edit: I don't need to mention it here but what's incredible is that open models are in the ballpark of the best commercial models so supposedly, the hardest part by far is already solved.
supermdguy
Interesting to see this after the recent post about Chrome's on-device model using up 4 GB of storage, which frustrated a lot of people [1].
I agree local models are great, and it's cool that Apple has models built in now. But I feel like it basically has to be an OS-level feature or users are going to get upset. I'd certainly rather have a small utility call out to OpenAI than download its own model.
[1]: https://news.ycombinator.com/item?id=48019219
It's not going to happen with LLMs unless RAM + storage get several orders of magnitude cheaper, like, yesterday.
Informatics isn't magic; you'll never be able to compress """knowledge""" into a small model in a way equivalent to the 1.5 TB model.
jillesvangurp
I get the sentiment for self hosting. But there are a few counter arguments:
- Self hosting is expensive. It involves expensive machines with GPUs that cost hundreds per month if you use cloud based ones. You might need multiple of those. And you need people to mind those machines and they are even more expensive per month.
- If you run stuff on your laptop, it consumes a lot of resources and energy. I have qwen running on my laptop. Even minimal usage turns my laptop into a radiator. Nice as a demo, but I can't have it this hot all the time. It would run out of battery, and it's probably not great for the longevity of components in the laptop.
- Models are evolving quickly and the self-hosted smaller ones aren't as good when it comes to things like tool usage, reasoning, etc. Being able to switch to the latest model is valuable.
- It's easier to get your use case working with one of the top models than with one of the smaller self hosted ones.
- If you get the wrong hardware, it might not be able to run the latest models very soon.
- Self hosting models is mostly a cost optimization. It only becomes relevant if you hit a certain scale.
- You have alternatives in the form of hosted models via a wide range of service providers. Some of those are EU based and offer all the things you'd be looking for if you are offering your services there. Including legal requirements.
- Reinventing what these companies do in house is technically challenging and possibly more expensive than self hosting models because now you need a lot of engineering capacity dedicated to that. And legal. And all the rest.
If, like most companies/people, you are at the experimenting stage, the cheapest and fastest is just getting an API key from an API provider of your choice. You can take it from there if your experiment actually works. And then it's mostly about optimizing cost. If your API usage goes to the thousands per month or worse, it becomes a cost/quality trade off.
mgrund
I really really want to like local AI, but I highly doubt it will see wide adoption for a long time.
The additional up-front cost for hardware designed to run an LLM in addition to normal workload is unlikely to be accepted by most consumers.
The scale will be very constrained (like Apple's on-device models, which are small, heavily quantized, and have a small 4K token context window). It's also terrible for battery life.
AI as it is implemented today is simply just computationally expensive and unless you put in dedicated hardware (like the ANE) for only this purpose - a large cost driver - I don’t really see it getting large scale adoption.
Companies will probably need a server-backed solution as fallback if they want reasonable user experience, so why even invest in diverse hardware support.
teiferer
Every reply here forgets/overlooks the main reason why this is not going to happen: the astronomical AI data center investments currently underway. Those places are not just for training. They are for inference too, and inference is how all those investments are expected to eventually pay off. The whole AI sector of our industry depends on running models in these places.
ninjahawk1
In my opinion, this is similar to the early days of the internet and computers. Few households or individuals had access to state-of-the-art computers; it was primarily researchers or more well-off individuals. Most random people didn't really know what it was and certainly didn't use one.
Today, AI is very expensive and not readily accessible to most people without paying a good amount.
The early internet turned into a world where you can just get a free phone from phone companies so long as you take their extras. Then you get a ton of subscriptions and add-ons, but you don't have to spend money; you could just use YouTube with ads, etc.
Local AI would similarly shift this dynamic to paying for access to plug-in’s and tools for your local AI to be able to use. Like how the subscription model works right now.
With local model advancements, such as specifically Qwen 3.6 35B A3B, this future is becoming more likely by the year IMO.
robot-wrangler
Entrenched interests are going to do everything to stop local, but there's at least a few technical reasons to believe small and specialized models could be the norm eventually. If that does happen, local will follow.
TFA is focused on whether big models are necessary for what users want. There's some evidence they may never actually be reliable enough unless a) mechanistic interpretability matures far enough or b) our multi-agent systems all become multi-model.
For (a), advancement in MI might fix problems with big models, but would also mean we can maybe get unified representations, and just slice and dice the useful stuff out of huge models, getting only what we need without the junk. Ability to isolate problems won't really come without bringing the ability to isolate functional subsystems. Only want logic? Only vision? Just cut it out of the big monster and enjoy reduced costs and surface area for problems.
For (b), just look at stuff like the evil vector, or the category of hallucinations specific to tool-use. Without a complete solution for helpful/honest/harmless alignment, it seems likely that creativity and rigor (and many other things) are fundamentally at odds. If you start to need many models for everything anyway, why do we need the huge expensive do-everything ones? So specialization also becomes a pressure to shrink everything towards minimal reliable experts
wrxd
The example in the post confirms my theory that for local models to succeed they need to be "good enough"; they don't need to be big enough to compete with frontier models.
They need to be able to do a small task well and they need to be able to run reasonably on consumer-class devices. Even better if they can run on mobile phones.
In my experiments with local LLMs I noticed that while increasing the size of the model is nice, the real thing that turns a barely useful model into something useful is the ability to use tools.
Giving my models the ability to search the web and fetch web pages did way more to solve hallucinations than getting a bigger model. And it doesn't have a training cutoff.
Sure, the bigger model is probably better at using tools but I often find the smaller models to be good enough.
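As a concrete illustration of the kind of setup described here, a minimal sketch of giving a local model a web-fetch tool through an OpenAI-compatible endpoint. The localhost URL, model name, and tool wiring are assumptions for illustration, not the commenter's actual setup.

```
# Minimal tool-use loop against a local OpenAI-compatible server
# (e.g. llama.cpp's server or Ollama). URL and model name are assumptions.
import json, urllib.request
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")

tools = [{
    "type": "function",
    "function": {
        "name": "fetch_url",
        "description": "Fetch the raw text of a web page",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

def fetch_url(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as r:
        return r.read(20_000).decode("utf-8", errors="replace")

messages = [{"role": "user", "content": "Summarize https://example.com"}]
resp = client.chat.completions.create(model="qwen2.5:7b", messages=messages, tools=tools)
msg = resp.choices[0].message

# If the model asked for the tool, run it and feed the result back.
if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": fetch_url(args["url"])})
    resp = client.chat.completions.create(model="qwen2.5:7b", messages=messages)

print(resp.choices[0].message.content)
```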
gregjw
Is there a place to learn more about Local AI specifically and maybe even more specifically about models for bespoke purposes or curating them yourself for more specific uses? Feels like there's a lot of fat you can trim off because you don't need generic use, but I don't understand where to even begin there.
revolvingthrow
A local Answer Machine is the dream, especially when the internet is decaying and generally on its last legs, but the hardware requirements seem like a huge mountain to climb. Things are progressing tremendously - deepseek v4 flash is very good for what it is - but even that goes beyond any reasonable local setup, which imo is 128 GB ram + 16 GB vram. 4 ram slots on a consumer board craters ram speed, 256 gb macs are too expensive, and even then the inference is ungodly slow.
On the other hand… the v4 flash model is actual magic compared to what was available 2 years ago. If the rate of improvement stays as is, we'll get similar performance in a ~120B model in a year, which is viable (if expensive) for everyman hardware. Possibly you'll be able to run its equivalent on a ~$1200 laptop by 2028, which for me-in-2020 would sound straight out of a scifi movie. A good harness that lets the model fetch data from other sources like a local Wikipedia copy from kiwix could do a lot for factual knowledge, too; there's only so much you can encode in the model itself, but even a cheapish (pre-current prices) 2TB drive can hold an immense amount of LLM-accessible data.
Big caveat: I don’t see local models for programming or generally demanding agentic tasks being worth it anytime soon. You likely want bleeding edge models for it, and speed is far more important. Chat at 20tok/s is fine; working on even a small codebase at 20tok/s, especially on a noticeably weaker model, is just a waste of time. Maybe it’s a PEBKAC but I have no idea how people make any meaningful use out of qwen 3.6.
As OP says, it shines in constrained environments where the model is transforming user-owned data. Definitely less useful for anything more open-ended.
almogodel
Remember nodes and graphs? A ComfyUI-style interface allows pretty incredible wiring among models; local AI is like Eurorack. The current graph skews heavily towards a pair of small dense models collaborating with the large heavyweights selectively. It's Qwen 3.6 27B with Gemma 4 31B, both unquantized (bf16/fp16), with Phi 14B, Nemotron Cascade 2, and then those large heavyweights: R1 and subsequent DeepSeek models including Speciale, GPT-OSS 120B, GLM, MiniMax, Kimi, Command R, the Mistrals, everybody, up in one graph, all those LLM nodes patched and interconnected. Slow, resource intense, better than non-local AI. I used Matteo's GraphLLM for inspiration, plus ComfyUI (and ST), and used the models to roll a new imgui node/graph model compositor. Now what?!
tomelders
I do think local models are the future, but there's still the question of cost to be answered. Even if there's some slew of efficiency improvements that mean an LLM can run locally on consumer-level hardware on an affordable budget (and that's a big "if"), there's still the cost of training the models to consider.
Assuming we end up in a future where people pay to run multiple smaller models on their machines for specific tasks (e.g. A summariser model, a python coding model, or however fine grained/macro you want to go), the people training those models will need to turn a profit.
So how much will that cost? And how often will consumers have to pay? Models have a very short shelf life. Say you have a dedicated Python coding model - that needs re-training every time there's a significant update to the language itself, any popular packages, related technologies (e.g. servers, cloud infra etc). So how often will users need to "upgrade" to the latest version? It's going to be "frequently".
And it still needs the language stuff on top of that. Users aren't going to interact with a python coding model by writing python. They're going to use natural language. So the model needs all that stuff. And they're going to give it problems to solve. What if you asked the model "Write me a Bezier curve function". It needs to know about bezier curves, which have nothing to do with Python. So where do these LLM providers draw the line on what makes it into the training data and what doesn't?
And if an LLM doesn't know what a Bezier curve is, that's not going to stop it from just hallucinating an answer. If a significant proportion of prompts resulted in a response that said "Sorry, I don't know what you're talking about", then people would just stop using it. The utility of these things will be quickly overshadowed by the frustrations.
The way these frontier models have been introduced and promoted has set unrealistic expectations, and there's no putting the genie back in the bottle.
khoury
Agree with the sentiment, but: "We are building applications that stop working the moment the server crashes or a credit card expires."
This has been the case for way longer than OpenAI and Anthropic have been around, with services like AWS, Cloudflare, etc.
manyatoms
It just depends how quickly models become "good enough" that we don't care about SOTA
Tepix
I'm pretty sure that AI assistants will become widespread.
I consider it to be very careless to entrust your emails, your chats, your calendar, your notes, your calls, your pictures, your contacts, your location history, your waking hours, your files, your TODO list, i.e. stuff including your health data to the for-profit AI companies. The temptation to earn money with your data is just too great, plus the risk of the data being stolen and sold illegally.
Local AI should be the default. For everyone who can't do local AI, we need confidential compute. Yes, it has been hacked before. But it makes attacks a lot harder.
duchenne
Cloud models can use batch processing, which is significantly more efficient. A local model has basically a batch of one, which takes as much time to process as a batch of 100, because the GPU is memory bound and spends most of its time loading the model from VRAM to the GPU cache while the GPU cores are idle. With a batch of 100, the model loading time and compute time are roughly similar. So local models start with roughly 100x lower efficiency. Secondly, local models are idle most of the time waiting for the user to write a prompt, so the efficiency gap is probably more around 1000x.
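A rough back-of-envelope of that amortization effect; the model size and bandwidth figures below are illustrative assumptions, not measurements.

```
# Why batch=1 decoding is memory-bound: every generated token has to stream
# the whole set of weights from memory, whether it serves 1 request or 100.
# All numbers below are illustrative assumptions.
weights_gb = 16          # e.g. an ~8B model at fp16
bandwidth_gbps = 1000    # memory bandwidth of the accelerator, GB/s

stream_time = weights_gb / bandwidth_gbps   # seconds to stream weights once

for batch in (1, 100):
    tokens_per_stream = batch               # one token per sequence per pass
    tok_per_s = tokens_per_stream / stream_time
    print(f"batch={batch:3d}: ~{tok_per_s:,.0f} tokens/s total "
          f"(~{tok_per_s / batch:.0f} per request)")
# batch=1:   ~62 tokens/s total   (~62 per request)
# batch=100: ~6,250 tokens/s total (~62 per request)
# Per-request speed is similar, but the provider gets ~100x more tokens out of
# the same weight streaming -- until compute becomes the bottleneck.
```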
worthless-trash
How long till we have distributed AI, where different people run/understand different parts of problems and pass off work to different nodes across the internet?
maxdo
The start of the argument is already broken. OK, slapping on an API is bad, so instead you stand up an API that mimics your provider, install some Chinese LLM that will never obey any lawsuit in your country, and install 500 packages to do so, every one of them a potential security risk. How is that better?
Oh yeah, it feels independent and not lazy, sure.
vb-8448
> Use cloud models only when they’re genuinely necessary.
The problem is that it's much easier to use the SOTA models (especially if they are subsidized) than to spend time tuning the knobs on a local one.
I just realized this with coding agents: yeah, you probably shouldn't always use the latest version at xhigh, but you will end up doing it because you get the job done in less time, with less "effort", and basically at the same price.
I guess we'll see a real effort for local AI only when major vendors start billing based on actual token usage.
hyfgfh
Local LLMs are the only viable thing, and probably the only thing that will remain once the hype dies down.
A smaller, cheaper local model can deliver most of the value for coding, while we still use some services for code review and security compliance.
Once the VC money runs out and they start to charge the real price, the C-level will have to impose budgets or limits. The current pissing contest over who can spend the most tokens is both ridiculous and shortsighted.
QuadrupleA
Not sure how excited I feel about visiting your website and having it auto-download an 8 GB model with GPT-3.5 level hallucinations, and then probably crash because I only have 6GB of VRAM. My dad won't be able to use it, nor will anyone else without a bleeding-edge device. On a powerful enough "neural engine" device the battery will drain quickly, while the heatsink burns a hole in my lap.
jjordan
It feels like we're one technological breakthrough away from all of these data centers going up being deemed irrelevant.
h05sz487b
I really want this to be true. For me, getting all models to run to the best of my hardware's ability, and the CLI tool to also make best use of the model, is still a headache. I had coding models not being able to do a search and replace depending on the tool through which they were called, visible <thinking> elements in my message flow, agents doing a task, failing at the linter, then reverting everything again so the linter is happy and presenting the result as a "good compromise".
Right now it feels like we have all the pieces but nobody integrating all that into an amazing experience.
dgb23
I'm surprised at the presented dichotomy between JSON formatting and what the Apple SDK provides to parse output into structs.
Based on what I understand about how the former works, I would assume that the latter has the same properties and failure modes.
continueops_com
Opus's 1M context window and lightning-fast response time are hard to compete with. Even if you run a local A100, the local models are just not as good at tool calling, long-running tasks, and avoiding hallucinations.
diwank
In order for us to get there, I think we need a standardized API at the OS layer for local models, so that the OS can optimize, batch, and safely allocate resources. Something like an analog of Chrome's local-model "Prompt" API, but provided and managed by the OS itself. The user can choose which model they want to primarily use and so on, but all of the heavy lifting and continuous batching is done automatically by the OS.
cl0ckt0wer
If they do then hardware costs will explode even more
reshef316
Not saying I disagree with the general statement, but there need to be options; not everyone has a machine capable of the kind of lifting required to properly run a local version. So what, if my machine is older I'll be locked out? Restricted? Forced to pay?
holtkam2
I wish I could upvote this twice. We (devs) really REALLY need to consider on-device compute before going to the cloud for LLM inference.
mattlondon
Yet there is another post a few rows down where people are losing their shit that Chrome has a local LLM model that uses a couple of GB of space for local-inference.
Damned if they do, damned if they don't.
hackermanai
> “But Local Models Aren’t As Smart”
This is what makes me continuously doubt and rewrite the local-first approach to inline chat in my editor. Next-edit / code-complete makes more sense due to the latency advantage. But chat is hard.
It's fast and feels good to run locally, but the output quality is just not at the level of ChatGPT et al.
timeattack
My problem with LLMs (apart from philosophical aspects and economical impact) is that it would be unlikely for any of us to be able to train something functional locally (toy-like LLMs -- sure, but something really useful -- no). Apart from requiring immense computing power, it also requires a dataset which is, for the most part, obtained illegally.
z3t4
We are experimenting with local LLMs and opencode at work, and the quality is not as good as Claude Code et al., but it's not far off, and local speed is actually faster. We got 3 of Nvidia's latest AI GPUs, which was not cheap. It's not enough to train our own models, but we can run the biggest open models with some tweaking.
nezhar
For me, building with open weights models sounds like the right approach — you are able to switch providers, and you can control where the server is running.
You don't have any guarantees in terms of data, that's true, you rely on the provider. But this is similar to a database or other services where you don't have the knowledge or resources to run them yourself. Hardware cost is an additional factor here.
If on the other hand your idea works out and the model fits the use case, you can always decide to move to a dedicated infrastructure later.
nate
I've been fooling with the Apple Foundation model for AlliHat, so you can chat with it from a Safari sidebar instead of just Claude. It's passable for some basic things like summarizing a page. But it really reminds me of Claude from like 3 years ago. I was trying to get it to generate synonyms for me and it would only generate about 10, with some duplicates. And when I asked for more, it said it would be a waste of resources to generate more. It has some kind of "act responsible" thing that Claude seemed to have. I also asked it to help me come up with synonyms for the game Pimantle, and it decided Pimantle was related to the adult industry, and no matter how many times I said "it's just a game" or "I think you've misunderstood", it was stuck on not helping me with anything related to adult websites. And recommended I should play Wordle instead.
All of this being said, it seems Claude gave up this "constitution" it used to train on? I remember trying to get it to help me code some video editing tools, and it was convinced I was pirating videos and so wouldn't help me anymore in that session.
stuaxo
Harnesses seem to be a big part of what makes stuff good or not.
I tried Cline and couldn't get it working well, and part of this was that at the time it expected OpenAI's output format.
Animats
Question: for software development, how much of an AI do you need for local development? Can it be run locally? Can someone train something that knows a lot about software but lacks comprehensive coverage of history, politics, and popular culture?
testfrequency
Local AI is definitely going to be the future as these models continue to advance at the rapid pace they already are.
This is why I believe OAI and Anthropic have been so aggressive at offering services beyond their pure models, like Claude Design. This is what will keep them competitive and keep people subscribed.
october8140
They will never let us have enough RAM ever again. RAM will be kept behind locked doors in the name of national security, and only trusted corporations will be allowed to run AIs, "safely" run them in the cloud, and sell them to us.
vivzkestrel
- can we get suggestions from people on what the equivalent would be for Android?
- and for the web / javascript / svelte applications?
- suggestions for local OCR for bulk images?
imnes
I'm going through a similar exercise right now in an app I'm building. No server dependencies: for features that have traditionally used server-side APIs, I'm moving those capabilities onto the device, and also utilizing the on-board AI features provided by Android and iOS. So far it's been a very positive experience, and the capabilities provided on these devices have been more than capable for my needs. I'm working on providing apps that don't have the ongoing operational costs of running server-side infrastructure, so I can offer them as "pay once, run it forever" instead of an ongoing subscription cost for the user.
imrozim
I use the Claude API for my startup and the billing and rate limiting hurt. But local models can't do what I need yet. Wish they could.
Aleesha_hacker
To what extent is this strategy currently feasible for Windows or Android development? I am interested in how portable local-first AI is across platforms, but it currently seems most promising on Apple devices.
deivid
Sounds great, but if you didn't cave to Apple/Google (e.g. GrapheneOS, LineageOS), models are not built in. Every app needs to ship its own models, and they are not tiny.
Is there a solution for this?
I'm currently just making users download ONNX models if they want a feature, but it's not a smooth UX.
hydra-f
Unless there's a breakthrough or a transition to diffusion models, it's hard to imagine them becoming an affordable commodity.
Small models are still in their infancy, and there's still much to sort out about and around them as well.
hackyhacky
I would like a standardized API for local AI to exist outside of the Apple ecosystem. The Prompt API in Chrome is halfway there.
* What is the answer to local AI for native apps on Windows?
* What is the answer to local AI for Linux?
This is a big opportunity for Linux, given the high quality of open-weight models. I hope some answer emerges before designs fracture and we get a dozen mutually incompatible answers.
ramon156
GLM 5.1 is very impressive; I wouldn't be surprised if we get to a point where it can live in ~48 GB with reliable speed/quality.
antidamage
The roadblock to this is that you seem to have to build it yourself. I've noted that none of the current cloud models are very good at building a replacement for themselves, and there's significant work that needs to be done to make a local LLM reliable in any way. I haven't found a single standalone package that makes setting them up easy. Sure, I can run Hermes Agent and a model, but getting the self-reflection loop in and all of the other stuff they need to actually be good? I'm still at it, trying to get anything to work reliably and factually.
manlymuppet
People are trying to “make the best software”, though.
I think the Quixotic accelerationists of AI are more or less a vocal minority of the people who make software, and the choice of online APIs over local systems is largely a choice made for users, rather than developers' laziness.
You can do more and better with private AI today than with local models. There is no getting around that. Even if local AIs get better, being on the cutting edge of LLM performance is often a very worthy investment.
Most people won’t settle for a product if it’s not the very best and incredibly convenient. That’s a high bar, and local AI often doesn’t meet those standards.
HN’s insistence on treating all users like they are open-source, privacy-first, self-hosted Linux fanatics is painfully corny.
try-working
I'm building a protocol and router runtime for hybrid local/cloud AI.
The goal is that you would assign roles to models based on tasks, capabilities and observed performance. The router would then take care of model selection in the background.
It's tricky though. Probably have another two weeks before I can release the runtime.
You can follow me on Twitter if you want updates (see profile)
ksec
While I agree that would be the goal, we are too early for that. Just like speech recognition, which used to require many servers in a datacenter, with your data sent over the network, and now runs completely on-device.
We are at least 5 years away from that. And DRAM needs a substantial breakthrough in cost reduction.
RyanZhuuuu
I’m skeptical that local AI will work well with today’s technology. Running capable models consumes too many resources on end-user devices.
eldenring
This article makes 0 sense. It's not up to billing or computer systems or ease of use or anything else that matters. The question is whether the scaling laws, which in the asymptote are likely the laws of physics, will hold up in converting energy into smarter models. It's not really up to anyone, the labs or developers, to choose whether local or remote models will be the norm.
Galanwe
I would love for local inference to be possible, but from my experience, Kimi 2.6 is the only model that would be worth it, and it's a $10k (M3 Ultra max spec'd, 30s TTFT, so kind of slow) to $30k (RTX 6000 / 700GB+ DDR5) upfront investment, noise and power consumption aside.
knlam
You know what the hard part about local AI is? Supporting it cross-platform. The OP has it easy by playing in the Apple ecosystem, but when you need to support local AI on both iOS and Android, the approach is completely different. Even getting users to download the smallest models can be a challenge.
mercurialsolo
Not your weights not your brain. Owning your own action and decision model is super important as these models emulate more of our decisions, thinking and learning. Built claudectl - a local brain for coding agents
https://github.com/mercurialsolo/claudectl
everlier
There was never a better time to run LLMs locally. It's just a few commands from zero to a fully working LLM homelab.
```
# Open WebUI -> llama.cpp + SearXNG for Web RAG + OpenTerminal as sandbox
harbor up searxng webui llamacpp openterminal
```
That's it, it's already better than Claude's or ChatGPT's app.
FrasiertheLion
Overall I'm bullish on standardized local APIs that ship with the browser or platform. Far more tractable than expecting end users to stand up their own local model instances, though r/LocalLLaMA is a fantastic community to follow if you want to go that route.
A useful framing over “local vs cloud AI” can be split along two axes: does the task touch private data, and does it need frontier intelligence? You can use frontier models for developing the software (doesn’t touch data), but open-source models running locally for ops: maintenance, debugging and monitoring (touches data). If you need to fall back to frontier intelligence at some point for a particularly hard to resolve problem, you can still rely on local models for pre-transforming and filtering input in a way that's privacy-preserving or satisfies some constraint before it’s sent off to the cloud for processing. OpenAI's privacy filter is a good example of a model that can be used to mask PII and secrets and that can run locally: https://openai.com/index/introducing-openai-privacy-filter/, before sending any data externally for processing.
Another framing for local vs frontier closed which the article mentions is whether the task saturates model capability. With certain tasks like PDF processing or voice or summarization, adding more intelligence isn't necessarily useful. Arguably we've approached that point for chat interfaces already with frontier open-source models. But for coding and ops through well structured tool use inside a coding capable harness, we're still a ways away.
Tangentially, a contrarian take here is that AI can actually enable more privacy preserving software if you’re so inclined. You can just build personalized software and it lowers the barrier to entry and the effort required to self host. SaaS complexity often comes from scaling and supporting features for all types of customers, and if you're building software for personal use, you don't need all that additional complexity. Additionally, foundational and infra software that is harder to vibecode with AI is often already open source.
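A minimal sketch of the two-axis routing idea described above (does the task touch private data, does it need frontier intelligence). The endpoints, model names, and the `mask_pii` helper are hypothetical placeholders, not anything from the comment.

```
# Hypothetical router along the two axes: private data vs. frontier intelligence.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="local")  # assumed local server
cloud = OpenAI()  # uses OPENAI_API_KEY from the environment

def mask_pii(text: str) -> str:
    """Placeholder for a locally run masking model or rule set."""
    return text  # real masking would go here

def route(prompt: str, touches_private_data: bool, needs_frontier: bool) -> str:
    if not needs_frontier:
        client, model = local, "qwen2.5:7b"          # assumed local model name
    else:
        if touches_private_data:
            prompt = mask_pii(prompt)                # strip PII locally first
        client, model = cloud, "gpt-4o-mini"
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

print(route("Summarize this internal incident report: ...", True, True))
```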
msteffen
> One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic for features within their app.
Well there’s your problem, control needs to go the other way. If you want your app to be AI-enabled, you need to make it easy for AI to control your app. Have you used OpenClaw? It’s awesome!
daishi55
> We are building applications that stop working the moment the server crashes or a credit card expires
Isn’t this true of any application that accesses anything not running on your computer? This is just describing what it means to add an API call to your app. Nothing to do with AI (?)
RataNova
I mostly agree, though I think local AI will need better UX around failure modes. Cloud models are often used not just because developers are lazy, but because they are more capable and easier to support consistently across devices.
karmasimida
How? Memory prices are sky high; that is the chokehold the monopoly will not let go of.
rarisma
I think with turbo quant forks eventually being merged, it's becoming more feasible on mid-tier consumer hardware.
Don't quite think it's ready yet.
krupan
Here I was hoping that this was some plea for us to get away from proprietary solutions that we have no control over and go back to open source, but no, not that at all.
TechSquidTV
Local AI will catch up. Unless we can't get our hands on hardware anymore, which is a legitimate concern I have.
anArbitraryOne
Just let me turn it off to preserve battery life
vegabook
>> years ago I launched "The Brutalist Report"
proceeds to brutalise the reader with an 88-point headline font.
1a527dd5
Consumer/private needs to be local.
Work? I don't want it local at all. I want it all cloud agent.
rduffyuk
Agree with the article, but from my experiments the limitation on local LLM usefulness is the limited scope. Eventually, context-heavy data pipelines require larger models, which consumer hardware can't deal with yet. The local model for summarizing a page like you describe could be done via code as well; I've found using an LLM isn't always the right choice. For example, I use NER tagging in my md docs for better indexing and LLM search capabilities. This is purely code based and not via an LLM. I tried it with an LLM and the results were a lot worse. Augmenting tools to make the LLM produce better outputs gives better results.
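For context, NER tagging of that kind can be done with a classical NLP library. A minimal sketch with spaCy; the commenter doesn't say which tool they use, so this is just one illustrative option, and the file name is assumed.

```
# Classical (non-LLM) named-entity tagging of a markdown doc with spaCy.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = open("notes.md", encoding="utf-8").read()  # assumed file name
doc = nlp(text)

# Collect entities as (text, label) pairs, e.g. ("Anthropic", "ORG").
tags = sorted({(ent.text, ent.label_) for ent in doc.ents})
for value, label in tags:
    print(f"{label:10s} {value}")
# These tags could then be written into the doc's front matter for indexing/search.
```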
eyk19
Apple stock is going to skyrocket
Salgat
Local models are much less energy efficient right?
tuananh
A local LLM doesn't need to match SOTA performance in order to be useful.
prometheus1992
Agreed, but the way RAM prices are going, I don't think we would be able to afford hardware that can run any useful model.
agentifysh
Until the hardware is economical and powerful enough, local AI that can compete with frontier models today is still far off.
If we could even get something like GPT 5.5 running locally that would be quite useful.
alfiedotwtf
This would be nice, but unfortunately the norm at the moment is - release a rushed model that doesn’t work with llama.cpp, but if it does, make sure that the chat template is broken. And even if it did have a perfect chat template, let the model loop endlessly rewriting the same file with same content for hours on end.
It would be nice if model makers could at minimum embrace test harnesses, and, as a stretch goal, if they're going to change underlying formats, at least land compatible readers in the big engines (e.g. llama.cpp and vLLM).
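A sanity check of the kind this comment is asking for is cheap to write. Here is a minimal sketch using the Hugging Face tokenizer's chat template; the model ID is a placeholder assumption, and real harnesses would check much more.

```
# Smoke test: does the model's bundled chat template render a basic
# system/user/assistant conversation without raising or dropping messages?
from transformers import AutoTokenizer

MODEL_ID = "some-org/some-new-model"  # placeholder, not a real release

tok = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hi."},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "Now say bye."},
]

rendered = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Minimal assertions: a template ships with the model, every message survived,
# and the rendered prompt is ready for the assistant to generate.
assert tok.chat_template, "model ships without a chat template"
for m in messages:
    assert m["content"] in rendered, f"message dropped by template: {m}"
print(rendered)
```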
ChoGGi
Who can afford local AI?
hypfer
Same as local compute.
Welcome back to 2014. Let us now continue yelling at the cloud.
refulgentis
The shitty thing here is, either everyone's shipping at least 800 MB with their binary, or you have to rely on the platform vendor anyway. I'm hoping there's enough external pressure that the OS vendors turn it into more of a repository than a blessed-model garden.
wilg
Two issues -
1. Local models are likely to be more power-expensive to run (per-"unit-of-intelligence") than remote models, due to datacenter economies of scale. People do not like to engage with this point, but if you have environmental concerns about AI, this is a pretty important one.
2. Using dumb models for simple tasks seems like a good idea, but it ends up being pretty clear pretty quick that you just want the smartest model you can afford for absolutely every task.
dana321
"NO AI" needs to be the norm, we should be working on better ways of sharing information and better documentation instead of fighting with computers for substandard results.
shmerl
Depending on some remote AI provider is a major lock-in pitfall. But it's exactly what those AI providers want you to do.
williamtrask
I wonder if a popularization moment for local AI will ultimately be the pin-prick that pops the AI bubble. Like the deepseek or openclaw moments but bigger/next.
DoctorOetker
One advantage of local AI is continual learning.
When I say 'moat' I don't mean moat specific to a company vis-a-vis other companies, but 'moat' specific to the set of inference providers vis-a-vis self-hosted local inference.
The moat consists primarily of being able to batch inference requests.
If we pretend people weren't interested in long context lengths, there would be a moat for inference providers, who can batch many requests so that streaming the model weights (whether from system RAM to GPU RAM, or from GPU RAM to GPU SRAM cache) can be amortized over multiple requests.
However people do want longer memory than the native context length.
One approach is continual learning (basically continue training by using the past conversation as extra corpus material; interspersed with training on continuations from the frozen model, so it doesn't drift or catastrophically forget knowledge / politeness / ...).
However this is very expensive for inference providers, since they would have to multiply model weight storage by the number of users. For a single user the memory cost of continual learning is much lower, since they only need to support themselves, and they recoup some of the memory cost through the elimination of KV caches, and get higher quality answers compared to subquadratic approximations of quadratic attention.
An advantage of continual learning is that the conversation / code base / context is continuously rebaked into model weights, and so doesn't need KV caches! It doesn't need imperfect approximations to quadratic attention, it attends through working knowledge being updated.
Nothing prevents local LLM users from implementing this and benefiting from the dropped requirements of KV caches and enjoying true quadratic attention implicitly over the whole codebase, or many overlapping projects indeed.
The only remaining moat of inference providers vis-a-vis continual learning local LLM's is the batching advantage, plus the gradient update costs for continual learning minus the KV storage and compute costs, minus the performance loss due to inexact approximations to quadratic attention.
This points towards a stronger incentive for local hosting than currently realized. None of the popular local LLM tools currently support continual learning, but once this genie is out of the bottle it will be a permanent decrease of the inference-provider moat, the cost of which can't be expressed merely in hardware or energy costs, since it is difficult to quantify the financial loss of inexact approximations to quadratic attention, the loss due to limited effective context length, and the concomitant loss in quality of the result.
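For illustration, a very rough sketch of the kind of local continual-learning loop described above, using a LoRA adapter over a frozen base model. The model name, hyperparameters, and the replay-mixing step are assumptions for the sketch, not a recipe from the comment.

```
# Rough sketch: periodically "re-bake" past conversation into a LoRA adapter
# on top of a frozen local model, interleaving replay text generated by the
# frozen base to limit drift. All names and hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "Qwen/Qwen2.5-1.5B"  # placeholder small model

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = get_peft_model(base, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(text: str):
    batch = tok(text, return_tensors="pt", truncation=True, max_length=1024)
    out = model(**batch, labels=batch["input_ids"])  # standard causal LM loss
    out.loss.backward()
    opt.step()
    opt.zero_grad()

# 1) new conversation turns get baked into the adapter
train_step("User: ...\nAssistant: ...")

# 2) interleave continuations sampled with the adapter disabled (i.e. from the
#    frozen base) as replay, so the adapter doesn't drift or catastrophically forget
with torch.no_grad(), model.disable_adapter():
    replay_ids = model.generate(**tok("The capital of France", return_tensors="pt"),
                                max_new_tokens=64)
train_step(tok.decode(replay_ids[0], skip_special_tokens=True))
```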
QuadrupleA
This is just emotional rhetoric. Pretty much any app in the last 20 years has depended on a server somewhere, or a cloud provider. Like an AI provider, they can go down, they can turn off if you don't pay your bill, etc.
And local inference requires fairly beefy hardware, that is FAR from ubiquitous across today's userbases. Local models are also still far dumber than what frontier labs can serve.
Weird that this is getting such a tidal wave of upvotes.
j45
It's easier to say 32 GB of RAM needs to be the norm to start getting movement on this.
krupan
If you don't need a lot of smarts, do you even need an LLM? Aren't older machine learning techniques just as good, or like, you know, old-school algorithms?
holoduke
We need computers with 128 GB or maybe even 192 GB of memory before local use makes sense. From my own experience, 32B LLMs are the absolute minimum for proper tool use and decent output quality. But for local AI you also want vision models and maybe even various LLMs. Plus some memory for the system, of course.
On my 36 GB M3 the 24B Gemma model is nice. But the entire system gets allocated for that thing.
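A quick back-of-envelope on why those numbers come up; the parameter counts and quantization levels below are illustrative.

```
# Approximate weight memory for a model at a given quantization,
# ignoring KV cache and runtime overhead (which add several GB more).
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (24, 32, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{weights_gb(params, bits):5.1f} GB")
# e.g. a 32B model needs ~64 GB at fp16, ~32 GB at 8-bit, ~16 GB at 4-bit --
# which is why 32B-class models only fit comfortably on a 36 GB machine once
# quantized, with little room left for the OS and KV cache.
```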
artursapek
I'm someone who is trying to build a subscription-based business to cover underlying LLM costs, and very hopeful I can one day just sell a permanent license to the software instead with customers using local LLMs to power it.
jmyeet
I've been looking into options for this and we are getting close. There are two main constraints: memory and memory bandwidth.
NVidia segments the market by limiting the amount of memory on GPUs. It currently tops out at 32GB (on a 5090) but it has excellent memory bandwidth (~1.8TB/s). If you want more than that, you need to buy an RTX Pro (eg RTX 6000 Pro w/ 96GB for ~$10K) or you get into high-end solutions like H100, H200, etc that have significantly more memory and even higher bandwidth on HBM memory (eg 3.2TB/s+).
NVidia has released the DGX Spark w/ 128GB of memory for ~$4k. The problem is the memory bandwidth. It's only 273GB/s, which is less than the M5 Pro (307GB/s) but more than the M5. You can buy a 16" Macbook Pro with an M5 Max and 128GB of memory for $6k and it has a bandwidth of 614GB/s. So the DGX Spark is a joke, really.
In case it wasn't clear, Apple is interesting in this space because it has a shared memory architecture so the GPU can use all the memory.
Many, myself included, expect there to be no refresh to the 5000 series consumer GPUs this year, which would otherwise happen based on product cycles. So no 5080 Super, for example. And I wouldn't expect a 6090 before 2028, realistically.
One thing Apple hasn't done yet is release the M5 Mac Studios, which are widely expected in Q3 this year. They are interesting because, for example, the M3 Ultra has a memory bandwidth of 819GB/s and previously had a max spec of 512GB but that got discontinued (and the 256GB version also got discontinued more recently).
So many expect an M5 Max Mac Studio with 1TB/s+ bandwidth and specs up to 256GB or 512GB, probably for ~$10k later this year.
You really have to use this hardware almost 24x7 for it to be economical, because otherwise H100 compute hours are probably cheaper.
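To make the bandwidth point above concrete, a rough single-stream decode estimate; the model size is an illustrative assumption, and real throughput also depends on compute, KV cache, and quantization.

```
# Rough upper bound on single-stream decode speed: every token streams the
# full weights once, so tokens/s <= memory bandwidth / weight bytes.
# The 16 GB figure (roughly a 32B model at 4-bit) is an illustrative assumption.
weights_gb = 16

hardware = {
    "DGX Spark (273 GB/s)": 273,
    "M5 Max MacBook Pro (614 GB/s)": 614,
    "M3 Ultra (819 GB/s)": 819,
    "RTX 5090 (~1800 GB/s)": 1800,
}

for name, bw in hardware.items():
    print(f"{name}: ~{bw / weights_gb:5.1f} tokens/s ceiling")
# ~17, ~38, ~51, and ~112 tokens/s respectively -- which is why the bandwidth
# numbers above matter at least as much as raw memory size for how usable a
# given box feels.
```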
But what happens to the trillions in AI DC investment when the next generation of GPUs comes out? It's going to halve in value. That's over $1 trillion in capex that will disappear overnight, effectively.
I think Apple is the dark horse here because they have no interest in NVidia's pseudo-monopoly. I'm just waiting for them to realize it.
Now CUDA is an issue here still but I think as time goes on it's going to be less of an issue. Memory is still a huge constraint both in terms of price and just general supply because NVidia can justify paying way more for it than you can, probably.
It's still sad to see that 128GB (2x64GB) DDR5 kits are almost $2k now and were $400 a year ago. Expect that to continue until this bubble pops (which IMHO it will), at which point we're likely in a global recession.
So the other issue is models. OpenAI and Anthropic are built on proprietary models. Their entire valuation depends on this moat. I don't think this will last, so both companies are doomed, because open-source models are going to be sufficiently good.
We can already do some reasonably cool stuff on local hardware that isn't that expensive and even more so once you get to $5-10k hardware. That's going to be so much better in 2 years that I'm hesitant to spend any amount of money now.
Plus the code for running these things is getting better. Just in the last month there have been huge speed ups in local LLMs with MTP.
sgt
I guess Google got that memo!
cubefox
Local AI is a bit like wind parks: everyone is in favor, except when they're in your own backyard. There was recently a huge outcry when Chrome shipped a local 4 GB AI model:
https://news.ycombinator.com/item?id=48019219
I have to conclude that people would like to have powerful local AI but it should at the same time only be a tiny model. In which case it wouldn't be powerful.
barrkel
Local models are extraordinarily expensive if you're not maximizing throughput, and you're not going to be maximizing it.
Local models need to be resident in expensive RAM, the kind that has fat pipes to compute. And if you have a local app, how do you take a dependency on whatever random model is installed? Does it support your tool calling complexity? Does it have multimodal input? Does it support system messages in the middle of the conversation or not? Is it dumb enough to need reminders all the time?
Spend enough time building against local models and you'll see they're jagged in performance. You need to tune context size, trade off system message complexity with progressive disclosure. You simply can't rely on intelligence. A bunch of work goes into the harness.
Meanwhile, third party inference is getting the benefits of scale. You only need to rent a timeslice of memory and compute. It's consistent and everybody gets the same experience. And yes, it needs paying for, but the economics are just better.
They will be, and that moment is not that far off. We've got the progression in place already: first, large data centers could have performant LLMs, we are now firmly in "a bunch of servers with a couple of H100s each" territory, slowly going into "128 GB VRAM on a MacBook Pro or a Strix Halo". Within the next year, the pattern of "expensive remote LLM for planning, local slow-but-faster-than-human LLM for execution" will become the norm for companies, slowly moving to "using local LLM for everything is good enough". And then we'll have the equilibrium we already have with the "classic cloud": you either self-host or pay for flexibility and speed. The question will be: how much of the current compute capacity craze will local hosting give the kiss of death to and what that means for the market.
Here's some things you can do right now with local models on a consumer device:
- text-to-speech - speech-to-text - dictionary - encyclopedia - help troubleshooting errors - generate common recipes and nutritional facts - proofread emails, blog posts - search a large trove of documents, find information, summarize it (RAG) - manipulate your terminal/browser/etc - analyze a picture or video - generate a picture or video - generate PDFs, documents, etc (code exec) - simple programming - financial analysis/planning - math and science analysis - find simple first aid/medical information - "rubber ducking" but the duck talks back
A quarter of those don't need more than a gig of RAM, the rest benefit from more RAM. Technically you don't even need a GPU, it just makes it faster. I do half that stuff on my laptop with local models every day.
That said, it really doesn't need to be local. I like the idea that I can do all that stuff offline if I'm traveling, but I usually have cell service, and the total tokens is pretty cheap (like $2/month for all my non-coding AI use).
> “AI everywhere” is not the goal. Useful software is the goal.
Great observation! Often the excitement of novelty makes us lose sight of the real goal
Cool, well let me know when Opus 4.5 level performance is available locally, at speeds that serve everyday use, and 100% I'm right there with you.
Until then, I'm going to keep sending my JSON to the server farm in Virginia because it's the only place that can serve me a model that actually works for my uses.
It seems like everybody is focused on "LLM"s, a.k.a Large Language Models. One interesting addition to that is fine-tuned- small parameter, distilled, context-dependent small language models that:
1- Do a particular task with great capability (due to its constrained, limited scope) 2- Do it in such a way, it integrates gracefully in your workflow without ever requiring you to know you are using an LM.
There is a difference between outsourcing your workflow to AI and actually utilizing it.
Check this: https://www.distillabs.ai/blog/we-benchmarked-12-small-langu...
People want local AI, but only if UX is good. Tooling/harness quality may matter as much as model quality.
I think the future will probably be a hybrid of:
1. local AI for simple, private, everyday tasks
2. online AI for very hard or long tasks
For image generation, this has already happened. To what degree, I can't tell, as I don't do image generation much so I don't have numbers on Midjourney subscriptions or any other image-AI-as-a-service sites. But civitai.com has become a place where people share their models, based off of Stable Diffusion or other similar bases, with various fine-tunings to achieve desired results. You name it, you can find a model for it at Civitai, and people doing some very creative things with them. (And also a lot of the obvious things, but it's the Internet, what did you expect?)
I haven't seen a text-based model sharing site spring up yet (perhaps they already have and I don't know about it yet). Civitai, being focused on image-generation, has the obvious advantage that it's easy to show off impressive results from the model on the front page of the website, and judging what someone's home-grown fine-tuned LLM will produce is a lot harder. But at some point I expect a Civitai equivalent site for text models, especially code-based ones, to become popular. That will seriously undercut Anthropic, OpenAI, et al, and will probably force them to find a price equilibrium.
Because once you're competing with "I spend $2,500 up front on a powerful video card, download an open-source model for free, and then I get pretty much everything I need for free" (additional power cost of running that video card isn't nothing, but probably not noticeable in your power bill compared to what you're already using)... then suddenly $200/month means your customers are thinking "after one year I would have been better off with the homegrown solution". The only way they'll continue to pay $200/month is if Claude/GPT/Gemini/whoever is truly head-and-shoulders above the "pay upfront once for hardware then use it for free afterwards" models available. And that's going to be doable, perhaps, but tough.
We need better GUI and OS integrations with sandboxed local LLMs, before this is thrust on everyone and rolled out as the default in commercial OSes. Here in Berlin, I was functionally surrounded and hounded out of a local meetup, due to confrontation over the naive pushing of OS-level and network access agentic AI, done in the mode of mystical powers and artistic possibilities, which due to recent experiences, comes off as string-pulling, to produce a threat or danger that then must be observed and kept tabs on, according to Goodhart's Law.
For the mainstream audience, the sentiment around local ai today is the same that they had around open source a few decades ago. For a few products, some paid solutions were so much more advanced that open source were very often completely overlooked. Why bother ? And the like. Then we had captive SaaS and other plateforms and now it's obviously wrong for most of us.
The dependency we have on Anthropic and OpenAI for coding, for instance, is insane. Most accept it because either they don't care, or they just hope the Chinese labs will never stop releasing open weights. The business model of open weights is very new, involves power plays between countries and labs, and moves an absurd amount of money without any concrete oversight from most people.
It's a very dangerous gamble. Today incredible value is available to nearly everyone. But it may stop without any warning, for reasons outside our control.
I think we should separate the private AI discussion from the local AI discussion. The pragmatic choice to run big LLMs is one/several big servers online, but that doesn't mean private companies should be the only ones to run them.
A self-hosted inference solution that offers good tenant isolation guarantees (ideally zero trust) and is easy enough to deploy and maintain (think Plex for AI) would be my choice for privacy. To be honest I have done zero research about this and have zero idea how feasible it is; maybe it already exists and there are some Discord servers I should join?
Edit: I don't need to mention it here, but what's incredible is that open models are in the ballpark of the best commercial models, so arguably the hardest part by far is already solved.
Interesting to see this after the recent post about Chrome’s on-device model using up 4gb of storage, which frustrated a lot of people [1].
I agree local models are great, and it’s cool that Apple has models built in now. But I feel like it basically has to be an OS level feature or users are going to get upset. I’d certainly rather have a small utility call out to OpenAI than download its own model.
[1]: https://news.ycombinator.com/item?id=48019219
it's not going to happen with LLMs unless RAM + storage gets several orders of magnitude cheaper, like, yesterday
informatics aren't magic, you'll never be able to compress """knowledge""" into a small model in a way equivalent to the 1.5 TB model
I get the sentiment for self hosting. But there are a few counter arguments:
- Self hosting is expensive. It involves expensive machines with GPUs that cost hundreds per month if you use cloud based ones. You might need multiple of those. And you need people to mind those machines and they are even more expensive per month.
- If you run stuff on your laptop, it consumes a lot of resources and energy. I have Qwen running on my laptop. Even minimal usage turns my laptop into a radiator. Nice as a demo, but I can't have it this hot all the time. It would run out of battery, and it's probably not great for the longevity of components in the laptop.
- Models are evolving quickly and the self-hosted smaller ones aren't as good when it comes to things like tool usage, reasoning, etc. Being able to switch to the latest model is valuable.
- It's easier to get your use case working with one of the top models than with one of the smaller self hosted ones.
- If you get the wrong hardware, it might not be able to run the latest models very soon.
- Self hosting models is mostly a cost optimization. It only becomes relevant if you hit a certain scale.
- You have alternatives in the form of hosted models via a wide range of service providers. Some of those are EU based and offer all the things you'd be looking for if you are offering your services there. Including legal requirements.
- Reinventing what these companies do in house is technically challenging and possibly more expensive than self hosting models because now you need a lot of engineering capacity dedicated to that. And legal. And all the rest.
If, like most companies/people, you are at the experimenting stage, the cheapest and fastest is just getting an API key from an API provider of your choice. You can take it from there if your experiment actually works. And then it's mostly about optimizing cost. If your API usage goes to the thousands per month or worse, it becomes a cost/quality trade off.
I really really want to like local AI, but I highly doubt it will see wide adoption for a long time.
The additional up-front cost for hardware designed to run an LLM in addition to normal workload is unlikely to be accepted by most consumers.
The scale will be very constrained (like Apple's on-device models, which are small, heavily quantized, and have a small 4K-token context window). It's also terrible for battery life.
AI as it is implemented today is simply just computationally expensive and unless you put in dedicated hardware (like the ANE) for only this purpose - a large cost driver - I don’t really see it getting large scale adoption.
Companies will probably need a server-backed solution as fallback if they want reasonable user experience, so why even invest in diverse hardware support.
Every reply here forgets/overlooks the main reason why this is not going to happen: the astronomical AI data center investments currently underway. Those places are not just for training. They are for inference too, and inference is how all those investments are expected to eventually pay off. The whole AI sector of our industry depends on running models in these places.
In my opinion, this is similar to the earlier internet and computers. Few households or individuals had access to state-of-the-art computers; it was primarily research institutions or more well-off individuals. Most random people didn't really know what it was and certainly didn't use one.
Now today, AI is very expensive and not readily accessible to most people without paying a good amount.
The early internet became today's world, where you can just get a free phone from phone companies so long as you take their extras. Then you get a ton of subscriptions and add-ons, but you don't have to spend money; you could just use YouTube with ads, etc.
Local AI would similarly shift this dynamic to paying for access to plug-ins and tools for your local AI to use, like how the subscription model works right now.
With local model advancements, such as specifically Qwen 3.6 35B A3B, this future is becoming more likely by the year IMO.
Entrenched interests are going to do everything to stop local, but there's at least a few technical reasons to believe small and specialized models could be the norm eventually. If that does happen, local will follow.
TFA is focused on whether big models are necessary for what users want. There's some evidence they may never actually be reliable enough unless a) mechanistic interpretability matures far enough or b) our multi-agent systems all become multi-model.
For (a), advancement in MI might fix problems with big models, but would also mean we can maybe get unified representations, and just slice and dice the useful stuff out of huge models, getting only what we need without the junk. Ability to isolate problems won't really come without bringing the ability to isolate functional subsystems. Only want logic? Only vision? Just cut it out of the big monster and enjoy reduced costs and surface area for problems.
For (b), just look at stuff like the evil vector, or the category of hallucinations specific to tool use. Without a complete solution for helpful/honest/harmless alignment, it seems likely that creativity and rigor (and many other things) are fundamentally at odds. If you start to need many models for everything anyway, why do we need the huge expensive do-everything ones? So specialization also becomes a pressure to shrink everything towards minimal reliable experts.
The example in the post confirms my theory that for local models to succeed they need to be "good enough"; they don't need to be big enough to compete with frontier models.
They need to be able to do a small task well and they need to be able to run reasonably on consumer-class devices. Even better if they can run on mobile phones.
In my experiments with local LLMs I noticed that while increasing the size of the model is nice the real thing that turns a barely useless model into something useful is the ability to use tools. Giving my models the ability to search the web and fetch web pages did way more to solve hallucinations than getting a bigger model. And it doesn't have a training cutoff. Sure, the bigger model is probably better at using tools but I often find the smaller models to be good enough.
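Roughly, as a minimal sketch in Python of that kind of tool-use loop, assuming a local server that exposes an OpenAI-compatible endpoint (Ollama and llama.cpp's server both do); the model name and the search_web() helper are illustrative placeholders, not recommendations:
```
# Tool-use loop against a local OpenAI-compatible endpoint (e.g. Ollama or
# llama.cpp server). search_web() is a stub you'd back with SearXNG, a search
# API, or similar.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
MODEL = "qwen2.5:7b-instruct"  # illustrative; any small tool-capable model

def search_web(query: str) -> str:
    return "stub search results for: " + query  # replace with a real backend

tools = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What changed in the latest llama.cpp release?"}]
while True:
    reply = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
    msg = reply.choices[0].message
    if not msg.tool_calls:            # model answered directly
        print(msg.content)
        break
    messages.append(msg)              # keep the tool request in the history
    for call in msg.tool_calls:       # run each requested tool, feed results back
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": search_web(**args)})
```
Even a small model can stay grounded this way, because the facts come from the tool results rather than from the weights.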
Is there a place to learn more about Local AI specifically and maybe even more specifically about models for bespoke purposes or curating them yourself for more specific uses? Feels like theres a lot of fat you can trim off because you don't need generic use, but I don't understand where to even begin there.
A local Answer Machine is the dream, especially when the internet is decaying and generally on its last legs, but the hardware requirements seem like a huge mountain to climb. Things are progressing tremendously - deepseek v4 flash is very good for what it is - but even that goes beyond any reasonable local setup, which imo is 128 GB RAM + 16 GB VRAM. 4 RAM slots on a consumer board craters RAM speed, 256 GB Macs are too expensive, and even then the inference is ungodly slow.
On the other hand… v4 flash is actual magic compared to what was available 2 years ago. If the rate of improvement stays as is, we'll get similar performance in a ~120B model in a year, which is viable (if expensive) for everyman hardware. Possibly you'll be able to run its equivalent on a ~$1200 laptop by 2028, which for me-in-2020 would sound straight out of a scifi movie. A good harness that lets the model fetch data from other sources, like a local Wikipedia copy from Kiwix, could do a lot for factual knowledge too; there's only so much you can encode in the model itself, but even a cheapish (pre-current-prices) 2TB drive can hold an immense amount of LLM-accessible data.
Big caveat: I don’t see local models for programming or generally demanding agentic tasks being worth it anytime soon. You likely want bleeding edge models for it, and speed is far more important. Chat at 20tok/s is fine; working on even a small codebase at 20tok/s, especially on a noticeably weaker model, is just a waste of time. Maybe it’s a PEBKAC but I have no idea how people make any meaningful use out of qwen 3.6.
I've got some demos of what the new Prompt API in Chrome that uses a local model can do: https://adsm.dev/posts/prompt-api/#what-could-you-build-with...
As OP says, it shines in constrained environments where the model is transforming user-owned data. Definitely less useful for anything more open-ended.
Remember nodes and graphs? A Comfy user interface allows pretty incredible wiring among models; local AI is like Eurorack. The current graph skews heavily towards a pair of small dense models collaborating with the large heavyweights selectively. It's Qwen 3.6 27B with Gemma 4 31B, both unquantized, bf16/fp16, with Phi 14B, Nemotron Cascade 2, and then those large heavyweights, R1 and subsequent DeepSeek models including Speciale, GPT-OSS 120B, GLM, MiniMax, Kimi, Command R, Mistrals, everybody, up in one graph, all them LLM nodes patched and interconnected. Slow, resource-intense, better than non-local AI. I used Matteo's GraphLLM for inspiration, plus ComfyUI (and ST), and used the models to roll a new imgui node/graph model compositor. Now what?!
I do think local models are the future, but there's still the question of cost to be answered. Even if there's some slew of efficiency improvements that mean an LLM can run locally on consumer-level hardware on an affordable budget (and that's a big "if"), there's still the cost of training the models to consider.
Assuming we end up in a future where people pay to run multiple smaller models on their machines for specific tasks (e.g. A summariser model, a python coding model, or however fine grained/macro you want to go), the people training those models will need to turn a profit.
So how much will that cost? And how often will consumers have to pay? Models have a very short shelf life. Say you have a dedicated Python coding model - that needs re-training every time there's a significant update to the language itself, any popular packages, or related technologies (e.g. servers, cloud infra, etc). So how often will users need to "upgrade" to the latest version? It's going to be "frequently".
And it still needs the language stuff on top of that. Users aren't going to interact with a python coding model by writing python. They're going to use natural language. So the model needs all that stuff. And they're going to give it problems to solve. What if you asked the model "Write me a Bezier curve function". It needs to know about bezier curves, which have nothing to do with Python. So where do these LLM providers draw the line on what makes it into the training data and what doesn't?
And if an LLM doesn't know what a Bezier curve is, that's not going to stop it from just hallucinating an answer. If a significant proportion of prompts resulted in a response that said "Sorry, I don't know what you're talking about", then people would just stop using it. The utility of these things would be quickly overshadowed by the frustrations.
The way these frontier models have been introduced and promoted has set unrealistic expectations, and there's no putting the genie back in the bottle.
Agree with the sentiment, but: "We are building applications that stop working the moment the server crashes or a credit card expires."
This has been the case for far longer than OpenAI and Anthropic have been around, with services like AWS, Cloudflare, etc.
It just depends how quickly models become "good enough" that we don't care about SOTA
I'm pretty sure that AI assistants will become widespread.
I consider it to be very careless to entrust your emails, your chats, your calendar, your notes, your calls, your pictures, your contacts, your location history, your waking hours, your files, your TODO list, i.e. stuff including your health data to the for-profit AI companies. The temptation to earn money with your data is just too great, plus the risk of the data being stolen and sold illegally.
Local AI should be the default. For everyone who can't do local AI, we need confidential compute. Yes, it has been hacked before. But it makes attacks a lot harder.
Cloud models can use batch processing, which is significantly more efficient. A local model has basically a batch of one, which takes as much time to process as a batch of 100, because the GPU is memory bound and spends most of its time loading the model from VRAM into the GPU cache while the GPU cores sit idle. With a batch of 100, the model loading time and compute time are roughly similar. So local models start out roughly 100x less efficient. Secondly, local models are idle most of the time waiting for the user to write a prompt, so the efficiency gap is probably more like 1000x.
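A rough back-of-envelope (illustrative numbers, not measurements) of why the decode step behaves this way:
```
# Each decoded token requires streaming the model weights through the GPU once;
# a batch shares that same pass, so per-request cost drops almost linearly
# until compute or KV-cache traffic starts to dominate. Numbers are illustrative.
weights_gb = 14          # e.g. a 7B model held in fp16
bandwidth_gbps = 1000    # e.g. ~1 TB/s of GPU memory bandwidth

step_ms = weights_gb / bandwidth_gbps * 1000     # time to read the weights once
for batch in (1, 100):
    print(f"batch={batch:>3}: ~{step_ms:.0f} ms per decode step, "
          f"~{step_ms / batch:.2f} ms of weight traffic per request")
```
Both batch sizes take roughly the same wall time per step, which is exactly the amortization a cloud provider enjoys and a single local user doesn't.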
How long till we have distributed AI, where we can have different people run/understand different parts of problems and pass off work to different nodes across the internet.
The start of the argument is already broken. OK, slapping an API call on is bad, so instead you push an API that mimics your provider, install some Chinese LLM that will never obey any lawsuit in your country, and install 500 packages to do so, every one of them a potential security risk. How is that better?
Oh yeah, it feels independent and not lazy, sure.
> Use cloud models only when they’re genuinely necessary.
The problem is that it's much easier to use the SOTA models (especially if they are subsidized) instead of spending time fixing the knobs with the local one.
I just realized this with coding agents: yeah, you probably shouldn't always use the latest version at xhigh, but you will end up doing it because you get the job done in less time, with less "effort", and basically at the same price.
I guess we'll see a real effort for local AI only when major vendors will start billing based on actual token usage.
Local LLMs are the only viable thing, and probably the only thing that will remain once the hype dies down.
A smaller, cheaper local model can deliver most of the value for coding, while we still use some services for code review and security compliance.
Once the VC money runs out and they start to charge the real price, the C-level will have to impose budgets or limits. The current pissing contest over who can expend the most tokens is both ridiculous and shortsighted.
Not sure how excited I feel about visiting your website and having it auto-download an 8GB model with GPT-3.5-level hallucinations, and then probably crash because I only have 6GB of VRAM. My dad won't be able to use it, nor will anyone else without a bleeding-edge device. On a powerful enough "neural engine" device the battery will be drained quickly, while the heatsink burns a hole in my lap.
It feels like we're one technological breakthrough away from all of these data centers going up only to be deemed irrelevant.
I really want this to be true. For me, getting all models to run to the best of my hardware's ability, and the CLI tool to also make best use of the model, is still a headache. I had coding models unable to do a search and replace depending on the tool through which they were called, visible <thinking> elements in my message flow, and agents doing a task, failing at the linter, then reverting everything again so the linter is happy and presenting the result as a "good compromise".
Right now it feels like we have all the pieces but nobody integrating all that into an amazing experience.
I'm surprised at the presented dichotomy between JSON formatting and what the Apple SDK provides to parse output into structs.
Based on what I understand about how the former works, I would assume that the latter has the same properties and failure modes.
Opus' 1M context window and lightning-fast response time are hard to compete with; even if you run a local A100, the local models are just not as good at tool calling, long-running tasks, and avoiding hallucinations.
in order for us to get there, i think we need a standardized api at the os layer for local models so that the os could optimize, batch and safely allocate resources. something like an analog of chrome's local model "prompt" api but provided and managed by the os itself. the user can choose which model they want to primarily use and so on but all of the heavy lifting and continuous batching is done automatically by the os
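purely as a hypothetical sketch of what such a surface could look like (every name below is invented for illustration, nothing like it exists today): the app declares intent and constraints, and the OS owns model choice, batching, and sandboxing.
```
# Hypothetical OS-level prompt API -- all names are invented for illustration.
from dataclasses import dataclass

@dataclass
class SessionRequest:
    task: str              # "summarize", "extract", "chat", ...
    max_latency_ms: int    # the app's tolerance; the OS may batch within this window
    allow_network: bool    # False => the OS can run the model with no egress at all

class OSModelBroker:
    """Imagined system service, analogous to a print spooler but for inference."""
    def open_session(self, request: SessionRequest) -> "Session":
        raise NotImplementedError  # provided by the OS, not the app

class Session:
    def prompt(self, text: str) -> str:
        raise NotImplementedError

# What an app might write:
# session = OSModelBroker().open_session(SessionRequest("summarize", 500, False))
# print(session.prompt("Summarize my unread notifications."))
```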
If they do then hardware costs will explode even more
not saying i disagree with the general statement, but there need to be options, not everyone has a machine capable of doing the same type of lifting required to properly run a local version. so what, if my machine is older i'll be locked out? restricted? forced to pay?
I wish I could upvote this twice. We (devs) really REALLY need to consider on-device compute before going to the cloud for LLM inference.
Yet there is another post a few rows down where people are losing their shit because Chrome has a local LLM that uses a couple of GB of space for local inference.
Damned if they do, damned if they don't.
> “But Local Models Aren’t As Smart”
This is what makes me continuously doubt and rewrite the local-first approach to inline chat in my editor. Next edit / code complete makes more sense due to the latency advantage. But chat is hard.
It's fast and feels good to run locally, but output quality is just not ChatGPT et al.
My problem with LLMs (apart from philosophical aspects and economic impact) is that it would be unlikely for any of us to be able to train something functional locally (toy-like LLMs -- sure, but something really useful -- no). Apart from requiring immense computing power, it also requires a dataset which is, for the most part, obtained illegally.
We are experimenting with local LLMs and opencode at work, and the quality is not as good as Claude Code et al., but it's not far off, and local speed is actually faster. We got 3 of Nvidia's latest AI GPUs, which was not cheap. It's not good enough to train our own models, but we can run the biggest open models with some tweaking.
For me, building with open weights models sounds like the right approach — you are able to switch providers, and you can control where the server is running.
You don't have any guarantees in terms of data, that's true, you rely on the provider. But this is similar to a database or other services where you don't have the knowledge or resources to run them yourself. Hardware cost is an additional factor here.
If on the other hand your idea works out and the model fits the use case, you can always decide to move to a dedicated infrastructure later.
I've been fooling with the Apple Foundation model for AlliHat, so you can chat with it from a Safari sidebar instead of just Claude. It's passable for some basic things like summarizing a page. But it really reminds me of Claude from about 3 years ago. I was trying to get it to generate synonyms and it would only generate about 10, with some duplicates. And when I asked for more, it said it would be a waste of resources to generate more. It has some kind of "act responsible" thing that Claude seemed to have. I also asked it to help me come up with synonyms for the game Pimantle, and it decided Pimantle was related to the adult industry, and no matter how many times I said "it's just a game" or "I think you've misunderstood", it was stuck on not helping me with anything related to adult websites. And it recommended I play Wordle instead.
All of this being said, it seems Claude gave up this "constitution" it used to train on? I remember trying to get it to help me code some video editing tools, and it was convinced I was pirating videos and so wouldn't help me anymore in that session.
Harnesses seem to be a big part of what makes this stuff good or not.
I tried Cline and couldn't get it working well, and part of this was that, at the time, it expected OpenAI's output format.
Question: for software development, how much of an AI do you need for local development? Can it be run locally? Can someone train something that knows a lot about software but lacks comprehensive coverage of history, politics, and popular culture?
Local AI is definitely going to be the future as these models continue to advance at the rapid pace they already are.
This is why I believe OAI and Anthropic have been so aggressive at offering services outside of their pure models, like Claude Design. This is what will stay competitive and keep people subscribed.
They will never let us have enough RAM ever again. RAM will be kept behind locked doors in the name of national security, and only trusted corporations will be allowed to run AIs, "safely" run them in the cloud, and sell them to us.
- can we get suggestions from people on what the equivalent would be for Android?
- and for the web / JavaScript / Svelte applications?
- suggestions for local OCR for bulk images?
I'm going through a similar exercise right now in an app I'm building. No server dependencies, for features that have traditionally used server side APIs, moving those capabilities onto the device. And also utilizing the on-board AI features provided by Android and iOS. So far it's been a very positive experience, and the capabilities provided on these devices have been more than capable for my needs. Working on providing apps that don't have ongoing operation costs of running server side infrastructure, so I can offer them as "pay once, run it forever" instead of ongoing subscription costs for the user.
I use Claude api for my startup and the billings and rate limiting hurt. But local models cant do what i needed yet. Wish they could.
To what extent is this strategy currently feasible for Windows or Android development? I am interested in how portable local-first AI is across platforms, but so far it seems most promising on Apple devices.
Sounds great, but if you didn't cave to Apple/Google (e.g. GrapheneOS, LineageOS), models are not built in. Every app needs to ship its own models, and they are not tiny.
Is there a solution for this? I'm currently just making users download ONNX models if they want a feature, but it's not a smooth UX.
Unless there's a breakthrough or a transition to diffusion models, it's hard to imagine them becoming an affordable commodity
Small models are still in their infancy, and there's still much to sort out about and around them, as well
I would like a standardized API for local AI to exist outside of the Apple ecosystem. The Prompt API in Chrome is halfway there.
* What is the answer to local AI for native apps on Windows?
* What is the answer to local AI for Linux?
This is a big opportunity for Linux, given the high quality of open-weight models. I hope some answer emerges before designs fracture and we get a dozen mutually incompatible answers.
GLM 5.1 is very impressive; I wouldn't be surprised if we get to a point where it can live in ~48GB with reliable speed/quality.
The roadblock to this is you seem to have to build it yourself. I've noted that none of the current cloud models are very good at building a replacement for themselves, and there's significant work that needs to be done to make a local LLM reliable in any way. I haven't found a single standalone package that makes setting them up easy. Sure, I can run Hermes Agent and a model, but getting the self-reflection loop in and all of the other stuff they need to actually be good? I'm still at it, trying to get anything to work reliably and factually.
People are trying to “make the best software”, though.
I think the Quixotic accelerationists of AI are more or less a vocal minority of the people who make software, and the choice of online APIs over local systems is largely a choice made for users, rather than developer’s laziness.
You can do more and better with private AI today than with local models. There is no getting around that. Even if local AIs get better, being on the cutting edge of LLM performance is often a very worthy investment.
Most people won’t settle for a product if it’s not the very best and incredibly convenient. That’s a high bar, and local AI often doesn’t meet those standards.
HN’s insistence on treating all users like they are open-source, privacy-first, self-hosted Linux fanatics is painfully corny.
I'm building a protocol and router runtime for hybrid local/cloud AI.
The goal is that you would assign roles to models based on tasks, capabilities and observed performance. The router would then take care of model selection in the background.
It's tricky though. Probably have another two weeks before I can release the runtime.
I have a preview up at https://role-model.dev/
You can follow me on Twitter if you want updates (see profile)
While I agree that would be the goal, we are too early for that. It's just like how speech recognition used to require many servers in a datacenter, and you had to send your data over; it now runs completely on devices.
We are at least 5 years away from that. And DRAM needs a substantial breakthrough in cost reduction.
I’m skeptical that local AI will work well with today’s technology. Running capable models consumes too many resources on end-user devices.
This article makes zero sense. It's not up to billing or computer systems or ease of use or anything else that matters. The question is whether the scaling laws, which in the asymptote are likely the laws of physics, will hold up in converting energy into smarter models. It's not really up to anyone, the labs or developers, to choose whether local or remote models will be the norm.
I would love for local inference to be possible, but from my experience, Kimi 2.6 is the only model that would be worth it, and it's a $10k (M3 Ultra max spec'd - 30s TTFT, so kind of slowish) to $30k (RTX 6000 / 700GB+ DDR5) upfront, noise / power consumption aside.
You know what the hard part about local AI is? Supporting it cross-platform. The OP gets it easy by playing in the Apple ecosystem, but when you need to support local AI on both iOS and Android, the approach is completely different. Even getting users to download the smallest models can be a challenge.
Not your weights not your brain. Owning your own action and decision model is super important as these models emulate more of our decisions, thinking and learning. Built claudectl - a local brain for coding agents https://github.com/mercurialsolo/claudectl
There was never a better time to run LLMs locally. It's just a few commands from zero to a fully working LLM homelab.
```
harbor pull unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_XL

# Open WebUI -> llama.cpp + SearXNG for Web RAG + OpenTerminal as sandbox
harbor up searxng webui llamacpp openterminal
```
That's it, it's already better than Claude's or ChatGPT's app.
Overall I'm bullish on standardized local APIs that ship with the browser or platform. Far more tractable than expecting end users to stand up their own local model instances, though r/LocalLLaMA is a fantastic community to follow if you want to go that route.
A useful framing over “local vs cloud AI” can be split along two axes: does the task touch private data, and does it need frontier intelligence? You can use frontier models for developing the software (doesn’t touch data), but open-source models running locally for ops: maintenance, debugging and monitoring (touches data). If you need to fall back to frontier intelligence at some point for a particularly hard to resolve problem, you can still rely on local models for pre-transforming and filtering input in a way that's privacy-preserving or satisfies some constraint before it’s sent off to the cloud for processing. OpenAI's privacy filter is a good example of a model that can be used to mask PII and secrets and that can run locally: https://openai.com/index/introducing-openai-privacy-filter/, before sending any data externally for processing.
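A minimal sketch of that pre-filtering pattern in Python, with Microsoft's Presidio standing in as the local masking step (my assumption; the privacy filter linked above plays the same role) and an illustrative cloud call afterwards:
```
# Mask locally, then call the cloud: PII is redacted on-device before the
# prompt leaves the machine. Presidio needs a spaCy model installed
# (python -m spacy download en_core_web_lg).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine
from openai import OpenAI

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

raw = "Summarize this email from jane.doe@example.com about invoice 4417, phone 555-0100."
findings = analyzer.analyze(text=raw, language="en")     # detect PII spans locally
masked = anonymizer.anonymize(text=raw, analyzer_results=findings).text

# Only the masked text is sent to the frontier model.
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = cloud.chat.completions.create(
    model="gpt-4o-mini",                                  # illustrative model name
    messages=[{"role": "user", "content": masked}],
)
print(reply.choices[0].message.content)
```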
Another framing for local vs frontier closed models, which the article mentions, is whether the task saturates model capability. With certain tasks like PDF processing or voice or summarization, adding more intelligence isn't necessarily useful. Arguably we've approached that point for chat interfaces already with frontier open-source models. But for coding and ops through well-structured tool use inside a coding-capable harness, we're still a ways away.
Tangentially, a contrarian take here is that AI can actually enable more privacy preserving software if you’re so inclined. You can just build personalized software and it lowers the barrier to entry and the effort required to self host. SaaS complexity often comes from scaling and supporting features for all types of customers, and if you're building software for personal use, you don't need all that additional complexity. Additionally, foundational and infra software that is harder to vibecode with AI is often already open source.
> One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic for features within their app.
Well there’s your problem, control needs to go the other way. If you want your app to be AI-enabled, you need to make it easy for AI to control your app. Have you used OpenClaw? It’s awesome!
> We are building applications that stop working the moment the server crashes or a credit card expires
Isn’t this true of any application that accesses anything not running on your computer? This is just describing what it means to add an API call to your app. Nothing to do with AI (?)
I mostly agree, though I think local AI will need better UX around failure modes. Cloud models are often used not just because developers are lazy, but because they are more capable and easier to support consistently across devices.
How? Memory prices are sky high; that is the choke hold the monopoly will not let go of.
I think with turbo quant forks eventually being merged, it's becoming more feasible on mid-tier consumer h/w.
Don't quite think it's ready yet.
Here I was hoping that this was some plea for us to get away from proprietary solutions that we have no control over and go back to open source, but no, not that at all.
Local AI will catch up. Unless we can't get our hands on hardware anymore, which is a legitimate concern I have.
Just let me turn it off to preserve battery life
>> years ago I launched "The Brutalist Report"
proceeds to brutalise the reader with an 88-point headline font.
Consumer/private needs to be local.
Work? I don't want it local at all. I want it all cloud agent.
agree with the article but the limitation for local llm usefulness is the limited scope from my experiments. eventually context heavy data pipelines require larger models which consumer hardware can't deal with yet. the local model for summary on a page like you describe could be done via code as well, i've found using an llm isn't always the right choice. for example i use ner tagging in my md docs for better indexing and llm search capabilities. this is purely code based and not via an llm. tried with an llm and the results were a lot worse. augmenting tools to make the llm produce better outputs gives better results.
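for reference, a minimal sketch of that code-based tagging in Python, using spaCy (my assumption here, the comment doesn't name a library):
```
# Code-based NER tagging of Markdown notes for indexing -- no LLM involved.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import pathlib
import spacy

nlp = spacy.load("en_core_web_sm")

def tag_entities(md_path: pathlib.Path) -> dict:
    """Return {entity label: set of surface forms} for one document."""
    doc = nlp(md_path.read_text(encoding="utf-8"))
    tags = {}
    for ent in doc.ents:
        tags.setdefault(ent.label_, set()).add(ent.text)
    return tags

for path in pathlib.Path("notes").glob("*.md"):
    print(path.name, tag_entities(path))   # feed these tags into a search index
```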
Apple stock is going to skyrocket
Local models are much less energy efficient, right?
local llm doesn't need to match SOTA performance in order to be useful.
Agreed, but the way RAM prices are going, I don't think we would be able to afford hardware that can run any useful model.
Until the hardware is economical and powerful enough, local AI that can compete with frontier models today is still far off.
If we could even get something like GPT 5.5 running locally that would be quite useful.
This would be nice, but unfortunately the norm at the moment is: release a rushed model that doesn't work with llama.cpp, and if it does, make sure that the chat template is broken. And even if it did have a perfect chat template, let the model loop endlessly rewriting the same file with the same content for hours on end.
It would be nice if model makers could at minimum embrace test harnesses, and as a stretch goal, if they're going to change underlying formats, at least land compatible readers in the big engines (e.g. llama.cpp and vllm).
Who can afford local AI?
Same as local compute.
Welcome back to 2014. Let us now continue yelling at the cloud.
The shitty thing here is: either everyone's shipping at least 800 MB with their binary, or you have to rely on the platform vendor anyway. I'm hoping there's enough external pressure that the OS vendors turn it into more of a repository than a blessed-model garden.
Two issues -
1. Local models are likely to be more power-expensive to run (per-"unit-of-intelligence") than remote models, due to datacenter economies of scale. People do not like to engage with this point, but if you have environmental concerns about AI, this is a pretty important one.
2. Using dumb models for simple tasks seems like a good idea, but it ends up being pretty clear pretty quick that you just want the smartest model you can afford for absolutely every task.
"NO AI" needs to be the norm, we should be working on better ways of sharing information and better documentation instead of fighting with computers for substandard results.
Depending on some remote AI provider is a major lock-in pitfall. But it's exactly what those AI providers want you to do.
I wonder if a popularization moment for local AI will ultimately be the pin-prick that pops the AI bubble. Like the deepseek or openclaw moments but bigger/next.
One advantage of local AI is continual learning.
When I say 'moat' I don't mean moat specific to a company vis-a-vis other companies, but 'moat' specific to the set of inference providers vis-a-vis self-hosted local inference.
The moat consists primarily of being able to batch inference requests.
If we pretend people weren't interested in long context lengths, there would be a moat for inference providers, who can batch many requests so that streaming the model weights (whether from system RAM to GPU RAM, or from GPU RAM to GPU SRAM cache) is amortized over multiple requests.
However people do want longer memory than the native context length.
One approach is continual learning (basically continue training by using the past conversation as extra corpus material; interspersed with training on continuations from the frozen model, so it doesn't drift or catastrophically forget knowledge / politeness / ...).
However, this is very expensive for inference providers, since they would have to multiply model weight storage by the number of users. For a single user the memory cost of continual learning is much lower, since they only need to support themselves; they get back some of the memory cost through the elimination of KV caches, and get back higher-quality answers compared to subquadratic approximations of quadratic attention.
An advantage of continual learning is that the conversation / code base / context is continuously rebaked into model weights, and so doesn't need KV caches! It doesn't need imperfect approximations to quadratic attention, it attends through working knowledge being updated.
Nothing prevents local LLM users from implementing this and benefiting from the dropped requirements of KV caches and enjoying true quadratic attention implicitly over the whole codebase, or many overlapping projects indeed.
The only remaining moat of inference providers vis-a-vis continual-learning local LLMs is the batching advantage, plus the gradient-update costs of continual learning, minus the KV storage and compute costs, minus the performance loss due to inexact approximations to quadratic attention.
This points towards a stronger incentive for local hosting than currently realized. (None of the popular local LLM tools currently support continual learning; once this genie is out of the bottle it will be a permanent decrease of the inference-provider moat, the cost of which can't be expressed merely in hardware or energy, since it is difficult to quantify the financial loss from inexact approximations to quadratic attention, from limited effective context length, and from the concomitant loss in quality of the result.)
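As a rough sketch of the mechanics described above for a local setup (illustrative only: the model name, data mixing, and hyperparameters are assumptions, using Hugging Face transformers + PEFT): periodically fine-tune a small LoRA adapter on recent conversations, mixed with continuations sampled from the frozen base model to limit drift and forgetting.
```
# Sketch: periodically bake recent local conversations into a LoRA adapter,
# mixed with "replay" continuations from the frozen base model to limit drift.
# Model name, data, and hyperparameters are illustrative assumptions.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-1.5B-Instruct"
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.pad_token or tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                                         target_modules=["q_proj", "v_proj"]))

recent_chats = ["User: ...\nAssistant: ..."]   # accumulated local history
replay_texts = ["(continuations sampled from the frozen base model go here)"]

data = Dataset.from_dict({"text": recent_chats + replay_texts}).map(
    lambda batch: tok(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments(output_dir="lora-memory", num_train_epochs=1,
                               per_device_train_batch_size=1, learning_rate=1e-4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
model.save_pretrained("lora-memory")           # load the adapter at inference time
```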
This is just emotional rhetoric. Pretty much any app in the last 20 years has depended on a server somewhere, or a cloud provider. Like an AI provider, they can go down, they can turn off if you don't pay your bill, etc.
And local inference requires fairly beefy hardware, that is FAR from ubiquitous across today's userbases. Local models are also still far dumber than what frontier labs can serve.
Weird that this is getting such a tidal wave of upvotes.
It's easier to say 32 GB of RAM needs to be the norm to start getting movement on this.
If you don't need a lot of smarts, do you even need an LLM? Aren't older machine learning techniques just as good, or like, you know, old-school algorithms?
We need computers with 128 GB or maybe even 192 GB of memory before local use makes sense. From my own experience, 32B LLMs are the absolute minimum for proper tool use and decent output quality. But for local AI you also want vision models and maybe even various LLMs, plus some memory for the system, of course. On my 36 GB M3 the 24B Gemma model is nice, but the entire system gets allocated to that thing.
I'm someone who is trying to build a subscription-based business to cover underlying LLM costs, and very hopeful I can one day just sell a permanent license to the software instead with customers using local LLMs to power it.
I've been looking into options for this and we are getting close. There are two main constraints: memory and memory bandwidth.
NVidia segments the market by limiting the amount of memory on GPUs. It currently tops out at 32GB (on a 5090), but that card has excellent memory bandwidth (~1.8TB/s). If you want more than that, you need to buy an RTX Pro (e.g. RTX 6000 Pro w/ 96GB for ~$10K) or you get into high-end solutions like the H100, H200, etc., which have significantly more memory and even higher bandwidth on HBM (e.g. 3.2TB/s+).
NVidia has released the DGX Spark w/ 128GB of memory for ~$4k. The problem is the memory bandwidth. It's only 273GB/s, which is less than the M5 Pro (307GB/s) but more than the M5. You can buy a 16" Macbook Pro with an M5 Max and 128GB of memory for $6k and it has a bandwidth of 614GB/s. So the DGX Spark is a joke, really.
In case it wasn't clear, Apple is interesting in this space because it has a shared memory architecture so the GPU can use all the memory.
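As a rough rule of thumb (ceilings only; KV cache, MoE routing, and quantization all move the real numbers), single-stream decode speed is roughly memory bandwidth divided by the bytes of weights streamed per token, which is why these bandwidth figures matter so much:
```
# tokens/s ceiling ~= memory bandwidth / bytes of weights streamed per token.
# Bandwidth figures are the ones quoted above; the model size is illustrative.
bandwidth_gbps = {"DGX Spark": 273, "M5 Max": 614, "RTX 5090 (~1.8TB/s)": 1800}
model_gb = 40   # e.g. a ~70B-class model quantized to roughly 4 bits

for name, bw in bandwidth_gbps.items():
    print(f"{name:>20}: ~{bw / model_gb:5.1f} tok/s ceiling for a {model_gb} GB model")
```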
Many, myself included, expect there to be no refresh of the 5000-series consumer GPUs this year, which would otherwise happen based on product cycles. So no 5080 Super, for example. And I wouldn't expect a 6090 before 2028, realistically.
One thing Apple hasn't done yet is release the M5 Mac Studios, which are widely expected in Q3 this year. They are interesting because, for example, the M3 Ultra has a memory bandwidth of 819GB/s and previously had a max spec of 512GB but that got discontinued (and the 256GB version also got discontinued more recently).
So many expect an M5 Max Mac Studio with 1TB/s+ bandwidth and specs up to 256GB or 512GB, probably for ~$10k later this year.
You really have to use this hardware almost 24x7 for it to be economical, because otherwise H100 compute hours are probably cheaper.
But what happens to the trillions in AI DC investment when the next generation of GPUs comes out? It's going to halve in value. That's over $1 trillion in capex that will effectively disappear overnight.
I think Apple is the dark horse here because they have no interest in NVidia's pseudo-monopoly. I'm just waiting for them to realize it.
Now CUDA is an issue here still but I think as time goes on it's going to be less of an issue. Memory is still a huge constraint both in terms of price and just general supply because NVidia can justify paying way more for it than you can, probably.
It's still sad to see that 128GB (2x64GB) DDR5 kits are almost $2k now when they were $400 a year ago. Expect that to continue until this bubble pops (which IMHO it will) and we're likely in a global recession.
So the other issue is models. OpenAI and Anthropic are built on proprietary models. Their entire valuation depends on this moat. I don't think this will last, so both companies are doomed, because open source models are going to be sufficiently good.
We can already do some reasonably cool stuff on local hardware that isn't that expensive and even more so once you get to $5-10k hardware. That's going to be so much better in 2 years that I'm hesitant to spend any amount of money now.
Plus the code for running these things is getting better. Just in the last month there have been huge speed ups in local LLMs with MTP.
I guess Google got that memo!
Local AI is a bit like wind parks. Everyone is in favor, except if they are in your own backyard. There was recently a huge outcry when Chrome shipped a local 4 GB AI model: https://news.ycombinator.com/item?id=48019219
I have to conclude that people would like to have powerful local AI but it should at the same time only be a tiny model. In which case it wouldn't be powerful.
Local models are extraordinarily expensive if you're not maximizing throughput, and you're not going to be maximizing it.
Local models need to be resident in expensive RAM, the kind that has fat pipes to compute. And if you have a local app, how do you take a dependency on whatever random model is installed? Does it support your tool calling complexity? Does it have multimodal input? Does it support system messages in the middle of the conversation or not? Is it dumb enough to need reminders all the time?
Spend enough time building against local models and you'll see they're jagged in performance. You need to tune context size, trade off system message complexity with progressive disclosure. You simply can't rely on intelligence. A bunch of work goes into the harness.
Meanwhile, third party inference is getting the benefits of scale. You only need to rent a timeslice of memory and compute. It's consistent and everybody gets the same experience. And yes, it needs paying for, but the economics are just better.