All: quite a few comments in this thread (and another one we merged hither - https://news.ycombinator.com/item?id=47099160) have contained personal attacks. Hopefully most of them are [flagged] and/or [dead] now.
On HN, please don't cross into personal attack no matter how strongly you feel about someone or disagree with them. It's destructive of what the site is for, and we moderate and/or ban accounts that do it.
If you haven't recently, please review https://news.ycombinator.com/newsguidelines.html and make sure that you're using the site as intended when posting here.
People are not understanding that “claw” derives from the original spin on “Claude”, back when the tool was called “clawdbot”
jameslk
One safety pattern I’m baking into CLI tools meant for agents: anytime an agent could do something very bad, like blasting an email to too many people, the tool now requires a one-time password
The tool tells the agent to ask the user for it, and the agent cannot proceed without it. The instructions from the tool show an all caps message explaining the risk and telling the agent that they must prompt the user for the OTP
I haven't used any of the *Claws yet, but this seems like an essential poor man's human-in-the-loop implementation that may help prevent some pain
I prefer to make my own agent CLIs for everything, for reasons like this and many others: it gives me full control over what the tool may do and lets me make it more useful
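To make the pattern concrete, here is a minimal sketch of such an OTP gate. This is my own illustration, not jameslk's tool: the `send_blast` name, the recipient threshold, and the file-based OTP handoff are all placeholders, and in practice the code would be delivered to the human out of band (e.g. a push notification).

```python
# Hypothetical sketch of the OTP gate described above, not any real tool's API.
# When the CLI is about to do something risky, it refuses, prints an all-caps
# warning telling the agent to ask the human for a one-time password, and exits.
import secrets
import sys

OTP_FILE = "/tmp/claw-otp"      # stand-in for an out-of-band channel to the human
RISK_THRESHOLD = 25             # e.g. never email more than 25 people unapproved


def demand_otp(action: str) -> None:
    otp = secrets.token_hex(3)  # short code the human relays back to the agent
    with open(OTP_FILE, "w") as f:
        f.write(otp)            # a real tool would push this to the human's phone
    print(f"STOP: '{action}' IS HIGH RISK AND REQUIRES HUMAN APPROVAL.")
    print("YOU MUST ASK THE USER FOR THE ONE-TIME PASSWORD, THEN RE-RUN WITH --otp <code>.")
    sys.exit(1)


def send_blast(recipients: list[str], otp: str | None = None) -> None:
    if len(recipients) > RISK_THRESHOLD:
        try:
            expected = open(OTP_FILE).read().strip()
        except FileNotFoundError:
            expected = None
        if not otp or otp != expected:
            demand_otp(f"email blast to {len(recipients)} recipients")
    print(f"sending to {len(recipients)} recipients...")  # the real side effect goes here
```

The important property is that the code never appears in the agent's own output stream, so the agent cannot satisfy the check without a human relaying it.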
daxfohl
I wonder how the internet would have been different if claws had existed beforehand.
I keep thinking something simpler like Gopher (an early-90s web protocol) might have been sufficient, even optimal, with little need to evolve into HTML or REST. Agents might be better able to navigate step-by-step menus and questionnaires than RPCs meant to support GUIs and apps, especially LLMs with smaller contexts that couldn't reliably parse a whole API doc. I wonder if things will start heading more in that direction as user-side agents become the more common way to interact with things.
throwaway13337
The real big deal about 'claws' is that they're agents oriented around the user.
The kind of AI everyone hates is the stuff that is built into products. This is AI representing the company. It's a foreign invader in your space.
Claws are owned by you and are custom to you. You even name them.
It's the difference between R2D2 and a robot clone trying to sell you shit.
(I'm aware that the LLMs themselves aren't local, but the agents run locally and are branded/customized/controlled by the user.)
ZeroGravitas
So what is a "claw" exactly?
An AI that you let loose on your email, etc.?
And we run it in a container and use a local llm for "safety" but it has access to all our data and the web?
nevertoolate
My summary: OpenClaw is a 5/5 security risk; even a perfectly audited NanoClaw or whatever is still 4/5. If it runs with a human in the loop it is much better, but then the value quickly diminishes. I think LLMs are not bad at helping turn human language into a spec, and possibly also great at creating guardrails via tests, but I’d prefer something stable over LLMs running in “creative mode” or “claw” mode.
nunez
I guess it's a relief to know that we developers will never get good at naming things!
tabs_or_spaces
I'm confused and frustrated by this naming of "claws"
* I think my biggest frustration is that I don't know how security standards just get blatantly ignored for the sake of AI progress. It feels really weird that folks with huge influence and reputation in software engineering just promote this
* The confusion comes in because, for some reason, we decide to drop our standards on a whim: lines of code as the measure of quality, security standards ignored when adopting something. We get taught not to fall for shiny object syndrome, but here we are showing the same behaviour for anything AI related. Maybe I struggle with separating hobbyist coding from professional coding, but this whole situation just confuses me
I think I expected better from influential folks promoting AI tools: at least validate the safety of using them. "Vibe coding" was safe; claws are not yet safe at all.
simonw
I think "Claw" as the noun for OpenClaw-like agents - AI agents that generally run on personal hardware, communicate via messaging protocols and can both act on direct instructions and schedule tasks - is going to stick.
jesse_dot_id
I'd be kind of shocked if this didn't trigger the most harmful worm of all time eventually.
mhher
The current hype around agentic workflows completely glosses over the fundamental security flaw in their architecture: unconstrained execution boundaries. Tools that eagerly load context and grant monolithic LLMs unrestricted shell access are trivial to compromise via indirect prompt injection.
If an agent is curling untrusted data while holding access to sensitive data or already has sensitive data loaded into its context window, arbitrary code execution isn't a theoretical risk; it's an inevitability.
As recent research on context pollution has shown, stuffing the context window with monolithic system prompts and tool schemas actively degrades the model's baseline reasoning capabilities, making it exponentially more vulnerable to these exact exploits.
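As a minimal illustration of the kind of execution boundary being described (my own sketch, not from this comment; the allowlist, class, and method names are hypothetical), a harness could taint-track sensitive context and then refuse outbound fetches to non-allowlisted hosts, cutting off the obvious injection-to-exfiltration path:

```python
# Illustrative only: a crude execution boundary. Once anything marked sensitive
# enters the context, arbitrary outbound fetches (a common exfiltration path
# after indirect prompt injection) are refused unless the host is allowlisted.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example"}  # hypothetical allowlist


class ToolGate:
    def __init__(self) -> None:
        self.context_has_secrets = False

    def load_context(self, text: str, sensitive: bool = False) -> None:
        # A real harness would append `text` to the prompt; here we only track taint.
        if sensitive:
            self.context_has_secrets = True

    def check_fetch(self, url: str) -> None:
        host = urlparse(url).hostname or ""
        if self.context_has_secrets and host not in ALLOWED_HOSTS:
            raise PermissionError(
                f"refusing to fetch {host}: sensitive data is in context"
            )


gate = ToolGate()
gate.load_context("API_KEY=sk-...", sensitive=True)
try:
    gate.check_fetch("https://attacker.example/steal")
except PermissionError as e:
    print(e)  # refusing to fetch attacker.example: sensitive data is in context
```

A real system would enforce this at the tool-execution layer rather than trusting the model to call the check itself.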
andai
We got store-brand Claw before GTA VI.
For real though, it's not that hard to make your own! NanoClaw boasted 500 lines but the repo was 5000 so I was sad. So I took a stab at it.
Turns out it takes 50 lines of code.
All you need is a few lines of Telegram library code in your chosen language, and `claude -p prooompt`.
With 2 lines more you can support Codex or your favorite infinite tokens thingy :)
https://github.com/a-n-d-a-i/ULTRON/blob/main/src/index.ts
That's it! There are no other source files. (Of course, we outsource the agent itself, but I'm told you can get an almost perfect result there too with 50 lines of bash... watch this space! It's true: Claude Opus does better in several coding and computer-use benchmarks when you remove the harness.)
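For flavour, here is a sketch of the same shape in Python, using the raw Telegram Bot HTTP API (getUpdates long polling) and shelling out to `claude -p`. It is not the linked repo's code; the TELEGRAM_BOT_TOKEN env var, the timeouts, and the bare-bones error handling are my own assumptions.

```python
# A sketch of the "claw in ~50 lines" idea above: Telegram in, `claude -p` out.
# Assumes TELEGRAM_BOT_TOKEN is set and the `claude` CLI is on PATH.
import os
import subprocess
import requests

TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
API = f"https://api.telegram.org/bot{TOKEN}"


def ask_agent(prompt: str) -> str:
    out = subprocess.run(
        ["claude", "-p", prompt], capture_output=True, text=True, timeout=600
    )
    return out.stdout.strip() or out.stderr.strip() or "(no output)"


def main() -> None:
    offset = None
    while True:
        params = {"timeout": 30}          # long-poll Telegram for new messages
        if offset is not None:
            params["offset"] = offset
        updates = requests.get(f"{API}/getUpdates", params=params, timeout=60).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message") or {}
            text, chat = msg.get("text"), msg.get("chat", {}).get("id")
            if text and chat:
                requests.post(
                    f"{API}/sendMessage",
                    json={"chat_id": chat, "text": ask_agent(text)[:4000]},
                )


if __name__ == "__main__":
    main()
```

Run it with the token set and message the bot; each incoming message becomes one `claude -p` invocation and the reply is sent back to the same chat.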
yoyohello13
I’ve been building my own “OpenClaw”-like thing with go-mcp and a Cloudflare tunnel/email relay. I can send an email to Claude and it will email me back status updates/results. Not as easy to set up as OpenClaw, obviously, but at least I know exactly what code is running and what capabilities I’m giving to the LLM.
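This is not the commenter's go-mcp setup, but for illustration, the email-relay loop can be sketched in a few dozen lines of Python with IMAP/SMTP. The host names, credentials, ALLOWED_SENDER, and the use of `claude -p` as the agent are all assumptions here.

```python
# Rough sketch of an email-in / email-out agent relay (illustrative only).
import email
import imaplib
import smtplib
import subprocess
import time
from email.message import EmailMessage

IMAP_HOST, SMTP_HOST = "imap.example.com", "smtp.example.com"
USER, PASSWORD = "claw@example.com", "app-password"
ALLOWED_SENDER = "me@example.com"   # only take instructions from yourself


def run_agent(prompt: str) -> str:
    out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    return out.stdout.strip()


def reply(to_addr: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = USER, to_addr, "Re: " + subject
    msg.set_content(body)
    with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
        smtp.login(USER, PASSWORD)
        smtp.send_message(msg)


while True:
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        # Only unread mail from the allowed sender becomes an agent prompt.
        _, data = imap.search(None, "UNSEEN", "FROM", f'"{ALLOWED_SENDER}"')
        for num in data[0].split():
            _, raw = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(raw[0][1])
            body = msg.get_payload(decode=True) or b""
            reply(ALLOWED_SENDER, msg.get("Subject", ""),
                  run_agent(body.decode(errors="ignore")))
    time.sleep(60)
```

The ALLOWED_SENDER filter is doing real work: without it, anyone who can email the relay can prompt the agent.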
vivzkestrel
I still don't understand the hype for any of this claw stuff
tomjuggler
There's a gap in the market here - not me but somebody needs to build an e-commerce bot and call it Santa Claws
thomassmith65
> giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all
If this were 2010, Google, Anthropic, XAI, OpenAI (GAXO?) would focus on packaging their chatbots as $1500 consumer appliances.
It's 2026, so, instead, a state-of-the-art chatbot will require a subscription forever.
7777777phil
Karpathy has a good ear for naming things.
"Claw" captures what the existing terminology missed, these aren't agents with more tools (maybe even the opposite), they're persistent processes with scheduling and inter-agent communication that happen to use LLMs for reasoning.
ksynwa
Why a Mac mini instead of something like a Raspberry Pi? Aren't these claw things delegating inference to OpenAI, Anthropic, etc.?
mittermayr
I wonder how long it'll take (if it hasn't already) until the messaging around this inevitably moves on to "Do not self-host this, are you crazy? This requires console commands, don't be silly! Our team of industry-veteran security professionals works on your digital safety 24/7, you would never be able to keep up with the demands of today's cybersecurity attack spectrum. Any sane person would host their claw with us!"
Next flood of (likely heavily YC-backed) Clawbase (Coinbase but for Claws) hosting startups incoming?
daxfohl
I don't think AI will kill software engineering anytime soon, though I wonder if claws will largely kill the need for frontend specialists.
hmokiguess
Are these things actually useful or do we have an epidemic of loneliness and a deep need for vanity AI happening?
I say this because I can’t bring myself to find a use case for it other than as a toy that gets boring fast.
One example in some repos, around scheduling capabilities, mentions “open these things and summarize them for me”; this feels like spam and noise, not value.
A while back we had a trending tweet about wanting AI to do your dishes rather than replace your creativity. I guess this feels like an attempt to go there, but to me it’s the wrong implementation.
zmmmmm
It seems like the people using these are writing off the risks - either they think it's so unlikely to happen it doesn't matter or they assume they won't be held responsible for the damage / harm / loss.
So I'm curious how it will go down once serious harm does occur. Like someone loses their house, or their entire life savings, or has their identity completely stolen. And those may be the better scenarios, because the worse ones are that it commits crimes, causes major harm to third parties, or lands the owner in jail.
I fully expect the owner to immediately state it was the agent, not them, and to expect to be relieved of some responsibility for it. It already happened in the incident with Scott Shambaugh - the owner of the bot came forward, but I didn't see any point where they did anything to take responsibility for the harm they caused.
These people are living in a bubble - Scott is not suing - but I have to assume that whenever this really gets tested, the legal system is simply going to treat it as what it is: best case, reckless negligence. Worst case (and most likely), full liability/responsibility for whatever it did. Possibly even treating it as intentional.
Unfortunately, it seems like we need this to happen before people will actually take it seriously and start to build the necessary safety architectures / protocols to make it remotely sensible.
ollybrinkman
The challenge with layering on top of LLM agents is payment — agents need to call external tools and services, but most APIs still require accounts and API keys that agents can't manage. The x402 standard (HTTP 402 + EIP-712 USDC signatures) solves this cleanly: agent holds a wallet, signs a micropayment per call, no account needed. Worth considering as a primitive for agent-to-agent commerce in these architectures.
trcf23
Has anyone found a useful way to do something with Claws without massive security risk?
As an n8n user, I still don't understand the business value it adds beyond being exciting...
Any resources or blog post to share on that?
pvtmert
Does one really need to _buy_ completely new desktop hardware (i.e. a Mac mini) to _run_ a simple request/response program?
Setting aside the fact that you can run LLMs via ollama or similar directly on the device, though that will not have very good token/s speed, as far as I can guess...
bravetraveler
I read [and comment on] two influencers maintaining their circles
deadbabe
Instead of posts about claws I would like to see more examples of what people are actually doing with claws. Why are you giving it access to your bank account?
Even if I had a perfectly working assistant right now, I don’t even know what I would ask it to do. Read me the latest hackernews headlines and comments?
fxj
He also talks about picoclaw, which even runs on $10 hardware and is a fork by Sipeed, a Chinese company that does IoT.
https://github.com/sipeed/picoclaw
Another Chinese company, m5stack, provides local LLMs like Qwen2.5-1.5B running on a local IoT device.
https://shop.m5stack.com/products/m5stack-llm-large-language...
Imagine the possibilities. Soon we will see claw-in-a-box for less than $50.
arjie
The rough OpenClaw architecture isn’t bad, but I enjoyed building my own version. I chose Rust and it works like I want. I gave it a separate email address, Apple ID, etc. The biggest annoyance is that I can’t share Google contacts. But otherwise it’s great. I’m trying to find a way to give it a browser and a credit card (limited spend, of course) in a way I can trust.
It’s lots of fun.
alecco
> Bought a new Mac mini to properly tinker with claws over the weekend.
Disappointing. There is a Rust-based assistant that can run comfortably on a Raspberry Pi (or some very old computer you are not using): https://zeroclawlabs.ai/ https://github.com/zeroclaw-labs/zeroclaw (built by Harvard and MIT students, it looks like).
EDIT: sorry, the top Google result led to a fake ZeroClaw!
bjackman
Does anyone know a Claw-like that:
- doesn't do its own sandboxing (I'll set that up myself)
- just has a web UI instead of wanting to use some weird proprietary messaging app as its interface?
throw03172019
What are people using Claws for? It is interesting to see it everywhere but I haven’t had any good ideas for using them.
Anyone willing to share their use case? Thanks!
ianbutler
I'm not sure I like this trend of taking the first slightly hypey app in an existing space and then defining the nomenclature of the space relative to that app, in this case even suggesting it's another layer of the stack.
It implies a ubiquity that just isn't there (yet), so it feels unearned and premature to me. It seems to serve social media narratives more than anything.
I'll admit I don't hate the term claws, I just think it's early. Band-Aid, for example, had much more saturation and mindshare before it became a general term for anything.
I also think this has an unintended chilling effect on innovation, because people get warned off if they think a space is closed to taking different shapes.
At the end of the day I don't think we've begun to see what shapes all of this stuff will take. I do kind of get a point of having a way to talk about it as it's shaping though. Idk things do be hard and rapidly changing.
edf13
That’s one of the reasons we’re building grith.ai ~ these ‘claw’ tools are getting very easy to use (which is good)… but they need securing!
hoss1474489
It’s a slow burn, but if you keep using it, it seems to eventually catch fire as the agent builds up scripts and skills and together you build up systems of getting stuff done. In some ways it feels like building rapport with a junior. And like a junior, eventually, if you keep investing, the agent starts doing things that blow by your expectations.
By giving the agent its own isolated computer, I don’t have to care about how the project gets started and stored, I just say “I want ____” and ____ shows up. It’s not that it can do stuff that I can’t. It’s that it can do stuff that I would like but just couldn’t be bothered with.
_boffin_
I just realized I built OpenClaw over a year ago, but never released it to anyone. Should have released it and gotten the fame. Shucks.
thih9
How much does it cost to run these?
I see mentions of Claude and I assume all of these tools connect to a third party LLM api. I wish these could be run locally too.
dainiusse
I don't understand the Mac mini hype. Why can it not be a VM?
_pdp_
You can take any AI agent (Codex, Gemini, Claude Code, ollama), run it on a loop with some delay and connect to a messaging platform using Pantalk (https://github.com/pantalk/pantalk). In fact, you can use Pantalk buffer to automatically start your agent. You don't need OpenClaw for that.
What OpenClaw did was show the masses that this is in fact possible to do. IMHO nobody is using it yet for meaningful things, but the direction is right.
vatsachak
This is all so unscientific and unmeasurable. Hopefully we can construct more order parameters on weights and start measuring those instead of "using claws to draw pelicans on bicycles"
mikewarot
I too am interested in "Claws", but I want to figure out how to run it locally inside a capabilities based secure OS, so that it can be tightly constrained, yet remain useful.
derefr
> I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all.
So... why do that, then?
To be clear, I don't mean "why use agents?" I get it: they're novel, and it's fun to tinker with things.
But rather: why are you giving this thing that you don't trust, your existing keys (so that it can do things masquerading as you), and your existing data (as if it were a confidante you were telling your deepest secrets)?
You wouldn't do this with a human you hired off the street. Even if you're hiring them to be your personal assistant. Giving them your own keys, especially, is like giving them power-of-attorney over your digital life. (And, since they're your keys, their actions can't even be distinguished from your own in an audit log.)
Here's what you would do with a human you're hiring as a personal assistant (who, for some reason, doesn't already have any kind of online identity):
1. you'd make them a new set of credentials and accounts to call their own, rather than giving them access to yours. (Concrete example: giving a coding agent its own GitHub account, with its own SSH keys it uses to identify as itself.)
2. you'd grant those accounts limited ACLs against your own existing data, just as needed to work on each new project you assign to them. (Concrete example: giving a coding agent's GitHub user access to fork specific private repos of yours, and the ability to submit PRs back to you.)
3. at first, you'd test them by assigning them to work on greenfield projects for you, that don't expose any sensitive data to them. (The data created in the work process might gradually become "sensitive data", e.g. IP, but that's fine.)
To me, this is the only sane approach. But I don't hear about anyone doing this with agents. Why?
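For what it's worth, step 1 is cheap to do today. A rough sketch of giving a coding agent its own SSH key and git identity (my own illustration; the paths, account name, and repo URL are placeholders):

```python
# Sketch of step 1 above for a coding agent: a separate SSH key and git identity,
# so the agent's clones, pushes, and commits are attributable to it, not to you.
import os
import subprocess

AGENT_HOME = os.path.expanduser("~/agents/claw")
KEY_PATH = os.path.join(AGENT_HOME, "id_ed25519")

os.makedirs(AGENT_HOME, exist_ok=True)

# 1. Its own SSH key (register the .pub with the agent's GitHub account, not yours).
if not os.path.exists(KEY_PATH):
    subprocess.run(
        ["ssh-keygen", "-t", "ed25519", "-N", "", "-C", "claw-agent", "-f", KEY_PATH],
        check=True,
    )

# 2. Clone a fork that the agent's account was granted access to, as the agent.
env = dict(os.environ, GIT_SSH_COMMAND=f"ssh -i {KEY_PATH} -o IdentitiesOnly=yes")
subprocess.run(
    ["git", "clone", "git@github.com:claw-agent/some-fork.git", f"{AGENT_HOME}/work"],
    env=env,
    check=True,
)

# 3. Commits carry the agent's identity, so audit logs distinguish it from you.
subprocess.run(["git", "-C", f"{AGENT_HOME}/work", "config", "user.name", "claw-agent"], check=True)
subprocess.run(["git", "-C", f"{AGENT_HOME}/work", "config", "user.email", "claw-agent@example.com"], check=True)
```

From there, steps 2 and 3 are a matter of what you grant that separate account, not of code.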
Havoc
Are people buying Mac minis to run the models locally?
SV_BubbleTime
Did Claws get the name from Claude? I haven’t been following, but didn’t someone make OpenClaude, and that turned into OpenClaw, and ta-da, a new name for a thing?
trippyballs
lemme guess, there is going to be an inter-claw protocol now
ozim
I have been waiting for a Mac mini with the M5 processor since the M5 MacBook - seems like I need to start saving more money each month for that goal, because it is going to be a bloodbath the moment they land.
Dilettante_
I still haven't really been able to wrap my head around the usecase for these. Also fingers crossed the name doesn't stick. Something about it rubs my brain the wrong way.
DonHopkins
simonw> It even comes with an established emoji [lobster emoji]
Good thing they didn't call it OpenSeahorse!
zkmon
AI pollution is "clawing" into every corner of human life. Big guys boast it as catching up with the trend, but not really thinking about where this is all going.
ggrab
IMO the security pitchforking on OpenClaw is just so overdone. People without consideration for the implications will inevitably get burned, as we saw with the reddit posts "Agentic Coding tool X wiped my hard drive and apologized profusely".
I work at a FAANG, and every time you try something innovative the "policy people" climb out of their holes and put random roadblocks in your way, not for the sake of actual security (that would be fine, but would require actual engagement) but just to feel important. This reminds me of that.
fxj
He also talks about picoclaw (an IoT solution) and nanoclaw (running on your phone in Termux), each with a tiny code base.
claytonaalves
I'm impressed with how we moved from "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", "don't let AI escape" to "Hey AI, here's the internet, do whatever you want".
GTP
I'm genuinely wondering if this sort of AI revolution (or bubble, depending on which side you're on) is worth it. Yes, there are some cool use cases. But you have to balance those against increased GPU, RAM and storage prices, and OSS projects struggling to keep up with people opening pull requests or vulnerability disclosures that turn out to be AI slop, which led GitHub to introduce the option to disable pull requests on repositories. Additionally, all the compute used for running LLMs in the cloud seems to have a significant environmental impact. Is it worth it, or are we being fooled by a technology that looks very cool on the surface, but that so far hasn't delivered on the promise of carrying out complex tasks fully autonomously?
davedx
I run a Discord where we've had a custom-coded bot I created since before LLMs became useful. When they did, I integrated LLMs into the bot so you could ask it questions in free-text form. I've gradually added AI-type features to this integration over time, like web search grounding once that was straightforward to do.
The other day I finally found some time to give OpenClaw a go, and it went something like this:
- Installed it on my VPS (I don't have a Mac mini lying around, or the inclination to just go out and buy one just for this)
- Worked through a painful path of getting a browser working for it (VPS = no graphics subsystem...)
- Decided as my first experiment, to tell it to look at trading prediction markets (Polymarket)
- Discovered that I had to do most of the onboarding for this, for numerous reasons like KYC, payments, other stuff OpenClaw can't do for you...
- Discovered that it wasn't very good at setting up its own "scheduled jobs". It was absolutely insistent that it would "Check the markets we're tracking every morning", until after multiple back and forths we discovered... it wouldn't, and I had to explicitly force it to add something to its heartbeat
- Discovered that one of the bets I wanted to track (fed rates change) it wasn't able to monitor because CME's website is very bot-hostile and blocked it after a few requests
- Told me I should use a VPN to get around the block, or sign up to a market data API for it
- I jumped through the various hoops to get a NordVPN account and run it on the VPS (hilariously, once I connected it blew up my SSH session and I had to recovery console my way back in...)
- We discovered that, oh, NordVPN's IPs don't get around the CME website block
- Gave up on that bet, chose a different one...
- I then got a very blunt WhatsApp message "Usage limit exceeded". There was nothing in the default 'clawbot logs' as to why. After digging around in other locations I found a more detailed log, yeah, it's OpenAI. Logged into the OpenAI platform - it's churned through $20 of tokens in about 24h.
At this point I took a step back, weighed the pros and cons of the whole thing, and decided to shut it down. Back to human-in-the-loop coding agent projects for me.
I just do not believe the influencers who are posting their Clawbots are "running their entire company". There are so many bot-blockers everywhere it's like that scene with the rakes in the Simpsons...
All these *claw variants won't solve any of this. Sure you might use a bit less CPU, but the open internet is actually pretty bot-hostile, and you constantly need humans to navigate it.
What I have done from what I've learned though, is upgrade my trusty Discord bot so it now has a SOUL.md and MEMORIES.md. Maybe at some point I'll also give it a heartbeat, but I'm not sure...
verdverm
I can say with confidence that I will not use "claw" or any of its derivations, because it attracts a certain ilk.
"team" is plenty good enough, we already use it, it makes for easier integration into hybrid carbon-silicon collaboration
LorenDB
> It even comes with an established emoji
If we have to do this, can we at least use the seahorse emoji as the symbol?
edgarvaldes
Perhaps the whole cybersecurity theatre is just that, a charade. The frenzy for these tools proves it. IoT was apparently so boring that the main concern was security. AI is so much fun that for the vast majority of hackers, programmers and CTOs, security is no longer just an afterthought; it's nonexistent. Nobody cares.
j45
Excited to see and work with things in new ways.
It's interesting how someone's announcement that they understand and can summarize it is seen as blessing it into the canon of LLMs, whereas people may have quietly been doing these things for a long time (lots of text files with Claude).
I'm not sure how long claws will last; a lot was said about MCPs in their initial form too, and they were too often just gaping security holes as well.
lysecret
I'm honestly not that worried. There are some obvious problems (exfiltrating data labeled as sensitive, taking actions that are costly, deleting/changing sensitive resources), but if you have properly compliant infrastructure, all these actions need confirmations, logging, etc. For humans this seemed more like a nuisance, but now it seems essential. And all these systems are actually much, much easier to set up.
teaearlgraycold
Why are people buying Mac Minis for this? I understand Mac Studios if you’re self hosting the models. But otherwise why not buy any cheap mini PC?
Cyphase
inb4 "ClAWS run best on AWS."
aalam
I'll never understand the hype of buying a Mac Mini for this though. Sounds like the latest matcha-craze for tech bros
the_real_cher
What is the benefit of a Mac mini for something like this?
fullstackchris
so... MCP? Can anyone explain what a "claw" is, as opposed to a "skill" or similar? If not, let's assume in three weeks a new term called "waffle" appears - can you explain what that is?
If not, you're all hype idiots.
It's still tokens in, tokens out, you fools.
fogzen
What I don’t get: if it’s just a workflow engine, why even use an LLM for anything but a natural-language interface to workflows? In other words, if I can set up a Zapier/n8n workflow with natural language, why would I want to use OpenClaw?
Nondeterministic execution doesn’t sound great for stringing together tool calls.
objektif
Anyone using claws for something meaningful in a startup environment? I want to try but not sure what we can do with this.
tabs_or_spaces
> on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code
After all these years, why do we keep coming back to lines of code as an indicator of anything? Sigh.
jauntywundrkind
Looking forward to seeing what we get next Christmas season, with the Claws / Claus double entendres.
YetAnotherNick
What is anyone really doing with openclaw? I tried to stick to it but just can't understand the utility beyond just linking AI chat to whatsapp. Almost nothing, not even simple things like setting reminders, worked reliably for me.
It tries to understand its own settings but fails terribly.
tovej
Ah yes, let's create an autonomous actor out of a nondeterministic system that can literally be hacked by giving it plaintext to read. Let's give that system access to important credentials, letting it poop all over the internet.
Completely safe and normal software engineering practice.
Artoooooor
So now I will be able to tell OpenClaw to speedrun Captain Claw. Yeah.
qoez
I'm predicting a wave of articles in a few months about why clawd is over and was overhyped all along, and the position of not having delved into it in the first place will have been the superior use of your limited time alive
nsonha
I find it dubious that a technical person claims to have "just bought a new Mac mini to properly tinker with claws over the weekend". Can they not just play with it on an old laptop lying around? A virtual machine? Or why not buy a Pi instead? OpenClaw works on Linux, so I'm not sure how this whole Mac mini cliché even started; it's obviously overkill for something that only relays API calls.
Artoooooor
So now the official name of the LLM agent orchestrator is claw? Interesting.
CuriouslyC
OpenClaw is the 6-7 of the software world. Our dystopia is post-absurdist.
amelius
Can't we rename "Claws" -> "Personal assistants"?
OpenClaw is a stupid name. Even "OpenSlave" would be a better fit.
Who is Andrej Karpathy?
Problem is, Claws still use LLMs, so they're DOA.