OpenClaw isn't fooling me. I remember MS-DOS

115 points | 113 comments | 4 hours ago
piker

Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks? I just personally have zero interest in letting an AI into my comms and see no value there whatsoever. Probably negative.

stared

I don’t get this OpenClaw hype.

When people vibe-code, usually the goal is to do something.

When I hear about people using OpenClaw, the goal usually seems to be… using OpenClaw. At the cost of a Mac Mini, safety (deleted emails and the like), and security (the LiteLLM attack).

repelsteeltje

One could argue that the discussion is once again about tech debt.

Both OpenClaw and MS-DOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MS-DOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers; OpenClaw is meant to appeal to YOLO/FOMO sentiments.

And of course, neither will be able to evolve to fit its eventual real-world context. But for some time (much longer than intended), that's where they'll stay.

nryoo

$180/month to control your lights and music. A Raspberry Pi + Home Assistant does this for $0/month and doesn't exfiltrate your home network topology to a third-party API. The value proposition only makes sense if your time is worth more than your privacy.

pantulis

This weekend I installed Hermes on my computer. My M4 Max Studio started spinning its fans as if it wanted to fly, so I went for some cloud-hosted models. The thing works as advertised, but token consumption is through the roof. Of course, YMMV depending on the LLM you choose.

But my main takeaway is that, from a security standpoint, this is a ticking bomb. Even under Docker, for these things to be useful there is no way around giving them credentials and permissions, which are stored on your computer where the agent can access them. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter), and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents of unknown origin. And to top it off, don't forget that the agent itself can malfunction and, say, delete all your email inboxes by mistake.
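Short of full isolation, one cheap mitigation is to keep ambient credentials out of the agent's process in the first place. A minimal sketch using `env -i` — the variable names are made up, and `sh -c 'echo …'` stands in for a real agent binary:

```shell
# Pretend credentials exported somewhere in your shell profile.
export TELEGRAM_TOKEN='super-secret'
export OPENROUTER_API_KEY='demo-key'

# Launch the agent with an *empty* environment plus an explicit allowlist.
# The child process sees only what we pass; TELEGRAM_TOKEN (and any SSH
# agent sockets, cloud creds, etc.) stays invisible to it.
env -i \
    HOME="$HOME" \
    PATH="$PATH" \
    OPENROUTER_API_KEY="$OPENROUTER_API_KEY" \
    sh -c 'echo "agent sees: OPENROUTER_API_KEY=$OPENROUTER_API_KEY TELEGRAM_TOKEN=${TELEGRAM_TOKEN:-<unset>}"'
# -> agent sees: OPENROUTER_API_KEY=demo-key TELEGRAM_TOKEN=<unset>
```

This does nothing about files on disk the agent can still read — which is the deeper problem — but it does shrink the ambient-credential surface.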

Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.

ymolodtsov

I run OpenClaw on a $4 VPS with read-only access to most of the accounts. Just this morning I asked it to confirm how exactly our company is paying for a particular service and whether we ever switched to the vendor directly. In about 30s it found all the necessary emails and provided me with a timeline.

It's like an actual assistant. Most of this can be done inside ChatGPT/Claude/Codex now. Their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.

tnelsond4

I think we should be giving AI access to something like TempleOS, where there are no permissions, everything runs unrestricted, and you can rewrite the OS while it's running.

saidnooneever

DOS didn't have certain protections because the hardware it targeted didn't have them. UNIX ports to the same machines had no such protections either. The 8086 had no CPU rings, no virtual memory, and no other features to help there.

Memory isolation is enforced by the MMU; it is hardware, not software.

Maybe you were thinking of Linux, which came later and landed in a soft 32-bit x86 bed, with CPU rings and page tables/virtual memory ("protected mode", named for that reason…).

That being said, OpenClaw is criminally bad, but as such, fits well in our current AI/LLM ecosystem.

nopurpose

I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to take a closer look at.

I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment it is a win for me.
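A cheap sketch of that per-tool scoping, independent of any particular project: put a wrapper on the agent's PATH that hands the GitHub token to `gh` and to nothing else. The token path and filenames here are invented for illustration:

```shell
# Store the token where only the wrapper reads it (demo value).
mkdir -p "$HOME/.config/agent"
printf 'tok-123\n' > "$HOME/.config/agent/gh-token"
chmod 600 "$HOME/.config/agent/gh-token"

# gh-scoped: GH_TOKEN exists only in the environment of the exec'd gh
# process. npm (or anything else the agent spawns) never sees it.
cat > gh-scoped <<'EOF'
#!/bin/sh
GH_TOKEN="$(cat "$HOME/.config/agent/gh-token")" exec gh "$@"
EOF
chmod +x gh-scoped
```

The agent is then given `gh-scoped` instead of a globally exported token; the blast radius of a compromised dependency shrinks to whatever `gh` itself can do.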

teach

This isn't especially related to the article, but when I was at university my first assembly class taught Motorola 680x0 assembly. I didn't own a computer (most people didn't), but my dorm had a single Mac that you could sign up to use, so I did some assignments on that.

Problem is, I was just learning and the Mac was running System 7. Which, like MS-DOS, lacked memory protection.

So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.

I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.

Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.

To this day, automatic memory management still feels a little luxurious.

falense

Very cool project! I am working on something similar myself; I call mine TriOnyx. It's based on Simon Willison's lethal trifecta. You get a star from me :D

https://www.tri-onyx.com/

LudwigNagasena

And I remember OSes today, 1 year ago, 5 years ago, 10 years ago, etc. Security was always a problem. People blindly delegate admin privileges to scripts and programs from the internet all the time. It’s hard to make something secure and usable at the same time. It’s not like agent harnesses suddenly broke all adopted best practices around software and sandboxing.

I remember Apple introducing sandboxing for Mac apps and extending the deadlines because no one was implementing it. AFAIK, many developers still don't release their apps there simply because of how limiting it is.

Ironically, the author suggests installing his software by curl'ing it and piping it straight into sh.

tomasol

I believe codegen must be separated from the runtime. Every time you ask the AI for a new task, the result must be deployed as a separate app with the least privileges possible, potentially with manual approvals while the app is executing. So essentially you need a workflow engine.
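A workflow engine is the big-hammer version; the smallest version of the manual-approval idea is just a gate in front of every command the generated app wants to run. A hypothetical sketch:

```shell
# Minimal manual-approval gate: show the human exactly what the agent
# wants to execute, and run it only if they type "y".
run_with_approval() {
  printf 'Agent wants to run: %s\nApprove? [y/N] ' "$*"
  read -r answer
  [ "$answer" = "y" ] && "$@"
}

# Demo: stdin supplies the approval; the command runs only on "y".
echo y | run_with_approval echo "deploying report-generator"
```

A real engine would add audit logs, per-step privileges, and timeouts, but the approval boundary is the same shape.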

Schlagbohrer

Why am I totally unable to understand this post? I have been a computer user for a long time, but this has way too much jargon for me.

sriku

"Fast" is not always a virtue and "efficiency" is not always the only consideration.

trilogic

Great article. I've been skeptical since the beginning of these Python "CLI" agents. I'd been looking for a local, AI-driven agentic GUI that offers real privacy but couldn't find one anywhere. Finally, what we'd call a real local CLI-agent pipeline, AI-driven with a llama.cpp engine, is done: just pure Bash and C++, model isolated, no HTTP, no Python, no API, no proprietary models. There is a native version (in C++) and a community version in Electron. Is Electron good enough to protect users, wrapping all the rest? This is exciting.

pointlessone

Wow. Much security.

I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. nvm

TacticalCoder

> curl-pipe-sh as well. The installer verifies the release signature with ssh-keygen against an embedded key, fail-closed on every failure path. The installer’s own SHA is pinned in the README for readers who want to check the script before piping.

Packages shipping as part of Linux distros are signed. Official Emacs packages (though not those bundled with the default Emacs install) are all signed too.

I'm thankful to see some projects, released outside of distros, that are signed by the author's private key. Some of these keys I have saved (and archived) for years.

I've got my own OCI containers automatically verifying signed hashes against known authors' past public keys (i.e. I don't blindly trust a brand-new signing key the way I trust one I know the author has been using for ten years).

Adding SHA hash pinning to "curl into bash" is a first step, but it's not sufficient.

Properly shipped software isn't just pinning hashes in shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned JavaScript site.

Proper software releases are signed. And they're not "signed" by the 'S' in HTTPS, as in "that Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".
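For concreteness, the ssh-keygen flow the quoted installer describes can be reproduced end-to-end with OpenSSH alone. A throwaway-key sketch — every filename here is invented, not the project's real layout:

```shell
# The artifact to be distributed (stand-in for a real installer).
echo 'echo hello from installer' > install.sh

# Maintainer side: generate a signing key and sign the artifact.
ssh-keygen -q -t ed25519 -N '' -f release_key
ssh-keygen -Y sign -f release_key -n file install.sh   # -> install.sh.sig

# User side: an allowed_signers file maps an identity to the public key
# (the "embedded key" would be shipped inside the installer script).
printf 'maintainer@example.com %s\n' "$(cut -d' ' -f1-2 release_key.pub)" \
    > allowed_signers

# Verify before running; a non-zero exit here must abort the install.
ssh-keygen -Y verify -f allowed_signers -I maintainer@example.com \
    -n file -s install.sh.sig < install.sh
```

Verification fails closed: any byte changed in `install.sh` makes the last command exit non-zero — which is exactly the property a pinned-but-unsigned hash served from the same compromised site cannot give you.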

Is it that hard to understand that a hash signed (and then pinned) with a private key kept on an airgapped computer is harder to compromise than an online server?

We see major hacks nearly daily now. The cluestick is hammering your head, constantly.

When shall the clue eventually hit the curl-basher?

Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a ten-year-old private key that has never left an airgapped computer".

Here, a fucking cluestick for the leftpad'ers:

https://wiki.debian.org/Keysigning

(btw Debian signs the hash of the testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)
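Debian's scheme, shrunk to a throwaway demo: sign the checksum file rather than each artifact, so one long-lived key vouches for many hashes. This assumes `gpg` and GNU coreutils are on PATH; the key, names, and files are all made up:

```shell
# Throwaway keyring and signing key (the real thing is a long-lived,
# widely cross-signed key that never leaves offline storage).
export GNUPGHOME="$(mktemp -d)"
gpg -q --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Release Team <release@example.com>' ed25519 sign never

# Release side: pin the artifact hashes, then sign the checksum file.
echo 'echo hello' > install.sh
sha256sum install.sh > SHA256SUMS
gpg -q --batch --detach-sign -o SHA256SUMS.gpg SHA256SUMS

# Verifier side: check the signature first, then the hashes it vouches for.
gpg -q --verify SHA256SUMS.gpg SHA256SUMS && sha256sum -c SHA256SUMS
```

The point of the split: a pwned web server can swap `install.sh` and `SHA256SUMS` together, but it cannot forge `SHA256SUMS.gpg` without the offline private key.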