crazygringo

> This is exactly what Apple Intelligence should have been... They could have shipped an agentic AI that actually automated your computer instead of summarizing your notifications. Imagine if Siri could genuinely file your taxes, respond to emails, or manage your calendar by actually using your apps, not through some brittle API layer that breaks every update.

And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.

Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.

Right now we are still in very, very, very early days.

fooker

> I suspect ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it

Ten years from now, there will be no ‘agent layer’. This is like saying Microsoft missed its shot at social media by failing to capitalize on bulletin boards.

IcyWindows

According to https://1password.com/blog/from-magic-to-malware-how-opencla..., the top skill is/was malware.

It's obviously broken, so no, Apple Intelligence should not have been this.

keyle

> people are buying Mac Minis specifically to run AI agents with computer use. They’re setting up headless machines whose sole job is to automate their workflows. OpenClaw—the open-source framework that lets you run Claude, GPT-4, or whatever model you want to actually control your computer—has become the killer app for Mac hardware

That makes little sense. Buying a Mac mini would imply wanting the unified memory shared with the GPU for running models locally, but then they're citing Claude/GPT-4, which are cloud models with no local GPU requirements.

Is the author implying mac minis for the low power consumption?

throwaway613746

OpenClaw is a security NIGHTMARE - Apple would never.

ed_mercer

The author is a bit extreme in expecting Apple to have done something as complex as OpenClaw; not even OpenAI or Anthropic have really done it yet.

However, this does not excuse Apple for sitting with their thumbs up their asses all these years.

TheRoque

This post has it completely backwards: people are buying Apple hardware because Apple doesn't shove AI down everyone's throat, unlike Microsoft. And in a few weeks OpenClaw will be outdated or deemed too insecure anyway. It will never be a long-term product; it's just some crazy experiment for the memes.

notatoad

this seems obviously true, but at the same time very very wrong. openclaw / moltbot / whatever it's called today is essentially a thought experiment of "what happens if we just ignore all that silly safety stuff"

which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.

chatmasta

> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”

It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.

fnordpiglet

After having spent a few days with OpenClaw I have to say it’s about the worst software I’ve worked with ever. Everyone focused on the security flaws but the software itself is barely coherent. It’s like Moltbook wrote OpenClaw wrote Moltbook in some insidious wiggum loop from hell with no guard rails. The commit rate on the project reflects this.

JumpCrisscross

> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it

Why is Apple's hardware being in demand for a use that undermines its non-Chinese competition a sign of missing the ball versus validation for waiting and seeing?

avaer

> An AI agent that clicks buttons.

Are people's agents actually clicking buttons (visual computer use) or is this just a metaphor?

I'm not asking if CU exists, but rather is this literally the driver of people's workflows? I thought everyone is just running Ralph loops in CC.

For an article making such a bold technological/social claim about a trillion-dollar company, this seems a strange thing to be hand-wavy about.

varenc

Apple has a very low tolerance for reputational liabilities. They aren't going to roll out something that does something bad 0.01% of the time, because with 100M devices that's 10,000 affected people, with huge potential for bad PR that damages the brand and trust.
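The arithmetic behind this comment is worth making explicit; a throwaway sanity check (my own sketch, not from the article):

```python
# Back-of-envelope: a tiny per-device failure rate across a huge
# install base still produces a large absolute number of victims.
def affected_users(install_base: int, failure_rate: float) -> int:
    """Expected number of users hit by a rare failure."""
    return round(install_base * failure_rate)

# 0.01% failure rate across 100M devices:
print(affected_users(100_000_000, 0.0001))  # prints 10000
```

That is why a failure mode that sounds negligible in percentage terms is still a brand-level risk at Apple's scale.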

dcreater

This is Yellow Pages-type thinking in the age of the internet. No one is going to own an agentic layer (see the multitude of already-irrelevant platforms like the OpenAI Agent SDK or Google A2A). No one is going to own a new app store (GPTs are already dead). No one is going to own models (FOSS models are extremely capable today). No one is going to own inference (data centers will never be as cost-effective as that old MacBook collecting dust, which is perfectly capable of running a 1B model that competes with GPT-3.5 at everything it was already good at, like writing high school essays, recipes, etc.). The only thing that is sticking is Markdown (SKILLS.md, AGENTS.md).

This is because the simple reality of this new technology is that it is not a local maximum. Any supposed wall you attempt to put up will fail: a real estate website closes its API? Fine, a CUA+VLM makes it trivial to navigate/extract/use it anyway. We will finally get back to the right solution of protocols over platforms, file over app, local over cloud; you know, the way things were when tech was good.

P.S.: You should immediately call BS when you see outrageous and patently untrue claims like "Mac minis are sold out all over..." (I checked my Best Buy in the heart of SF and they have stock) or "it's all over Reddit, HN" (the only thing that is all over Reddit is unanimous derision toward OpenClaw and its security nightmares).

Utterly hate the old-world mentality in this post. Looked up the author and, of course, he's a VC.

Sharlin

Apparently APIs are now a brittle way for software to use other software and interpreting and manipulating human GUIs with emulated mouse clicks and keypresses is a much better and perfectly reasonable way to do it. We’re truly living in a bizarro timeline.

RyanShook

In terms of useful AI agents, Siri/Apple Intelligence has been behind for so long that no one expects it to be any good.

I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.

My bet is that Google gets to an actually useful AI assistant before Apple because we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren’t overly concerned about user privacy or security.

janalsncm

I think there is a contradiction between

> the open-source framework that lets you run Claude, GPT-4, or whatever model you want to

And

> Here’s what people miss about moats: they compound

Swapping an OpenAI for an Anthropic or open weight model is the opposite of compounding. It is a race to the bottom.
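The point about swappability is easy to illustrate. In a framework where the model is just a configuration field, switching providers is a one-line change, which is exactly why it compounds no moat (a hypothetical sketch; the names here are mine, not OpenClaw's actual API):

```python
# Hypothetical sketch: when the backing model is pure config,
# no single provider accumulates lock-in.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentConfig:
    provider: str  # e.g. "openai", "anthropic", "local"
    model: str


def swap_provider(cfg: AgentConfig, provider: str, model: str) -> AgentConfig:
    # Nothing else in the agent changes; the provider is fungible.
    return AgentConfig(provider=provider, model=model)


cfg = AgentConfig("openai", "gpt-4")
cfg = swap_provider(cfg, "anthropic", "claude-3")
print(cfg.provider)  # prints anthropic
```

When the switching cost is this low, providers compete only on price and capability, which is the race to the bottom the comment describes.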

> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”

From what I hear OC is not like that at all. People are going to want a model that reliably does what you tell it to do inside of (at a minimum) the Apple ecosystem.

daifi

OpenClaw shows what happens when users own their agents. There are a lot of other projects springing up that let users sell access to their unused computing power, like Daifi.ai [0]. These products tackle the infrastructure side: decentralizing AI compute itself so it's not locked up by a few cloud providers. Same ethos, different layer.

0. https://www.daifi.ai/

AlexCoventry

...And it will be, now that Apple has partnered with OpenAI. The foundation of OpenClaw is capable models.

terminalbraid

Expensive and overhyped?

b1temy

> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it

I don't pretend to know the future (nor do I believe anyone else who claims to), but I think the opposite has a good chance of happening too: the hype over "AI" dies down, the bubble bursts, and the current overvaluation (IMO at least; I still think it's useful as a tool, just overhyped by many who don't understand it) gets corrected by the market. Then people will look back and see this as the moment Apple dodged a bullet (or, more realistically, won't think about it at all).

I know you can't directly compare different situations, but I wonder if comparisons can be made with the dot-com bubble. There was similar hype 20-30 years ago, with claims of being just a year or two away from "being able to watch TV over the internet" or "doing your shopping on the web" or "having real-time video calls online", all of which did eventually come true, but only much, much later, after a crash from inflated expectations and then slower, steadier growth.*

* Not that I think some claims about "AI" will ever come true, especially the more outlandish ones, such as full-length movies made from a prompt with the same quality as a Hollywood director's.

I don't know what a potential "breaking point" for "AI" would be. Perhaps a major security breach, even _worse_ computer hardware prices than now, politics, a major international incident, the environmental impact becoming more apparent, companies starting to monetize their "AI" more aggressively, consumers realizing the limits of "AI"; I have no idea. And perhaps I'm just wrong, and this is the age we live in for the foreseeable future. After all, more than one of the things I listed has already happened, and nothing changed.

jesse_dot_id

It's just the juiciest attack surface of all time so I vehemently disagree.

rTX5CMRXIfFG

This article talks about the AI race as if it's over when it has only just started. And really, an opinion on the entire market based on a few Reddit posts?

Author spoke of compounding moats, yet Apple’s market share, highly performant custom silicon, and capital reserves just flew over his head. HN can have better articles to discuss AI with than this myopic hot take.

oxqbldpxo

This! Def a game changer for apple.

ozten

Trust takes years to build, seconds to break, and forever to repair.

yalogin

Apple doesn’t enable 3rd party services without having extreme control over the flow and without it directly benefiting their own bottom line.

cadamsdotcom

Unfortunately by not doing that they only managed to be the most valuable company ever.

So yeah, the market isn’t really signaling companies to make nice things.

ankit219

> And they would have won the AI race not by building the best model, but by being the only company that could ship an AI you’d actually trust with root access to your computer.

And the very next line (because I want to emphasize it):

> That trust—built over decades—was their moat.

This just ignores the history of OS development at Apple. The entire trajectory has been toward permissions and sandboxing, even when it annoys users to no end. Giving an LLM (any LLM, not just a trusted one, according to the author) root access when it's susceptible to hallucinations, jailbreaks, etc. goes against everything Apple has worked for.

And even then the reasoning is circular: "So you've built all this trust; now go ahead and destroy it on this thing that works and feels good to me, but could occasionally fuck up in a massive way."

Not defending Apple, but this article is so detached from reality that it's hard to overstate.

orliesaurus

How much EXTRA revenue do you think Apple made from people buying Mac minis for this hype?

yoyohello13

If you can’t see why something like OpenClaw is not ready for production I don’t know what to tell you. People’s perceptions are so distorted by FOMO they are completely ignoring the security implications and dangers of giving an LLM keys to your life.

I’m sure Apple et al. will eventually have stuff like OpenClaw, but expecting a major company to put out something so unpolished, with such major unknowns, is just asinine.

Aurornis

OpenClaw is a very fun project, but it would be considered a dumpster fire if any mainstream company tried to sell it. Every grassroots project gets evaluated on a completely different scale than commercial products. Trying to compare an experimental community project to a hypothetical commercial offering doesn't work.

> They could have charged $500 more per device and people would have paid it.

I sincerely doubt that. If Apple charged $500 for a feature it would have to be completely bulletproof. Every little failure and bad output would be harshly criticized against the $500 price tag. Apple's high prices are already a point of criticism, so adding $500 would be highly debated everywhere.

camillomiller

“People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things.”

Steve Jobs

raincole

> If you browse Reddit or HN, you’ll see the same pattern: people are buying Mac Minis specifically to run AI agents with computer use.

Saved you a click. This is the premise of the article.

semiquaver

I genuinely don't understand this take. What makes OP think that the company that failed so utterly to even deliver mediocre AI -- siri is stuck in 2015! -- would be up to the task of delivering something as bonkers as Clawdbot?

fortran77

No no no. It's too risky, cutting-edge, and dangerous. While fun to play with, it's not something I'd trust with my 92-year-old mother with dementia (who still uses an iPad).