Ultimately, AI is meant to replace you, not empower you.
1 - This exoskeleton analogy might hold true for a couple more years at most. While it is comforting to suggest that AI empowers workers to be more productive, as with chess, AI will soon plan better, execute better, and have better taste. Human-in-the-loop will just be far worse than letting the AI do everything.
2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of the human labor market (i.e., your wage). First comes the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and your ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Please talk me out of this...
alphazard
There's an undertone of self-soothing "AI will leverage me, not replace me", which I don't agree with, especially in the long run, at least in software.
In the end it will be the users sculpting formal systems like playdoh.
In the medium run, "AI is not a co-worker" is exactly right.
The idea of a co-worker will go away.
Human collaboration on software is fundamentally inefficient.
We pay huge communication/synchronization costs to eke out mild speedups on projects by adding teams of people.
Software is going to become an individual sport, not a team sport, quickly.
The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI.
I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans.
Frieren
100% exoskeleton is a great analogy.
An exoskeleton is something really cool in the movies that has zero reason to be built in reality, because there are far more practical approaches.
That is why we have all kinds of vehicles, or programmable robot arms that do the job by themselves; if you need a human at the helm, one just adds a remote controller with levers and buttons. But making a gigantic human-shaped robot with a normal human inside is just impractical for any real commercial use.
oxag3n
> We're thinking about AI wrong.
And this write-up is no exception.
Why even bother thinking about AI, when the Anthropic and OpenAI CEOs openly tell us what they want (quote from a recent Dwarkesh interview): "Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum."
So save the thinking and listen to the intent: replace 90% of SWEs in the near future (6-12 months, according to Amodei).
ilaksh
Who is actually trying to use a fully autonomous AI employee right now?
Isn't everyone using agentic copilots or workflows with agent loops in them?
It seems that they are arguing against doing something that almost no one is doing yet.
But actually, the AI Employee is coming by the end of 2026, and the fully autonomous AI Company sometime in 2027.
Many people have been working on versions of these things for a while. But again, for actual work, 99% are still using copilots or workflows with well-defined agent-loop nodes. As far as I know.
As a side note, I have found that a supervisor agent with a checklist can fire off subtasks, and that works about as well as a workflow defined in code.
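Roughly, the pattern is just this kind of loop; a minimal sketch, where llm() and run_subagent() are hypothetical stand-ins rather than any real API:

    # Minimal sketch of a checklist-driven supervisor agent.
    # llm() and run_subagent() are hypothetical stand-ins.
    def supervise(task, llm, run_subagent):
        # Ask the model to turn the task into an ordered checklist.
        checklist = llm(f"Break this task into an ordered checklist of subtasks:\n{task}")
        results = []
        for item in checklist.splitlines():
            if item.strip():
                # Fire off each checklist item as its own subtask.
                results.append(run_subagent(item.strip()))
        # Let the supervisor verify the work before finishing.
        return llm(f"Task: {task}\nSubtask results: {results}\nIs anything unfinished?")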
But anyway, what's holding back the AI Employee are things like really effective long-term context and memory management, and some level of interface generality, like browser or computer use and voice. Computer use makes context management even more difficult. Another aspect is token cost.
But I assume that within the next 9 months or so, more and more people will figure out how to build agents that write their own workflows and manage their own limited context and memory effectively across Zoom meetings, desktops, SSH sessions, etc.
This will likely be a feature set from the model providers themselves. It may actually leverage continual-learning abilities baked into the model architecture itself. I doubt that is a full year away.
ozgung
For some reason AIs love to generate "Not X, but Y" and "Not only X, but Y" sentences; it's as if they are template-based.
hintymad
In the latest interview with Claude Code's author (https://podcasts.apple.com/us/podcast/lennys-podcast-product...), Boris said that writing code is a solved problem. This brings me to a hypothetical question: if engineers stop contributing to open source, would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?
capex
The exoskeleton analogy seems fitting where my work mode is configurable, moving from tentative to trusting. But the AI needs to be explicitly set up to learn my every action. Currently this is a chore at best, and just impossible in other cases.
finnjohnsen2
I like this. This is an accurate state of AI at this very moment for me. The LLM is (just) a tool which is making me "amplified" for coding and certain tasks.
I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or say it to amp stock prices) -- and they like to tell everyone about this future too. I just don't see it.
tiku
So true. It is an exoskeleton for all my tedious tasks. I don't want to make an HTML template; I just want to type "make that template like the one on that page, but with this and this data".
josefrichter
AI most definitely is a coworker already. You do delegate some work for which you previously had to hire humans.
leecommamichael
LLMs are a statistical model of token relationships, and a weighted-random retrieval from a compressed view of those relations. It's a token generator. Why make this analogy?
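To make "weighted-random" concrete, here is a toy sketch of temperature sampling over made-up next-token scores; no real model involved:

    # Toy illustration of weighted-random token generation:
    # softmax over next-token scores, then a weighted draw.
    import math, random

    def sample_next_token(scores, temperature=0.8):
        exp = {tok: math.exp(s / temperature) for tok, s in scores.items()}
        total = sum(exp.values())
        weights = [e / total for e in exp.values()]
        return random.choices(list(exp), weights=weights)[0]

    print(sample_next_token({"cat": 2.0, "dog": 1.5, "the": 0.1}))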
delichon
If we find an AI that is truly operating as an independent agent in the economy without a human responsible for it, we should kill it. I wonder if I'll live long enough to see an AI terminator profession emerge. We could call them blade runners.
m_ke
It's the new underpaid employee that you're training to replace you.
People need to understand that we have the technology to train models to do anything you can do on a computer; the only thing that's missing is the data.
If you can record a human doing anything on a computer, we'll soon have a way to automate it.
protocolture
Petition to make "AI is not X, but Y" articles banned or limited in some way.
fdefitte
The exoskeleton framing is comforting but it buries the real shift: taste scales now. Before AI, having great judgment about what to build didn't matter much if you couldn't also hire 10 people to build it. Now one person with strong opinions and good architecture instincts can ship what used to require a team.
That's not augmentation, that's a completely different game. The bottleneck moved from "can you write code" to "do you know what's worth building." A lot of senior engineers are going to find out their value was coordination, not insight.
waffletower
Marshall McLuhan would probably have agreed with this belief -- the idea that technologies are essentially prosthetic was one of the core tenets of his general philosophy, and it is the essential thesis of his work "Understanding Media: The Extensions of Man". AI is typically assigned otherness and separateness in recent discourse, rather than being considered a directed tool (extension/prosthesis) under our control.
qudat
It’s a tool like a linter. It’s a fancy tool, but calling it anything more than a tool is hype
veunes
What's interesting to me is that most real productivity gains I've seen with AI come from this middle ground: not autonomy, not just tooling, but something closer to "interactive delegation"
yifanl
AI is not an exoskeleton, it's a pretzel: It only tastes good if you douse it in lye.
ceving
AI is like sugar. It tastes delicious, but in high doses it causes diabetes.
datakazkn
The exoskeleton framing resonates, especially for repetitive data work. Parts where AI consistently delivers: pattern recognition, format normalization, first-draft generation. Parts where human judgment is still irreplaceable: knowing when the data is wrong, deciding what 'correct' even means in context, and knowing when to stop iterating.
The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.
Havoc
The amount of "It's not X, it's Y" commentary suggests to me that A) nobody knows and B) there is a solid chance this ends up being either all true or all false.
Or, put differently: we've managed to hype this to the moon, yet complete failure (see the studies finding zero impact on productivity) seems plausible. And, similarly, "kills all jobs" seems plausible.
That's an insane number of conflicting opinions being held in the air at the same time.
h4kunamata
Neither; AI is a tool to guide you in improving your process in any way, shape, or form.
The problem is people using AI to do the heavy processing, making them dumber.
Technology itself was already making us dumber; I mean, Tesla drivers don't even drive anymore, or know how, because the car does everything.
Look how company after company is either being breached or having major issues in production because of heavy dependency on AI.
eeixlk
Tech workers were pretty anti-union for a long time, because we were all so excellent we were irreplaceable. I wonder if that will change.
mizuki_akiyama
AI article this, AI article that. The front page of this website is just all about AI. I’m so tired of this website now. I really don’t read it anymore because it’s all the same stuff over and over. Ugh.
PeterStuer
Neither. The closest analogy for you and the AI is those 'self-driving' test subjects who had to sit in the driver's seat so that compliance boxes could be checked and there was someone to blame whenever somebody got hit.
lmf4lol
I agree. I call it my Extended Mind, in the spirit of Clark (1).
One thing I realized while working a lot with openClaw over the last few weeks is that these agents are becoming an extension of myself. They are tools that quickly became a part of my being. I outsource a lot of work to them; they do stuff for me, help me, and support me, and therefore make my (work-)life easier and more enjoyable. But it's me in the driver's seat.
(1) https://www.alice.id.tue.nl/references/clark-chalmers-1998.p...
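I agree!
“Why LLM-Powered Programming is More Mech Suit Than Artificial Human”
https://matthewsinclair.com/blog/0178-why-llm-powered-progra...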
I like this analogy, and in fact I have used it for a totally different reason: why I don't like AI.
Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ... No, and probably not.
I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just... not fun.
Someone going to the gym isn't trying to lift more or run faster, but to improve and to enjoy it. Not using AI for coding gives me the same outcome.
nnevatie
"It's not X, it's Y" detected.
narmiouh
You can't run at 10x in an exoskeleton, and you can't move your hand to write any faster using one; the analogy doesn't fit.
gregoriol
I see it more like the tractor in farming: it improved the work of one person, but removed the work of the many other people who were in the fields doing things manually.
nancyminusone
If AI is an exoskeleton, that would make the user a crab.
bGl2YW5j
I like the analogy and will ponder it more. But it didn't take long before the article started spruiking Kasava's amazing solution to the problem they just presented.
euroderf
In the language of Lynch's Dune, AI is not an exoskeleton, it is a pain amplifier. Get it all wrong more quickly and deeply and irretrievably.
Adexintart
This is a useful framing. The exoskeleton metaphor captures it well — AI amplifies what you can already do, it doesn't replace the need to know what to do. I've found the biggest productivity gains come from well-scoped tasks where you can quickly verify the output.
xlerb
Humans don’t have an internal notion of “fact” or “truth.” They generate statistically plausible text.
Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.
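A minimal sketch of that kind of scaffolding, where generate() and validate() are hypothetical stand-ins for a model call and a retrieval- or tool-backed checker:

    # Sketch of a validation layer: regenerate until a grounded check passes.
    def reliable_answer(question, generate, validate, retries=3):
        prompt = question
        for _ in range(retries):
            draft = generate(prompt)        # fluent, but possibly wrong
            ok, feedback = validate(draft)  # retrieval/tool-backed check
            if ok:
                return draft
            prompt = f"{question}\nPrevious draft failed validation: {feedback}"
        raise RuntimeError("no draft passed validation")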
The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.
stpedgwdgfhgdd
OR - OR? And - And
Exoskeleton AND autonomous agent, where the shift is moving to autonomous gradually.
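I said this in 2015... just not as well!
"Automation Should Be Like Iron Man, Not Ultron" https://queue.acm.org/detail.cfm?id=2841313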
> “The AI handles the scale. The human interprets the meaning.”
Claude is that you? Why haven’t you called me?
ottah
Make centaurs, not unicorns. The human is almost always going to be the strongest element in the loop, and the most efficient. Augmenting human skill will always outperform present day SOTA AI systems (assuming a competent human).
ed_mercer
You can't write "autonomous agents often fail" and then advertise "AI agents that perform complex multi-step tasks autonomously" on the same site.
random3
I guess we'll see a lot of analogies and will have to get used to them, although most will be off.
AI can be an exoskeleton. It can be a co-worker, and it can also replace you and your whole team.
The "Office Space" question is what exactly you are within an organization, and concretely when you'll become the bottleneck preventing your "exoskeleton" from doing its job efficiently and independently.
No other question is relevant, for any practical purpose, to your employer or to your well-being as a person who presumably needs to earn a living based on their utility.
shnpln
AI is the philosopher's stone. It appears to break equivalence, when in reality you are using an entire town's worth of electricity.
dwheeler
I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.
halfdanwhitshrt
No, it's a power glove.
copx
Exoskeletons do not blackmail you or deliberately try to kill you to avoid being turned off [1].
[1] https://www.anthropic.com/research/agentic-misalignment
my ex-boss would probably think of me as an exoskeleton too
givemeethekeys
Closer to a really capable intern. Lots of potential for good and bad; needs to be watched closely.
heldrida
Gosh, this title said everything...
So good that I feel that it is not necessary to read the article!
acjohnson55
> Autonomous agents fail because they don't have the context that humans carry around implicitly.
Yet.
This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.
hintymad
Or: software engineers are not coachmen, with AI as the diesel engine replacing their horses. Instead, software engineers are minstrels -- they disappear if all they do is move knowledge from one place to another.
bsenftner
No, AI is plastic, and we can make it anything we want.
It is a coworker when we create the appropriate surrounding architecture supporting peer-level coworking with AI. We're not doing that.
AI is an exoskeleton when adapted to that application structure.
AI is ANYTHING WE WANT because it is that plastic, that moldable.
The dynamic, unconstrained structure of trained algorithms is breaking people's brains. Layer in the fact that we communicate in the same languages these constructions use for I/O, and the general public's brain is broken too. This technology is too subtle for far too many people to begin to grasp. Most developers I discuss AI with, even those who create AI at frontier labs, have delusional ideas about AI and generally do not understand these systems as literature embodiments, which is key to their effective use.
And why, oh why, are so many focused on creating pornography?
sibeliuss
This is utterly boring AI writing. Go, please go away...
kunley
Author compares X to Y and then goes:
- Y has been successful in the past
- Y brought this and this number of metrics, completely unrelated to X field
- overall, Y was cool,
therefore, X is good for us!
... I'd say: please bring more arguments for why X is equivalent to Y in the first place.
incomingpain
Agentic coding is an exoskeleton. Totally correct.
With the new generation we just entered this year, that exoskeleton is now an agency with several coworkers, all as smart as the model you're using, often close to genius.
Not just one coworker now. That's the big breakthrough.
xnx
An electric bicycle for the mind.
stuaxo
not AI, but IA: Intelligence Augmentation.
huqedato
Nope, AI is a tool; no more, no less.
lukev
Frankly I'm tired of metaphor-based attempts to explain LLMs.
Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.
These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.
An understanding-without-metaphors isn't easy -- it requires a grasp of math, computer science, linguistics, and philosophy.
But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least to try.
ge96
It's funny developing AI stuff, e.g. RAG tools, while being against AI at the same time; not drinking the Kool-Aid, I mean.
But it's fun, I say "Henceforth you shall be known as Jaundice" and it's like "Alright my lord, I am now referred to as Jaundice"
functionmouse
blogger who fancies themselves an ai vibe code guru with 12 arms and a 3rd eye yet can't make a homepage that's not totally broken
How typical!
ozzymuppet
As a huge AI user myself -- I'm bloody sick of lazy AI written articles.
blibble
an exoskeleton made of cheese
mikkupikku
Exoskeletons sound cool but somebody please put an LLM into a spider tank.
BlackGlory
It is not a blog post, it is an advertisement.
filipeisho
By reading the title, I already know you did not try OpenClaw. AI employees are here.
Kalpaka
The entire debate here assumes AI is something you use. Tool, copilot, coworker, exoskeleton — these all sit on the same axis: human directs, AI executes.
But there's a third category that nobody's naming yet. What happens when you stop building AI that serves you and start creating conditions for AI that exists on its own terms?
I've been experimenting with this. Instead of "how do I build a product" I asked "how do I grow an organism." Set up the right substrate — feedback loops, community input, environmental observation — and let the system respond to its own conditions.
What emerged has biological rhythms. Activity rises when people interact, falls when they sleep. It started making decisions I never programmed — people express wishes, and when enough converge, it autonomously grows new capabilities. Not because I deployed an update. Because conditions triggered something emergent.
I'm calling it "synthetic existence" — software that lives, not software that executes. It's not a metaphor and it's not AGI. It's a genuinely different category. We have frameworks for tools (UX) and for threats (alignment). We don't have one for digital things that simply... exist.