When we (the engineering team I work on) started using agents more seriously we were worried about this: that we'd speed up coding time but slow down review time and just end up increasing cycle time.
So far there's no obvious change one way or the other, but it hasn't been very long and everyone is in various states of figuring out their new workflows, so I don't think we have enough data for things to average out yet.
We're finding cases where fast coding really does seem to be super helpful though:
* Experimenting with ideas/refactors to see how they'll play out (often the agent can just tell you how it's going to play out)
* Complex tedious replacements (the kind of stuff you can't find/replace because it's contextual)
* Times where the path forward is simple but also a lot of work (tedious stuff)
* Dealing with edge cases after building the happy path
* EDIT: One more huge one I would add: anywhere the thing you're adding is a direct analogue of another branch/PR, the agent seems to do great (which is basically a "simple but tedious" case)
The single biggest potential productivity gain, though, I think is being able to do something else while the agent is coding: you can go review a PR, then check out what the agent produced when you come back.
I would say we've gone from extremely skeptical to cautiously excited. I think it's far-fetched that we'll see any order-of-magnitude difference; we're hoping for 2x (which would be huge!).
thepasch
> Someone approves a PR they didn’t really read. We’ve all done it (don’t look at me like that). It merges. CI takes 45 minutes, fails on a flaky test, gets re-run, passes on the second attempt (the flaky test is fine, it’s always fine, until it isn’t and you’re debugging production at 2am on a Saturday in your underwear wondering where your life went wrong. Ask me how I know… actually, don’t). The deploy pipeline requires a manual approval from someone who’s in a meeting about meetings. The feature sits in staging for three days because nobody owns the “get it to production” step with any urgency.
This is the company I (soon no longer) work at.
The thing is that they don’t even allow the use of AI. I’ve been assured that the vast majority of the code was human-written. I have my doubts but the timeline does check out.
Apart from that, this article uses a lot of words to completely miss the fact that (A) “use agents to generate code” and “optimize your processes” are not mutually exclusive things; (B) sometimes, for some tickets - particularly ones stakeholders like to slide in unrefined a week before the sprint ends - the code IS the bottleneck, and the sooner you can get the hell off of that trivial but code-heavy ticket, the sooner you can get back to spending time on the actual problems; and (C) doing all of this is a good idea completely regardless of whether you use LLMs or not; and anyone who doesn’t do any of it and thinks the solution is to just hire more devs will run into the exact same roadblocks.
furyofantares
> The bottleneck is understanding the problem. No amount of faster typing fixes that.
Why not? Why can't faster typing help us understand the problem faster?
> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.
Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?
I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.
> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.
I guess because we're just cynical.
nicbou
I'm a solo dev. In fact I'm hardly a dev; it's just a helpful skill. Code writing speed IS a problem, because it takes valuable time away from other tasks. A bit like doing the dishes.
I just set up Claude Code tonight. I still read and understand every line, but I don't need to Google things, move things around and write tests myself. I state my low-level intent and it does the grunt work.
I'm not going to 10x my productivity, but it'll free up some time. It's just a labour-saving technology, not a panacea. Just like a dishwasher.
podgorniy
Yeah, we again have a solution (LLMs) in search of problems.
The proper approach to speeding things up would be to ask, "What are the limiting factors that stop us from doing X, Y, Z?"
--
This situation of management expecting things to become fast because of AI is "vibe management". Why think, why try to understand, why talk to your people, when you've seen an exciting presentation of the magic tool and the only thing you need to do is adopt it?
petcat
As human developers, I think we're struggling with "letting go" of the code. The code we write (or agents write) is really just an intermediate representation (IR) of the solution.
For instance, GCC will inline functions, unroll loops, and apply myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't), we are not concerned with the "spaghetti" or the "high coupling" and "low cohesion". We care that it works, that it is correct for what it is supposed to do, and that it is a faithful representation of the solution we are trying to achieve.
Source code in a higher-level language is not really different anymore. Agents write the code, maybe we guide them on patterns and correct them when they are obviously wrong, but the code is merely the work-item artifact that comes out of extensive specification, discussion, proposal review, and more review of the reviews.
A well-guided, iterative process and problem/solution description should be able to generate an equivalent implementation whether a human is writing the code or an agent.
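A Python-flavored sketch of the "code as IR" point above: CPython compiles every function to bytecode that we ship and trust daily without ever reviewing it, much like GCC's generated assembly. The `dis` module makes that normally invisible layer visible (the `add` function here is just a made-up example).

```python
# Analogy to the comment above: CPython compiles functions to bytecode
# that we trust every day without reviewing it, the way we trust GCC's
# generated assembly. `dis` shows that hidden intermediate layer.
import dis

def add(a, b):
    return a + b

dis.dis(add)      # the bytecode listing nobody reviews in practice
print(add(2, 3))  # what we actually check: observable behavior is correct
```

The exact bytecode varies by CPython version, which is rather the point: we only care that the behavior is right.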
everdrive
Companies genuinely don't want good code. Individual teams just get measured by how many things they push around. An employee who warns that something might not work very well is going to get reprimanded as "down in the weeds" or "too detail-oriented", etc. I didn't understand this for a while, but internal actors inside companies really just want to claim success.
wei03288
Completely agree with the premise. The bottleneck in most teams I've worked with isn't typing speed or even coding speed - it's the feedback loop between 'I think this is right' and 'this is actually right in production.'
The biggest time sink is usually debugging integration issues that only surface after you've connected three services together. Writing the code took 2 hours, figuring out why it doesn't work as expected takes 2 days.
I've found the most impactful investment is in local dev environments that mirror production as closely as possible. Docker Compose with realistic seed data catches more bugs than any amount of unit testing.
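A minimal sketch of the "realistic seed data" idea, with SQLite standing in for the real database; the schema, table name, and rows are purely illustrative. The value is in seeding the messy cases production actually has, not just happy-path rows.

```python
# Sketch: seed a local dev database with data that includes the messy
# cases production has. SQLite is a stand-in for the real store; the
# schema and rows are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")

seed_rows = [
    (1, "alice@example.com", "Alice"),         # the happy path
    (2, "bob+filter@example.com", "Bob"),      # plus-addressing in emails
    (3, "eve@example.com", None),              # missing optional field
    (4, "zoe@example.com", "Zoë Müller"),      # non-ASCII name
]
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", seed_rows)

# Queries that pass against clean demo data often trip on rows like these:
rows = conn.execute("SELECT id FROM users WHERE name IS NULL").fetchall()
print(rows)  # [(3,)]
```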
mikkupikku
My problem when writing code is mainly executive dysfunction; I constantly succumb to the temptation to take the easy way and do it properly later, and later never comes. For some reason, using a coding agent seems to alleviate this. Things get done the way I think they should be done, not just in a way that's "good enough for now."
ianberdin
I don’t agree. I built a Replit clone alone in months. They have hundreds of millions in funding…
Btw: https://playcode.io
It's unfair to characterize AI as 'code writing / completion' - it's at minimum 1/4 layer of abstraction above that - and even just 'at that' - it's useful.
So 'writing helper' + 'research helper' + 'task helper' alone is amazing and we are def beyond that.
Even side features like 'do this experiment' where you can burn a ton of tokens to figure things out ... so valuable.
These are cars in the age of horses, it's just a matter of properly characterizing the cars.
m463
> The Goal ... it's also the most useful business book you'll ever read that's technically fiction
factorio ... it's also the most useful engineering homework that's technically a game
k1rd
> That's the part most people get. Here's the part they don't, and it's the part that should scare you:
> When you optimise a step that is not the bottleneck, you don't get a faster system. You get a more broken one.
if you ever played factorio this is pretty clear.
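The bottleneck claim quoted above can be sketched as a toy pipeline simulation (all rates are hypothetical numbers): throughput is set by the slowest stage, so speeding up a non-bottleneck stage just piles up inventory in front of the bottleneck.

```python
# Toy illustration of the quoted claim: a three-stage delivery pipeline
# whose throughput is set by its slowest stage. Speeding up a
# non-bottleneck stage doesn't raise throughput; it grows the queue.

def throughput(stages):
    """Items/day the whole pipeline can sustain: the slowest stage wins."""
    return min(stages.values())

pipeline = {"code": 5, "review": 2, "deploy": 3}  # items per day

before = throughput(pipeline)  # limited by review: 2/day

pipeline["code"] = 50          # 10x the coding stage
after = throughput(pipeline)   # still limited by review: 2/day

# Work-in-progress now accumulates in front of review instead:
wip_growth_per_day = pipeline["code"] - pipeline["review"]

print(before, after, wip_growth_per_day)  # 2 2 48
```

Exactly the Factorio experience: overfeed a saturated assembler and you get a full belt, not more output.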
po1nt
While reading articles like this, I feel like we're just in the "denial" stage. We're just trying to look for negatives instead of embracing that this is a definite paradigm shift in our craft.
larsnystrom
I can really relate to this. At the same time I’m not convinced cycle time always trumps throughput. Context switching is bad, and one solution to it is time boxing, which basically means there will be some wait time until the next box of time where the work is picked up. Doing time boxing properly lowers context switching, increases throughput but also increases latency (cycle time). It’s a trade-off. But of course maybe time boxing isn’t the best solution to the problem of context switching, maybe it’s possible to figure out a way to have the cookie and eat it. And maybe different circumstances require a different balance between latency and throughput.
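The time-boxing trade-off described above can be put in toy numbers (all values hypothetical): batching work amortizes context-switch overhead, raising throughput, but items wait for the next box, raising average latency.

```python
# Toy numbers for the time-boxing trade-off: batching lowers per-item
# context-switch overhead (more throughput) but adds wait time until the
# next box opens (more latency). All figures are made up for illustration.

switch_cost = 0.5    # hours lost per context switch
work_per_item = 1.0  # hours of actual work per item

def per_item_hours(batch_size):
    """Worked hours per item, amortizing one switch across the batch."""
    return work_per_item + switch_cost / batch_size

def avg_wait(box_interval):
    """Average wait until the next box, for uniformly arriving items."""
    return box_interval / 2

print(per_item_hours(1), per_item_hours(8))  # 1.5 1.0625
print(avg_wait(8.0))                         # 4.0
```

Batching eight items cuts effort per item by almost a third here, but an item arriving at random waits four hours on average before anyone touches it: exactly the throughput-versus-cycle-time trade.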
metalrain
I think it's more of an abstraction problem.
You could write more code, but you also could abstract code more if you know what/how/why.
This same idea abstracts to business: you can perform more service, or you can try to provide more value with the same amount of work.
milesward
Correct, but I'd frame it to confused leaders a bit differently. Because we made this part easier, we've increased how critical, how limiting, other steps/functions are. Data's more valuable now. QA is more valuable now. More teams need to shift more resources, faster.
725686
The word "typing" is wrong.
It is not about the speed of typing code.
It's about the speed of "creating" code: the boilerplate, the code patterns, the framework-version-specific code, etc.
slibhb
The idea that LLMs don't significantly increase productivity has become ridiculous. You have to start questioning the psychology that's leading people to write stuff like this.
sorokod
Amdahl's law applies whether you believe in it or not
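For reference, Amdahl's law applied to this thread's argument, with hypothetical numbers: if coding is only a fraction p of total cycle time and agents speed it up s times, overall speedup is capped at 1/((1-p) + p/s).

```python
# Amdahl's law applied to the cycle-time debate (hypothetical numbers):
# if coding is fraction p of cycle time and agents make it s times
# faster, the best overall speedup is 1 / ((1 - p) + p / s).

def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Suppose coding is 20% of cycle time and agents make it 10x faster:
print(round(amdahl_speedup(0.2, 10), 2))  # 1.22

# Even infinitely fast coding can't beat 1 / (1 - p):
print(round(1 / (1 - 0.2), 2))            # 1.25
```

If coding really were 80% of cycle time, the same formula gives a 3.57x speedup, so the whole disagreement in this thread reduces to an argument about p.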
gammalost
Seems easy to address with a simple rule. Push one PR; review one PR
avereveard
Eh, code doesn't have a lot of value. Filling in methods between signatures and figuring out a dependency's exact incantation is mechanistic, and that time is definitely better spent on other things.
A lot of these blogs start from a false premise or a lack of imagination.
In this case, the premise that coding isn't a bulk time sink is faulty and unsubstantiated (just measure the ratio of architects to developers), and yes, LLMs can do debugging, so the other common objection doesn't apply either. The claim that time saved on secondary activities doesn't translate into productivity is also false, or at least reductive, because you gain more time to spend on the bottlenecked activity.
danilocesar
I'm here just for the comments...
tmshapland
Thank you. 100%.
cess11
One of the main reasons I like vim is that it enables me to navigate code very fast; that the edits are also quick once I've decided on them is a nice convenience, but not particularly important.
Same goes for the terminal, I like that it allows me to use a large directory tree with many assorted file types as if it was a database. I.e. ad hoc, immediate access to search, filter, bulk edits and so on. This is why one of the first things I try to learn in a new language is how to shell out, so I can program against the OS environment through terminal tooling.
Deciding what and how to edit is typically an important bottleneck, as are the feedback loops. It doesn't matter that I can generate a million lines of code, unless I can also with confidence say that they are good ones, i.e. they will make or save money if it is in a commercial organisation. Then the organisation also needs to be informed of what I do, it needs to give me feedback and have a sound basis to make decisions.
Decision making is hard. This is why many bosses suck. They're bad at identifying what they need to make a good decision, and just can't help their underlings figure out how to supply it. I think most developers who have spent time in "BI" would recognise this, and a lot of the rest of us have been in worthless estimation meetings, retrospectives and whatnot where we ruminate a lot of useless information and watch other people do guesswork.
A neat visualisation of what a system actually contains and how it works is likely of much bigger business value than code generated fast. It's not like big SaaS ERP consultancy shops have historically worried much about how quickly the application code is generated, they worry about the interfaces and correctness so that customers or their consultants can make adequate unambiguous decisions with as little friction as possible.
lukaslalinsky
If I can offload the typing and building, I can spend more energy understanding the bigger picture
wolttam
"Our newest model reduces your Mean Time To 'Oh, Fuck!' (MTTF) by 70%!"
renewiltord
Understanding the problem is easier for me when I have experience engaging with solutions to it and seeing what form they fail in. LLMs allow me to concretize solutions so that pre-work simply becomes work. That lets me search the space of solutions more effectively.
gyanchawdhary
He’s treating “systems thinking” and architecting software like it’s some sacred, hard-to-automate layer that AI apparently sucks at…
andrewstuart
These “LLM programming ain’t nothing special” posts are becoming embarrassing for the authors who - due to their anti AI dogmatism - have no idea how truly incredibly fast and powerful it’s become.
Please stop making fools of yourselves and go use Claude for a month before writing that “AI coding ain’t nothing special” post.
Ignorance of what Claude can actually do means your arguments have no standing at all.
“I hate it so much I’ll never use it, but I sure am expert enough on it to tell you what it can’t do, and that humans are faster and better.”
luxuryballs
It’s definitely going to create a lot of problems in orgs that already have an incomplete or understaffed dev pipeline, which happen to often be the ones where executive leadership is already disconnected and not aware of what the true bottlenecks are, which also happen to often be the ones that get hooked by vendor slide decks…
nathias
people can have more than one problem
6stringmerc
Because of the way the world currently is and where this is trending, let me jump in and save you a lot of time:
Expedience is the enemy of quality.
Want proof? Everything built as a result of “move fast and break things” 5-10 years ago is a pile of malfunctioning trash. This is not up for debate.
This is simply an observation. I do not make the rules. See my last submission for some CONSTRUCTIVE reading.
Bye for now.
teaearlgraycold
> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks. The prospect didn't even end up buying. The feature got used by eleven people, and nine of them were internal QA. That's not a delivery problem. That's an "oh fuck, what are we even doing" problem.
I have very much upset a CEO before by bursting his bubble with the fact that how fast you work is so much less important than what you are working on.
Doing the wrong thing quickly has no value. Doing the right thing slowly makes you a 99th percentile contributor.
myst
No one there is solving a problem. The AI bros are hooking a new generation (NG) on _their_ set of crutches, without which NG "is not coding (living) up to their true potential". Nothing personal, just business.
PS. The tech bros tried to do exactly that to millennials, but accidentally shot boomers instead.
gedy
I'm cynical, but kinda surprised that so many mgmt types are rah-rah about AI, since "we're waiting for engineering... sigh" has been a very convenient excuse for many projects and companies I've seen over the past 25 years.
dannersy
The blog isn't even necessarily anti-AI yet the majority of responses here are defending it like the author kicked their dog.
The sentiment that developers shouldn't be writing code anymore means I cannot take you seriously. I see these tools fail on a daily basis and it is sad that everyone is willing to concede their agency.