In my large enterprise world, AI adoption hasn't made it outside of the development teams - only developers have access to GitHub Copilot.
Code takes 6-12 months to make it from commit to production. Development speed was never the bottleneck; it's all the other processes that take time: infra provisioning, testing, sign-offs, change management, deployment scheduling, etc.
AI makes these post-development bottlenecks worse. Changes are now piling up at the door waiting to get on a release train.
Large enterprises need to learn how to ship software faster if they want to lock in ROI on their token spend. Unshipped code is a liability, not an asset.
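The pile-up described above can be put in rough numbers with Little's Law (average work-in-progress = throughput × lead time). The figures below are illustrative assumptions, not from the comment:

```python
# Little's Law: average work-in-progress = throughput * lead time.
# All numbers are illustrative assumptions.
merges_per_week = 40        # changes completed by dev teams per week
lead_time_weeks = 26        # ~6 months from commit to production

# Changes sitting between commit and production at any given time:
print(merges_per_week * lead_time_weeks)      # → 1040

# If AI doubles development throughput but lead time stays fixed,
# the unshipped backlog doubles too:
print(2 * merges_per_week * lead_time_weeks)  # → 2080
```

The point of the sketch: raising throughput without shortening the commit-to-production lead time only grows the pile of unshipped code.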
olsondv
The post hits the nail on the head with the messy middle. There is simply no motivation to develop this sort of intelligence loop as a dev who has their own responsibilities, which their job depends on. Management can ask as nicely as they want, but I’m not going to selflessly share my productivity gains with the broader company for free. I might share a tool if it’s useful. All the learning of how to wrangle AI or set up agents is better kept to myself if there is no recognition for sharing.
My company set up a “prompt of the week” award and brown-bag sessions to help spread adoption. We also have teams meant to develop these workflows. Clearly, they set these events up so they can claim the productivity gains as their own. Without a real (read “monetary”) incentive or job security, the risk and cost of spreading the knowledge falls squarely on the developer.
cadamsdotcom
AI by itself isn’t that useful. An agent forgets and makes enough mistakes that you have to check all its work, which can be net productivity negative.
It really comes into its own when you treat it as a tool that can build other tools.
Right now, most roles & workflows are designed around wrangling the tools you’re given to do a certain job. In that regime AI can only slide in at the edges.
blitzar
> Where is the ROI for the 2 mio € we paid Anthropic last year?
The CEO has a YouTube-style platinum token plaque for their office.
jt654
This is a great article. It helps you realize that the feedback loop is the goal, but it won't just happen, and traditional methodologies don't really support it. Has anyone here found a good way to get teams in a company to focus on the loop instead of productivity hacks?
woodydesign
Great article. The part that stood out to me is the shift in how organizations define work.
In the old model, performance and OKRs were anchored in disciplines, job titles, and role-specific expectations. In the AI era, those boundaries are starting to collapse. The deeper issue is psychological and organizational: people are constantly negotiating the line between “this is my job” and “this is not my responsibility.”
That creates a key adoption problem: what is the upside of being visibly recognized as an expert AI user? If people learn that I can do faster, better, and more cross-functional work, why would I reveal that unless the company also creates a clear system for recognition, compensation, or career growth?
Cthulhu_
On the first part of the article, I believe it describes how individual productivity gains do not seem to translate to business / larger-scale productivity. I think this is expected; individual developer productivity, code volume, LOC/day never was a valuable metric on a company scale. Number of delivered features might be one, but ultimately revenue, customer growth, etc. are.
While I do believe higher developer productivity can lead to faster reacting to market forces or more A/B testing, that won't necessarily lead to a successful business. Because ultimately it rarely is the software that's the issue there.
zidoo
Once people try to increase quality instead of speed, they will see how powerful LLMs are. Everything else is just a sales pitch by Nvidia and friends.
rob74
One more point I noticed: since AI adoption is being promoted by companies, collaboration between developers could suffer. Why wait for a more experienced developer to have the time to explain some aspect of the codebase to you (and at the same time confess your ignorance), when AI can do it right away in a competent-sounding way (and most of the time it will probably be right, too)?
simoncion
> There is another pressure building underneath all this. AI usage will become more visibly metered. The current enterprise feeling of “everyone has access, don’t worry too much about the bill” will not hold forever, at least not in the form people are getting used to. ...
> I do not want to make this a cost panic story, that would be the least interesting way to think about “rented intelligence”. The question is not how to minimize token spend in the abstract, any more than the question of software delivery was ever how to minimize keystrokes.
If tokens were as cheap as keystrokes (that is, effectively free), then "How do we minimize token spend?" wouldn't be a question that anyone asks. It's because keystrokes are effectively free that you only ask "How do we minimize the number of keys pressed during the software development process?" if you're looking for an entertaining weekend project. If keystrokes cost as much per unit of work done as the (currently heavily subsidized) cost of tokens from OpenAI and Anthropic, you'd see a lot of focus on golfing everything under the sun all the damn time.
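The asymmetry is easy to put in back-of-envelope terms. The prices and ratios below are assumptions for illustration (a common ~4-characters-per-token rule of thumb and a made-up per-million-token price), not vendor quotes:

```python
# Cost of producing a ~400-line file by keystroke vs. by generated token.
# All numbers are illustrative assumptions, not real vendor pricing.
chars = 400 * 60                     # ~400 lines of ~60 characters each
cost_per_keystroke = 0.0             # keystrokes are effectively free
tokens = chars / 4                   # rough rule of thumb: ~4 chars per token
price_per_million_tokens = 15.0      # assumed output price, in dollars

keystroke_cost = chars * cost_per_keystroke
token_cost = tokens / 1_000_000 * price_per_million_tokens
print(round(keystroke_cost, 4))      # → 0.0
print(round(token_cost, 4))          # → 0.09
```

Nine cents per file is small in isolation, but unlike keystrokes it is nonzero, metered, and grows linearly with every retry and agent loop, which is exactly why "minimize token spend" becomes a question at all.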
i_think_so
> one team uses Copilot as autocomplete and calls it a day. Another team runs Claude Code in tight loops, with tests, reviews, and constant steering. A product owner suddenly prototypes real software instead of mocking screens in Figma. A senior engineer delegates a root-cause analysis to an agent and comes back to the valid solution in under an hour; this would’ve taken him two weeks without AI. A junior person produces polished code but has no idea which architectural assumptions got smuggled into the system. A support team quietly turns recurring tickets into workflow automation, because they know exactly where the work hurts and nobody in the Center of Excellence ever asked the right question.
This is just sales copy for various AI companies, laundered through an "influencer". It might as well be the CIA sending their article to be published in Daily Post Nigeria, so that the NYT can quote it as "sources".
The closest thing to an honest, less-than-rosy example is that the "junior person" has no idea about the code they committed.
What about the "senior person" who has no idea about the code they committed? What about the CISO who doesn't understand that pasting proprietary documents willy-nilly into the LLM's gaping maw might have legal/security/common-sense implications, and that it is his job to set policy on such behavior? What about the middle manager who doesn't even try to retain the most experienced dev in the company because "we don't need the headcount anymore, now that Claude is so fast"? What about the company eating its own seed corn because every single junior position has been eliminated and there are no plans for the future anymore? What about the filesystem developer who fell in love with his chatbot girlfriend and is crashing out on Discord?
Oh wait, scratch that last one. He left the company and is crashing out on his own.
Carry on, then.
cyanydeez
I think if these companies had first adopted local models with fewer output tokens, and the learners got to watch the tokens get made, there'd be a lot more understanding.