They exist. Go look at any "I built this in a weekend with Cursor" post — there are hundreds. The problem is most of them ship broken and stay broken: auth that doesn't actually check anything, API keys in the frontend, apps that fall over with 5 concurrent users.
The quantity is there. Nobody's asking "does this thing actually work" before hitting deploy. That's the real gap.
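The "auth that doesn't actually check anything" failure mode is easy to picture. Here's a minimal hypothetical sketch (all names invented): a decorator that reads the token and then waves every request through anyway.

```python
import functools

def require_auth(handler):
    """Looks like an auth gate, but never actually rejects anything."""
    @functools.wraps(handler)
    def wrapper(request):
        token = request.get("Authorization")  # reads the header...
        _ = token                             # ...but never validates it
        return handler(request)               # every request gets through
    return wrapper

@require_auth
def delete_account(request):
    return {"status": "deleted"}

# No token at all, and the "protected" endpoint still runs:
print(delete_account({}))  # {'status': 'deleted'}
```

It looks secure in a demo because every authorized request works; nobody tried the unauthorized one before deploying.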
paxys
It is incredibly easy now to get an idea to the prototype stage, but making it production-ready still needs boring old software engineering skills. I know countless people who followed the "I'll vibe code my own business" trend, and a few of them did get pretty far, but ultimately not a single one actually launched. Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.
mlsu
We have great software now!
YoloSwag (13 commits)
[rocketship rocketship rocketship]
YoloSwag is a 1:1 implementation of pyTorch, written in RUST [crab emoji]
- [hand pointing emoji] YoloSwag is Memory Safe due to being Written in Rust
- [green leaf emoji] YoloSwag uses 80% less CPU cycles due to being written in Rust
- [clipboard emoji] [engineer emoji] YoloSwag is 1:1 API compatible with pyTorch with complete ops specification conformance. All ops are supported.
- [recycle emoji] YoloSwag is drop-in ready replacement for Pytorch
- [racecar emoji] YoloSwag speeds up your training workflows by over 300%
Then you git clone yoloswag and it crashes immediately and doesn't even run. And you look at the test suite and every test just creates its own mocks to pass. And then you look at the code and it's a weird Frankenstein implementation: half of it is using Rust bindings for PyTorch, and the other half is random APIs that are named similarly but not identically.
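The "every test just creates its own mocks to pass" pattern looks something like this hypothetical snippet (invented names, using `unittest.mock`): the test patches the very function it claims to verify, so the suite stays green no matter how broken the implementation is.

```python
import unittest
from unittest import mock

def matmul(a, b):
    # The "implementation": half-ported, doesn't actually work.
    raise NotImplementedError("rust bindings not wired up")

class TestMatmul(unittest.TestCase):
    def test_matmul(self):
        # Patch matmul itself with a mock that returns the "expected" value...
        with mock.patch(f"{__name__}.matmul", return_value=[[19, 22], [43, 50]]):
            result = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
        # ...so this assertion checks the mock, not the implementation.
        self.assertEqual(result, [[19, 22], [43, 50]])

# The suite passes even though matmul is completely broken:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMatmul)
print(unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful())  # True
```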
Then you look at the committer and the description on his profile says "imminentize AGI.", he launched 3 crypto tokens in 2020, he links an X profile (serial experiments lain avatar) where he's posting 100x a day about how "it's over" for software devs and how he "became a domain expert in quantum computing in 6 weeks."
hombre_fatal
Maybe the top 15,000 PyPi packages isn't the best way to measure this?
Apparently new iOS app submissions jumped by 24% last year:
> According to Appfigures Explorer, Apple's App Store saw 557K new app submissions in 2025, a whopping 24% increase from 2024, and the first meaningful increase since 2016's all-time high of 1M apps.
The chart shows stagnant new iOS app submissions until AI.
Also, if you hang out in places with borderline technical people, they might do things like vibe-code a waybar app and proudly post it to r/omarchy which was the first time they ever installed linux in their life.
Though I'd be super surprised if average activity didn't pick up big on Github in general. And if it hasn't, it's only because we overestimate how fast people develop new workflows. I'm just going by my own increase in software output and the projects I've taken on over the last couple of months.
Finally, December 2025 (Opus 4.5 and that new Codex one) was a big inflection point where AI was suddenly good enough to do all sorts of things for me without hand-holding.
turlockmike
I deleted VS Code and replaced it with a hyper-personal dashboard that combines information from everywhere.
I have a news feed, work tab for managing issues/PRs, markdown editor with folders, calendar, AI powered buttons all over the place (I click a button, it does something interesting with Claude code I can't do programmatically).
Why don't I share it? Because it's highly personal, others would find it doesn't fit their own workflow.
causal
AI makes the first 90% of writing an app super easy and the last 10% way harder because you have all the subtle issues of a big codebase but none of the familiarity. Most people give up there.
skeledrew
I think this article is making a pretty big assumption: that people making things with AI are also going to be publishing them. And that's just the opposite of what should be expected, for the general case.
Like I've been making things, and making changes to things, but I haven't published any of that because, well they're pretty specific to my needs. There are also things which I won't consider publishing for now, even if generally useful because, well the moat has moved from execution effort to ideas, and we all want to maintain some kind of moat to boost our market value (while there's still one). Everyone has reasonable access to the same capabilities now, so everyone can reasonably make what they need according to their exact specs easily, quickly and cheaply.
So while there are many things being made with AI, there are ever-decreasing reasons to publish most of it. We're in an era of highly personalized software, which just isn't worth generalizing and sharing, as the effort is now greater than creating from scratch or modifying something already close enough.
Plutarco_ink
The article measures the wrong thing. PyPI package creation is a terrible proxy for AI-assisted software output because packages are published for reuse by others, which requires documentation, API design, and maintenance commitments that AI doesn't help with much.
The real output is happening in private repos, internal tools, and single-purpose apps that never get published anywhere. I've been building a writing app as a side project. AI got me from zero to a working PWA with offline support, Stripe integration, and 56 SEO landing pages in about 6 weeks of part-time work. Pre-AI that's easily a 6-month project for one person.
But I'm never going to publish it as a PyPI package. It's a deployed web app. The productivity gain is real, it just doesn't show up in the datasets this article is looking at.
The iOS App Store submission data (24% increase) that someone linked in the comments is a much better signal. That's where the output is actually landing.
vjvjvjvjghv
This reminds me so much of the .com bubble in 2000. A lot of clueless companies thought they just needed to “do internet” without any further understanding or strategy. They burned a ton of money and got nothing out of it. Other companies understood that the internet was an enabling technology that could support a lot of business processes, so they quietly improved their business with the help of the internet.
I see the same with AI. Some companies will use AI quietly and productively without much fuss. Others are just using it as a marketing tool or an ego trip for execs, with no real understanding.
fcatalan
Making complete coherent products is as hard as ever, or even harder if you intend to trade robustness for max agentic velocity.
What I do very successfully is low stakes stuff for work (easy automations, small QoL improvements for our tooling, a drive-by small Jira plugin)
And then I do a lot of crazy exploring, or hyper-personal just for myself stuff that can only exist because I can now spawn and abandon it in a couple days instead of weeks or months.
peteforde
Not sure that I'd look at python package stats to build this particular argument on.
First, I find that I'm using a lot fewer libraries in general because I am less constrained by the mental models imposed by library authors on what I'm actually trying to do. Libraries are often heavy and by nature abstract away low-level API calls. These days, I'm far more likely to have 2-3 functions that make those low-level calls directly, without any conceptual baggage.
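For illustration (my own toy example, not from the comment): instead of adding a slugify dependency, the few lines you actually need can just live in the codebase.

```python
import re
import unicodedata

def slugify(text: str) -> str:
    """Lowercase, strip accents, join words with hyphens; no dependency needed."""
    ascii_text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-zA-Z0-9]+", "-", ascii_text).strip("-").lower()

print(slugify("Héllo, Wörld & 2025!"))  # hello-world-2025
```

The trade-off is real, though: you give up the library's accumulated edge-case handling in exchange for code you fully understand.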
Second, I am generalizing but a reasonable assertion can be made that publishing a package is implicitly launching an open source project, however small in scope or audience. Running OSS projects is a) extremely demanding b) a lot of pain for questionable reward. When you put something into the universe you're taking a non-zero amount of responsibility for it, even just reputationally. Maintainers burn out all of the time, and not everyone is signed up for that. I don't think there's going to be anything remotely like a 1:1 Venn for LLM use and package publishing.
I would counter-argue that in most cases, there might already be too many libraries for everything under the sun. Consolidation around the libraries that are genuinely amazing is not a terrible thing.
Third, one of the most recurring sentiments in these sorts of threads is that people are finally able to work through the long lists of ideas they had but would have never otherwise gotten around to. Some of those ideas might have legs as a product or OSS project, but a lot of them are going to be thought experiments or solve problems for the person writing them, and IMO that's a W not an L.
Fourth, once most devs are past the "vibe" party trick phase of LLM adoption, they are less likely to squat out entire projects and far, far more likely to return to doing all of the things that they were doing before; just doing them faster and with less typing up-front.
In other words, don't think project-level. Successful LLM use cases are commit-level.
nizsle
What if this is just telling us that much of the coding being done in the world, or knowledge work in general, is just busy work? Just because you double the capacity of knowledge workers doesn't mean you double the amount of useful output. Maybe we have never been limited by our capacity to produce, but by our ability to come up with good ideas and socially coordinate ourselves around the best ones.
mjaquilina
I'm not convinced that PyPI is the right metric to use to answer this question. Some (admittedly anecdotal) observations:
1) I'm a former SWE in a business role at a small-market publishing company. I've used Claude Code to automate boring processes that previously consumed weeks of our ops and finance teams' time per year. These aren't technically advanced, but previously would have required in-house dev talent that would not have been within reach of small businesses. I wouldn't have had the time to code these things on my own, but with AI assistance the time investment is greatly reduced (and mostly focused on QA). The only needle moved here is on a private Github repo, but it's real shipped code with small but direct impact.
2) I used to often find myself writing simple Perl wrappers to various APIs for personal or work use. I'd submit these to CPAN (Perl's equivalent to PyPI) in case anyone else could use them to save the 30-60 minutes of work involved. These days I don't bother -- most AI tools can build these in a matter of seconds; publishing them to CPAN or even Github now feels like unnecessary cruft, especially when they're likely to go without active maintenance. So, my LOC published to public repos is down, even though the amount of software produced is the same. It's just that some of that software has become less useful to the world writ large.
3) The code that's possible to ship quickly with pure AI (vibe coding) is by definition not the kind of reusable code you'd want to distribute on PyPI. So I'd expect any productivity impact from AI on OSS that's designed to be reusable to come on very slowly, rather than as a "hockey stick".
ballenf
The thesis has it backwards. We will see fewer published/downloaded apps/packages as people rely on others less. I'm not sure we're quite there yet, but I'm increasingly likely to spend a few minutes giving an LLM a chance to make a tool I need instead of sifting through sketchy and dodgy websites for some slightly obscure functionality. I use fewer ad-heavy sites for converting one text file format to another.
Personally, I see the paid or adware software market shrinking, not growing, as a testament to the success of LLMs in coding.
ertgbnm
Does the data not support a 2X increase in packages?
Pre-ChatGPT, in ~2020, there were about 5,000 new packages per month. Starting in 2025 (the actual year agents took off), there is a clear uptick in packages, to a consistent ~10,000 per month, or 2X the pre-ChatGPT era.
In general, the rate of increase is on a clear exponential. So while we might not see a step change in productivity, there comes a point where the average developer is in fact 10X as productive as before. It just doesn't feel so crazy because it came about in discrete 5% boosts.
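As a back-of-the-envelope check on the "discrete 5% boosts" framing (my arithmetic, not the commenter's): compounding 5% gains do reach 10X, but it takes a lot of them.

```python
import math

# How many compounding 5% boosts until output is 10x the baseline?
boost = 1.05
boosts_needed = math.ceil(math.log(10) / math.log(boost))
print(boosts_needed)                     # 48
print(round(boost ** boosts_needed, 2))  # 10.4
```

So 48 consecutive 5% improvements compound to roughly 10X, which is consistent with the claim that no single boost ever feels dramatic.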
I also disagree with the dataset being a good indicator of productivity. I wouldn't actually expect the number of packages or the frequency of updates to track closely with productivity. My first-order guess would be that AI is actually deflationary. Why spend the time to open source something that AI can gen up for anyone, on a case-by-case basis, specific to the project? It takes a certain level of dedication and passion for a person to open source a project, and if the AI just made it for them, then they haven't made the investment of time and effort that would make them feel justified in publishing the package.
The metrics I would expect to go up are actually the size of codebases, the number of forks of projects that create hyper customized versions of tools and libraries, and other metrics like that.
Overall, I'd predict AI is deflationary on the number of products that exist. If AI removes the friction involved with just making a custom solution, then the amount of demand for middleman software should actually fall as products vertically integrate and reduce dependencies.
fny
Claude Code was released for general use in May 2025. It's only March.
Also using PyPI as a benchmark is incredibly myopic. Github's 2025 Octoverse[0] is more informative. In that report, you can see a clear inflection point in total users[1] and total open source contributions[2].
The report also notes:
> In 2025, 81.5% of contributions happened in private repositories, while 63% of all repositories were public
I have published 4 open source projects thanks to the productivity boost from AI. No apps though, just things I needed in my line of work.
But I have been absolutely flooded with trailers for new and upcoming indie games. And at least one indie developer has admitted that certain parts of their game used the aid of AI.
I also noticed sometimes when I think of writing something, I ask AI first if it exists, and AI throws up some link and when I check the link it says "made with <some AI>".
So I'm not sure what the author is trying to say here, but I definitely feel like I am noticing a rise in software output due to AI.
But with that said, I also am noticing the burden of taking care of those open source projects. Sometimes it feels like I took on a 2nd job.
I think a lot of software is being produced with AI and going unnoticed, they don't all end up on the front page of HN for harassing developers.
arjie
I won't make any claims as to the Python ecosystem and why there is no effect seen here (and I suppose no effect seen of the Internet on productivity) but one thing that is entirely normal for me now is that I never see the need to open-source anything. I also don't use many new open-source projects. I can usually command Claude Code to build a highly idiosyncratic thing of greater utility. The README.md is a good source of feature inspiration but there are many packages I simply don't bother using any more.
Besides, it's working for me. If it isn't working for others I don't want to convince them of anything. I do want to hear from other people for whom it's working, though, so I'm happy to share when things work for me.
tiborsaas
I fail to see why the author thinks Python packages are a good proxy for AI driven/built code. I've built a number of projects with AI, but I haven't created any new packages.
It's like looking at tire sales and wondering where the EVs are.
stephc_int13
Coding assistants/agents/claws, whatever the current trend is, are over-hyped but also quite useful in good hands.
But the mistake is to expect a huge productivity boost.
This is highly related to Amdahl's law, also The Mythical Man-Month.
Some tasks can be accomplished so fast that it seems magical, but the entire process is still very serial, and architecture design and debugging are still pretty weak on the AI side.
butz
Please, be patient. Wrangling AI agents, writing and rewriting prompts, waiting for the start of another month because tokens ran out - there are so many challenges here, you cannot expect everyone to ship an app a day or something.
micimize
Thoughts:
1. Some hype-types may have been effusive about AI-assisted coding since ChatGPT, but IMO the commonly agreed paradigm shift was claude code, and especially 4.5, very very recent.
2. Anchoring biases in reaction to hype is still letting one's perspective be defined by hype. Yes, the Cursor post is a joke, but leading with that is a strawman. This article does not aim to take its subject seriously, IMO.
3. While I agree the hype is currently at comical levels, the utility of the current LLMs is obvious, and reasons for "skilled" usage not being easily quantifiable are also obvious.
I.e., using agents to iterate through many possible approaches, spike out migrations, etc. might save a project a year of misadventures, re-designs, etc., but that productivity gain _subtracts_ the intermediate versions that _didn't_ end up being shipped.
As others have mentioned, I think yak-shaving is now way more automated. I.e., if I want to take a new terminal for a spin, throw together a devtool to help me think about a specific problem better, etc., I can do it with very low friction. So "personal" productivity is way higher.
happyopossum
I’m not a developer by trade. I’ve screwed around with some programming classes when I was in school, and have written some widely used but highly specific scripts related to my work, but I’ve never been a capital-D developer.
In the last few months, Gemini (and I) have written four highly personal, very niche apps that are perfect for my needs, but that I would never dream of releasing. Things like cataloguing and searching my departed mom's recipe cards, or a text-message-based budget tracker for my wife and me to share.
These things would never be released or available as open source or commercial applications in the way that I wanted them, and it took me less time to have them built with AI than it would have taken me to research existing alternatives and adapt my workflow/use case to fit whatever I found.
So yeah, there are more apps but I would venture to say you’ll never see most of them…
sid_talks
AI does make me more productive, at least up to the point of getting my idea to a working prototype. But in my personal experience, no one has realistically been able to get to the 10x level that a lot of people claim to have achieved with LLMs.
Yes, you do produce more code. But LoC produced is never a healthy metric. Reviewing the LLM generated code, polishing the result and getting it to production-level quality still very much requires a human-in-the-loop with dedicated time and effort.
On the other hand, people who vibe code and claim to be 10x as productive, producing numerous PRs with large diffs, usually bog down the overall productivity of teams by requiring tedious code reviews.
Some of us are forced to fast-track this review process so as not to slow down these "star developers", which leads to a slow erosion of overall code quality that, in my opinion, more than offsets the productivity gains from using the AI tools in the first place.
fritzo
They're private, that's the beauty. Code is so cheap now, we can wean ourselves off massive dependency chains.
200 years ago text was much more expensive, and more people memorized sayings and poems and quotations. Now text is cheap, and we rarely quote.
daemonk
I AI coded an entire platform for my work. It works great for me. I also recognize that this is not something I want to make into a commercial product because it was so easy that there's just no value.
I think this might be more of a comment on software as a business than on AI not coding good apps.
bredren
The models are not well trained on bringing products to market.
And even “product engineers” often do not have experience going from zero to post sales support on a saas on their own.
It is a skill set of its own to make product decisions and not only release but stick with it after the thing is not immediately successful.
The ability to get some other idea going quickly with AI actually works against the habits needed to tough through the valley(s).
sesm
Vibe coding is actually a brilliant MLM scheme: people buy tokens to generate apps that re-sell tokens (99% of those apps are AI-something).
happyPersonR
This is going to cause people to react, but I think those of us who truly love open source don't push AI-generated code upstream because we know it's just not ready for use beyond agentic contexts. It's just not robust for a lot of common use cases, because the code it produces is hyper-hardcoded by default, and the bugs are so basic that I doubt any developer who actually cared would push something so shamefully sloppy upstream with their name on it.
The tools for generating AI code aren't yet capable of producing code that is decent enough for general-purpose use cases, with good, robust tests and clean, quality implementation.
tantalor
Where are they? Well, they aren't being uploaded to PyPI. 90% of the "AI apps" are one-off scripts that get used by exactly one person and thrown away. The rest are too proprietary, too personal, or too weird to share.
tunesmith
Isn't most of the positive impact going to be not "new projects" but the relative strength of the ideas that make it into the codebase? Which is almost impossible to measure. You know, the bigger ideas that were put off before and are now more tractable.
Vanshfin
There is one AI app that is not just an app; it is your personal assistant, which will work on your assigned tasks and give you the results. You can connect it with your social media, and it deploys in just 3 simple steps. It also has a free trial; try it now, because your SaaS needs a personal assistant that works on your behalf.
Give it try:https://clawsifyai.com/
hknceykbx
How do packages measure anything? This is a biased sample. The average AI user/developer would never in their life make a package or any open source contribution; they probably work on proprietary software.
Not to say the conclusions are wrong, though.
vanyaland
I think part of the mismatch is that people are still looking for “more apps” as the output metric.
A lot of the real value shows up as workflow compression instead. Internal tools, one-off automations, bespoke research flows, coding helpers, things that would never have justified becoming a product in the first place.
lucas_the_human
There are actually a lot of new startups coming out with agentic workflows, and they're probably moving fast. But to your point, there's probably still a lot of friction that keeps the average person/dev from launching new companies.
Sharlin
The increased release cadence of apps about AI presumably reflects the simple facts that
a) there are likely many more active, eager contributors all of a sudden, and
b) there's suddenly a huge amount of new papers published every week about algorithms and techniques that said contributors then eagerly implement (usually of dubious benefit).
More cynically, one might also hypothesize that
c) code quality has dropped, so more frequent releases are required to fix broken programs.
relation_al
Well, it's kind of like asking about streaming media. If anyone can have their own "tv show" or anyone can be their own "music producer" then the ratios are so radically altered vis-a-vis content/attention calculation. The question has never been "more means more success stories" because musicians make $.000001 per stream, so even if they stream millions of songs ... you get the point.
So surely there are good apps, but the accompanying deluge makes them seem less significant.
EastLondonCoder
I’ve done an event ticket system that’s in production: Stripe integration, Resend for mailing, and a scan app to scan tickets. It’s for my own club, but it’s been working quite well. Took about 80 hours from inception to live, with a focus on testing.
I’ve done some experiments with reading gedcom files, and I think I’m quite close to a demoable version of a genealogy app.
Biggest thing is a tool for remotely working musicians. It’s about 10,000 lines of well-written Rust, it is in a demoable state, and I wish I could work more on it, but I just started a new job.
But yeah, this wouldn’t have been possible if I hadn’t been a very experienced dev who knows how to get things live. Also, I’ve found a way to work with LLMs that works for me; I can quickly steer the process in the right way, and I understand the code that’s written. Again, it’s possible that a lot of real experience is needed for this.
CharlieDigital
> So, let’s ask again, why? Why is this jump concentrated in software about AI?...Money and hype
The AI field right now is drowning in hype and jumping from one fad to another.
Don't get me wrong: there are real productivity gains to be had, but the reality is that building small one-offs and personal tools is not the same thing as building, operationalizing, and maintaining a large system used by paying customers and performing critical business transactions.
A lot of devs are surrendering their critical thinking facilities to coding agents now. This is part of why the hype has to exist: to convince devs, teams, and leaders that they are "falling behind". Hand over more of your attention (and $$$) to the model providers, create the dependency, shut off your critical thinking, and the loop manifests itself.
The providers are no different from doctors pushing OxyContin in this sense; make teams dependent on the product. The more they use the product, the more they build a dependency. Junior and mid-career devs have their growth curves fully stunted and become entirely reliant on the LLM to perform even basic functions. Leaders believe the hype, lay off teams, and replace them with agents, mistaking speed for velocity. The more slop a team codes with AI, the more they become reliant on AI to maintain the codebase, because now no one understands it. What do you do now? Double down; more AI! Of course, the answer is an AI code reviewer! Nothing that more tokens can't solve.
I work with a team that is heavily, heavily using AI and I'm building much of the supporting infrastructure to make this work. But what's clear is that while there are productivity gains to be had, a lot of it is also just hype to keep the $$$ flowing.
skyberrys
Wouldn't the apps go into the Apple App Store and Google Play? I guess looking at Python packages is valid, but I don't think it's the first thing someone thinks to target with vibe coding. And many apps end up as websites; as a user of a site, a website never tells me much about how it is made.
zabil
I am learning music. I used Codex to create a native metronome app, a circle-of-fifths app, and a practice journal app. I try to build native app alternatives.
I have no plans of publishing them or making them open source, so they will not be part of this metric. I believe others are doing this too.
bdcravens
I don't think people are using AI to create new dependencies that they're then submitting to open source package managers (which is what this shows)
This is more useful for discussing what kind of projects AI is being used for than whether it's being used.
czhu12
Is this the best way to measure this? I think the biggest adopters of AI coding have been companies building features on existing apps, not building new apps entirely. Wouldn't it make more sense to look at how quickly teams are able to build and ship within companies?
It seems like all tech executives are saying they are seeing big increases in productivity among engineering teams. Of course everyone says they're just [hyping, excusing layoffs, overhired in 2020, etc], but this would be the most relevant metric to look at I think.
olup
A bit tangential to the article themes, but I feel in some workplaces that engineering velocity has gone up while product cycles and agile processes have stayed the same. People end up churning tickets faster and working less, while general productivity has not changed.
Of course these are specific workplaces designed around moving tickets on a board, not high-agentic, fast-moving startups or independent projects—but they might represent a lot of the developer workforce.
I also know this is not everyone's experience and probably a rare favorable outcome of productivity gain captured by a worker that is not and won't stay the norm.
bilater
I'd take this info with a grain of salt. You have to understand how new some of these developments are. It's only been a couple of months since we hit the opus 4.5+ threshold. I created 4 react packages for kicks in a weekend: https://www.hackyexperiments.com/blog/shipping-react-librari...
soerxpso
This is just counting pypi packages. Why would I go to the effort of publishing a library or cli tool that took me ten minutes to create? Especially in an environment where open source contributions from strangers are useless. If anything I'd expect useful AI to reduce the number of new pypi packages.
jmarchello
Looking at Python packages, or any developer-facing form of software, is not a good indicator of AI-based production. The key benefit of AI development is that our focus moves up a few layers of abstraction, allowing us to focus on real-world solutions. Instead of measuring Github, you need to measure feature releases, internal tools created, single-user applications built for a single niche use case.
Measuring Python packages to indicate AI-based production is like measuring saw production to gauge the effectiveness of the steam engine. You need to look at the houses and communities being built, not the tools.
patchorang
I've been vibe-coding a Plex music player app for macOS and iOS (I don't like PlexAmp). I've gotten to the point where they are the apps I use for listening to music. But they are really just in an alpha/beta state, and I'm having a pretty hard time getting past that. The last few weeks have felt like playing whack-a-mole with bugs and issues. It's definitely not at the point where others will be willing to use it as their daily app. I'm having to decide now whether to keep putting time into it. The vibe-coding isn't as fun when you're just fixing bugs.
rupertsworld
One problem with a lot of the skepticism around AI produced software is that it focuses on existing ways of packaging and delivering software. PyPi packages are one example, shipping “apps” another.
While it’s interesting to see that the increase in open source software is not dramatic, this ignores how many people are now gen-coding software they will never publish, just for themselves, or software that winds up on hosting platforms like Replit.
calebpeterson
Even taking the “we’re all 100x more efficient at writing code” argument at face value… there’s still all of the product/market fit, marketing, sales, etc “schlep” which is very much non-trivial.
Are there any agentic sales and marketing offerings?
Because being able to reliably hand off that part of the value chain to an agent would close a real gap. (Not sure this can be done in reality)
vivzkestrel
- This would be much more insightful if the author took the number of submissions to Product Hunt and the top 10 SaaS directories as the measure of how many new apps were created in the pre-AI and post-AI eras.
- Product Hunt or AppSumo is something I believe everyone tries to submit to, which would truly measure how many new apps we're getting per month these days.
Feuilles_Mortes
I like using it to make personal apps that are specific to my use-case and solve problems I've had for ages, but I like my job (scientist), and I don't want to run an app company.
QuantumNoodle
Internally, we've created really good debugging tools that can aggregate data from a lot of sources. We've yet to address the quality of vibe-coded critical applications, so those aren't merged, but one-off tools for on-call/alert debugging and internal workflows have skyrocketed.
chistev
On Show HN.
show comments
noemit
There's tons of AI apps. They're all general-use chatbots or coding agents: Manus, Cursor, ChatGPT. Almost every app that has a robust search uses a reranker LLM. AI is everywhere.
As far as totally new products go, I built one (Habit.am, wordless journaling for mental health), and new products require new habits, people trying new things; it's not that easy to change people's behavior. It would be much easier for me to sell my little app if it were a literal plain old journal.
collinmanderson
There are more apps, fewer libraries.
You don't need as many libraries when functionality can be vibe-coded.
You don't need help from the open source community when you have an AI agent.
The apps are probably mostly websites and native apps, not necessarily published to PyPI.
"Show HN" has banned vibe-coded apps because there's been so many.
cossray
I feel they're largely here, on this platform. Hacker News, currently, could be renamed to AI News, without any loss of generality.
EruditeCoder108
Well, many apps I've made are really good, but I would never bother to share them. It takes unnecessary effort, and I don't really know whether what works best for me will work the same way for others.
andrewflnr
By "apps" this author apparently means "PyPi packages". This is a bafflingly myopic perspective in a world of myopic perspectives. Do we really expect people vibecoding "apps" to put anything on PyPi as a result? They're consumers of packagers, not creators.
I don't blame people for responding to the title instead of the article, because the article itself doesn't bother to answer its own question.
show comments
justacatbot
The bottleneck shifted but didn't disappear. Getting to a working prototype in a weekend is real, but error handling, edge cases, and ops work hasn't gotten much faster. Distribution is completely unchanged too. A lot of these 'where are the AI apps' questions are really asking why there aren't more successful AI businesses, which is a harder and very different problem.
wrs
Title asks where the AI apps are. Analysis looks at Python libraries. Kind of a non-sequitur, no?
saidnooneever
Maybe some developers are more productive while the rest of them get laid off, keeping the same release cadence but with fewer devs?
I know this maybe doesn't apply to your analysis, since it's about open source stuff, but this is the sentiment I see at some companies: rather than having 10x the output, which their clients don't need, they produce things more cheaply and earn more money from what they produce (and later lose that revenue to a breach :p).
CrzyLngPwd
The first 80% is the easy part, and good ol' Visual Basic was fabulous at it, but the last 80% is the time suck.
Same with vibe-coded stuff.
jstummbillig
I am worried for people using write ups like this as a huge, much appreciated dose of copium.
Try it out and don't stop trying. If something improves at this rate, even if you think it's not there right now, don't assume it is going to stop. Be honest about the things we were always obviously bad at, that the ai has been getting quickly better at, and assume that it will continue getting better. If this were true, what would that mean for you?
CalRobert
So far, as sideloaded APKs on my tablet. Most recently one that makes it easier to learn Dutch and quiz myself based on captions from tv shows
show comments
quikoa
While a good post the title is a bit ambiguous. The post is about applications created using AI not applications with AI functionality embedded.
karmakurtisaani
As we haven't seen new operating systems or web browsers and the like, I'm guessing the reason is the same one corporate execs have yet to figure out: producing the code is just a small part of it. The big part is iterating on bug fixes, compatibility, maintenance, etc.
severak_cz
My guess: these are not on PyPI because PyPI is for libraries. AI generation is good when you don't care how your app works, when implementation details don't matter.
When you're developing a library it's the exact opposite: you really care about how it works and which interface it provides, so you end up writing it mostly by hand.
epolanski
It's simple: AI speeds up the 80% of development that was never the blocker, and arguably makes the remaining 20% even harder to handle.
I'm sure AI can be a huge boost to great, mature developers, who are insanely rare in an industry that has consistently promoted brainless Ivy League coders farming algo quizzes for months. Those with real sensibility and experience can definitely be enabled to produce more.
But the 20% is still there, and it's easy to make it way harder, because you're less intimate with the brittle 80%.
Robdel12
We're in a personal software era, or a disposable software era, however you want to look at it. I think most people are building for themselves and no longer need to lean on the community to get a lot of things done.
I absolutely hate web development with a passion and haven't done a new from-the-ground-up web app in 25 years; even back then it was mostly a quick copy-and-paste to add a feature.
But since late last year, even when it's not part of the requirements on the app dev + cloud consulting projects I lead, I'll throw in a feature-complete internal web admin site to manage everything for the project, with a UI that looks like something I would have done 25 years ago but with decent UX.
They are completely vibe-coded, authenticated with Amazon Cognito, and the only things I verify are that unauthenticated users can't access endpoints, the permissions of the Lambda hosting environment (IAM role), and the permissions of the database user it connects with.
At most 5 people will ever use the website at a time, but I get scalability for free (not that it matters) because it's hosted on Lambda (yes, with IaC).
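For what it's worth, that "unauthenticated users can't access endpoints" check is easy to script. A minimal sketch of the idea (the endpoint paths and status codes here are made up for illustration, not taken from the commenter's project):

```python
# Smoke test: every protected endpoint must reject unauthenticated requests.
# 401/403 count as "properly protected"; anything else is a failure.

PROTECTED_STATUSES = {401, 403}

def is_protected(status_code: int) -> bool:
    """True if the endpoint rejected the unauthenticated request."""
    return status_code in PROTECTED_STATUSES

def audit(results: dict[str, int]) -> list[str]:
    """Given {endpoint: status_code_without_auth}, return endpoints that leaked."""
    return [ep for ep, code in results.items() if not is_protected(code)]

if __name__ == "__main__":
    # Pretend responses from hitting each endpoint with no Authorization header.
    no_auth_responses = {
        "/admin/users": 401,   # good: authorizer rejected it
        "/admin/config": 403,  # good
        "/admin/export": 200,  # bad: responded without a token
    }
    print("leaking endpoints:", audit(no_auth_responses))  # → ['/admin/export']
```

In practice you would fill `no_auth_responses` by actually requesting each endpoint without an `Authorization` header and recording the status codes.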
The website would not exist at all if it weren’t for AI.
Now just to be clear, if a website is meant for real people and the customer’s customers. I’ll insist on a real web designer and a real web developer be assigned to the project with me.
vicchenai
The PyPI metric feels off. Most of the AI stuff I see shipping is either internal tooling that never hits PyPI, or it's built on top of existing packages (LangChain, the OpenAI SDK, etc.) rather than creating new ones.
The real growth is in apps that use AI as a feature, not AI-first packages. Just about every SaaS quietly added an LLM call somewhere in their stack, and that's hard to measure from dependency graphs.
nickserv
There has been a 2x, and sometimes even 10x, increase in PR size, measured in LoC...
But that's not really what we were promised.
satiated_grue
If one were to release an AI app - what would be an appropriate license?
Genuine question.
show comments
somewhatjustin
> So where are all the AI apps?
They're in the app stores. Apple's review times are skyrocketing at the moment due to the influx of new apps.
jimbob21
Why would package be used as the standard? What person fully leveraging AI is going to put up packages for release? They (their AI model) write the code to leverage it themselves. There is no reason to take on the maintenance of a public package just because you have AI now. If anything, packages are a net drag on new AI productivity because then you'd have to worry about breaking changes, etc. As far as actual apps being built by AI, the same indie hackers that had garbage codebases that worked well enough for them to print money are just moving even faster. There are plenty of stories about that.
dev_tools_lab
One pattern I've noticed: the apps that work best combine multiple models rather than relying on one. Single-model outputs have too much variance for production use cases.
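One simple way to combine models is a majority vote over independent outputs: a single model's outlier answer gets outvoted, so variance drops. A sketch of the idea (the model outputs here are placeholder strings, not real API calls):

```python
from collections import Counter

def majority_vote(outputs: list[str]) -> str:
    """Pick the most common answer across models.
    Ties go to the answer encountered first (documented Counter behavior)."""
    return Counter(outputs).most_common(1)[0][0]

# Three hypothetical models classify the same support ticket:
answers = ["refund", "refund", "escalate"]
print(majority_vote(answers))  # → refund
```

This only works for discrete answers; for free-form generations you would need a judge model or a similarity-based vote instead.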
bsima
It's silly to think that 'AI apps' must look like the enterprise, centrally-managed SaaS that we are used to. My AI apps are all bespoke, tailored to my exact needs, accessed only via my VPN. They would not be useful to anyone else, so why would I make them public?
Hmmm, my anecdotal experience doesn't match up with this article. Personally I am seeing an explosion of AI-created apps. A number of different subreddits I use for disparate interests have been inundated with them lately. Show HN has experienced the same thing, no?
nyc_pizzadev
A friend of mine who is tech savvy, with what I'd call novice-level coding experience, decided to build his dream app. It's really been a disaster. The app is completely broken in many different ways: functionality gaps, no security, no thought-out infrastructure. It's pretty much a dumpster fire. The problem is that he doesn't know what he doesn't know, so it's impossible for him to actually fix it beyond instructing the AI over and over to simply "fix it." The more he does that, the worse the app becomes. He's tried all the major AI vendors, from scratch, with the same result: a complete mess of code. He's given up on it now and has moved on with his life.
I'm not saying AI is bad. In fact, it's the opposite: it's one of the most important tools I've seen introduced in my lifetime. It's like a calculator. It won't turn everyone into a mathematician, but it will turn those who understand math into faster mathematicians.
kartikarti
I think it's hard to measure this; it's kind of like measuring productivity through the number of commits/PRs.
erelong
I think this is a great question to ask and maybe I need my own blog to post about these things as I might reply with a big comment
Making Unpublished Software for Themselves
One issue is, I think maybe a lot of people are making software for themselves and not publishing it - at least I find myself doing this a lot. So there's still "more software produced than before", but it's unpublished
LOC a Good Measure?
Another question, like Lines of Code, is whether new packages are the best measure of AI productivity. AI might make certain packages obsolete, and there may be higher-quality, but fewer, contributions to existing packages as a result. So fewer packages might actually mean more productivity (although, conventionally speaking, we seem to assume the opposite).
Optimizing The Unnoticeable
Another issue that comes up is that maybe AI optimizes unnoticeable things: AI may indeed make certain things go 100x faster or better. But say a website goes from loading in 1 second to 1/100th of a second... it's a real 100x gain, but in practice it isn't experienced as a much bigger gain. It doesn't translate into more tangible goods being produced. People might just load 100 pages in the same amount of time, which eats up the 100x gain anyway (!).
Bottleneck of Imagination
I think this also exposes a bottleneck of imagination: what do we want people to be building with AI? People may not be building things because we need more creative people dreaming up things to build. AI is only fed existing creative solutions and, while it does seem to mix those together to generate new ideas, the people reading the outputs are still only so creative. I've thought standard projects would be 1) creating open source alternatives to existing proprietary software, 2) writing new software for old hardware (like "jailbreaking," but it doesn't have to be) so it can be used for something other than being e-waste, and 3) reverse engineering a bunch of designs so new designs can be implemented on them, where open source code doesn't exist and we don't know how they function (maybe kind of like #1). So there's maybe a need for very "low tech" spaces where people regularly swap ideas on building things they can only build themselves so far, either to get the attention of more capable individuals or to build up teams.
Time Lag to Adapt
Also, people may still be getting adjusted to using AI stuff. One other post detailed that the majority of the planet does not use AI, and an even smaller subset pays for subscriptions. So there's still a big lag in society of adoption, and of adopters knowing how to use the tools. So I think people might really experience optimizing something at 100x, but they may not know how to leverage that to publish it to optimize things for everyone else at 100x amount, yet.
Social Media Breakdown?
Another problem is, I have made stuff I'd like to share but... social media is already over-run with over-regulation and bots. So where do I publish new things? Even on HN, there was that post about how negative the posters can be, who have said very critical things about projects that ended up being very successful. So I wonder if this also fuels people just quietly creating more stuff for their own needs.
Has GDP Gone Up or Time Been Saved?
Do other measures of productivity exist? GDP appears to have probably only gone up a bit. But again, could people be having gains that don't translate into GDP gains? People do seem to post about saving time with AI, but... the insidious thing about technology is that when people save 10 hours with one tool, they usually just end up spending that time working on something else. So unless we're careful, technology doesn't save some people much time at all (in fact, a few people have posted about being addicted to AI and working even more with it than before AI!).
Are There Only So Many "10x Programmers"?
Another issue is, maybe there are only a minority of people who get "10x" gains from AI; at the same time, "lesser" devs (like juniors?) have apparently been displaced by AI with some layoffs and hiring freezes.
Conclusion
I guess we are trying to account for real gains and "100x experiences" people have, with a seeming lack of tangible output. I don't think these things are necessarily at odds with each other for some of the aforementioned reasons written above. I imagine maybe in 5 years we'll see more clearly if there is some noticeable impact or not, and... not to be a doomer / pessimist, but we may have some very negative experience from AI development that seems to negate the gains that we'll have to account for, too.
aaroninsf
Among the various ways this analysis is flawed, two that are drawn from my own experience are:
- meaningful software still takes meaningful time to develop
- not all software is packaged for everyone
I've seen a lot of examples shared of software becoming narrow-cast, and/or ephemeral.
That that doesn't show up in library production or even app store submissions is not interesting.
I'm working on a large project that I could never have undertaken prior to contemporary assistance. I anticipate it will be months before I have something "shippable." But that's because it's a large project, not a one shot.
I was musing that this weekend: when do we see the first crop of serious and novel projects shipping, which could not have been done before (at least, by individual devs)... but which still took serious time.
Could be a while yet.
mentalgear
> So what?
As mentioned in a comment here:
> Maybe the top 15,000 PyPi packages isn't the best way to measure this?
> Apparently new iOS app submissions jumped by 24% last year
Looks like most LLM-generated code is used by amateurs/slop coders to generate end-user apps they hope to sell, and these user profiles are not the type of people who contribute to the data/code commons. Hence there's no uptick in libs. So it's basically a measurement issue.
codybontecou
Stuck behind Apple's app review process.
devmor
My experience with AI-driven and AI-assisted development so far is that it has actually enhanced my workflow despite how much I dislike it.
With a caveat.
If you were to compare my workflow to a decade ago, you wouldn’t see much difference other than my natural skill growth.
The rub is that the tools, communities and services I learned to rely on over my career as a developer have been slowly getting worse and worse, and I have found that I can leverage AI tools to make up for where those resources now fall short.
PeterStuer
My take is you are missing out on a barrage of "Shadow AI" and bespoke LoB and B2B software (By "Shadow AI" I mean the (unsanctioned) use of GenAI in Shadow IT, traditionally dominated by Excel and VBA).
All of the above are huge software markets outside of the typical Silicon Valley bubble.
furyofantares
I have a number of small apps and libraries I've prompted into existence and have never considered publishing. They work great for me, but are slop for someone else. All the cases I haven't used them for are likely incomplete or buggy or weird, the code quality is poor, and documentation is poor (worse than not existing in many cases.)
Plus you all have LLMs at home. I have my version that takes care of exactly my needs and you can have yours.
brontosaurusrex
Intel AI denoiser in Blender.
show comments
kwar13
Apparently everyone here has evidence to the complete contrary.
I'll ask another question: why isn't software getting better? Software seems buggier than ever. Can't we just have an LLM running in a loop fixing bugs? Apparently not. Is this the future? Just getting drowned in garbage software faster and faster?
heliumtera
The thing to shill now is agents.
So they are all producing products to produce products.
My guess is 50% of token usage globally is to produce mediocre articles on "how I use Claude code to tell HN how I use Claude code".
shevy-java
I am still waiting for them.
Kye
Cool data. What do I do with it? None of my use cases involve writing software, so I don't think this is _for_ me since my extensive AI use wouldn't show up in git commits, but I'm not sure who it's for. When I'm talking to artist friends, musician friends, academic friends, etc data is nice to have but I'm talking in stories: the real thing I did and how it made me better at the thing.
nemo44x
AI is unbelievably useful and will continue to make an impact but a few things:
- The 80/20 rule still applies. We've optimized the part that takes 20% of the time (a lot!), but all the hype counts only the 80%-of-the-work part. It looks amazing, and is, but you can't escape the reality that ~80% of the time is still needed on non-trivial projects.
- Breathless AI-CEO hype, because they need money; this stuff costs a lot. That attitude has passed on to run-of-the-mill CEOs who want to feel ahead of things and smart.
- You should be shipping faster in many cases. Lots of hype, but there is real value, especially in automating lots of communication and organization tasks.
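The first bullet is essentially Amdahl's law: if only the fraction of a project that takes 20% of the time gets faster, the overall speedup is capped no matter how good the tools get. A quick back-of-the-envelope:

```python
def overall_speedup(accelerated_fraction: float, factor: float) -> float:
    """Amdahl's law: total speedup when only part of the work gets faster."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# AI makes the 20%-of-time part 10x faster:
print(round(overall_speedup(0.2, 10), 2))    # → 1.22
# Even if that part became infinitely fast, the ceiling is 1/0.8:
print(round(overall_speedup(0.2, 1e12), 2))  # → 1.25
```

A 10x improvement on the coding part buys about 22% overall, which matches the felt gap between demo-speed hype and project-level reality.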
chaostheory
I feel that your assumption that everyone will want to share is a flawed one.
enraged_camel
This article is very poorly researched and reasoned, but it's in the "AI hater" category so I guess it's no surprise it's on the front page.
Furthermore, most productivity gains will be in private repos, either in a work setting or individuals' personal projects.
moralestapia
I agree with the premise of the article, in the sense that there has not been, and I don't think there will be, a 100x increase in "productivity".
However, PyPI is not really the best way to measure this, as the number of people who take the time to wrap their code into a proper package, register on PyPI, push a package, etc. is quite low. Very narrow sampling window.
I do think AI will directly fuel the creation of a lot of personal apps that will never be published anywhere. AI lowers the barrier to entry, as we all know, so now regular folks with a bit of technical knowledge can just build the app they want, tailored to their needs. I think we'll see a lot of that.
gos9
I, for one, am not publishing my “apps” for others to use because my “apps” make me money
dominotw
I am now scared to talk to anyone. Eventually the conversation turns to AI and they want to talk or show their vibecoded app.
I am just tired boss. I am not going to look at your app.
Like others have mentioned, I think the premise of looking at the most popular few projects (pypi.org currently lists 771,120 projects) on pypi as any sort of proxy for AI coding is terribly misguided/unrepresentative and that almost no one is going to be packaging up their vibe-coded projects for distribution on pypi.
That being said, I've personally put 3 up recently (more than I've published in total). I'm sure they have close to zero downloads (why would they? they're brand new, solve my own problems, I'm not interested in marketing them or supporting them, they're just shared because they might be useful to others) so they wouldn't show up in their review. 2 of these are pretty meaty projects that would have taken weeks if not months of work but instead have been largely just built over a weekend or a few days. I'd say it's not just the speed, but that w/o the lowered effort, these projects just wouldn't ever have crossed the effort/need bar of ever being started.
I've probably coded 50-100X more AI-assisted code that will never go to pypi, even as someone that has released pypi packages before (which already puts me in a tiny minority of programmers, much less regular people that would even think about uploading a pypi project).
For those interested in the scope of the recent projects:
https://pypi.org/project/realitycheck/ - first pypi: Jan 21 - 57K SLoC - "weekend" project that kept growing. It's a framework that leverages agentic coding tools like Codex/Claude Code to do rigorous, systematic analysis of claims, sources, predictions, and argument chains. It has 400+ tests, and does basically everything I want it to do now. The repo has 20 stars and I'd estimate only a handful of people are using it.
https://pypi.org/project/tweetxvault/ - first pypi: Mar 16 - 29K SLoC - another weekend project (finished on a second weekend). This project is a tool for archiving your Twitter/X bookmarks, likes, and tweets into a local db, with support for importing from archives and letting you search through them. I actually found 3 or 4 other AI-coded projects that didn't do quite what I wanted, so I built my own. This repo has 4 stars, although a friend submitted a PR and mentioned it solved exactly their problem and saved them from having to build it themselves, so that was nice and justifies publishing for me.
https://pypi.org/project/batterylog/ - first pypi: Mar 22 - 857 SLoC - this project is actually something I wrote (and have been using daily) 3-4 years ago but never bothered to properly package up. It tracks how much battery your laptop drains while asleep, and it's basically the bare-minimum script/installer to be useful. I never packaged it up because, quite frankly, manual PyPI releases are enough of a PITA not to bother, but LLMs now basically make it a matter of saying "cut a release," so when I wanted to add a new feature, I packaged it up as well, which I would never have done otherwise. This repo has 42 stars and a few forks, although probably 0 downloads from PyPI.
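The arithmetic at the core of a tool like that is just differencing battery readings across the sleep interval. A generic sketch, not the project's actual code (the real tool has to read battery state and sleep/wake timestamps from the OS):

```python
def sleep_drain_rate(pct_before: float, pct_after: float, hours_asleep: float) -> float:
    """Percentage points of battery drained per hour of sleep."""
    if hours_asleep <= 0:
        raise ValueError("sleep interval must be positive")
    return (pct_before - pct_after) / hours_asleep

# Closed the lid at 80%, woke 8 hours later at 72%:
print(sleep_drain_rate(80, 72, 8))  # → 1.0
```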
(I've spent the past couple years heavily using AI-assisted workflows, and only in the past few months (post Opus 4.6, GPT-5.2) would I have even considered AI tools reliable enough to consider trusting them to push new packages to pypi.)
superkuh
On my local computer used only by me because now I don't need a corporation to make them for me. In the past decades I'd make maybe one or two full blown applications for myself per 10 years. In the past year "I" (read: a corporate AI and I) have made dozens to scratch many itches I've had for a very long time.
It's a great change for a human person. I'm not pretending I'm making something other people would buy nor do I want to. That's the point.
threethirtytwo
This is so stupid. I don't know whether AI has improved things, but this is clearly cope: we're not even a year into the transition since agentic coding took over, so any data you gather now is not the full story.
But people are desperate for data right? Desperate to prove that AI hasn't done shit.
Maybe. But this much is true. If AI keeps improving and if the trendline keeps going, we're not going to need data to prove something equivalent to the ground existing.
imperio59
This is such copium for AI haters. I stopped writing almost any code by hand at the beginning of this year, and I've shipped 3 production projects, each in a matter of days, that would have taken months or years to build by hand.
Except none of them are open source so they don't show up in this article's metrics.
But it's fine. Keep your head in the sand. It doesn't change the once in a lifetime shift we are currently experiencing.
ramesh31
No one needs another SaaS. Games are the real killer app for AI. Hear me out.
I've wanted to make video games forever. It's fun, and scratches an itch that no other kind of programming does. But making a game is a mountain of work that is almost completely unassailable for an individual in their free time. The sheer volume of assets to be created stops anything from ever being more than a silly little demo. Now, with Gemini 3.1, I can build an asset pipeline that generates an entire game's worth of graphics in minutes, and actually be able to build a game. And the assets are good. With the right prompting and pipeline, Gemini can now easily generate extremely high quality 2d assets with consistent art direction and perfect prompt adherence. It's not about asking AI to make a game for you, it's about enabling an individual to finally be able to realize their vision without having to resort to generic premade asset libraries.
show comments
notjes
All apps you are using are made with AI.
erythro
Not all of us get addicted to the rat race and wake up at 3am to run more Ralph loops. Some are perfectly content getting the same amount of work done as before, just with less investment of time and effort.
Maybe the top 15,000 PyPi packages isn't the best way to measure this?
Apparently new iOS app submissions jumped by 24% last year:
> According to Appfigures Explorer, Apple's App Store saw 557K new app submissions in 2025, a whopping 24% increase from 2024, and the first meaningful increase since 2016's all-time high of 1M apps.
The chart shows stagnant new iOS app submissions until AI.
Here's a month by month bar chart from 2019 to Feb 2026: https://www.statista.com/statistics/1020964/apple-app-store-...
Also, if you hang out in places with borderline technical people, they might do things like vibe-code a waybar app and proudly post it to r/omarchy which was the first time they ever installed linux in their life.
Though I'd be super surprised if average activity didn't pick up big on GitHub in general. And if it hasn't, it's only because we overestimate how fast people develop new workflows. I'm just going by my own increase in software output and the projects I've taken on over the last couple of months.
Finally, December 2025 (Opus 4.5 and that new Codex one) was a big inflection point where AI was suddenly good enough to do all sorts of things for me without hand-holding.
I deleted vscode and replaced with a hyper personal dashboard that combines information from everywhere.
I have a news feed, work tab for managing issues/PRs, markdown editor with folders, calendar, AI powered buttons all over the place (I click a button, it does something interesting with Claude code I can't do programmatically).
Why don't I share it? Because it's highly personal, others would find it doesn't fit their own workflow.
AI makes the first 90% of writing an app super easy and the last 10% way harder because you have all the subtle issues of a big codebase but none of the familiarity. Most people give up there.
I think this article is making a pretty big assumption: that people making things with AI are also going to be publishing them. And that's just the opposite of what should be expected, for the general case.
Like I've been making things, and making changes to things, but I haven't published any of that because, well they're pretty specific to my needs. There are also things which I won't consider publishing for now, even if generally useful because, well the moat has moved from execution effort to ideas, and we all want to maintain some kind of moat to boost our market value (while there's still one). Everyone has reasonable access to the same capabilities now, so everyone can reasonably make what they need according to their exact specs easily, quickly and cheaply.
So while there are many things being made with AI, there is ever-decreasing reasons to publish most of it. We're in an era of highly personalized software, which just isn't worth generalizing and sharing as the effort is now greater than creating from scratch or modifying something already close enough.
The article measures the wrong thing. PyPI package creation is a terrible proxy for AI-assisted software output because packages are published for reuse by others, which requires documentation, API design, and maintenance commitments that AI doesn't help with much.
The real output is happening in private repos, internal tools, and single-purpose apps that never get published anywhere. I've been building a writing app as a side project. AI got me from zero to a working PWA with offline support, Stripe integration, and 56 SEO landing pages in about 6 weeks of part-time work. Pre-AI that's easily a 6-month project for one person.
But I'm never going to publish it as a PyPI package. It's a deployed web app. The productivity gain is real, it just doesn't show up in the datasets this article is looking at.
The iOS App Store submission data (24% increase) that someone linked in the comments is a much better signal. That's where the output is actually landing.
This reminds me so much of the .com bubble in 2000. A lot of clueless companies thought they just needed to "do internet," without any further understanding or strategy. They burned a ton of money and got nothing out of it. Other companies understood that the internet was an enabling technology that could support a lot of business processes, so they quietly improved their businesses with its help.
I see the same with AI. Some companies will use it quietly and productively without much fuss. Others just use it as a marketing tool, or as an ego trip for execs, with no real understanding.
Making complete coherent products is as hard as ever, or even harder if you intend to trade robustness for max agentic velocity.
What I do very successfully is low stakes stuff for work (easy automations, small QoL improvements for our tooling, a drive-by small Jira plugin)
And then I do a lot of crazy exploring, or hyper-personal just for myself stuff that can only exist because I can now spawn and abandon it in a couple days instead of weeks or months.
Not sure that I'd look at python package stats to build this particular argument on.
First, I find that I'm using a lot fewer libraries in general because I am less constrained by the mental models imposed by library authors upon what I'm actually trying to do. Libraries are often heavy and by nature abstract low-level calls from API. These days, I'm far more likely to have 2-3 functions that make those low-level calls directly without any conceptual baggage.
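A minimal sketch of that "2-3 direct functions instead of a client library" pattern, using only the standard library. The endpoint, resource names, and bearer-token scheme here are invented for illustration, not any particular API:

```python
import json
import urllib.request

# A made-up API base for illustration purposes only.
API_BASE = "https://api.example.com/v1"

def _get(path, token):
    """One thin helper over the stdlib; no client-library abstractions."""
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def list_widgets(token):
    """List all widgets; the whole 'SDK' is just these few lines."""
    return _get("/widgets", token)

def get_widget(token, widget_id):
    """Fetch one widget by id."""
    return _get(f"/widgets/{widget_id}", token)
```

The point of the sketch: the entire "conceptual baggage" of a wrapper library collapses into a couple of functions you can read in ten seconds and edit when the API changes.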
Second, I am generalizing but a reasonable assertion can be made that publishing a package is implicitly launching an open source project, however small in scope or audience. Running OSS projects is a) extremely demanding b) a lot of pain for questionable reward. When you put something into the universe you're taking a non-zero amount of responsibility for it, even just reputationally. Maintainers burn out all of the time, and not everyone is signed up for that. I don't think there's going to be anything remotely like a 1:1 Venn for LLM use and package publishing.
I would counter-argue that in most cases, there might already be too many libraries for everything under the sun. Consolidation around the libraries that are genuinely amazing is not a terrible thing.
Third, one of the most recurring sentiments in these sorts of threads is that people are finally able to work through the long lists of ideas they had but would have never otherwise gotten around to. Some of those ideas might have legs as a product or OSS project, but a lot of them are going to be thought experiments or solve problems for the person writing them, and IMO that's a W not an L.
Fourth, once most devs are past the "vibe" party trick phase of LLM adoption, they are less likely to squat out entire projects and far, far more likely to return to doing all of the things that they were doing before; just doing them faster and with less typing up-front.
In other words, don't think project-level. Successful LLM use cases are commit-level.
What if this is just telling us that much of the coding being done in the world, or knowledge work in general, is just busy work? Just because you double the capacity of knowledge workers doesn't mean you double the amount of useful output. Maybe we have never been limited by our capacity to produce, but by our ability to come up with good ideas and socially coordinate ourselves around the best ones.
I'm not convinced that PyPI is the right metric to use to answer this question. Some (admittedly anecdotal) observations:
1) I'm a former SWE in a business role at a small-market publishing company. I've used Claude Code to automate boring processes that previously consumed weeks of our ops and finance teams' time per year. These aren't technically advanced, but previously would have required in-house dev talent that would not have been within reach of small businesses. I wouldn't have had the time to code these things on my own, but with AI assistance the time investment is greatly reduced (and mostly focused on QA). The only needle moved here is on a private Github repo, but it's real shipped code with small but direct impact.
2) I used to often find myself writing simple Perl wrappers to various APIs for personal or work use. I'd submit these to CPAN (Perl's equivalent to PyPI) in case anyone else could use them to save the 30-60 minutes of work involved. These days I don't bother -- most AI tools can build these in a matter of seconds; publishing them to CPAN or even Github now feels like unnecessary cruft, especially when they're likely to go without active maintenance. So, my LOC published to public repos is down, even though the amount of software produced is the same. It's just that some of that software has become less useful to the world writ large.
3) The code that's possible to ship quickly with pure AI (vibe coding) is by definition not the kind of reusable code you'd want to distribute on PyPI. So I'd expect any productivity impact from AI on OSS that's designed to be reusable to come very slowly, rather than as a "hockey stick".
The thesis has it backwards. We will see fewer published/downloaded apps/packages as people rely on others less. I'm not sure we're quite there yet, but I'm increasingly likely to spend a few minutes giving an LLM a chance to make a tool I need instead of sifting through sketchy and dodgy websites for some slightly obscure functionality. I use fewer of those ad-heavy sites for converting one text file format to another.
Personally, I see the paid or adware software market shrinking, not growing, as a testament to the success of LLMs in coding.
Does the data not support a 2X increase in packages?
Pre-ChatGPT, in ~2020, there were about 5,000 new packages per month. Starting in 2025 (the actual year agents took off), there is a clear uptick in packages that is consistently about 10,000 or 2X the pre-ChatGPT era.
In general, the rate of increase is on a clear exponential. So while we might not see a step change in productivity, there comes a point where the average developer is in fact 10x as productive as before. It just doesn't feel so crazy because it comes about in discrete 5% boosts.
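A back-of-the-envelope check on that framing (my arithmetic, not a claim from the PyPI data): if each boost compounds at 5%, reaching an overall 10x takes roughly 48 of them.

```python
import math

# How many compounding 5% productivity boosts reach an overall 10x?
# Solve 1.05**n >= 10 for the smallest integer n.
boost = 1.05
target = 10.0

n = math.ceil(math.log(target) / math.log(boost))
print(n)           # smallest number of 5% steps needed: 48
print(boost ** n)  # overall multiplier at that point, a bit over 10
```

Which is consistent with the comment's intuition: no single step feels dramatic, but the compounding destination is still an order of magnitude.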
I also disagree that this dataset is a good indicator of productivity. I wouldn't actually expect the number of packages or the frequency of updates to track closely with productivity. My first-order guess would be that AI is actually deflationary here. Why spend the time to open source something that AI can gen up for anyone on a case-by-case basis, specific to the project? It takes a certain level of dedication and passion for a person to open source a project, and if the AI just made it for them, then they haven't actually made the investment of time and effort that would make them feel justified in publishing the package.
The metrics I would expect to go up are actually the size of codebases, the number of forks of projects that create hyper customized versions of tools and libraries, and other metrics like that.
Overall, I'd predict AI is deflationary on the number of products that exist. If AI removes the friction involved with just making a custom solution, then the amount of demand for middleman software should actually fall as products vertically integrate and reduce dependencies.
Claude Code was released for general use in May 2025. It's only March.
Also using PyPI as a benchmark is incredibly myopic. Github's 2025 Octoverse[0] is more informative. In that report, you can see a clear inflection point in total users[1] and total open source contributions[2].
The report also notes:
> In 2025, 81.5% of contributions happened in private repositories, while 63% of all repositories were public
[0]: https://github.blog/news-insights/octoverse/octoverse-a-new-...
[1]: https://github.blog/wp-content/uploads/2025/10/octoverse-202...
[2]: https://github.blog/wp-content/uploads/2025/10/octoverse-202...
I have published 4 open source projects thanks to the productivity boost from AI. No apps though, just things I needed in my line of work.
But I have been absolutely flooded with trailers for new and upcoming indie games. And at least one indie developer has admitted that certain parts of their game were made with the aid of AI.
I also noticed sometimes when I think of writing something, I ask AI first if it exists, and AI throws up some link and when I check the link it says "made with <some AI>".
So I'm not sure what the author is trying to say here, but I definitely feel like I'm noticing a rise in software output due to AI.
But with that said, I also am noticing the burden of taking care of those open source projects. Sometimes it feels like I took on a 2nd job.
I think a lot of software is being produced with AI and going unnoticed, they don't all end up on the front page of HN for harassing developers.
I won't make any claims as to the Python ecosystem and why there is no effect seen here (and I suppose no effect seen of the Internet on productivity) but one thing that is entirely normal for me now is that I never see the need to open-source anything. I also don't use many new open-source projects. I can usually command Claude Code to build a highly idiosyncratic thing of greater utility. The README.md is a good source of feature inspiration but there are many packages I simply don't bother using any more.
Besides, it's working for me. If it isn't working for others I don't want to convince them of anything. I do want to hear from other people for whom it's working, though, so I'm happy to share when things work for me.
I fail to see why the author thinks Python packages are a good proxy for AI driven/built code. I've built a number of projects with AI, but I haven't created any new packages.
It's like looking at tire sales to figure out where the EVs are.
Coding assistants/agents/claws whatever the current trend is are over-hyped but also quite useful in good hands.
But the mistake is to expect a huge productivity boost.
This is highly related to Amdahl's law, also The Mythical Man-Month.
Some tasks can be accomplished so fast that it seems magical, but the entire process is still very serial, architecture design and debug are pretty weak on the AI side.
Please, be patient. Wrangling AI agents, writing and rewriting prompts, waiting for the start of another month because tokens ran out - there are so many challenges here, you cannot expect everyone to ship an app a day or something.
Thoughts:
1. Some hype-types may have been effusive about AI-assisted coding since ChatGPT, but IMO the commonly agreed paradigm shift was Claude Code, and especially 4.5, which is very, very recent.
2. Anchoring biases in reaction to hype are still letting one's perspective be defined by hype. Yes, the Cursor post is a joke, but leading with it is a strawman. This article does not aim to take its subject seriously, IMO.
3. While I agree the hype is currently at comical levels, the utility of the current LLMs is obvious, and the reasons "skilled" usage isn't easily quantifiable are also obvious.
I.e., using agents to iterate through many possible approaches, spike out migrations, etc. might save a project a year of misadventures, redesigns, and so on, but that productivity gain _subtracts_ the intermediate versions that _didn't_ end up being shipped.
As others have mentioned, I think yak-shaving is now way more automated. I.e., if I want to take a new terminal for a spin, throw together a devtool to help me think about a specific problem better, etc., I can do it with very low friction. So "personal" productivity is way higher.
I’m not a developer by trade. I’ve screwed around with some programming classes when I was in school, and have written some widely used but highly specific scripts related to my work, but I’ve never been a capital-D developer.
In the last few months, Gemini (and I) have written four highly personal, very niche apps that are perfect for my needs, but that I would never dream of releasing. Things like cataloguing and searching my departed mom's recipe cards, or a text-message-based budget tracker for my wife and me to share.
These things would never be released or available as open source or commercial applications in the way that I wanted them, and it took me less time to have them built with AI than it would have taken me to research existing alternatives and adapt my workflow/use case to fit whatever I found.
So yeah, there are more apps but I would venture to say you’ll never see most of them…
AI does make me more productive, at least up to getting my idea to the "working prototype" stage. But in my personal experience, no one has realistically been able to get to the 10x level that a lot of people claim to have achieved with LLMs.
Yes, you do produce more code. But LoC produced is never a healthy metric. Reviewing the LLM generated code, polishing the result and getting it to production-level quality still very much requires a human-in-the-loop with dedicated time and effort.
On the other hand, people who vibe code and claim to be 10x as productive, producing numerous PRs with large diffs, usually bog down the overall productivity of teams by requiring tedious code reviews.
Some of us are forced to fast-track this review process so as not to slow down these "star developers", which leads to a slow erosion in overall code quality that, in my opinion, more than offsets the productivity gains from using the AI tools in the first place.
They're private, and that's the beauty. Code is so cheap now, we can wean ourselves off massive dependency chains.
200 years ago text was much more expensive, and more people memorized sayings and poems and quotations. Now text is cheap, and we rarely quote.
I AI coded an entire platform for my work. It works great for me. I also recognize that this is not something I want to make into a commercial product because it was so easy that there's just no value.
I think this might be more of a comment on software as a business than on AI not coding good apps.
The models are not well trained on bringing products to market.
And even “product engineers” often do not have experience going from zero to post sales support on a saas on their own.
It is a skill set of its own to make product decisions and not only release but stick with it after the thing is not immediately successful.
The ability to get some other idea going quickly with AI actually works against the habits needed to tough through the valley(s).
Vibe coding is actually a brilliant MLM scheme: people buy tokens to generate apps that re-sell tokens (99% of those apps are AI-something).
This is going to cause people to react, but I think those of us who truly love open source don't push AI-generated code upstream because we know it's just not ready for use beyond agentic use. It's just not robust for a lot of common use cases: the code it produces is hyper-hardcoded by default, and the bugs are so basic that I doubt any developer who actually cared would push something so shamefully sloppy upstream with their name on it.
The tools for generating AI code aren't yet capable of producing code that is decent enough for general-purpose use, with good, robust tests and a clean, quality implementation.
Where are they? Well, they aren't being uploaded to PyPI. 90% of the "AI apps" are one-off scripts that get used by exactly one person and thrown away. The rest are too proprietary, too personal, or too weird to share.
Isn't most of the positive impact going to be not "new projects" but the relative strength of the ideas that make it into the codebase? Which is almost impossible to measure. You know, the bigger ideas that were put off before and are now more tractable.
How do packages measure anything? This is a biased sample. The average AI-using developer would never in their life publish a package or make any open source contribution; they probably work on proprietary software. Not to say the conclusions are wrong, though.
I think part of the mismatch is that people are still looking for “more apps” as the output metric.
A lot of the real value shows up as workflow compression instead. Internal tools, one-off automations, bespoke research flows, coding helpers, things that would never have justified becoming a product in the first place.
There are actually a lot of new startups coming out with agentic workflows, and they're probably moving fast. But to your point, there's probably still a lot of friction that keeps the average person/dev from launching new companies.
The reason why the release cadence of apps about AI has increased presumably reflects the simple facts that
a) there are likely many more active, eager contributors all of a sudden, and
b) there's suddenly a huge amount of new papers published every week about algorithms and techniques that said contributors then eagerly implement (usually of dubious benefit).
More cynically, one might also hypothesize that
c) code quality has dropped, so more frequent releases are required to fix broken programs.
Well, it's kind of like asking about streaming media. If anyone can have their own "tv show" or anyone can be their own "music producer" then the ratios are so radically altered vis-a-vis content/attention calculation. The question has never been "more means more success stories" because musicians make $.000001 per stream, so even if they stream millions of songs ... you get the point. So surely there are good apps, but the accompanying deluge makes them seem less significant.
I’ve done a event ticket system that’s in production. Stripe integration, resend for mailing and a scan app to scan tickets. It’s for my own club but it’s been working quite well. Took about 80 hours from inception to live with a focus on testing.
I’ve done some experiments with reading gedcom files, and I think I’m quite close to a demoable version of a genealogy app.
Biggest thing is a tool for remotely working musicians. It's about 10,000 lines of well-written Rust, it's in a demoable state, and I wish I could work more on it, but I just started a new job.
But yeah, this wouldn't have been possible if I hadn't been a very experienced dev who knows how to get things live. Also, I've found a way to work with LLMs that works for me: I can quickly steer the process in the right direction, and I understand the code that's written. Again, it's possible that a lot of real experience is needed for this.
Don't get me wrong: there are real productivity gains to be had, but the reality is that building small one-offs and personal tools is not the same thing as building, operationalizing, and maintaining a large system used by paying customers and performing critical business transactions.
A lot of devs are surrendering their critical thinking facilities to coding agents now. This is part of why the hype has to exist: to convince devs, teams, and leaders that they are "falling behind". Hand over more of your attention (and $$$) to the model providers, create the dependency, shut off your critical thinking, and the loop manifests itself.
The providers are no different from doctors pushing OxyContin in this sense: make teams dependent on the product. The more they use the product, the more they build a dependency. Junior and mid-career devs have their growth curves fully stunted and become entirely reliant on the LLM to perform even basic functions. Leaders believe the hype, lay off teams, and replace them with agents, mistaking speed for velocity. The more slop a team codes with AI, the more it becomes reliant on AI to maintain the codebase, because now no one understands it. What do you do now? Double down; more AI! Of course, the answer is an AI code reviewer! Nothing that more tokens can't solve.
I work with a team that is heavily, heavily using AI and I'm building much of the supporting infrastructure to make this work. But what's clear is that while there are productivity gains to be had, a lot of it is also just hype to keep the $$$ flowing.
Wouldn't the apps go into the Apple App Store and Google Play? I guess looking at Python packages is valid, but I don't think it's the first thing someone thinks to target with vibe coding. And many apps end up as websites, and a website never tells me much, as a user, about how it was made.
I am learning music. I used Codex to create a native metronome app, a circle-of-fifths app, and a practice journal app. I try to build native app alternatives.
I have no plans to publish them or make them open source, so they will not be part of this metric. I believe others are doing this too.
I don't think people are using AI to create new dependencies that they're then submitting to open source package managers (which is what this shows)
This is more useful for discussing what kind of projects AI is being used for than whether it's being used.
Is this the best way to measure this? I think the biggest adopters of AI coding have been companies building features on existing apps, not building new apps entirely. Wouldn't it make more sense to look at how quickly teams are able to build and ship within companies?
It seems like all tech executives are saying they are seeing big increases in productivity among engineering teams. Of course everyone says they're just [hyping, excusing layoffs, overhired in 2020, etc], but this would be the most relevant metric to look at I think.
A bit tangential to the article themes, but I feel in some workplaces that engineering velocity has gone up while product cycles and agile processes have stayed the same. People end up churning tickets faster and working less, while general productivity has not changed.
Of course these are specific workplaces designed around moving tickets on a board, not high-agentic, fast-moving startups or independent projects—but they might represent a lot of the developer workforce.
I also know this is not everyone's experience and probably a rare favorable outcome of productivity gain captured by a worker that is not and won't stay the norm.
I'd take this info with a grain of salt. You have to understand how new some of these developments are. It's only been a couple of months since we hit the opus 4.5+ threshold. I created 4 react packages for kicks in a weekend: https://www.hackyexperiments.com/blog/shipping-react-librari...
This is just counting pypi packages. Why would I go to the effort of publishing a library or cli tool that took me ten minutes to create? Especially in an environment where open source contributions from strangers are useless. If anything I'd expect useful AI to reduce the number of new pypi packages.
Looking at Python packages, or any developer-facing form of software, is not a good indicator of AI-based production. The key benefit of AI development is that our focus moves up a few layers of abstraction, allowing us to focus on real-world solutions. Instead of measuring Github, you need to measure feature releases, internal tools created, single-user applications built for a single niche use case.
Measuring python packages to indicate AI-based production is like measuring saw production to measure the effectiveness of the steam engine. You need to look at houses and communities being built, not the tools.
I've been vibe-coding a Plex music player app for macOS and iOS. (I don't like PlexAmp.) I've gotten to the point where they are the apps I use for listening to music. But they are really just in an alpha/beta state, and I'm having a pretty hard time getting past that. The last few weeks have felt like I'm playing whack-a-mole with bugs and issues. It's definitely not at the point where others will be willing to use it as their daily app. I'm having to decide now whether to keep putting time into it. The vibe-coding isn't as fun when you're just fixing bugs.
One problem with a lot of the skepticism around AI produced software is that it focuses on existing ways of packaging and delivering software. PyPi packages are one example, shipping “apps” another.
While it's interesting to see that in open source software the increase is not dramatic, this ignores the many people now gen-coding software just for themselves that they will never publish, or that winds up on hosting platforms like Replit.
Even taking the “we’re all 100x more efficient at writing code” argument at face value… there’s still all of the product/market fit, marketing, sales, etc “schlep” which is very much non-trivial.
Are there any agentic sales and marketing offerings?
Because being able to reliably hand off that part of the value chain to an agent would close a real gap. (Not sure this can be done in reality)
- This would be much more insightful if the author took the number of submissions to Product Hunt and the top 10 SaaS directories as the measure, to see how many new apps were created in the pre-AI and post-AI eras.
- Product Hunt or AppSumo is something I believe everyone tries to get a submission into, which would more truly measure how many new apps we're getting per month these days.
I like using it to make personal apps that are specific to my use-case and solve problems I've had for ages, but I like my job (scientist), and I don't want to run an app company.
Internally, we've created really good debugging tools that can aggregate data from a lot of sources. We've yet to address the quality of vibe-coded critical applications, so those aren't merged, but one-off tools for on-call/alert debugging and internal workflows have skyrocketed.
On Show HN.
There's tons of AI apps. They're all general-use chatbots or coding agents: Manus, Cursor, ChatGPT. Almost every app that has robust search uses a reranker LLM. AI is everywhere.
As far as totally new products: I built one (Habit.am, wordless journaling for mental health), and new products require new habits, people trying new things. It's not that easy to change people's behavior. It would be much easier to sell my little app if it were a literal plain old journal.
There are more apps, fewer libraries.
You don't need as many libraries when functionality can be vibe-coded.
You don't need help from the open source community when you have an AI agent.
The apps are probably mostly websites and native apps, not necessarily published to PyPI.
"Show HN" has banned vibe-coded apps because there's been so many.
I feel they're largely here, on this platform. Hacker News, currently, could be renamed to AI News, without any loss of generality.
Well, many apps I made are really good, but I would never bother to share them; it takes unnecessary effort, and I don't really know that what works best for me will work the same way for others.
By "apps" this author apparently means "PyPI packages". This is a bafflingly myopic perspective in a world of myopic perspectives. Do we really expect people vibecoding "apps" to put anything on PyPI as a result? They're consumers of packages, not creators.
I don't blame people for responding to the title instead of the article, because the article itself doesn't bother to answer its own question.
The bottleneck shifted but didn't disappear. Getting to a working prototype in a weekend is real, but error handling, edge cases, and ops work hasn't gotten much faster. Distribution is completely unchanged too. A lot of these 'where are the AI apps' questions are really asking why there aren't more successful AI businesses, which is a harder and very different problem.
Title asks where the AI apps are. Analysis looks at Python libraries. Kind of a non-sequitur, no?
Maybe some developers are more productive while the rest of them are laid off, keeping the same release cadence but with fewer devs?
I know this may not be relevant to your analysis, since it's about open source stuff, but this is the sentiment I see at some companies: rather than having 10x the output, which their clients don't need, they produce things cheaper and earn more money from what they produce (and later lose that revenue to a breach :p)
The first 80% is the easy part, and good ol' Visual Basic was fabulous at it, but the last 80% is the time suck.
Same with vibe-coded stuff.
I am worried about people using write-ups like this as a huge, much-appreciated dose of copium.
Try it out and don't stop trying. If something improves at this rate, even if you think it's not there right now, don't assume it is going to stop. Be honest about the things we were always obviously bad at, that the ai has been getting quickly better at, and assume that it will continue getting better. If this were true, what would that mean for you?
So far, as sideloaded APKs on my tablet. Most recently one that makes it easier to learn Dutch and quiz myself based on captions from tv shows
While it's a good post, the title is a bit ambiguous. The post is about applications created using AI, not applications with AI functionality embedded.
As we haven't seen new operating systems or web browsers and the like, I'm guessing the reason is the same one corporate execs still have to find out: producing the code is just a small part of it. The big part is iterating on bug fixes, compatibility, maintenance, etc.
My guess: these are not on PyPI because they're not libraries. AI generation is good when you don't care about how your app works, when implementation details don't matter.
When you are developing a library it's the exact opposite: you really care about how it works and which interface it provides, so you end up writing it mostly by hand.
It's simple. AI speeds the 80% of development that was never the blocker.
Arguably makes the remaining 20% even harder to handle.
I'm sure that AI can be a huge boost to great, mature developers. Which are insanely rare in an industry that has consistently promoted brainless ivy league coders farming algo quizzes for months.
But those with a huge sensibility and experience can definitely be enabled to produce more.
But the 20% is still there and again, it's easy to make it way harder because you're less intimate with the brittle 80%.
We’re in a personal software era. Or disposable software era however you want to look at it. I think most people are building for themselves and no longer needing to lean on community to get a lot of things done now.
Self plug, but basically that’s the TL;DR https://robertdelu.ca/2026/02/02/personal-software-era/
I absolutely hate web development with a passion and haven’t done a new from the ground up web app in 25 years and even since then it was mostly a quick copy and paste to add a feature.
But since late last year even when it’s not part of the requirements leading app dev + cloud consulting projects, I’ll throw in a feature complete internal web admin site to manage everything for a project with a UI that looks like something I would have done 25 years ago with a decent UX.
They are completely vibe coded, authenticated with Amazon Cognito, and the only things I verify are that unauthenticated users can't access endpoints, the permissions of the Lambda hosting environment (IAM role), and the permissions of the database user it's using.
Only at most 5 people will ever use the website at a time - but yeah I get scalability for free (not that it matters) because it’s hosted on Lambda. (yes with IAC)
The website would not exist at all if it weren’t for AI.
Now just to be clear, if a website is meant for real people and the customer’s customers. I’ll insist on a real web designer and a real web developer be assigned to the project with me.
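The "unauthenticated users can't access endpoints" check from the comment above could be sketched roughly like this. The endpoint list is a placeholder, and the assumption that rejected requests come back as 401/403 is mine; a real Cognito-fronted app might redirect to a login page instead:

```python
import urllib.error
import urllib.request

# Hypothetical admin endpoints to probe; replace with your own.
ENDPOINTS = [
    "https://admin.example.com/api/users",
    "https://admin.example.com/api/projects",
]

def unauthenticated_leaks(endpoints):
    """Return every endpoint that answers a credential-less request
    with anything other than 401/403 (i.e. a potential leak)."""
    leaks = []
    for url in endpoints:
        try:
            urllib.request.urlopen(url)  # deliberately no auth header
            leaks.append(url)            # a 2xx with no creds is a leak
        except urllib.error.HTTPError as err:
            if err.code not in (401, 403):
                leaks.append(url)
    return leaks
```

Run after every deploy; an empty list is the only passing result.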
The PyPI metric feels off. Most of the AI stuff I see shipping is either internal tooling that never hits PyPI, or it's built on top of existing packages (langchain, openai sdk, etc.) rather than creating new ones.
The real growth is in apps that use AI as a feature, not AI-first packages. Like every SaaS just quietly added an LLM call somewhere in their stack. That's hard to measure from dependency graphs.
There has been a 2x and sometimes even 10x in PR size, measured in LoC...
But that's not really what we were promised.
If one were to release an AI app - what would be an appropriate license? Genuine question.
> So where are all the AI apps?
They're in the app stores. Apple's review times are skyrocketing at the moment due to the influx of new apps.
Why would packages be used as the standard? What person fully leveraging AI is going to put up packages for release? They (their AI model) write the code to leverage it themselves. There is no reason to take on the maintenance of a public package just because you have AI now. If anything, packages are a net drag on new AI productivity because then you'd have to worry about breaking changes, etc. As far as actual apps being built by AI, the same indie hackers that had garbage codebases that worked well enough for them to print money are just moving even faster. There are plenty of stories about that.
One pattern I've noticed: the apps that work best combine multiple models rather than relying on one. Single-model outputs have too much variance for production use cases.
It's silly to think that 'AI apps' must look like the enterprise, centrally-managed SaaS that we are used to. My AI apps are all bespoke, tailored to my exact needs, accessed only via my VPN. They would not be useful to anyone else, so why would I make them public?
https://gethuman.sh
Hmmm, my anecdotal experience doesn't match up with this article. Personally I am seeing an explosion of AI-created apps. A number of different subreddits I use for disparate interests have been inundated with them lately. Show HN has experienced the same thing, no?
A friend of mine who is tech savvy and, I would say, has novice-level coding experience decided to build his dream app. It's really been a disaster. The app is completely broken in many different ways, has functionality gaps, no security, no thought-out infrastructure; it's pretty much a dumpster fire. The problem is that he doesn't know what he doesn't know, so it's impossible for him to actually fix it beyond instructing the AI over and over to simply "fix it". The more this is done, the worse the app becomes. He's tried all the major AI vendors, from scratch, same result: a complete mess of code. He's given up on it now and has moved on with his life.
I'm not saying that AI is bad; in fact, it's the opposite: it's one of the most important tools I have seen introduced in my lifetime. It's like a calculator. It's not going to turn everyone into a mathematician, but it will turn those who have an understanding of math into faster mathematicians.
i think it's hard to measure this, it's kinda like measuring productivity through number of commits / PRs
I think this is a great question to ask and maybe I need my own blog to post about these things as I might reply with a big comment
Making Unpublished Software for Themselves
One issue is, I think maybe a lot of people are making software for themselves and not publishing it - at least I find myself doing this a lot. So there's still "more software produced than before", but it's unpublished
LOC a Good Measure?
Another question, like with Lines of Code, is whether new packages are really the best way to measure AI productivity. AI might make certain packages obsolete, and there may be higher-quality but fewer contributions made to existing packages as a result. So fewer packages might actually mean more productivity (although, generally, we seem to think it's the opposite, conventionally speaking)
Optimizing The Unnoticeable
Another issue that comes up is that maybe AI optimizes unnoticeable things: AI may indeed make certain things go 100x faster or better. But say a website goes from loading in 1 second to 1/100th of a second... it's a real 100x gain, but in practice it isn't experienced as much of a gain. It doesn't translate into more tangible goods being produced. People might just load 100 pages in the same amount of time, which eats up the 100x gain anyway (!).
Bottleneck of Imagination
I think this also exposes a bottleneck of imagination: what do we want people to be building with AI? People may not be building things because we need more creative people dreaming up things to build. AI is only fed existing creative solutions and, while it does seem to mix those together to generate new ideas, the people reading the outputs are still only so creative. Standard projects I've thought of would be 1) creating open source alternatives to existing proprietary software, 2) writing new software for old hardware (like "jailbreaking", but it doesn't have to be) so that it can be used for something other than becoming e-waste, and 3) reverse engineering designs where open source code doesn't exist and we don't know how they function, so new designs can be implemented on them (maybe kind of like #1). So there is maybe a need for very "low tech" spaces where people regularly swap ideas on building things they can only build themselves so much, to either get the attention of more capable individuals or to build up teams.
Time Lag to Adapt
Also, people may still be getting adjusted to using AI stuff. One other post detailed that the majority of the planet does not use AI, and an even smaller subset pays for subscriptions. So there's still a big lag in society of adoption, and of adopters knowing how to use the tools. So I think people might really experience optimizing something at 100x, but they may not know how to leverage that to publish it to optimize things for everyone else at 100x amount, yet.
Social Media Breakdown?
Another problem is, I have made stuff I'd like to share but... social media is already over-run with over-regulation and bots. So where do I publish new things? Even on HN, there was that post about how negative the posters can be, who have said very critical things about projects that ended up being very successful. So I wonder if this also fuels people just quietly creating more stuff for their own needs.
Has GDP Gone Up or Time Been Saved?
Do other measures of productivity exist? GDP appears to have only gone up a bit. But again, could people be having gains that don't translate to GDP gains? People do seem to post about saving time with AI, but... the insidious thing about technology is that when people save 10 hours with one tool, they usually just end up spending that time working on something else. So unless we're careful, technology doesn't save some people much time at all (in fact, a few people have posted about being addicted to AI and working even more with it than before AI!).
Are There Only So Many "10x Programmers"?
Another issue is, maybe there are only a minority of people who get "10x" gains from AI; at the same time, "lesser" devs (like juniors?) have apparently been displaced by AI with some layoffs and hiring freezes.
Conclusion
I guess we are trying to reconcile the real gains and "100x experiences" people report with a seeming lack of tangible output. I don't think these things are necessarily at odds, for some of the reasons above. I imagine in 5 years we'll see more clearly whether there is a noticeable impact, and... not to be a doomer/pessimist, but we may also have some very negative experiences from AI development to account for, which seem to negate the gains.
Among the various ways this analysis is flawed,
two that are drawn from my own experience are:
- meaningful software still takes meaningful time to develop
- not all software is packaged for everyone
I've seen a lot of examples shared of software becoming narrow-cast, and/or ephemeral.
The fact that this doesn't show up in library production or even app store submissions is not that interesting.
I'm working on a large project that I could never have undertaken prior to contemporary assistance. I anticipate it will be months before I have something "shippable." But that's because it's a large project, not a one shot.
I was musing that this weekend: when do we see the first crop of serious and novel projects shipping, which could not have been done before (at least, by individual devs)... but which still took serious time.
Could be a while yet.
> So what?
As mentioned in a comment here:
> Maybe the top 15,000 PyPi packages isn't the best way to measure this?

> Apparently new iOS app submissions jumped by 24% last year
Looks like most LLM-generated code is used by amateurs/slop coders to generate end-user apps they hope to sell, and these users are not the type of people who contribute to the data/code commons. Hence there's no uptick in libs. So basically a measurement issue.
Stuck behind Apple's app review process.
My experience with AI-driven and AI-assisted development so far is that it has actually enhanced my workflow despite how much I dislike it.
With a caveat.
If you were to compare my workflow to a decade ago, you wouldn’t see much difference other than my natural skill growth.
The rub is that the tools, communities and services I learned to rely on over my career as a developer have been slowly getting worse and worse, and I have found that I can leverage AI tools to make up for where those resources now fall short.
My take is you are missing out on a barrage of "Shadow AI" and bespoke LoB and B2B software (By "Shadow AI" I mean the (unsanctioned) use of GenAI in Shadow IT, traditionally dominated by Excel and VBA).
All of the above are huge software markets outside of the typical Silicon Valley bubble.
I have a number of small apps and libraries I've prompted into existence and have never considered publishing. They work great for me, but are slop for someone else. All the cases I haven't used them for are likely incomplete or buggy or weird, the code quality is poor, and documentation is poor (worse than not existing in many cases.)
Plus you all have LLMs at home. I have my version that takes care of exactly my needs and you can have yours.
Intel AI denoiser in Blender.
Apparently everyone has evidence to the complete contrary
"THE APPLE APP STORE IS DROWNING IN AI SLOP" https://x.com/shiri_shh/status/2036307020396241228
I’ll ask another question. Why isn’t software getting better? Seems like software is buggier than ever. Can’t we just have an LLM running in a loop fixing bugs? Apparently not. Is this the future? Just getting drowned in garbage software faster and faster?
The thing to shill now is agents.
So they are all producing products to produce products. My guess is 50% of token usage globally is to produce mediocre articles on "how I use Claude code to tell HN how I use Claude code".
I am still waiting for them.
Cool data. What do I do with it? None of my use cases involve writing software, so I don't think this is _for_ me since my extensive AI use wouldn't show up in git commits, but I'm not sure who it's for. When I'm talking to artist friends, musician friends, academic friends, etc data is nice to have but I'm talking in stories: the real thing I did and how it made me better at the thing.
AI is unbelievably useful and will continue to make an impact but a few things:
- The 80/20 rule still applies. We've optimized (a lot!) the ~20% of the time that goes to actually writing code, but the hype acts as if that were the whole job. It looks amazing, and is, but you can't escape the reality that ~80% of the time on non-trivial projects still goes to everything else.
- Breathless AI CEO hype because they need money. This stuff costs a lot. This has passed on to run of the mill CEOs that want to feel ahead of things and smart.
- You should be shipping faster in many cases. Lots of hype but there is real value especially in automating lots of communication and organization tasks.
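The 80/20 point above is just Amdahl's law: if coding is only ~20% of a project and AI speeds up only that part, the overall gain is capped hard. A quick worked example (the 20% split is the commenter's estimate, not measured data):

```python
def overall_speedup(fraction: float, factor: float) -> float:
    """Amdahl's law: total speedup when `fraction` of the work
    gets `factor` times faster and the rest is unchanged."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# Speeding up the 20% coding slice 10x yields only ~1.22x overall,
# and even an infinite speedup of that slice caps out at 1.25x.
print(round(overall_speedup(0.20, 10), 2))            # 1.22
print(round(overall_speedup(0.20, float("inf")), 2))  # 1.25
```

That gap between a 10x tool and a 1.2x project is exactly what "the last step takes the majority of the effort" looks like in numbers.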
I feel that your assumption that everyone will want to share is a flawed one.
This article is very poorly researched and reasoned, but it's in the "AI hater" category so I guess it's no surprise it's on the front page.
Number of iOS apps has exploded since ChatGPT came out, according to Sensor Tower: https://i.imgur.com/TOlazzk.png
Furthermore, most productivity gains will be in private repos, either in a work setting or individuals' personal projects.
I agree with the premise of the article, in the sense that there has not been, and I don't think there will be, a 100x increase in "productivity".
However, PyPi is not really the best way to measure this as the amount of people who take time to wrap their code into a proper package, register into PyPi, push a package, etc... is quite low. Very narrow sampling window.
I do think AI will directly fuel the creation of a lot of personal apps that will never be published anywhere. AI lowers the barrier to entry, as we all know, so now regular folks with a bit of technical knowledge can just build the app they want, tailored to their needs. I think we'll see a lot of that.
I, for one, am not publishing my “apps” for others to use because my “apps” make me money
I am now scared to talk to anyone. Eventually the conversation turns to AI and they want to talk or show their vibecoded app.
I am just tired boss. I am not going to look at your app.
here: https://www.youtube.com/shorts/vGKC9LpGnOQ
Like others have mentioned, I think the premise of looking at the most popular few projects (pypi.org currently lists 771,120 projects) on pypi as any sort of proxy for AI coding is terribly misguided/unrepresentative and that almost no one is going to be packaging up their vibe-coded projects for distribution on pypi.
That being said, I've personally put 3 up recently (more than I've published in total). I'm sure they have close to zero downloads (why would they? they're brand new, solve my own problems, I'm not interested in marketing them or supporting them, they're just shared because they might be useful to others) so they wouldn't show up in their review. 2 of these are pretty meaty projects that would have taken weeks if not months of work but instead have been largely just built over a weekend or a few days. I'd say it's not just the speed, but that w/o the lowered effort, these projects just wouldn't ever have crossed the effort/need bar of ever being started.
I've probably coded 50-100X more AI-assisted code that will never go to pypi, even as someone that has released pypi packages before (which already puts me in a tiny minority of programmers, much less regular people that would even think about uploading a pypi project).
For those interested in the scope of the recent projects:
https://pypi.org/project/realitycheck/ - first pypi: Jan 21 - 57K SLoC - "weekend" project that kept growing. It's a framework that leverages agentic coding tools like Codex/Claude Code to do rigorous, systematic analysis of claims, sources, predictions, and argument chains. It has 400+ tests, and does basically everything I want it to do now. The repo has 20 stars and I'd estimate only a handful of people are using it.
https://pypi.org/project/tweetxvault/ - first pypi: Mar 16 - 29K SLoC - another weekend project (followup on a second weekend). This project is a tool for archiving your Twitter/X bookmarks, likes, and tweets into a local db, with support for importing from archives and letting you search through them. I actually found 3 or 4 other AI-coded projects that didn't do quite what I wanted, so I built my own. This repo has 4 stars, although a friend submitted a PR and mentioned it solved exactly their problem and saved them from having to build it themselves, so that was nice and justifies publishing for me.
https://pypi.org/project/batterylog/ - first pypi: Mar 22 - 857 SLoC - this project is actually something I wrote (and have been using daily) 3-4 years ago, but never bothered to properly package up - it tracks how much battery is drained by your laptop when asleep and it's basically the bare minimum script/installer to be useful. I never bothered to package it up b/c quite frankly, manual pypi releases are enough of a PITA to not bother, but LLMs now basically make it a matter of saying "cut a release," so when I wanted to add a new feature, I packaged it up as well, which I would never have done otherwise. This repo has 42 stars and a few forks, although probably 0 downloads from pypi.
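The core of a tool like that can be tiny: on Linux, sleep drain is just the battery-capacity delta between suspend and resume. A minimal sketch of the idea (the sysfs path and two-reading workflow are assumptions for illustration, not batterylog's actual implementation):

```python
from pathlib import Path

# Typical sysfs location on Linux laptops; BAT0 varies by hardware.
CAPACITY_FILE = Path("/sys/class/power_supply/BAT0/capacity")

def read_capacity(path: Path = CAPACITY_FILE) -> int:
    """Current battery charge as a percentage (0-100)."""
    return int(path.read_text().strip())

def sleep_drain(before_pct: int, after_pct: int) -> int:
    """Percentage points lost while suspended, clamped at zero in case
    the laptop was plugged in and actually charged during sleep."""
    return max(0, before_pct - after_pct)

# Workflow: log read_capacity() from a suspend hook, log it again on
# resume, then report sleep_drain(before, after) per sleep cycle.
```

The real work in such a tool is the packaging and the suspend/resume hooks, which is exactly the part the commenter says LLMs made cheap enough to bother with.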
(I've spent the past couple years heavily using AI-assisted workflows, and only in the past few months (post Opus 4.6, GPT-5.2) would I have even considered AI tools reliable enough to consider trusting them to push new packages to pypi.)
On my local computer used only by me because now I don't need a corporation to make them for me. In the past decades I'd make maybe one or two full blown applications for myself per 10 years. In the past year "I" (read: a corporate AI and I) have made dozens to scratch many itches I've had for a very long time.
It's a great change for a human person. I'm not pretending I'm making something other people would buy nor do I want to. That's the point.
This is so stupid. I don't know whether AI has improved things, but this is clearly cope; we're not even a year into the transition since agentic coding took over, so any data you gather now is not the full story.
But people are desperate for data right? Desperate to prove that AI hasn't done shit.
Maybe. But this much is true. If AI keeps improving and if the trendline keeps going, we're not going to need data to prove something equivalent to the ground existing.
This is such copium for AI haters. I've barely written a single line of code by hand since the beginning of this year, and I've shipped 3 production projects in a matter of days that would have taken months or years to build by hand.
Except none of them are open source so they don't show up in this article's metrics.
But it's fine. Keep your head in the sand. It doesn't change the once in a lifetime shift we are currently experiencing.
No one needs another SaaS. Games are the real killer app for AI. Hear me out.
I've wanted to make video games forever. It's fun, and scratches an itch that no other kind of programming does. But making a game is a mountain of work that is almost completely unassailable for an individual in their free time. The sheer volume of assets to be created stops anything from ever being more than a silly little demo. Now, with Gemini 3.1, I can build an asset pipeline that generates an entire game's worth of graphics in minutes, and actually be able to build a game. And the assets are good. With the right prompting and pipeline, Gemini can now easily generate extremely high quality 2d assets with consistent art direction and perfect prompt adherence. It's not about asking AI to make a game for you, it's about enabling an individual to finally be able to realize their vision without having to resort to generic premade asset libraries.
All apps you are using are made with AI.
Not all of us get addicted to the rat race and wake up at 3am to run more Ralph loops. Some are perfectly content getting the same amount of work done as before, just with less investment of time and effort.