The sigmoids won't save you

281 points | 259 comments | a day ago
stego-tech

I felt the better takeaway from this was that it's impossible to know with certainty how long this will or will not continue, regardless of the data or models you're using, because if you (or anyone else) could predict that accurately, you'd be one of the richest people on the planet.

I don't know when (or if) AI will implode or succeed with any degree of provable certainty, because that's not my area of expertise. What I can do is point out and discuss flaws in the common booster and doomer arguments, and identify problems neither side seems willing to discuss. That brings me cold comfort, but it's not enough to let me stake my money on one direction or the other with any degree of certainty - thus I limit my exposure to specific companies, and target indices or funds that will see uplift if things go well, or minimize losses if things go pear-shaped.

I also think relying on such mathematics to justify a position in the first place is kind of silly, especially for technical people. Mathematical models work until they don't, at which point entirely new models must be designed to capture our new knowledge. On the other hand, logical arguments are more readily adapted to new data, and represent critical, rather than mathematical, thinking and reasoning.

Saying AI is going boom/bust because of sigmoids or Lindy's Law or whathaveyou is not an argument; it's an excuse. The real argument is why those things may or may not emerge, and how we address their consequences, within areas inside and outside of AI, through regulation, innovation, or policy.

btilly

Lindy's Law is an absolute gem that I'm keeping.

If we don't understand the fundamental limits to any particular kind of trend, our default assumption should be that it will continue for about as long as it has gone on already.

We can, in fact, easily put a confidence interval on this. With 90% odds, we're not in the first 5% of the trend or the last 5%. Therefore it will probably go on for between 1/19th and 19 times as long as it already has, with a median of exactly as long as it has gone on so far.

This is deeply counterintuitive. When we expect something to last a finite time, every year it goes on brings us a year closer to when it stops. But every year that it goes on also brings the expectation that it will go on for a year longer still.

We're looking at a trend we believe to be finite. Our intuition says that every year spent is a year closer to the end. But our expectation becomes that every year spent means it will last yet another year more!

How can we apply that? A simple way is stocks: how long should we expect a rapidly growing company to continue growing rapidly?
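
Here's a minimal sketch of the arithmetic behind that interval in Python (the 5%/95% cutoffs are the only inputs; nothing here is specific to any particular trend):

    # Lindy-style interval: given how long a trend has already run, bound
    # how much longer it should run, assuming we're uniformly positioned
    # within its (unknown) total lifetime.
    def lindy_interval(elapsed):
        low = elapsed * 0.05 / 0.95   # if we're already 95% of the way through: 1/19 of elapsed left
        median = elapsed              # if we're exactly halfway through: as long again
        high = elapsed * 0.95 / 0.05  # if we're only 5% of the way through: 19x elapsed left
        return low, median, high

    # e.g. a trend that has already run for 10 years:
    print(lindy_interval(10))  # (~0.53, 10, 190) more years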

dreambuffer

FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being Lindy rather than sigmoid.

I don't think you can use Lindy on trends as if trends were static objects, but that's another conversation.

andy99

AI has scaled well according to convenient measures. Neural networks have the property that whatever task you define, they can rapidly be trained to master it. We're able to show that various tasks of increasing complication do not require intelligence and can be framed as autoregressive RL problems. I personally don't think AI is any closer to sentient intelligence than LeNet; that seems almost trivially clear, since we know how it works. So we're measuring something orthogonal, basically how well a universal function approximator can fit a function we define, given arbitrary computing power, and calling that progress. What will be really interesting is if we can find a way to properly measure what they can't do and what's different about real intelligence.

Edit: in particular I don’t agree with

  But if someone claims that the trend toward increasing AI capabilities will never reach some particular scary level... 
One has to agree that the benchmark results are getting "scarier", which is not automatically implied by finding more goals to optimize for.

stymaar

I don't know when the sigmoid is going to kick in, but Nvidia's quarterly datacenter revenue has grown 15-fold over the past 3 years [1], and nobody, including Scott, believes this is sustainable for 3 more years; otherwise Nvidia's market cap would conservatively be at least an order of magnitude higher than it is.

Every exponential eventually becomes a sigmoid, because exponential growth always exposes limiting factors that weren't limiting at the beginning. Silicon manufacturing had lots of room for high-margin customers like Nvidia even a year ago (by the mere virtue of outbidding lower-margin customers), but now that room is mostly gone, and no amount of money will make fabs build themselves overnight.

[1]: https://stockanalysis.com/stocks/nvda/metrics/revenue-by-seg...

dwa3592

I am not disagreeing with the conclusion of the article, but the way they reached it is problematic (and wrong). Here is my nuanced explanation: no attention is paid to the origin (0,0) in any of the plots in the article. To see a meaningful trend you have to be able to zoom out and zoom in, which means your origin is never a fixed point. This also means that plots with fixed-point origins don't tell us anything important at a societal level.

For example, the flight airspeed plot starts at ~1900. Of course it should start there, because we did not have planes before that. But change the plot heading from airspeed to human speed, i.e. at what speed humans could move; now you can change the origin meaningfully, since we had chariots, horses, bicycles, and ships before that.

If you instead create a plot for the last 5,000 years, you would see that the speed at which humans are able to move has risen exponentially, from walking on foot in a radius of 1,000-2,000 feet in thick jungle 5,000 years ago to reentering Earth's atmosphere at 25,000 miles per hour in 1969 (yeah, read that again). Even for AI: if you zoom the plot out to the last 70 years it will look exponential; if you zoom in to the last 2 months it will look absolutely flat. The point is that the whole sigmoid/exponential argument is a function of the origin (0,0).

LarsDu88

I think an interesting thing about recent AI developments is that it's all happening right as we hit the diminishing-returns side of another "exponential that's actually a sigmoid": Moore's Law.

The naive expectation is that AI will slow down b/c Moore's law is coming to an end, but if you really think about the models and how they are currently implemented in silicon, they are still inefficient as hell.

At some point someone will build a tensor processing chip that replaces all the digital matmuls with analogue logamp matmuls, or some breakthrough in memristors will start breaking down the barrier between memory and compute.

With the right level of research funding in hardware, the ceiling for AI can be very high.

noosphr

This article answers the question in the second paragraph then completely ignores the answer for the rest of it.

>My understanding is that this represents 3-4 “generations” of different technology (propellers, turbojets, etc). Each technology went through normal iterative improvement, then, when it reached its fundamental limits, got replaced by a better technology. The last technology, ramjets, reached its limit at about 3500 km/h, and there wasn’t the economic/regulatory will to develop anything better, so the record stands.

You don't have one sigmoid; you have multiple, each stacked on top of the other. Airplanes aren't just one technology; they are multiple technologies that happen to do the same thing.

Each one is following a sigmoid perfectly. It only looks exponential(ish) because of unpredictable discoveries that let you switch to another sigmoid that has a higher maximum potential.

The same is true in AI. If you used the same architecture as GPT-2 today, you'd be in for a bad time training a new frontier model. It's only because we've had dozens of breakthroughs that the capabilities of models have improved as much as they have.

That said, exponentials and sigmoids are the wrong models to use for growth. Growth is a differential equation: it has independent inputs, it has outputs, and some of those outputs become inputs again through causal chains of arbitrary complexity. What happens depends entirely on the specific DE that governs the given technology. We can easily have a chaotic system with completely random booms and busts that have no deep fundamental rhyme or reason. We currently call that the economy.
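
To make that concrete with the simplest possible DE, here is a toy logistic integrator in Python (r, K, and the step size are arbitrary choices, not a claim about any real technology): while x is far below the capacity K, the output is indistinguishable from a pure exponential, and the limiting factor only becomes visible near the end.

    # Logistic growth dx/dt = r*x*(1 - x/K), integrated with forward Euler.
    # Early on (x << K) this is effectively dx/dt = r*x, i.e. pure exponential;
    # the capacity K is invisible in the early data.
    def logistic_euler(x0=1.0, r=0.5, K=1000.0, dt=0.01, steps=4000):
        xs, x = [], x0
        for _ in range(steps):
            x += dt * r * x * (1 - x / K)
            xs.append(x)
        return xs

    xs = logistic_euler()
    print(xs[99], xs[999], xs[-1])  # ~exponential at t=1, bending at t=10, flat near K by t=40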

gm678

I don't know what the Y-axis is supposed to be on that Wharton AI capabilities graph, but I am not really convinced that Opus 4.6 has more than double the intelligence/capability/whatever of GPT 5.1 Max.

jasongi

The first sigmoid was transformers, which allowed us to rapidly scale into our already abundant data until we tapped it out; the second is/was reasoning, allowing us to scale into our available compute (and compute manufacturing capacity). Correct me if I'm wrong, but we don't have a candidate for the third sigmoid, and scaling inference is hitting real-world supply chain constraints: electricity and chips.

Short of a third sigmoid appearing in the ML CompSci space, perhaps in the form of ongoing, repeated step-optimisations which will also have diminishing returns, intelligence growth is now limited by a few scaling problems that have already been worked on for a very long time.

Take transistors, which have been doubling for decades now: Moore's Law has already plateaued and hit limits on energy efficiency, and simply building new fabs is not something we can do exponentially. The other growth limiter is electricity: there is no exponential supply of fossil fuels or power plants. Although manufacturing has scaled, PV tech improvements are also plateauing; and while storage is getting cheaper, it's still not economical vs. fossil fuels (meaning: when we have to switch to it, growth slows down further), and we are unlikely to see battery efficiency sigmoid hard enough to maintain the AI sigmoid.

I don't mean to be bearish here. There's so much money sloshing around that we can afford to put the smartest people, using unlimited tokens, on the task of finding small, incremental gains on the CompSci side of things that will have large monetary payoffs - hopefully allowing further scaling and increased emergent abilities of LLMs. Maybe we can squeeze the algos for quite a while. But I don't see that maintaining the same kind of exponential as unlocking abundant data or maxing out the world's energy/fab capacity did, for long.

And I don't see why this is a massive issue except for the people who want to have some god-like super AI? Frontier LLMs are genuinely magic. Not "won't delete your production database" magic, but definitely a massive productivity gain for competent knowledge workers.

whatshisface

If you want a model, here's one: LLMs have never demonstrated the ability to go obviously beyond interpolating their training data. It takes an army of paid data producers solving homework problems to give ChatGPT the ability to do your homework. All the vibecoded apps that turned out to be successful could be placed on a geological soil chart, with the other apps (probably from GitHub somewhere) at the corners. The prediction? They won't.

In this model, the exponential growth that everybody is freaking out about is only the realization of the modular software dream ("we'll only have to write an ORM once for all of human history!") and the sheer amount of knowledge in libraries.

It's at least falsifiable.

Brendinooo

> then what is their model?

My mental model has been 3D computer graphics: doubling the polygon count had huge returns early on but delivered diminishing returns over time.

Ultimately, you can't make something look more realistic than real.

I don't know what the future holds, but the answer to the question "can LLMs be more realistic than real" will determine much about whether or not you think the curve will level off soon.

Staross

You should really do a Bayesian fit for such predictions and give confidence intervals; it would probably show that the uncertainty is very high in these cases.
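
Even a plain least-squares stand-in for that Bayesian fit makes the point. A sketch with invented data (using scipy's curve_fit and its reported parameter covariance rather than a full posterior; the true parameters and noise level below are arbitrary): fit a logistic to data from its early, exponential-looking phase, and the uncertainty on the ceiling K comes back enormous, or even infinite.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1 + np.exp(-r * (t - t0)))

    # Synthetic "early phase" data: the true curve has ceiling K=100 and
    # midpoint t0=10, but we only observe t in [0, 3], where it still looks
    # exponential. 5% multiplicative noise, fixed seed for reproducibility.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 3, 30)
    y = logistic(t, 100.0, 1.0, 10.0) * (1 + 0.05 * rng.standard_normal(t.size))

    popt, pcov = curve_fit(logistic, t, y, p0=[200.0, 1.0, 11.0], maxfev=20000)
    err = np.sqrt(np.diag(pcov))
    for name, p, e in zip(["K", "r", "t0"], popt, err):
        print(f"{name} = {p:.2f} +/- {e:.2f}")  # the ceiling K is wildly unconstrained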

OscarCunningham

John D Cook gives more technical details here: "Trying to fit a logistic curve" https://www.johndcook.com/blog/2025/12/20/fit-logistic-curve...

niemandhier

If you look at problems that can be solved by reasoning in text form or maybe even images, I am more than willing to accept that we simply cannot know when the curve will level.

The situation is drastically different for problems that require interaction with the physical world to determine success.

As soon as you add a powerful simulator for physical problems to the AI's self-learning loop, you are severely hampered by the large amount of computation needed.

philipallstar

But they do explain the improvement of AI driving in 2017-2021 vs. 2022-2026.

janalsncm

> What if you don’t fully understand the process? AI forecasters know some things (like how data centers work and how much it costs to build them). But they’re unsure about other things (researchers keep inventing new paradigms of data generation that get over data walls, but for how long?), and other things are entirely opaque (What is intelligence really? Why do scaling laws work? Might they just stop working at some point?) Is there anything you can do here?

This is the crux of the article. To a large extent continued progress depends on a stable increase in compute, an increase in training data, and an increase in good ideas to squeeze more out of both of them.

One calculation you could do is a survival function: for each of the above, how long before it is disrupted? For example, China could crack down on AI or invade Taiwan. Or data centers become politically unpopular in the US. Or, we could run out of great ideas. Very hard to predict.
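
A back-of-envelope sketch of that survival calculation (every hazard rate below is invented purely for illustration): if each input must independently survive each year, the per-year survival probabilities multiply, so the combined trend is more fragile than any single factor.

    # Toy survival function: each input survives a given year with
    # probability (1 - hazard); the trend survives the year only if
    # every input does. All rates are made up.
    hazards = {"compute": 0.05, "data": 0.10, "ideas": 0.08}

    p_year = 1.0
    for h in hazards.values():
        p_year *= (1 - h)  # all factors must survive together

    for year in range(1, 11):
        print(year, round(p_year ** year, 3))  # P(trend still intact after `year` years)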

zkmon

The curve is a smoothed step curve (y = 1 if x > 1, otherwise 0). Nature doesn't allow any change to happen instantly, at any order of rate-of-change. The curve is just a manifestation of a change, with exponential smoothing of the sharp corners.

For example, when a car starts, its speed and acceleration become greater than zero. But what about the rates of change at higher orders? The car doesn't suddenly jump from zero acceleration to non-zero. That means the car has non-zero derivatives at all orders. In other words, the movement is exponential. The same thing happens in reverse when the car reaches a constant speed.
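
For what it's worth, a logistic sigmoid makes that "smoothed step" concrete (step centered at zero for simplicity, with steepness values chosen arbitrarily): as k grows it approaches the hard step, while remaining smooth with finite derivatives of every order.

    import math

    # Logistic sigmoid: a step function with its corner smoothed; larger k
    # means a sharper, more step-like transition, yet the function stays
    # infinitely differentiable everywhere.
    def smooth_step(x, k):
        return 1 / (1 + math.exp(-k * x))

    for k in (1, 10, 100):
        print(k, [round(smooth_step(x, k), 3) for x in (-0.5, -0.1, 0.0, 0.1, 0.5)])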

jsmcgd

> It’s true that birth rates must eventually flatten out and become sigmoid

All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive. No gentle curve, but a hard kink and perfect flat line at zero. Forever. I think it would be a stretch to categorize that pattern as sigmoid. Predicting a sigmoid pattern for negative growth implies some sort of a soft landing (depending on your definition of soft).

We can think of many populations that are no longer with us. So just a caution about over applying this reasoning in the negative case.

pron

1. Scott Alexander is famous for writing about topics he knows little about. I'm glad to see he's found a subject that he knows little about, but so does everyone else.

2. What's even worse than predicting that some growth curve flattens before X happens is predicting it will flatten before X happens but after Y happens, which is what we see when it comes to AI in software development. Too many people predict that AI will be able to effectively write most software, replacing software engineers, yet not be able to replace the people who originate the ideas for the software or the people who use them. I see no reason why AI capability growth should stop after the point it's able to write air-traffic control or medical diagnosis software yet before the point where it's able to replace air traffic controllers and doctors.

3. While we don't know much about AI (or, indeed, intelligence in general), we do know something about computational complexity. Some predictions about "scary things" happening (the ones I'm guessing Alexander is alluding to, though I can't be certain) do hit known computational complexity limits. Most systems affecting people are nonlinear (from weather to the economy). Predicting them requires not intelligence but computational resources. Controlling them, similarly, requires not intelligence but either computational resources or other resources. It's possible that people choose to give control over resources to computers (although probably not enough to answer many tough, important questions), although given how some countries choose to give control to people with below-average intelligence (looking at you, America), I don't see why super-human intelligence (if such a thing even exists) would be, in itself, exceptionally risky.

amelius

The exponentials won't save you either. In fact it's more likely that a sigmoid takes over at some point.

andai

Well, curve shape aside, the high watermark might be lower than where it tapers off.

https://news.ycombinator.com/item?id=46199723

AvAn12

Forecasts are a thought exercise, not the revelation of something foretold. Best thing to do is think of the outcome you wish for and then try to take whatever actions you can to help make it so. Like with climate change for example.

plomme

I'm not saying he's wrong about the core thesis here, but using Claude Opus 4.6 as a "mic drop", with a chart showing it being twice as good as the last model, feels, in my experience, way off.

leoc

Hmm. What’s the general belief about Toby Ord’s “Are the Costs of AI Agents Also Rising Exponentially?” https://www.tobyord.com/writing/hourly-costs-for-ai-agents among those who are well-equipped to judge? Is it seen as wrong or disproven or unlikely? Because if not—if indeed recent LLM capability advances have likely relied on increases in inference cost per run which can’t be much further sustained—then it seems remiss not to mention that if you point to those advances to claim that the exponential trend remains on track.

dsign

We did hit the sigmoid's plateau on airplane speed, but the applications of airplane speed are still coming (how fast can a Chinese company air-ship the PCB you ordered three minutes ago?). I expect the same will happen with LLMs, though I also happen to believe things are just getting started on end capabilities.

mentalgear

> Why do scaling laws work?

Strictly speaking, the original paradigm of scaling laws doesn't work any more. The assumption that we could achieve better performance simply through "vertical scaling", i.e. infusing models with exponentially more parameters and pre-training data, is no longer the driving force of AI progress.

Instead, the industry has pivoted toward inference-time scaling. Rather than relying solely on a massive, static neural network, modern architectures allocate more compute during the actual generation process, allowing the model to "think" and verify its logic dynamically.

Furthermore, the latest state-of-the-art models are no longer pure LLMs; they are compound neuro-symbolic systems that integrate external tools like REPLs, databases, and structured skill documentation to achieve things pure LLM vertical parameter scaling was not able to do.

izucken

Births are, barring miscounts, a number that tracks exact events.

Is the "capability" number on these LLM strengh graphs as tangible?

I think it would be interesting to visit a reality that obeys arbitrary abstractions, but I would personally never go there.

mordae

I am 100% certain it's a sigmoid. Speed of light is finite. :-)

baxtr

> The moral of the story is that, even though all exponentials eventually become sigmoids, this doesn’t necessarily happen at the exact moment you’re doing your analysis. Sometimes they stay exponential for much longer than that!

All exponentials eventually become sigmoids? I don't think this can be true without qualifiers.

Permit

> But if someone claims that the trend toward increasing AI capabilities will never reach some particular scary level, then the burden is on them to explain either:…

This is not the context in which I hear about sigmoids vs. exponentials. I hear it in regard to "the singularity", not that AI won't reach some pre-specified level. You may get AGI; you aren't getting a singularity.

tim-projects

Does hype follow a sigmoid curve?

krupan

News flash: predicting the future is hard

mgraczyk

Don't bet on the sigmoid flattening out any time soon. I don't know when it will flatten, but it definitely won't be within the next 2 years

patrickmay

Stein's Law: "If something cannot go on forever, it will stop."

graphememes

line can go up, line can go sideways, line can go up sideways, line can go up sideways up, line go where line go

kubb

If the scary AI is so inevitable, why do you feel such an overwhelming need to convince people about that? Surely you can just wait a bit, and they'll see for themselves.

overgard

This feels like a really verbose way of saying "things have been growing fast for a while, so they should continue to grow as fast for just as long", and then he places the burden on people to prove him wrong. Um, no, the burden of proof is shared: "this will just keep going" requires just as much proof as "this is going to level off" if you're just looking at trend lines.

It's better to look at the underlying factors. Money sources are drying up; nobody is making a profit outside of Nvidia; most Blackwell GPUs are likely not even installed yet, and will probably be two generations behind by the time they're finally in use; data centers are hitting all sorts of obstacles getting built and powered, and they're getting built slowly; most AI researchers seem to think LLMs are a dead end; the newer models seem to be getting more expensive and sometimes worse, or are even potentially showing signs of model collapse (goblins..); the supposed productivity gains are not materializing; AI has worse public sentiment than Congress... I could keep going. Some obscure "law" pales in comparison to the hard evidence that the status quo is utterly unsustainable and that none of these companies seem to have a realistic plan other than, essentially, trying to become too big to fail.

I like some of this guy's writing on other topics, but to me this is a prime example of what happens when you get public "intellectuals" talking about subjects far outside their area of expertise. It's not as bad as Richard Dawkins' latest fall into psychosis, but it's basically the same phenomenon.

nathan_compton

A lot of words to say "The initial part of a sigmoidal curve is not very informative about the parameters of the sigmoid function in question."

pyrale

Such a long article to say that neither side has a fucking idea about what will happen next.

While we're at it, the "exponentials are actually sigmoids" meme is not necessarily true. While real-world exponentials are never true exponentials, sigmoids are not guaranteed either. Overshoot-and-collapse also happens in tech, e.g. the dotcom bubble, or the successive AI winters.

inglor_cz

Hmmm, this is quite an interesting take by Scott.

Lindy's Law is not actually a law, and many exacting minds will be provoked by the very name; it also fails spectacularly in certain contexts (e.g. the lifetime of a single organism, though not necessarily the existence of entire species).

But at the same time, I am willing to take its invocation in the context of AI somewhat seriously. There is an international arms race with China, which has less compute, but more engineers and scientists. This sort of intellectual arms race does not exhaust itself easily.

A similar space race in the 1950s and 1960s progressed from the first unmanned spaceflight to a moonwalk in a mere 12 years, which is probably less than what it takes to approve a bicycle lane in Chicago now.

FrustratedMonky

Black Swan Events probably look like nothing just before they happen.

ngruhn

> all exponentials eventually become sigmoids

Except innovation. When one sigmoid tapers off we keep finding new ones to keep the climb going.

itkovian_

The other thing people don't understand is that exponential curves are self-similar. The start of an exponential looks like an exponential. People always look at one and think, "well, that's it, it's exponential now, I've missed it, it can't sustain." Nope.

A good example of this is the number of submissions to NeurIPS/ICML/ICLR. In 2017 that curve was exponential.
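
That self-similarity is easy to verify numerically (the window positions below are arbitrary): since exp(t + c) = exp(c) * exp(t), any window of an exponential, renormalized to its starting value, reproduces exactly the same numbers.

    import math

    # Take a window of exp(t) and normalize it by its first value. Because
    # exp(t + c) = exp(c) * exp(t), every window collapses onto the same curve.
    def window(start, length=5.0, n=6):
        xs = [start + i * length / (n - 1) for i in range(n)]
        ys = [math.exp(x) for x in xs]
        return [y / ys[0] for y in ys]

    print(window(0))   # "early" part of the exponential
    print(window(20))  # "late" part: the same numbers (up to float rounding)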

lovich

I wonder what the graph would look like if cost and/or profitability were taken into account.

I could probably make increasingly larger fires for years if I was willing to burn the entire world.

jrflowers

I like this article about how we should assume, at any given point, that we are exactly halfway through a phenomenon, which relies on a single data point on a graph (one that apparently doesn't need its relevance or importance explained) to illustrate that this is obviously true for AI in particular.

devmor

"Exponentials all tend to become sigmoids but you can't predict exactly when" is a true statement, but I'm not sure it needed an article.

This doesn't say much, and the author fights their own points a couple times, suggesting that they maybe didn't think through what they wanted to write until they were in the middle of writing it and started realizing their assumptions didn't match what they expected the data to say.

I really don't get the point of what I just read.

BoredPositron

If you use a log scale, you'll see that the time horizon of Opus 4.6 was as expected...

dnnddidiej

TL;DR: you can't accurately predict the future of complex systems. Corollary: you can't accurately predict the future of complex systems using sigmoids.

"Attention Is All You Need" took us by surprise, and we don't know how big the wave is, let alone whether there are other waves behind it.
