This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
And, yep! A lot of people absolutely believe it will and are acting accordingly.
It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
atomic128
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
You won't read, except the output of your LLM.
You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
gojomo
"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."
– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965
https://www.baen.com/Chapters/9781618249203/9781618249203___...
> The pole at t_s8 isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.
Damn, good read.
ericmcer
Great article, super fun.
> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc. Ostensibly a network connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.
PaulHoule
The simple model of an "intelligence explosion" is the obscure equation

    dx/dt = x^2

which has the solution

    x = 1/(C - t)

and is interesting in relation to the classic exponential growth equation

    dx/dt = x

in which the rate of growth is proportional to x. The quadratic version represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as x -> C, but for x << C it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.
Like the exponential growth equation, it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it isn't qualitatively that different from the logistic growth equation

    dx/dt = (1 - x) x

thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
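A minimal Python sketch of the three growth laws above, for illustration; the starting value x0 = 0.5, the step size, and the blow-up cap are arbitrary assumptions:

    # Euler integration of the three growth laws discussed above:
    #   hyperbolic:  dx/dt = x^2     -> pole at finite time t = 1/x0
    #   exponential: dx/dt = x       -> diverges only as t -> infinity
    #   logistic:    dx/dt = (1-x)x  -> saturates at the Petri-dish limit 1
    # x0, dt, t_max, and cap are arbitrary illustrative values.

    def euler(f, x0=0.5, dt=1e-4, t_max=10.0, cap=1e9):
        x, t = x0, 0.0
        while t < t_max:
            if x >= cap:          # numerical stand-in for "hit the pole"
                return t, x
            x += f(x) * dt
            t += dt
        return t, x

    for name, f in [("hyperbolic", lambda x: x * x),
                    ("exponential", lambda x: x),
                    ("logistic", lambda x: (1.0 - x) * x)]:
        t, x = euler(f)
        print(f"{name:11s} stopped at t = {t:5.2f}, x = {x:.3g}")

    # hyperbolic blows up near t = 1/x0 = 2.0 (the C in x = 1/(C - t));
    # exponential is still finite at t_max; logistic settles near 1.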
s1mon
Many have predicted the singularity, and I found this to be a useful take. I do note that Hans Moravec predicted in 1988's "Mind Children" that "computers suitable for humanlike robots will appear in the 2020s", which is not completely wrong.
He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.
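For scale, a back-of-the-envelope comparison; the accelerator figure of ~10^15 low-precision operations per second is an order-of-magnitude assumption taken from public spec sheets, not a measured value:

    # Moravec's estimate: the brain does ~1e14 to 1e15 operations/second.
    # Assume a modern datacenter GPU delivers on the order of 1e15
    # low-precision FLOP/s (order-of-magnitude assumption).
    brain_low, brain_high = 1e14, 1e15
    gpu = 1e15

    print(f"one GPU is roughly {gpu / brain_high:.0f}x to "
          f"{gpu / brain_low:.0f}x the brain estimate")
    # -> roughly 1x to 10x: the hardware threshold he placed in the
    #    early-to-mid 21st century is at least in the right ballpark.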
javier_e06
I had to ask duck.ai to summarize the article in plain English.
It said that the article claims it is not necessarily that AI is getting smarter, but that people might be getting too stupid to understand what they are getting into.
Can confirm.
nphardon
Iirc in the Matrix Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s".
I always loved that little line. I think that when the singularity occurs all of the problems in physics will be solved, like in a vacuum, and physics will advance centuries if not millennia in a few picoseconds, and of course time will stop.
Also:
> As t→t_s−, the denominator goes to zero. x(t)→∞. Not a bug. The feature.
Classic LLM lingo in the end there.
Taniwha
I was at an alternative-type computer unconference where someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell, in a room full of geeks, no one could figure out how to turn on the lights... we concluded that the singularity probably wasn't going to happen.
rektomatic
If I have to read one more "It isn't this. It's this," my head will explode. That phrase is the real singularity.
jgrahamc
Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.
blahbob
It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."
medbar
> The labor market isn't adjusting. It's snapping.
I’m going to lose it the day this becomes vernacular.
sdwr
> arXiv "emergent" (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line
The only metric going infinite is the one that measures hype
Nition
I'm not sure about current LLM techniques leading us there.
Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.
As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.
LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.
Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.
Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.
Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?
Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves that they can make new novel discoveries?
[1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...
iirc almost all industries follow S-shaped curves: exponential at first, then asymptotic at the end... So just because we're on the ramp-up of the curve doesn't mean we'll continue accelerating, let alone maintain the current slope. Scientific breakthroughs often require an entirely new paradigm to break the asymptote, and often the breakthrough cannot be attained by incumbents who are entrenched in their way of working and have a hard time unseeing what they already know.
kpil
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance.
I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
root_axis
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
chasd00
I wonder if using LLMs for coding can trigger AI psychosis the way it can when using an LLM as a substitute for a relationship. I bet many people here have pretty strong feelings about code. It would explain some of the truly bizarre behaviors that pop up from time to time in articles and comments here.
zh3
Fortuitously before the Unix date rollover in 2038. Nice.
pixl97
> That's a very different singularity than the one people argue about.
---
I wouldn't say it's that much different. This has always been a key point of the singularity:
> Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
dakolli
Are people in San Francisco that stupid that they're having open-clawd meetups and talking about the Singularity non stop? Has San Francisco become just a cliche larp?
marifjeren
> I [...] fit a hyperbolic model to each one independently
^ That's your problem right there.
Assuming a hyperbolic model would definitely result in some exuberant predictions but that's no reason to think it's correct.
The blog post contains no justification for that model (besides well it's a "function that hits infinity"). I can model the growth of my bank account the same way but that doesn't make it so. Unfortunately.
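To make the objection concrete, a small sketch with synthetic data (every number below is invented): fit the pole model x(t) = A/(ts − t) to data that is merely exponential, and the fit still hands you a finite "singularity date":

    # Fit a hyperbolic pole model to data that contains no pole.
    # The data is synthetic exponential growth; all values are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 10.0, 40)
    rng = np.random.default_rng(0)
    y = np.exp(0.2 * t) + rng.normal(0.0, 0.05, t.size)  # no finite-time pole

    def hyperbolic(t, A, ts):
        return A / (ts - t)

    (A, ts), _ = curve_fit(hyperbolic, t, y, p0=(10.0, 20.0), maxfev=20000)
    print(f"fitted pole at ts = {ts:.1f}")  # a finite date appears regardless

The model family chooses the conclusion: any monotonically increasing, convex series will yield some finite ts under this form.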
mygn-l
Why is finiteness emphasized for polynomial growth, while infinity is emphasized for exponential growth??? I don't think your AI-generated content is reliable, to say the least.
mbgerring
I have lived in San Francisco for more than a decade. I have an active social life and a lot of friends. Literally no one I have ever talked to at any party or event has ever talked about the Singularity except as a joke.
It was GMT, wasn't it?
https://www.youtube.com/watch?v=9aVO7GAwxnQ
"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.
I feel like I need to start more sprint stand-ups with this quote...
pocksuppet
Was this ironically written by AI?
> The labor market isn't adjusting. It's snapping.
> MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.
zackmorris
Just wanted to leave a note here that the Singularity is inevitable on this timeline (we've already passed the event horizon) so the only thing that can stop it now is to jump timelines.
In other words, there may be a geopolitical crisis in the works, similar to how the Dot Bomb, Bush v. Gore, 9/11, etc popped the Internet Bubble and shifted investment funds towards endless war, McMansions and SUVs to appease the illuminati. Someone might sabotage the birth of AGI like the religious zealot in Contact. Global climate change might drain public and private coffers as coastal areas become uninhabitable, coinciding with the death of the last coral reefs and collapse of fisheries, leading to a mass exodus and WWIII. We just don't know.
My feeling is that the future plays out differently than any prediction, so something will happen that negates the concept of the Singularity. Maybe we'll merge with AGI and time will no longer exist (oops that's the definition). Maybe we'll meet aliens (same thing). Or maybe the k-shaped economy will lead to most people surviving as rebels while empire metastasizes, so we take droids for granted but live a subsistence feudal lifestyle. That anticlimactic conclusion is probably the safest bet, given what we know of history and trying to extrapolate from this point along the journey.
rcarmo
"I could never get the hang of Tuesdays"
- Arthur Dent, H2G2
baalimago
Well... I can't argue with facts. Especially not when they're in graph form.
overfeed
> If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.
I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
woopsn
Good post. I guess the transistor has been in play for not even one century, and in any case singularities are everywhere, so who cares? The topic is grandiose and fun to speculate about, but many of the real issues relate to banal media culture and demographic health.
qoez
Great read but damn those are some questionable curve fittings on some very scattered data points
wayfwdmachine
Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).
The (social) Singularity is already happening in the form of a mass delusion that - especially in the Abrahamic apocalyptic cultures - creates a fertile breeding ground for all sorts of insanity.
Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX at companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice - what would be the total cost of doing that? If your answer is less than $10 - bingo!).
We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
TooKool4This
I don’t feel like reading what is probably AI-generated content. But based on looking at the model fits - hyperbolic models extrapolating from the knee portion, a line fitted to 2 data points, an exponential curve fitted to a set of data measured in %, poor model fit in general, etc. - I'm going to say this is not a very good prediction methodology.
Sure is a lot of words though :)
https://www.economist.com/cdn-cgi/image/width=1096,quality=8...
The meme at the top is absolute gold considering the point of the article. 10/10
0xbadcafebee
> The Singularity: a hypothetical future point when artificial intelligence (AI) surpasses human intelligence, triggering runaway, self-improving, and uncontrollable technological growth
The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.
1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.
2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.
3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.
jcims
Is there a term for the tech spaghettification that happens when people closer to the origin of these advances (likely in terms of access/adoption) start to break away from the culture at large because they are living in a qualitatively different world than the unwashed masses? Where the little sparkles of insanity we can observe from a distance today are less induced psychosis and actually represent their lived reality?
b00ty4breakfast
The Singularity as a cultural phenomenon (rather than some future event that may or may not happen or even be possible) is proof that Weber didn't know what he was talking about. Modern (and post-modern) society isn't disenchanted, the window dressing has just changed
jama211
A fantastic read, even if it makes a lot of silly assumptions - that's OK, because it's self-aware about them.
Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.
Crazy times we live in.
sixtyj
The Roman Empire took 400 years to collapse, but in San Francisco they know the singularity will occur on (next) Tuesday.
The answer to the meaning of life is 42, by the way :)
ragchronos
This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years' time.
lencastre
I hope in the afternoon, the plumber is coming in the morning between 7 and 12, and it’s really difficult to pin those guys to a date
dirkc
The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.
*edit* - seems in line with what the author is saying :)
> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
arscan
Don't worry about the future
Or worry, but know that worrying
Is as effective as trying to solve an algebra equation by chewing bubblegum
The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday
- Everybody's free (to wear sunscreen)
Baz Luhrmann
(or maybe Mary Schmich)
Johnny_Bonk
Wow what a fun read
jmugan
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
buildbot
What about the rate of articles about the singularity as a metric of the singularity?
Bratmon
I've never been Poe's lawed harder in my life.
miguel_martin
"Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
athrowaway3z
> Tuesday, July 18, 2034
4 years early for the Y2K38 bug.
Is it coincidence, or has Roko's Basilisk intervened to start the curve early?
jrmg
This is gold.
Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.
svilen_dobrev
> already exerting gravitational force on everything it touches.
So, "Falling of the night" ?
hinkley
Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.
loumf
This is great. Now we won't have to fix Y2K38 bugs.
sempron64
A hyperbolic curve has no underlying meaning as a model of a process; it's just a curve that goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense as a model of a compounding or self-improving process.
kuahyeow
This is a delightful reverse turkey graph (each day before Thanksgiving, the turkey has increasing confidence).
regnull
Guys, yesterday I spent some time convincing an LLM from a leading provider that 2 cards plus 2 cards is 4 cards, which is one short of a flush. I think we are not too close to a singularity, as it stands.
Wait is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)
That would be 8 years after math + humor peaked in an article about singularity
braden-lk
lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
moffkalast
> I am aware this is unhinged. We're doing it anyway.
If one is looking for a quote that describes today's tech industry perfectly, that would be it.
Also using the MMLU as a metric in 2026 is truly unhinged.
bwhiting2356
We need contingency plans. Most waves of automation have come in S-curves, where they eventually hit diminishing returns. This time might be different, and we should be prepared for it to happen. But we should also be prepared for it not to happen.
No one has figured out a way to run a society where able-bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate, in education, healthcare, arts (should not) or trades, R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post-scarcity world when we don't yet.
dusted
Will... will it be televised?
jonplackett
This assumes humanity can make it to 2034 without destroying itself some other way…
ddtaylor
Just in time for Bitcoin halving to go below 1 BTC
banannaise
Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
cesarvarela
Thanks, added to calendar.
qwertyuiop_
Who will purchase the goods and services if most people lose their jobs? And who will pay the ad dollars that are supposed to sustain these AI business models if there are no human consumers?
wbshaw
I got a strong ChatGPT vibe from that article.
skulk
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.
Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
raphar
Why do the plutocrats believe that the entity emerging from the singularity will side with them? Really curious.
markgall
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."
> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.
Huh? I don't get it. e^t would also still be finite at heat death.
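For anyone stuck on the same sentence: the contrast the article is drawing is between divergence at a finite time and divergence only in the limit. A minimal derivation (standard calculus, not from the article):

    \frac{dx}{dt} = x \;\Rightarrow\; x(t) = x_0 e^{t}
      \quad \text{(finite for every finite } t \text{; diverges only as } t \to \infty\text{)}

    \frac{dx}{dt} = x^{2} \;\Rightarrow\; x(t) = \frac{x_0}{1 - x_0 t}
      \quad \text{(diverges at the finite time } t_s = 1/x_0\text{)}

So e^t is indeed still finite at heat death; only the hyperbolic form puts the pole on an actual calendar date.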
daveguy
What I want to know is how bitcoin going full tulip and OpenAI going bankrupt will affect the projection. Can they extrapolate that? Extrapolation of those two event dates would be sufficient, regardless of effect on a potential singularity.
aenis
Damn. I had plans.
singularfutur
The singularity is always scheduled for right after the current funding round closes but before the VCs need liquidity. Funny how that works.
darepublic
> Real data. Real model. Real date!
Arrested Development?
PantaloonFlames
This is what I come here for. Terrific.
bradgessler
What time?
hhh
this just feels like ai psychosis slop man
neilellis
End of the World? Must be Tuesday.
nurettin
With this kind of scientific rigour, the author could also prove that his aunt is a green parakeet.
ck2
Does "tokens per dollar" have a "moore's law" of doubling?
Because while machine learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world like smartphones once did.
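As a back-of-the-envelope: here is what a Moore's-law-style doubling would mean; the 18-month cadence and the starting price are invented assumptions, purely to show the arithmetic:

    # Hypothetical Moore-style cadence: tokens-per-dollar doubles every
    # 18 months, so cost falls by 2^(months/18). All inputs are invented.
    start = 10.0            # assumed $ per million tokens today
    doubling_months = 18.0  # assumed doubling period

    for years in (1, 3, 5, 10):
        factor = 2 ** (years * 12 / doubling_months)
        print(f"after {years:2d}y: {factor:7.1f}x cheaper -> "
              f"${start / factor:.4f} per million tokens")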
Prior work with the same vibe: https://xkcd.com/1007/
2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
OutOfHere
I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.
hipster_robot
why is everything broken?
> the top post on hn right now: The Singularity will occur on a Tuesday
oh
vagrantstreet
Was expecting some mention of Universal Approximation Theorem
I really don't care much if this is semi-satire as someone else pointed out, the idea that AI will ever get "sentient" or explode into a singularity has to die out pretty please. Just make some nice Titanfall style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense please
bitwize
Thus will speak our machine overlord: "For you, the day AI came alive was the most important day of your life... but for me, it was Tuesday."
brador
100% an AI wrote this. Possibly specifically to get to the top spot on HN.
Those short sentences are the most obvious clue. It’s too well written to be human.
Night_Thastus
This'll be a fun re-read in ~5 years when most of this has ended up being a nothing burger. (Minus one or two OK use-cases of LLMs)
CGMthrowaway
> 95% CI: Jan 2030–Jan 2041
u8rghuxehui
hi
cubefox
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled:
Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander: "1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...
LLM slop article.
Friendly reminder:
Scaling LLMs will not lead to AGI.
This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually. We're just not sure when.
The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.
What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.
I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.
AndrewKemendo
Y’all are hilarious
The singularity is not something that’s going to be disputable
it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation
I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully
Everybody needs to be preparing for the world where it's:
human plus machine
versus
human groups by themselves
across all possible categories of competition and collaboration
Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race
Oh and by the way, it's not gonna be with LLMs; it's coming to you from RL + robotics.