AI coding is gambling

193 points | 222 comments | 3 hours ago
watzon

I think this article makes a valid point. However, if AI coding is considered gambling, then being a project manager overseeing multiple developers could also be seen as a form of gambling to a certain degree. In reality, there isn't much difference between the two. AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.

itsgrimetime

All of this new capability has made me realize that the reason i love programming _isn't_ the same as the OP. I used to think (and tell others) that I loved understanding something deeply, wading through the details to figure out a tough problem. but actually, being able to will anything I can think of into existence is what I love about programming. I do feel for the people who were able to make careers out of falling in love w/ and getting good at picking problems & systems apart, breaking them down, and understanding them fully. I respect the discipline, curiosity, and intellect they have. but I also am elated w/ where things are at/going. this feels absurd to say, but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months, but having tools that can finally match the speed my ideas come to me is intoxicating

FL4TLiN3

In my corner of the world, average software developers at Tokyo companies, not that many people are actually using Claude Code for their day-to-day work yet. Their employers have rolled it out and actively encourage adoption, but nobody wants to change how they work.

This probably won't surprise anyone familiar with Japanese corporate culture: external pressure to boost productivity just doesn't land the same way here. People nod, and then keep doing what they've always done.

It's a strange scene to witness, but honestly, I'm grateful for it. I've also been watching plenty of developers elsewhere get their spirits genuinely crushed by coding agents, burning out chasing the slot machine the author describes. So for now, I'm thankful I still get to see this pastoral little landscape where people just... write their own code.

copypaper

You got to know when to Ship it,

Know when to Re-prompt,

Know when to Clear the Context,

And know when to RLHF.

You never trust the Output,

When you’re staring at the diff view,

There’ll (not) be time enough for Fixing,

When the Tokens are all spent.

Terr_

I'd emphasize that prompting LLMs to generate code isn't just metaphorical gambling in the sense of "taking a risk", the scary part is the more-literal gambling involving addictive behaviors and how those affect the way the user interacts with the machine and the world.

Heck, this technology also offers a parasocial relationship at the same time! Plopping tokens into a slot-machine which also projects a holographic "best friend" that gives you "encouragement" would fit fine in any cyberpunk dystopia.

dzink

It’s variable rewards, and even with large models the same question can lead to dramatically different answers. Possibly because they route your request through different models, possibly because the model has more time to dig through the problem. Nonetheless we have some illusion of control over the output (otherwise we wouldn’t be playing), but it is mostly the quality of the model itself that leads to better outcomes, not your input. If you can’t let go of that feeling, though, it’s definitely addictive. And as I look back, it’s a fast iteration on the building cycle we had before AI. But the brain really likes low latency; it is addicted to the fast reward for its actions. So if AI gets fast enough (sub 400ms), it will likely become irreversibly addictive to humans in general, as the brain will see it as part of itself. Hope it has our interests at heart by then.
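
The "same question, dramatically different answers" point comes down to sampling: at nonzero temperature the model draws tokens from a probability distribution rather than always taking the most likely one. A minimal sketch of temperature-scaled softmax sampling over a made-up next-token distribution (the logits here are invented for illustration):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax sampling: higher temperature flattens the distribution,
    so repeated runs on the same input diverge more."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Same "prompt" (same logits), ten runs: picks vary at T=1.0...
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits, 1.0) for _ in range(10)])
# ...while at temperature near zero, sampling collapses toward argmax.
print([sample_next_token(logits, 0.01) for _ in range(10)])
```

Served models add further variance on top of this (batching, routing, hardware nondeterminism), but sampling alone is enough to make identical prompts produce different outputs.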

thisisbrians

It is and will always be about: 1) properly defining the spec 2) ensuring the implementation satisfies said spec
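
The two steps can be made executable with a property-based check: the spec becomes a predicate, and the implementation is tested against it on random inputs. A minimal sketch with hypothetical names (`sort_descending` and `satisfies_spec` are made up for illustration):

```python
import random

# Hypothetical implementation under test (could be AI-generated).
def sort_descending(xs):
    return sorted(xs, reverse=True)

# Step 1: properly define the spec, as an executable predicate.
def satisfies_spec(inp, out):
    # Output is a permutation of the input, and adjacent pairs are non-increasing.
    return sorted(inp) == sorted(out) and all(
        a >= b for a, b in zip(out, out[1:])
    )

# Step 2: ensure the implementation satisfies said spec, on random inputs.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert satisfies_spec(xs, sort_descending(xs))
print("spec satisfied on 1000 random inputs")
```

Random checking doesn't prove the spec holds everywhere, but it makes "satisfies the spec" a concrete, rerunnable claim rather than a feeling about a diff.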

minimaxir

The gambling metaphor often applied to vibecoding implies that the outcome cannot be controlled or influenced, as with a slot machine. Opus 4.5 and beyond show that it not only can very much be influenced, but also that it can give better results more consistently with the proper checks and balances.

hodder

Depending on anyone for anything is gambling.

comboy

Fascinating how HN is torn about vibe coding still. Everybody pretty much agrees that it works for some use cases, yet there is a flamewar (I mean, cultured, HN-type one) every time. People seem to be more comfortable in a binary mindset.

some_random

How often do you have to win before it's no longer gambling?

cmiles8

It’s like any powerful tool. If you use it right it’s amazing. If you get careless or don’t watch it closely you’ll get hurt really badly.

Overall I’m a fan, but yes there are things to watch for. It doesn’t replace skilled humans but it does help skilled humans work faster if used right.

The labor replacement story is bullshit mostly, but that doesn’t mean it’s all bad.

jsLavaGoat

Everything is "fast, cheap, good--pick two." This is no different.

PaulHoule

I think somebody like Nate Silver might say “everything is gambling” if you really pressed them.

A big theme of software development for me has been finishing things other people couldn’t finish and the key to that is “control variance and the mean will take care of itself”

Alternatively: the junior dev thinks he has a mean of 5 min, but the variance is really 5 weeks. The senior dev has a mean of 5 hours and a variance of 5 hours.
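
Concretely, the variance point can be sketched as a toy Monte Carlo. All distributions and parameters below are made up (a lognormal with the stated medians, just to get a heavy tail); the point is only that a tiny typical time plus a huge spread gives a far worse mean and tail than a moderate, predictable estimate:

```python
import math
import random

random.seed(0)

def hours(median_h, sigma):
    """Draw a task-completion time (in hours) from a lognormal with the
    given median; heavy-tailed, a common toy model for task durations."""
    return random.lognormvariate(math.log(median_h), sigma)

# Made-up parameters: "junior" has a tiny typical time but a huge spread;
# "senior" is slower typically but far more predictable.
junior = sorted(hours(5 / 60, 3.0) for _ in range(100_000))  # median 5 min
senior = sorted(hours(5.0, 0.8) for _ in range(100_000))     # median 5 h

for name, xs in [("junior", junior), ("senior", senior)]:
    print(f"{name}: median {xs[len(xs) // 2]:.2f} h, "
          f"mean {sum(xs) / len(xs):.1f} h, "
          f"p99 {xs[int(len(xs) * 0.99)]:.0f} h")
```

Under these invented numbers, the "5 minute" estimator's mean and 99th percentile blow far past the predictable one's: controlling variance is what makes things finishable.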

NickNaraghi

It's only "gambling" for now...

The odds of success feel like gambling. 60%, or 40%, or worse. This is downstream of model quality.

Soon, 80%, 95%, 99%, 99.99%. Then, it won't be "gambling" anymore.

simonw

Assigning work to an intern is gambling: they're inherently non-deterministic and it's a roll of the dice whether the work they do will be good enough or you'll have to give them feedback in order to get to what you need.

amw-zero

So is human coding.

rustyhancock

Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them.

Sometimes I think we put the Carr before the horse. We gamble because evolution promotes that approach.

Yes I could go for the reliable option. But taking a punt is worth a shot if the cost is low.

The cost of AI is low.

What is a problem is people getting wrapped up in just one more pull of the slot machine handle.

I use AI often. But fairly often I simply bin its response and get to work on my own. A decent amount of the time I can work with the response given to make a decent result.

Sometimes, rarely, it gives me what I need right off the bat.

yoyohello13

I was just thinking about this. I was reading those tweets about the SV party where people were going home early to “check on their agents”, or the “token anxiety” people are having over whether they’re optimizing their agent usage. This is all giving me addiction vibes. Especially since, at the end of the day, it seems like there is not much to show for it.

aderix

Sometimes I feel that subsidising these plans (vs the cost via the API) is meant to make more and more people increasingly addicted.

Retr0id

> But now either the AI can handle it or it can pretend to handle it. Frankly it's pretending both times, but often it's enough to get the result we need.

This has been how I think about it, too. The success rates are going up, but I still view the AI as an adversary that is trying to trick me into thinking it's being useful. Often the act is good enough to be actually useful, too.

wagwang

> I divide my tasks into good for the soul and bad for it. Coding generally goes into good for the soul, even when I do it poorly.

Lmk how you feel when you’re constantly building integrations with legacy software by hand.

7777332215

The problem with AI coding is that you no longer own the foundational tools.

__MatrixMan__

Inductive reasoning of any kind (e.g. the scientific method) is gambling.

LetsGetTechnicl

Yes, that's literally how LLMs work, they're probabilistic.

DiscourseFan

When code doesn't compile, it doesn't kill anyone. But if a Waymo suddenly veers off the road, it creates a real threat. Waymos had to be safer than human drivers for people to begin to trust them; coding tools did not have to be better than humans to be adopted. It's entirely possible for a human to make a catastrophic error. I imagine in the future it will be more likely that a human makes such errors, just as it's already more likely that a human will make more errors driving a car.

halotrope

idk, it works for me. it builds stuff that would have taken weeks in hours. ymmv

Gagarin1917

Trying to decide whether to refinance now or not feels like gambling too. Yet it’s financially beneficial to make some bet.

Defining “gambling” like this isn’t really helpful.

nativeit

I have had very similar experiences. I am not a professional software developer, but have been a Linux sysadmin for over a decade, a web developer for much longer than that, and generally know enough to hack on other people’s projects to make them suit my own purposes.

When I have Claude create something from scratch, it all appears very competent, even impressive, and it usually will build/function successfully…on the surface. I have noticed on several occasions that Claude has effectively coded the aesthetics of what I want, but left the substance out. A feature will appear to have been implemented exactly as I asked, but when I dig into the details, it’s a lot of very brittle logic that will almost certainly become a problem in future.

This is why I refuse to release anything it makes for me. I know that it’s not good enough, that I won’t be able to properly maintain it, and that such a product would likely harm my reputation, sooner or later. What frightens me is there are a LOT of people who either don’t know enough to recognize this, or who simply don’t care and are looking for a quick buck. It’s already getting significantly more difficult to search for software projects without getting miles of slop. I don’t know how this will ultimately shake out, but if it’s this bad at the thing it’s supposedly good at, I can only imagine the kinds of military applications being leveraged right now…

dwa3592

A few thoughts on this: it's not gambling if the most expected outcome actually occurs.

It also depends on what you're coding with:

- If you're coding with opus4.6, then it's not gambling for a while.

- If you're coding with gemini3-flash, then yeah.

One thing I have noticed though is that you have to spend a lot of tokens to keep the error/hallucination rate low as your codebase increases in size. The math of this problem makes sense: as the codebase grows, there's physically more surface where something could go wrong. To avoid that, you have to consistently and efficiently make the surface and all its features visible to the model. If you have coded with a model for a week and it has produced some code, the model is not more intelligent after that week; it still has the same layers and parameters. So keeping the context relevant is a moving target as the codebase grows (and that's probably why it feels like gambling to some people).
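
The "more surface, more ways to go wrong" intuition has a simple form: if each relevant code unit is misread with some small independent probability p, the chance of at least one error across n units is 1 - (1 - p)^n, which climbs quickly. The numbers below are purely illustrative:

```python
# Probability of at least one mistake when n code units are each
# misread with independent probability p (illustrative numbers only).
def at_least_one_error(p, n):
    return 1 - (1 - p) ** n

for n in [10, 100, 1_000, 10_000]:
    print(f"n={n:>6}: {at_least_one_error(0.001, n):.3f}")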

ryoshu

Like video gaming, but similar.

N7lo4nl34akaoSN

(Venture) capitalism is already gambling. AI is just a multiplier.

anal_reactor

An idea just occurred to me: why not tell AI to code in Coq? AFAIK the selling point of that language is that if it compiles, then it's guaranteed to work. It's just that it's a PITA to write code in Coq, but AI won't get annoyed and quit.
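
For the unfamiliar, the "if it compiles, it's guaranteed to work" property means the function's type carries its specification, and type-checking is proof-checking. A tiny sketch in Lean (a close cousin of Coq), proving a trivial spec:

```lean
-- The return type *is* the spec: this definition only type-checks
-- because `Nat.add_comm` actually proves commutativity of addition.
def addComm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The catch, of course, is that writing a useful spec in the first place is the hard part, whether or not an AI writes the proof.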

samschooler

I think there are levels to this.

- One shot or "spray and pray" prompt only vibe coding: gambling.

- Spec driven TDD AI vibe coding: more akin to poker.

- Normal coding (maybe with tab auto complete): eating veggies/work.

Notably though, gambling has the massive downside of losing your entire life and life savings. The "vibe coding" bucket's worst case is being insufferable to your friends and family, wasting your time, and spending $200/month on a max plan.

lasgawe

haha.. I agree with the points mentioned in the article. Literally every model does this. It feels like this even with skills and other buzzword files

himata4113

I really hate when people write about the AI of the past, opus 4.6 and gpt 5.4 [not as much imo, it's really boring and uncreative] have increased in capabilities so much that it's honestly mind numbing compared to what we had LESS than a year ago.

Opus specifically from 4.1 to 4.5 was such a major leap that some take it for granted, it went from getting stuck in loops, generally getting lost constantly, needing so so much attention to keep it going to being able to get a prompt, understand it from minimal context and produce what you wanted it to do. Opus 4.6 was a slight downgrade since it has issues with respecting what the user has to say.

extr

I mean, this completely falls apart when you're trying to do something "real". I am building a trading engine right now with Claude/Codex. I have not written a line of code myself. However, I care deeply about making sure everything works well because it's my money on the line. I have to weigh carefully the prospect of landing a change that I don't fully understand.

Sometimes I can get away with 3K LoC PRs, sometimes I take a really long time on a +80 -25 change. You have to be intellectually honest with yourself about where to spend your time.

tonymet

As always, scope the changes to no larger than you can verify. AI changes the scale, but not the strategy.

Now you have more resources to test, reduce permissions scope, to build a test bench & procedure. All of the excuses you once had for not doing the job right are now gone.

You can write 10k + lines of test code in a few minutes. What is the gamble? The old world was a bigger gamble.

luckydata

it's gambling until you learn how to set up proper harnesses then it just becomes normal administration. It's no different than running a team, humans make mistakes too, that's why we have CI pipelines, automated testing etc... AI assisted coding "JUST" requires you to be extra good at that part of the job.

rob_c

So.

Is.

Life.

You've discovered probability; there was an 80% chance of that. Roll a die and do not pass go.

Again: the output from an LLM is a probable solution, not right, not wrong.

CodingJeebus

For me, the accelerated feedback loop that AI now permits is so addictive in my day-to-day flows. I've had a really hard time stepping away from work at a reasonable hour because I get dopamine hits from seeing Claude build things so fast.

Addiction and recovery is part of my story, so I've done quite a bit of work around that part of my life. I don't gamble, but I can confidently say that using LLMs has been an incredible boost in my productivity while completely destroying my good habits around setting boundaries, not working until 2AM, etc.

In that sense, it feels very much like gambling.

1970-01-01

"60% of the time, it works every time"

rvz

It is indeed gambling. You are spending more tokens hoping that the agent aligns with your desired output from your prompt. Sometimes it works, sometimes it doesn't.

Vibe gamblers hooked on coding agents, who can't solve FizzBuzz in Rust, are being given promotional offers by Anthropic [0]: free token allowances, the casino equivalent of free $20 bets or free spins, valid until March 27, 2026.

The house (Anthropic) always wins.

[0] https://support.claude.com/en/articles/14063676-claude-march...

zzzeek

coding with an LLM works if the model you are following is this: you have the role of architect and/or senior developer, and you have the smartest junior programmer in the world working for you. You watch everything it does, check its conclusions, challenge it, call it out on things it didn't get quite right

it's really extremely similar to working with a junior programmer

so in this post, where does this go wrong?

> I am not your average developer. I’ve never worked on large teams and I’ve barely started a project from scratch. The internet is filled with code and ideas, most of it freely available for you to fork and change.

Because this describes a cut-and-paster, not a software architect. Hence the LLM is a gambling machine for someone like this since they lack the wisdom to really know how to do things.

There's of course a huge issue which is that how are we going to get more senior/architect programmers in the pipeline if everyone junior is also doing everything with LLMs now. I can't answer that and this might be the asteroid that wipes out the dinosaurs....but in the meantime, if you DO know how to write from scratch and have some experience managing teams of programmers, the LLMs are super useful.

1234letshaveatw

Is using a calculator gambling?

xnx

...and the payouts are fantastic.

artursapek

“hiring people is gambling”


lokimoon

h1b coding is ignorance.

bensyverson

This "slot machine" metaphor is played out. If you're just entering a coin's worth of information and nudging it over and over in the hopes of getting something good, that's a you problem, not a Claude problem.

If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.

CraftingLinks

I see whole teams, pushed by C-level, going all in with spec-driven + TDD development. The devs hate it because they are literally forbidden to touch a single line of code, but the results speak for themselves: it just works, and the pressure has shifted to the product people to keep up. The whole tooling to enable this had to be worked out first: Cursor everywhere, plus heavy use of a tool called Speckit, connected to Notion to pump out documentation, and Jira.

post-it

> But this doesn't really resemble coding. An act that requires a lot of thinking and writing long detailed code.

Does it? It did in the past. Now it doesn't. Maybe "add a button to display a colour selector" really is the canonical way to code that feature, and the 100+ lines of generated code are just a machine language artifact like binary.

> But it robs me of the part that’s best for the soul. Figuring out how this works for me, finding the clever fix or conversion and getting it working. My job went from connecting these two things being the hard and reward part, to just mopping up how poorly they’ve been connected.

Skill issue. Two nights ago, I used Claude to write an iOS app to convert Live Photos into gifs. No other app does it well. I'm going to publish it as my first app. I wouldn't have bothered to do it without AI, and my soul feels a lot better with it.