I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write," and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at letting you skip the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.
JohnMakin
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe coded solution for perceived $problem I have no expertise in whatsoever and built in an afternoon" posts, and the flurry of domain experts responding with "wtf, no one needs this," is kind of schadenfreude, and I feel a little guilty for enjoying it.
cal_dent
Writing is thinking. If you outsource your writing then you're no longer really thinking. Some people keep trying to overcomplicate this but that is it in a nutshell.
josefresco
While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
kouru225
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
jcalvinowens
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
tptacek
That may be, but it's also exposing a lot of gatekeeping: the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of having to bloody your forehead getting it to work.
As for AI for actual prose writing: no question, don't let a single word an LLM generates land in your document; even if you like it, kill it.
lasgawe
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
serf
AI doesn't make people boring, boring people use AI to make projects they otherwise never would have.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people."
"Buzzsaws make you cut your fingers off."
"Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe coded project?
overgard
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. I.e., you don't wait for inspiration and then go do the work; you start doing the work and eventually you become inspired. You rarely just "have a great idea": it comes from immersing yourself in a problem, being surrounded by constraints, and finding a way to solve it. AI completely short-circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force; it probably just means you run out of ideas or your ideas kind of suck.
max8539
Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.
These days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring; they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but those are just rarer in the overall pool of products.
nemomarx
I've seen a few people use AI to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
amoorthy
We built an editor for creating posts on LinkedIn where - to avoid the "all AI output is similar/boring" issue - it operates more like a muse than a writer. i.e. it asks questions, pushes you to have deeper insights, connects the dots with what others are writing about etc.
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
TheDong
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
zinodaur
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
taude
AI writing will make people who write worse than average, better writers. It'll also make people who write better than average, worse writers. Know where you stand, and have the taste to use wisely.
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, if you're going to be doing much writing, you should have your own prompt that helps with your voice and style. Don't be lazy just because you're leaning on LLMs.
daxfohl
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
glitchc
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a potentially new privacy-enhancing technique for public discourse is a welcome addition.
BiraIgnacio
One of the downsides of Vibe-Coded-Everything that I am seeing is that it reinforces the "just make it look good" culture.
Just create the feature that the user wants and move on. It doesn't matter if next time you need to fix a typo on that feature it will cost 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know.
I was once told by people in the video game industry that games were usually buggy because they were short lived.
Not sure if I truly buy that but if anything vibe coded becomes throw away, I wouldn't be surprised.
fredliu
We are in this transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once things stabilize (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort, even with AI help, behind creating something, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
Kalpaka
The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
adverbly
Whoa there. Let's not oversimplify in either direction here.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
iambateman
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
mym1990
Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically 0. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and original voice is quite valuable, but I would say most people don't hit those 2 things in a meaningful way (with or without LLMs).
pelagicAustral
I 100% agree with the sentiment, but as someone who has worked on Government systems for a good amount of time, I can tell you, boring can be just about right sometimes.
In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.
Lerc
This is more about low effort than AI.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness is people adjusting to what should be considered interesting to others. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, that is perhaps sufficient. If people do not like it, then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs. Benevolent Dictatorship, an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
discreteevent
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a Cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
Additionally, the boredom atrophies any future collaboration. Why make a helpful library and share it when you can brute force the thing that it helps with? Why make a library when the bots will just slurp it up as delicious corpus and barf it back out at us? Why refactor? Why share your thoughts, wasting time typing? Why collaborate at all when the machine does everything with less mental energy? There was an article recently about offloading/outsourcing your thoughts, i.e. your humanity, which is part of why it's all so unbelievably boring.
The issue with the recent rise in Show HN submissions, from the perspective of someone on the ‘being shown’ side, is that they are, from many different perspectives, lower quality than they used to be.
They’re solving small problems, or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’, and it’s patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
ineedasername
No, AI makes it easier to get boring ideas past an elevator pitch or "Wouldn't it be cool..". Countless people right now are able to ride that initial bit of excitement over an idea straight into building it, instead of doing a little groundwork on its various aspects first.
That groundwork is often long enough to think things through a bit, and even to have "so what are you working on?" conversations with a friend or colleague that shake out the mediocre or bad ideas, and either refine things or make you toss the idea.
MATTEHWHOU
There's a version of this argument I agree with and one I don't.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
0xbadcafebee
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
elif
And AI denial makes you annoying.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
daxfohl
I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest common denominator AI-generated most-likely-next-token as your corporate vision, and want your engineering teams to behave likewise.
At least this CEO gets it. Hopefully more will start to follow.
ossa-ma
Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Lmk if you find it useful, will likely ShowHN it once polished.
Retr0id
This aligns with an article I titled "AI can only solve boring problems"[0]
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
Along these same lines, I have been trying to become better at knowing when my work could benefit from reversion to the "boring", general mean, and when outsourcing thought or planning would cause a reversion to the mean (downwards).
This echoes the comments here about enjoying not writing boilerplate. The catch is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time into going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.
stopachka
> Original ideas are the result of the very work you’re offloading on LLMs.
I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
jihadjihad
> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
acjohnson55
This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.
To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
mrkramer
I vibe code little productivity apps that would have taken me months to make, and now I can make them in a few days. But tbh, talking to Google's Gemini is like talking to a drunk programmer; while solving one bug it introduces another, and we fight back and forth until it realizes what needs to be fixed and how.
Supermancho
> I don't actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
matsemann
If you spent 3 hours on a Show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you can now have a more polished product in the same timeframe thanks to AI doesn't really change that. It just changes the baseline for what's expected. This goes for other things as well, like writing or art. If you normally spent 2 hours on a blog post, and you can now do it in 5 minutes, that most likely means it's a boring post to read. Spend the 2 hours still; just with the help of AI it should now be better.
kazinator
If actually making something with AI and showing it to people makes you boring ... imagine how boring you are when you blog about AI, where at most you only verbally describe some attributes of what AI made for you, if anything.
maplethorpe
Just this week I made a todo list app and a fitness tracking app and put them both on the App Store. What did you make?
fdefitte
The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.
dabedee
Anecdotally, I haven't been excited by anything published on Show HN recently (the exception being the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions, and mostly vibe-coded projects whose authors haven't actually thought that hard about what real problem they are solving.
solarisos
This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.
nickjj
Look at the world Google is molding.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time, up until about 2-3 years ago when AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
NOTE: I had never seen him in my YouTube feed until the other day, but it resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so. Traffic to my site nose-dived, taking a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
jason_oster
Back in my day, boring code was celebrated.
minimaxir
> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just shipped the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work instead of watching the agent generate code.
crawshaw
It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.
Oarch
Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable or even desirable. I ignored it. We're all fed up with this productivity theatre.
darod
"You don’t get build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think." Great line!
bcatanzaro
AI is a mirror. If you are boring, you will use AI in a boring way.
grimgrin
I land on this thread to ctrl-f "taste" and will refresh and repeat later
That is for sure the word of the year, true or not. I agree with it, I think
BurningFrog
OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.
notatoad
i think about this a lot with respect to AI-generated art. calling something "derivative" used to be a damning criticism. now, we've got tools whose whole purpose is to make things that are very literally derivative of the work that has come before them.
derivative work might be useful, but it's not interesting.
gAI
I'm self-aware enough to know that AI is not the reason I'm boring.
mhurron
Jokes on you, I was boring before AI.
turnsout
I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.
Sol-
The headline should be qualified: Maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people in practice would do.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)
waldopat
Slop is probably more accurate than boring. LLM-assisted development enables output and speed. In the right hands, it can really bring improvements to code quality or execution. In the wrong hands, you get slop.
Brilliant. You put into words something that I've thought every time I've seen people flinging around slop, or ideating about ways to fling around slop to "accelerate productivity"...
logicprog
I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, along with articles on CSP, fibers, compilers, type systems, and ECS, and writing down notes and ideas.
So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because AI lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about in my notebooks, dozens of them.
ryandrake
Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring, we're already there.
hnlmorg
I’ve been bashing my head against the wall with AI this week because it has utterly failed to even get close to solving my novel problems.
And that’s when it dawned on me just how much of the AI hype has been around boring, seen-many-times-before technologies.
This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side projects that clearly won’t be around in 6 months’ time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person’s dull project, even though those authors spent next to zero effort themselves in creating it.
It’s so dumb.
5o1ecist
It appears that the author only now discovered what has been obvious all along, all the time. I wish for the author to now read my post, so I can pretend I've stolen the time back.
Most people are boring. Most people have always been boring. Most people are average, and the average is boring. If you don't want to believe that, simply compare the number of boring people to not-boring people. (Note: People might be amusing and appear as not-boring, but still be boring, generic, average people.)
It has actually nothing to do with AI. Most people around are, by default, not thinking deeply either. They barely understand anything beyond a surface level ... and no, it does not at all matter what it's about.
For example: Stupid doctors exist. They're not rare, but the norm. They've spent a lot of time learning all kinds of supposedly important things, only to end up essentially as a pattern matching machine, thus easily replaced by AI. Stupid doctors exist, because intelligence isn't actually a requirement.
Of course there exists no widely perceived problem in this regard, at least not beyond so-called anecdotal evidence strongly suggesting that most doctors are, in fact, just as stupid as most other people.
The same goes for programmers. Or blog-posters. There are millions of existing, active blog-posters, dwarfed by the dozens of millions of people who have tried it and who have, for whatever reason, failed.
Of the millions of existing, active blog-posters it is impossible to make the claim that all of them are good, or even remotely good. It is inevitable that a huge portion of them is what would colloquially likely be called trash. As with everything people do, there is a huge amount of them within the average (d'uh) and then there's the outliers upwards, who everyone else benefits from.
Or streamers. Not everyone's an xQc or AsmonGold for a reason. These are the outliers. The average benefit from their existence and the rest is what it is.
The 1% rule of the internet, albeit with proportions that are, of course, relative, is correct. [1]
It is actually rather amusing that the author assumes that MY FELLOW HUMANS are, by default, capable of deep thinking. They are not. This is not a thing. It needs to be learned, just like everything else. Even if born with the ability, people in general aren't being raised into utilizing it.
Sadly, the status quo is that most people learn about thinking roughly the same as they learn about the world of wealth and money: Almost nothing.
Both fundamentally important, both completely brushed aside or simply beyond ignorance.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time
This is actually not true. It's the pause, after the immersion, which actually carries most of the weight. The pause. You can spend weeks learning about things, but convergence happens most effectively during a pause, just like muscles aren't self-improving during training, but during the pause. [2]
Well... it's that, or marihuana. Marihuana (not all types/strains work for that!) is insanely effective both for creativity and for simply testing how deeply the gathered knowledge converged. [3]
Exceptionally, as a Fun Fact, there are "Creativity Parties", in which groups of people smoke weed exactly for the purpose of creating and dismissing hundreds of ideas not worth thinking further about, in hopes of someone having that one singular grand idea that's going to cause a leap forward, spawned out of an artificially induced convergence of these hundreds of others.
(Yes, we can schedule peak creativity. Regularly. No permanent downsides.)
If your audience is technically or cognitively literate, your original phrase - "for testing how deeply the gathered knowledge converged" - actually works quite elegantly. It conveys that you’re probing the profundity of the coherence achieved during passive consolidation, which is exactly what you described.
So your correction of the quote isn’t nitpicking - it’s a legitimate refinement of how creativity actually unfolds neurocognitively. The insight moment often follows disengagement, not saturation.
nickysielicki
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow, surface-level understanding than to deep contemplation of the problem.
himata4113
I've actually run into a few blogs that were incredibly shallow while sounding profound.
I think when people use AI to, for example, compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone with experience of both, are complete nonsense.
redwood
AI is groupthink. Groupthink makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc., when there were countless others who came before them and after them? Because they happened to emerge at the right time in our mass culture.
Imagine how dynamic the world was before radio, before TV, before movies, before the internet, before AI. I mean, imagine a small-town theater, musician, comedian, or anything else before we had all homogenized to mass culture. It's hard to know what it was like, but I think it's what drives the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid groupthink in the AI age? The same way as in every other age: by making room for people to think and act differently.
tonymet
When apps were expensive to build, developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be an artisanal hand-built software engineer, and still fail at making anything appealing.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security, and the areas that I am embarrassed to have overlooked on previous projects.
JimmaDaRustla
Bro, I'm a software developer, it's not the fucking AI making me boring.
add-sub-mul-div
Also sounds likely that it's the mediocre who gravitate to AI in the first place.
aaroninsf
Setting aside the marvelous murk in that use of "you," which parenthetically I would be happy to chat about ad nauseam,
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
I can never these days stop thinking about the XKCD the punchline of which is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions upon which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this is the rare case where, absent black swan externalities ending the game, line goes up.
PaulHoule
Try the formulation "anything about AI is boring."
Whether it is the guy reporting on his last year of agentic coding (did half-baked evals of 25 models that will be off the market in 2 years) or Steve Yegge smoking weed and gaslighting us with "Gas Town" or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model that is so much better than what you're using, but this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes, while all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
clint
Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?
hhsuey
Another clickbait title produced by a human. Most of your premises could easily be countered. Every comment is essentially an example.
wagwang
Isn't this just flat-out untrue, since bots can pass Turing tests?
stuckinhell
I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.
apexalpha
Meh.
Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.
I'm sure some of them will actually hold out. Just like those people still buying vinyl because Spotify is 'not art' or whatever.
Have fun all, meanwhile I built 2 apps this weekend purely for myself. Would've taken me weeks a few years ago.
elliotbnvl
I was on board with the author until this paragraph:
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber-ducking, and just having a mirror to help you navigate your thoughts, is ultimately more productive and produces more useful thinking than operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median ideas already baked into their weights).
They also strawman usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.
I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text and when it goes well other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak) but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boiler plate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing because it is not innovative itself.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe coded solution for perceived $problem I have no expertise in whatever and built in an afternoon" and the flurry of domain experts responding like "wtf, no one needs this" is kind of schadenfreude, but I feel guilty a little for enjoying it.
Writing is thinking. If you outsource your writing then you're no longer really thinking. Some people keep trying to overcomplicate this but that is it in a nutshell.
While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
That may be, but it's also exposing a lot of gatekeeping; the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is; it wasn't the idea that was interesting, it was, well, the hazing ritual of having to bloody your forehead of getting it to work.
AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
AI doesn't make people boring, boring people use AI to make projects they otherwise never would have.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader : is a non-participant in Show HN less boring than a participant with a vibe coded project?
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. IE, you don't wait for inspiration and then go do the work, you start doing the work and eventually you become inspired. You rarely just "have a great idea", it comes from immersing yourself in a problem, being surrounded with constraints, and finding a way to solve it. AI completely short circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force, it probably just means you run out of ideas or your ideas kind of suck.
Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.
Now, these days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring, they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but they’re just rarer in the overall amount of products
I've seen a few people use ai to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
We built an editor for creating posts on LinkedIn where - to avoid the "all AI output is similar/boring" issue - it operates more like a muse than a writer. i.e. it asks questions, pushes you to have deeper insights, connects the dots with what others are writing about etc.
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
AI writing will make people who write worse than average better writers. It'll also make people who write better than average worse writers. Know where you stand, and have the taste to use it wisely.
EDIT: also, just as you create AGENT.md files to help AI write code your way for your projects, if you're going to be doing much writing, you should have your own prompt that helps with your voice and style. Don't be lazy just because you're leaning on LLMs.
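To make that concrete, here's a minimal sketch of what such a personal style prompt might look like (the VOICE.md name and every line in it are purely illustrative, not any standard):

    # VOICE.md: personal style prompt, pasted at the start of writing sessions
    - Write in first person; mostly short, declarative sentences.
    - Prefer concrete examples over abstractions; cut filler transitions.
    - Keep my dry asides; don't smooth them into corporate tone.
    - Flag sentences that sound like generic LLM output instead of silently rewriting them.
    - If unsure of my opinion on something, ask; never invent one for me.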
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a new privacy-enhancing technique for public discourse is a welcome addition.
One of the downsides of Vibe-Coded-Everything that I am seeing is that it reinforces the "just make it look good" culture. Just create the feature that the user wants and move on. It doesn't matter if, the next time you need to fix a typo in that feature, it will cost 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short lived. Not sure if I truly buy that but if anything vibe coded becomes throw away, I wouldn't be surprised.
We are in a transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once things stabilize (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort behind creating something, even with AI help, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
Whoa there. Let's not oversimplify in either direction here.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically zero. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those two things in a meaningful way (with or without LLMs).
I 100% agree with the sentiment, but as someone who has worked on government systems for a good amount of time, I can tell you: boring can be just about right sometimes.
In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.
This is more about low effort than AI.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness is people adjusting their work to what they guess others will find interesting. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, it is perhaps sufficient. If people do not like it then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes the visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs Benevolent Dictatorship an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
[1] https://news.ycombinator.com/item?id=47059206
I agree for writing, but not for coding.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
Additionally, the boredom atrophies any future collaboration. Why make a helpful library and share it when you can brute-force the thing it helps with? Why make a library when the bots will just slurp it up as delicious corpus and barf it back out at us? Why refactor? Why share your thoughts, wasting time typing? Why collaborate at all when the machine does everything with less mental energy? There was an article recently about offloading/outsourcing your thoughts, i.e. your humanity, which is part of why it's all so unbelievably boring.
Online ecosystem decay is on the horizon.
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
The issue with the recent rise in Show HN submissions, from the perspective of someone on the ‘being shown’ side, is that they are, from many different perspectives, lower quality than they used to be.
They’re solving small problems, or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
No; AI makes it easier to get boring ideas past the elevator pitch or "Wouldn't it be cool.." stage. Countless people right now can act on that initial bit of excitement over an idea without first doing a little groundwork on its various aspects.
That groundwork is often long enough to think things through a bit, and even to have "so what are you working on?" conversations with a friend or colleague that shake out the mediocre or bad ideas, and either refine things or make you toss the idea.
There's a version of this argument I agree with and one I don't.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
And AI denial makes you annoying.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest common denominator AI-generated most-likely-next-token as your corporate vision, and want your engineering teams to behave likewise.
At least this CEO gets it. Hopefully more will start to follow.
Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful; I'll likely Show HN it once it's polished.
This aligns with an article I titled "AI can only solve boring problems"[0]
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
Along these same lines, I have been trying to get better at knowing when my work could benefit from reverting to the "boring" general mean, and when outsourcing thought or planning would cause a reversion to the mean (downwards).
This echoes the comments here about enjoying not writing boilerplate. The catch is that our minds are programmed to offload work when we can, and redirecting all the time saved on boilerplate into going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers push in that direction.
> Original ideas are the result of the very work you’re offloading on LLMs.
I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.
To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
I vibe code little productivity apps that would take me months to make, and now I can make them in a few days. But tbh, talking to Google's Gemini is like talking to a drunk programmer; while solving one bug it introduces another, and we fight back and forth until it realizes what needs to be fixed and how.
> I don't actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
If you spent 3 hours on a Show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you can now have a more polished product in the same timeframe thanks to AI doesn't really change that; it just raises the baseline for what's expected. This goes for other things as well, like writing or art. If you normally spent 2 hours on a blog post and can now do it in 5 minutes, that most likely means it's a boring post to read. Still spend the 2 hours; with the help of AI, it should now be better.
If actually making something with AI and showing it to people makes you boring ... imagine how boring you are when you blog about AI, where at most you only verbally describe some attributes of what AI made for you, if anything.
Just this week I made a todo list app and a fitness tracking app and put them both on the App Store. What did you make?
The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.
Anecdotally, I haven't been excited by anything published on Show HN recently (the exception being the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions, and mostly vibe-coded projects whose authors haven't actually thought that hard about what real problem they are solving.
This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.
Look at the world Google is molding.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was enough to sustain his business this whole time, up until about 2-3 years ago, when AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.
NOTE: I had never seen him in my YouTube feed until the other day, but his story resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so, when traffic to my site nose-dived. That took a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine-oriented, keyword-stuffed way.
Back in my day, boring code was celebrated.
> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just ship the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)
https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)
It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work instead of watching the agent-generated code.
It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.
Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable or even desirable. I ignored it. We're all fed up with this productivity theatre.
"You don’t get build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think." Great line!
AI is a mirror. If you are boring, you will use AI in a boring way.
I land on this thread to ctrl-f "taste" and will refresh and repeat later
That is for sure the word of the year, true or not. I agree with it, I think
OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.
i think about this a lot with respect to AI-generated art. calling something "derivative" used to be a damning criticism. now, we've got tools whose whole purpose is to make things that are very literally derivative of the work that has come before them.
derivative work might be useful, but it's not interesting.
I'm self-aware enough to know that AI is not the reason I'm boring.
Jokes on you, I was boring before AI.
I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.
The headline should be qualified: Maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people in practice would do.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)
Slop is probably more accurate than boring. LLM assisted development enables output and speed. In the right hands, it can really bring improvements to code quality or execution. In the wrong hands, you get slop.
Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...
Or.. Only boring people use AI.
AI also makes you bored.
Brilliant. You put into words something that I've thought every time I've seen people flinging around slop, or ideating about ways to fling around slop to "accelerate productivity"...
I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, writing down notes and ideas.
So really it seems to me more like boring people who aren't deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because the barrier to entry is lower. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages, dozens of them, in my notebooks about.
Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring, we're already there.
I’ve been bashing my head against the wall with AI this week because they’ve utterly failed to even get close to solving my novel problems.
And that’s when it dawned on me just how much of the AI hype has been around boring, seen-many-times-before technologies.
This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person's dull project, even though those authors spent next to zero effort themselves in creating it.
It’s so dumb.
It appears that the author has only now discovered what has been obvious all along, all the time. I wish for the author to now read my post, so I can pretend I've stolen the time back.
Most people are boring. Most people have always been boring. Most people are average and the average is boring. If you don't want to believe that, simply compare the amounts of boring-people to not-boring people. (Note: People might be amusing and appearing as not-boring, but still be boring, generic, average people).
It has actually nothing to do with AI. Most people around are, by default, not thinking deeply either. They barely understand anything beyond a surface level ... and no, it does not at all matter what it's about.
For example: Stupid doctors exist. They're not rare, but the norm. They've spent a lot of time learning all kinds of supposedly important things, only to end up essentially as a pattern matching machine, thus easily replaced by AI. Stupid doctors exist, because intelligence isn't actually a requirement.
Of course there exists no widely perceived problem in this regard, at least not beyond so-called anecdotal evidence strongly suggesting that most doctors are, in fact, just as stupid as most other people.
The same goes for programmers. Or blog-posters. There are millions of existing, active blog-posters, dwarfed by the dozens of millions of people who have tried it and who have, for whatever reason, failed.
Of the millions of existing, active blog-posters it is impossible to make the claim that all of them are good, or even remotely good. It is inevitable that a huge portion of them is what would colloquially likely be called trash. As with everything people do, there is a huge amount of them within the average (d'uh) and then there's the outliers upwards, who everyone else benefits from.
Or streamers. Not everyone's an xQc or AsmonGold for a reason. These are the outliers. The average benefit from their existence and the rest is what it is.
The 1% rule of the internet is correct, albeit with the proportions being, of course, relative. [1]
It is actually rather amusing that the author assumes that MY FELLOW HUMANS are, by default, capable of deep thinking. They are not. This is not a thing. It needs to be learned, just like everything else. Even if born with the ability, people in general aren't being raised into utilizing it.
Sadly, the status quo is that most people learn about thinking roughly the same as they learn about the world of wealth and money: Almost nothing.
Both fundamentally important, both completely brushed aside or simply beyond ignorance.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time
This is actually not true. It's the pause, after the immersion, that carries most of the weight. The pause. You can spend weeks learning about things, but convergence happens most effectively during a pause, just as muscles don't improve during training but during the rest afterwards. [2]
Well ... it's that, or marihuana. Marihuana (not all types/strains work for that!) is insanely effective for both creativity and also for simply testing how deeply the gathered knowledge converged. [3]
Incidentally, as a fun fact, there are "Creativity Parties", in which groups of people smoke weed exactly for the purpose of creating and dismissing hundreds of ideas not worth thinking further about, in hopes of someone having that one singular grand idea that's going to cause a leap forward, spawned out of an artificially induced convergence of those hundreds of others.
(Yes, we can schedule peak creativity. Regularly. No permanent downsides.)
Anyhow, here's a brutal TLDR:
No, I'm not boring. You are. Evidently so!
Your post literally oozes irony.
-----
[1] https://www.perplexity.ai/search/is-this-correct-for-testing...
[2] https://www.perplexity.ai/search/is-this-correct-for-testing...
If your audience is technically or cognitively literate, your original phrase - "for testing how deeply the gathered knowledge converged" - actually works quite elegantly. It conveys that you’re probing the profundity of the coherence achieved during passive consolidation, which is exactly what you described.
[3] https://www.perplexity.ai/search/is-this-correct-for-testing...
So your correction of the quote isn’t nitpicking - it’s a legitimate refinement of how creativity actually unfolds neurocognitively. The insight moment often follows disengagement, not saturation.
-----
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow, surface-level understanding than to deep contemplation of the problem.
I've actually run into a few blogs that were incredibly shallow while sounding profound.
I think when people use AI to, e.g., compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone with experience of both, are complete nonsense.
AI is group think. Group think makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc? when there were countless others who came before them and after them? Because they happened to emerge at the right time in our mass culture.
Imagine how dynamic the world was before radio, before tv, before movies, before the internet, before AI? I mean imagine a small town theater, musician, comedian, or anything else before we had all homogenized to mass culture? It's hard to know what it was like but I think it's what makes the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.
When apps were expensive to build, developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be artisanal hand-built software engineers, and still fail at making anything appealing.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security, and the areas that I am embarrassed to have overlooked on previous projects.
Bro, I'm a software developer, it's not the fucking AI making me boring.
Also sounds likely that it's the mediocre who gravitate to AI in the first place.
Setting aside the marvelous murk in that use of "you," which parenthetically I would be happy to chat about ad nauseam,
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
These days I can never stop thinking about the xkcd whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or unlikely to be quick... but this is the rare case where, absent black swan externalities ending the game, the line goes up.
Try the formulation "anything about AI is boring."
Whether it is the guy reporting on his last year of agentic coding (did half-baked evals of 25 models that will be off the market in 2 years) or Steve Yegge smoking weed and gaslighting us with "Gas Town" or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model which is so much better than what you're using, but this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes, while all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?
Another clickbait title produced by a human. Most of your premises could easily be countered. Every comment is essentially an example.
Isn't this just flat-out untrue, since bots can pass Turing tests?
I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.
Meh.
Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.
I'm sure some of them will actually hold out. Just like those people still buying vinyl because Spotify is 'not art' or whatever.
Have fun all, meanwhile I built 2 apps this weekend purely for myself. Would've taken me weeks a few years ago.
I was onboard with the author until this paragraph:
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber-ducking, and just having a mirror to help you navigate your thoughts, is ultimately more productive and produces more useful thinking than operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median of ideas already baked into their weights).
They also strawman usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.