Thoughts on thinking

547 points · 354 comments · 20 hours ago
abathologist

I think we are going to be seeing a vast partitioning in society in the next months and years.

The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready made, then encoded and decoded from permutations of tokens, or, worse, who have no room to think of reasoning or conceptualization at all -- they will be automated away.

I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).

I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual-transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.

don_neufeld

Completely agree.

From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.

I’m very glad my kid’s school has hardcore banned them. In some classes they only allow students to turn in work that was done in class, under the direct observation of the teacher. There has also been a significant increase in “on paper” work versus work done on a computer.

Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.

Understanding the effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you’re learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.

jebarker

> nothing I make organically can compete with what AI already produces—or soon will.

No LLM can ever express your unique human experience (or even speak from experience), so on that axis of competition you win by default.

Regurgitating facts and the mean opinion on topics is no replacement for the thoughts of a unique human. The idea that you're competing with AI on some absolute scale of the quality of your thought is a sad way to live.

taylorallred

Among the many ways that AI causes me existential angst, you've reminded me of another one. That is, the fact that AI pushes you towards the most average thoughts. It makes sense, given the technology. This scares me because creative thought happens at the very edge. When you get stuck on a problem, like you mentioned, you're on the cusp of something novel that will at the very least grow you as a person. The temptation to use AI could rob you of that novelty in favor of what has already been done.

ay

Very strange. Either the author uses some magic AI, or I am holding it wrong. I have been using LLMs for a couple of years now, as a nice tool.

Beyond that:

I have tried using LLMs to create cartoon pictures. The first impression is “wow”; but after a bunch of pictures you see the evidently repetitive “style”.

Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.

Using LLMs to render music is amazing at first, but after a while you can see the evidently repetitive style - for both rhymes and music.

Using NotebookLM to create podcasts feels amazing at first, as if about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is still out here.

Again, with generated texts: they take on a distinct metallic taste that is hard to ignore after a while.

The search function is okay, but with a little bit of a nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”; I always recheck it, and often run two competing sessions where I influence the LLM into taking opposing viewpoints, then learn from both.

Using AI to generate code: simple things are OK, but for non-trivial items it introduces pretty subtle bugs, which require me to make sure I understand every line. This bit is the most fun - the bug quest is actually entertaining, as these are often the same bugs humans would make.

So, I don’t see the same picture, but something close to the opposite of what the author sees.

Having an easy outlet to bounce quick ideas off, and a source of relatively unbiased feedback, brought me back to the fun of writing; so it's literally the opposite effect compared to the article's author…

Havoc

>in the context of AI, what I’m doing is a waste of time. It’s horrifying. The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI

I don't think the compete part is true. I'll never cook like Gordon Ramsay, but I can still enjoy cooking. My programming will never be kernel-dev level, but I still enjoy it.

The only angle where I have doubts like this is work. Cause there, enjoying it isn't enough... you actually have to be competitive.

paintboard3

I've been finding a lot of fulfillment in using AI to assist with things that are (for now) outside the scope of one-shot AI. For example, when working on projects that require physical assembly or hands-on work, AI feels more like a superpower than a crutch, and it enables me to tackle projects that I wouldn't have touched otherwise. In my case, this applied to physical building, electronics, and multimedia projects that rely on simple code outside my domain of expertise.

The core takeaway for me is that if you have the desire to stretch your scope as wide as possible, you can get things done in a fun way with reduced friction, and still feel like your physical being is what made the project happen. Often this means doing something that is either multidisciplinary or outside of the scope of just being behind a computer screen, which isn't everyone's desire and that's okay, too.

neoden

Homo sapiens had a long period of its history when it was crucial to have a well-developed body. Today we need to perform otherwise useless exercises just to maintain our bodies in somewhat acceptable shape. The same might apply to the intellect as well.

In the coming era of unnecessary intellectual power, we might need to do thinking exercises as something that helps maintain a healthful (and beautiful) mind, though our core values would shift towards something else, something that is regarded as good but not mandatory for personal success today.

bsenftner

I believe the author is awash in a sea they do not understand, and that is the cause of their discomfort. When they describe their ideas being fully realized by LLMs, are they really, or do they just appear so because the words and terms arrive in the manner their prompt leads them to expect?

Performing any type of intellectual, philosophic, or exploratory work with LLMs is extremely subtle, largely because neither you nor they know what you are seeking, and the discovery process with LLMs is not writing prompts and varying them by trial and error in the hope of getting "something else, something better" <- that is pure incomprehension of how they work, and how to work with them.

Very few seem to realize the mirror aspect embodied within LLMs: they will mirror you back, and if you are unaware of this, you may not be getting the replies you really seek. You may be receiving "comfort replies", replies mirroring your metadata (style, nuance) more than the factual logic of your requests, if any factual requests are made.

There is an entire body of work, multiple careers' worth of human effort, waiting to document the new, subtle logical keys to working with LLMs. These are new logical constructs that have never existed before, not even fictionally, not realized as they are now, with all the implications and details bare, in our faces, yet completely misunderstood as people attempt old imperative methods that will not work with this new entity, which has completely different characteristics than anything we have experience of.

A major issue with getting developers to use LLMs effectively is the fact that many developers are weak-to-terrible communicators themselves. LLMs are fantastic communicators that will mirror their audience in an attempt to be better understood, but when that audience is a weak communicator the entire process disintegrates. That, I suspect, is what is happening with the blog post author: an inability to be discriminating enough in their language to parcel out the easy, immediate, sophomore-level replies, and then to arrive at a context within the LLM's capacity that has the integrity they seek; but that requires them to meet it intellectually and linguistically, or the LLM's context is destroyed. So subtle.

nopinsight

Au contraire, reasoning LLMs free me from mundane cognitive tasks, e.g. selecting websites to read from among search results, assessing the level of evidence in papers before reading them, routine coding, and many other repetitive tasks. This allows me to spend mental effort on more abstract and strategic thoughts.

We should seize the opportunity to elevate our mental skills to a higher plane, aided by these “cognitive elves”.

tutanosh

I used to feel the same way about AI, but my perspective has completely changed.

The key is to treat AI as a tool, not as a magic wand that will do everything for you.

Even if AI could handle every task, leaning on it that way would mean surrendering control of your own life—and that’s never healthy.

What works for me is keeping responsibility for the big picture—what I want to achieve and how all the pieces fit together—while using AI for well-defined tasks. That way I stay fully in control, and it’s a lot more fun this way too.

steamrolled

I think the article describes a real problem in that AI discourages thought. So do other things, but what's new about AI is that it removes an incentive to try.

It used to be that if you spent your day doomscrolling instead of writing a blog post, that blog post wouldn't get written and you wouldn't get the riches and fame. But now, you can use AI to write your blog post / email / book. If you don't have an intrinsic motivation to work your brain, it's a lot easier to wing it with AI tools.

At the same time... gosh. I can't help but assume that the author is just depressed and that it has little to do with AI. The post basically says that AI made his life meaningless. But you don't have to use AI tools if they're harming you. And more broadly, life has no meaning beyond what we make of it... unless your life goal is to crank out text faster than an LLM, there's still plenty of stuff to focus on. If you genuinely think you can't possibly write anything new and interesting, then dunno, pick a workshop craft?

curl-up

> The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.

So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others who are unable to create? I find this to be a very unhealthy relationship to creativity.

My mixer can mix dough better than I can, but I still enjoy kneading it by hand. The incredibly good artisanal bakery down the street did not reduce my enjoyment of baking, even though I cannot compete with them in quality by any measure. Modern slip casting can make superior pottery by many different quality measures, but potters enjoy throwing it on a wheel and producing unique pieces.

But if your idea of fun is tied to the "no one else can do this but me", then you've been doing it wrong before AI existed.

iamwil

OP said something similar about writing blog posts when he found himself using Twitter a lot, back in 2013. So whatever he did to cope with tweeting, he can do the same with LLMs, since it seems like he's been writing a lot of blog posts since then.

> I’ve been thinking about this damn essay for about a year, but I haven’t written it because Twitter is so much easier than writing, and I have been enormously tempted to just tweet it, so instead of not writing anything, I’m just going to write about what I would have written if Twitter didn’t destroy my desire to write by making things so easy to share.

and

> But here’s the worst thing about Twitter, and the thing that may have permanently destroyed my mind: I find myself walking down the street, and every fucking thing I think about, I also think, “How could I fit that into a tweet that lots of people would favorite or retweet?”

https://dcurt.is/what-i-would-have-written

baxtr

> Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

Fully agree.

Sorry to say it like that, but I thought the post was a bit "whiny". I really like the thought process of the author; an LLM would never have created a post like that. I think he should not give up.

keiferski

I am somewhat in disbelief as my reaction to AI tools has been pretty much the exact opposite to the author’s. I had thousands of short notes, book ideas, etc. before AI, most of which were scattered in a notepad program.

Since AI, I’ve made genuine, real progress on many of them, mostly by discussing the concepts with ChatGPT, planning outlines, creating research reading lists, considering angles I hadn’t considered, and so on. It’s been an insanely productive process.

Like any technology, AI tools are in some sense a reflection of their users. If you find yourself wanting to offload all thinking to the machine, that’s on you, not the tool.

paulorlando

I've noticed something like this as well. A suggestion is to write/build for no one but yourself. Really no one but yourself.

Some of my best writing came during the time that I didn't try to publicize the content. I didn't even put my name on it. But doing that and staying interested enough to spend the hours to think and write and build takes a strange discipline. Easy for me to say as I don't know that I've had it myself.

Another way to think about it: Does AI turn you into Garry Kasparov (who kept playing chess as AI beat him) or Lee Sedol (who, at least for now, has retired from Go)?

If there's no way through this time, I'll just have to occasionally smooth out the crinkled digital copies of my past thoughts and sigh wistfully. But I don't think it's the end.

drakonka

This rings true for me. I still write a lot on my personal blog and still use writing as a way to process and solidify my learnings, but as I outsource more to LLMs during the process of writing (e.g., helping me find a source, or having it explain a topic to me instead of googling for an answer across multiple channels), I can feel my brain getting more sluggish. I think this is impacting not just my creative thinking and problem solving when learning something, but also how I form those thoughts into language. It's so hard to put a finger on exactly what the concrete factors are, but I can feel the change.

wuj

A good analogy is lifting. We lift to build strength, not because we need that extra strength to lift things in real life; there is plenty of machinery to do that for us. We do it for the sense of accomplishment of hitting our goals when we least expect it, seeing physical changes, and the feeling that we are getting healthier, rather than for the utility benefits. If we perceive lifting as a utility, we realize it's futile and meaningless. Instead, if we see it as a routine with positive externalities sprinkled on top, we feel a lot less pressured to do it.

As kelseyfrog commented already, the key is to focus on the action, not the target. Lifting is not just about hitting a number or getting bigger muscles (though those are great extrinsic motivators); it's more an action that we derive growth from. I have so internalized the act of working out that those targets are baked into the unconscious. I don't overthink when I'm lifting. My unconscious takes the lead, and I just follow. I enjoy seeing the results show up unexpectedly. It lets me grow without feeling the constant pressure of my conscious mind.

The lifting analogy can be applied to writing and other effortful pursuits. We write for the pleasure of reconciling internal conflicts and restoring order to our chaotic minds. Writing is the lifting of the mind. If we do it for comparison, then there's no point in lifting, or writing, or many of the other things we do, given all our technological breakthroughs. Doing what we do is an end in itself, not merely a means to one.

NobodytheHobbit

There is going to be anxiety over losing any toolset. It just sucks that this time it's the human experience.

Sincerely though you have to do it for you. The product will be whatever the widget is but you can't treat your experience that coldly. None of us can. That's why everyone is so bonkers right now.

Our memes, in the memetic sense, are being frustrated, and we in turn are being frustrated. I mean that we are frustrated mechanically, but if you feel a certain way about that, it's probably because it is going to feel especially frustrating when the thing being frustrated is our creative aspirations. I don't know about you, but that's kind of why I get out of bed to do this thing called life.

There is also a certain level of dread that an agent can be spun up to replace anyone. It makes you wonder what need the feckless might have for all this meat when a physical labor force can be replaced by robots, a mental one by AI, and the two combined seamlessly. Existential dread every day on this scale is a chaos, to say the least.

I know this is very dour but that's because this is very dour.

jonplackett

I’m so, so glad that at the end it said “Written entirely by a human”, because all the way through I was thinking: absolutely no way does AI come up with great ideas or insights, and it definitely would not write this article; it holds together and flows too well. (If this turns out to be the joke, I’ll admit we’re all now fucked.)

Having said that, I am very worried about kids growing up with AI and it stunting their critical thinking before it begins - but as of right this moment, AI is extremely subpar at genuinely good ideas or writing.

It’s an amazing and useful tool I use all the time though and would struggle to be without.

mayas_

for better or for worse, gen ai is fundamentally changing how ideas are expressed and shared

afaic it's a net positive. i've always been lazy on writing down/expressing my thoughts and gen ai feels exactly like the missing piece.

i'm able to "vibe write" my ideas into reality. the process is still messy but exciting.

i've never been this excited about the future since my childhood

bradgessler

I keep going back and forth on this feeling, but lately I find myself thinking "F it, I'm going to do what I'm going to do that interests me".

Today I'm working on doing the unthinkable in an AI-world: putting together a video course that teaches developers how to use Phlex components in Rails projects and selling it for a few hundred bucks.

One way of thinking about AI is that it puts so much new information in front of people that they're going to need help from people known to have experience to navigate it all and curate it. Maybe that will become more valuable?

Who knows. That's the worst part at this moment in time—nobody really knows the depths or limits of it all. We'll see breakthroughs in some areas, and others not.

wjholden

I just wrote a paper a few days ago arguing that "manual thinking" is going to become a rare and valuable skill in the future. When you look around, everyone is finding ways to be better using AI, and they're all finding amazing successes – but we're also unsure about the downsides. My hedge is that my advantage in ten years will be that I chose not to do what everyone else did. I might regret it; we will see.

montebicyclelo

My thoughts are that it's key that humans know they will still get credit for their contributions.

E.g. imagine it was the case that you could write a blog post, with some insight, in some niche field – but you know that traffic isn't going to get directed to your site. Instead, an LLM will ingest it, and use the material when people ask about the topic, without giving credit. If you know that will happen, it's not a good incentive to write the post in the first place. You might think, "what's the point".

Related to this topic: computers have been superhuman at chess for two decades, yet good human chess players still get credit, recognition, and, I would guess, satisfaction from achieving the level they get to. Although obviously the LLM situation is on a whole other level.

I guess the main (valid) concern is that LLMs get so good at thought that humans just don't come up with ideas as good as them... And can't execute their ideas as well as them... And then what... (Although that doesn't seem to be the case currently.)

imhoguy

While reading this I got the feeling of reading the last entry in the captain's log of a ghost ship.

Centigonal

> But now, when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea, I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought.

I can't relate to this at all. The reason I write, debate, or think at all is to find out what I believe and discover my voice. Having an LLM write an essay based on one of my thoughts is about as "me" as reading a thinkpiece that's tangentially related to something I care about. I write because I want to get my thoughts out onto the page, in my voice.

I find LLMs useful for a lot of things, but using an LLM to shortcut personal writing is antithetical to what I see as the purpose of personal writing.

socalgal2

I'm clearly not Dustin Curtis. For me, so far, LLMs let me check my assumptions in a way that is far more effective than before, which is to say, before I rarely checked at all. I'd have an opinion on a topic. I'd hold that opinion based on intuition/previous-experience/voodoo. Someone might challenge it, and I'd probably mostly shrug off their challenge. I'm not saying I'd dismiss it; I'd just not really look into it. Now I type something into ChatGPT/Gemini and it gives me back the pros and cons of the positions. It links to studies. Etc. I'm not saying I believe it point-blank, but at least it gives me much more than I got before.

ccppurcell

I'm pretty much an LLM skeptic, but I also think that this sort of sentiment goes back to Plato complaining about writing. I have made light, exploratory use of ChatGPT, mainly exploring its limitations and boilerplate capabilities. I'm unimpressed with its non-trivial code output, to be honest. But it has brought up things I wouldn't have thought of on my own that I can go and do "proper" research into. For an example: I've been playing with binary strings that are fixed by f.r, where r reverses the string and f flips every bit. ChatGPT came up with the word "anti-palindrome" and pointed out a connection to DNA I had no idea about. I read the relevant Wikipedia article and asked a biologist, and now understand DNA a little better. It probably won't amount to anything in this case, but I can imagine it doing so.
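The property is easy to check mechanically. Here is a minimal sketch in Python (my own illustration, with a made-up function name, not anything from the comment):

    # An "anti-palindrome": a bit string fixed by f.r, where r reverses
    # the string and f then flips every bit.
    def is_antipalindrome(bits: str) -> bool:
        flipped_reversed = "".join("1" if b == "0" else "0" for b in reversed(bits))
        return bits == flipped_reversed

    print(is_antipalindrome("0011"))  # True: reverse -> "1100", flip -> "0011"
    print(is_antipalindrome("0010"))  # False: reverse -> "0100", flip -> "1011"

One consequence falls out immediately: no odd-length string qualifies, since the middle bit would have to equal its own flip. The DNA connection is presumably to sequences that equal their own reverse complement.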

SilverSlash

I noticed a similar effect on me in regards to critical thinking when I'm coding. My default response when faced with any coding problem now is to use an LLM.

But it gets even worse. Last year I'd get just an initial solution from an LLM and then reason about it myself. Now even that is too much work, and instead I ask the same question to multiple LLMs and draw consensus from their results, skipping/easing even that second step of thinking.

Energy takes the path of least resistance. Thinking requires energy. So are our brains learning to offload thinking/reasoning to LLMs whenever possible?

abhisek

I think it’s a trade-off between depth and breadth. Thinking is hard and painful, but the insights achieved through deep thinking are IMHO worth it because of the mental models we develop, not just because of how the result compares with LLM-generated content.

No doubt LLMs are excellent at researching, collating, structuring, and summarizing information. In fact, I think o3 Deep Research can probably save a week's worth of survey time.

But in my experience a lot of thinking is still required to do something meaningful with it.

agotterer

I appreciate your thoughts and definitely share some of your sentiments. But don’t underestimate the value and importance of continuing to be a creative.

Something I’ve been thinking about lately is the idea of creative stagnation due to AI. If AI’s creativity relies entirely on training from existing writing, architecture, art, music, movies, etc., then future AI might end up being trained only on derivatives of today’s work. If we stop generating original ideas or developing new styles of art, music, etc., how long before society gets stuck endlessly recycling the same sounds, concepts, and designs?

zmmmmm

I don't agree directly with this, but a variant of it does bother me: will the auto-regressive nature of AI eventually limit the novelty of the ideas humanity can come up with?

So many breakthroughs come from people who work either in ignorance or defiance of existing established ideas. Almost by definition, in fact - to a large extent, everything obvious has already been thought. So to some extent, all the real progress happens in places that violate norms and pre-established logic.

So what's going to happen now if every idea has to run the gauntlet of a supremely intelligent but fully regressive AI? It feels like we could lose a tremendous amount of the potential for original thought from humanity. A good counter argument would be that this has already happened and we're still making progress. I just wonder however if it's a question of degree and that degree matters.

windowshopping

That first paragraph is really sad to me. I can't imagine believing that my own thoughts aren't worthwhile because "an LLM will think up a better version of anything I think." Jesus. I can't say I have ever felt that way for even a second.

iamwil

> But it doesn’t really help me know anything new.

Pursue that, since that's what LLMs haven't been helping you with. LLMs haven't really generated new knowledge, though there are hints of it--they have to be directed. There have been two or three times when I felt the LLM output was really insightful without being directed.

--

At least for now, I find that for the stuff I have a lot of domain expertise in, the LLM's output just isn't quite up to snuff. I do a lot of work trying to get it to generate the right things with the right taste, even using LLMs to generate prompts to feed into other LLMs to write code, and it's just not quite right. Their work just seems...junior.

But for the stuff that I don't really have expertise in, I'm less discerning of the exact output. Even if it is junior, I'm learning from the synthesis of the topic. Since it's usually a means to an end to support the work that I do have expertise in, I don't mind that I didn't do that work.

rollinDyno

I recently finished my PhD studies in social sciences. Even though it did not lead me to career improvements as I initially expected, I am happy I had the opportunity to undertake an academic endeavor before LLMs became cheap and ubiquitous.

I bring up my studies because what the author is talking about strikes me as not having been ambitious enough in his thinking. If you prompt current LLMs with your idea and find the generated arguments and reasoning satisfactory, then you aren't really being rigorous or you're not having big enough ideas.

I say this confidently because my studies showed me not only the methods of finding and contrasting evidence around any given issue, but also how much more there is to learn about the universe. So, if you're being rigorous enough to look at the implications of your theories and to find datapoints that speak to your conclusions, and you still find that your question has been answered, then your idea is too small for the state of knowledge in 2025.

xivzgrev

In life and with people, I think about a car: knowing when to go and when to stop.

some people are all go and no stop. we call them impulsive.

some people may LOOK all go but have wisdom (or luck) behind the scenes putting the brakes on. Example: Tom Cruise does his own stunts, and must have a good sense for how to make it safe enough

What this author touches on is a chief concern with AI. In the name of removing life friction, it removes your brakes. Anything you want to do, just ask AI!

But should you?

I was out the other day, pondering what the word "respect" really means. It's more elusive than simply liking someone. Several times I was tempted to just google it or ask AI, but then how would I develop my own point of view? This feels like the kind of thing it's important to have your own point of view on. And that's what we're losing: the things we should think about in this life, we'll be tempted not to think about anymore. And we'll come out worse for it.

All go, no brakes

agentultra

Have you read a book if I told you what it was about? Knowing what Crime and Punishment is about is different than reading it yourself. There’s no royal road and no shortcut to knowledge.

Read How To Solve It by Polya. The frustrations, dead ends, and trials are all a part of the process. It’s how we convince ourselves of truth and reinforce our understanding. It develops our curiosity and creativity.

Aziell

A year ago, I used to journal almost every day. Writing was how I organized my thoughts, found direction, and occasionally uncovered ideas that even surprised me.

But gradually, I started relying on GPT to help me write. At first, it felt efficient. But over time, I noticed I was thinking less. The more I expressed myself through AI, the more my own desire to express started to fade. Now I’m trying to return to my own thinking process again, but it’s much harder than I expected.

b0ner_t0ner

Reaching for a calculator made us stupid, let's all go back to using an abacus.

nicbou

I run an informative website for a living. If the trend continues, I will lose my job to AI trained on my content.

I get the feeling of pointlessness, but not because AI is making me obsolete. AI still needs me, because it still needs human beings to experience the real world and report on it. It needs to copy someone's homework. It just destroys the economics of doing that homework.

But there is not the faintest chance of AI doing that sort of work itself. It might repeat what it knows, but it can't survey an audience, shake hands with industry experts, empathize with users, feel friction, or knock on doors.

These are still jobs for thinking humans.

malloryerik

I find LLMs fail and fail hard at what might be the most imaginative form of writing: poetry.

Before sending this comment I pecked around the net for examples of gleaming LLM verse.

A few articles claimed human readers preferred AI-brewed poetry to the human stuff. I checked the examples. Clearly most of the people surveyed were underliterate -- the human poems were excellent and the AI poems just creepily bad and simplistic -- so the articles turned into a sad and unwitting testament to the state of our culture.

Maybe if you expertly prompt your way to a highly abstract poem, over several iterations you might land something that has some actual feel to it, but even then that might owe more to your prompting talent than to the LLM's skill. You could do the same with dice and a dictionary. (Is prompting essentially editing?)

Please, show me otherwise. If faced with strong contrary evidence, I will be forced to change my mind.

xwowsersx

Well said, and such an important point.

> Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show.

That line really hits home for me.

dvrp

This is a byproduct of abundance.

In this case, abundance of cognitive ability.

We say that our food sucks. Yet, our elite athletes would crush Hercules or other God-like figures from our mythology. At the same time, we suffer from obesity.

The answer to the paradox comes from abundance. I don’t know why it happens, but I’ve noticed it with food, information retrieval, and now cognitive capacity.

Think about what happened to our capacity to search for information in books. Librarians are masters of organizing chaos and filtering through information, but most of us don’t know a tiny fraction of what they know because we grew up with Google.

My hope is that, just as eating healthy is not as pleasurable as processed sugar but is necessary for a fit life, we will need to go through the process of thinking healthily even though it is not as pleasurable as tinkering with LLM prompts.

This doesn’t mean escapism however. Modern athletes take advantage of the industrial world too, but they’re smart about it. I don’t think thinking will be much different.

neom

For me thinking is kinda like: First draft thought > thinking > 2nd draft thought > thoughtfulness > thought. I suspect a lot of people just need to adjust the frame of their thinking style, LLMs are useful for getting to a more "final thought" quickly, but he says "I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought." - Indeed, to my mind anyway: a complete draft thought. From "when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea" to quality thought typically needs some meaningful time/walking around to gestate into ideas and things worth sharing anyway. LLMs for me are mostly either a) draft thoughts or b) pure work related knowledge transfer - and they work great for those things.

analog31

Something I'm on the fence about, but just trying to figure out from observation, is whether the AI can decide what is worthwhile. It seems like most of the successes of AI that I've seen are cases where someone is tasked with writing something that's not worth reading.

Granted, that happened before AI. The vast majority of text in my in-box, I never read. I developed heuristics for deciding what to ignore. "Stuff that looks like it was probably generated" will probably be a new heuristic. It's subjective for now. One clue is if it seems more literate than the person who wrote it.

Stuff that's written for school falls into that category. It existed for some reason other than being read, such as the hope that the activity of writing conferred some educational benefit. That was a heuristic too -- a rule of thumb for how to teach, that has been broken by AI.

Sure, AI can be used to teach a job skill: writing text that's not worth reading. Who wants to be the one to look the kids in the eye and explain this to them?

On the other hand, I do use Copilot now, where I would have used Stackoverflow in the past.

williamcotton

> The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.

I find immense joy and satisfaction when I write poetry. It's like crafting a puzzle made of words and emotions. While I do enjoy the output, if there is any goal it is to tap into and be absorbed by the process itself.

Meanwhile, code? At least for me, and to say nothing of those who approach the craft differently, it is (almost) nothing but a means to an end! I do enjoy the little projects I work on. Hmm, maybe for me software is about adding another tool to the belt that will help with the ongoing journey. Who knows. It definitely feels very different to outsource coding than to outsource my artistic endeavors.

One thing that I know won't go away are the small pockets of poetry readings, singer-songwriters, and other artistic approaches that are decidedly more personal in both creation and audience. There are engaged audiences for art and there are passive consumers. I don't think this changes much with AI.

smcleod

Early on I wondered if things like this might happen.

But for me what has actually happened is almost the opposite: I seem to be experiencing more of a "tree of thoughts", with the ability to perform rapid experimentation down a given branch, disposing of branches that don't bear fruit.

I feel more liberated to explore creative thoughts than ever. I spend less time on the toil needed both to bootstrap my thought process and to fend off cognitive dissonance when the feeling of sunk cost creeps in after going too deep down the wrong path.

I wonder if it's perhaps just a difference in how people explore and "complete" their thoughts? Or am I kidding myself, actually getting dumber, and just failing to see it?

largbae

This feels off, as if thinking were like chess and the game is now solved and over.

What if you could turn your attention to much bigger things than you ever imagined before? What if you could use this new superpower to think more not less, to find ways to amplify your will and contribute to fields that were previously out of your reach?

ankit219

Familiar scenario. One thing that helps is to use the AI as someone you are having a conversation with.

(Most AIs need to be explicitly told this before you start.) You tell them not to agree with you, to ask more questions instead of providing answers, and to offer justifications and background for why those questions are being asked. This helps you refine your ideas, understand the blind spots, and explore different perspectives. Yes, an LLM can refine the idea for you, especially if something like it has already been explored. It can also be the brainstorming accessory that helps you think harder and come up with new ideas. The key is to be intentional about which way you want it. I once made Claude roleplay as a busy exec who would not be interested in my offering until I had refined it 7 times (and it kept offering reasons why an AI exec would or would not read it).
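For what it's worth, this kind of steering is usually just a system message kept at the top of the conversation. A rough sketch with the OpenAI Python SDK (the persona wording and model choice are my own illustration, not ankit219's actual setup):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A contrarian persona: question rather than agree.
    critic = (
        "Do not agree with me by default. Ask probing questions instead of "
        "giving answers, and explain why you are asking each question."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": critic},
            {"role": "user", "content": "Here is my idea: ..."},
        ],
    )
    print(response.choices[0].message.content)

Resending that system message at the top of the messages list on every turn is what keeps the model from sliding back into agreeable-assistant mode mid-conversation.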

regurgist

You are in a maze of twisty little passages that are not the same. You have to figure out which ones are not. Try this. The world is infinitely complex. AI is very good at dealing with the things it knows and can know. It can write more accurately than I can, spell better. It just takes stuff I do and learns from my mistakes. I'm good with that. But here is something to ask AI:

Name three things you cannot think about because of the language you use?

Or "why do people cook curds when making cheese."

Or how about this:

"Name three things you cannot think about because of the language you use?"

AI is, at least to some extent, an artificial regurgitarian. It can tell you about things that have been thought. Cool. But here is a question for you: are there things that you can think about that have not been thought about before, or that have been thought about incorrectly?

The reason people cook curds is that the goal of cheese making was, in the past, to preserve milk, not to make cheese.

qudat

Man, people are using LLMs much differently than I do. I use them as an answer engine and that’s about where I stop. They’re a great tool for research and quick answers, but I haven’t found them great for much else.

uludag

What happens to conversation in this case? When groups of people are trained to use LLMs as a crutch for thinking, what happens when people get together and need to discuss something. I feel like the averageness of the thinking would get compounded so that the conversation as a whole becomes nothing but a staging ground for a prompt. Would an hour long conversation about the intricacies of a system architecture become a ten minute chatter of what prompts people would want to ask? Does everyone then channel their LLM to the rest of the group? In the end, would the most probable response to which all LLMs being used agree with be the result of the conversation?

tezza

Thoughts all the way down?

klntsky

Pointlessness is a feeling within. People are just rationalizing it conveniently by blaming LLMs (they used to blame other things in the past).

Same with doom anxiety.

Literally just look up some good therapist prompts for ChatGPT.

zzzbra

Reminds me of this:

“There are no shortcuts to knowledge, especially knowledge gained from personal experience. Following conventional wisdom and relying on shortcuts can be worse than knowing nothing at all.” ― Ben Horowitz

tibbar

AI is far better in style than in substance. That is, an AI-written solution will have all the trappings of expertise and deep thought, but frankly is usually at best mediocre. It's sort of like when you hear someone make a very eloquent point and your instinct is to think "wow, that's the final word on the subject, then!" ... and then someone who actually understands the subject points out the flaw in the elegant argument, and it falls down like a house of cards. So, at least for now, don't be fooled! Develop expertise, be the person who really understands stuff.

WillAdams

This is only a problem if one is writing/thinking on things which have already been written about without creating a new/novel approach in one's writing.

An AI is _not_ going to get awarded a PhD, since by definition, such are earned by extending the boundaries of human knowledge:

https://matt.might.net/articles/phd-school-in-pictures/

So rather than accept that an LLM has been trained on whatever it is you wish to write, write something which it will need to be trained on.

luisacoutinho

While I can somewhat relate to this post, I can't help but think this sort of thinking is expected and even part of a cycle. AI isn't the first technology to take over something that humans had to do manually. Like photography: you used to have to paint, or pay someone to paint, your picture. Then all of a sudden you didn't anymore. Painters might've argued that photographs remove the need to think, plan, and decide how best to execute a painting; photo cameras make all of that so easy. Even the output comes out faster.

I don't see anyone lamenting the existence of cameras. No one wants to go back to a reality in which, if you want pictures of you and your loved ones, you need to draw or paint them yourself. Even painters have benefited from the existence of cameras.

AI is, of course, more powerful tech than a camera - but when I find myself getting stuck with thoughts of "what's the point? AI can do it better (or will be able to)" - I like to think about how people have gone through similar "revolutions" before, and while some practices did lose value, not everything was replaced. It helps to be specific, because I'm sure AI cannot replace everything we currently do - we're just in the process of figuring out what that is.

armchairhacker

I feel similar, except not because of AI but the internet. Almost all my knowledge and opinions have already been explained by other people who put in more effort than I could. Anything I'd create, art or computation, high-quality similar things already exist. Even this comment echoes similar writing.

Almost. Similar. I still make things because sometimes what I find online (and what I can generate from AI) isn't "good enough" and I think I can do better. Even when there's something similar that I can reuse, I still make things to develop my skills for further occasions when there isn't.

For example, somebody always needs a slightly different JavaScript front-end or CRM, even though there must be hundreds (thousands? tens-of-thousands?) by now. There are many programming languages, UI libraries, operating systems, etc. and some have no real advantages, but many do and consequently have a small but dedicated user group. As a PhD student, I learn a lot about my field only to make a small contribution*, but chains of small contributions lead to breakthroughs.

The outlook on creative works is even more optimistic, because there will probably never be enough due to desensitization. People watch new movies and listen to new songs not because they're better but because they're different. AI is especially bad at creative writing and artwork, probably because it fundamentally generates "average"; when AI art is good, it's because the human author gave it a creative prompt, and when AI art is really good, it's because the human author manually edited it post-generation. (I also suspect that when AI gets creative, people will become even more creative to compensate, like how I suspect today we have more movies that defy tropes and more video games with unique mechanics; but there's probably a limit, because something can only be so creative before it's random and/or uninteresting.)

Maybe one day AI can automate production-quality software development, PhD-level research, and human-level creativity. But IME today's (at least publicly-facing) models really lack these abilities. I don't worry about when AI is powerful enough to produce high-quality outputs (without specific high-quality prompts), because assuming it doesn't lead to an apocalypse or dystopia, I believe the advantages are so great, the loss of human uniqueness won't matter anymore.

* Described in https://matt.might.net/articles/phd-school-in-pictures/

bachittle

The article mentions that spell and grammar checking AI was used to help form the article. I think there is a spectrum here, with spell and grammar checking on one end, and the fears the article mentions on the other end (AI replacing our necessity to think). If we had a dial to manually adjust what AI works on, this may help solve the problems mentioned here. The issue is that all the AI companies are trying too hard to achieve AGI, and thus making the interfaces general and without controls like this.

rudimentary_phy

AI has been really interesting. I definitely feel its benefits when I want to cram some learning in. On the other hand, it has introduced a new and novel issue for me that feels an awful lot like imposter syndrome. This sounded like the same issue.

Perhaps we will now suffer from AI-mposter syndrome as well. Ain't life wonderful?

apsurd

If you are in the camp of "the ends justify the means", then it makes sense why all this is scary and nihilistic. What's the point of doing anything if the outcome is infinitely better achieved in some other way by some other thing/being/agent?

If the "means justify the ends" then doing anything is its own reason.

And in the _end_, the cards will land where they may. Ends-justify-means is really logical and alluring, until I stop and ask why I am optimizing for an END at all.

lcsiterrate

I wonder if there is anything like cs50.ai for other fields, which acts as a guide by not instantly giving any answer. The way I experiment with getting this kind of experience is custom instructions (most LLMs have these in the settings): I set them to Socratic mode so the LLM will spit out questions to stir ideas in my head.
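For example, a custom instruction along these lines (my own illustrative wording, not the commenter's) tends to produce that behavior:

    Act as a Socratic guide. Never give me the answer directly.
    Respond with questions that probe my assumptions and push me to
    reason through the problem myself; only confirm or correct my
    reasoning after I have committed to an answer.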

prmph

IDK. I cancelled my Claude subscription because, for the highly creative code I am working on, it is useful for maybe 5% of it, and LLM-produced code is still mostly slop.

I'm generally able to detect LLM-written output; in most contexts, I discount it as fluff with little depth.

AI produced paintings are still weird and uncanny.

So I'm utterly unable to identify with the author's sense of futility whenever they want to write or code. I truly believe the output of my skill and creativity is not diminished by the existence of AI.

sippndipp

I understand the depression. I'm a developer (professional) and I make music (ambitious hobby). Both arts are deep in a transformational process.

I'd like to challenge a few things. I rarely have a moment where an LLM provides me a creative spark. It's more that it keeps me from forgetting anything in the mediocre galaxy of thoughts.

See AI as a tool.

A tool that helps you to automate repetitive cognitive work.

deepsun

> instantly get a fully reasoned, researched, and completed thought

That's not my experience, though. I tried several models, but usually get a confident, half-baked hallucination, and tweaking my prompt takes more time than finding the information myself.

My requests are typically programming tho.

ahussain

If anything, with the right tooling, LLMs should improve the quality of our thinking. For example, how much better would your thinking be if you could ask the LLM to reliably figure out all the 2nd- and 3rd-order effects of your ideas, or to identify hidden assumptions?

perplex

I don't think LLMs replace thinking, but rather elevate it. When I use an LLM, I’m still doing the intellectual work, but I’m freed from the mechanics of writing. It’s similar to programming in C instead of assembly: I’m operating at a higher level of abstraction, focusing more on what I want to say than how to say it.

mlboss

What we need are mental gyms. In modern society there is no need for physical labor, but we go to gyms just to keep ourselves healthy.

Similarly, in the future we will not need mental "labor", but to keep ourselves sharp we will need to engage in mental exercises. I am thinking of picking up chess again for just this reason.

thidr0

Are we going to see a small market for artisanal thought emerge?

anoplus

I think the unhappiness we experience from AI is not because of AI, but a symptom of a society that lacks humanism. I currently like to define an AI's level of intelligence as its ability to reduce human suffering. By this definition, if AI fails to keep you and society happy, for whatever reason, then it is stupid. If you or people around you are feeling worthless, anxious, depressed, or starving because of AI, it is stupid! And society needs you to fix it!

mattfrommars

I am envious of folks who get to use AI on a daily basis for productive tasks. My use is very limited: asking it to summarize things, or asking it to explain LC problems.

boznz

Think bigger. As LLMs' capabilities expand, use them to push yourself outside your comfort zone every now and then. I have done some great, fun projects recently that I would never have thought of tackling before.

binary132

Ok, nice article. Now prove an AI didn’t write it.

As a matter of fact, I’m starting to have my doubts about the other people writing glowing, long-winded comments in this discussion.

asim

True. Now compare that to the creation of the universe and feel the insignificance of never being able to match its creation in any form. Try to create the code that creates blades of grass. Good luck creating life. AI, for all it's worth, is a toy; one that shows us our own limitations. But replace the word AI with God and understand that we spend a lot of time ruminating over things that don't matter. It's an opinion; don't kill me over it. But Dustin has reached an interesting point: what's the value of our time and effort, and where should we place it? If the tools we create can do the things we used to do for "work" and "fun", then we need to get back to doing what only humans were made for: being human.

woah

> But now, when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea, I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought. Minimal organic thinking required.

No offense, but I've found that AI outputs very polished but very average work. If I am working on something more original, it is hard to get AI to output reasoning about it without heavy explanation and guidance. And even then, it will "revert to the mean" and stumble back into a rut of familiar concepts after a few prompts. Repeatedly guiding it back onto the original idea quickly uses up context.

If an AI is able to take a sliver of an idea and output something very polished from it, then it probably wasn't that original in the first place.

ryankrage77

> All of my original thoughts feel like early drafts of better, more complete thoughts that simply haven’t yet formed inside an LLM

I would like access to whatever LLM the author is using, because I cannot relate to this at all. Nearly all LLM output I've ever generated has been average, middle-of-the-road predictable slop. Maybe back in the GPT-3 days before all LLMs were RLHF'd to death, they could sometimes come up with novel (to me) ideas, but nowadays often I don't even bother actually sending the prompt I've written, because I have a rough idea of what the output is going to be, and that's enough to hop to the next idea.

satisfice

I want to shake this guy. I want to say “you are choosing to be a fucking idiot because you falsely believe you are a vacuous moron.” But I know it won’t help.

Here’s the problem: he thinks that what LLMs produce are well-reasoned, coherent thoughts. Here’s a healthier alternative: what LLMs produce is shallow and banal text, designed to camouflage its true nature.

Now here’s a heuristic: treat anything written by an LLM about any conceptual matter as wrong by definition (because it was not a product of human experience and insight, and because we are humans). If it LOOKS right, look closer. Look more carefully. Take that insight to the next level.

Second heuristic: anything written by an LLM that you cannot falsify is, by definition, banal. Ho hum. Who cares? Does an LLM have an opinion about how to find happiness? How cute… but not worth believing.

Third heuristic: that which an LLM writes which you can neither falsify nor dismiss as banal, you may assume that the LLM itself does not understand. It’s babbling. But perhaps you can understand it, and take it farther.

Define YOURSELF as that which lies beyond these models, and write from that sensibility.

BrenBarn

Although I think there's some truth to the overall gist here, I don't really agree with the author's point that AI can do so many of these things better. I seriously doubt that an AI would write a blog post as good as this, and I entirely doubt that it could write a blog post better than the best human writers could. For me the depressing part is not so much that AI can do everything better, but that AI is so much better at producing low-quality output that it's becoming increasingly difficult to locate anything better than that baseline in the morass of slop.

cess11

This person should probably read pre-Internet books and discover, or rediscover, that the bar for passable expression in text is very low now compared to what it was.

Most of that 'corpus' isn't even on the Internet so it is wholly unknown to our "AI" masters.

Trasmatta

Agreed. I keep seeing posts where people claim the output is all that really matters (particularly with code), and I think that's missing something deeply fundamental about being human.

martin-t

This resonates with me deeply.

I used to write a lot of open source, but lately I don't see the point. Not because I think LLMs can produce novel code as good as mine, or will be able to in the near future, but because any time I come up with a new solution to something, it will be stolen and used without my permission, without giving me credit or giving users the rights I give them. And it will be mangled just enough that I can't prove anything.

Large corporations were so anal about copyright that people who ever saw Microsoft's code were forbidden from contributing to FOSS alternatives like wine. But only as long as copyright suited them. Now abolishing copyright promises the C-suite even bigger rewards by getting rid of those pesky expensive programmers, if only they could just steal enough code to mix and match it with enough plausible deniability.

And so even though _in principle_ anybody using my AGPL code, or anything that incorporates my AGPL code, has the right to inspect and modify said code, tiny fractions of my AGPL code now have millions or potentially billions of users, but nobody knows and nobody has the right to do anything about it.

And those who benefit the most are those who already have more money than they can spend.

bsimpson

I admittedly don't use AI as much as many others here, but I have noticed that whenever I take the time to write a hopefully-insightful response in a place like Reddit, I immediately get people trying to dunk on me by accusing me of being an AI bot.

It makes me not want to participate in those communities (although to be honest, spending less time commenting online would probably be good for me).

show comments
MarcelOlsz

Have not resonated with an article for a long time until this one.

aweiher

Have you tried writing this article on .. AI? *scnr

brador

What is the value of thought? Why not become a mindless automaton in the AI machine? Your usefulness will ensure your continued survival.

thor_molecules

The apologists be damned. This article nails it. A grand reduction. Not a bicycle; a set of training wheels.

Where is the dignity in all of this?

russellbeattie

AI's profound effect on communication is something I haven't worked out yet. Usually I'm pretty good at extrapolating tech trends out to the near future in broad strokes, but there's a paradox I'm running into: creating long-form content is now easy with the help of AI, but no one is going to read that content, because AI will summarize it for us.

I can't figure out what the end result of this is going to be - society is going to become both more and less verbose at the same time.

MinimalAction

> My thinking systems have atrophied, and I can feel it.

I do understand where the author is coming from. Most of the time, it is easier to read an answer (regardless of whether it is right or wrong, relevant or not) than to think of one. So AI does take that friction of thinking away.

But I am still disappointed by all this doom over AI. I am inclined to throw up my hands and say "just don't use it then". The process of thinking is where the fun lies, not in showing the world that I am better or more often right than so-and-so.

sneak

> This post was written entirely by a human, with no assistance from AI. (Other than spell- and grammar-checking.)

That’s not what “no assistance” means.

I’m not nitpicking, however - I think this is an important point. The very concept of what “done completely by myself” means is shifting.

The LLMs we have today are vastly better than the ones we had before. Soon, they will be even better. The complaint he makes about the missing intellectual journey might be alleviated by using an AI as an intellectual sparring partner.

I have a feeling this post basically just aliases to “they can think and act much faster than we can”. Of course it’s not as good, but 60-80% as good, 100x faster, might be net better.

petesergeant

This really hasn't matched my experience, and maybe I'm just blind to it. I have absolutely not found any LLM on the market able to produce text or organize my ideas well, and I spend all day cajoling LLMs into producing text for a living.

I've found it very useful for proof-reading, and calling me out on blind-spots. I'll tell ChatGPT, Claude, and Anthropic that my drafts were written by a contractor and I need a rating out of ten to figure out how much I should pay him. They come back with ideas. They often wildly disagree. I will absolutely ask it to redraft stuff to give me inspiration, or to take a stab at a paragraph to unblock me, but the work produced is almost always dreck that needs heavy rework to get to a place I'm happy with. I will ask its opinion on if an analogy I have created works, but I've found if I ask it for analogies by itself it rarely comes up with anything useful.

I've found it immensely useful for educating myself. For me, learning needs to be interactive to stick. I learn something by asking many clarifying questions about it: "does that imply that" and "well isn't that the same as" and "why like this instead of like that" until I really get it, and the models are beautiful for that. It doesn't -- to me -- feel like I'm atrophying my thinking skills when I do this, it feels like I am managing to implant new and useful concepts because I can really sink my teeth into them.

In short, I think it's improved my writing by challenging me, and I think it's helped me understand complex topics much more efficiently than I would have done by banging my head against a text book. My thinking skills feel sharper, not weaker, from the exercise.

keernan

> the process of creation
> my original thoughts
> I’d have ideas

As far as I can tell, LLMs are incapable of any of the above.

I'd love to hear from LLM experts how LLMs can ever have original ideas using the current type of algorithms.

spinach

I've had the same experience, but with drawing. What's the point when AI can generate perfect finished pieces in seconds? Why put all that effort into learning to draw something? It has always been hard for me, but it used to feel worth it for the finished piece; now you can bring a piece of computer art into being with a simple word prompt.

I still create, I just use physical materials like clay and such, to make things that AI can't yet replicate.

show comments
BlueTemplar

Using LLMs is a bit like smoking: better to never have started (or at least to quit ASAP).

And perhaps even more fitting now that not doing it comes with short-term career limitations:

Friends: Rachel Gets Peer Pressured Into Smoking (Season 5 Clip) | TBS

https://www.youtube.com/watch?v=nzDJdZLPeGE

(Though an even better parallel is using platforms.)

double0jimb0

99% (or more?) of human written work is filler/fluff. LLMs seem to be doing a great job of reducing this by actually getting to the point.

There is still a limit to the rate at which "points" can be grokked; humans can only read so fast.

What is the problem here?

ThomPete

My way out of this was to start thinking about what the LLMs of the world can't do, and my realization was actually quite simple and quite satisfying.

What LLMs can't replace is network effects. One LLM is good, but 10 LLMs/agents working together, creating shared history, is not replaceable by any single LLM, no matter how smart it becomes.

So it's simple: build something that benefits from network effects and you will quickly find new ideas. At least, it worked for me.
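As a rough illustration of what "agents creating shared history" might look like in code (a minimal sketch only; `call_llm` is a hypothetical placeholder, not any real vendor API):

    # Minimal sketch: several agents read from and append to a common log,
    # so each turn builds on all earlier turns. The accumulated history is
    # the part no single model call can reproduce.
    # call_llm is a hypothetical placeholder, not a real API.

    def call_llm(prompt: str) -> str:
        # Stand-in for an actual chat-model call.
        return f"(response to: {prompt[:40]}...)"

    class Agent:
        def __init__(self, name: str):
            self.name = name

        def step(self, shared_history: list[str], task: str) -> str:
            # Each agent sees everything said so far, not just its own turns.
            context = "\n".join(shared_history[-20:])  # keep prompts bounded
            reply = call_llm(f"{context}\nTask: {task}\nYou are {self.name}.")
            shared_history.append(f"{self.name}: {reply}")
            return reply

    history: list[str] = []
    agents = [Agent(f"agent-{i}") for i in range(10)]
    for agent in agents:
        agent.step(history, "propose one improvement to the plan")

The value lives in the shared log itself: swap any one agent for a smarter model and the history it builds on still carries the network effect.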

So now I am exploring, for example, synthetic prediction markets via https://www.getantelope.com, or

rethinking Myspace, but for agents instead: https://www.firstprinciple.co/misc/AlmostFamous.mp4

AI wants to be social :)

quantadev

There's quite a dichotomy going on in software development. With AI, we can all create much more than we ever could before, make it much better, and do it in much less time; but what we've lost is the sense of pride that comes with the act of creating/coding, because nowadays:

1) If you wrote most of it yourself, then you failed to adequately utilize AI coding agents, and yet...

2) If AI wrote most of it, then there's not exactly that much of a way to take pride in it.

So the new thing we can "take pride in" is our ability to "control" the AI, and it's just not the same thing at all. So we're all going to be "changing jobs" whether we like it or not, because work will never be the same, regardless of whether you're a coder, an artist, a writer, or an ad-agency fluff writer. Then again, pride is a sin, so just GSD and stop thinking about yourself. :)

show comments
mattsears

Peak Hacker News material

show comments
keybored

Some might think that this is new. People who self-identify as makers. It’s not really new to me at all.[1]

- Why play music in front of anyone? People have Spotify. It will take me a ton of effort to learn one song. Meanwhile I will burden others with having to politely listen and give feedback.

- Why keep learning instruments? There will be hundreds who are better than me at them at the “touch of a button”. Recurring joke: Y has inspired millions to pick up X and other millions to give up X.

- Why learn an intellectual field? There are hundreds of experts at the “touch of a button”. It would be better to defer to them. Let’s not “Dunning Kruger” myself.

- Why write? Look at the mountain of writings out there. What can I add to that? Rehashes of rehashes? A twentieth explanation on topic X?

- Why do anything that is not about improving or maintaining myself or my family? I can exercise, YouTube can’t do that for me. But what can I do for other people? Probably nothing, there are so many others out there and it will be impossible to “compete”.

- Why read about politics? See previous point. Experts.

- Why get involved in politics? See previous point. And I hear that democratic participation just ends up being populism.

I have read this sentiment before. And a counter-argument to that thinking. One article. One single article. I don’t find it in any mainstream space. You would probably find it in a certain academic corner.

There is no mainstream intellectual investigation of this that I know of. Because it’s by design. People are supposed to be passive, unfulfilled, narrowly focused (on their work and their immediate self and family) and dependent.

The antidote is a realization. One part is the realization that a rich inner life is possible, and that it is only possible by applying yourself. In writing, for example, because you can write for yourself. Yes, you might say that we are just back to being narrowly focused on yourself and your family. But this realization might just be a start, because you can begin imagining the untapped potential of the inner mind. What if you journaled for a few weeks? What if you just stopped taking in some of the inputs you habitually do? Then you see dormant inner resources coming back: resources that were dormant because you thought that you, and every ability of yours not narrowly tied to your professional job and your duties, were just not good enough to be cultivated.

But I think they are.

Then you realize that existence is not just about doing your job and your duties and, in between, being a passive consumer or lackey, deferring everything else to the cream that has floated to the top. Every able-bodied moment can be imbued with meaningful action and movement, because you have innate abilities that are more than good enough to propel yourself forward, and in ninety-nine point nine percent of cases it is irrelevant that you are not world-class, or even county-class, at any of it.

[1] But I haven't really been bitten by the AI thing to the point of not programming or thinking anymore. I will only let AI do things like write utility functions, and things I don't have the brain capacity for, like parsing options in shell scripts.

Maybe because I don’t feel the need to be maximally productive—I was never productive to begin with.

show comments
hodder

"AI could probably have written this post far more quickly, eloquently, and concisely. It’s horrifying."

I had ChatGPT write that post more eloquently:

May 16, 2025 On Thinking

I’ve been stuck.

Every time I sit down to write a blog post, code a feature, or start a project, I hit the same wall: in the age of AI, it all feels pointless. It’s unsettling. The joy of creation—the spark that once came from building something original—feels dimmed, if not extinguished. Because no matter what I make, AI can already do it better. Or soon will.

What used to feel generative now feels futile. My thoughts seem like rough drafts of ideas that an LLM could polish and complete in seconds. And that’s disorienting.

I used to write constantly. I’d jot down ideas, work them over slowly, sculpting them into something worth sharing. I’d obsess over clarity, structure, and precision. That process didn’t just create content—it created thinking. Because for me, writing has always been how I think. The act itself forced rigor. It refined my ideas, surfaced contradictions, and helped me arrive at something resembling truth. Thinking is compounding. The more you do it, the sharper it gets.

But now, when a thought sparks, I can just toss it into a prompt. And instantly, I’m given a complete, reasoned, eloquent response. No uncertainty. No mental work. No growth.

It feels like I’m thinking—but I’m not. The gears aren’t turning. And over time, I can feel the difference. My intuition feels softer. My internal critic, quieter. My cleverness, duller.

I believed I was using AI in a healthy, productive way—a bicycle for the mind, a tool to accelerate my intellectual progress. But LLMs are deceptive. They simulate the journey, but they skip the most important part. Developing a prompt feels like work. Reading the output feels like progress. But it's not. It’s passive consumption dressed up as insight.

Real thinking is messy. It involves false starts, blind alleys, and internal tension. It requires effort. Without that, you may still reach a conclusion—but it won’t be yours. And without building the path yourself, you lose the cognitive infrastructure needed for real understanding.

Ironically, I now know more than ever. But I feel dumber. AI delivers polished thoughts, neatly packaged and persuasive. But they aren’t forged through struggle. And so, they don’t really belong to me.

AI feels like a superintelligence wired into my brain. But when I look at how I explore ideas now, it doesn’t feel like augmentation. It feels like sedation.

Still, here I am—writing this myself. Thinking it through. And maybe that matters. Maybe it’s the only thing that does.

Even if an AI could have written this faster. Even if it could have said it better. It didn’t.

I did.

And that means something.