Techno-cynics are wounded techno-optimists

36 points | 49 comments | 3 hours ago
guitarlimeo

I liked the original title better: "The Left Doesn't Hate Technology, We Hate Being Exploited". I think that sums up my grievances about AI - amazing technology and certainly a booster to anyone's life, but what is the cost? Why do AI companies get to download, consume and transform all copyrighted works essentially for free (I think there were some lawsuits that resulted in the companies paying), while normal people would have to pay millions if they wanted to access all that data and compensate the original creators? I'm also not so ok with the workforce being displaced, but that's what happens with technological progress. What I am not ok with is it displacing writers while benefiting from their prior work without paying them a cent.

simianwords

> I’ll be generous and say that sure, words like “understanding” and “meaning” have definitions that are generally philosophical, but helpfully, philosophy is an academic discipline that goes all the way back to ancient Greece. There’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.

The strawberry thing has been solved, and LLMs have moved way beyond that, now helping in mathematics and physics. It's easy for the blog author to pick on this, but let's try something different.

It would be a good idea to come up with a question that trips up a modern LLM like GPT with reasoning enabled. I don't think there exists such a question that can fool an LLM but not fool a reasonably smart person. Of course it has to be in text.
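
For reference, the letter count the quoted "strawberry" test expects is trivial to check mechanically; here is a minimal Python sketch of that check (purely illustrative, not from the article or this thread):

    # Count the occurrences of 'r' in "strawberry", as the quoted test expects
    word = "strawberry"
    r_count = word.count("r")   # s-t-r-a-w-b-e-r-r-y -> 3
    print(r_count)              # prints 3
    assert r_count == 3

The interest of the test was never that the count is hard to compute; it's that some models failed it despite the check being this simple.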

hnnp0329

Like others have said, I relate to the title at least. I can look at most technological advances with a very optimistic perspective, BUT as I've aged I've learned that these advancements are often driven by, and increasingly controlled by, people with bad intentions. A quote that resonates with me, which I've seen on social media over the last year or so, is "I don't want AI to make art for me, I want it to do my laundry." It would be a dream for technology to advance to a stage where we all work less and live more fulfilling lives, but when I look at history, the powers that be never let that happen and instead manipulate the technology to keep most of us stuck in the rat race.

hinkley

One of the hardest hitting George Carlin observations:

“Scratch any cynic and you will find a disappointed idealist.”

_gabiru

>I do feel more and more like the Luddites were right

It seems to me that this take will start to resonate with more and more people

cadamsdotcom

dang, the top comments here are about the title being editorialized. Isn't that against HN guidelines?

slfnflctd

The title resonates with me. The post does not.

Cynicism is the mind's way of protecting itself from repeating unproductive loops that can be damaging. Anyone who ever had a waking dream come crashing down more than once likely understands this.

It doesn't necessarily follow that you should wholesale reject entire categories of technology which have already shown multiple net-positive use cases just because some people are using them wastefully or destructively. There will always be someone who does that. The severity of each situation is worth discussing, but I'm not a big fan of the thought-terminating cliché.

PaulHoule

You have to ask the question of "what exactly is Capitalism?"

By putting capital ahead of everything else, capitalism of course gives you technological progress. If we didn't have capitalism we'd still be making crucible steel and the bit would cost more than the horse [1] -- but if you can license the open hearth furnace from Siemens and get a banker to front you the money to buy 1000 tons of firebricks, it's all different: you can afford to make buildings and bridges out of steel.

Similarly, a society with different priorities wouldn't have an arms race between entrepreneurs to spend billions training AI models.

[1] an ancient "sword" often looks like a moderately sized knife to our eyes

BizarroLand

The actual title of the article is "The Left Doesn't Hate Technology, We Hate Being Exploited" and I think anyone can agree with that sentiment regardless of their political leanings.

LLMs are amazing math systems. Give them enough input and they can replicate that input with exponential variations. That in and of itself is amazing.

If they were all trained on public domain material, or if the original authors of that material were compensated for having the corpus of their work tossed into the shredder, then the people who complain about it could easily be described as Luddites afraid of having their livelihood replaced by technology.

But add in the wholesale theft of the content of almost every major, minor, great, and mediocre work of fiction and non-fiction alike - shredded and used as logical papier-mâché to wholesale replace the labor of living human beings for nickels on the dollar - and their complaints become much more valid and substantial in my opinion.

It's not that LLMs are bad. It's that the people running them are committing ethical crimes that have not been formally made illegal. We can't use the justice system to properly punish the people who have literally photocopied the soul of modern media for an enormous quick buck. The frustration and impotence they feel is real and valid, and yet another constant wound for them in a life full of frustrating constant wounds - which in itself is a lesser, but still substantial, part of what we created society to guard the individual against.

It's a small group of ethically amoral people injuring thousands of innocent people and making money from it, mind thieves selling access to their mimeographs of the human soul for $20/month, thank you very much.

If some parallel of this existed in ancient Egypt or Rome, surely the culprits would be cooked alive in a brazen bull or drawn and quartered in the town square, but in the modern era they are given the power and authority and wealth of kings. Can you not see how that might cause misery?

All that being said, if the 20-year outcome of this misery is that everyone ends up in an AGI-assisted beautiful world of happiness and delight, then surely the debt will be paid, but that is at best a 5% likely outcome.

More likely, the tech will crash and burn, or the financial stability of the world that it needs to last for 20 years will crash and burn, or WWIII will break out and in a matter of days we will go from the modern march towards glory to irradiated survivors struggling for daily survival on a dark poisoned planet.

Either way, the manner in which we are allowing LLMs to be fed, trained, and handled is not one that works to the advantage of all humanity.

simianwords

> One thing I do believe in are the words of Karl Marx: from each according to their ability, to each according to their need. The creation of a world where that is possible is not dependent on advanced technology but on human solidarity.

The author doesn't understand Marx but merely parrots leftist talking points. Marx strongly claims that without changes in technology, feudalism would not have given way to capitalism.

kmeisthax

For me, the change from optimism to cynicism happened when I realized the value of tech companies came primarily from being able to find new rules exploits. Not from any of the actual, y'know, technology. Like, sure, Apple invented the iPhone, but Uber found a way to turn your iPhone into a legal weapon aimed directly at your city's local taxi licensing scheme.

That's also why Apple is so worried about their App Store revenue above all else. The legal argument they make is that the 30% take is an IP licensing scheme, but the value of IP is Soviet central planning nonsense. Certainly, if the App Store was just there to take 30% from games, Apple wouldn't be defending it this fiercely[0], and they wouldn't have burned goodwill trying to impose the 30% on Patreon.

Likewise, the value of generative AI is not that the AI is going to give us post-scarcity mental labor or even that AI will augment human productivity. The former isn't happening and the latter is dwarfed by the fact that AI is a rules exploit to access a bunch of copyrighted information that would have otherwise cost lots of money. In that environment, it is unethical to evaluate the technology solely on its own merits. My opinion of your model and your thinly-veiled """research""" efforts will depend heavily on what the model is trained for and on, because that's the only intelligent way to evaluate such a thing.

Did you train on public domain or compensated and consensually provided data? Good for you.

Did you train an art generator on a bunch of artists' deviantART or Dribbble pages? Fuck off, slopmonger.

Did you train on a bunch of Elsevier journals? You know what? Fuck them, they deserve it, now please give me the weights for free.

Humans can smell exploitation a mile away, and the people shitting on AI are doing so because they smell the exploitation.

[0] As a company, Apple has always been mildly hostile to videogames. Like, strictly speaking, operating a videogame platform requires special attention to backwards compatibility that only Microsoft and console vendors have traditionally been willing to offer. The API stability guarantees Apple and Google provide - i.e. "we don't change things for dumb reasons, but when we do change them we expect you to move within X years" - are not acceptable to anything other than perpetually updated live service games. The one-and-done model of most videogames is not economically compatible with the moving target that is Apple platforms.

jauntywundrkind

The title resonates a lot with me as well.

I think this hazard extends up and down too; there's a balance we each strike between how we regard possibility & value and whether we default to looking for problems or denial. This becomes a pattern of perspective people adopt. And I worry so much about how doubt & denial pervade. In our hearts and… well… in the comments, everywhere.

I get it and I respect it; it's true: we need to be aware, alert, and on guard. Everything is very complicated. Hazards and bad patterns abound. But especially as techies, finding possibility is enormously valuable to me. Being willing to believe and amplify the maybe, even when it's a challenging situation. I cherish that so much.

Thank you very much Steve Yegge for the life-changing experience of Notes from the Mystery Machine Bus. Before it, I did not have the framing to understand the base human motivations behind tech & building & the comments. I see the world so much differently for grokking its thesis, and see much more of the outlooks people come from than I did. It has pushed me in life to look for higher possibility & reach, & to avoid closings of the mind, to avoid rejecting, to avoid fear, uncertainty, and doubt. https://gist.github.com/cornchz/3313150

It's one of the most Light Side vs Dark Side noospherically illuminating pieces I've ever read. The article here touches upon those who care, and what they see: it frames the world. Yegge's post, I think, reflects further back at the techie, on what happens to caring thoughtful people - Carlin's arc of idealist -> disappointed -> cynic. And to me Notes was a rallying cry to have fortitude, to keep a certain purity of hope close, and to work against thought-terminating fear, uncertainty, and doubt.

Animats

> I will spare you some misery: you do not have to read this blog. It is fucking stupid as hell, constantly creating ideas to shadowbox with then losing to them.

OK. Closed tab.

simianwords

>Yes it is. It is still exactly as simple as it sounds. If I’m doing math billions of times that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, meaning that yes, they are just fancy pattern-matching machines.

I find this argument even stranger. Every system can be reduced to its parts and thereby made to sound trivial. My brain is still just neurons firing. The world is just made up of atoms. Humans are just made up of cells.
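
To make the "predict the next token" mechanism concrete, here is a minimal sketch of greedy next-token selection over a toy vocabulary (all names and numbers are illustrative, not taken from any real model):

    import numpy as np

    # Toy vocabulary and made-up logits a model might emit for the next position
    vocab = ["the", "cat", "sat", "mat", "."]
    logits = np.array([1.2, 0.3, 2.5, 0.9, -0.5])

    # Softmax turns raw logits into a probability distribution over the vocabulary
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Greedy decoding: emit the single most probable next token
    print(vocab[int(np.argmax(probs))])  # "sat" in this toy example

Whether repeating that step billions of times amounts to "just" pattern matching is exactly the reduction being argued about here.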

>here’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.

This shows that the author is not very curious, because it's easy to take the worst examples from the cheapest models and extrapolate. It's like asking a baby some questions and judging humanity's potential on that basis. What's the point of this?

> The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no.

I'll spill the real tension here for all of you. There are people who really like their comfy jobs and have become attached to their routine. Their status, self-worth, and everything else is attached to it. Anything that disrupts this routine is obviously worth opposing. It's quite easy to see how AI can make a person's life better - I have so many examples. But that's not what "leftists" care about - it's about the security of their jobs.

The rest of the article is pretty low quality and full of errors.
