ronanfarrow

Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.

rupi

Ronan Farrow, the writer of this article, made a comment in this thread that is buried among all the others: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."

I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"

It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!

cmiles8

It seems unlikely OpenAI can survive long term with Sam at the helm. The challenge is that folks already realized that once, and yet here we are.

andrewrn

“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.

arionhardison

Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.

FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.

laylower

Reading this makes me even happier to pay for Anthropic.

Amodei and his sister saw through the behavior and called it out.

" “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals."

jablongo

For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.

At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed because "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...

thrwaway55

We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the question in the title is no.

stavros

I found it very interesting that Altman et al. were worried that AI would become supremely intelligent and that China would make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.

Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.

kmfrk

Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.

Fantastic reporting.

krackers

[1] is also good to read as a follow-up, to compare the personalities.

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...

swingboy

It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I." mentioned in the article.

bkummel

Without having read the article, reacting on the headline: no single person should be allowed to control our future. Democracy is a thing in large parts of the world, and we should try very hard to keep that functioning and even improve it.

vlovich123

> Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Ronan, interesting writing as always. I'm curious whether the role of the media as a pawn of the rich and powerful, used to sway perception and build narratives, concerns you, especially given your personal experiences with this and the reporting you've done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn't become direct or indirect manipulation, and how do you fight against that in your own reporting?

locust101

It's hard to know what the new information is here. Altman's history has been reported on exhaustively.

Quite a few people have left OpenAI over the years, over safety abandonment, the change in non-profit status, deception, etc., but there is too much money involved. Herein lies the actual rub. A lot of people involved and named in the article are reprehensible: the Kushners, the Saudis, the Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.

We really really need a way for our society to be more equitable and hold these people responsible.

ainch

Great piece. And a good excuse to read up on the use of diaeresis in English (eg. coördination, reëlection) to distinguish repeated vowels - I hadn't seen the New Yorker's usage before.

snakeboy

I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).

However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.

nerdyadventurer

Why would anyone trust him at all? Their tech is used to bomb children, and all of these rich folks care only about their own selfish gain.

morleytj

Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.

> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."

This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.

basyt

He doesn't control his own future... ChatGPT implodes in 18 months max, depending on how the Strait of Hormuz play goes...

just_once

Amazing that this article, with an actual comment from Ronan Farrow, is this far down the list while... "Scientists Figured Out How Eels Reproduce (2022)" has 6 times the points.

wk_end

This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?

> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.

throw4847285

A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!

latentframe

It's less about trusting one person and more about the structure. AI is concentrating capital, compute, and talent into a few hands; we've seen this before with railroads, oil, and semiconductors. It brings innovation, and also pricing power and political influence.

ambicapter

I didn't have the mental energy to read the whole thing, but man, the final paragraph is some really good writing. Way to tie it all together.

keepamovin

YC invests in people, not ideas. They have vetted him. They are always right about people. It's probably nothing.

HardwareLust

Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.

bambax

> Altman does not recall the exchange.

Altman SAYS he does not recall the exchange. Not the same thing.

steve_adams_86

> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”

I can't imagine having such uninspired thoughts and actually writing them down while in a role with such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions?", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.

bootload

“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

This statement rings true.

PG has often mentioned that JL is his weapon for testing the "people" integrity aspect of YC and its startups. It's not lost on me that Altman and Thiel, both associated with YC, were useful only in the short term, highlighting how regular "character" evaluations are required at higher levels of responsibility.

pharos92

We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality or the politician, yet the underlying system architecture evades critique.

Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.

6Az4Mj4D

I am in my 40s and am going to be made redundant this June. In the future, only people who can afford tools like Claude and OpenAI, and, most importantly, can create more value with them than others can, will be able to survive. Otherwise, the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.

mvkel

> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.

Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?

It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.

Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.

Maybe it's time for everyone to realize that for an innovation this big to come to fruition, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.

isolay

As for the titular question, Betteridge's law of headlines applies. The answer is: No, we can't trust Sam Altman.

ycui1986

He won't. If anything, OpenAI has been falling behind recently, and the trend won't change easily. It is like Netscape back in the day.

slg

One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.

innocenttop

Why is the story so downranked? Do folks at Hacker News have something to do with it?

ernsheong

I bet Satya Nadella is regretting defending Altman now.

b8

Sam failed upwards.

dmitrygr

The number of "Altman doesn’t remember this" or "Altman denies this" is hilarious

avaer

Who would you trust more: Sam Altman, or a council of 1000 representative AI models?

netcan

My tendency is to believe that the individuals don't matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.

>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."

Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.

Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.

Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.

By the early 2010s, smartphones were reaching places that previously had almost no modern media, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa were powered by social-media propaganda. Worrying political implications in the West. Unhinged-uncle syndrome. Etc. Social media's risks and implications were more than just an "inconvenience."

At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.

So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.

That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But, I don't think it is a way of reducing risk.

Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.

ergocoder

I wonder if Sam might abandon ship soon. Other co-founders already did.

The main reason is that he gets all the downsides without the upsides. I know $5B is a lot but, for a $700B company, it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.

This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.

saeranv

Greg Brockman honestly sounds like a psychopath:

> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?

RagnarD

No.

trakkstar

Girls and boys, this is a prime example of a rhetorical question.

pupppet

Ask Condé Nast if he can be trusted...

https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u

einrealist

I don't trust anyone who claims that today's LLMs are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much sci-fi BS and extrapolation about a technology that is useful if adopted with care.

This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.

CyborgUndefined

Ugh, I don't understand why only Altman scares you. What about Google, China, and the other players?

For me, the answer is that we need to create our own systems: decentralized agent networks, etc.

If you don't want to depend on one person or one company controlling your AI, build your own infrastructure.

The concentration of power in one or two people is the problem.

383toast

if you have to ask if someone can be trusted, they usually can't

the_arun

The main animated picture reminded me of the evil king Ravan from the Ramayan, with his ten heads. Not sure if it was intentionally done that way.

eximius

Fuck no! Of course he can't be trusted. We know that. Nobody questions that. We know that about most of the "elites" running the show.

We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.

People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.

Right now times are only merely very bad.

BrenBarn

Of course not. No one can be trusted to control our future.

sph

Excellent article, truly well researched. As someone close to a pathological liar [1], I find that the idea that such a person could be at the forefront of the creation of an artificial superintelligence confirms all the existential risks of this technology, and how naïve, if not ignorant, the average starry-eyed tech worker and investor is about the whole endeavour. It's easy to believe there is a lot of idealism and wish for a better world, but the greedy drive for money and power underneath is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”

Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.

---

1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.

tines

Two "insure" typos?

brap

He's a grown-ass man tweeting in all lowercase; that's all I needed to know.

I could more or less infer the rest from that.

Rover222

I don’t know, but any time I see an interview of Altman and I look at those eyes, I get creeped out.

game_the0ry

For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.

I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.

Some concepts from the book:

> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.

> Trust your instincts over a person's social role (e.g., doctor, leader, parent)

Check and check.

OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.

almostdeadguy

Seems this got buried from the front page very quickly

shevy-java

I don't trust him. He already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason the US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft I always wonder whether there is a not so hidden subagenda in place.

lenerdenator

If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.

pdonis

Does the article ever actually answer the title question?

panzi

No. Next question.

cedws

Sounds like a snake pit. None of them can be trusted. If we have to rely on companies to self appoint a benevolent ‘AI dictator’ we’re fucked.

The only high profile person in AI I’d consider perhaps worthy of trust is Demis Hassabis.

brandonpollack2

I haven't read it yet. The answer is no.

tw04

I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point have shown he will say literally anything to get what he wants.

Arubis

This is unfair to the original article, which is well researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, however inscrutable that power may be.

KellyCriterion

Nah, it will be Dario instead of Sam, I'd say? :-))

cm2012

I don't see anything bad about Altman in this article that can't be explained by the chaos of growing a billion-dollar company in a few years.

jesterson

Watch Altman's reaction in the Tucker Carlson interview to the question about the (alleged) murder of OpenAI researcher Suchir Balaji.

The overall response, and particularly the body language, says a lot.

jader201

Am I the only one who feels like Claude is clearly winning code generation, and Gemini the general-purpose LLM race?

I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.

Therefore, I feel like “Sam Altman may control our future” is quite a stretch.

hirako2000

tautology

AbuAssar

no

primer42

"Any headline that ends in a question mark can be answered by the word no."

https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

lizhang

I think I'm shadowbanned :(

slibhb

It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?

Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.

jerrygoyal

Could someone please give a TL;DR? This was way too long.

simoncion

Can Sam "The board can fire me, I think that's important." Altman be trusted?

If for no other reason, given what happened when the board fired him... no. I'd say not.

mayhemducks

I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.

jrflowers

I hope somebody just publishes The Ilya Memos. Sounds like a fun read

o0-0o

Hey, Ronan. Did the IPO come up at all in the research or interviews for this article? A yes or no will suffice, and color it if you want. ~_^

zoklet-enjoyer

I believe Annie Altman.

lnenad

This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership as a country, and what's happening with its top companies, is really scary for the rest of us. If this trend continues, we're all definitely going to end up in a kleptocracy.

imagetic

No.

firemelt

obviously not

davidmurdoch

"Good luck, have fun, don't die."

wileydragonfly

No

therobots927

Excellent work. I'll have to wait until we get the print version delivered to finish, as I'm not signed into The New Yorker on my phone.

I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.

y1n0

Betteridge's law of headlines: no

GlibMonkeyDeath

Disclaimer: I have no association with any AI company and have never met Altman or any of the other top AI scientists.

The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.

Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.

What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...

ProAm

Nope, never trust this man. His history proves why you cannot. Pure greed.

Aboutplants

Seeing Sam Altman slowly degrade into the realization that he is, in fact, not as smart as others in this space has been fascinating to watch. He used to speak with enthusiasm and confidence, and now he's like a scared little boy who got in way too deep.

The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long before the truth emerged.

guzfip

> Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”

lol do you think these guys have ever been hit? Let alone in the face. They’d probably be less eager to mouth off as much as they do if so.

andrewstuart

Meh. I’m no particular fan of Altman but there’s nothing in this article particularly surprising or terrible.

The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.

I get the sense that Altman is not a particularly likeable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on the "is this guy a jerk" rating; it's common for tech CEOs.

So, the article and headline are dramatic but not much really there.

I think all the AI safety obsessed people turn out to have been the ones off course.

Cheyana

Harvey Dent…

thm

Hubris.

nickphx

Speak for yourself; he doesn't control my future.

jojobas

The guy is called out for being a sociopath by a multitude of Silicon Valley CEOs, of all people; sure, we can trust him with our future.

seba_dos1

Looks like Betteridge's law of headlines applies here too.

smcg

Rule of Headlines says "no"

josefritzishere

Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word 'no.'"

ambicapter

> The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".

Also, it's very weird how many of these people are so deeply linked that they'll drop everything they're doing just to get this guy back in power. Terrifying cabal.

sumeno

Betteridge strikes again

selimthegrim

Quite frankly, if he went and scrubbed (or had scrubbed) a Facebook thread where I got into an argument with him in 2018 (around the last time someone did an article about him), I can only imagine how obsessive he is about controlling his past and the information about it.

drivingmenuts

Short answer: No. Long answer: Hell, no.

giwook

tl;dr

No, he cannot.

romeroej

Can anybody tho?

FpUser

>"Sam Altman may control our future"

TL;DR, but the heading alone is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts; what fucking trust? We are supposed to be a democratic society (though looking at what is going on around us, this is becoming laughable).

asK1ajsh

The New Yorker is owned by Condé Nast, just as Reddit is. Condé Nast has a deal with OpenAI:

https://www.reuters.com/technology/openai-signs-deal-with-co...

This is a damage-control piece, and you can see that the most stinging comments here get downvoted.

gchokov

He is cooked. It's only a matter of time before the whole thing blows up. Once a scammer, always a scammer.

ahartmetz

Well, no, obviously not. Not one bit.

aduty

LOL, no.

nielsbot

No one person should control our future. Stop there.

killbot5000

No. Why is this a question?

LetsGetTechnicl

No

catigula

1. No.

2. You cannot "control" superintelligent AI.

ekjhgkejhgk

No.

aksss

"could", "may", "might" - these words do so much heavy lifting in "journalism". Almost always it's an invitation to worry and be miserable.

bijowo1676

This article is just another typical New Yorker fluff piece that tries to look deep but misses the actual point.

The biggest flaw is that it spends way too much time on high-school-level drama and "he said, she said" gossip about Sam Altman's personal life instead of focusing on the actual technical and corporate capture of OpenAI.

The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute monopolies are actually forming (MSFT, AMZN, NVDA, and circular debt deals inflating an AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."

Who cares?

The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.

ninjahawk1

OpenAI is like #3 or #4 among the AI companies right now in terms of power, and in last place in the court of public opinion.

I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.

quantified

A bit of a feeling of "so what" here. Maybe he's less trustworthy than some. We have people of X trustworthiness running the government, crypto exchanges, a certain space exploration and satellite company, social media companies, and so on. We know their trustworthiness. Isn't the real issue how to cope?

aryehof

I might expect such a subjective, gossipy exposé of a public official, but this, of a private individual at a commercial company outside the public sector?

rambambram

Any idea how stupid this title sounds!? It's beyond exaggeration.