I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.
blueblisters
My knee-jerk reaction to this was that it looks like the kind of opportunistic maneuver Sam is known for, and I'm considering canceling my subscriptions and business with OpenAI
But what's the most charitable / objective interpretation of this?
For example - https://x.com/UnderSecretaryF/status/2027594072811098230
Does it suggest that determination of "lawful use" and Dario's concerns fall upon the government, not the AI provider?
Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.
Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.
gabeh
It's only $200 from me for the remainder of the year but you're not getting it anymore OpenAI. Voting with my wallet tonight. Really sad, I've followed OpenAI for years, way before ChatGPT. It's just too hard to true up my values with how they've behaved recently. This sucks. Goodnight everyone.
quantumwannabe
More details on the difference between the OpenAI and Anthropic contracts from one of the Under Secretaries of State:
>The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.
https://x.com/UnderSecretaryF/status/2027566426970530135
> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.
> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
Just uninstalled the app and canceled my subscription. OpenAI can't justify their insane valuation without a user base. Especially when there are capable models elsewhere.
deaux
All OpenAI employees during the board revolt that vouched for sama's return are personally responsible.
Jcampuzano2
I would put bets on the issue being that it was pointed out that Anthropic's models were used to assist the raid in Venezuela, Anthropic then aggressively doubled down on its rules and principles, and the DOD didn't like being called out on that, so it lashed out, hard.
If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.
push0ret
So they agreed to the same red lines that had earlier led to the fallout with Anthropic? Kind of strange.
davidw
We need some kind of group like "tech people with morals". I'm done with these people and their corruption and garbage.
KronisLV
In an imaginary world, this would be a precursor to Anthropic coming to EU in a greater capacity and teaming up with Mistral, eventually leading to similar innovation and progress that DeepSeek forced upon the West, benefitting everyone in the long run. They seem to have the morals for it and the respect for human rights and life given their recent announcement (after some backtracking), unlike OpenAI. Sadly, that's not the real world.
ozgung
Do I understand this correctly:
An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.
So killing people is legal,
Killing people by a random process is legal,
A randomized algorithm deciding on who to kill is legal,
And some of you think you are legally protected because they used the word “domestic”?
tintor
The difference from Anthropic's deal:
- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"
- Anthropic is not ok with use of their AI for autonomous weapons
pbnjay
I had kept my Plus subscription just because I was lazy, and it was inexpensive and convenient… but this turn definitely helped me get off the fence. I am exporting and deleting my data now, and the cancellation is already done.
fiatpandas
>human responsibility for the use of force, including for autonomous weapon systems
So there’s the difference, and an erasure of a red line. OpenAI is fine with autonomous weapon systems. Requiring human responsibility isn’t saying much. There are already military courts, rules of engagement, and international rules of war.
ttrashh
Cancel your subscription. It's the least you can do.
Both are based in Europe but Proton Lumo has the better privacy promises.
Would be interested in experiences of others with those alternatives for question/answering type research (not for coding for which there exist other, better alternatives like Gemini and Claude)
adangert
Let me reiterate some points for people here:
Income and revenue sources always, inevitably, and without fail, determine behavior.
dgxyz
Sam Altman being a complete bell end? Who'd have thought it.
I hope everyone goes and works for Anthropic and OpenAI collapses.
Markets going to be interesting on Monday. This plus a war. Urgh.
pu_pe
So this week we've learned that even the government assesses Anthropic has the better model, and that OpenAI leadership has no concern for safety whatsoever.
operator_nil
So does this mean that OpenAI will give whatever the DoD asks for and they will pinky swear that it won’t be used for mass surveillance and autonomous killing machines?
AbstractH24
It’s amazing how quickly the players keep shifting here.
Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”
Reminds me of that weekend where Sam Altman lost control of OpenAI.
slibhb
I'm unsure how to feel about this whole dust-up. It doesn't seem like much has changed in substance. Maybe OpenAI outmaneuvered Anthropic behind the scenes. Possibly Anthropic was seen as not behaving deferentially enough towards the government. But this administration has proven comically corrupt, so it wouldn't surprise me if money was involved. Will be interested to see what journalists turn up.
vander_elst
Subscribers should be aware what they are supporting. I think that keeping an OpenAI account can be considered an active support of this decision, at least for private people who can easily change providers.
fabbbbb
Anyone having success with exporting data from ChatGPT? Got the export email 11 hours ago but still no download link..
kledru
Sorry, despite sama's public statements of some sort of solidarity with Anthropic, this looks like a plot to take over from a losing position.
Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.
(Guess I need to build everything I intended this year in a weekend.)
I would not be surprised if Sam A. helped engineer this whole situation… “Child’s play,” like replacing a reddit ceo.
taway1874
Well ... bumped up my Claude subscription from Pro to Max and closed out my OpenAI accounts. It's a drop in the ocean but I'll sleep better knowing I did the right thing. Thanks ChatGPT! It was good knowing you.
iainctduncan
Did anyone ever doubt sama would just follow the money?
weasels gonna weasel
mmanfrin
Absolute disgrace of a person and organization.
rich_sasha
Is the Pentagon signing a EULA confirming all their data will now be used, anonymised, for improving the service?
matsemann
From an open non-profit to a war machine in such a short time is baffling.
e40
This is how OpenAI gets bailed out in an AI crash, too big to fail becomes too important to fail.
corford
If you're unhappy with this, an immediate way to signal it is with your wallet. In my case I've just uninstalled chatgpt from my phone, cancelled my subscription and will up my spend with anthropic.
deadbolt
Choosing to go along with calling it the "Department of War" tells you all you need to know.
wannabe_loser
I guess we aren't curing cancer with ai anymore
TeeWEE
If you work at OpenAI, leave now while you can.
jdiaz97
cancelling my openai subscription, they're gonna miss my 20 USD
insane_dreamer
I'm never using an OpenAI model or Codex ever again. Period. Idgaf whether it scores better than Claude on benchmarks or not.
This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.
Does deploying these models in "the classified network" also mean this technology is going to be used to help kill people?
impulser_
For the people who don't understand how they got a deal with the same red lines: it's probably because OpenAI agreed not to question them. The safeguards are there, both parties agree, now fuck off and let us use your model how we see fit.
Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanted reassurance that the model wouldn't be used for the red lines; the military didn't like this and told them they weren't using Anthropic's models unless Anthropic agreed not to question them, and then the back and forth started.
In the end, we will probably have both OpenAI and Anthropic providing AI to the military and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.
bambax
> In all of our interactions, the DoW displayed a deep respect for safety
Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.
throwaway20261
It is quite shocking that almost all AI companies are saying "we are not ok with domestic surveillance" but they'll happily sign up to surveilling the rest of the world population.
So by that measure the US govt can go get some Israeli software to surveil their domestic populace!
Remember that the US administration is supporting Israel on the ethnic-cleansing and genocide of Gaza, using Palantir technology and AI systems that generate kill lists. It's "IBM and the Holocaust" all over again.
levanten
Funny that these are the same people who have been sounding the alarm on the dangers of an AI singularity. Now they cannot wait to put their tools in weapons.
jaybrendansmith
What part of "These people are fascists, and need to be stopped" are people failing to understand?
kneel
Chatgpt has an export function for all of your chats
Use it to save your data, shouldn't be hard to get it working elsewhere
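For anyone taking this route: the ChatGPT export arrives as a zip whose main payload is a conversations.json file. As a rough sketch, you can flatten each conversation into a plain transcript. The field names below (title, mapping, author.role, content.parts, create_time) are assumptions based on the export schema observed at the time of writing; the format is undocumented and may change:

```python
import json

def flatten_conversation(conv):
    """Walk one exported conversation's node "mapping" and return
    (title, [(role, text), ...]) ordered by message timestamp.

    Assumes each conversation dict has "title" and a "mapping" of
    node-id -> node, where a node may carry a "message" with
    author.role, content.parts, and create_time.
    """
    messages = []
    for node in conv.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue  # root/system nodes have no message payload
        parts = msg.get("content", {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            messages.append((msg.get("create_time") or 0,
                             msg["author"]["role"], text))
    messages.sort(key=lambda m: m[0])
    return conv.get("title", "untitled"), [(r, t) for _, r, t in messages]

def dump_export(path="conversations.json"):
    """Print every conversation in the export as a readable transcript."""
    with open(path, encoding="utf-8") as f:
        for conv in json.load(f):
            title, msgs = flatten_conversation(conv)
            print(f"## {title}")
            for role, text in msgs:
                print(f"[{role}] {text}")
```

From there the plain transcripts are easy to re-import or grep anywhere else.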
AmericanOP
Instant uninstall.
darkstarsys
All of this, the news articles, the social media discussion, this very discussion, will be part of the training set for future AIs. What will they learn from this?
kseniamorph
Is there anyone who really understands what’s different about the OpenAI agreement? Or maybe these are just Sam Altman’s public statements that don’t actually reflect the real terms of the deal. I honestly can’t figure it out.
imwideawake
Google, OpenAI, and Anthropic should all have each other's backs when it comes to hard lines like this. Sam can say whatever he wants, but signing this deal on the same day Trump and Hegseth went scorched earth on Anthropic — for standing up for the very values OpenAI claims to hold — is sleazy.
Screw Sam, and screw OpenAI. I've been a customer of theirs since the first month their API opened to developers. Today I cancelled my subscription and deleted my account.
I'd already signed up for Claude Max and had been slow to cancel my OpenAI subscriptions. This finally made the decision easy.
lm28469
> OpenAI CEO Sam Altman shares Anthropic’s concerns when it comes to working with the Pentagon
The same day:
Pssst psst Samy Samy, come here we have money and data psst
> Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
petee
This explains the "Free Codex" offer i just got in my email
elAhmo
All that money and not a single ounce of integrity.
vorticalbox
> prohibitions on domestic mass surveillance
so foreign mass surveillance is all good?
superkuh
I have just canceled all services and deleted my account with OpenAI. They can get money from the current US regime but I will not contribute to their violations of the constitution.
jstummbillig
> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.
Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, it actually is the opposite.
I am fully prepared to believe that they got absolutely nothing else out of it (to date).
m4rtink
So this is indeed how OpenAI survives (a little bit longer ?) - government bailout.
interestpiqued
What a snake
otterley
The stories I’ve been reading say that the DoW’s agreement with OpenAI contains the very same limitations as the agreement with Anthropic did. In other words, they pressured Anthropic to eliminate those restrictions, Anthropic declined, then they made a huge fuss calling them “a radical left, woke company,” put them on the supply-chain risk list, and went with OpenAI even though OpenAI isn’t changing anything either.
The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.
“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”
redml
regardless of your opinion of ai in government, sam could not have picked a worse way for optics to swoop in and make a deal. it just looks incredibly bad.
DebtDeflation
At this point it seems the entire AI Safety/Ethics debate was nothing more than a Marketing campaign to hype up the capabilities of the models - get people to think that if they're potentially dangerous that must mean they're so capable and they need to sign up for a subscription.
owenthejumper
Well, in the end this is great news: it virtually guarantees an Anthropic win in court.
d--b
At this stage, everything OpenAI does is an attempt to keep investors investing.
They’re willing to let their brand go to trash for this government contract.
Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.
But Altman seems so desperate to keep the cash coming he’s ready to do anything.
LarsDu88
China has evacuated its embassies in Iran.
This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.
When this happens, Altman will go from being merely a grifter to having blood on his hands.
mrweasel
Didn't the department of war announce that it would be working with xAI just this past December?
straydusk
I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."
However, if you live in the US and pay a passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:
* Make a negotiation personal
* Emotionally lash out and kill the negotiation
* Complete a worse or similar deal, with a worse or similar party
* Celebrate your worse deal as a better deal
Importantly, you must waste enormous time and resources to secure nothing of substance.
That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.
hnthrowaway0315
Ah, is it the time when Skynet starts to manifest itself...
tibbydudeza
While Dario is not my hero, given some of the outrageous things he sometimes says, he has a firm moral compass and a backbone that aligns with mine, and thus I will support his company and their products in my personal use and my work.
coryodaniel
Don’t just cancel, flood them with CCPA requests.
mkozlows
So there are two possibilities here:
1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.
2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.
Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.
outside1234
Screw OpenAI. Never opening that app again or using one of their models.
dataflow
This seems full of loopholes.
> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?
(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?
(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?
(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?
t0lo
Snakes- as predicted
rvz
Not a surprise here; that letter was a trap for OpenAI employees who filled it out with their names on it. [0]
The ones that did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?
The AI datacenters built for $180B are used for surveillance and control.
weare138
There was an '80s movie about this...
verdverm
If the "safety stack" (guardrails) bit is true, it's the exact opposite of their beef with Anthropic... which is not surprising given who's running the US right now.
I always assumed those folks need a way to look strong with their base for a media moment over equitable application of the policies or law.
arendtio
So now we are waiting for Anthropic to explain to us what Sam agreed to and what they rejected.
On the surface, it looks like both rejected 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.
One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.
robertwt7
How did they agree to the terms that were initially put forward by Anthropic, but with OpenAI? Surely there’s a catch here. Or is it just Sam’s negotiation skills?
drivebyhooting
In my experience ChatGPT is the most sanctimonious of the leading models.
When I need advice for my clandestine operations I always reach for Grok.
tayo42
How do LLMs get used in either surveillance or autonomous weapons? Using written English seems so inefficient.
_zoltan_
to all the naysayers: what did all these people doing AI research expect? That the military wouldn't want to use their stuff? And then when it does, Pikachu face?
I know I'll get downvoted, but come on, this is so very naive.
looksjjhg
So it’s personal basically
camillomiller
Sam Altman is this.
Sam Altman needs to be stopped.
FrustratedMonky
Maybe the problem here is they are negotiating by using social media posts. Where is the team of Anthropic people, and the team of Gov people, that should be in a room somewhere doing this in private?
AmericanOP
Department of War just killed OpenAI's brand
dakolli
They're pretending like they didn't enter into this agreement last January and are completely entrenched in intelligence programs already. They are trying to make it look like they are stepping up in a time of need (time of need for the DoD), in reality they sold their soul to intelligence and the military a year ago.
I posted about this here after Sam made his tweet:
Perhaps Trump's DOD objects specifically to Anthropic models themselves declining to do immoral and illegal things, and not something just stipulated in an ignorable contract. That would give room for Sam to throw some public CYA into a contract, while neutering model safety to their requirements.
_alternator_
So while Sam Altman claims that OAI received promises not to have fully automated killbot-GPT from Hegseth, so did Anthropic(!)—but it contained weasel legal language that allowed the USG to ignore the restrictions at will. (We all know how the current admin reads such language.)
So until we see the contract I think it’s fair to assume that OAI and Anthropic got roughly the same deal, with Anthropic insisting on language that actually limits the government, while OAI licked the boot and is passing it off like filet mignon.
webdevver
TOTAL ALTMAN VICTORY
utopiah
Oh yeah, from the company whose raison d'être was being open and being good.
shocked pikachu face
Come on, by now we all know the only thing Altman (who else is still at OpenAI from the start?) wants is more money and more power; it doesn't really matter how.
jackyli02
SA is a real weasel lol. Acted like he stood behind Anthropic's principles just to announce the deal with DoW a few hours later.
transcriptase
Sam must not be aware of what happened to any business or foreign nation/leader considered outwardly friendly to the first Trump administration when the democrats regained control in 2020.
resters
There will be a scene in some future movie about Trump's authoritarian rise (we are still early in it) that shows Sam signing this agreement. Sam will be played by a character actor meant to symbolize silicon valley opportunism and greed.
What sam and greg don't realize is that the many who succumb to trump's pressure tactics will all be lumped into the same category by history.
Sam and Greg are handing an authoritarian regime that has broken so many laws in the past year a superweapon.
croes
Are OpenAI and ChatGPT not a national security threat for other countries?
khalic
Their models are crappy anyway, the "super intelligence" BS is nowhere to be seen. Just let them die or become a US government asset.
gaigalas
We really need a plan for the scenario in which the US loses the trade war and decides to go homicidal AI on the whole world. Like, help them recover or something.
lefrenchy
This will backfire on Sam someday, he’s just a pawn in the agenda of the Trump admin.
riazrizvi
Refreshing sanity.
romulussilvia
I wonder if this will save OpenAI from the bubble! I am sure I am wrong ;-)
rambojohnson
get fucked OpenAI. cancelled my subscription.
SilverElfin
So basically Greg Brockman of OpenAI, currently the largest MAGA PAC donor, used his bribe to make the government destroy his main competition? I’m absolutely cancelling ChatGPT and will tell everyone I know to cancel as well.
I also absolutely do not trust sleazy Sam Altman when he claims he has the same exact red lines as Anthropic:
> AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
If Hegseth and Trump attack Anthropic and sign a deal with OpenAI under the same restrictions, it means this is them corrupting free markets by picking which companies win. Maybe it’s at the behest of David Sacks, the corrupt AI czar who complained about lawfare throughout the Biden administration but now cheers on far worse lawfare.
So it’s either a government looking to surveil citizens illegally or a government that is deeply corrupt and is using its power to enrich some people above others.
Uptrenda
is there a single promise altman made with this company that he hasn't broken...
apexalpha
"We will not be divided!"
They got divided 12 hours later, lol.
0xfedbee
Honestly not even surprised. What else could you expect from a zionist?
pluc
lol it didn't even take them a whole night.
neuroelectron
Sam Altman is a psychopath and his only talent is lying to people and convincing them of his lies.
saos
Musk 100% right about this guy
ares623
Is this setting the stage for a bailout? Was the whole thing between the three parties smoke and mirrors to justify a bailout down the line? It's conspiracy theory territory but, you know who we're dealing with here.
calvinmorrison
perhaps us mere mortals should petition our lawmakers to ban mass surveillance.
angoragoats
It’s the Department of Defense, and let’s not have the main post be a link to the non-consensual-porn-generating and Nazi-supporting site. Could an admin change the main link to the Fortune article also linked here?
mrcwinn
So nice of him! I am sure he believes they should offer these terms to all competitors.
HN: if you continue to subscribe to OpenAI, if you use it at your startup, you’re no better than the tech bros you often criticize. This is not surprising but beyond shady.
ukblewis
Thank you Sam Altman for being a man with a good sense of ethics and empowering the US Military while it fights evil in Iran and empowering the US government and ignoring the idiotic haters
cwyers
There's a lot of people in this thread that assume that Sam Altman is the one who is being dishonest here, and I kind of understand, but the other two parties who could just as easily be lying are Pete Hegseth and Donald Trump, and of the three of them if you think sama is the _most_ likely to lie I feel like you have not been paying attention.
eoskx
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."
mythz
Sam is just about the least trustworthy person in AI. I don't trust his words at face value, and I consider these weasel words:
> prohibitions on domestic mass surveillance and human responsibility *for the use of force*
skeledrew
> We also will build technical safeguards to ensure our models behave as they should
A bold statement. It would appear they've definitively solved prompt injection and all the other ills that LLMs have been susceptible to. And forgot to tell the world about it.
/s
mrcwinn
Hey dang I know I’m not allowed to say this due to community guidelines, but Sam Altman is a lying sack of shit.
gurumeditations
The “Department of War” does not exist and no one should use their preferred pronouns.
charcircuit
I am glad OpenAI stood up to do what's right and give the American people the ability to choose how AI is used for themselves rather than dictating it from their high horse.
Edit: It looks like the terms are similar in OpenAI's deal in what they prohibit so it isn't clear why they are any better. We should be the ones dictating what is and isn't prohibited. Not Sam. We will have to wait for more news on what is actually different.
Robdel12
Raise your hand if you actually read it or if you read the title and replied? I see a lot of comments that sure seem like they didn’t read it.
> Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
IF this is true, it SHOULD be verifiable. So, we wait? I mean, I am a dummy, but that language doesn’t seem too washy to me. Either it’s a bald-faced lie and OpenAI burns because of it, or it’s true and the Trump admin is going after the “left” AI company. Or whatever. My point is, someone smarter than me/us is going to fact-check Sam’s claim.
nateburke
Plain and simple this is revenge for the Anthropic super bowl ads, which were epic burns against openAI's primary future revenue stream.
For example - https://x.com/UnderSecretaryF/status/2027594072811098230
https://x.com/UnderSecretaryF/status/2027566426970530135
> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
https://x.com/UnderSecretaryF/status/2027594072811098230
If the redlines are the same, how'd this deal get struck?
> ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.
https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...
Just uninstalled the app and canceled my subscription. OpenAI can't justify their insane valuation without a user base, especially when there are capable models elsewhere.
All OpenAI employees during the board revolt that vouched for sama's return are personally responsible.
I would bet the issue is that it was pointed out that Anthropic's models were used to assist the raid in Venezuela, Anthropic then aggressively doubled down on its rules and principles, and the DOD didn't like being called out on that, so it lashed out, hard.
If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.
So they agreed to the same red lines that had earlier led to the fallout with Anthropic? Kind of strange.
We need some kind of group like "tech people with morals". I'm done with these people and their corruption and garbage.
In an imaginary world, this would be a precursor to Anthropic coming to the EU in a greater capacity and teaming up with Mistral, eventually leading to the same kind of innovation and progress that DeepSeek forced upon the West, benefiting everyone in the long run. They seem to have the morals for it, and the respect for human rights and life given their recent announcement (after some backtracking), unlike OpenAI. Sadly, that's not the real world.
Do I understand this correctly:
An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.
So killing people is legal,
Killing people by a random process is legal,
A randomized algorithm deciding on who to kill is legal,
And some of you think you are legally protected because they used the word “domestic”?
Difference from Anthropic's deal is:
- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"
- Anthropic is not ok with use of their AI for autonomous weapons
I had kept my Plus subscription just because I was lazy, and it was inexpensive and convenient… but this turn definitely helped me get off the fence. I am exporting and deleting my data now, and the cancellation is already done.
>human responsibility for the use of force, including for autonomous weapon systems
So there’s the difference, and the erasure of a red line: OpenAI is fine with autonomous weapon systems. Requiring human responsibility isn’t saying much; there are already military courts, rules of engagement, and international laws of war.
Cancel your subscription. It's the least you can do.
I canceled my ChatGPT subscription and switched to Lumo Plus subscription https://lumo.proton.me/about I also considered https://mistral.ai/products/le-chat
Both are based in Europe but Proton Lumo has the better privacy promises.
I'd be interested in others' experiences with those alternatives for question-answering-type research (not for coding, for which there exist other, better alternatives like Gemini and Claude).
Let me reiterate some points for people here:
Income and revenue sources always, inevitably, and without fail, determine behavior.
Sam Altman being a complete bell end? Who'd have thought it.
I hope everyone goes and works for Anthropic and OpenAI collapses.
Markets going to be interesting on Monday. This plus a war. Urgh.
So this week we've learned that even the government assesses Anthropic as having the better model, and that OpenAI leadership has no concern for safety whatsoever.
So does this mean that OpenAI will give whatever the DoD asks for and they will pinky swear that it won’t be used for mass surveillance and autonomous killing machines?
It’s amazing how quickly the players keep shifting here.
Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”
Reminds of that weekend where Sam Altman lost control of OpenAI.
I'm unsure how to feel about this whole dust-up. It doesn't seem like much has changed in substance. Maybe OpenAI outmaneuvered Anthropic behind the scenes. Possibly Anthropic was seen as not behaving deferentially enough towards the government. But this administration has proven comically corrupt, so it wouldn't surprise me if money was involved. Will be interested to see what journalists turn up.
Subscribers should be aware what they are supporting. I think that keeping an OpenAI account can be considered an active support of this decision, at least for private people who can easily change providers.
Anyone having success with exporting data from ChatGPT? Got the export email 11 hours ago but still no download link..
Sorry, despite sama's public statements of some sort of solidarity with Anthropic, this looks like a plot to take over from a losing position.
Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.
(Guess I need to build everything I intended this year in a weekend.)
This is awkward? https://news.ycombinator.com/item?id=47188473
I would not be surprised if Sam A. helped engineer this whole situation… “Child’s play,” like replacing a reddit ceo.
Well ... bumped up my Claude subscription from Pro to Max and closed out my OpenAI accounts. It's a drop in the ocean but I'll sleep better knowing I did the right thing. Thanks ChatGPT! It was good knowing you.
Did anyone ever doubt sama would just follow the money?
weasels gonna weasel
Absolute disgrace of a person and organization.
Is the Pentagon signing a EULA confirming all their data will now be used, anonymised, for improving the service?
From an open non-profit to a war machine in such a short time is baffling.
This is how OpenAI gets bailed out in an AI crash, too big to fail becomes too important to fail.
If you're unhappy with this, an immediate way to signal it is with your wallet. In my case I've just uninstalled chatgpt from my phone, cancelled my subscription and will up my spend with anthropic.
Choosing to go along with calling it the "Department of War" tells you all you need to know.
I guess we aren't curing cancer with ai anymore
If you work at OpenAI, leave now while you can.
cancelling my openai subscription, they're gonna miss my 20 USD
I'm never using an OpenAI model or Codex ever again. Period. Idaf whether it scores better than Claude on benchmarks or not.
This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.
https://www.nytimes.com/2026/02/27/technology/openai-reaches...
Does deploying these models in "the classified network" also mean this technology is going to be used to help kill people?
For the people who don't understand how they got a deal with the same redlines: it's probably because OpenAI agreed not to question them. The safeguards are there, both parties agree, now fuck off and let us use your model how we see fit.
Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanted reassurance that the model wouldn't be used to cross the red lines; the military didn't like this and told them "we aren't using your models unless you agree not to question us," and the back and forth started from there.
In the end, we will probably have both OpenAI and Anthropic providing AI to the military, and that's a good thing. I don't think they will keep the supply-chain-risk designation on Anthropic for more than a week.
> In all of our interactions, the DoW displayed a deep respect for safety
Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.
It is quite shocking that almost all AI companies are saying "we are not ok with domestic surveillance" but will happily sign up to surveilling the rest of the world's population.
So by that measure the US govt can go get some Israeli software to surveil its domestic populace!
Homo sapiens deserve to become extinct.
https://news.ycombinator.com/item?id=47195085
Remember that the US administration is supporting Israel on the ethnic-cleansing and genocide of Gaza, using Palantir technology and AI systems that generate kill lists. It's "IBM and the Holocaust" all over again.
Funny that these are the same people who have been sounding the alarm on the dangers of AI singularity. Now they cannot wait to put their tools in weapons.
What part of "These people are fascists, and need to be stopped" are people failing to understand?
ChatGPT has an export function for all of your chats.
Use it to save your data; it shouldn't be hard to get it working elsewhere.
Instant uninstall.
All of this, the news articles, the social media discussion, this very discussion, will be part of the training set for future AIs. What will they learn from this?
Is there anyone who really understands what’s different about the OpenAI agreement? Or maybe these are just Sam Altman’s public statements that don’t actually reflect the real terms of the deal. I honestly can’t figure it out.
Google, OpenAI, and Anthropic should all have each other's backs when it comes to hard lines like this. Sam can say whatever he wants, but signing this deal on the same day Trump and Hegseth went scorched earth on Anthropic — for standing up for the very values OpenAI claims to hold — is sleazy.
Screw Sam, and screw OpenAI. I've been a customer of theirs since the first month their API opened to developers. Today I cancelled my subscription and deleted my account.
I'd already signed up for Claude Max and had been slow to cancel my OpenAI subscriptions. This finally made the decision easy.
> OpenAI CEO Sam Altman shares Anthropic’s concerns when it comes to working with the Pentagon
The same day:
Pssst psst Samy Samy, come here we have money and data psst
> Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
This explains the "Free Codex" offer I just got in my email.
All that money and not a single ounce of integrity.
> prohibitions on domestic mass surveillance
so foreign mass surveillance is all good?
I have just canceled all services and deleted my account with OpenAI. They can get money from the current US regime but I will not contribute to their violations of the constitution.
> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.
Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, it actually is the opposite.
I am fully prepared to believe that they got absolutely nothing else out of it (to date).
So this is indeed how OpenAI survives (a little bit longer ?) - government bailout.
What a snake
The stories I’ve been reading say that the DoW’s agreement with OpenAI contains the very same limitations as the agreement with Anthropic did. In other words, they pressured Anthropic to eliminate those restrictions, Anthropic declined, then they made a huge fuss calling them “a radical left, woke company,” put them on the supply-chain risk list, then went with OpenAI even though OpenAI isn’t changing anything either.
The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.
https://www.wsj.com/tech/ai/trump-will-end-government-use-of...
“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”
Regardless of your opinion of AI in government, Sam could not have picked a worse time, optics-wise, to swoop in and make a deal. It just looks incredibly bad.
At this point it seems the entire AI Safety/Ethics debate was nothing more than a Marketing campaign to hype up the capabilities of the models - get people to think that if they're potentially dangerous that must mean they're so capable and they need to sign up for a subscription.
Well in the end this is great news - this virtually guarantees Anthropic win in the court.
At this stage, everything OpenAI does is an attempt to keep investors investing.
They’re willing to let their brand go to trash for this government contract.
Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.
But Altman seems so desperate to keep the cash coming he’s ready to do anything.
China has evacuated its embassies in Iran.
This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.
When this happens, Altman will go from being merely a grifter to having blood on his hands.
Didn't the department of war announce that it would be working with xAI just this past December?
I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."
However, if you live in the US and pay a passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:
* Make a negotiation personal
* Emotionally lash out and kill the negotiation
* Complete a worse or similar deal, with a worse or similar party
* Celebrate your worse deal as a better deal
Importantly, you must waste enormous time and resources to secure nothing of substance.
That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.
Ah, is it the time when Skynet starts to manifest itself...
While Dario is not my hero, given some of the outrageous things he sometimes says, he has a firm moral compass and a backbone that align with mine, and thus I will support his company and its products in my personal use and my work.
Don’t just cancel, flood them with CCPA requests.
So there are two possibilities here:
1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.
2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.
Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.
Screw OpenAI. Never opening that app again or using one of their models.
This seems full of loopholes.
> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?
(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?
(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?
(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?
Snakes- as predicted
Not a surprise here, that letter was a trap for OpenAI employees who filled it out with their names on it. [0]
The ones that did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?
[0] https://news.ycombinator.com/item?id=47176170
[1] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
Opportunism without principles at its finest.
The AI datacenters built for $180B are used for surveillance and control.
There was an '80s movie about this...
If the "safety stack" (guardrails) bit is true, it's the exact opposite of their beef with Anthropic... which is not surprising given who's running the US right now.
I always assumed those folks need a way to look strong to their base for a media moment, rather than caring about equitable application of policy or law.
So now we are waiting for Anthropic to explain to us what Sam agreed to and what they rejected.
On the surface, it looks like both deals rule out 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.
One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.
How did they agree to the terms that were initially put forward by Anthropic, but with OpenAI? Surely there's a catch here. Or is it just Sam's negotiation skill?
In my experience ChatGPT is the most sanctimonious of the leading models.
When I need advice for my clandestine operations I always reach for Grok.
How do LLMs get used for either surveillance or autonomous weapons? Using written English seems so inefficient.
To all the naysayers: what did all these people doing AI research expect? That the military wouldn't want to use their stuff? And then when it does, Pikachu face?
I know I'll get downvoted, but come on, this is so very naive.
So it’s personal basically
Sam Altman is this. Sam Altman needs to be stopped.
Maybe the problem here is they are negotiating by using social media posts. Where is the team of Anthropic people, and the team of Gov people, that should be in a room somewhere doing this in private?
Department of War just killed OpenAI's brand
They're pretending like they didn't enter into this agreement last January and are completely entrenched in intelligence programs already. They are trying to make it look like they are stepping up in a time of need (time of need for the DoD), in reality they sold their soul to intelligence and the military a year ago.
I posted about this here after Sam made his tweet:
https://news.ycombinator.com/item?id=47189756
Source: https://defensescoop.com/2025/01/16/openais-gpt-4o-gets-gree...
Perhaps Trump's DOD objects specifically to Anthropic models themselves declining to do immoral and illegal things, and not something just stipulated in an ignorable contract. That would give room for Sam to throw some public CYA into a contract, while neutering model safety to their requirements.
So while Sam Altman claims that OAI received promises from Hegseth not to build fully automated killbot-GPT, Anthropic received the same promises(!), only couched in weasel legal language that allowed the USG to ignore the restrictions at will. (We all know how the current admin reads such language.)
So until we see the contract, I think it’s fair to assume that OAI and Anthropic got roughly the same deal, with Anthropic insisting on language that actually limits the government, while OAI licked the boot and is passing it off as filet mignon.
TOTAL ALTMAN VICTORY
Oh yeah, from the company whose raison d'être was being open and being good.
shocked pikachu face
Come on, by now we all know the only thing Altman (who else is still at OpenAI from the start?) wants is more money and more power; it doesn't really matter how.
SA is a real weasel lol. Acted like he stood behind Anthropic's principles just to announce the deal with DoW a few hours later.
Sam must not be aware of what happened to any business or foreign nation/leader considered outwardly friendly to the first Trump administration when the democrats regained control in 2020.
There will be a scene in some future movie about Trump's authoritarian rise (we are still early in it) that shows Sam signing this agreement. Sam will be played by a character actor meant to symbolize silicon valley opportunism and greed.
What sam and greg don't realize is that the many who succumb to trump's pressure tactics will all be lumped into the same category by history.
Sam and Greg are handing an authoritarian regime that has broken so many laws in the past year a superweapon.
Aren't OpenAI and ChatGPT a national security threat for other countries?
Their models are crappy anyway, the "super intelligence" BS is nowhere to be seen. Just let them die or become a US government asset.
We really need a plan for the scenario in which the US loses the trade war and decides to go homicidal AI on the whole world. Like, help them recover or something.
This will backfire on Sam someday, he’s just a pawn in the agenda of the Trump admin.
Refreshing sanity.
I wonder if this will save OpenAI from the bubble! I am sure I am wrong ;-)
get fucked OpenAI. cancelled my subscription.
So basically Greg Brockman of OpenAI, currently the largest MAGA PAC donor, used his bribe to make the government destroy his main competition? I’m absolutely cancelling ChatGPT and will tell everyone I know to cancel as well.
I also absolutely do not trust sleazy Sam Altman when he claims he has the exact same redlines as Anthropic:
> AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
If Hegseth and Trump attack Anthropic and sign a deal with OpenAI under the same restrictions, it means this is them corrupting free markets by picking which companies win. Maybe it’s at the behest of David Sacks, the corrupt AI czar who complained about lawfare throughout the Biden administration but now cheers on far worse lawfare.
So it’s either a government looking to surveil citizens illegally or a government that is deeply corrupt and is using its power to enrich some people above others.
Is there a single thing left that Altman promised with this company that he hasn't broken...
"We will not be divided!"
They got divided 12 hours later, lol.
Honestly not even surprised. What else could you expect from a zionist?
lol it didn't even take them a whole night.
Sam Altman is a psychopath and his only talent is lying to people and convincing them of his lies.
Musk 100% right about this guy
Is this setting the stage for a bailout? Was the whole thing between the three parties smoke and mirrors to justify a bailout down the line? It's conspiracy theory territory but, you know who we're dealing with here.
perhaps us mere mortals should petition our lawmakers to ban mass surveillance.
It’s the Department of Defense, and let’s not have the main post be a link to the non-consensual-porn-generating and Nazi-supporting site. Could an admin change the main link to the Fortune article also linked here?
So nice of him! I am sure he believes they should offer these terms to all competitors.
HN: if you continue to subscribe to OpenAI, if you use it at your startup, you’re no better than the tech bros you often criticize. This is not surprising but beyond shady.
Thank you Sam Altman for being a man with a good sense of ethics and empowering the US Military while it fights evil in Iran and empowering the US government and ignoring the idiotic haters
There's a lot of people in this thread that assume that Sam Altman is the one who is being dishonest here, and I kind of understand, but the other two parties who could just as easily be lying are Pete Hegseth and Donald Trump, and of the three of them if you think sama is the _most_ likely to lie I feel like you have not been paying attention.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."
Sam is just about the least trustworthy person in AI. I don't trust his words at face value, and I consider these weasel words:
> prohibitions on domestic mass surveillance and human responsibility *for the use of force*
> We also will build technical safeguards to ensure our models behave as they should
A bold statement. It would appear they've definitively solved prompt injection and all the other ills that LLMs have been susceptible to. And forgot to tell the world about it.
/s