This has much broader implications for the US economy and rule of law in the US.
If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?
This marks an important turning point for the US.
kace91
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
jspdown
Domestic mass surveillance might feel tolerable when you live in the country conducting it. But how would you feel about other countries adopting similar policies, and thereby mass-surveilling the American people? Because that's exactly what these policies authorize when applied to the rest of the world.
thimabi
The problem with forcing public policy on companies is that companies are ultimately made up of individuals, and you surely can't force public policy down people's throats.
I'm sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend to the government's wishes… instead of actually fostering innovation in the very competitive AI industry.
5o1ecist
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This is a trap. Two traps, I guess, but let's take the first one:
> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
Banning domestic mass surveillance is irrelevant.
The Eyes agreements allow the respective participating countries to share data with each other. Every country spies on every other country, with every country telling every other country what it has gathered.
This renders laws preventing the state from spying on its own citizens irrelevant; such laws serve only as evidence of mass manipulation.
ArchieScrivener
The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid. Without government spending, employment, and contracts, the USA would have net negative growth.
Now the DoD, which is by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission; either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war fighting capabilities.
Either way, it is beyond time to reform the Military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains given military needs in various countries. (Taiwan and Thailand)
dang
Here's the sequence (so far) in reverse order - did I miss any important threads?
The talk about declaring Anthropic a supply chain security risk (which doesn't just remove it from the DoW but also from all the contractors and suppliers that supply the DoW) was also accompanied by a completely different threat: declaring a national security need to take over the company.
Prediction: in time, OpenAI will be declared such to privatise profits but socialise losses
1vuio0pswjnm7
This appears to be a form to collect the identities of past or present OpenAI and Google employees who share certain political views.
It requires proof of employment, e.g., a company email address or a photo of an employee badge, and it discloses a US-based "cloud computing" vendor where the identities will be stored in the cloud.
After employment verification, it claims, the stored identities will be destroyed upon request. The site operator is apparently anonymous.
One can imagine this list could be useful to multiple parties for multiple purposes
davidw
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
largbae
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
pavel_lishin
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Hope is neat, but are the signatories willing to quit their jobs over this? Kind of a hollow threat if not.
Meekro
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
What, then, is this really about?
davidmurdoch
What is this supposed to do? OpenAI is already cozied up and in bed with Dept of War, they're already busy making lots of little surveillance babies.
culi
Before you leave a comment about how meaningless this is unless they do XYZ,
please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers
XCSme
This reminds me a bit of the Black Mirror episodes with the bees. Where the people whose names tweeted something were actually the targets...
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.
All of this should remain a bridge too far, forever.
EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.
Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.
pciexpgpu
The common people have long viewed tech elites as out of touch. Tech elites claim some sort of moral high ground they like to espouse, but rarely have the goods to show for it.
You are working on ads, slurping up data and trapping people into rage baits and dramas with an economy centered around marketing and influencer types.
I don't think these tech elites should decide arbitrarily by signing some fake elitist pledge.
The USA has a democratic way of resolving these things. It should not be in the hands of a few. The executive branch is a side effect of elections and should hold the line against these tech elites.
I don't agree with the essence of these nonsense pledges either: they are actively undermining the US while living and breathing here thanks to the most advanced military and defense systems on earth.
Why are these tech elites not including things like "we won't slurp up ad data" or "we will not work on dark patterns"? Because it's easy to come up with BS pledges and seem holier-than-thou.
It is a bit infuriating because this resulted in the mess we are in. The income disparity between the tech elites (the entire tech industry) and the rest of the country is so huge that I don't think empty posturing and pledges and moral superiority matters.
I do not want to be associated with these elitist people who, as a group, are extremely educated, talented, and impactful, but only in one very, very tiny piece of the grand scheme of things. That doesn't automatically make you the controller of the entire world's decisions.
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that pointing this out might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
rabbitlord
I am not a fan of Anthropic guys, but this time I stand with it. We all should.
lightyrs
> Have there been any mistakes in signature verification for this letter?
> We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
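The de-duplication failure described in that FAQ answer is a common pitfall: exact-match dedup misses trivially varied resubmissions (case changes, extra spaces, "+tag" email aliases). As a minimal sketch of what normalized dedup looks like, with the normalization rules being my own assumptions rather than the site's actual code:

```python
import unicodedata

def normalize(name: str, email: str) -> tuple[str, str]:
    """Canonicalize a signature so trivially varied resubmissions collide."""
    # Fold case, strip accents, and collapse internal whitespace in the name.
    folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    clean_name = " ".join(folded.lower().split())
    # Lowercase the email and drop any "+tag" suffix in the local part.
    local, _, domain = email.lower().partition("@")
    local = local.partition("+")[0]
    return clean_name, f"{local}@{domain}"

def dedupe(signatures):
    """Keep only the first occurrence of each normalized (name, email) pair."""
    seen, kept = set(), []
    for name, email in signatures:
        key = normalize(name, email)
        if key not in seen:
            seen.add(key)
            kept.append((name, email))
    return kept

sigs = [
    ("Ada Lovelace", "ada@example.com"),
    ("ada  lovelace", "Ada+resub@example.com"),  # same signer, varied form
]
assert len(dedupe(sigs)) == 1
```

Even this kind of normalization only catches mechanical variation, which is presumably why the organizers still rely on manual review as a backstop.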
xphos
This should be flagged as political, like literally everything else that has been flagged. Ironic how, when you're on the menu, you don't follow the same protocols applied to everyone else.
I only say this because this is not new behavior for the administration; it's been reported here on HN in less biased and less political ways, but it ends up suppressed. Just confused about what changed.
Edit, just to be clear: this shouldn't be flagged, and posts dealing with rights in the past shouldn't have been flagged either, because rights should be the preeminent concern of anyone in tech.
codepoet80
Nicely done. Hold this line — there’s got to be one somewhere.
david_shaw
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
txrx0000
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
conductr
You can't seriously build a product that enables things like mass surveillance to proliferate and then try to take the stance of "please don't use it like that". You invented a genie and let it out of the bottle.
GaryBluto
If the DoW/DoD wants Anthropic, they'll get Anthropic, whether we know about it publicly or not. It's not unlikely that they're already working together and just putting on a show for the public.
I'd even go so far as to say that if this is indeed a publicity campaign, it is the most successful one I've seen in years. Many detractors of the existence of LLMs are suddenly leaping to Anthropic's defence.
_aavaa_
Yes, take disparate sets of employees and like, oh idk unionize while you still have power.
fragkakis
I clearly see the point against using AI for mass surveillance and fully autonomous weapons. But for the latter, I don't see a choice. If other countries are willing to allow fully autonomous weapons using their own AI, it's no longer a matter of choice, you have to do it too.
threethirtytwo
It's like watching Darth Vader Senior fight Darth Vader Junior while Luke Skywalker is nowhere in sight.
mitch-flindell
The primary purpose of these products is mass surveillance; why else would they be allowed to be built?
"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."
If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?
groundzeros2015
For all the authoritarian regime talk, here we have a list of many non-citizens willing to argue with the secretary of war of a country they are temporary residents of, with no concern about repercussions.
Dansvidania
I think the time when engineers could steer the heading of the companies they work for is long gone, sadly.
It’s too little too late. Don’t be evil is not a value anyone is even pretending to uphold.
I’d rather someone of these very smart people start to develop countermeasures.
sourcegrift
Cute. I will also sign this, since there are only the upsides of good optics and no downsides. Let me know when any of them resigns after the companies do inevitably take the million-dollar contracts.
celltalk
Wouldn’t it be ironic if US used open source Chinese models for domestic mass surveillance and autonomously killing people without human oversight… democracy at its best.
driverdan
This is a nice gesture but completely meaningless. There is absolutely no commitment in this. "We hope our leaders..." has no conditions, no effects.
If you're an employee and actually believe in this you need to commit to something, like resigning.
abhijitr
The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.
I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
kapluni
Sadly didn’t age well - OpenAI enthusiastically caved
hrtk
More like “you have been divided” — OpenAI
hedayet
Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.
PostOnce
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.
Prisoner's Dilemma in Action!
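The Prisoner's Dilemma framing in the quoted line can be made concrete with a toy payoff matrix. The numbers below are purely illustrative assumptions, not anything from the letter: each lab chooses to "hold" the red line or "cave" to the contract, and caving is individually dominant even though mutual holding beats mutual caving.

```python
# Toy prisoner's dilemma between two AI labs deciding whether to
# "hold" a red line or "cave" for a lucrative contract.
# Payoffs are hypothetical (higher = better for that lab).
PAYOFFS = {
    # (lab_a_choice, lab_b_choice): (lab_a_payoff, lab_b_payoff)
    ("hold", "hold"): (3, 3),   # unified front: modest but shared win
    ("hold", "cave"): (0, 5),   # the lab that caves takes the contract
    ("cave", "hold"): (5, 0),
    ("cave", "cave"): (1, 1),   # both cave: contract split, trust lost
}

def best_response(my_options, their_choice, me_first):
    """Pick the option that maximizes my payoff given the other lab's choice."""
    def my_payoff(mine):
        key = (mine, their_choice) if me_first else (their_choice, mine)
        return PAYOFFS[key][0 if me_first else 1]
    return max(my_options, key=my_payoff)

options = ("hold", "cave")
# Whatever the other lab does, "cave" yields the higher individual payoff...
assert best_response(options, "hold", me_first=True) == "cave"
assert best_response(options, "cave", me_first=True) == "cave"
# ...even though (hold, hold) is better for both labs than (cave, cave).
assert PAYOFFS[("hold", "hold")] > PAYOFFS[("cave", "cave")]
```

This is exactly why the quoted line matters: knowing where the others stand (i.e., coordinating) is what turns a dilemma into a sustainable unified front.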
Quarrel
I know it is a serious topic, but before I clicked on it, I assumed this was going to be about Prime numbers...
Maybe it can get reused after this stuff is over.
tomcam
Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.
What I do know is that since its very inception, Google has been doing massive amounts of business with the war department. What makes this particular contract different? I really am trying to understand why these sentiments are surfacing now.
andytratt
HN should apply its flagging of posts consistently. Either flag the politics or don't flag at all.
MattDaEskimo
This was a brave, heartwarming read. Thank you to the teams
djgrant
The regulatory environment in the US is insane
gunnihinn
The bravery of the people signing this anonymously is inspiring.
mythz
These 2 Exceptions shouldn't have to be disputed.
At this point I'd go so far as to say I wouldn't trust my AI history to any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.
Your AI will know more about you than any other company, not going to be trusting that to anyone who trades ethics for profits.
vander_elst
What's crazy here is that a government is requiring de-regulation while companies are trying to keep stricter rules. What a time.
bcooke
I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI
motbus3
The important thing to know is that no one wants a conflict. Don't be used for that. Don't accept that.
We protect our families when we are home. That's all everybody wants.
khannn
Shades of "He Will Not Divide Us"
snickerbockers
>We are the employees of Google and OpenAI, two of the top AI companies in the world.
Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.
fschuett
Ted Kaczynski was right about technology
poisonborz
So these are the employees who ignore the hundreds of other atrocities their companies commit against other countries, small firms, and individuals, come out flags waving for a few cherry-picked issues, and the next day go back to their well-paid jobs, vested stock, office perks, and lunch chefs to passively support those agendas further, even though they have the best career mobility across almost all industries.
I mean it's neat, but naive at best.
zahlman
Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?
pluc
Hey did someone show this to Sam? I don't think he knows.
siliconc0w
We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.
succo
This is game theory 100%, who's gonna be the bad guy?
himata4113
Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it?
focusgroup0
> domestic mass surveillance and autonomously killing people without human oversight
spoiler alert: this is already happening
do labs in China have a choice in the matter?
guywithahat
> Label the company a "supply chain risk"
Are they not a huge supply chain risk? Anthropic, having played second fiddle to OpenAI for a long time, decided to integrate tightly with the DoW. Now that their consumer products are doing better, they're making decisions for the DoW as a supplier. This isn't about whether I agree with the DoW or not; it's just that that behavior would obviously never fly with any customer.
The only real surprise is I haven't heard of the DoW considering Grok, which is not only a frontier model but has an existing gov cloud platform.
gcanyon
No problem! The DoD^HW will just use DeepSeek!
(I wish this were a joke)
latencyhawk
Well, I think I will get the 200 sub.
bottlepalm
We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR then you already know this.
The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.
spuz
They should be collecting signatures from employees at xAI. I think they're probably most likely to fill the space left by Anthropic.
A unified front from tech companies could have stood a chance, but there's too much money to be made and the imbalance of power is too great without departing the area of influence of the US government entirely (and then go where? China, UK, Australia, etc. are equally not shy of coercing commercial capabilities in pursuit of government goals, including military goals).
jfengel
Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
trinsic2
I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?
bambax
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands...
WTF does that even mean, we "hope"???!? You know they won't, what's the point of hoping? Why not quit if you have the courage, or not quit -- and shut up?
foota
Well that aged poorly.
tonymet
Allowing anonymous signatories only weakens the petition. Two important people signing a petition is worth more than 10000 anons.
I scrolled through a few pages and 40-60% are anonymous. Even a handful weakens the petition.
I wish more people would participate in civics. Attend your city council or local political party meeting. See what it takes to actually collect signatures and run a campaign.
Online slacktivism actually just worsens the cause, because potential energy is vented on futile online "petitions" rather than on taking real action.
mortsnort
Kneecapping the country's best AI lab seems like a bad way to win at the cyber.
yayr
It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...
dvfjsdhgfv
The counterargument by the other side will always be, if we don't do it it doesn't matter because the Chinese will do it anyway - and then, common people will be at a disadvantage.
anonnon
> Signed,
The people who:
> steal any bit of code you put on the internet regardless of the license you use or its terms, then use it to train their models, then turn around and try to sell it to you
> made it so you can't afford new, more powerful computers or smartphones anymore, or perhaps even just replacements for the ones you already have, thanks to massive GPU, DRAM, SSD, and now even HDD shortages
> flood the internet with artificial, superficial content
> aggressively DDoS your website
Real pillars of society.
siva7
At least they're making it easy for HR.
love2read
How is posting on this website with your full name not career suicide?
ipaddr
And people were wondering how OpenAI will find profitability.
tgv
So now they suddenly develop a conscience? Killing education, and by implication actively dumbing down the future world, putting large parts of the labor market at risk, porn fakes, and destroying artistic creation are all acceptable in the name of profit, apparently.
anigbrowl
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
[90 minutes later]
Ah! Well, nevertheless
OK, this is a cheap shot on my part. But still: we hope? What kind of milquetoast martyrdom is this? Nobody gives a shit about tech workers as living, breathing, human moral agents. You (a putative moral actor signed onto this worthy undertaking) might be a person of deep feeling and high principle, and I sincerely admire you for that. But to the world at large, you're an effete button pusher who gets paid mid-six figures to automate society in accordance with billionaires' preferences and your expressions of social piety are about as meaningful as changing the flowers in the window box high up on the side of an ivory tower. The fact that ~80% of the signatories are anonymous only reinforces this perception.
If you want this to be more than a futile gesture followed by structural regret while you actively or passively contribute to whatever technologically-accelerated Bad Things come to pass in the near and medium term, a large proportion of you (> 500/648 current signatories) need to follow through and resign over the weekend. Doing so likely won't have that much direct impact, but it will slow things down a little (for the corporate and governmental bad actors who will find deployment of the new tech a little bit harder) and accelerate opposition a little (market price adjustments of elevated risk, increased debate and public rejection of the militaristic use of AI).
Hope, like other noble feelings, doesn't change anything. Actions, however poorly coordinated and incoherent, change things a little. If your principles are to have meaning, act on them during the short window of attention you have available.
monkaiju
I'm regularly surprised how otherwise intelligent people with "good intentions" keep going to work at these places in the first place, then get all "surprised Pikachu" when it turns out their work might go towards nefarious ends. These technologies are inherently anti-creativity, and researchers have been sounding the alarms about their efficacy for mass surveillance for a long time. Even this petition only seems concerned with "domestic mass surveillance", as if the tools used by an empire abroad don't inevitably get turned inwards.
At some point it's hard not to think they just can't avoid the money. At least for the SWEs, these are folks who could work at much less "evil" businesses and still easily clear $150k or $200k, but they just can't help themselves. This is a company that steals its training data and whose primary product is at best an anti-working-class cudgel that management can use to intimidate workers and threaten them with replacement, and at worst is a mass-surveillance/killing tool.
dmix
Not using Claude only weakens the state. Just don’t oblige
nailer
All that will happen as a result of US companies not willing to work on weapons is that the US will be made more vulnerable to adversaries, particularly the CCP who don’t care about these things.
ozgung
Am I the only one who is really freaking out?
They deploy BOTS to KILL PEOPLE!
This is the only big news here.
This is the only time in this timeline where we must say "you shall not pass". The ultimate red line. And there is no going back. It's just escalation in an arms race from now on. Nothing good can come out of this.
And you are talking about details, if some guys mentioned the word "domestic" in their tweet etc.
BOTS will autonomously KILL PEOPLE!
qup
Hegseth shared a Trump tweet a few hours ago saying they're going to quit doing business with Anthropic.
> permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This sounds way worse than dystopian, Orwellian or big-brotherly, in a world where you can't even get a human to review the 'autonomously placed lock' on your email or social media account. The Terminator saga is perhaps a good fit. But I have a feeling that they won't stop even at that.
mellosouls
"Domestic".
Very disappointing the letter signatories have chosen to reinforce the US-centric idea that using the models to spy on other democracies is fine and dandy.
Altman's and other senior names are notable by their absence; not unexpected given the quickly following apparent submission to the DoW, which leaves the signatories here (while well-intentioned) in exposed ethical positions now.
pluc
They have now deleted/hid all signatures because their corpodaddy went the other way.
This is so great.
bufio
Hacker news?
theahura
OpenAI is nothing without its people
shevy-java
"We are the employees of Google and OpenAI, two of the top AI companies in the world."
Well, good luck to them, but the state can control from top-down via laws, so they WILL eventually abuse people and violate their rights by proxy-force. I would not trust any of them with my data.
nailer
From the HN Guidelines:
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
drcongo
If I was Anthropic, I'd be saving this as a list of potential hires who share the company's values and shortlisting some to call up on Monday morning.
surume
You must follow the law in your home country. Your refusal to do so constitutes Treason. Obey the law.
My personal guess is that Sam Altman said he'd let policy violations go without a complaint and Dario Amodei said he wouldn't.
kittikitti
I respect this and everyone who signed it. Not that I was ever employed by them, I also wouldn't be confident enough to do this, and I wish it were any other way. This is inspiring, thank you.
asmor
This is the line? Really?
Not all the other shit this administration has been doing?
God, I hate it here.
singlewind
The beauty of balance is that someone can say yes and someone can say no. No matter how well you calculate, there is theory behind it.
paganel
Jeff Dean could have done a lot of good and added his name to the list of signatories, seeing as he's head of AI at Google or some such. He was supposed to be this super-smart dude; I guess he's far from that.
Huge props to the Google and OpenAI engineers who did sign this, for those who realized that they're fighting for a greater thing, not just for an extra zero or two added at the end of their bank accounts. Especially as they're taking a great amount of risk by doing it; first of all, imo, they are risking their current employment status.
yoyohello13
I hope Anthropic will survive this. If they don’t it will just be perfect proof that you cannot be both moral and successful in the US.
dluan
oops turns out you will all be divided
paradoxyl
More Far Left treason, documented.
ReptileMan
It is really nice to see employees creating lists for the next round of layoffs themselves.
chkaloon
Too late
csneeky
Claude is better than GPT for much of this atm. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?
blaze998
December 14, 2024
>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.
>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.
>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,” he said.
...
keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.
lazzlazzlazz
The signatories of this site are leaping at a misguided opportunity for moral credit, when the reality is that they're getting whipped into a left-leaning frenzy.
As Undersecretary Jeremy Lewin clarified today[1], these weighty decisions should not be made by activists inside companies, but made by laws and legitimate government.
They always wanted it to be Grok; Grok is the only, what they call, "not woke AI".
uwagar
Isn't the Pentagon just asking for total access to the source code and data silos of Anthropic and OpenAI... that we can't ask for because it's proprietary software?
lovich
You’re kinda already conceding to some of your opponents points when you use legally invalid names like “Department of War”
I appreciate the sentiment but don’t preconcede to your opposition by using their framing.
amelius
Hegseth is discovering the shittiness of the SaaS model.
Samarrrtthh
why
senderista
"We hope our leaders will put aside their differences and stand together"
nullbyte
"He will not divide us!"
alsufinow
W
HardCodedBias
So much insanity.
Anthropic wanted a veto on the use of force by the USG. That is intolerable; no private party can have a veto over the sovereign. It is that simple.
Anthropic should have just walked away (and taken the settlement lumps) when they realized that the USG knew. But no, they started some crazy campaign.
This is so irrational of Anthropic. Purchasing managers across the US (and the world) now have to understand that while Anthropic has the best model on the planet, it is not rational (if you prefer, it is not rational in ways commonly understood). It is a risk and must be treated as such.
moogly
We have international laws and rules of war. We have weapons treaties (well, some of them are expiring). Sure, not everyone is a signatory, or even follows the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes against rulebreakers and antihumanist actions.
So I looked into what they cooked up in 2023, plus which countries signed it (scroll down to a link to the actual text). It's an extraordinarily pathetic text. Insulting even.
I don't love talking politics on this site. Hackernews has done a pretty decent job of staying non-political and I think that's been a positive thing.
AI is re-shaping American society in a lot of ways. And this is happening at a time when the U.S. is more politically divided than it's ever been. People who use LLMs regularly (most SWEs at this point) can understand the danger signs. The bad outcomes are not inevitable. But the conversations around this cannot only be held in internet forums and blog posts.
Hackernews is an echo chamber of early adopters of tech. The discussions had here don't percolate to the general population.
I believe many of us have a duty to make this feel real to the less technical people in our lives. Too many folks have an information filter that is one of Fox News/CNN/MSNBC. Fox is the worst on misinformation. The others are also bad. Their viewers will not hear, in any clear way, how the Trump admin is trying to bully AI companies into doing what it wants. This will be a headline or an article. A footnote not given the attention it deserves.
Plainly: there is an attempt to turn AI into a political weapon aimed at the general population. Misinformation and surveillance are already out of control. If you can, imagine that getting worse.
This feels like one of those hinge moments. If you can, have real-life conversations with people around you. Explain what's at stake and why it matters now, not later.
ineedaj0b
really dumb. you don’t win this
HWNDUS7
Sweet. Looking forward to another CTF season of He Will Not Divide Us.
I love performative acts of wealthy Silicon Valley drags.
verdverm
Use the feedback forms within their platforms to let the companies know your thoughts
fzeroracer
It's rather amusing that this is the proverbial 'red line', not y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies were more proactive about this bullshit in the first place?
That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.
imiric
The levels of irony in this case are staggering.
The employees of these companies are complicit in creating the greatest data harvesting and manipulation machine ever built, whose use cases have yet to be fully realized, yet when the government wants to use it for what governments do best—which was reasonable to expect given the corporate-government symbiosis we've been living in for decades—then it's a step too far?
Give me a fucking break. Stop the performative outrage, and go enjoy the fruits of your labor like the rest of the elites you're destroying the world with.
alfiedotwtf
It would be funny in the end if the only ones left to not say no to Trump were Alibaba
krautburglar
You have 1) stolen everybody's shit and put it behind a paywall, 2) cornered the hardware market in some RICO-worthy offensive that has priced one of the few affordable pastimes for young people out of reach, 3) changed your climate story (lie) on a dime, and started putting horrible power-guzzling data centers on any strip of land within spitting distance of a power plant. I hope you all go out of business, and I hope it happens French Revolution style.
Of course they were going to use it for military purposes you spiritual abortions, and there is nothing your keyboard-soft hands can do about it.
duped
The Department of War doesn't exist, don't meet the fascists on their own terms at any level. They don't debate or operate in good faith.
jackblemming
So big tech wants to court Trump with millions in donations, and now that the big bully they supported is bullying them... we're supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?
verisimi
It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.
However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA's venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably that all that is being complained about, and far worse, has been going on for years.
How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?
Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.
nilespotter
These models are weapons, whether the frontier providers' founders, with their trite and lofty mission statements, like it or not.
Private individuals and private companies do not get to create a defensive weapon with unprecedented power in a new category in the US and not share it with the US military.
You guys are batshit insane.
remarkEon
This whole episode is very bizarre.
Anthropic appears to be situating themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in the DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL from kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:
>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.
So, the "any lawful use" language makes me think that Dario et al have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity that they deem ought be illegal.
It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary that these people are all thinking about here is the PRC, which does not give a single shit about anyone's feelings on whether it's ethical or not to allow a drone system to drop ordnance on its own.
I'm going to copy a comment I made in a related thread:
I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but seeing their marketing tactics historically it just reads more like trying to posture and get good PR for "fighting the system" or whatever.
"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox. Also, once our models improve enough then we'll be sending in The Borg to autonomously target our Enemies™"
I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.
Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".
One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.
nobodywillobsrv
I really am no longer impressed with Anthropic's safety.
Do they have even a basic understanding of the different regimes of safety and what allegiance means to one's own state?
It would be fine to say they are opting out of all forms of protection against adversaries.
But it feels like just more insanely naive tech bro stuff.
As someone outside the tech bro bubble, in fintech in London, can somebody explain this in a way that doesn't indicate these are sort of kids in a playground who think there is no such thing as the wolf?
Again, opting out and specializing in tech that you are going to provide to your enemies AND friends alike is fine. That is a good specialization. But this is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.
politician
I simply do not understand why American tech companies and their employees raise a hue and cry about supporting the military. For those of you who support their position, have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon Next-Day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?
It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military using the DPA if necessary.
hakrgrl
1.5 hours after this was posted, Sam Altman stated OpenAI will work with the DoW:
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network."
mrcwinn
OpenAI employees lol.
You’ve lost utterly and completely. Even if you, as an individual, are a good person.
nemo44x
Correct. You will not be divided. You will likely be subtracted.
kopirgan
We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.
charcircuit
Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country. Private companies imposing such demands on our military should not be respected. Having weapons that can randomly detect a false positive and shut themselves down because they think you are using them wrong is a feature I would never want built in.
I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tools work like this. Every other tool is governed by the legal system which the people of the country have established.
This has much broader implications for the US economy and rule of law in the US.
If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?
This marks an important turning point for the US.
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
Domestic mass surveillance might feel tolerable when you live in the country conducting it. But how would you feel about other countries adopting similar policies, and thereby mass-surveilling the American people? Because that's exactly what these policies authorize when applied to the rest of the world.
The problem with forcing public policy on companies is that companies are ultimately made from individuals, and surely you can’t force public policy down people’s throats.
I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend over to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This is a trap. Two, I guess, but let's take the first one:
Domestic mass surveillance. Domestic.
Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...
Expanding:
> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
Banning domestic mass surveillance is irrelevant.
The eyes-agreements allow them (respective participating countries) to share data with each other. Every country spies on every other country, with every country telling every other country what they have gathered.
This renders laws preventing the state from spying on its own citizens irrelevant. They serve the purpose of being evidence of mass manipulation.
The USA showed itself during Covid to be a command economy that uses 'private enterprise' as a facade of legitimacy. Without government spending, employment, and contracts, the USA would have net negative growth.
Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities.
Either way, it is beyond time to reform the military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains, given military needs in various countries (Taiwan and Thailand).
Here's the sequence (so far) in reverse order - did I miss any important threads?
Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)
I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)
President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)
Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)
The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)
Tech companies shouldn't be bullied into doing surveillance - https://news.ycombinator.com/item?id=47160226 - Feb 2026 (157 comments)
The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)
US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)
Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)
The talk about declaring Anthropic a supply-chain security risk (which doesn't just remove it from the DoW, but also from all the contractors and suppliers that supply the DoW) was also accompanied by a completely different threat: to declare it a national security need to take over the company.
Prediction: in time, OpenAI will be declared such, to privatise profits but socialise losses.
This appears to be a form to collect the identities of past or present OpenAI and Google employees who share certain political views.
It requires proof of employment, e.g., a company email address or a photo of an employee badge, and discloses a US-based "cloud computing" vendor where the identities will be stored in the cloud.
After employment verification, it claims the stored identities will be destroyed upon request. The site operator is apparently anonymous.
One can imagine this list could be useful to multiple parties for multiple purposes.
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Hope is neat, but are the signatories willing to quit their jobs over this? Kind of a hollow threat if not.
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
What, then, is this really about?
What is this supposed to do? OpenAI is already cozied up and in bed with Dept of War, they're already busy making lots of little surveillance babies.
Before you leave a comment about how meaningless this is unless they do XYZ, please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers.
This reminds me a bit of the Black Mirror episodes with the bees. Where the people whose names tweeted something were actually the targets...
https://en.wikipedia.org/wiki/Hated_in_the_Nation
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.
All of this should remain a bridge too far, forever.
EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.
Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.
The common people have long viewed tech elites as out of touch. Tech elites espouse some sort of moral high ground but rarely have the goods to show for it.
You are working on ads, slurping up data and trapping people into rage baits and dramas with an economy centered around marketing and influencer types.
I don't think these tech elites should decide arbitrarily by signing some fake elitist pledge.
The USA has a democratic way of resolving these things. It should not be in the hands of a few. The executive branch is a side effect of elections and should hold the line against these tech elites.
I don't agree with the essence of these nonsense pledges either: they are actively undermining the US while living and breathing here thanks to the most advanced military and defense systems on earth.
Why are these tech elites not including things like "we won't slurp up ad data" or "we will not work on dark patterns" because it's easy to come up with BS pledges and seem like 'we are so holier than thou'.
It is a bit infuriating because this resulted in the mess we are in. The income disparity between the tech elites (the entire tech industry) and the rest of the country is so huge that I don't think empty posturing and pledges and moral superiority matters.
I do not want to be associated with these elitist people who as a group are extremely educated, talented, impactful - but in one very very tiny piece in the grand scheme of things. Doesn't automatically make you the controller of the entire world's decisions.
Yeah, I guess OpenAI is so upset with the Department of War that they signed a deal with it! Hypocrisy all around. https://x.com/grok/status/2027769947913425390?s=20
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
I am not a fan of Anthropic guys, but this time I stand with it. We all should.
» Have there been any mistakes in signature verification for this letter?
» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
This should be flagged political like literally everything else that has been flagged. Ironic how, when you're on the menu, you don't follow the same protocols applied to everyone else.
I only say this because this is not new behavior for the administration; it's been reported here on HN in less biased and less political ways but ends up suppressed. Just confused about what changed.
Edit, just to be clear: this shouldn't be flagged, and posts that dealt with rights in the past shouldn't have been flagged either, because rights should be the preeminent concern of anyone in tech.
Nicely done. Hold this line — there’s got to be one somewhere.
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.
If the DoW/DoD wants Anthropic, they'll get Anthropic, whether we know about it publicly or not. It's not unlikely that they're already working together and just putting on a show for the public.
I'd even go so far as to say that if this is indeed a publicity campaign, it is the most successful one I've seen in years. Many detractors of the existence of LLMs are suddenly leaping to Anthropic's defence.
Yes, take disparate sets of employees and like, oh idk unionize while you still have power.
I clearly see the point against using AI for mass surveillance and fully autonomous weapons. But for the latter, I don't see a choice. If other countries are willing to allow fully autonomous weapons using their own AI, it's no longer a matter of choice, you have to do it too.
It's like watching Darth Vader Senior fight Darth Vader Junior and luke skywalker is nowhere in sight.
The primary purpose of these products is mass surveillance. Why else would they be allowed to be built?
This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."
If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?
For all the authoritarian regime talk, here we have a list of many non-citizens willing to argue with the secretary of war of a country they are temporary residents of, with no concern of repercussions.
I think the time when engineers could steer the heading of the companies they work for is long gone, sadly.
It’s too little, too late. "Don't be evil" is not a value anyone is even pretending to uphold.
I’d rather some of these very smart people start to develop countermeasures.
Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies inevitably take the million dollar contracts.
Wouldn’t it be ironic if the US used open source Chinese models for domestic mass surveillance and autonomously killing people without human oversight… democracy at its best.
This is a nice gesture but completely meaningless. There is absolutely no commitment in this. "We hope our leaders.." has no conditions, no effects.
If you're an employee and actually believe in this you need to commit to something, like resigning.
The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.
I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
Sadly didn’t age well - OpenAI enthusiastically caved
More like “you have been divided” — OpenAI
Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
How come this is signed by OpenAI engineers while OpenAI participates in it with DoW? https://x.com/sama/status/2027578652477821175
> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.
Prisoner's Dilemma in Action!
I know it is a serious topic, but before I clicked on it, I assumed this was going to be about Prime numbers...
Maybe it can get reused after this stuff is over.
Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.
As far as I have known, Google has been doing massive amounts of business with the war department since its very inception. What makes this particular contract different? I really am trying to understand why these sentiments are surfacing now.
HN should apply their flagging of posts consistently. either flag the politics or not at all.
This was a brave, heartwarming read. Thank you to the teams
The regulatory environment in the US is insane
The bravery of the people signing this anonymously is inspiring.
These 2 Exceptions shouldn't have to be disputed.
At this point I'd go so far as to say I wouldn't trust any company with my AI history that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.
Your AI will know more about you than any other company, not going to be trusting that to anyone who trades ethics for profits.
What's crazy here is that the government is requiring de-regulation while companies are trying to keep stricter rules. What a time.
I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI
The important thing to know is that no one wants a conflict. Don't be used for that. Don't accept that.
We protect our families when we are home. That's all everybody wants.
Shades of "He Will Not Divide Us"
>We are the employees of Google and OpenAI, two of the top AI companies in the world.
Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.
Ted Kaczynski was right about technology
So these are the employees that ignore the hundreds of other atrocities their companies do against other countries, small firms, individuals, come out flags waving for some cherry-picked issues, and next day go back to their well paid jobs, vested stocks and office perks and lunch chefs to passively support these agendas further, even if they have the best career mobility across almost all industries.
I mean it's neat, but naive at best.
Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?
Hey did someone show this to Sam? I don't think he knows.
We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.
This is game theory 100%, who's gonna be the bad guy?
Does this mean there is a non zero chance we will get some kind of grok+chinese model mix that's used across the entire US military? Ironic isn't it.
> domestic mass surveillance and autonomously killing people without human oversight
spoiler alert: this is already happening
do labs in China have a choice in the matter?
> Label the company a "supply chain risk"
Are they not a huge supply chain risk? Anthropic, after playing second fiddle to OpenAI for a long time, decided to integrate tightly with the DoW. Now that their consumer products are doing better, they're dictating terms to the DoW as a supplier. This isn't about whether I agree with the DoW or not; it's just that this behavior obviously would never fly with any customer.
The only real surprise is I haven't heard of the DoW considering Grok, which is not only a frontier model but has an existing gov cloud platform.
No problem! The DoD^HW will just use DeepSeek!
(I wish this were a joke)
Well, I think I will get the $200 sub.
We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR then you already know this.
The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.
They should be collecting signatures from employees at xAI. I think they're probably most likely to fill the space left by Anthropic.
Apparently, OpenAI already folded.
https://www.cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-...
A unified front from tech companies could have stood a chance, but there's too much money to be made and the imbalance of power is too great without departing the area of influence of the US government entirely (and then go where? China, UK, Australia, etc. are equally not shy of coercing commercial capabilities in pursuit of government goals, including military goals).
Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands...
WTF does that even mean, we "hope"???!? You know they won't, what's the point of hoping? Why not quit if you have the courage, or not quit -- and shut up?
Well that aged poorly.
Allowing anonymous signatories only weakens the petition. Two important people signing a petition is worth more than 10000 anons.
I scrolled through a few pages and 40-60% are anonymous. Even a handful weakens the petition.
I wish more people would participate in civics. Attend your city council or local political party meeting. See what it takes to actually collect signatures, run a campaign.
Online slacktivism actually just worsens the cause, because potential energy is vented on futile online “petitions” rather than taking real action.
Kneecapping the country's best AI lab seems like a bad way to win at the cyber.
It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...
The counterargument by the other side will always be, if we don't do it it doesn't matter because the Chinese will do it anyway - and then, common people will be at a disadvantage.
> Signed,
The people who:
> steal any bit of code you put on the internet regardless of the license you use or its terms, then use it to train their models, then turn around and try to sell it to you
> made it so you can't afford new, more powerful computers or smartphones anymore, or perhaps even just replacements for the ones you already have, thanks to massive GPU, DRAM, SSD, and now even HDD shortages
> flood the internet with artificial, superficial content
> aggressively DDoS your website
Real pillars of society.
At least they're making it easy for HR.
How is posting on this website with your full name not career suicide?
And people were wondering how OpenAI will find profitability.
So now they suddenly develop a conscience? Killing education, and by implication actively dumbing the future world, putting large parts of the labor market at risk, porn fakes, and destroying artistic creation, are acceptable in the name of profit, apparently.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
[90 minutes later]
Ah! Well, nevertheless
OK, this is a cheap shot on my part. But still: we hope? What kind of milquetoast martyrdom is this? Nobody gives a shit about tech workers as living, breathing, human moral agents. You (a putative moral actor signed onto this worthy undertaking) might be a person of deep feeling and high principle, and I sincerely admire you for that. But to the world at large, you're an effete button pusher who gets paid mid-six figures to automate society in accordance with billionaires' preferences and your expressions of social piety are about as meaningful as changing the flowers in the window box high up on the side of an ivory tower. The fact that ~80% of the signatories are anonymous only reinforces this perception.
If you want this to be more than a futile gesture followed by structural regret while you actively or passively contribute to whatever technologically-accelerated Bad Things come to pass in the near and medium term, a large proportion of you (> 500/648 current signatories) need to follow through and resign over the weekend. Doing so likely won't have that much direct impact, but it will slow things down a little (for the corporate and governmental bad actors who will find deployment of the new tech a little bit harder) and accelerate opposition a little (market price adjustments of elevated risk, increased debate and public rejection of the militaristic use of AI).
Hope, like other noble feelings, doesn't change anything. Actions, however poorly coordinated and incoherent, change things a little. If your principles are to have meaning, act on them during the short window of attention you have available.
I'm regularly surprised how otherwise intelligent people with "good intentions" keep going to work at these places in the first place, then get all "surprised pikachu" when it turns out their work might go towards nefarious ends. These technologies are inherently anti-creativity, and researchers have been sounding the alarms about their efficacy for mass surveillance for a long time. Even this petition only seems concerned with "domestic mass surveillance", as if the tools used by an empire abroad don't inevitably get turned inwards.
At some point it's hard not to think they just can't avoid the money. At least for the SWEs, these are folks who could work at much less "evil" businesses and still easily clear $150k or $200k, but they just can't help themselves. This is a company that steals its training data and whose primary product is at best an anti-working-class cudgel that management can use to intimidate workers and threaten them with replacement, and at worst is a mass-surveillance/killing tool.
Not using Claude only weakens the state. Just don’t oblige
All that will happen as a result of US companies not willing to work on weapons is that the US will be made more vulnerable to adversaries, particularly the CCP who don’t care about these things.
Am I the only one who is really freaking out?
They deploy BOTS to KILL PEOPLE!
This is the only big news here.
This is the only time in this timeline where we must say "you shall not pass". The ultimate red line. And there is no going back. It's just escalation in an arms race from now on. Nothing good can come out of this.
And you are talking about details, if some guys mentioned the word "domestic" in their tweet etc.
BOTS will autonomously KILL PEOPLE!
Hegseth shared a Trump tweet a few hours ago saying they're going to quit doing business with Anthropic.
https://x.com/i/status/2027487514395832410
The “Department of War” DOES NOT EXIST.
No surprise to have not heard anything from xAI
Stand your ground.
> permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This sounds way worse than dystopian, Orwellian or big-brotherly, in a world where you can't even get a human to review the 'autonomously placed lock' on your email or social media account. The Terminator saga is perhaps a good fit. But I have a feeling that they won't stop even at that.
"Domestic".
Very disappointing the letter signatories have chosen to reinforce the US-centric idea that using the models to spy on other democracies is fine and dandy.
Altman and senior others names notable by their absence; not unexpected given the quickly following apparent submission to DoW, which leaves the signatories here (while well-intentioned) in exposed ethical positions now.
They have now deleted/hid all signatures because their corpodaddy went the other way.
This is so great.
Hacker news?
OpenAI is nothing without its people
"We are the employees of Google and OpenAI, two of the top AI companies in the world."
Well, good luck to them, but the state can control from top-down via laws, so they WILL eventually abuse people and violate their rights by proxy-force. I would not trust any of them with my data.
From the HN Guidelines:
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
If I was Anthropic, I'd be saving this as a list of potential hires who share the company's values and shortlisting some to call up on Monday morning.
You must follow the law in your home country. Your refusal to do so constitutes Treason. Obey the law.
Well, it looks like OpenAI will be working with the Pentagon: https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...
My personal guess is that Sam Altman said he'd let policy violations go without a complaint and Dario Amodei said he wouldn't.
I respect this and everyone who signed it. Not that I was ever employed by them, I also wouldn't be confident enough to do this, and I wish it were any other way. This is inspiring, thank you.
This is the line? Really?
Not all the other shit this administration has been doing?
God, I hate it here.
The beauty of balance is that someone can say yes and someone can say no. No matter how well you calculate, there is a theory behind it.
Jeff Dean could have done a lot of good and added his name to the list of signatories, seeing as he's head of AI at Google or some such. He was supposed to be this super-smart dude; I guess he’s far from that.
Huge props to the Google and OpenAI engineers who did sign this, those who realized that they’re fighting for a greater thing, not just for an extra zero or two added at the end of their bank accounts. Especially as they’re taking a great amount of risk by doing it; first of all, imo, they are risking their current employment status.
I hope Anthropic will survive this. If they don’t it will just be perfect proof that you cannot be both moral and successful in the US.
oops turns out you will all be divided
More Far Left treason, documented.
It is really nice to see employees creating lists for the next round of layoffs themselves.
Too late
Claude is better than GPT for much of this atm. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?
December 14, 2024
>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.
>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.
>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,” he said.
...
keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.
The signatories of this site are leaping at a misguided opportunity for moral credit, when the reality is that they're getting whipped into a left-leaning frenzy.
As Undersecretary Jeremy Lewin clarified today[1], these weighty decisions should not be made by activists inside companies, but made by laws and legitimate government.
[1]: https://x.com/UnderSecretaryF/status/2027594072811098230
Previously: https://news.ycombinator.com/item?id=47175931
They always wanted it to be Grok anyway; Grok is the only, as they call it, "not woke AI".
Isn't the Pentagon just asking for total access to the source code and data silos of Anthropic and OpenAI... which we can't ask for because it's proprietary software?
You’re kinda already conceding to some of your opponents points when you use legally invalid names like “Department of War”
I appreciate the sentiment but don’t preconcede to your opposition by using their framing.
Hegseth is discovering the shittiness of the SaaS model.
why
"We hope our leaders will put aside their differences and stand together"
"He will not divide us!"
W
So much insanity.
Anthropic wanted a veto on use of force by USG. That is intolerable, no private party can have a veto over the sovereign. It is that simple.
Anthropic should have just walked away (and taken the settlement lumps) when they realized that the USG knew. But no, they started some crazy campaign.
This is so irrational of Anthropic. Purchasing managers across the US (and the world) now have to understand that while Anthropic has the best model on the planet, it is not a rational actor (or, if you prefer, not rational in ways commonly understood). It is a risk and must be treated as such.
We have international laws and rules of war. We have weapon treaties (well, some of them are expiring). Sure, not everyone is signatory, or even follow the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes towards rulebreakers and antihumanist actions.
So I looked into what they cooked up in 2023, plus which countries signed it (scroll down to a link to the actual text). It's an extraordinarily pathetic text. Insulting even.
https://www.state.gov/bureau-of-arms-control-deterrence-and-...
I don't love talking politics on this site. Hackernews has done a pretty decent job of staying non-political and I think that's been a positive thing.
AI is re-shaping American society in a lot of ways. And this is happening at a time where the U.S. is more politically divided than it's ever been. People who use LLMs regularly (most SWEs at this point) can understand the danger signs. The bad outcomes are not inevitable. But the conversations around this cannot only be held in internet forums and blogposts.
Hackernews is an echo chamber of early adopters of tech. The discussions had here don't percolate to the general population.
I believe many of us have a duty to make this feel real to the less technical people in our lives. Too many folks have an information filter that is one of Fox News/CNN/MSNBC. Fox is the worst on misinformation. The others are also bad. Their viewers will not hear, in any clear way, how the Trump admin is trying to bully AI companies into doing what it wants. This will be a headline or an article. A footnote not given the attention it deserves.
Plainly: there is an attempt to turn AI into a political weapon aimed at the general population. Misinformation and surveillance are already out of control. If you can, imagine that getting worse.
This feels like one of those hinge moments. If you can, have real-life conversations with people around you. Explain what's at stake and why it matters now, not later.
really dumb. you don’t win this
Sweet. Looking forward to another CTF season of He Will Not Divide Us.
I love performative acts of wealthy Silicon Valley drags.
Use the feedback forms within their platforms to let the companies know your thoughts
It's rather amusing that this is the proverbial 'red line', not y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies were more proactive about this bullshit in the first place?
That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.
The levels of irony in this case are staggering.
The employees of these companies are complicit in creating the greatest data harvesting and manipulation machine ever built, whose use cases have yet to be fully realized, yet when the government wants to use it for what governments do best—which was reasonable to expect given the corporate-government symbiosis we've been living in for decades—then it's a step too far?
Give me a fucking break. Stop the performative outrage, and go enjoy the fruits of your labor like the rest of the elites you're destroying the world with.
It would be funny in the end if the only ones left to not say no to Trump were Alibaba
You have 1) stolen everybody's shit and put it behind a paywall, 2) cornered the hardware market in some RICO-worthy offensive that has priced one of the few affordable pastimes for young people out of reach, 3) changed your climate story (lie) on a dime, and started putting the horrible power-guzzling data centers on any strip of land within spitting distance of a power plant. I hope you all go out of business, and I hope it happens French Revolution style.
Of course they were going to use it for military purposes you spiritual abortions, and there is nothing your keyboard-soft hands can do about it.
The Department of War doesn't exist, don't meet the fascists on their own terms at any level. They don't debate or operate in good faith.
So big tech wants to court Trump with millions in donations and now that the big bully they supported is bullying them.. we’re supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?
It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.
However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably that all that is being complained about, and far worse, has been going on for years.
How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?
Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.
These models are weapons whether the frontier provider founders and their trite and lofty mission statements like it or not.
Private individuals and private companies do not get to create a defensive weapon with unprecedented power in a new category in the US and not share it with the US military.
You guys are batshit insane.
This whole episode is very bizarre.
Anthropic appears to be positioning themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:
>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.
So, the "any lawful use" language makes me think that Dario et al have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity that they deem ought be illegal.
It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but as a reminder: the adversary that these people are all thinking about here is the PRC, who does not give a single shit about anyone's feelings on whether it's ethical or not to allow a drone system to drop ordnance on its own.
[1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...
I'm going to copy a comment I made in a related thread:
I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but seeing their marketing tactics historically it just reads more like trying to posture and get good PR for "fighting the system" or whatever.
"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox. Also, once our models improve enough then we'll be sending in The Borg to autonomously target our Enemies™"
I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.
Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".
One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.
It really feels like I am no longer impressed with Anthropic safety.
Do they have even a basic understanding of the different regimes of safety and what allegiance means to one's own state?
It would be fine to say they are opting out of all forms of protection against adversaries.
But it feels like just more insane naive tech bro stuff.
As someone outside the tech bro bubble, working in fintech in London, can somebody explain this in a way that doesn't make these people sound like kids in a playground who think there is no such thing as the wolf?
Again, opting out and specializing in tech that you provide to your enemies and friends alike is fine. That is a good specialization. But that is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.
I simply do not understand why American tech companies and their employees raise such a hue and cry about supporting the military. For those of you who support their position, have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon next-day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?
It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military using the DPA if necessary.
1.5 hours after this was posted, Sam Altman stated that OpenAI will work with the DoW.
So much for this waste of a domain name. https://x.com/sama/status/2027578652477821175
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. "
OpenAI employees lol.
You’ve lost utterly and completely. Even if you, as an individual, are a good person.
Correct. You will not be divided. You will likely be subtracted.
We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.
Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country. Private companies imposing such demands on our military should not be respected. Having weapons that can randomly detect a false positive and shut themselves down because they think you are using it wrong is a feature I would never want built in.
I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tool works like this. Every other tool is governed by the legal system which the people of the country have established.
How cute they bought a domain and everything
[flagged]
I'm here to support the Pentagon (: