Interesting timing given the quantum computing timeline pressure from this week's cryptography discussions. $30B run-rate and gigawatts of TPU capacity — and meanwhile the most interesting AI work I've seen lately runs on a phone in Termux with no cloud dependency at all. Both things are true simultaneously.
skybrian
I guess gigawatts is how we roughly measure computing capacity at the datacenter scale? Also saw something similar here:
> Costs and pricing are expressed per “token”, but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than input one. It seems to me that the actual marginal quantity being produced and consumed is “processing power”, which is apparently measured in gigawatt hours these days. In any case, I think more than anything this vindicates my original decision not to get too precise. [...]
https://backofmind.substack.com/p/new-new-rules-for-the-new-...
Is it priced that way, though? I assume next-gen TPUs will be more efficient?
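To make the unit mismatch concrete, here is a quick sketch in Python; the per-million-token prices are illustrative assumptions, not published rates:

```python
# Back-of-the-envelope: why "per token" is a lopsided unit.
# Prices below are illustrative assumptions, not published rates.
INPUT_PRICE_PER_MTOK = 3.00    # $ per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # $ per million output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the assumed prices."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

cost = request_cost(input_tokens=1_000, output_tokens=1_000)
print(f"${cost:.4f}")  # $0.0180: the same count of output tokens costs 5x the input tokens
```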
ketzo
$19B -> $30B annualized revenue in a month?
Feels like the lede is buried here!
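Back-of-the-envelope, assuming "annualized run-rate" means current monthly revenue times twelve:

```python
# Implied monthly revenue from the two annualized run-rate figures
# (assumes "run-rate" = current monthly revenue x 12).
old_run_rate = 19e9   # $19B annualized
new_run_rate = 30e9   # $30B annualized

old_monthly = old_run_rate / 12   # ~$1.58B/month
new_monthly = new_run_rate / 12   # $2.50B/month

growth = new_monthly / old_monthly - 1
print(f"~{growth:.0%} in one month")  # ~58%
```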
nopurpose
How does a compute shortage manifest when demand outstrips supply? They obviously never close sign-ups, so the only option is longer queues? But if demand is growing like crazy, queues should be getting longer, yet my Claude Pro plan seems snappy, with only occasional retries due to 429s.
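For reference, a minimal sketch of the client-side backoff that makes occasional 429s invisible; the URL and payload are placeholders, and `requests` stands in for whatever HTTP client you actually use:

```python
import time
import random
import requests  # generic HTTP client; the endpoint below is a placeholder

def post_with_backoff(url: str, payload: dict, max_retries: int = 5) -> requests.Response:
    """Retry on HTTP 429 with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, timeout=60)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if the server sends it; otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay + random.uniform(0, 0.5))
    resp.raise_for_status()
    return resp
```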
chimpanzee2
On a tangential note: it seems the whole theater with the DoD is over for now. Am I seeing this right?
cebert
I’m surprised Anthropic wanted to partner with Broadcom, given their negative reputation after antics such as the VMware acquisition.
TPU architecture explained
https://news.ycombinator.com/item?id=47637597
Interesting to see Anthropic investing in compute infrastructure. The bottleneck I keep hitting is not raw compute but where that compute lives: EU customers increasingly need guarantees their data stays in-region. More sovereign compute options in Europe would unlock a lot of enterprise AI adoption.
Eufrat
Can someone explain why everything is being marketed in terms of power consumption?
holografix
I don’t understand Claude Code’s moat here. What can it do that opencode can’t or couldn’t fairly easily implement?
mikert89
There's no limit to the algorithms. People don't understand yet. With a big enough compute cluster, they can learn the whole universe. We built a generalizable learning machine.