intoXbox

They used a custom neural net built around autoencoders with convolutional layers, trained on data from previous experiments (a rough sketch of that kind of architecture is below).

https://arxiv.org/html/2411.19506v1

Why is it so hard to explain which AI algorithm / technique they integrated? That would have made this article much better.
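
For illustration, a convolutional autoencoder along the lines the comment describes might look roughly like this (a minimal sketch assuming PyTorch; the layer sizes and the anomaly-score threshold are made up, not taken from the linked paper):

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy 1-D convolutional autoencoder; illustrative only, not the actual CERN model."""
    def __init__(self):
        super().__init__()
        # Encoder: compress the detector readout into a small latent code.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the readout from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 8, kernel_size=3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
event = torch.randn(1, 1, 64)                     # fake detector readout of length 64
score = torch.mean((model(event) - event) ** 2)   # reconstruction error
keep = score > 0.1                                # poorly reconstructed = unusual event, keep it
```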

serendipty01

Might be related: https://www.youtube.com/watch?v=T8HT_XBGQUI (Big Data and AI at the CERN LHC by Dr. Thea Klaeboe Aarrestad)

https://www.youtube.com/watch?v=8IZwhbsjhvE (From Zettabytes to a Few Precious Events: Nanosecond AI at the Large Hadron Collider by Thea Aarrestad)

Page: https://www.scylladb.com/tech-talk/from-zettabytes-to-a-few-...

quijoteuniv

A bit of hype in the AI wording here. This could be called a chip with hardcoded logic obtained via machine learning.
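
As a toy illustration of "hardcoded logic obtained via machine learning": once a network is trained, its weights can be quantized and emitted as fixed constants to be baked into firmware or silicon (a sketch assuming NumPy and symmetric int8 quantization; the weights here are random stand-ins):

```python
import numpy as np

weights = np.random.randn(4, 8)            # stand-in for a trained layer's float weights
scale = np.abs(weights).max() / 127.0      # symmetric int8 quantization scale
q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

# Emit the constants that would be frozen into the chip's logic.
print(f"static const int8_t W[{q.shape[0]}][{q.shape[1]}] = {{")
for row in q:
    print("  {" + ", ".join(str(int(v)) for v in row) + "},")
print("};")
```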

nerolawa

the fact that 99% of LHC data is just gone forever is insane

Janicc

I think chips having a single LLM directly on them will be very common once LLMs have matured/reached a ceiling.

mentalgear

That's what Groq did as well: burning the Transformer right onto a chip. (I have to say I was impressed by the simplicity, though less so afterwards by their controversial Kushner/Saudi investment.)

WhyNotHugo

Intuitively, I’ve always had the impression that an analogue circuit would be feasible for neural networks (they’re just matrix multiplication!). These should provide essentially instantaneous output; a toy sketch of the idea is below.

Isn’t this kind of approach feasible for something so purpose-built?
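
A toy version of the intuition (a sketch assuming NumPy; the values are made up): a resistive crossbar computes a matrix-vector product directly in the analogue domain via Ohm's and Kirchhoff's laws, which is exactly the core operation of a dense layer.

```python
import numpy as np

G = np.abs(np.random.randn(4, 8)) * 1e-3   # crossbar conductances (siemens) ~ layer weights
V = np.random.randn(8) * 0.1               # input voltages ~ activations
I = G @ V                                  # summed output currents = the matrix-vector product
print(I)                                   # one current per output line, available "instantly"
```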

v9v

Do they actually have ASICs or just FPGAs? The article seems a bit unclear.

rakel_rakel

Hey Siri, show me an example of an oxymoron!

> CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).

seydor

CERN has been using neural networks for decades.

randomNumber7

Does string theory finally make sense when we add AI hallucinations?

100721

Does anyone know why they are using language models instead of a more purpose-built statistical model? My intuition is that a language model would either be overfit, or its training data would contain a lot of noise unrelated to the application, which would significantly drive up costs.
