Kinda crazy that ts-node is still the recommendation when it hasn't been updated since 2023. And likewise crazy that no other lib has emerged that does both TypeScript compilation and typechecking. Of course, if it works, don't fix it, but TypeScript has evolved quite a bit since 2023.
conartist6
It's funny to me that people should look at this situation and say "this is OK".
The upshot of all these projects to make JS tools faster is a fractured ecosystem. Who, given the choice, would honestly want to maintain JavaScript tools written in a mixture of Rust and Go? Already we've seemingly committed to a big schism down the middle. And the new tools don't replace the old ones, so to own your tools you'll need to make Rust, Go, and JS all work together, using a mix of clean modern technology and shims into horrible legacy technology. We have to maintain everything, old and new, and engineers have to learn everything, old and new, because it's all still critical.
All I really see is an explosion of complexity.
fsmedberg
I'm very surprised the article doesn't mention Bun. Bun is significantly faster than Vite and Rolldown, if it's simply speed one is aiming for. More importantly, Bun allows for simplicity: install Bun and you get a bundler included, TypeScript just works, and it's blazing fast.
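To illustrate how little setup that is (a sketch; the file name is made up):

    // index.ts — Bun executes TypeScript directly, no tsconfig or build step:
    //   bun index.ts
    // and bundling is built in:
    //   bun build index.ts --outdir dist
    const greet = (name: string): string => `hello, ${name}`;
    console.log(greet("bun"));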
gaoshan
This smells of "I like to solve puzzles and fiddle with things" and reminds me of hours spent satisfyingly tweaking my very specific and custom setups for various technical things.
I, too, like to fiddle with optimizations and tool-configuration puzzles, but I need to get things done, and get them done now. This doesn't seem fast; it seems cumbersome and inconsistent.
austin-cheney
Any method for front-end tooling is potentially the fastest. It always comes down to what you measure and how you measure it. If you don't have any measurements at all, then your favorite method is always the fastest no matter what, because you live in a world without evidence.
Even with measurements in hand, radical performance improvements are more typically the result of the code's organization and the techniques employed than of the language it's written in. But, of course, that cannot be validated without evidence from comparing measurements.
The tragic part of all this is that everybody already knows it, but most front-end developers do not measure things and may become hostile when measurements do occur that contradict their favorite techniques.
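Even a crude harness beats no evidence at all. A sketch (the two commands are examples only; substitute whatever tools you're actually comparing):

    // bench.ts — rough one-shot timing, run with `node bench.ts` or `bun bench.ts`
    // (a real benchmark would add warmup runs and repetitions)
    import { execSync } from "node:child_process";

    for (const cmd of ["npx prettier --check .", "npx @biomejs/biome check ."]) {
      const start = performance.now();
      try {
        execSync(cmd, { stdio: "ignore" });
      } catch {
        // a non-zero exit (e.g. findings reported) still counts as a timed run
      }
      console.log(`${cmd}: ${(performance.now() - start).toFixed(0)} ms`);
    }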
Exoristos
All y'all need more RAM in your development laptops. Maybe. At least, I've never been bothered by the performance of standard tooling like Prettier, ESLint, and npm.
insin
Any plans to create a combined server + web app template using @hono/vite-dev-server for local development, with both sides of auth preconfigured, with the server serving up the built web app in production?
I've used this setup for my last few projects and it's so painless, and with recent versions of Node.js, which can strip TypeScript types, I don't even need a build step for the server code.
Edit: oops, I didn't see nkzw-tech/fate-template, which has something like this, but running client and server separately instead
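For anyone curious, the production server half of that setup can be as small as this (a sketch, assuming hono and @hono/node-server; the routes, paths, and port are made up):

    // server.ts — in dev, @hono/vite-dev-server would mount this same app
    import { Hono } from "hono";
    import { serve } from "@hono/node-server";
    import { serveStatic } from "@hono/node-server/serve-static";

    const app = new Hono();
    app.get("/api/health", (c) => c.json({ ok: true }));

    // in production, serve the built web app out of Vite's output directory
    app.use("*", serveStatic({ root: "./dist" }));

    serve({ fetch: app.fetch, port: 3000 });

    // no build step thanks to type stripping:
    //   node server.ts                               (Node with it on by default)
    //   node --experimental-strip-types server.ts    (older Node 22.x)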
huksley
One nitpick: Claude Code on the web does not lint by default, so you need to run lint on its changes manually.
e10jc
Very cool list, but why no mention of Biome? I've been using it on a recent project and it's been pretty great. Also bun test instead of vitest.
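(For reference, bun:test is largely Jest-compatible; a minimal sketch:)

    // math.test.ts — discovered and run by `bun test`
    import { expect, test } from "bun:test";

    test("adds", () => {
      expect(1 + 1).toBe(2);
    });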
EvgheniDem
The bit about strict guardrails helping LLMs write better code matches what we have been seeing. We ran the same task in loose vs strict lint configurations and the output quality difference was noticeable.
What was surprising is that it wasn't just about catching errors after generation. The model seemed to anticipate the constraints and generated cleaner code from the start. My working theory is that strict, typed configs give the model a cleaner context to reason from, almost like telling it what good code looks like before it starts.
The piece I still haven't solved: even with perfect guardrails per file, models frequently lose track of cross-file invariants. You can have every individual component lint-clean and still end up with a codebase that silently breaks when components interact. That seems like the next layer of the problem.
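For concreteness, the strict end of our comparison looked roughly like this (a sketch built on typescript-eslint's type-checked presets, not our exact config):

    // eslint.config.ts — strict, type-aware guardrails (assumes typescript-eslint v8+)
    import tseslint from "typescript-eslint";

    export default tseslint.config(
      ...tseslint.configs.strictTypeChecked,
      ...tseslint.configs.stylisticTypeChecked,
      {
        languageOptions: {
          parserOptions: {
            // load the project's tsconfig so rules can see real types
            projectService: true,
            tsconfigRootDir: import.meta.dirname,
          },
        },
      },
    );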
the_harpia_io
the ecosystem fragmentation thing hit me pretty hard when i was trying to set up a consistent linting workflow across a mono-repo last year. half the team already using biome, half still on eslint+prettier, and adding any shared tooling meant either duplicating config or just picking a side and upsetting someone
i get why the rust/go tools exist - the perf gains are measurable. but the cognitive overhead is real. new engineer joins, they now need 3 different mental models just to make a PR. not sure AI helps here either honestly, it just makes it easier to copy-paste configs you don't fully understand
sunaookami
Oxfmt!? I just switched from ESLint and Prettier to Biome!
Anyone have any insight as to why Microsoft chose Go? I feel like with Rust it could have been even faster!
_pdp_
This is a good list. Bookmarked.
vivzkestrel
Get rid of both Oxfmt and Oxlint and use Biome, OP.
dejli
It looks more functional. I like it.
sublinear
I'm confused by this, but also curious what we mean by "fastest".
In my experience, the bottleneck has always been backend dev and testing.
I was hoping "tooling" meant faster testing, not yet another layer of frontend dev. Frontend dev has been pretty fast even when done completely by hand for the last decade or so. I have livecoded, and seen others livecode, on 15-minute calls with stakeholders or QA to mock some UI or debug, and I've seen people deliver the final results from that meeting just a couple of hours later. I mean that's what goes to prod, minus some very specific edge-case bugs that might even get argued away and never fixed.
Not trying to be defensive about pure human coding skills, but sometimes I wonder if we've rolled back expectations in the past few years. All this recent stuff seems even more complicated and more error-prone, and frontend is already both of those things.
You can omit tsc with https://oxc.rs/docs/guide/usage/linter/type-aware.html#type-..., so that's one less script to run in parallel.
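From a skim of that page it's roughly one extra package plus a flag; treat the exact names below as assumptions from memory and double-check the docs:

    // type-aware linting without a separate tsc pass (names unverified):
    //   npm add -D oxlint oxlint-tsgolint
    //   npx oxlint --type-aware
    // so "lint" and "typecheck" no longer need to be separate parallel scripts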