"For the longest time, I would NOT allow people to write tests because I thought that culturally, we need to have a culture of shipping fast"
Tests are how you ship fast.
If you have good tests in place you can ship a new feature without fear that it will break some other feature that you haven't manually tested yet.
ralferoo
The article doesn't actually say whether the rewrite was a net gain. I presume it was, or they wouldn't have written the article, and the lead-in picture paints a rosy picture, but the tone at the end suggests he's not happy with how things turned out.
But one thing that used to be a common design anti-pattern was the "version 2 problem". I think I first heard about it when Netscape were talking about how NN2 was a disaster, and they were finally happy with NN3 or NN4.
Often version 1 is a hastily thrown-together mess of stuff, but it works and people like it. There are lots of bad design decisions, though, and you reach a limit to how far you can keep pushing that bad design before it gets too brittle to change. So you start on version 2, a complete rewrite to fix all the problems, and you end up with something that's "technically perfect" but so overengineered that it's slow and everybody hates it. On top of that, there are probably so many workflow hoops to jump through to get things approved that you stop making any progress, and version 2 may well kill the product and/or the company.
The idea is that "version 3" is a pragmatic compromise: the worst design problems from version 1 are gone, but you forgo all the unnecessary stuff added in version 2, and finally have a product that customers like again (assuming you can convince them to come back and try v3) and that you can build on in future versions.
To a large degree I think this "version 2 problem" was a by-product of waterfall design; it's certainly been less common since agile development became popular in the early 2000s and tooling made large-scale refactoring easier. Even so, I remember working somewhere with a v1 that the customers were using and a v2 that was a 3-year rewrite going on in parallel. None of the developers wanted to work on v1 even though that's what brought in the revenue, and v2 didn't have any of the benefit of the bug fixes accumulated over the years to fix very specific issues that were never captured in any of the scope documents.
maplant
Having a culture of not ever writing tests and actively disallowing them is so insane I can't even imagine why there's anything else in this post
steve_adams_86
This seems very post-hoc and like they're fortunate they happened to arrive at something better rather than worse.
The justification for K8s seems pretty thin, which makes me wonder whether the author understands why they need it. My guess is that they have substantial parallel, multi-tenant networking of stateful processes, which is a pretty defensible reason to use K8s, and an easy one to state, so it seems strange to leave it out.
The argument against Temporal also seems invalid, but I'm not certain. It has been years since I used it, but wouldn't it be possible to poll for completion? It seems like you'd wind up with better observability/retryability tooling, and it's much simpler overall. Polling seems like a good compromise for what I'd consider much easier tooling to reason about.
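A rough sketch of the polling approach described above (the `fetchStatus` function, status values, and delays are stand-ins, not a real Temporal API): instead of blocking on a workflow callback, check a status endpoint until the job reports completion.

```typescript
type Status = "pending" | "done" | "failed";

// Hypothetical helper: poll a status check until the job finishes,
// sleeping between attempts and giving up after a bounded number of tries.
async function pollUntilDone(
  fetchStatus: () => Promise<Status>,
  maxAttempts = 10,
  delayMs = 1000,
): Promise<Status> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status !== "pending") return status;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("polling timed out");
}

// Simulated job that completes on the third check.
let checks = 0;
const fakeStatus = async (): Promise<Status> =>
  ++checks >= 3 ? "done" : "pending";

pollUntilDone(fakeStatus, 10, 0).then((s) => console.log(s)); // logs "done"
```

Bounding the attempts and making the delay explicit is what gives you the easier-to-reason-about retry behavior the comment is pointing at.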
I'd also posit that you could model a lot of this using your own serializable state machines. They're in the JS ecosystem, so XState is an excellent option. You'd get incredible visibility into your orchestration, deep access to testing the semantics and logic you care most about, and the ability to spin your entire architecture up as containers on the fly with no black-box orchestration.
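As a sketch of what "your own serializable state machine" could look like (hand-rolled here rather than XState, with invented states, purely illustrative): the machine is plain data, so its current state can be persisted as JSON and resumed by any worker.

```typescript
// Hypothetical states and events for a long-running provisioning job.
type State = "queued" | "provisioning" | "running" | "failed";
type Event = "provision" | "ready" | "error";

// The transition table is plain data, so it can be inspected and tested.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  queued: { provision: "provisioning" },
  provisioning: { ready: "running", error: "failed" },
  running: { error: "failed" },
  failed: {},
};

function step(state: State, event: Event): State {
  return transitions[state][event] ?? state; // ignore invalid events
}

// Because the state is just a string, it round-trips through
// JSON.stringify/parse and can live in a database row or a message.
let s: State = "queued";
s = step(s, "provision"); // "provisioning"
s = step(s, "ready");     // "running"
```

XState's `createMachine` gives you the same plain-data shape plus tooling (visualizers, test utilities) on top of it.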
Of course, I'm speculating after browsing through their website a bit and thinking about the problems they described. I'm missing a lot of context. K8s could be the clear winner.
Still, after reading this I would never use this product. I don't mean to sound unkind. I'd never trust the decision-making of the people who followed this trajectory. If I were the author I'd take this down ASAP.
dagi3d
Sorry, I still don't get "no tests" as an excuse to go faster. Obviously YMMV, but you will need to test your implementation somehow, and manual testing usually takes more time than running your automated tests. No need to over-test, but tests definitely don't have to slow you down, unless you don't know how to test, in which case that's on you.
elAhmo
I can't imagine working as a developer at a place where the manager/founder "does NOT allow" tests to be written. This, combined with the four pivots mentioned in the article, makes it seem like they are just riding the hype and trying to brute-force a product without any fundamentals or PMF.
fabiensanglard
Pearls.
> I would NOT allow people to write tests
> now [...] we started with tests from the ground up
notorandit
It's a big move. But I understand it.
Sometimes your code is "just" a proof of concept, a way to test the idea.
Very far from a decent product.
That is the time you ditch the code, keep the ideas (both good and bad) and start over.
renewiltord
Tests are most useful for regression detection, so it's a good instinct to not add them when you're primarily exploring. Once you've decided to switch to exploitation, though, regression will hurt. I think it's just a classic 0 to 0.1 not being the same thing as 0.1 to 1.
4rtem
Nice use of the .io domain there
andrewstuart
I wouldn’t admit to this level of, frankly, incompetence.
Wildly swinging between dogmatisms about how to do software development, each so wrong that everything has to be thrown away, then repeating this failure loop multiple times.
It doesn’t inspire any confidence in the person; I wouldn’t have them lead a project.
Why would you be so loud and proud about all this?
heliumtera
So you started with 2023 theo.gg philosophy but now moved on to 2026 theo.gg philosophy
ramesh31
Next is such a dumpster fire. So much wasted effort due to the Node ecosystem never developing a universal batteries-included framework like Rails or Django.