> Premature optimization is the root of all evil.

There are few principles of software engineering that I hate more than this one, though SOLID is close.
It is important to understand that this comes from a 1974 paper. Computing was very different back then, and so was the idea of optimization: optimizing meant writing assembly code and counting cycles. That kind of work is still done today in very specific applications, but today performance is mostly about architectural choices, and those have to be considered right from the start. In 1974, these architectural choices weren't choices at all; the hardware didn't let you do things differently.
Focusing on the "critical 3%" (which implies profiling) is still good advice, but it will mostly help you fix "performance bugs": an accidentally quadratic algorithm, work done in a loop that doesn't need to be, and so on. Once you have dealt with those, that's when you notice you spend 90% of your time in abstractions and it's too late to change them. So you add caching, parallelism, etc., making your code more complicated and still slower than if you had thought about performance at the start.
Today, late optimization is just as bad as premature optimization, if not more so.
I'm missing Curly's Law: https://blog.codinghorror.com/curlys-law-do-one-thing/
“A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”
conartist6
Remember that these "laws" contain so many internal contradictions that when they're all listed out like this, you can just pick whichever one justifies what you want to justify. The hard part is knowing which law to break when, and why.
deaux
Laws of Software Engineering (2026 Update)
- Every website will be vibecoded using Claude Opus
This will result in the following:
- The background color will be a shade of cream, to properly represent Anthropic
- There will be excessive use of different fonts and weights on the same page, as if laid out by a freshman design student who just learned about typography
- There will be an excess of cards in different styles, a noteworthy number of which have a colored, rounded border either on hover or by default on exactly one side of the card
dataviz1000
I did not see Boyd’s Law of Iteration [0]:
"In analyzing complexity, fast iteration almost always produces better results than in-depth analysis."
Boyd invented the OODA loop.
[0] https://blog.codinghorror.com/boyds-law-of-iteration/
What do you call the law that you violate when you vibe-code an entire website for "List of 'laws' of software engineering" instead of just creating a Wikipedia page for it?
g051051
> When I first started, I was enamored with technology and programming and computer science. I’m over it.
Wow, that is incredibly sad to hear. I'm 40+ years in, and still love all of that.
meken
I love Kernighan’s Law:
> "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"
hatsix
I know it's not software-engineering-only, but Chesterton's Fence is often the first 'law' I teach interns and new hires: https://fs.blog/chestertons-fence/
RivieraKid
Not a law, but a design principle that I've found to be one of the most useful and also one of the least known:
Structure code so that, in the ideal case, removing a piece of functionality is as simple as deleting a directory or file.
t43562
The conservation of Complexity (Tesler) seems immediately insightful to me just as a sentence:
"Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated."
But then the explanation seems to devolve into a trite suggestion not to burden your users. That doesn't interest me: users need the level of complexity they need and no more, whatever you're doing, and reducing it below that makes your application an inflexible toy. So this is all, to a degree, obvious.
I think it's more useful to remember, when you're refactoring, that if you try to make one bit of a system simpler, you often just make another part more complex. Why write something twice only to end up with it being just as bad the other way around?
kwar13
Half of these are not about software engineering; they're just general management principles.
pkasting
This list is missing my personal law, Kasting's Law:
Asking "who wrote this stupid code?" will retroactively travel back in time and cause it to have been you.
fenomas
Nice to have these all collected and shareable in one place. For the amusement of HN, let me add one I've become known for at my current work, said to juniors who are overly worried about DRY:
> Fen's law: copy-paste is free; abstractions are expensive.
edit: I should add that this is aimed at situations where you need a new function that's very similar to one you already have. Juniors often assume it's bad to copy-paste, so they add a parameter to the existing function so that it abstracts both cases. And my point is: wait, consider the cost of the abstraction. Are the two use cases likely to diverge later? Do they have the same business owner? And so on.
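A hypothetical Python sketch of the pattern described above (the function names and the report domain are invented for illustration):

```python
# The "abstract both cases" route: the existing function grows a mode flag.
def format_report(rows, for_email=False):
    header = "Subject: Report" if for_email else "REPORT"
    return header + "\n" + "\n".join(str(r) for r in rows)

# The copy-paste route: two small functions, free to diverge later.
def format_email_report(rows):
    return "Subject: Report\n" + "\n".join(str(r) for r in rows)

def format_print_report(rows):
    return "REPORT\n" + "\n".join(str(r) for r in rows)
```

The flag version couples both call sites to one implementation; the copy-paste version costs a few duplicated lines but lets the two reports diverge without a refactor.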
hunterpayne
This is the best comment on this article but it was deleted for some reason.
"The meta-law of software engineering: All laws of software engineering will be immediately misinterpreted and mindlessly applied in a way that would horrify their originators. Now that we can observe the behaviour of LLMs that are missing key context, we can understand why."
Or: you can't boil decades of wisdom and experience down into a pithy one-sentence quote.
davery22
A few extra from my own notes:
- Shirky Principle: Institutions will try to preserve the problem to which they are the solution
- Chesterton's Fence: Changes should not be made until the reasoning behind the current state of affairs is understood
- Rule of Three: Refactoring given only two instances of similar code risks selecting a poor abstraction that becomes harder to maintain than the initial duplication
austin-cheney
My own personal law is:
When it comes to frameworks (any framework), any jargon not explicitly pointing to numbers always eventually reduces down to some highly personalized interpretation of "easy".
It is more impactful than it sounds, because it implicitly points to a distinction of ultimate goals: the selfish developer versus the product they are developing. It is also worth pointing out that before software frameworks were a thing, the term "framework" just identified a defined set of overlapping abstract business principles for achieving a desired state. Software frameworks, on the other hand, provide a library to dictate a design convention rather than the desired operating state.
ozgrakkurt
For anyone reading this: learn software engineering from people who actually do software engineering. Read textbooks written by people who actually build things.
merge_software
> YAGNI (You Aren't Gonna Need It)
This one is listed under design, but it could just as easily count as architecture. I'm guessing a lot of developers have worked on scaling with lambda functions or a complex IaC setup when a simple API running on a small VPS would have done the trick, at least until enough people are using the application for it to be considered profitable.
ryanshrott
People use the premature optimization principle in exactly the wrong way these days. Knuth's full quote is, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." That 97%/3% split is the whole point.
People bring it up to argue for never thinking about performance, which flips the intent on its head. The real takeaway is that you need to spot that critical 3% early enough to build around it, and that means doing some optimization thinking up front, not none at all.
Kinrany
SOLID being included immediately makes me have zero expectation of the list being curated by someone with good taste.
dassh
Calling them 'laws' is always a bit of a stretch. They are more like useful heuristics. The real engineering part is knowing exactly when to break them.
r0ze-at-hn
Love the detail sub-pages. Over 20 years I collected a little list of specific laws, or really observations (https://metamagic.substack.com/p/software-laws), and thought about turning each into a detailed blog post. But it has been more fun chatting with other engineers, showing them the page, and watching as they scan the list and inevitably tell me a great story. For example, I could do a full write-up on the math behind this one, but it's way more fun hearing the stories about trying and failing to get second rewrites of code:
9. Most software will get at most one major rewrite in its lifetime.
TheGRS
You know, I mention this stuff all the time in various meetings and discussions. I read a lot on Hacker News and have years of accumulated knowledge from the various colleagues I've worked with. It's nice to have a little reference sheet.
4dregress
I like to replace the Bus Factor with the Lottery Factor.
I actually had a colleague run over by a bus on the way to work in London; he was very lucky and made a full recovery. His head was poking out from under the main exit of the bus.
asmodeuslucifer
I learned about Dunbar’s number in anthropology class: ~150 is the size of a community in which everyone knows each other’s identities and roles.
You can ask someone to write down the name of everyone they can think of, real or fictional, living or dead, and most people will not make it to 250.
Some individuals, like professional gossip columnists or some politicians, can remember as many as 1,000 people.
biscuits1
Today, I was presented with Claude's decision to include numerous goto statements in a new implementation. I thought deeply about removing them manually; years of software laws argued against what I saw. But then I realized it wouldn't matter anymore.
Then I committed the code and let a second AI review it. It too had no problem with the gotos.
Claude's Law:
The code that is written by the agent is the most correct way to write it.
mojuba
> Get it working correctly first, then make it fast, then make it pretty.
Or develop the skill to make it correct, fast, and pretty in one or two attempts.
tmoertel
One that is missing is Ousterhout’s rule for decomposing complexity:
complexity(system) = sum(complexity(component) * time_spent_working_in(component) for component in system)
The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity.
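Read literally, the rule is just a weighted sum, which can be sketched in a few lines of Python (the component numbers here are made up for illustration):

```python
def system_complexity(components):
    """Ousterhout-style weighted complexity.

    components: iterable of (complexity, fraction_of_dev_time) pairs,
    where each fraction says how much of developers' time is spent
    working inside that component.
    """
    return sum(c * t for c, t in components)

# A gnarly but fully encapsulated library nobody ever touches
# contributes almost nothing; a mildly messy module everyone
# edits weekly dominates the total.
encapsulated_lib = (100, 0.01)   # high complexity, rarely visited
hot_module = (20, 0.60)          # lower complexity, constantly visited

print(system_complexity([encapsulated_lib, hot_module]))  # 13.0
```

Under this weighting, driving a component's time-spent factor toward zero (a stable library you never revisit) contributes almost nothing to the total, which is the comment's point about encapsulation being equivalent to elimination.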
[EDIT: Ninja'd a couple of times. +1 for Shirky's principle]
regular_trash
Hot take - I hate YAGNI. My personal pet peeve is when someone says YAGNI to a structure in the code they perceive as "more complex than they would have done it".
Sure, don't add hooks for things you don't immediately need. But if you are reasonably sure a feature is going to be required at some point, it doesn't hurt to organize and structure your code in a way that makes those hooks easy to add later on.
Worst case scenario, you are wrong and have to refactor significantly to accommodate some other feature you didn't envision. But odds are you have to do that anyway if you abide by YAGNI as dogma.
The number of times I've heard YAGNI as a reason not to modularize code is insane. There needs to be a law that well-intentioned developers will constantly misuse and misunderstand the ideas behind these heuristics in surprising ways.
serious_angel
Great! Do principles fit? If so, considering the presence of the "Bus Factor", I believe "Chesterton's Fence" should be listed too.
Nice site, but missing the Law of Conservation of Misery.
WillAdams
Visual list of well-known aphorisms and so forth.
A couple are well described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout
(and of course Brooks's Law is from _The Mythical Man-Month_)
Curious if folks have recommendations for less well-known books that cover these, other than the _Laws of Software Engineering_ book the site is an advertisement for.
0xpgm
An extension to Zawinski's Law, every web service attempts to expand until it becomes a social network.
Symmetry
On my laptop I have a yin-yang with DRY and YAGNI replacing the dots.
netdevphoenix
"This site was paused as it reached its usage limits. Please contact the site owner for more information."
I wish AWS/Azure had this functionality.
galaxyLogic
The Law of Leaky Abstractions. What is a "leaky" abstraction? How does it "leak"?
I wonder if it should be called "Law of Leaky Metaphors" instead. Metaphor is not the same thing as Abstraction. I can understand a "leaky metaphor" as something that does not quite make it, at least not in all aspects. But what would be a good EXAMPLE of a Leaky Abstraction?
Sergey777
A lot of these “laws” seem obvious individually, but what’s interesting is how often we still ignore them in practice.
Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations.
I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.
tfrancisl
Remember, just because people repeated something so many times that it made it to this list does not mean it's true. There may be some truth in most of these, but none of them are "laws". They are aphorisms: punchy one-liners intended to distill something as complex as human interaction and software design.
nopointttt
The one I keep coming back to is "code you didn't write is code you can't debug." Every fancy dep I grabbed to save an afternoon ended up costing me weeks later when something upstream broke in some way I had no mental model for. LLM generated code has the same problem now. Looks fine until you hit a case it doesn't cover and you're trying to reverse engineer what you let it write.
macintux
Some similarly-titled (but less tidily-presented) posts that have appeared on HN in the past, none of which generated any discussion:
Another commenter WillAdams has mentioned A Philosophy of Software Design (which should really be called A Set of Heuristics for Software Design) and one of the key concepts there are small (general) interfaces and deep implementations.
A similar heuristic also comes up in Elements of Clojure (Zachary Tellman) as well, where he talks about "principled components and adaptive systems".
The general idea: You should greatly care about the interfaces, where your stuff connects together and is used by others. The leverage of a component is inversely proportional to the size of that interface and proportional to the size of its implementation.
I think the way that connects to testing is that architecturally granular tests (down the stack) are a bit like pouring molasses into the implementation, rather than focusing on what actually matters, which is what users care about: the interface.
Now of course we as developers are the users of our own code, and we produce building blocks that we then use to compose entire programs. Having example tests for those building blocks is convenient and necessary to some degree.
However, what I want to push back on is the implied idea of having to hack apart or keep apart pieces so we can test them with small tests (per method, function etc.) instead of taking the time to figure out what the surface areas should be and then testing those.
If you need hyper granular tests while you're assembling pieces, then write them (or better: use a REPL if you can), but you don't need to keep them around once your code comes together and you start to design contracts and surface areas that can be used by you or others.
noduerme
I'd like to propose a corollary to Gall's Law. Actually, it's a self-proving tautology already contained within the term "lifecycle": any system that lasts longer than a single lifecycle oscillates between (reducing to) simplicity and (adding) complexity.
My bet is on the long arc of the universe trending toward complexity... but in spite of all this, I don't think all this complexity arises from a simple set of rules, and I don't think Gall's law holds true. The further we look at the rule-set for the universe, the less it appears to be reducible to three or four predictable mechanics.
wesselbindt
Two of my main CAP theorem pet peeves happen on this page:
- Not realizing it's a very concrete theorem applicable in a very narrow theoretical situation, and that its value lies not in the statement itself but in the way of thinking that goes into the proof.
- Stating it as "pick any two". You cannot pick CA. Under the conditions of the CAP theorem it is immediately obvious that CA implies you have exactly one node. And guess what, then you have P too, because there's no way to partition a single node.
A much more usable statement (which is not a theorem but a rule of thumb) is: there is often a tradeoff between consistency and availability.
toolslive
Maybe add: "the universe is winning" (in the design department).
Full quote: "Software engineers try to build idiot-proof systems, while the universe creates bigger and better idiots to break them. So far, the universe is winning."
mchl-mumo
I find myself guilty of giving overly ambitious timelines even when I try to take that into account.
cientifico
There is one missing that I have been using as my primary one for the last 5 years: the UX pyramid, applied to DX.
It basically states that you should not focus on making something enjoyable or convenient if you don't yet have something that is usable, reliable, and at least remotely functional.
I don't see a really important one, in my opinion:
Refactor legacy code, don't rewrite it. All that cruft you see is bug fixes.
Because rewriting old, complex code is way more time-consuming than you think it will be. You have to add back not only the same features, but all the corner cases your system ran into in the past.
I've seen this myself. A large team spent an entire year of wasted effort on a clean rewrite of a key system (the shopping cart at a high-volume website) that never worked...
...although, in the age of AI, I wonder if a rewrite would be easier than in the past. Still, I'm guessing even then it'd be better for the AI to refactor first as a basis for reworking the code, rather than doing a clean rewrite from scratch.
quantum_state
Unfortunately, violating any of these laws doesn't seem to have immediate consequences. That's why the IT industry is in ruins.
lifeisstillgood
Just throwing one of my favourites in:
As JFK never said:
“We do these things, not because they are easy,
but because we thought they would be easy.”
darccio
I wonder if other professions/fields have such an ingrained tendency to create laws/aphorisms. I'm biased as a software engineer, but it seems to me this is more common in computer science than elsewhere.
hintymad
With the current AI wave, a fun question to ask is: which of these laws do people think no longer apply?
bpavuk
> This site was paused as it reached its usage limits. Please contact the site owner for more information.
> The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
It should be 90% code - 10% time / 10% code - 90% time
invalidSyntax
I just wish this was a requirement to get a job. Everyone needs to know this.
grahar64
Some of these laws are like gravity: inevitable things you can fight but that will always exist, e.g. increasing complexity. Others are laws that, if you break them, people will yell at you or at least respect you less, e.g. leave it cleaner than you found it.
Any time someone quotes a law named after some random person, it reads like a stuffy "I know something you don't." Amdahl is probably the only name here that deserves it, and his is a real law. I'd be fine with Eric Brewer putting his name on CAP too, also a real law.
YAGNI and "you will ship the org chart" are the two most commonly useful things to remember, but they aren't laws.
sunkeeh
Good luck following the Dilbert Principle xD
Just because some things were observed frequently during a certain period doesn't mean they're a "Law" or even a "Principle"; they're merely trends.
jaggederest
TANSTAAFL was always one of my favorites - there ain't no such thing as a free lunch
Antibabelic
Software engineering is voodoo masquerading as science. Most of these "laws" are just things some guys said and people thought "sounds sensible". When will we have "laws" that have been extensively tested experimentally in controlled conditions, or "laws" that will have you in jail for violating them? Like "you WILL be held responsible for compromised user data"?
bronlund
Pure gold :) I'm missing one though; "You can never underestimate an end user.".
HoldOnAMinute
I like it, but this could have been a tab delimited text file.
Waterluvian
I think it would be cool to have these shown at random as my phone’s “screensaver”
This one belongs to history books, not to the list of contemporary best practices.
alsetmusic
Their statement of Dunning-Kruger is overly simplified such as to misdefine it:
> The less you know about something, the more confident you tend to be.
From the first line on the wiki article:
> systematic tendency of people with low ability in a specific area to give overly positive assessments of this ability.
Or, said another way: the more you know about something, the more complexities you're aware of, and the better you can assess topics that involve it. At least, that's how I understand it in a nutshell, without explaining the experiments run and the observations that led to the findings.
yesitcan
None of these things matter anymore. All you need is vibe.
matt765
I love this
clauderx
Ah yes my favorite - Conway's Law is just a fancy way of saying "your architecture is whatever your political mess of a org chart accidentally produced, and everyone calls it 'design' afterward to avoid fixing it."
bofia
It would be nice to see what overlaps
cogman10
Uhh, I knew I wasn't going to like this one when I read it.
> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.
This example is exactly where people wrongly, almost obstinately, apply the "premature optimization" principle.
I'm not saying you should write a custom hash table whenever you need to search. I am saying there's a 99% chance your language's standard library already has a built-in hash table data structure.
The code to use that data structure versus an array is nearly identical, and not the least bit harder to read or understand.
And the reason you should just do the optimization is that when I've had to fix performance problems, it's almost always been because people put in nested linear searches, turning what could have been O(n) into O(n^3).
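As a hedged illustration of that failure mode, here is a Python sketch (names invented): the two versions are nearly identical to write, but the list version does a linear scan per element while the set version does an average O(1) hashed lookup.

```python
def match_linear(orders, valid_ids):
    # 'in' on a list is a linear scan, so this nested search is
    # O(len(orders) * len(valid_ids)).
    return [o for o in orders if o in valid_ids]

def match_hashed(orders, valid_ids):
    # Same shape of code, but building a set once makes each
    # membership test O(1) on average.
    valid = set(valid_ids)
    return [o for o in orders if o in valid]

print(match_linear([1, 2, 3, 4], [2, 4, 6]))  # [2, 4]
print(match_hashed([1, 2, 3, 4], [2, 4, 6]))  # [2, 4]
```

The hashed version is no harder to read, which is the comment's point: the "optimization" costs nothing in clarity.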
But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that is exactly the sort of thing he wrapped into "good design".
When Knuth wrote about not doing premature optimizations, he was living in an era when compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid a branch instruction, or hand-inlining functions to avoid call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are ones that compilers today do by default.
I really hate that people have taken this to mean "Never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.
Divergence42
Fascinating, and I agree with many of the laws. In a one-person, agent-only company this hits a bit different.
exiguus
This website should be a json file
amelius
No laws related to AI?
0xbadcafebee
A law of physics is inviolable.... A law of software engineering is a hot take.
Here's another law: the law of Vibe Engineering. Whatever you feel like, as long as you vibe with it, is software engineering.
lenerdenator
"No matter how adept and talented you are at your craft with respect to both technical and business matters, people involved in finance will think they know better."
That one's free.
asdfman123
> Leave the code better than you found it
In most places, people don't follow this rule, because it ensures you're either working an extra 10-20 hours a week to keep things clean, or stuck at mid-level for not making enough impact.
I choose the second option. But I see people who utterly trash the codebase get ahead.
James_K
I feel that Postel's law probably holds up the worst out of these. While being liberal with the data you accept can seem good for the functioning of your own application, the broader social effect is negative. It promotes misconceptions about the standard into informal standards of their own to which new apps may be forced to conform. Ultimately being strict with the input data allowed can turn out better in the long run, not to mention be more secure.
blauditore
Many of the "teams" laws are BS, especially the ones about promotions and management. I've never been a manager or a high-level executive, but it's not that all of them are either non-technical or bad managers; it's just that the combination of both skills is rare.
duc_minh
Is it just me seeing the following?
Site not available
This site was paused as it reached its usage limits. Please contact the site owner for more information.
AtNightWeCode
"Polishing a turd" is missing. Making something slightly better when it should be removed. Like running Flash apps in 2026.
>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.
Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.
Moreover:
>Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.
I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.
For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.
Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.
rapatel0
The list is great, but the explanations are clearly AI slop.
"Before SpaceX, launching rockets was costly because industry practice used expensive materials and discarded rockets after one use. Elon Musk applied first-principles thinking: What is a rocket made of? Mainly aluminum, titanium, copper, and carbon fiber. Raw material costs were a fraction of finished rocket prices. From that insight, SpaceX decided to build rockets from scratch and make them reusable."
Everything, including humans, is made of cheap materials, but that doesn't convey the value. The AI got close to the answer with its first sentence (reusability), but it clearly missed the mark.
andreygrehov
`Copy as markdown` please.
Lapsa
reminder - there's tech out there capable of reading your mind remotely
garff
Mad AI slop..
bakkerinho
> This site was paused as it reached its usage limits. Please contact the site owner for more information.
Law 0:
Fix infra.
threepts
I believe there should be one more law here, telling you not to believe this baloney and spend your money on Claude tokens.
> Premature optimization is the root of all evil.
There are few principle of software engineering that I hate more than this one, though SOLID is close.
It is important to understand that it is from a 1974 paper, computing was very different back then, and so was the idea of optimization. Back then, optimizing meant writing assembly code and counting cycles. It is still done today in very specific applications, but today, performance is mostly about architectural choices, and it has to be given consideration right from the start. In 1974, these architectural choices weren't choices, the hardware didn't let you do it differently.
Focusing on the "critical 3%" (which imply profiling) is still good advice, but it will mostly help you fix "performance bugs", like an accidentally quadratic algorithms, stuff that is done in loop but doesn't need to be, etc... But once you have dealt with this problem, that's when you notice that you spend 90% of the time in abstractions and it is too late to change it now, so you add caching, parallelism, etc... making your code more complicated and still slower than if you thought about performance at the start.
Today, late optimization is just as bad as premature optimization, if not more so.
I’m missing Curly’s Law: https://blog.codinghorror.com/curlys-law-do-one-thing/
“A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”
Remember that these "laws" contain so many internal contradictions that when they're all listed out like this, you can just pick one that justifies what you want to justify. The hard part is knowing which law break when, and why
Laws of Software Engineering (2026 Update)
- Every website will be vibecoded using Claude Opus
This will result in the following:
- The background color will be a shade of cream, to properly represent Anthropic
- There will be excessive use of different fonts and weights on the same page, as if a freshman design student who just learned about typography
- There will be an excess of cards in different styles, a noteworthy amount of which has a colored, round border either on hover or by default on exactly one side of the card
I did not see Boyd’s Law of Iteration [0]
"In analyzing complexity, fast iteration almost always produces better results than in-depth analysis."
Boyd invented the OODA loop.
[0]https://blog.codinghorror.com/boyds-law-of-iteration/
What do you call the law that you violate when you vibe code an entire website for "List of 'laws' of software engineering" instead of just creating a Wikipedia page for it
> When I first started, I was enamored with technology and programming and computer science. I’m over it.
Wow, that is incredibly sad to hear. I'm 40+ years in, and still love all of that.
I love Kernighan’s Law:
> "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"
I know it's not software-engineering-only, but Chesterton's Fence is often the first 'law' I teach interns and new hires: https://fs.blog/chestertons-fence/
Not a law, but a design principle that I've found to be one of the most useful and also one of the least known:
Structure code so that in an ideal case, removing a functionality should be as simple as deleting a directory or file.
The conservation of Complexity (Tesler) seems immediately insightful to me just as a sentence:
But then the explanation seems to devolve into a trite suggestion not to burden your users. That doesn't interest me, because users need the level of complexity they need and no more, whatever you're doing, and making it less turns your application into an inflexible toy. So this is all, to a degree, obvious.

I think it's more useful to remember, when you're refactoring, that if you try to make one bit of a system simpler, you often just make another part more complex. Why write something twice, only to end up with it just as bad the other way round?
Half of these are not about software engineering; they're just general management principles.
This list is missing my personal law, Kasting's Law:
Asking "who wrote this stupid code?" will retroactively travel back in time and cause it to have been you.
Nice to have these all collected and shareable in one place. For the amusement of HN, let me add one I've become known for at my current work, for saying to juniors who are overly worried about DRY:
> Fen's law: copy-paste is free; abstractions are expensive.
edit: I should add, this is aimed at situations like when you need a new function that's very similar to one you already have, and juniors often assume it's bad to copy-paste so they add a parameter to the existing function so it abstracts both cases. And my point is: wait, consider the cost of the abstraction, are the two use cases likely to diverge later, do they have the same business owner, etc.
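A hypothetical sketch of the situation described above: the flag-parameter "abstraction" juniors reach for, versus the copy-paste the law permits (all function names invented for illustration):

```python
# Premature abstraction: one function, a flag, two behaviors that
# may diverge later and now share one owner.
def format_price(amount, for_invoice=False):
    if for_invoice:
        return f"{amount:.2f} EUR (incl. VAT)"
    return f"{amount:.2f} EUR"

# Copy-paste first: two small functions, each free to evolve.
def format_cart_price(amount):
    return f"{amount:.2f} EUR"

def format_invoice_price(amount):
    return f"{amount:.2f} EUR (incl. VAT)"
```

If the two cases never diverge, merging them later is cheap; unwinding a wrong abstraction is not.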
This is the best comment on this article but it was deleted for some reason.
"The meta-law of software engineering: All laws of software engineering will be immediately misinterpreted and mindlessly applied in a way that would horrify their originators. Now that we can observe the behaviour of LLMs that are missing key context, we can understand why."
Or: you can't boil decades of wisdom and experience down into a pithy one-sentence quote.
A few extra from my own notes-
- Shirky Principle: Institutions will try to preserve the problem to which they are the solution
- Chesterton's Fence: Changes should not be made until the reasoning behind the current state of affairs is understood
- Rule of Three: Refactoring given only two instances of similar code risks selecting a poor abstraction that becomes harder to maintain than the initial duplication
My own personal law is:
When it comes to frameworks (any framework), any jargon not explicitly pointing to numbers always eventually reduces down to some highly personalized interpretation of "easy".
It is more impactful than it sounds, because it implicitly points to a distinction of ultimate goal: the selfish developer, or the product they are developing. It is also worth pointing out that before software frameworks were a thing, the term "framework" just identified a defined set of overlapping abstract business principles to achieve a desired state. Software frameworks, on the other hand, provide a library to impose a design convention rather than a desired operating state.
For anyone reading this: learn software engineering from people who do software engineering. Just read textbooks, which are written by people who actually do things.
> YAGNI (You Aren't Gonna Need It)
This one is listed as design, but it could just as easily count as architecture. Guessing a lot of developers have worked on scaling with lambda functions or a complex IaC setup when a simple API running on a small VPS would have done the trick, at least until enough people are using the application for it to be considered profitable.
People use the premature optimization principle in exactly the wrong way these days. Knuth's full quote is, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." That 97%/3% split is the whole point.
People bring it up to argue for never thinking about performance, which flips the intent on its head. The real takeaway is that you need to spot that critical 3% early enough to build around it, and that means doing some optimization thinking up front, not none at all.
SOLID being included immediately makes me have zero expectation of the list being curated by someone with good taste.
Calling them 'laws' is always a bit of a stretch. They are more like useful heuristics. The real engineering part is knowing exactly when to break them.
Love the details sub-pages. Over 20 years I collected a little list of specific laws, really observations (https://metamagic.substack.com/p/software-laws), and thought about turning each into a detailed blog post, but it has been more fun chatting with other engineers, showing them the page, and watching as they scan the list and inevitably tell me a great story. For example, I could do a full write-up on the math behind this one, but it is way more fun hearing the stories about trying and failing to get a second rewrite of code.
9. Most software will get at most one major rewrite in its lifetime.
You know, I mention this stuff all the time in various meetings and discussions. I read a lot on Hacker News and have years of accumulated knowledge from the various colleagues I've worked with. It's nice to have a little reference sheet.
I like to replace the bus factor with the Lottery Factor.
I actually had a colleague run over by a bus on the way to work in London; he was very lucky and made a full recovery.
His head was poking out under the main exit of the bus.
I learned about Dunbar's number (~150), the size of a community in which everyone knows each other's identities and roles, in an anthropology class.
You can ask someone to write down the name of everyone they can think of, real or fictional, living or dead, and most people will not make it to 250.
Some individuals, like professional gossip columnists or some politicians, can remember as many as 1,000 people.
Today, I was presented with Claude's decision to include numerous goto statements in a new implementation. I thought deeply about their manual removal; years of software laws went against what I saw. But then, I realized it wouldn't matter anymore.
Then I committed the code and let a second AI review it. It, too, had no problem with the gotos.
Claude's Law: The code that is written by the agent is the most correct way to write it.
> Get it working correctly first, then make it fast, then make it pretty.
Or develop the skill to make it correct, fast, and pretty in one or two passes.
One that is missing is Ousterhout’s rule for decomposing complexity:
The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity. Great stuff!
Where's Chesterton's Fence?
https://en.wiktionary.org/wiki/Chesterton%27s_fence
[EDIT: Ninja'd a couple of times. +1 for Shirky's principle]
Hot take - I hate YAGNI. My personal pet peeve is when someone says YAGNI to a structure in the code they perceive as "more complex than they would have done it".
Sure, don't add hooks for things you don't immediately need. But if you are reasonably sure a feature is going to be required at some point, it doesn't hurt to organize and structure your code in a way that makes those hooks easy to add later on.
Worst case scenario, you are wrong and have to refactor significantly to accommodate some other feature you didn't envision. But odds are you have to do that anyway if you abide by YAGNI as dogma.
The number of times I've heard YAGNI used as a reason not to modularize code is insane. There needs to be a law that well-intentioned developers will constantly misuse and misunderstand the ideas behind these heuristics in surprising ways.
Great! Do principles fit? If so, considering the presence of the "Bus Factor", I believe "Chesterton's Fence" should be listed too.
I would add also Little's law for throughput calculation https://en.wikipedia.org/wiki/Little%27s_law
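Little's law is one of the few items here with actual arithmetic behind it: L = λW, i.e., the average number of items in a system equals the arrival rate times the average time each item spends in the system. A tiny sketch with assumed example numbers:

```python
def avg_in_system(arrival_rate, avg_time_in_system):
    """Little's Law: L = lambda * W."""
    return arrival_rate * avg_time_in_system

# Hypothetical service: 50 requests/s arriving, 0.2 s average latency
# => on average 10 requests in flight at any moment, which bounds
# the concurrency (threads, connections) the service must sustain.
in_flight = avg_in_system(50, 0.2)
```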
Not sure that Linus would actually agree with Linus' law. So it's a bad name. Call it the ESR observation or something else.
Separately, I'd add Rust's API design principles, though they're more of an adjunct with things in common. https://gist.github.com/mjball/9cd028ac793ae8b351df1379f1e72...
Nice site, but missing the Law of Conservation of Misery.
Visual list of well-known aphorisms and so forth.
A couple are well-described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout
https://www.goodreads.com/en/book/show/39996759-a-philosophy...
(and of course Brooks' Law is from _The Mythical Man-Month_)
Curious if folks have recommendations for less well-known books which cover these, other than the _Laws of Software Engineering_ book which the site is an advertisement for...
An extension to Zawinski's Law, every web service attempts to expand until it becomes a social network.
On my laptop I have a yin-yang with DRY and YAGNI replacing the dots.
"This site was paused as it reached its usage limits. Please contact the site owner for more information."
I wish AWS/Azure had this functionality.
The Law of Leaky Abstractions. What is a "leaky" abstraction? How does it "leak"?
I wonder if it should be called the "Law of Leaky Metaphors" instead. A metaphor is not the same thing as an abstraction. I can understand a "leaky metaphor" as something that does not quite hold up, at least not in all aspects. But what would be a good EXAMPLE of a leaky abstraction?
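One classic answer is a lazy-loading ORM: the code reads like plain in-memory list work, but the abstraction leaks its implementation as per-item I/O latency. A toy sketch (all classes hypothetical, with a fake DB standing in for the network):

```python
class FakeDB:
    """Stand-in for a remote database that counts round trips."""
    def __init__(self, rows):
        self.rows, self.calls = rows, 0
    def query(self, row_id):
        self.calls += 1                 # each call = one "round trip"
        return self.rows[row_id]

class LazyRow:
    """ORM-style row: attribute access hides a query."""
    def __init__(self, db, row_id):
        self._db, self._id = db, row_id
    @property
    def name(self):
        return self._db.query(self._id)  # hidden I/O per access

db = FakeDB({0: "ada", 1: "grace"})
people = [LazyRow(db, i) for i in db.rows]
names = [p.name for p in people]  # looks like O(n) memory work...
# ...but it issued one "query" per row: the abstraction leaked.
```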
A lot of these “laws” seem obvious individually, but what’s interesting is how often we still ignore them in practice.
Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations.
I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.
Remember: just because people repeated it enough times that it made this list does not mean it's true. There may be some truth in most of these, but none of them are "laws". They are aphorisms: punchy one-liners that try to distill things as complex as human interaction and software design.
The one I keep coming back to is "code you didn't write is code you can't debug." Every fancy dep I grabbed to save an afternoon ended up costing me weeks later when something upstream broke in some way I had no mental model for. LLM generated code has the same problem now. Looks fine until you hit a case it doesn't cover and you're trying to reverse engineer what you let it write.
Some similarly-titled (but less tidily-presented) posts that have appeared on HN in the past, none of which generated any discussion:
* https://martynassubonis.substack.com/p/5-empirical-laws-of-s...
* https://newsletter.manager.dev/p/the-unwritten-laws-of-softw..., which linked to:
* https://newsletter.manager.dev/p/the-13-software-engineering...
Time to mention my tongue-in-cheek law:
> When describing phenomena in the social world
> Software Engineers gravitate towards eponymous 'laws'.
https://pincketslaw.com/
I like this collection. It's nicely presented and at least at a glance it adds some useful context to each item.
While browsing it, I of course found one that I disagree with:
Testing Pyramid: https://lawsofsoftwareengineering.com/laws/testing-pyramid/
I think this is backwards.
Another commenter, WillAdams, has mentioned A Philosophy of Software Design (which should really be called A Set of Heuristics for Software Design), and one of its key concepts is small (general) interfaces with deep implementations.
A similar heuristic also comes up in Elements of Clojure (Zachary Tellman) as well, where he talks about "principled components and adaptive systems".
The general idea: You should greatly care about the interfaces, where your stuff connects together and is used by others. The leverage of a component is inversely proportional to the size of that interface and proportional to the size of its implementation.
I think the way that connects to testing is that architecturally granular tests (down the stack) are a bit like pouring molasses into the implementation, rather than focusing on what actually matters, which is what users care about: the interface.
Now of course we as developers are the users of our own code, and we produce building blocks that we then use to compose entire programs. Having example tests for those building blocks is convenient and necessary to some degree.
However, what I want to push back on is the implied idea of having to hack apart or keep apart pieces so we can test them with small tests (per method, function etc.) instead of taking the time to figure out what the surface areas should be and then testing those.
If you need hyper granular tests while you're assembling pieces, then write them (or better: use a REPL if you can), but you don't need to keep them around once your code comes together and you start to design contracts and surface areas that can be used by you or others.
I'd like to propose a corollary to Gall's Law. Actually, it's a self-proving tautology already contained within the term "lifecycle": any system that lasts longer than a single lifecycle oscillates between (reducing to) simplicity and (adding) complexity.
My bet is on the long arc of the universe trending toward complexity... but in spite of all this, I don't think all this complexity arises from a simple set of rules, and I don't think Gall's law holds true. The further we look at the rule-set for the universe, the less it appears to be reducible to three or four predictable mechanics.
Two of my main CAP theorem pet peeves happen on this page:
- Not realizing it's a very concrete theorem applicable in a very narrow theoretical situation, and that its value lies not in the statement itself but in the way of thinking that goes into the proof.
- Stating it as "pick any two". You cannot pick CA. Under the conditions of the CAP theorem it is immediately obvious that CA implies you have exactly one node. And guess what, then you have P too, because there's no way to partition a single node.
A much more usable statement (which is not a theorem but a rule of thumb) is: there is often a tradeoff between consistency and availability.
maybe add: "the universe is winning" (in the design department). Full quote: "software engineers try to build "idiot-proof" systems, while the universe creates "bigger and better idiots" to break them. So far, the universe is winning"
I find myself guilty of giving overambitious timelines even when I try to take that into account.
There is one missing that I have been using as my primary for the last 5 years:
the UX pyramid, but applied to DX.
It basically states that you should not focus on making something enjoyable or convenient before you have something that is usable, reliable, and at least remotely functional.
https://www.google.com/search?q=ux+pyramid
It's proverbs, not laws
Most of them are also wrong
Don't see a really important one, in my opinion: refactor legacy code, don't rewrite it. All that cruft you see is bug fixes.
Because rewriting old, complex code is way more time-consuming than you think it'll be. You have to add back not only the same features, but all the corner cases your system ran into in the past.
Have seen this myself: a large team spent an entire year of wasted effort on a clean rewrite of a key system (the shopping cart at a high-volume website) that never worked. Although, in the age of AI, I wonder if a rewrite would be easier than in the past. Still, guessing even then it'd be better to have the AI refactor it first as a basis for reworking the code, as opposed to having the AI do a clean rewrite from scratch.
Unfortunately, violations of these laws don't seem to have immediate consequences. That's why the IT industry is in ruins.
Just throwing one of my favourites in:
As JFK never said:
“””We do these things, not because they are easy,
But because we thought they would be easy”””
I wonder if other professions/fields have such an ingrained tendency to create laws/aphorisms. I'm biased as a software engineer, but it seems to me this is more common in computer science than elsewhere.
With the current AI wave, a fun question to ask is: which of these laws do people think no longer apply.
> This site was paused as it reached its usage limits. Please contact the site owner for more information.
ha, someone needs to email Netlify...
It's proverbs, not laws
The wadsworth constant is missing :| https://www.reddit.com/r/reddit.com/comments/kxtzp/and_so_th...
There is also https://hacker-laws.com.
It's strange to see many non-software related "laws" here, such as the Dilbert Principle, but not Internet cornerstones such as Godwin's Law.
Since the site is down, you can use the archive.org link:
https://web.archive.org/web/20260421113202/https://lawsofsof...
There is a small typo in the Ninety-Ninety Rule:
> The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
It should be 90% code - 10% time / 10% code - 90% time
I just wish this was a requirement for getting a job. Everyone needs to know this.
Some of these laws are like gravity: inevitable things you can fight but that will always exist, e.g. increasing complexity. Others are laws that, if you break them, people will yell at you, or at least respect you less, e.g. leave it cleaner than you found it.
Is it not the same as https://github.com/dwmkerr/hacker-laws ?
Knuth's Optimization Principle: The computer scientist Rod Burstall had a pithy way of saying this: "Efficiency is the enemy of clarity"
In the 25-odd years I've developed software, I learnt all the rules the hard way.
Relax. You will make all the mistakes because the laws don't make sense until you trip over them :)
Comment your code? Yep. Helped me ten years later working on the same codebase.
You can't read a book about best practices and then apply them, as if wisdom were something you can be told :)
It is like telling kids, "If you do this you will hurt yourself" YMMV but it won't :)
It's missing:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
Any time someone quotes a law named after some random person, it looks like a stuffy "I know something you don't." Amdahl is probably the only name here that deserves it, and it's a real law. I'd be fine if Eric Brewer put his name on CAP too, also a real law.
YAGNI and "you will ship the org chart" are the two most commonly useful things to remember, but they aren't laws.
Good luck following the Dilbert Principle xD
Just because some things were observed frequently during a certain period, doesn't mean it's a "Law" or even a "Principle"; it's merely a trend.
TANSTAAFL was always one of my favorites - there ain't no such thing as a free lunch
Software engineering is voodoo masquerading as science. Most of these "laws" are just things some guys said and people thought "sounds sensible". When will we have "laws" that have been extensively tested experimentally in controlled conditions, or "laws" that will have you in jail for violating them? Like "you WILL be held responsible for compromised user data"?
Pure gold :) I'm missing one though; "You can never underestimate an end user.".
I like it, but this could have been a tab delimited text file.
I think it would be cool to have these shown at random as my phone’s “screensaver”
Oh dear, not again: https://lawsofsoftwareengineering.com/laws/brooks-law/
This one belongs to history books, not to the list of contemporary best practices.
Their statement of Dunning-Kruger is oversimplified to the point of misdefining it:
> The less you know about something, the more confident you tend to be.
From the first line on the wiki article:
> systematic tendency of people with low ability in a specific area to give overly positive assessments of this ability.
Or, said another way: the more you know about something, the more complexities you're aware of, and the better assessments you can make about topics involving it. At least, that's how I understand it in a nutshell, without explaining the experiments that were run and the observations that led to the findings.
None of these things matter anymore. All you need is vibe.
I love this
Ah yes my favorite - Conway's Law is just a fancy way of saying "your architecture is whatever your political mess of a org chart accidentally produced, and everyone calls it 'design' afterward to avoid fixing it."
It would be nice to see what overlaps
Uhh, I knew I wasn't going to like this one when I read it.
> Premature Optimization (Knuth's Optimization Principle)
> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.
This example is exactly the one where people wrongly, almost obstinately, apply the "premature optimization" principle.
I'm not saying you should write a custom hash table whenever you need to search. However, I am saying there's a 99% chance your language has a standard hash table data structure built into its standard library.
The code to use that datastructure vs using an array is nearly identical and not the least bit hard to read or understand.
And the reason you should just do the optimization is because when I've had to fix performance problems, it's almost always been because people put in nested linear searches turning what could have been O(n) into O(n^3).
But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that would have been exactly the sort of thing he wrapped into "good design".
When Knuth wrote about not doing premature optimizations, he was living in an era when compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid a branch instruction, or hand-inlining functions to avoid call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are the ones compilers today do by default.
I really hate that people have taken this to mean "Never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.
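A minimal sketch of the point above (data shapes invented for illustration): the hash-table version is barely longer than the nested scan, and it is the difference between O(n·m) and O(n + m).

```python
def match_orders_slow(orders, customers):
    """Nested linear search: O(n * m) comparisons."""
    out = []
    for o in orders:
        for c in customers:                      # linear scan per order
            if c["id"] == o["customer_id"]:
                out.append((o["id"], c["name"]))
    return out

def match_orders_fast(orders, customers):
    """Index once with a dict: O(n + m), and just as readable."""
    by_id = {c["id"]: c for c in customers}
    return [(o["id"], by_id[o["customer_id"]]["name"])
            for o in orders if o["customer_id"] in by_id]
```

Reaching for the dict up front is not "premature optimization"; it is the same amount of code with a better algorithm.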
Fascinating, and I agree with many of the laws. In a one-person, agent-only company this hits a bit different.
This website should be a json file
No laws related to AI?
A law of physics is inviolable.... A law of software engineering is a hot take.
Here's another law: the law of Vibe Engineering. Whatever you feel like, as long as you vibe with it, is software engineering.
"No matter how adept and talented you are at your craft with respect to both technical and business matters, people involved in finance will think they know better."
That one's free.
> Leave the code better than you found it
In most places people don't follow this rule, as it ensures you're either working an extra 10-20 hours a week to keep things clean, or stuck at mid-level for not making enough impact.
I choose the second option. But I see people who utterly trash the codebase get ahead.
I feel that Postel's law probably holds up the worst out of these. While being liberal with the data you accept can seem good for the functioning of your own application, the broader social effect is negative. It promotes misconceptions about the standard into informal standards of their own to which new apps may be forced to conform. Ultimately being strict with the input data allowed can turn out better in the long run, not to mention be more secure.
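The tradeoff described above can be sketched in a few lines (config schema and key names invented for illustration): the lenient parser swallows a typo; the strict one surfaces it immediately.

```python
ALLOWED_KEYS = {"port"}

def parse_lenient(cfg):
    """Postel-style: silently tolerate unknown or missing fields."""
    return {"port": int(cfg.get("port", 8080))}

def parse_strict(cfg):
    """Reject anything outside the spec, surfacing drift early."""
    unknown = set(cfg) - ALLOWED_KEYS
    if unknown:
        raise ValueError(f"unknown keys: {sorted(unknown)}")
    return {"port": int(cfg["port"])}

# A misspelled key slips through the lenient parser unnoticed,
# silently falling back to the default port:
parse_lenient({"prot": "80"})   # -> {"port": 8080}, typo hidden
```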
Many of the "teams" laws are BS, especially the ones about promotions and management. I've never been a manager or high-level executive, but it's not that all of them are either non-technical or bad managers. It's just that the combination of both skills is rare.
Is it just me seeing the following?
Site not available This site was paused as it reached its usage limits. Please contact the site owner for more information.
"Polishing a turd" is missing. Making something slightly better when it should be removed. Like running Flash apps in 2026.
Better list https://github.com/globalcitizen/taoup
Good list. Missing for me
- NIH
- GIGO
- Rule of 3
This is really good and comprehensive, thanks for sharing!
I like the website. Simple and snappy.
Calling these "laws" is a really really bad idea.
I have a lot of issues with this one:
https://lawsofsoftwareengineering.com/laws/premature-optimiz...
It leaves out this part from Knuth:
>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.
Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.
Moreover:
>Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.
I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.
For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.
Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.
The list is great, but the explanations are clearly AI slop.
"Before SpaceX, launching rockets was costly because industry practice used expensive materials and discarded rockets after one use. Elon Musk applied first-principles thinking: What is a rocket made of? Mainly aluminum, titanium, copper, and carbon fiber. Raw material costs were a fraction of finished rocket prices. From that insight, SpaceX decided to build rockets from scratch and make them reusable."
Everything, including humans, is made of cheap materials, but that doesn't convey the value. The AI got close to the answer in its first sentence (reusability), but it clearly missed the mark.
`Copy as markdown` please.
reminder - there's tech out there capable of reading your mind remotely
Mad AI slop..
> This site was paused as it reached its usage limits. Please contact the site owner for more information.
Law 0: Fix infra.
I believe there should be one more law here, telling you to not believe this baloney and spend your money on Claude tokens.