Do the simplest thing that could possibly work

874 points | 335 comments | 21 hours ago
codingwagie

I think this works in simple domains. After working in big tech for a while, I am still shocked by the required complexity. Even the simplest business problem may take a year to solve, and constantly break due to the astounding number of edge cases and scale.

Anyone proclaiming simplicity just hasn't worked at scale. Even rewrites that have a decade-old codebase to draw from often fail, due to the sheer number of things to consider.

A classic, Chesterton's Fence:

"There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”"

show comments
ternaryoperator

It's a shame he doesn't give the origin of this expression in programming. It comes from Ward Cunningham (inventor of the wiki) in his work with Kent Beck. In an interview a few years back on Dr. Dobb's, he stated that as the two of them were coding together in the late 80s, they would regularly remind each other of the principle. Eventually, it became a staple of their talks and writing.

They were cognizant of the limitations that are touched on in this article. The example they gave was of coming to a closed door. The simplest thing might be to turn the handle. But if the door is locked, then the simplest thing might be to find the key. But if you know the key is lost, the simplest thing might be to break down the door, and so on. Finding the simplest thing is not always simple, as the article states.

IIRC, they were aware that this approach would leave a patchwork of technical debt (a term coined by Cunningham), but the priority on getting code working overrode that concern at least in the short term. This article would have done well to at least touch on the technical debt aspect, IMHO.

show comments
GMoromisato

One of the ironies of this kind of advice is that it's best for people who already have a lot of experience and have the judgement to apply it. For instance, how do you know what the "simplest thing" is? And how can you be sure that it "could possibly work"?

Yesterday I had a problem with my XLSX importer (which I wrote myself--don't ask why). It turned out that I had neglected to handle XML namespaces properly because Excel always exported files with a default namespace.

Then I got a file that added a namespace to all elements and my importer instantly broke.

For example, Excel always outputs <cell ...> whereas this file has <x:cell ...>.

The "simplest thing that could possibly work" was to remove the namespace prefix and just assume that we don't have conflicting names.

But I didn't feel right about doing that. Yes, it probably would have worked fine, but I worried that I was leaving a landmine for future me.

So instead I spent 4 hours rewriting all the parsing code to handle namespaces correctly.
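
(For illustration, a minimal Go sketch of namespace-aware matching with encoding/xml; the namespace URI and element name here are stand-ins, not the commenter's actual code. The useful property is that Go's decoder resolves both a default namespace and an explicit prefix to the same xml.Name:)

    package sheet

    import (
        "encoding/xml"
        "io"
    )

    // Assumed spreadsheet namespace URI; check the files you actually parse.
    const ssNS = "http://schemas.openxmlformats.org/spreadsheetml/2006/main"

    // countCells matches elements by (namespace URI, local name), so
    // <cell xmlns="..."> and <x:cell xmlns:x="..."> are treated identically.
    func countCells(r io.Reader) (int, error) {
        dec := xml.NewDecoder(r)
        n := 0
        for {
            tok, err := dec.Token()
            if err == io.EOF {
                return n, nil
            }
            if err != nil {
                return n, err
            }
            se, ok := tok.(xml.StartElement)
            if ok && se.Name.Space == ssNS && se.Name.Local == "cell" {
                n++
            }
        }
    }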

Whether or not you agree with my choice here, my point is that doing "the simplest thing that could possibly work" is not that easy. But it does get easier the more experience you have. Of course, by then, you probably don't need this advice.

show comments
hinkley

One of the biggest, evergreen arguments I’ve had in my career revolves around the definition of “works”.

“Just because it works doesn’t mean it isn’t broken” is an aphorism that seems to click for people who are also handy in the physical world, but that many software developers think doesn’t sound right. Every handyman has at some time used a busted tool to make a repair. They know they should get a new one, and many will make an excuse to do so at the next opportunity (a hardware store trip, or a sale). Maybe eight out of ten.

In software it’s probably more like one out of ten who will make the equivalent effort.

show comments
zkmon

Principles like these deserve to be part of the curriculum for undergrad courses. People are incorrectly trained to go for ideal forms at the cost of complexity and fragility. Every approach and idea should be put to a ruthless cost-benefit comparison, without any regard to who is proposing it or how it sounds.

Education and training sometimes enforce prejudices, rules, and stigmas that evade inspection of the subject matter in raw form.

The preference for idealism probably emerged from peaceful times with no struggle. Someone can be obsessed with the perfection of a sculpture only when they don't need to hunt for the next meal. The real world runs on minimal, conservative, durable, and robust approaches.

sfpotter

Generally speaking, when I hear people say this, it's a huge red flag. Really, any time anyone puts forth any kind of broad proclamation about how software development should be done, my hackles go up. Either they don't know what they're talking about, they're full of shit, or both. The only reasonable thing to conclude after lots of experience with software development is that it's hard and requires care and deliberation. There is no one-size-fits-all advice. What I want to see is people who are open-minded and thoughtful.

show comments
GarnetFloride

It seems like a lot of people think that the first draft/prototype/whatever has to be perfect.

It doesn't. It never is. It can't be.

My favorite example of this was the Moon shot. Each step was about learning how to do just that one step. Mercury was just about getting into orbit (not easy even now with SpaceX, though they are standing on the shoulders of those giants). Then Gemini added multiple crew and orbital maneuvering (that experience gained them lots of learning), and even Apollo 8 was still a dress rehearsal, though they flew around the Moon.

Each step HAD to be simple because complexity weighed too much. But each of those simple steps was still wildly complex.

Every time I would dive in and code up something that I thought was easy, it would blow up in some weird way. I have found that doing each step individually and getting it right might sound like going really slow, but it was smoother, so it was faster in the end, because I wasn't chasing bugs in all the places, just one.

evelant

The more years I've been developing software (and it's been a while), the more I find that the answer is almost always "it depends". The definitions of "simple" and "complex" both depend on the case, the requirements, and the level of uncertainty about the future.

I've run into a lot of situations where doing the "simplest thing that could possibly work" ultimately led to mountains of pain and failure. I've also run into situations where things were over-engineered to account for situations that never came to be, resulting in similar pain.

It boils down to "it depends" -- a careful analysis of the requirements, tradeoffs, future possibilities, and a mountain of judgement based on experience.

For sure, err on the side of "simple" when there's uncertainty, but don't be dogmatic about it. Apply simple principles (loose coupling, pure functions, minimal state, no shared state, boring tools, and so on) to ensure that "the simplest thing that could possibly work" doesn't become "the simplest thing that used to work but is now utter hell in the face of the unexpected". It all depends.

jumploops

As someone who has built 0-1 systems at multiple startups (Seed to Series C), I’ve settled on one principle above all else:

“Simple is robust”

It’s easy to over-design a system up front, and even easier to over-design improvements to said system.

Customer requirements are continually evolving, and you can never really predict what the future requirements will be (even if it feels like you can).

Breaking down the principle: it’s not just that a simple system is less error-prone; it’s just as important that a simple architecture is easier to change in the future.

Should you plan for X, Y, and Z?

Yes, but counterintuitively, by keeping doors open for the future and building “the simplest thing that could possibly work.”

Complexity adds constraints, and these limitations make the stack more brittle over time, even when planned with the best intentions.

matthewsinclair

It's worth quoting "Gall's Law" [0] here:

> “A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”

I have this up on my wall and refer to it constantly (for both tech and non-tech challenges).

[0]: https://blog.prototypr.io/galls-law-93c8ef8b651e

egorfine

An extremely important classic to consider: https://www.joelonsoftware.com/2001/04/21/dont-let-architect...

dlenski

> A lot of engineers design by trying to think of the “ideal” system: something well-factored, near-infinitely scalable, elegantly distributed, and so on.

Was it Donald Knuth who said "premature optimization is the root of all evil"?

This article made this point very well, especially regarding the obsession with "scaling" in the SaaS world.

I've seen thousands and thousands of developer hours completely wasted, because developers were forced to massively overcomplicate greenfield code in anticipation of some entirely hypothetical future scaling requirement which either never materialized (95% of the time) or which did appear but in such a different form that the original solution only got in the way (remaining 5%).

John Ousterhout’s A Philosophy of Software Design makes the case for simplicity at book length. I really like how he emphasizes the importance of design simplicity for the maintainability of software; this is where I've seen it matter the most in practice.

show comments
boricj

I disagree. You prototype the simplest thing that could possibly work. Then, you design stuff so that you don't end up regretting it later.

I happen to have a recent example with an add-on card. For reasons that I won't get into, we needed both insurance against major rearchitecting and a way to leverage synergy with other product lines when it came to configuration. That led me to design a fairly intricate library that runs visitors against a data model, as well as a mechanism where the firmware dumps its entire state as a BSON document into a dedicated flash partition before upgrading. That gave us the peace of mind that whatever happens, we can always restore from that document and nuke everything else when booting a newer firmware for the first time.

The simplest thing that could possibly work would've been to do none of that and let future firmware versions deal with it. Instead, I designed it so that I wouldn't end up regretting it later, regardless of how the product evolved.

The only point I do regret was missing one piece of information to save in the initial release-to-manufacturing version. I had to put in a hack: if the saved document has version 1.0, go spelunking in the raw flash to retrieve that piece of information where it lived in version 1.0, and slipstream it into the document before processing it. Given the data storage mechanism of that platform, I'd be tearing my hair out dealing with multiple incompatible data layouts across firmware versions if I had done the simplest thing.
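
(A minimal sketch of that kind of version-gated slipstream migration, in Go with invented names; the real firmware presumably isn't written this way, this just shows the shape of the hack:)

    package firmware

    // SavedState is the state document dumped to flash before an upgrade.
    type SavedState struct {
        Version   string
        CalibData []byte // the field the 1.0 release forgot to save
    }

    // Migrate backfills fields that older document versions lack.
    // readRawFlash stands in for "go spelunking in the raw flash";
    // the offset and length are placeholders.
    func Migrate(s *SavedState, readRawFlash func(offset, length int) []byte) {
        if s.Version == "1.0" {
            // 1.0 never saved CalibData: recover it from its known
            // location in the old layout and slipstream it in.
            s.CalibData = readRawFlash(0x1000, 64)
            s.Version = "1.1"
        }
        // Later versions already contain everything; process as-is.
    }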

rossdavidh

I have worked at places where, for a website that would require one server (or at most two or three, for redundancy), Kubernetes and React and all the trappings of a humongous website were implemented. It took me a bit to figure out why we were doing all this for a website which would basically just publish some static HTML: it was for resume building. The lead dev and the project manager were not working on that project (a website which would never get all that big); they were working on a resume for FAANG.

Once you see this, you see it everywhere. 90% of the places using "modern" web technology are built as if they were anticipating FAANG scale, not because they are, but because the people building them hoped to be working at FAANG soon.

show comments
al_borland

“Everything should be made as simple as possible, but not simpler.”

As someone who has strived for this from early on, the problem the article overlooks is not knowing some of these various technologies everyone is talking about, because I never felt I needed them. Am I missing something I need and just ignorant of it, or is it just needless complexity that a lot of people fall for?

I don’t want to test these things out to learn them in actual projects, as I’d be adding needless complexity to systems for my own selfish ends of learning these things. I worked with someone who did this and it was a nightmare. However, without a real project, I find it’s hard to really learn something well and find the sharp edges.

show comments
spectraldrift

I agree with the spirit of the article, but I think the definition of "simple" has been inverted by modern cloud infrastructure. The examples create a false choice between a "simple but unscalable" system and a "complex but scalable" one. That is rarely the trade-off today.

The in-memory rate-limiting example is a perfect case study. An in-memory solution is only simple for a single server. The moment you scale to two, the logic breaks and your effective rate limit becomes N × limit. You've accidentally created a distributed state problem, which is a much harder issue to solve. That isn't simple.

Compare that to using a managed service like DynamoDB or ElastiCache. It provides a single source of truth that works correctly for one node or a thousand. By the author's own definition that "simple systems are stable" and require less ongoing work, the managed service is the fundamentally simpler choice. It eliminates problems like data loss on restart and the need to reason about distributed state.

Perhaps the definition of "the simplest thing" has just evolved. In 2025, it's often not about avoiding external dependencies. You will often save time by leveraging battle-tested managed services that handle complexity and scale on your behalf.
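
(As a concrete sketch of the "single source of truth" option: a fixed-window counter in Go against Redis via the go-redis client. The key format and limits are invented, and a production version would make the INCR/EXPIRE pair atomic, e.g. with a Lua script:)

    package ratelimit

    import (
        "context"
        "fmt"
        "time"

        "github.com/redis/go-redis/v9"
    )

    // Allow reports whether userID may make another request this minute.
    // The counter lives in Redis, so every app instance sees the same
    // count and the effective limit stays `limit` for one node or a thousand.
    func Allow(ctx context.Context, rdb *redis.Client, userID string, limit int64) (bool, error) {
        key := fmt.Sprintf("rl:%s:%d", userID, time.Now().Unix()/60)
        n, err := rdb.Incr(ctx, key).Result()
        if err != nil {
            return false, err
        }
        if n == 1 {
            // First hit in this window: expire the counter with the window.
            rdb.Expire(ctx, key, time.Minute)
        }
        return n <= limit, nil
    }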

show comments
daxfohl

I wholeheartedly agree with this. The challenge is perception though. Many managers will see a simple solution to a complex problem and dock you for not doing real engineering, whereas a huge convoluted mess to solve a simple problem (or non-problem) gets you promoted. And in design interviews, "I'd probably implement a counter in memory" would be the last time you ever heard from that company.

baronswindle

In my experience, simplicity can be a bit of a slippery concept. Often, people use the word “simple” to mean “intuitive to me”.

show comments
bwy

From https://nshipster.com/uncertainty/ recently: "Working in software, the most annoying part of reaching Senior level is having to say “it depends” all the time. Much more fun getting to say “let’s ship it and iterate” as Staff or “that won’t scale” as a Principal."

IIUC, the author is a Staff SWE, so this tracks.

See also "Worse is better" which has been debated a million times by now.

michalc

> real mastery often involves learning when to do less, not more

Really love and agree with this, and (shameless plug?) I think it really aligns with a way of working that I (and some colleagues) have been developing: https://delivervaluedaily.dev/

sltr

I'm surprised that so far nobody has mentioned Fred Brooks' essential vs. accidental complexity.

cmertayak

It nails the value of keeping things simple, and I think the link to cognitive load deserves even more emphasis. In most cases, simplifying means reducing cognitive load: sometimes by consolidating pieces with DRY, sometimes by using design patterns, sometimes by decomposing things into services...

Every extra detail or workaround increases the number of things you need to keep in your head, not just when building the system, but every time you come back to maintain or extend it. "Simple systems have fewer 'moving pieces': fewer things you have to think about when you're working with them."

Simplicity isn't just about getting the job done quickly; it's about making sure future you (or someone else) can actually understand and safely change the system later. Reducing cognitive load with simplicity pays off long after the job is done.

mactavish88

> A lot of engineers design by trying to think of the “ideal” system: something well-factored, near-infinitely scalable, elegantly distributed, and so on.

> Instead, spend that time understanding the current system deeply, then do the simplest thing that could possibly work.

I'd argue that a fair amount of the former results in the ability to do the latter.

There's a substantial amount of wisdom that goes into designing "simple" systems (simple to understand when reading the code). Just as there's a substantial amount of wisdom that goes into making "simple" changes to those systems.

mindcrime

IMO, the most important thing about this sort of advice (and maybe most advice) is to treat it as a "generally useful heuristic, subject to refinement based on judgment" and not as an "ironclad, immutable law of the kingdom, any transgression from which, will be severely punished".

Sure, try to keep things simple. Unless it doesn't make sense. Then make them less simple. Will you get it wrong sometimes? Yes. Does it matter? Not really. You'll be wrong sometimes no matter what you do, unless you are, in fact, the Flying Spaghetti Monster. You're not, so just accept some failures from time to time and - most importantly - reflect on them, try to learn from them, and expect to be better next time.

show comments
msephton

There is a sign on a wall at Apple that reads: "Simplify. Simplify. Simplify." (with the first two struck out) https://www.forbes.com/sites/kenmakovsky/2012/12/06/inside-a...

nickm12

This piece is arguing against what is often called "over-engineering" or unnecessary abstraction (YAGNI). Over-engineering is a problem in software systems because it generally takes longer to build and it makes the systems unnecessarily more difficult to understand. While a problem, I don't think it is the most important design issue I see in software systems.

The bigger problem I see is lack of abstraction, or systems where the components have too much knowledge of each other. These are fast to build and initially work just as well as the more "heavily engineered" systems. However as the code evolves to meet changing requirements, what happens is that code becomes more complex, either through even more coupling or through ad hoc abstractions that get implemented haphazardly.

It can be really difficult to know where to draw the lines, though. When you are trying to decide between a simpler and more complex option, I think it's worth starting with the simplest thing that could possibly work, but build it in a way that you can change that decision without impacting the rest of the system.

On a tangent, I'm not a fan of Uncle Bob (to put it mildly) but this topic makes me think of his quote "a good architecture allows decisions to be deferred". I would restate it as "a good architecture allows decisions to be changed in isolation".

daxfohl

"It’s fun to decouple your service into two pieces so they can be scaled independently (I have seen this happen maybe ten times, and I have seen them actually be usefully scaled independently maybe once)."

Same, or reliability-tiered separately. But in both respects I more frequently see the resulting system end up more expensive and less reliable.

BenoitEssiambre

This is good advice, but it can be difficult to define what simple means. The only technical way I was able to make sense of it is by aiming to reduce code entropy and scopes (inspired by how language models try to minimize Solomonoff/Kolmogorov entropy).

https://benoitessiambre.com/entropy.html https://benoitessiambre.com/integration.html

Nevermark

Looking over the threads running here, it is interesting how differently the article's title and point are being taken, filtered through different commenters' situations and experiences.

I don't see it as a blind prescription.

It doesn't imply that choosing what is simple will be simple. Or that the simplest thing will be simple. Or that this is a process uniquely immune from problems or tradeoffs.

Just a reminder to never forget to aim for simplest.

A tautological cookie fortune of something important we often functionally forget or slide on.

There is a lot of wisdom in recognizing and repeating the most important "mantras of the obvious". And listening to them reformulated, in other ways, by other people.

The greatest craftbeings never stop revisiting the basics.

egorfine

> What’s wrong with doing the simplest thing?

Staff. You've got developers and they will continue working on a product oftentimes way past the "perfect" stage.

Case in point: log aggregation services like Sentry and the like. It always starts with "it's so complex, let's make a sane log ingestion service with a simple web viewer" and then it inevitably spirals into an unfathomable pile of abstractions and mind-boggling complexity, to the point where it is literally no longer usable.

show comments
sdeframond

In civil engineering they say that "any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands."

0xbadcafebee

Hard, hard disagree.

First of all, simplicity is the hardest thing there is. You have to first make something complex, and then strip away everything that isn't necessary. You won't even know how to do that properly until you've designed the thing multiple times and found all the flaws and things you actually need.

Second, you will often have wildly different contexts.

- Is this thing controlling nuclear reactors? Okay, so safety is paramount. That means it can be complex, even inefficient, as long as it's safe. It doesn't need to be simple. It would be great if it was, but it's not really necessary.

- Is the thing just a script to loop over some input and send an alert for a non-production thing? Then it doesn't really matter how you do it, just get it done and move on to the next thing.

- Is this a product for customers intended to solve a problem for them, and there's multiple competitors in the space, and they're all kind of bad? Okay, so simplicity might actually be a competitive advantage.

Third, "the simplest thing that could possibly work" leaves a lot of money on the table. Want to make a TV show that is "the simplest thing that could possibly work"? Get an iPhone and record 3 people in an empty room saying lines. Publish a new episode every week. That is technically a TV show - but it would probably not get many views. Critics saying that you have "the simplest show" is probably not gonna put money in your pocket.

You want a grand design principle that always applies? Here's one: "Design for what you need in the near future, get it done on time and under budget, and also if you have the time, try to make it work well."

show comments
austin-cheney

Simplicity applies universally and is almost always more correct. Simple just means the option with fewer steps. Simple does not mean easy; therefore simplicity as a principle is always objective and measurable.

Where people fail on this most frequently is in reasoning in first-person pronouns, whether explicitly or implicitly. Simplicity is merely the result of a comparison of measures. It's not about you and has nothing to do with your opinion.

metanonsense

I would be very cautious about giving advice like this to my team. Making a thing simple is actually very hard, and many who hear the words may just equate "simple thing" with "first thing that comes to mind", which may eventually turn into a nightmare of complexity.

sesm

The author's definition of simplicity is self-contradictory: fewer moving pieces result in highly specialized monolithic systems, while breaking everything up into components with clear interfaces results in more moving pieces.

shellkr

I think I learned this from Arch Linux and their focus on the KISS principle (https://en.wikipedia.org/wiki/KISS_principle). This is also something that goes beyond the computing sphere. It is really a good principle for life in general.

logsr

> design the best system for what your requirements actually look like right now

this is the key practical advice. when you start designing for hypothetical use cases that may never happen you are opening up an infinite scope of design complexity. setting hard specifications for what you actually need and building that simplifies the design process, at least, and if you start with that kind of mindset one can hope that it carries over to the implementation.

the simplest things always win because simple is repeatable. not every simple thing wins (many are not useful or have defects) but the winners are always simple.

show comments
phendrenad2

I think there's a natural rhythm to complexity. After a tech crash, everyone abandons what are generally regarded as best practices and just writes scrappy code, i.e. "the simplest thing". Then there's a backlash against it, and consultants make a lot of money teaching people how to avoid "code smells". Then a bubble forms and code quality stops mattering much relative to marketing and sales effectiveness.

tanelpoder

When I'm on the fence about some (technical) decision, I use a "razor": if all options seem equal, go with whichever is simpler. The results are OK so far, and it has been great for reducing the brain energy I spend on pontification and optimizing too far ahead.

I liked the post, but these kinds of articles make sense to people who've already been through the trenches and can view the advice from a seasoned-experience PoV and apply it accordingly. People without such experience who follow it to the letter, just because it's written down, can be in for surprises.

bob1029

If we were willing to abandon all of our tooling and technology tribalism, it would become feasible to replace many B2B SaaS products with a scheduled ETL job that runs on an MSSQL or Oracle box once per day. Most business processes can be modeled as extract-transform-load. The business would generally agree that fancy spreadsheets on the server are much simpler than a custom codebase, especially when it is granted access to the data and involved in its schema design.

show comments
mcv

It's important to understand the difference between easy and simple. It's easy to add complexity; it can sometimes be hard to keep things simple.

But with that in mind, I do agree that a lot of systems are more complex than they need to be. I like to keep things simple.

Of course scalability adds complexity, and sometimes you need that. But you don't always need it, and making things scalable that don't need to be makes them harder to understand and maintain.

hiAndrewQuinn

On the meta level, the simplest thing that could possibly work is usually paying someone else to do it.

Alas, you do not have infinite money. But you can earn money by becoming this person for other people.

The catch-22 is that most people aren't going to hire the guy who bills himself as the guy who does the simplest thing that could possibly work. It turns out the complexities actually are often there for good reason. It's much more valuable to pay someone who has the ability to trade simplicity off against other desirable things.

show comments
Leo-thorne

We once rebuilt an old system and went with the simplest thing that could work. It ran great for the first few weeks, but then all kinds of edge cases started creeping in. We ended up spending more time patching things up.

Gehinnn

Is doing a refactoring ever the simplest thing that could have been done? I think "do the simplest thing" should be "do the thing that increases complexity the least" (which might be difficult to do and require restructuring).

yehat

That principle is valid outside the software engineering domain too. Except for German engineers...

arthurofbabylon

Useful principle. But… (sorry to make a simple phrase more complex) the notion just scratches the surface of complexity management.

I appreciate A Philosophy of Software Design by Ousterhout. I recently read it while rebuilding a text editor. Mind-blowing experience. There is a lot of opportunity to more tightly encapsulate logic, to more clearly abstract a system, to keep a system simple yet powerful and extensible. I believe I became twice as good a developer just by reading a chapter a day and sticking with the workflow.

underdeserver

Great advice.

I always felt software is like physics: Given a problem domain, you should use the simplest model of your domain that meets your requirements.

As in physics, your model will be wrong, but it should be useful. The smaller it is (in terms of information), the easier it is to expand if and when you need it.

mingtianzhang

This is also called Occam's razor.

oncallthrow

This just kicks the can down the road. What is "simple"? What does "works" mean?

show comments
travisgriggs

All too often I see this mantra used to justify doing the "easiest thing that could possibly work", which in my experience is not the same thing. They can overlap, and often upstream simplicity can create a downstream effect of ease for consumers. Simplicity often requires real effort to execute well.

hyperpape

> You should do that too! Suppose you’ve got a Golang application that you want to add some kind of rate limiting to...Actually, are you sure your edge proxy doesn’t support rate limiting already? Could you just write a couple of lines in a config file instead of implementing the feature at all?

As I'm doing the simplest thing that could possibly work, I do not have an edge proxy.

Of course, the author doesn't mean _that_ kind of simplicity. There are always hidden assumptions about which pieces of complexity are assumed, and don't count against your complexity budget.

cranx

I think a lot of engineers think, "I thought of a complicated set of abstract ideas that mean nothing to anyone else but will demonstrate my superior intellect", and then make that. It took a lot of self-control not to curse.

AlotOfReading

It's a pithy philosophy if you already know what it means to "work". You probably don't, especially if your system is human-facing. Figuring out what "works" means is almost as difficult as building the thing in the first place. You may as well commit to building it twice [0].

[0] https://ratfactor.com/cards/build-it-twice

bvirb

Very much agree for the type of software I've worked on my whole career. I've seen way more time and energy wasted by people trying to predict the future than by fixing bugs. In practice I think it's common not to realize something didn't "possibly work" until after it's already deployed, but keeping things simple makes it easy to fix. So this advice also ends up basically being "move fast and break things".

aryehof

> Do the simplest thing that could possibly work …

That advice these days surely means having an LLM vibecode a mess of something?

Is such obvious and unquantifiable advice actually useful?

bubblyworld

Simplicity is brittle. Perhaps we should take a page from nature. Nothing in nature is simple, and yet it has built some of the most robust systems (by far) that we know of.

Can your software run for millions of years?

show comments
bearjaws

You know what taught me this best? Watching MythBusters.

Time and time again, amazingly complex machines failed to perform better than a rubber band and bubble gum.

show comments
wanderingmind

This is consistent with Gall's law, which says that complex systems that work can only be achieved by building complexity on top of simple systems. Complex systems built from scratch do not work. So build the simplest system that works, and then keep adding complexity to it based on requirements.

show comments
sarchertech

Everything is a trade-off, and experience is knowing when it's worth it to add a hook for future expansion and when you can wait.

The problem is that when you let people without experience design things, they tend toward what I call "what-if driven development".

At every point in the design they ask what if I need to extend this later. Then they add a hook for that future expansion.

When we are training juniors, the best thing is not to show them all the best practices in Clean Code, because most of them gravitate towards overengineering anyway. If you give them permission and "best practices" to justify their desires, they write infuriatingly complicated layers upon layers of indirection. The thing to do is to teach them to do the simplest thing that works, just like this blog post says.

Will that result in some regrets when they build something that isn’t extensible? Absolutely! But it’s generally much easier to add a missing abstraction than it is to remove a bad one.

kmoser

Not arguing against this article, but aren't all these ideas already well known in the industry? Start with an MVP, don't optimize prematurely, avoid writing brittle code (systems, really), abstract implementation details away when possible, and KISS.

kiitos

using unicorn as a positive example is, well, a pretty negative signal

unicorn, i.e. CGI, i.e. process-per-request, became anachronistic, gosh, more than 20 years ago at this point!

at least, if you're serving any kind of meaningful load -- a bash script in a while loop can serve 100RPS on an ec2.micro, that's (hopefully) not what anyone is talking about

daitangio

Right in theory, hard to do in practice. Also, business needs are awfully complex, guys.

screye

It's great advice for an individual or a work item. It's great advice for teams where untamed programmers run rampant. It's good advice when teams are sane.

But, it's terrible for 2025's median software team. I know that isn't OP's intention. But inevitably, all good advice falls prey to misinterpretation.

In contemporary software orgs, building fast is the norm. PMs, managers, sales, and leadership want Engg to build something that works, with haste. In this imagination, simple = fast. Let me say this again: no, you cannot convince them otherwise. To the median org, SIMPLE = FAST. The org always chooses the fastest option, well past the point of diminishing returns. Once something exists, product managers and org leaders will push to deploy it, and sales teams will sell dreams around it. Now you have an albatross around your neck. Welcome to life as a firefighter.

For the health of a company and the oncall's sanity, Engg must tug at the rope in the opposite direction from (perceived) simplicity. The negotiated middle ground can get close to the 'simple' that OP proposes. But today, if Engg starts off with 'simple', they will be rewarded with a 'demo on life support'. At a time when vibe coding is trivial, I fear things will get worse before they get better.

All that being said, OP's intended advice is one that I personally live by. But often, simple is slow. So, I keep it to myself.

cratermoon

We had this discussion around "the simplest thing" way back in the late 90s/early 2000s when XP was getting started. One comment I remember well is, "it's do the simplest thing, not the stupidest thing". Maybe a boring relational database is the simple thing, compared to some distributed eventually consistent map-reduce system using HDFS. Or maybe a simple key-value store like Memcached is all you need.

Even in complex domains, the aphorism, "everything should be made as simple as possible, but not simpler" applies.

As for working at scale, complex systems are made of small, simple, working systems, where dependencies between them are limited and manageable. Without an effort to keep interdependencies simple, the result is a Big Ball of Mud, and we all know how well that works.

JackFr

Before you write a parser, try a regex. (But sometimes you really do need a parser.)
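
(A Go-flavored sketch of the idea, with a made-up line format: the regex covers the simple case, and growing needs like quoting or nesting are the cue to write the parser:)

    package config

    import "regexp"

    // kvRe matches simple `key = value` lines. Once the format grows
    // escaping, nesting, or multi-line values, switch to a real parser.
    var kvRe = regexp.MustCompile(`^(\w+)\s*=\s*(.+)$`)

    // ParseLine extracts a key/value pair, or reports ok=false.
    func ParseLine(line string) (key, val string, ok bool) {
        m := kvRe.FindStringSubmatch(line)
        if m == nil {
            return "", "", false
        }
        return m[1], m[2], true
    }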

show comments
signa11

hard disagree (fwiw).

this *tactical* style of development is the same thing propounded by TDD folks. there is no design, just a weirdly glued-together mishmash of things that happen to work.

i am (fwiw once again) not against unit testing; that is almost always needed.

silverkiwi

Copy/paste/save as system prompt

ashwinsundar
show comments
jchook

Obligatory reference to Simple Made Easy https://www.infoq.com/presentations/Simple-Made-Easy/

I watch this talk about once per year to remind myself to eschew complexity.

cyprx

it won't work when every single PRD now has the word "extensible". i think overcomplexity often comes from requirements/business use cases first

desio

shouldn't it be "the simplest possible thing that could work"?

anonu

The problem is sometimes you don't get the right feedback to know when to stop building.

evo

Another way I like to think about this is finding 'closeable' contexts to work in; that is, abstractions that are compact and logically consistent enough that you can close them out and take them on their external interface without always knowing the inner details. Metaphorically, your system can be a bunch of closed boxes that you can then treat as boxes, rather than a bunch of open boxes whose contents are spilling out and into each other. Think 'shipping containers' instead of longshoremen throwing loose cargo into your boat.

If you can do this regularly, you can keep the _effective_ cognitive size of the system small even as each closed box might be quite complex internally.
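
(In Go terms, a minimal sketch of such a "closed box", with invented names: callers depend only on the interface, so however complex the internals get, the box's contribution to the system's effective cognitive size stays fixed:)

    package store

    // BlobStore is the closed box: the rest of the system touches only
    // this boundary and never sees what is inside.
    type BlobStore interface {
        Put(key string, data []byte) error
        Get(key string) ([]byte, error)
    }

    // diskStore's internals (sharding, caching, compaction...) may be
    // arbitrarily complex, but none of it leaks past the interface.
    type diskStore struct {
        root string
    }

    func NewDiskStore(root string) BlobStore { return &diskStore{root: root} }

    func (s *diskStore) Put(key string, data []byte) error { return nil } // elided
    func (s *diskStore) Get(key string) ([]byte, error)    { return nil, nil } // elided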

kid64

The author should actually design a successful large software system and try again.

randyrand

if only designing simple systems were as easy as it sounds.

it takes maybe 3 to 5 rewrites before you truly grasp a problem.

zduoduo

Indeed, this is a good way.

ineedasername

A few notes:

1) Sometimes the simplest thing is still extremely complex

2) The simplest thing that works is often very hard to find

ocdtrekkie

One of my favorite movie quotes is from Scotty in Star Trek: "The more they overthink the plumbing, the easier it is to stop up the drain."

Complexity is sometimes necessary, but it always creates more ways things can break.

uberduper

I wanted to like this article, and there are some things in it to agree with, but ultimately it's a very uninteresting take with a very unconvincing rate-limiting example.

> System design requires competence with a lot of different tools: app servers, proxies, databases, caches, queues, and so on.

Yes! This is where I see so many systems go wrong. Complex software engineering paving over a lack of understanding of the underlying components.

> As they gain familiarity with these tools, junior engineers naturally want to use them.

Hell yea! Understanding how Kafka works so you don't build some crazy queue semantics on it. Understanding the difference between headless and ClusterIP services in Kubernetes so you don't have to build a software solution to the etcd problems you're having.

> However, as with many skills, real mastery often involves learning when to do less, not more. The fight between an ambitious novice and an old master is a well-worn cliche in martial arts movies

Wait what? Surely you mean doing more by writing less code. Are you now saying that learning and using these well tested, well maintained, and well understood components is amateurish?

m463

Honestly, this is how all computing works.

For example, chips just barely work.

If they work too well, you could shrink the chip until it barely works, making it cheaper, faster, or less power-hungry.

That said, although this exercise is kind of interesting (like playing Jenga), it might not be fun or satisfying.

Better, faster, cheaper: sometimes you need to choose better.

show comments
motorest

I think this article actually expresses a dangerous, risk-prone approach to problem solving, and one which ultimately causes more problems than the ones it solves.

The risk is misunderstanding the problems being solved, and ignoring all the constraints that drove the need for the key design traits that were in place to actually solve the problem (i.e., the complexity).

Take the following example from the article:

> You should do that too! Suppose you’ve got a Golang application that you want to add some kind of rate limiting to. What’s the simplest thing that could possibly work? Your first idea might be to add some kind of persistent storage (say, Redis) to track per-user request counts with a leaky-bucket algorithm. That would work! But do you need a whole new piece of infrastructure?

Let's ignore the mistake of describing Redis as persistent storage. The whole reason rate-limiting data is offloaded to a dedicated service is that you want to enforce rate limiting across all instances of an API. All instances update request counts in a shared data store, so all traffic is accounted for regardless of how many instances there are. This data store needs to be very fast to minimize the microservices tax, and the data is ephemeral; hence a memory cache is often used.

And why do "per-user request counts in memory" not work? Because you enforce rate limiting to prevent brownouts and, ultimately, denials of service in your backing services. Each request that hits your API typically triggers additional requests to internal services such as memory stores, querying engines, etc. Your external-facing instances are scaled to meet external load, but they also create load on internal services. You enforce rate limiting to prevent unexpectedly high request rates from generating enough load to hit bottlenecks in internal services which can't or won't scale. If you enforce rate limits per instance, scaling horizontally will inadvertently lift your rate limits as well, allowing brownouts and thus defeating the whole purpose of introducing rate limiting.

Also, leaky-bucket algorithms are employed to allow traffic bursts while still preventing abuse. This is a very mundane scenario that happens in pretty much all services consumed by client apps. Once an app is launched, it typically runs authentication flows and fetches the data required at app start; after init, the app is back to baseline request rates. If you have a system that runs more than a single API instance, requests are spread over instances by a load balancer. This means a user's requests can be routed to any instance in unspecified proportions. So how do you prevent abuse while still allowing these bursts to take place? Do you scale your services to handle peak load 24/7 to accommodate request bursts from all your active users at any given moment? Or do you allow for momentary bursts spread across all instances, regardless of which instances they hit?

Sometimes a problem can be simple. Sometimes it can be made too simple, and you accept the occasional outage. But sometimes you can't afford frequent outages, and you understand that a small change, like putting up a memory cache instance, is all it takes to eliminate failure modes.

And what changed in the analysis to understand that your simple solution is no solution at all? Only your understanding of the problem domain.
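
(To make the trade-off concrete: the per-instance in-memory version really is tiny, which is the temptation. A sketch using golang.org/x/time/rate, with invented limits; the comment carries the caveat from the discussion above:)

    package ratelimit

    import (
        "sync"

        "golang.org/x/time/rate"
    )

    // Per-user token buckets held in this process's memory. Caveat: with
    // N instances behind a load balancer, each keeps its own buckets, so
    // the effective limit drifts toward N times the configured rate, and
    // scaling out silently raises it.
    var (
        mu      sync.Mutex
        perUser = map[string]*rate.Limiter{}
    )

    func Allow(userID string) bool {
        mu.Lock()
        lim, ok := perUser[userID]
        if !ok {
            lim = rate.NewLimiter(rate.Limit(10), 50) // 10 req/s, bursts of 50
            perUser[userID] = lim
        }
        mu.Unlock()
        return lim.Allow()
    }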

Starlevel004

The simplest option usually also comes with all the drawbacks of being the simplest option. "Keep it simple" is one of the more irritating thought-terminating cliches.

show comments
axblount

"When in doubt, use brute force." --Ken Thompson

RajT88

I appreciate the sentiment, and absolutely think it should be kept in mind more.

But of course, it runs afoul of reality a lot of the time.

I recently got annoyed that the Windows Task Scheduler just sometimes... doesn't fucking work. Tasks don't run, and you get a crazy mystery error code. Never seen anything like it. Drives me nuts. Goddamned Microsoft making broken shit!

I mostly write PowerShell scripts for automating my system, so I figured I'd make a task scheduler which uses the C# PSHost to run the scripts and keeps the task configuration in a SQLite database: a simple tray app with a Windows Forms UI and EF Core for SQLite to read and write the task configuration. Didn't take too long, works great. I am again happy, and I even get better logging out of failure conditions, since I can trap the whole error stream instead of an exit code.

My wife is starting a business, and I started to think about adding a software piece to it. Maybe use the task scheduler as a component in a larger suite of remote management stuff for my wife's business, which we could sell to similar businesses.

Well. For it to be ready, it's got to be reliable and secure. I have to implement some checks to wait if the database is locked; no biggie. Oh, but what happens if another user running the tray app somehow locks the database? I've got to work out how to notify the other user that the database is locked... Also, tasks now need to support running as a different user than the one the service runs under. Now I have to store those credentials somewhere, because they can't go into the SQLite DB. DPAPI stores keys in the user profile, so all this means I now have to implement support for alternative users in the installer (and do so securely, again with DPAPI, storing the service's credentials).

I've just added a lot of complexity to what should be a pretty simple application, and with it some failure modes. Paying customers want new features, which add more complexity and more corner cases.

This is normal software entropy, and to some extent it's unavoidable.

show comments
ChrisMarshallNY

Ockham's Software Architecture...

anon6362

- Robust

- Simple

- Easy

- Fast

- Understandable by mere mortals

- Memory efficient

- CPU efficient

- Storage efficient

- Network efficient

- Safe

Pick up to 3

stpedgwdgfhgdd

XP

journal

i want to hear input from the winapi team on this.

jdblair

but it was so much fun to write my own serial networking protocol! and my 2nd-stage bootloader for Arduino!

...and by the time I finished, the esp32c hardware was released and I didn't need it anymore.

smitty1e

I call it the "Ditt-Kuh-Pow", the Dumbest Thing That Could Possibly Work.

Said that in a telephone call one time, and the guy leading it was all, "I'm mildly disturbed that you have a verbalization for that."

fijiaarone

Mostly I agree with what the author is saying. But there is a clear distinction between the simplest "system" and doing the simplest thing.

The simplest thing to do is almost always the easiest, but knowing what the easiest thing to do is can be a lot trickier (see JavaScript frameworks).

But I think I disagree with the author’s second axiom:

“2. Simple systems are less internally-connected.”

Creating interfaces is more complex than not, even if it leads to a cleaner design because of interface boundaries. At the least, creating those boundaries adds complexity, and I don't mean "more effort". I mean it in the sense that creating functions is more complex than calling "goto". And it took decades to invent the mechanism needed to call functions, which is probably the next most simple thing.

However, using call stacks and named pointers and memory separation (functions) leads to vastly improved simplicity as the system as a whole grows in complexity.

So in fact, using your own in-memory rate limiter may be a simpler implementation than using Redis, but it also violates the second principle (using clear interfaces leads to simpler systems).

And it inverts the author's first premise, that Gunicorn is simpler than Puma, because Puma does the equivalent of building your own rate limiter: managing its own memory and using threads instead of processes.

And Gunicorn does the equivalent of using Redis: externalizing the complexity.

What Gunicorn did was simpler to implement (because it relies on an existing isolated architecture: Unix processes and files) but means it carries greater complexity, if you take into account that it needs that whole system to work.

However, that system is itself a brilliant set of reductions in complexity, but it runs up against limitations in performance at some point.

Puma takes on itself more complexity to make administering the server less complex and more performant under load. Also, because it is, in a sense, reinventing the wheel, it lacks the distillation of simplicity that is Unix.

So, less internally connected systems are easier to expand and maintain, and interface boundaries lead to less complex systems as a whole, but they are not, in themselves, less complex.

Limitations in the system that cause performance problems (like Unix processes and function calls) are not necessarily "simpler than can possibly work", but the implementations of those abstractions are not perfect and could be improved.

Sometimes it’s not clear where to push the complexity, and sometimes it’s not clear what the right abstraction level is; but mostly it’s about making due with the existing architecture you have, and not having the time or resources to fix it. Until the complexity at your level reaches a point that it’s worth adding complexity at a higher level due to being unable to add the right amount of complexity at a lower level.

jiggawatts

This is the advice I've been unsuccessfully trying to drill into the heads of developers at a large organisation. Unfortunately, it turns out that the "simplest thing" can be banged out in a couple of days -- mere hours with an AI -- and that just isn't compatible with a career that is made up of 6-month contracting stints. It's much, much more lucrative to drag out every project over years and keep collecting that day-rate.

Many "industry best-practices" seen in this light are make-work, a technique for expanding simple things to fill the time to keep oneself employed.

For example, the current practice of dependency injection with interfaces, services, factories, and related indirections[1] is a wonderful time waster because it can be so easily defended.

"WHAT IF we need to switch from MySQL to Oracle DB one day?" Sure, that... could happen! It won't, but it could.

[1] No! You haven't created an abstraction! You've just done the same thing, but indirectly. You've created a proxy, not a pattern. A waste of your own time and the CPU's time.

progx

Do something simple. Dream vs. reality.

867-5309

instructions unclear

return segfault -1;

spelunker

This is also good advice for personal projects - want to ship stuff? Just do what works, nobody cares!

del_operator

Get it out. Make it work. Love your work.

wtbdbrrr

wonderful piece.

applies to the narrative of "unfuck anything" as well: any industry, any "behavioral lock-in", and so on.

nojs

Claude is great at this! If you avoid refactoring at all costs and put everything as close as possible to the relevant code, you maximise the chance it works and minimise pesky DRY complexities.

moffkalast

Nooo, you can't use deterministic math, you gotta use Kalman filters and particle filters and factor graphs and spend three months tweaking parameters to get a 5% improvement! /s

Like, alright, in some situations that's the only thing that could possibly work, but shoving that complexity into every state estimator, without even having a way to figure out the actual covariance of the input data, is a bit much. Something that behaves in an expected way and works reliably with less accuracy beats a very complex system that occasionally breaks completely, imo.

deepsun

Don't bother with SSL, it adds complexity.

Don't add passwords, just "password" is fine. Password policies add complexity.

For services that require passwords just create a shared spreadsheet for everyone.

/s

show comments
Scubabear68

I get it.

YAGNI.

A term coined by consultants who don't understand an industry, who basically say "do the least possible thing that will work" because they don't understand the domain and don't understand which requirements are non-negotiable table stakes of complexity you need to compete.

It reminded me of a Martin Fowler post where he was showing an implementation of discounts in some system and advocating just hard-coding the first discount in the method (literally getDiscount() { return 0.5 }).

Even the most shallow analysis would show this was incredibly stupid and short sighted.

But this was the state of the art, or so we were told.

See also Ron Jeffries trying and failing to solve Sudoku using TDD.

The reality is most business domains are actually complex, and the one who best tackles that complexity up front can take home all the marbles.

show comments