> Technical superiority doesn't win ecosystem wars. Linux won through a combination of fast decisions, the viral GPL licence, and strong enterprise backing from Red Hat and IBM. Then Google, Facebook, and Amazon happened — hungry for datacenters, developing tools to manage growing infrastructure at scale. They set the direction for the entire industry.
In the mid-1990s, hardware driver support on Linux was much broader.
Copy/paste of my comment from last year about FreeBSD:
I installed Linux in fall 1994. I looked at Free/NetBSD, but when I went on some of the Usenet BSD forums they basically insulted me, saying that my brand-new $3,500 PC wasn't good enough.
The main thing was this IDE interface that had a bug. Linux got a workaround within days or weeks.

https://en.wikipedia.org/wiki/CMD640
The BSD people told me that I should buy a SCSI card, a SCSI hard drive, and a SCSI CD-ROM. I was a sophomore in college, and I had saved every penny to put $2K toward that PC; my parents paid the rest. I didn't have any money for that.
The sound card was another issue.
I remember software-based "WinModems", but Linux had drivers for some of these. Same for software-based "Win Printers".
When I finally did graduate and had money for SCSI stuff, I tried FreeBSD around 1998, and it just seemed like another Unix. I had used Solaris, HP-UX, AIX, Ultrix, and IRIX. FreeBSD was perfectly fine, but it didn't do anything I needed that Linux didn't already do.
palata
Nice article!
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things: [...] Somehow we ended up with an overengineered mess of leaky abstractions
Not sure I like the value judgement here. I think it's more a consequence of Linux's success. I am convinced that if it were reversed (Linux niche and *BSD the norm), a ton of abstractions would have come anyway, and the average user would "use an overengineered mess" because they don't know better (or don't care, or don't have a need to care).
Not that I like it when people ship their binary in a 6 GB Docker image. But I don't think it's fair to put that on "those Linux engineers".
razighter777
I frequently see FreeBSD jails highlighted as a feature, lauded for their simplicity and ease of use. While I do admire them, there are benefits to the container approach commonly used on Linux (and maybe soon FreeBSD will better support OCI).
First, it's important to clarify that "containers" are not an abstraction in the Linux kernel. A container is really an illusion, achieved by combining user/PID/network namespaces, bind mounts, and process-isolation primitives through userspace applications (podman/docker plus a container runtime).
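To make that concrete, here is a minimal sketch (illustrative only, not how docker/podman are actually implemented): a single clone(2) call with namespace flags already produces most of the illusion. It assumes Linux and root privileges; adding CLONE_NEWUSER lets you try it unprivileged on most distros.

    /* Minimal sketch: a "container" is mostly just clone(2) flags.
     * Build: cc -o ns_demo ns_demo.c ; run as root (or add CLONE_NEWUSER). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char stack[1024 * 1024];   /* child stack; clone() takes its top */

    static int child(void *arg)
    {
        /* New UTS namespace: hostname changes are invisible to the host. */
        sethostname("not-a-jail", 10);
        /* New PID namespace: this process believes it is PID 1. */
        printf("child pid as seen by itself: %d\n", (int)getpid());
        return 0;
    }

    int main(void)
    {
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid < 0) { perror("clone"); return 1; }
        waitpid(pid, NULL, 0);        /* reap the "container" */
        return 0;
    }

Everything else (image layers, networking, resource limits) is layered on top of this by userspace tooling.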
OCI container tooling is much easier to use and follows the "cattle, not pets" philosophy. When you're deploying on multiple systems and want easy updates, reproducibility, and mature tooling, you use OCI containers, not LXC or FreeBSD jails. FreeBSD jails can't hold a candle to the ease of use and developer experience that OCI tooling offers.
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things.
This was an intentional design decision, and not a bad one! cgroups, namespaces, and seccomp are used extensively outside the container abstraction (see flatpak, systemd resource slices, firejail). By not tying process isolation to the container abstraction, we let non-container applications benefit from it. We also get a wide breadth of container-runtime choices.
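As a hedged illustration of that point, here is a small sketch of seccomp applied to a perfectly ordinary process, with no container anywhere in sight. It assumes libseccomp and its headers are installed (build with -lseccomp); the blocked syscall is an arbitrary choice.

    /* Sketch: seccomp with no container involved, via libseccomp.
     * Build: cc seccomp_demo.c -lseccomp */
    #include <errno.h>
    #include <seccomp.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW); /* default: allow all */
        if (!ctx) return 1;

        /* Make directory creation fail with EPERM instead of killing us.
         * Both syscalls are covered since libc may use either one. */
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(mkdir), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(mkdirat), 0);
        seccomp_load(ctx);                                  /* filter is now live */

        if (mkdir("/tmp/blocked-by-seccomp", 0755) < 0)
            perror("mkdir");  /* expected: Operation not permitted */

        seccomp_release(ctx); /* frees the userspace context; kernel filter stays */
        return 0;
    }

This is the same primitive flatpak and firejail lean on, applied directly.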
matheus-rr
The jails vs containers framing is interesting but I think it misses why Docker actually won. It wasn't the isolation tech. It was the ecosystem: Dockerfiles as executable documentation, a public registry, and compose for local dev. You could pull an image and have something running in 30 seconds without understanding anything about cgroups or namespaces.
FreeBSD jails were technically solid years before Docker existed, but the onboarding story was rough. You needed to understand the FreeBSD base system first. Docker let you skip all of that.
That said, I've been seeing more people question the container stack complexity recently. Especially for smaller deployments where a jail or even a plain VM with good config management would be simpler and more debuggable. The pendulum might be swinging back a bit for certain use cases.
lifeisstillgood
I ran a whole company on top of FreeBSD back in the day (2005-ish). It was great, and I ran all my personal PCs the same way (hell, refusing to install Windows to try out this Bitcoin idea is even now a good idea).
But somehow Linux still took over my personal and professional life.
Going back seems nice, but there needs to be a compelling reason; Docker is fine, and the costs don't add up any more. I don't have a real logical argument beyond that.
nesarkvechnep
I’m always going to like articles introducing people to FreeBSD.
mono442
> FreeBSD reached that third stage in 2000. Linux wouldn't get there until 2008 with LXC.
OpenVZ and Linux-VServer are older than LXC and were commonly used, though they required a patched kernel.
flipped
Is there any technical writeup that explains how the isolation actually works, for containers and VMs? I have always heard the high-level arguments (weak isolation, shared kernel, etc.) but never the implementation details.
epistasis
Long emotional rant ahead; you have been warned. The poorly-thought-out adoption of parts of systemd, and in particular systemd-oomd, is making me yearn for FreeBSD.
For all my computing career, I've used Unix-alikes because they let me develop software by having an idea, writing some code, letting it run in the terminal, chaining things together, seeing where it failed, and iterating. Terminals let this happen much faster than clicky GUI software, and they keep a log of what was happening in a terminal pane (or GNU screen or tmux pane, because usually this is happening on big servers and compute clusters rather than on my laptop).
I could launch a bunch of panes, have several lines of investigation going, and come back to it a day or week later when I had time because the terminal kept a log of everything that happened.
And a couple of years ago I started noticing that things I thought I had launched would start disappearing. At first I thought I was accidentally mispressing a key and killing a tmux pane rather than disconnecting.
But no! When I finally went back to an in-person Linux workstation and saw it happen to the entire terminal window, I knew something major had changed. It turns out that something called systemd-oomd was added to a bunch of distributions, and it kills only entire cgroups of processes at a time, rather than a single offending process.
So now, if you want to run processes and isolate the kill zone of a process, you have to wrap every freaking subprocess in a systemd-run wrapper or a docker wrapper. And systemd-run won't work from many contexts, such as inside a Jupyter kernel.
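For what it's worth, the primitive underneath the systemd-run scope trick is just cgroup membership. A rough sketch, assuming a cgroup2 mount and a spot in the hierarchy you are actually allowed to create (the path below is hypothetical; on a systemd box it would have to live inside a delegated subtree):

    /* Rough sketch: give a child its own cgroup so an OOM kill of that
     * cgroup doesn't take the whole terminal session with it.
     * The cgroup path is a hypothetical writable location; adjust it. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *cg = "/sys/fs/cgroup/mywork";  /* hypothetical path */
        if (mkdir(cg, 0755) < 0 && errno != EEXIST) { perror("mkdir"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {
            /* Child enrolls itself in the new cgroup, then execs the workload. */
            char path[256], buf[32];
            snprintf(path, sizeof path, "%s/cgroup.procs", cg);
            int fd = open(path, O_WRONLY);
            if (fd >= 0) {
                int n = snprintf(buf, sizeof buf, "%d\n", (int)getpid());
                write(fd, buf, n);
                close(fd);
            }
            execlp("sleep", "sleep", "600", (char *)NULL);
            _exit(127);                            /* exec failed */
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

Which is sort of the point of the rant: the kernel primitive is trivial, but finding a writable, non-systemd-managed spot in the hierarchy is exactly what forces the wrapper.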
Major breaking changes to fundamental system behavior are a huge problem these days. It's one thing to let the OS kill processes more aggressively when there's a memory issue: fine, great, go ahead. But why kill all the lightweight processes that could give feedback to the user?! And why force non-portable process-launching semantics that aren't even consistent across the entire system?!? So infuriating.
flipped
Anyone looking to use jails might find BastilleBSD helpful. It's a nice and modern jail manager.

A two-server networking setup with VNET jails:

https://ericfortis.com/blog/freebsd-jails-network-setup

> but they don't have a native answer to shipping
I am not quite sure what this means. I had a jail a few years ago, and I remember there was a utility to "back up" the jail so you could put it on another system. Are there constraints with that utility? It seemed to work; maybe I am forgetting something?
In any case, I still think jails are much better than the things Linux has. To me, it is the creation of a jail that is more difficult. There were ports that made it easier; I used one of them, but that port was abandoned at some point. I think it was "ezjail".
user3939382
I switched my startup's whole infra to FreeBSD a couple of months ago. Found a use-after-free bug in the GNOME XSLT lib that Linux's memory management was just fine with but that FreeBSD properly refused. Other than that, smooth sailing; jails work great.
After IBM destroyed CentOS, all the Xorg politics nonsense, the list goes on with Linux: not interested. I just want something quiet and boring and stable and correctly designed. NetBSD would be my first choice, but they don't get the $ they need for drivers.
smitty1e
Is this fair?
Linux is to *BSD as
VHS was to Betamax.
shevy-java
> FreeBSD is worth a brief aside here, because it differs from Linux in a fundamental way. Linux is a kernel. What most people call "Linux" is actually that kernel combined with a GNU userland, a package ecosystem, and a set of choices that vary from distro to distro — Ubuntu, Fedora, and Arch are all running the same kernel but are meaningfully different systems underneath.
It is not incorrect, but ... do people really care about that distinction?
Because in most situations I know of, when people refer to Linux, they almost never mean the Linux kernel. They mean the whole operating-system stack, which is typically delivered via a distribution. So Fedora, Gentoo, Arch, and so forth are all "kind of" Linux. Barely anyone refers to the Linux kernel if you look at all the discussions on the world wide web.
> FreeBSD ships as a complete, coherent OS
The BSDs often promote that as "Linux is chaos; we are a coherent, consistent operating system following intelligent design". Well ... this is the rise of worse-is-better, repeated: https://dreamsongs.com/WorseIsBetter.html
It is a great analogy that works on so many levels. Broken down to Linux versus the BSDs, I think 500 out of 500 top supercomputers running Linux kind of shows which philosophy is better: the one that works better. That does not mean the BSDs are useless, but I am getting tired of the promo used by the BSDs as "we are order, Linux is chaos". I compare this more to Lego building blocks: with Linux there is a stronger focus on having building blocks available. You can build things up. You have projects such as LFS/BLFS (Linux From Scratch); the BSDs do not have something comparable. Which operating system is the better tinker OS? Which community created git? (OK, OK, that was Linus, so not really a community per se, but it originated from Linux, and perhaps that was not an accident either.)
> FreeBSD pioneered the practical implementation of what we now call containers.
Ok, great. Many modern programming languages learned from older languages; many of those older languages are dead now. You need to keep innovating. Why is BSD so dead set on the past?
> FreeBSD reached that third stage in 2000. Linux wouldn't get there until 2008 with LXC.
Dumdedum ... it kind of sounds as if the FreeBSD guys are sad that Linux went on to dominate. It reminds me of NetBSD, aka "we work on every toaster in the world". Then suddenly, on a mailing list many years ago: "wait a moment ... Linux now works on more toasters than we do". The BSDs don't seem to understand how dominant momentum can be.
> Technical superiority doesn't win ecosystem wars. Linux won through a combination of fast decisions, the viral GPL licence, and strong enterprise backing from Red Hat and IBM. Then Google, Facebook, and Amazon happened — hungry for datacenters, developing tools to manage growing infrastructure at scale. They set the direction for the entire industry.
Ok, that is flat-out incorrect. First, the GPL worked well for the Linux kernel, that is true. But the ecosystem includes many BSD-licensed programs too, on Linux, so the explanation already fails there. LLVM has the Apache License 2.0, which I kind of feel is a mix between GPL and BSD (not quite true, but this is how I remember it).
Then the claim is that Linux won because of Red Hat. I actually find Red Hat annoying and am glad not to depend on it; Linux is way bigger than Red Hat. IBM? I don't see what IBM really did for Linux. So that explanation does not work either.
Google, Facebook, and Amazon - well, they profited from Linux. They didn't really ENABLE Linux. They would not have used Linux if Linux had been useless. So that part came afterwards.
So none of those explanations really work well here.
> Linux rapidly went from "the free OS for people who can't afford commercial licences" to "the only acceptable OS for servers".
That is true, but not for the reasons claimed, e.g. "because of Google". The more important question is: why did the BSDs fail?
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things

No, that is also incorrect. cgroups are very different from seccomp, and the latter is even maintained independently: https://github.com/seccomp/libseccomp/releases
> Somehow we ended up with an overengineered mess of leaky abstractions for cloud-based, vendor-locked infrastructure.
Wait a moment: he cites Docker. That's owned by a private company. What does this have to do with Linux? If company XYZ did something based on FreeBSD, would we then say company XYZ is responsible for FreeBSD failing or not failing? How does that work?
> And this complexity has quietly reshaped how the industry thinks about deploying software. Today, if you want to run an application in a larger system, the implicit assumption is that you containerise it with Docker and orchestrate it with Kubernetes.
Personally, I find all this abstraction crap. For all their failures, though, things such as Docker kind of present a "download this one file, then it will work fine" story. And that is kind of true: I saw it in on-campus use for life-science faculty clusters and whatnot. It simplifies things for the admin there. People give a similar rationale for systemd. Personally, I don't think systemd should exist, but there are people who benefit from it; that simply is a factual statement.
All in all, this is a very strange point of view from the FreeBSD folks. At least the NetBSD folks back then on the mailing list acknowledged the situation, then tried to find alternative strategies, and in some ways succeeded (although I am not sure whether NetBSD right now runs on more toasters than Linux does; does anyone have updated statistics for that?).