A decade of Docker containers

202 points · 139 comments · 6 hours ago
pixelmonkey

The math of “a decade” seemed wrong to me, since I remembered Docker debuting in 2013 at PyCon US Santa Clara.

Then I found an HN comment I wrote a few years ago that confirmed this:

“[...] I remember that day pretty clearly because in the same lightning talk session, Solomon Hykes introduced the Python community to docker, while still working on dotCloud. This is what I think might have been the earliest public and recorded tech talk on the subject:”

YouTube link: https://youtu.be/1vui-LupKJI?t=1579

Note: starts at t=1579, which is 26:19.

Just being pedantic though. That’s about 13 years ago. The lightning talk is fun as a bit of computing history.

(Edit: as I was digging through the paper, they do cite this YouTube presentation, or a copy of it anyway, in the footnotes. And they refer to a 2013 release. Perhaps there was a multi-year delay between the paper being submitted to ACM with this title and it being published. Again, just being pedantic!)

bmitch3020

I've seen countless attempts to replace "docker build" and the Dockerfile. They often want to give tighter control over the build, sometimes binding it tightly to a package manager. But the Dockerfile has endured because of its flexibility. Starting from a known filesystem/distribution, copying some files in, and then running arbitrary commands within that filesystem mirrors so nicely what operations teams have been doing for a long time. And as ugly as that flexibility is, I think it will remain the dominant solution for quite a while longer.
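That three-step pattern can be sketched in a minimal Dockerfile (the base image, paths, and package names here are illustrative, not from the article):

```dockerfile
# 1. Start from a known filesystem/distribution
FROM debian:bookworm-slim

# 2. Copy some files in
COPY app/ /opt/app/

# 3. Run arbitrary commands within that filesystem
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*

CMD ["python3", "/opt/app/main.py"]
```

The same three verbs cover most real-world builds, which is arguably why the format has been so hard to displace.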

rr808

Back then I didn't foresee the 22GB images our Jupyter/ML stacks would be by 2026. There must be a better way.

tzs

I've not done serious networking stuff for over two decades, and never in as complex an environment as that in the article, so the networking part of the article went pretty much over my head.

What I want to do when running a Docker container on Mac is to be able to have the container have an IP address separate from the Mac's IP address that applications on the Mac see. No port mapping: if the container has a web server on port 80 I want to access it at container_ip:80, not 127.0.0.1:2000 or something that gets mapped to container port 80.

On Linux I'd just use Docker bridge networking, and I believe that would work, but on a Mac that just bridges to the Linux VM running under the hypervisor rather than to the Mac itself.

Is there some officially recommended and supported way to do this?

For a while I did it by running WireGuard on the Linux VM to tunnel between that and the Mac, with forwarding enabled on the Linux VM [1]. That worked great for quite a while, but then stopped and I could not figure out why. Then it worked again. Then it stopped.

I then switched to this [2] which also uses WireGuard but in a much more automated fashion. It worked for quite a while, but also then had some problems with Docker updates sometimes breaking it.

It would be great if Docker on Mac came with something like this built in.

[1] https://news.ycombinator.com/item?id=33665178

[2] https://github.com/chipmk/docker-mac-net-connect

mrbluecoat

> Docker repurposed SLIRP, a 1990s dial-up tool originally for Palm Pilots, to avoid triggering corporate firewall restrictions by translating container network traffic through host system calls instead of network bridging.

Genuinely fascinating and clever solution!

talkvoix

A full decade since we took the 'it works on my machine' excuse and turned it into the industry standard architecture ('then we'll just ship your machine to production').

avsm

An extremely random fact I noticed when writing the companion article [1] to this (an OCaml experience report):

    "Docker, Guix and NixOS (stable) all had their first releases
    during 2013, making that a bumper year for packaging aficionados."
Now we get coding agent updates every week, but has there been a similar year since 2013 where multiple great projects all came out at the same time?

[1]: https://anil.recoil.org/papers/2025-docker-icfp.pdf

netrem

With ML and AI now being pushed into everything, images have ballooned in size. Just having torch as a dependency adds multiple gigabytes. I miss the days of aiming for 30MB images.

Have others found this to be the case? Perhaps we're doing something wrong.
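One partial mitigation I've seen is a multi-stage build with CPU-only wheels, sketched below. The base images and the CPU wheel index are assumptions on my part, and this won't help if you actually need the CUDA libraries:

```dockerfile
# Builder stage: install heavy dependencies into a virtualenv
FROM python:3.12-slim AS build
RUN python -m venv /venv
# CPU-only wheels avoid pulling multi-gigabyte CUDA libraries
RUN /venv/bin/pip install --no-cache-dir \
    --index-url https://download.pytorch.org/whl/cpu torch

# Runtime stage: copy only the venv, leaving build caches behind
FROM python:3.12-slim
COPY --from=build /venv /venv
ENV PATH="/venv/bin:$PATH"
CMD ["python", "-c", "import torch; print(torch.__version__)"]
```

Even so, torch itself remains large; the layering only trims the packaging overhead around it.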

zacwest

The historic information in here was really interesting, and a great example of an article rapidly expanding in scope and detail. How they combatted corporate IT “security” software by pretending to be a VPN is quite unexpected.

the__alchemist

I'm optimistic we will succeed in efforts to simplify Linux application/dependency compatibility instead of relying on abstractions that work around it.

brtkwr

I realise Apple containers haven't quite taken off as expected, but their omission from the article stands out. Nice that it mentions alternative approaches like Podman and Kata, though.

rando1234

Didn't Vagrant/Vagrantfiles precede Docker? If so, it's unclear why that would be the key to its success.

benatkin

> If you are a developer, our goal is to make Docker an invisible companion

I want it not just to be invisible but to be missing. If you have Kubernetes, including locally with k3s or similar, Docker won't be used to run containers anyway. However, it still often is used to build OCI images. Podman can fill that gap. Its Containerfile format uses the same syntax, but its build tooling is simpler than Docker's, which now bundles build-orchestration features similar to earthly.dev that I think are better kept separate.

mberning

I remember being pretty skeptical of "dockerizing" applications when it first became popular. But I've come around to it, if for no other reason than it provided a concept anyone could understand and, more importantly, use. The onramp to using Docker is very gentle.

phplovesong

We have shipped unikernels for the last decade. Zero security issues so far. I highly recommend looking into the unikernel space for a Docker alternative. MirageOS is a good start.

politelemon

Somewhere along the line they started prioritising Docker Desktop over Docker. It's a bit jarring to see new features, such as the new sandbox features, arrive on Desktop before they come to Linux.

Is there any insight into this? I would have thought the opposite: that developers on the platform that made Docker succeed would get the first preview of features.

tsoukiory

I don't speak English.

arikrahman

I'm hoping the next decade introduces more declarative workflows with Nix, and that they work with Docker to that end.

INTPenis

I thought it was 2014 when it launched? The article says the command line interface hasn't changed since 2013.

heraldgeezer

I still haven't learned it; being in IT, it's so embarrassing. Yes, I know about the 2-3h YouTube tutorials, but just...

1970-01-01

I now wonder if we'll end up switching it all back to VMs so the LLMs have enough room to grow and adapt.

callamdelaney

The fact that docker still, in 2026, will completely overwrite iptables rules silently to expose containers to external requests is, frankly, fucking stupid.
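For what it's worth, two commonly cited workarounds, sketched here as assumptions rather than official guidance: publish ports on loopback explicitly, or tell the daemon to leave iptables alone (at the cost of managing forwarding rules yourself):

```shell
# Publish only on loopback, so the port isn't reachable externally
docker run -d -p 127.0.0.1:8080:80 nginx

# Or, in /etc/docker/daemon.json, disable iptables management entirely
# (requires a daemon restart, and container networking then needs
# manually maintained NAT/forwarding rules):
#   { "iptables": false }
```

Neither is a great default, which is presumably the complaint.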

brcmthrowaway

I don't use Dockerfiles. Am I slumming it?

user3939382

It solves a practical problem, that's obvious. And on one hand, the practical where-we're-at-now is all that matters; that's a legitimate perspective.

There's another one, at least IMHO: that this entire stack is designed wrong from the bottom up, and every day we as a society continue marching down this path we're just accumulating more technical debt. Pretty much every time you find the solution to be "ok, so we'll wrap the whole thing and then...", something is deeply wrong, and you're taking on a debt from the future that must come due. Energy is not free. We tend to treat compute like it is.

Maybe I’m in a big club but I have a vision for a radically different architecture that fixes all of this and I wish that got 1/2 the attention these bandaids did. Plan 9 is an example of the theme if not the particular set of solutions I’m referring to.

forrestthewoods

I am so thoroughly convinced that Docker is a hacky-but-functional solution to an utterly failed userspace design.

Linux userspace decided to try to share dependencies. Docker obliterates this design goal by shipping dependencies, but stuffing them into the filesystem as if they were shared.

If you’re going to do this then a far far far simpler solution is to just link statically or ship dependencies adjacent to the binary. (Aka what windows does). Replicating a faux “shared” filesystem is a gross hack.

This is a distinctly Linux problem. Windows software doesn’t typically have this issue. Because programs ship their dependencies and then work.

Docker is one way to ship dependencies. So it's not the worst solution in the world. But I swear it's a bad solution. My blood boils with righteous fury anytime anyone on my team mentions they have a 15-minute Docker build step. And don't you dare say the fix to Docker being slow is to add more layers of complexity with hierarchical Docker images, oh my god I swear. Running a computer program does not have to be hard, I promise!!
