Podman vs. Docker: Comparing the two containerization tools (linode.com)
228 points by tambourine_man on Feb 9, 2023 | 175 comments


This page is unreadable without clicking through a cookie popup that takes up the entire screen. Here's a tip: nobody wants "targeting cookies", and if someone belongs to the 0.01% that do, you can let them enable it manually, instead of using semi-dark patterns to have uneducated users accept all cookies without understanding what that choice means.

I understand some pages need advertising revenue to survive, and that targeted advertising brings in a lot more revenue, but Linode is not a blog, it's a cloud provider valued at almost a billion dollars. They don't need to do this.


The "annoyances" filters of uBlock Origin remove most of them. It improves the web so much that it affects my browser and OS choices. The web us unbearable without it.


Having used an ad blocker for a decade or more, it's actually shocking to browse without one these days. It actually feels like I've got malware, with many, many sites having LAYERS of ads, like they're buckshotting and expecting most to be blocked.


Right? I bought an iPad, and it doesn't let me filter all of that noise. Boy howdy is the web loud!

A few years ago, I built a website while kind of forgetting how others experience the web. It's lightweight, and eschews ads, newsletter prompts, cookie notices, chatboxes, tracking and all the other annoyances. I didn't think much of it, but a lot of people notice and bring it up when they contact me.

I'm tapping into an undiscovered niche: websites that don't piss people off!


Safari on iOS allows extensions nowadays. Wipr, SponsorBlock and StopTheMadness are some you should check out.


Some companies base their business model entirely on the EU regulation, e.g.:

https://usercentrics.com

I know them only because their cookie banners often use a layer that blocks the website, sometimes doesn't load with WebKit on Linux, or is slow. I have doubts about centralized storage of cookie data across different websites.


And use plausible/fathom/goatcounter/... instead of something that uses cookies


I use Plausible (it's fantastic), but it misses a lot of the insights I used to get with Google Analytics, including simple navigation flow between pages.

This is fine. Insight is overrated. However, I'm not optimizing for conversions, while Linode probably has professionals doing that full time.

I can't imagine trying to convince my boss to blind the marketing team, especially since the effect is immeasurable by design. I can use Plausible because it doesn't threaten a whole department.


Relevant legislation (say, GDPR) says nothing specific about cookies, just about using client-side state. Granted, relevant legislation also says privacy should be the default, and that you can't coerce people into accepting loss of privacy.

Anyway, the most intrusive tracking would then just need a, say, "local storage banner" instead, and we're back to square one.


I HATE these popups. I'm in the UK and I spend most of my time on the web having to deal with them. Truly awful; there should be some regulation to stop companies using these horrible dark patterns.



On Firefox I use the addon Consent-O-Matic which does the automatic handling of GDPR consent forms.


I use the same but it didn't work very well on this site. It took a long time, and the background remained semi-transparent black after it finished.


I'm surprised that the article doesn't mention the royal pain that is the user ID mapping required to run limited-privilege containers. This is by far the biggest con and pain in the butt with using Podman.*

To expand on the problem: the user ID mapping file is really unfortunate and basically non-portable between systems. I hope there has been better tooling built up around this lately, as Podman basically "wins" over Docker in my book in all other ways. But the pain required to set up and properly manage user-privileged containers with Podman is just a bit too much and becomes a significant barrier.

*[edit] to be fair, also a pain with rootless Docker too.


I wrote a portable solution for the user ID mapping problem: https://github.com/FooBarWidget/matchhostfsowner

It's an init program — written in Rust — that changes the UID of an account (including home directory) inside the container to match that of the host. It takes care of corner cases such as what happens if there's a conflicting account, or when the user wants to run as root, or when chowning the home directory is too slow, etc.

In a related article I detail why such a program is non-trivial, what all the caveats are that one might have to look out for, and what alternative approaches exist (e.g. bind mounts): https://www.joyfulbikeshedding.com/blog/2021-03-15-docker-an...


I remember it used to be a thing to deal with, but in the last year installing Podman has been the only step I’ve needed to take. Well, and I use Fedora now, where it obviously works out of the box. I think it’s a solved problem at this point.


I wrote a mini tutorial (as a Reddit comment) about how to deal with UID/GID mappings when you run rootless podman and you want a specific container user to write to a bind-mounted directory:

https://www.reddit.com/r/podman/comments/103ut7z/comment/j31...

Short summary: My best tip is to see if either "--user $uid:$gid" or "--user 0:0" works together with this command:

    podman run --rm --user $uid:$gid --volume ./dir:/dir:Z --userns keep-id:uid=$uid,gid=$gid IMAGE

(requires podman > 4.3.0)
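A quick way to sanity-check the resulting mapping is to print the kernel's view of it from inside the container (a minimal sketch; the uid/gid values and image are placeholders):

    podman run --rm --userns keep-id:uid=999,gid=999 docker.io/library/alpine \
        cat /proc/self/uid_map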


The main architectural problem here (originating in Docker) is promoting the shared volume system in the first place. It's just the wrong thing to try in so many ways. It's insecure and runs are non-reproducible. And the OS integration is like a tragedy novel when you look at all the different platforms with unique problems on each. A much better way would be to support automatically copying inputs in and outputs out using the existing layer system.

(As an optimization you could do it in an rsync-ish way.)


What's the issue? I find podman to be better in this exact scenario, specifically with bind mount permission shenanigans. Don't you just set up $USER with a couple of thousand sub-UIDs once on your hosts? Is there any other friction I'm missing?


That is true. But I have to say that migrating my home server to rootless Docker (which included all of the same pain) was a fantastic improvement when I finally got around to doing it. I could now migrate to Podman, but I'm less motivated now, since rootless Docker works so nicely.


I've not had any issues migrating containers between systems and setting up said systems in the past year when it came to this particular issue. I think this particular hurdle has been eliminated.


I got fed up with Docker having a "Virtual machine service" using like 4GB of memory (while running no containers) on my 8GB Mac so I investigated what I thought would be "native" (arm64) Lima.

I tried this k8s https://github.com/lima-vm/lima/blob/f7e7addab557da560da7146... example thinking it'd be a thin wrapper around QEMU.

6gb+ usage RAM with nothing deployed lol

The disconnect to me is weird. I'm pretty sure with qemu-system-aarch64, Debian/Alpine don't use more than 100-200mb of RAM sitting idle. Where does 6GB of RAM usage come from?

Not that I think it's a great use of time to optimize for this whatsoever. I just thought it'd be a fun exercise.


It's the page cache[0]. A similar thing happens with WSL2 on Windows[1]. The issue is that the Linux kernel running in the Lima VM thinks all the RAM you give the VM is free to cache files in; it doesn't know there is a host OS that could also make use of that RAM. So after loading up a ton of containers, where each has its own OS dependencies, that ends up being a lot of files the kernel keeps cached as long as nothing else needs the memory. There isn't really a fix besides right-sizing the amount of RAM you give the VM. Or I guess you could disable file caching, but I wouldn't recommend it[2]. (A quick way to see this from inside the guest is sketched after the links below.)

[0] https://www.linuxatemyram.com/

[1] https://news.ycombinator.com/item?id=26897095

[2] https://github.com/Feh/nocache
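A minimal way to see this from inside the guest, using only standard Linux commands (the drop_caches write is safe but only useful for demonstration):

    # most of the "used" memory is really reclaimable page cache
    free -m
    # flush dirty pages, then drop the clean caches
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    free -m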


When you're right, you're right.

https://github.com/lima-vm/lima/blob/master/examples/default... (4 GiB default)

    ~$ free -m
                  total        used        free      shared  buff/cache   available
    Mem:           3907         304        1446           1        2156        3431
    Swap:             0

Thank you.


To add to this, the hypervisor can do more to integrate with the guest so that this issue is mitigated somewhat. For example, with KVM and a Linux guest there's a lot of optimizations to share memory pages between the two. For reference: https://access.redhat.com/documentation/en-us/red_hat_enterp....


I remember VMWare solved this with a balloon driver, which basically asks the guest for a lot of memory at a very low priority, so it takes precedence over the page cache but yields to any program that actually needs the memory.


One could also put their VM root filesystem on ZFS, swapping the Linux page cache for the more tunable ARC.


Interesting!

It looks like Fedora CoreOS inside podman maybe doesn't do this?

    [root@localhost ~]# free -h
                   total        used        free      shared  buff/cache   available
    Mem:           7.8Gi       212Mi       7.2Gi       8.0Mi       333Mi       7.3Gi
    Swap:             0B          0B          0B
And Activity Monitor says that qemu-system-x86_64 is using 1.27GB of real memory.


Does HyperV have a solution where the hypervisor integrates with the guest kernel, telling it to evict page caches when the host needs more memory?


The mechanism is called ballooning and technically doesn't need cooperation on the guest kernel side (but cooperation makes everything much smoother). No idea about Hyper-V, but it is implemented e.g. in virtio: https://pmhahn.github.io/virtio-balloon/ . You can google some failed experiments to add it to QEMU.


Honestly, Apple either needs to support native containerization in the MacOS kernel (allowing people to create MacOS-native images) or help bring Linux to the MBP.

Docker based workflows are rubbish on my MBP.


Honestly I hate using my Mac for container based development. I know it is "hypervisors so faster than VMs", but the workflow is so slow that when I do personal Dev work on my older £400 Lenovo with Linux it runs rings around my MacBook when dealing with containers.


Agreed, I too have been moving off MacOS for my workflow. Wouldn't it be great if we could take full advantage of the incredible battery life and single threaded performance of our MBPs?

I think Linux running flawlessly, with full hardware acceleration, all the drivers and equal or close battery life on a MBP would make it the best laptop money could buy. Until that day, it's a ferrari with square wheels


> hypervisors so faster than VMs

Not sure what you mean by that, hypervisor is what allows the VMs to exist.

Anyway, containers are a Linux thing so I use a Linux laptop when working with containers. No VM is needed and life is great.


Because it's hard to talk about some of these things without getting someone trying to "correct" me on terminology.

Hypervisors are one method of running virtual machines, you also have classical virtualisation methods and when I bring up the virtual machines being slow, I always get someone who pushes up their glasses with an "well actually its a Hypervisor running it, so it's essentially native speed".

I find Linux is a much nicer development environment all around though.

[Edit] Yup, already got one.


No, it’s not being pedantic when you are using terminology in a confusing or incorrect way that leads to it being unclear exactly what you are complaining about.

What do you mean by “classical virtualisation methods”, actual CPU emulation so you can run e.g. PowerPC on x86?


This adds nothing to the conversation.

There is nothing unclear about what I said, I understand that hypervisors are supposed to be fast "almost native speed!". I also know that running Docker/Podman in Linux on MacOS isn't fast for many reasons.


I'm also confused as to what you mean. The hypervisor is what's running the VMs, no? So I don't understand your original comment where you say "hypervisors so faster than VMs" either.

I don't think misnome is trying to be pedantic either, I think we're both unclear as to what you're trying to say.

Edit: I'm also dubious that hypervisors always offer "native" speed. To me this seems like it depends a lot on the hypervisor, the workload, the guest OS, etc.


Wow, it's been 5 years since Docker moved to using the Hyperkit hypervisor.

But anyway prior to that it used VirtualBox (on Mac) if I remember correctly, which is a full virtualisation environment.


But...VirtualBox is a hypervisor, too. It says so in their documentation. I'm sure using the built-in hypervisor framework in MacOS brings performance improvements, but they're swapping one hypervisor for another.

Anyway, taking a guess at what you're saying: do you mean full software virtualization vs. hardware-assisted virtualization? If so, I would agree with you that even best-case performance of HA VMs is not quite bare-metal; sometimes it's close enough to not matter much.


Hyperkit is a hypervisor. Virtualbox is a hypervisor. Hyper-V is a hypervisor. VMWare products are hypervisors. They are all hypervisors, a hypervisor is just the platform that manages the virtualisation (which is a CPU feature). What changes is what parts of the host system are emulated and the manner in which system peripherals are passed through. The only alternative to a hypervisor that you could mean is full-system emulation, e.g. some modes of QEMU where the CPU itself is emulated on top of another system. This is obviously slow.

Virtualised OSes are obviously "slightly" slower because they don't have 100% of a system to use and usually require a dedicated chunk of system RAM, but when people say they are slower they usually mean that the VM doesn't have direct hardware peripheral access (e.g. its own dedicated network card and disk drives) and the hypervisor emulates some amount of the system. This is the case with the most accessible Linux VMs that run Docker, because simulating the block devices on top of the MacOS filesystem is slow.

Docker itself is none of these, it's a fancy chroot enabled by linux kernel features, so _requires_ running on top of linux or a sufficiently linux-like compatibility layer.

So you see why it's somewhat incoherent and makes it sound like you are talking out of your ass when you complain that Hyperkit is a hypervisor but "Virtual machines aren't", but all these horrible people keep disagreeing with you.


Docker on Mac creates a full Linux VM; that's one reason it is so slow, the other being the filesystem IIRC. The hypervisor does not matter that much.


In my day job we are investigating dropping containers for the cases where it's used for simply having some setup that can be shared between developers and have the build work.

Currently trialing nix-shell (https://nixos.org/manual/nix/stable/command-ref/nix-shell.ht...) in the iOS team (this team doesn't need to deploy software to servers, so its a small trial for now to make sure certain build tools are useable and on the right version for our build), but see no reason why it couldn't extend to other similar workflows in other teams.


I'm already using NixOS on my private laptop (sticking with macOS for work), but Nix on other OS's keeps tempting me. We're working on an application that needs to pull a whole bunch of "weird" dependencies, from all over the place: a specific version of Python, PyPI, apt/homebrew, binary-only proprietary software, etc. So far the answer has always been Docker and a specific release of Debian/Ubuntu, but keeping the code running on the host OS is always desirable, because there are occasionally features that need to work with host hardware, like audio.

My problem with Nix is, just like Rust, while it provides a very high ROI, the barrier to entry seems very steep; and unlike Rust, the documentation still leaves a lot to wish for. I'd go all-in on it right now, but I'm (rightfully, I guess?) scared of Nix-unique problems that will take 10x the effort to address than the mess that we're currently dealing with.


I won't be the person to deny it can be tough to deal with issues. The documentation is vast but very shallow too, so you end up having to figure stuff out yourself.


Same thing here. We're trying to cut down on the amount of OS-level dependencies for each service, which makes them feasible to run locally with minimal fuss.


That's one reason why we ditched the Mac desktops for development in favour of SFF PCs running Linux.


This doesn't make much sense. You're probably running your software on Linux servers, so having native containers in MacOS doesn't really do much (Also...there is sandboxing in MacOS/Darwin already). And bringing Linux to the MBP, if you want containers..is going to require a Linux kernel, so at the end you're still going to be running Linux VMs, no matter what. This is exactly what WSL2 is doing.


Yeah I have a recurring issue where docker desktop on Mac will use >100% cpu with no containers running (verified with docker ps and docker stats commands). I’ve tried all of the troubleshooting they have posted online. Currently got it down to ~40% cpu usage when idle, but no guarantee that it stays there.

Tried switching to podman with docker-compose and podman-compose, but can’t get the auth to work for private repos on dockerhub. So basically just accepting my fate of crazy idle memory and cpu usage.


My most popular StackOverflow question is that phenomenon exactly: https://stackoverflow.com/questions/58277794

No root cause after 3 years. The best guess is the overhead of IO somewhere in the stack.


Are you using the "new" file synchronization? With the combination of that, the new virtualization framework that stabilized last year, and upgrading to a MacBook with an M1 Pro, the relatively large codebase I run daily zips along very quickly. It's made up of:

- a PHP-FPM container running a 5m LoC codebase
- a Node.js container running a webpack dev server instance on a 1m LoC frontend codebase
- 2 Elasticsearch containers (one for logs and one for app search)
- 2 Kibana containers
- 1 Logstash container
- a RabbitMQ container
- a memcached container
- a Redis container
- a Percona container
- 3-4 microservices, each in their own container
- an imgproxy container

All of this runs quite snappily in ~6 GB of RAM and I've never noticed any slowness while also running Edge, Slack, VS Code, PhpStorm, Zoom, and a bunch of terminal windows, and I could probably get that down a bit by consolidating the Elasticsearch and Kibana instances.


I had a similar prob and it was fixed by changing one of the experimental virtualisation options. Still a resource hog though.


This is actually one of the driving factors that caused me to migrate away from Mac to Linux 5-6 years ago. It's been great.

It wasn't the only factor though. The lack of options for laptops and inability to self repair drove me away too. Used to be one of the strong points.


I spent some time digging into this for the fast, light, and easy-to-use replacement for Docker Desktop and (co)lima that I've been working on.

The root of the problem is that VM memory is allocated on demand, but never freed after it's used for the first time. In other words, once used, it can never be released. Since my app has a lighter userspace, it starts out using less memory than other VMs, but eventually reaches the memory limit given enough usage and time. (My optimized memory management setup means the VM works well with a lower memory limit than others, but it doesn't solve the fundamental problem.)

Linux has a feature called "page reporting" to report chunks of memory that are no longer used to the hypervisor, which can then drop them to reduce usage on the host side. WSL 2 actually uses this feature, but I suspect it becomes less effective with longer VM uptime because memory becomes more fragmented over time. Since Hyper-V has been limited to dropping contiguous 2 MiB chunks of memory until recently [1], fragmentation is likely the reason many users report high memory usage. Page cache is definitely a contributor as well, but a much easier one to fix. It looks like Microsoft is working on the problem with page reporting.

Although Apple's Virtualization.framework doesn't support page reporting, I was able to implement it with some workarounds and confirmed that this works with QEMU on Linux. Unfortunately, while free memory is correctly reported to macOS, nothing actually seems to get freed. I'm planning to report this to Apple because it seems like memory ballooning (essentially a more limited and primitive version of page reporting) doesn't work as documented, whether it's Virtualization.framework or another VMM like QEMU. If/when Apple fixes this, it'll be possible to reduce memory usage significantly. Details from my investigation into what's going on with memory management on the XNU side: https://twitter.com/kdrag0n/status/1612309883411640321

The good news: From my testing, the issue isn't as bad as it appears. The "free" memory tends to compress quite well, so XNU's memory compression does a good job at taking care of it when you're actually running low on memory.

(Shameless plug on this topic: The app I'm working on already has quite a few improvements over others: fast networking (30 Gbps), VirtioFS and bidirectional filesystem sharing, Rosetta for fast x86, full Linux (not only Docker), lower CPU usage, native Swift UI, and other tweaks. Email in bio for waitlist. Details to avoid spamming this thread: https://news.ycombinator.com/item?id=34374176)

[1] https://lkml.org/lkml/2022/9/30/81


https://devblogs.microsoft.com/commandline/memory-reclaim-in...

> This feature is powered by a Linux kernel patch that allows small contiguous blocks of memory to be returned to the host machine when they are no longer needed in the Linux guest. We updated the Linux kernel in WSL2 to include this patch, and modified Hyper-V to support this page reporting feature. In order to return as much memory to the host as possible, we periodically compact memory to ensure free memory is available in contiguous blocks. This only runs when your CPU is idle. You can see when this happens by looking for the ‘WSL2: Performing memory compaction’ message inside of the output of the dmesg command.


I didn't realize they already had triggers for compaction, thanks for sharing! I suspect fragmentation is still a major issue (along with page cache management, which DAX should help with), so it'll be interesting to see if/how Microsoft improves memory management.


The most important word is not even on the page: systemd. If you want to run containers as a service under systemd, the only proper way to do that is with podman, because of the daemonless model. When you use a `docker run` command in a systemd unit, systemd only monitors the docker client's exit status and whatnot; the actual workload runs in a separate process (started by the Docker daemon), so systemd can't stop/restart or monitor it at all. Only with podman can you do that.
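As a sketch of what that buys you: Podman can emit a unit file that systemd then owns end-to-end, supervising the container process directly (rootless example; the container name and image are placeholders):

    mkdir -p ~/.config/systemd/user
    podman create --name web -p 8080:80 docker.io/library/nginx:alpine
    podman generate systemd --new --name web > ~/.config/systemd/user/container-web.service
    systemctl --user daemon-reload
    systemctl --user enable --now container-web.service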


That's an important point. While I have my own reservations wrt systemd, you do want a single, coherent supervision scheme that is the source of all truth about the current (and desired) state of the system, with centralised logging, metrics, etc. Docker can't fill the role of that "one true" supervisor, because you still need some host-side services to bootstrap it, like a dhcp/ntp, acpid, sshd, or the Docker daemon itself.

In defense of Docker (at least it's my impression for why they do things the way they do), you actually want a higher-level abstraction over running a single, specific container, and it starts making sense as you move into a clustering context. Even if you have a basic 2-node cluster, and want to deploy a couple different workloads to it (like some internal webapps, a memcache, a cron job, etc), neither systemd nor any of its direct alternatives give you the necessary tools to "just throw" the workloads at the cluster as a single logical entity. (Of course clusters start getting more interesting again, once you want to deploy stateful things like databases.)


I actually just switched my CI pipelines from Docker to Buildah this week to avoid using an old version of docker-dind and drop permissions on my build containers.

It was incredibly easy. The arguments are exactly the same, so it was just a %s/docker/buildah/g

Images from Docker Hub also need to have docker.io/ prepended to the front, since buildah doesn't assume Docker Hub by default. There is apparently an option for aliases to set a default search namespace, but it wasn't important enough for me to dive into.
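For reference, the swap really is that mechanical (a sketch; the image and registry names are placeholders):

    # was: docker build -t registry.example.com/app:latest .
    buildah bud -t registry.example.com/app:latest .
    # was: docker push ...
    buildah push registry.example.com/app:latest
    # pulls need the registry spelled out:
    buildah pull docker.io/library/alpine:latest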


Dodging dind is a worthy achievement. Thanks for posting your story. I would love to hear more details.


That's basically all there is to it. I'm using GitLab CI, pulling the official buildah image from quay.io. Everything worked on the second try (after prepending docker.io/) for about a dozen "core" container builds.

The downstream ones use Google Jib kicked off by a gradle build for our spring apps.


Kaniko is also a good alternative to buildah for building containers inside containers without dind.


Kaniko is what I use as well, since I initially had some containers that were not building correctly with buildah. Not quite as seamless as buildah promised to be, but it can read Dockerfiles with no modifications, so it was easy enough to move to.


I've asked ChatGPT if the text (I copy&pasted the paragraph about pros/cons) is generated by it, and it said yes.

See screenshot: https://i.ibb.co/NmmytNB/Screenshot-20230209-094632-Firefox....


I'm not sure the actual ChatGPT chat interface is capable of measuring perplexity/burstiness yet (unless OpenAI integrated their recent detection work into the actual ChatGPT model which I don't think is the case).

Per GPTZero on the first few thousand characters https://gptzero.me/:

Average Perplexity Score: 632.205

A document's perplexity is a measurement of the randomness of the text

Burstiness Score: 1089.841

A document's burstiness is a measurement of the variation in perplexity

Its conclusion was "Your text is most likely human written but there are some sentences with low perplexities". There was only one sentence highlighted as possibly written by AI: "However, each tool has its pros and cons."

It's also possible to prompt ChatGPT to write with high perplexity and burstiness, so also decent chance of false negatives.


Oh, interesting tool, thanks!


And so was your comment. I knew it!:

https://imgz.org/iALu6WAz/


Oh no, you exposed me!


Do you have any particular problem with the content of the article?


There is already a ton of generated spam content to the point where search is almost broken. Spam generated by a better generator trained on spam is still spam.

Also normal etiquette is to quote text that isn't yours. To me, this reflects extremely poorly on the author.


HN etiquette is to assume good faith and to not post insinuations about stuff like bot-written text. Maybe this thread reflects poorly on us?


Nope, it's good for me. :) I just shared fun fact (that chatgpt thinks it's generated by it), but it doesn't mean that it's true or the article is bad.


What’s the accuracy of this tool?


Good question, I have no idea to be honest. The text looked like ChatGPT's to me (after many hours talking to it :D), which is why I asked it to check. But I won't be surprised if it's wrong and the text was written by a human. I wouldn't take it too seriously; even if it's generated, I don't see a problem here as long as it's correct (if so).


The biggest difference is that Podman and Buildah are funded by and used underneath OpenShift which is a huge money maker for Red Hat while Docker is on life support with Mirantis holding the plug.

Momentum around Docker that isn't Desktop all but halted in 2018 or so.


This is a common misconception. While podman is indeed installed on Red Hat CoreOS, it does not run the containers used by OpenShift. OCP 4 uses CRI-O to run all containers, and if you log in as the core or root user and run podman ps you will not see any running containers.


Oh wow, I thought OCP used crun at least. Surprising!

I do think the revenue from OpenShift and friends funds Podman/buildah/crun/etc.


Podman and CRI-O share the low level libraries (namely containers/image and containers/storage).

crun is Tech Preview in OpenShift 4.12: https://cloud.redhat.com/blog/whats-new-in-red-hat-openshift...


Right. But Docker is not a charity. I think that Podman/Buildah is actually a superior solution. This might unfortunately mean the ultimate demise of Docker, Inc. as we know them.

To be fair, I literally use Docker every day. So I'm not necessarily rooting for its demise. But I'm genuinely happy that innovation is at least coming from somewhere in this space.


Mirantis acquired the Docker Enterprise business. When Docker focused on Desktop, they became a $50MM/year company.


You mean "when Docker decided to strong-arm their wealthiest customers into paying tens of thousands per month with a few months of warning, they became a $50MM/year company?"

Interested in seeing how sticky that revenue is. A big chunk of its value is provided by other free solutions.


Really? I thought Docker was pretty widely used at companies for deployment, etc. At least it was at my company ~1 year ago. I wasn't aware that Docker was on life support in any sense.


Widely used, but I don’t think it makes a whole lot of money to sustain itself


I think it’s north of $100m a year now so all of the life support talk is not based in truth.


Around $135m / year according to this: https://sacra.com/p/docker-plg-pivot/


A lot of people use Docker

A lot of people also don't pay for Docker


> Because of its daemonless architecture, Podman can perform truly rootless operations.

Daemonless isn't really relevant to rootless.

containerd/buildkitd/dockerd support rootless mode too, and lots of rootless code has been mutually ported across containerd/buildkitd/dockerd and Podman.


What exactly does rootless achieve? Is it some slight security benefit as in running OpenBSD instead of Linux but no difference for day to day usage?

What are the advantages of running Podman over Docker, other than that the Podman ecosystem is less mature?


The biggest problem with Docker is that its containers are effectively running as root.

This is basically OK when the containers you want to run are more like traditional daemons. But if you allow normal users to run containers, in a shared multiuser system, you are basically giving them root to do what they want with your system.

e.g. if a normal user can execute a docker container, they can create a mount point for anywhere in your system. They can mount /etc or any other spooky place and be able to read from it like they are root.

This is also potentially bad, for example, if you have a network-facing daemon, like a web server. Let's say that you bind mount a directory on the host (because yeah, you want to serve up those static HTML files). That container (Apache httpd or whatever) is basically running as root on the host system. Not good.

There are solutions for all this, of course. But this is really where Podman was trying to bring in advantage and added-value over Docker. That and just running as a normal process rather than as a daemon.
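To make the escalation concrete, here is the classic demonstration of why access to a rootful Docker daemon is effectively root (a sketch; don't run it on a box you care about):

    # any user who can talk to the daemon can read root-only host files
    docker run --rm -v /etc/shadow:/host-shadow:ro docker.io/library/alpine cat /host-shadow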


> The biggest problem with Docker is that its containers are effectively running as root.

Both Docker and Podman support rootless mode (and rootful mode).


Sure but to me, and I am obviously biased, if I want to run a container on my system I don't want to fire up multiple daemons in my homedir and then have them sitting out there using resources, when all I want to do is run a containerized application.

One beautiful thing about Podman is it can fire up, pull the image from the container registry, start the container and then go away. Leaving you with only the containerized application running in rootless mode.

To do this with Docker, you fire up the entire Docker infrastructure, then launch the docker client; once the application has run, you still need to shut down the Docker infrastructure.

Even if you run the podman socket-activated service, the podman service will not be running until someone connects to it; once the connection goes away, the podman service shuts down, no longer using system resources.


I mean, that's a totally fair response. I should have provided a caveat, "historically speaking", that Docker has lacked support for rootless containers. But yes, solutions have emerged recently.

Podman out the gate has had the facility, which I think it used as a means to distinguish itself. This is great, because maybe that helped push Docker in the right direction.


Not really recently. Both were implemented almost simultaneously, circa June 2018.

https://github.com/AkihiroSuda/docker/commit/588a4e91fc8cb99... https://github.com/containers/podman/commit/19f5a504ffb14709...

Rootless Docker wasn't merged/released until Docker 19.03, though; still, it is already nearly 4 years old.


The older I get, the more it is that "4 years" ago feels like 4 days ago. You calling out my perspective of "recently" is totally fair, because to me, it feels like recently, whereas truthfully maybe not so. Thanks for the reply, I appreciate the facts.


Sure, but I'm pretty sure you still need root to access the Docker daemon. And if you don't need root, like if you add your account to the docker group, then your account is essentially always root (since having access to the Docker daemon lets you do anything).

Podman is just an app. It’s like Vim or ffmpeg. Imagine if Vim ran as a root daemon at all times and you ran sudo to connect to it and edit text files in your home directory. That’s how silly the Docker architecture is.



No. The Docker daemon runs as your user (not root) in rootless mode.


With the Docker enterprise licensing stuff last year, I had to cease use of Docker Desktop at work, which has been really annoying. As a non-dev who dabbles and rolls their own tools, it means my primary use case of firing up a DB really quickly and easily when needed is gone.

I've been eyeing podman but the additional friction scares me off from jumping in. Has anyone else not doing full-time dev found it (or similar) a simple enough replacement?


> With the Docker enterprise licensing stuff last year

(In case someone else also had missed all of that.)

Commercial use of Docker Desktop at a company of more than 250 employees OR more than $10 million in annual revenue requires a paid subscription (Pro, Team, or Business) to use Docker Desktop.


I am using Podman infrequently on Windows. It works just fine, although I had to go rootful for a few containers. They've since been updated to not need it.

Podman Desktop replaces Docker Desktop.

The only "problem" is that the auto-restart policy doesn't apply between reboots. I kinda get why, and it's not a huge problem for me to start the containers I care about via Podman Desktop when I want to use them. In a sense it works better because if I reboot they all stop and are not using resources.


If you enable the podman-restart service on the linux server, restart on boot should work fine. For rootless containers, you also need to enable linger mode for the rootless user.
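A minimal sketch of that setup (the username is a placeholder):

    # restart containers that have --restart=always after a reboot
    sudo systemctl enable podman-restart.service
    # let the rootless user's services run without an active login session
    sudo loginctl enable-linger alice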


I'm using Colima ( https://github.com/abiosoft/colima ) and am very happy with it so far.

Here's an article about it: How Colima is a good alternative to Docker Desktop ( https://kumojin.com/en/colima-alternative-docker-desktop/ )

I'm using Colima together with DDEV ( https://ddev.com ) to create and run PHP projects (webserver + db) in containers. Clean, very easy to use, and fast.


What can you do with docker desktop that you can't with docker on its own?

Besides, there are myriad other Docker GUIs besides Docker Desktop. Even the unofficial Podman Desktop GUI can connect to a Docker daemon.


Looks like you can install the Apache-licensed Docker engine + CLI via Chocolatey:

https://community.chocolatey.org/packages/docker-engine

https://community.chocolatey.org/packages/docker-cli

So it seems the Docker Inc. enterprise licensing is all based on the "Docker Desktop" UI. Can someone confirm this?


Install Docker in WSL (still free). Install Rancher Desktop. Connect to Docker in WSL.

Or if on Linux same as above, but remove the WSL part.


Problem for developers in the Microsoft stack is that Visual Studio requires Docker Desktop to be installed - unless Rancher is able to fake that.


> Problem for developers in the Microsoft stack is that Visual Studio requires Docker Desktop to be installed

I'm not sure what you are referring to here. I run Docker engine in WSL2, and vscode in Windows. It all seems to work seemlessly to me. What am I missing?


Visual Studio is not vscode.


How do you interact with Visual Studio and Docker? I'll see what I can come up with.


There are two facets to this:

- It calls Docker (docker command) to build and run images

- Visual Studio has a UI to manage containers and images (not sure what it uses under the hood for this)


So by doing what I said, I have `docker` in PowerShell.

To recap, I have the following:

Docker (CLI ONLY!!!) on Ubuntu on WSL as described here: https://docs.docker.com/engine/install/ubuntu/

    sudo apt list --installed 2>&1 | grep docker
    docker-buildx-plugin/kinetic,now 0.10.2-1~ubuntu.22.10~kinetic amd64 [installed,automatic]
    docker-ce-cli/kinetic,now 5:23.0.0-1~ubuntu.22.10~kinetic amd64 [installed]
    docker-compose-plugin/kinetic,now 2.15.1-1~ubuntu.22.10~kinetic amd64 [installed,automatic]
    docker-compose/kinetic,now 1.29.2-2 all [installed]
    docker-scan-plugin/kinetic,now 0.23.0~ubuntu-kinetic amd64 [installed,automatic]
    python3-docker/kinetic,now 5.0.3-1 all [installed,automatic]
    python3-dockerpty/kinetic,now 0.4.1-4 all [installed,automatic]
So I'd say start with installing

  docker-ce-cli

Rancher Desktop from here: https://github.com/rancher-sandbox/rancher-desktop/releases

Then in Rancher Desktop you enable WSL integration as shown here: https://imgur.com/a/yKV8S9e

With this done you should be able to have the docker command in Pwsh

I can do docker image ls etc like a full Docker setup!


have you looked into installing an actual linux distribution on wsl2 and using docker on that?


Daemonless operation sounds very compelling. It is often slow or complicated to deploy a docker installation just to be able to run a container when all I really want is a process. Conceptually (and I really mean conceptually, not technically) I don’t need a Python daemon to run a Python tool so why do I need a Docker daemon to run a containerized Python tool?

LXC did this nicely, imho, by storing the state and configuration of each container as files in a directory. It doesn’t seem like LXC/LXD has really taken off (see also: bzr) — bad luck Canonical, but thank you for trying.

Relating to both LXC vs Docker, and Docker vs Podman: oftentimes, the presence of competition is what drives the winner to victory. It might seem like wasted effort to have two competing solutions for a while but the final outcome of a winner that’s better than it would have been is of benefit to everyone. If I had a dollar for every project I started at work just to stone-soup one of my peers into action I would have more than… $20? $50?

Postscript, written after reading the comments below (thanks!): when you need to get a project going in a creative field like software engineering, a useful technique is to start with something — anything at all — announce it, and invite meaningful contributions to take that thing from a trivial starting point to a rich and useful product. The analogy is “stone soup”: a villager wants to put together a feast but doesn’t have the means to do it themselves. What they can do is get a fire going, get a pot of water simmering over the fire, and put a single rock in it. They then suggest to another villager “hey I’ve got this soup going but it would be even better with one of your onions: how about we work together?”. To another they say the same thing but for potatoes, to another a ham hock, ad infinitum until the pot contains a rich and tasty broth. By kicking off the project, albeit with something that was just a framework, they were able to engage others' interest and take the project to completion as a team.

The more scurrilous version of this is shit soup. One villager sets up the pot / asks a question on stackoverflow. Predictably, no one engages. Another villager — secretly conspiring with the first — shows up and announces the correct ingredient to be added next is goat turd / replies to the stackoverflow question with a deliberately incorrect answer. Lo and behold all the other villagers now show up to explain why that answer is dumb, and in fact we should be adding onions, potatoes, and ham / insert correct technical answer to SO question here.

Many of us will have come across this in The Pragmatic Programmer under “Stone Soup and Boiled Frogs”.


I know the interpretation of 'stone soup' you use, however there are other parts of the world that know it in a different story; I'll tell it here.

When the English first attempted to colonize Australia, an Englishman noticed a native 'bush turkey' was not regularly eaten by the local Aboriginal people. The Englishman had tried to cook and eat them, but it ended up anything but tasty.

He asked one of the tribal elders the secret to eating them, and the elder responded:

"You put a stone into the pot, you bring it to boil When its boiling you throw in the plucked bush turkey, you eat it when the stone is soft."

The lesson I take from this is "sometimes things are just going to be bad, you just gotta accept that and move on".


I’m not sure I understand your anecdote. I had actually never heard of “stone soup” by grandparent but I found it in Wikipedia as this story: https://en.m.wikipedia.org/wiki/Stone_Soup

My interpretation of grandparent as applied onto that story was that some of their colleagues had some local solutions that were part of the puzzle and OP purposefully created some ancillary solution (the “stone soup”) to get them to contribute to the bigger picture. But perhaps I’m seeing this entirely wrong?


The stone will never become soft in the soup - the bird will never be ready to eat.


You got it, and thanks for replying! I added a ps to my original comment, to elaborate.


> It doesn’t seem like LXC/LXD has really taken off

Maybe for you, but LXC is very useful for deploying isolated development environments for each developer. systemd-nspawn is quite simple, but LXC provides tooling that makes operations easier.

Docker and LXD serve different purposes. Docker is meant for an app to be run isolated but LXD can run a whole OS.


Podman can actually quite easily run whole-OS environments in containers with systemd running as PID 1, giving you very similar features to LXC.

Check out `podman run --rootfs`.
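A minimal sketch of what that looks like (the rootfs path is a placeholder; `--systemd=always` sets the container up for a systemd PID 1):

    # run an extracted OS tree with systemd as PID 1, LXC-style
    podman run -d --name devbox --systemd=always --rootfs /path/to/rootfs /sbin/init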


I actually run all my Docker containers in an LXC container.


Does anyone here have experience / advice for building upon Docker images to be run rootless? So many of the images on Docker Hub run as root by default, and it's unclear to me if there's a standard UID, for example, to use when building them. Any resources on this would be helpful as I learn more about the topic. Thanks!


That's not what rootless means at all. The image does not matter at all. What's at stake here is whether users can run a container under their own privileges, or whether the container ultimately is run by a root level user.

In either case, whether the container uses root or a regular user within it isn't really a factor. There are good reasons to assume user permissions within the container, but they have nothing to do with rootless containers: the idea that a regular user can launch a container is a different concept from the permissions inside the container.


Thanks for your response. I was under the impression that the UID of a user within a container matches that which it runs under in the host machine. It sounds like that is not the case.

From what I've read, the user running a container really starts to matter when the container is given access to the filesystem (using a volume). It sounds like the user within the container still wouldn't matter in this situation, but the user running the container on the host system would have to have appropriate permissions to the volume.


In rootful docker, that is the case, and the uid inside the container is identical to the uid outside the container. In rootless docker, the uid inside the container maps to a sub-uid outside the container. This way, it inherits the same permissions that your own uid has.

Honestly, I think the default behavior of rootful docker is broken by design. Being able to run rootful docker commands is equivalent to having sudo privileges, because the docker daemon has root privileges and will mount arbitrary files on your behalf.


The default behavior of rootless podman: in-container-root gets mapped to host user, anything else gets mapped to namespaced uids in a per-user specified range.
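The per-user ranges live in /etc/subuid and /etc/subgid (example values; "alice" is a placeholder):

    $ grep alice /etc/subuid
    alice:100000:65536
    # container UID 0 maps to alice's own UID;
    # container UID 1 maps to host UID 100000, UID 2 to 100001, and so on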


I tend to grab /etc/passwd out of the container to see what user they added for this purpose. For my own images, I use distroless (gcr.io/distroless/static-debian11:nonroot), and they use 65532: https://github.com/GoogleContainerTools/distroless/blob/main...

(I tend to write go and CGO_ENABLED=0, so I typically just pick any user and it doesn't matter. But it can matter in many common cases, so it's worth checking now to save confusion and trouble later. But, it's /etc/passwd that names the user, and filesystem internals that control file access. Typically if a container is designed for nonroot, the relevant files will be owned by the nonroot user they picked. But not always!)
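Concretely, grabbing /etc/passwd can be as simple as this (the image is just an example, and it only works if the image ships a cat binary):

    podman run --rm docker.io/library/nginx:alpine cat /etc/passwd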


I recently converted a team project over to Podman. I only had an issue with one container running rootless. Redirecting port 80 to a non-privileged port solved the issue. What are you seeing?


My eventual goal for this image is to deploy it in a Kubernetes cluster. As you can tell I'm just starting this process, but I want to be mindful of how I can keep my container secure as it runs on shared resources.


How do you handle binding to privileged ports with rootless Podman? That's what stopped me from deploying via Podman the last time I tried.


I'm using a firewalld rich rule to forward host 443 to 8443, then the proxy container has 8443:8443 mapped. Works perfectly. All on Fedora but other firewalls should be able to do the same.
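The rich rule can be as simple as a forward-port (a sketch; the zone name is an assumption):

    sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=8443
    sudo firewall-cmd --runtime-to-permanent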


I didn't. We run reverse proxies in front of our apps anyway so I just changed the port at the proxy. I wish I had a better answer for you. Maybe someone else does.


Set `sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80`


On FreeBSD there is something called Bhyve. It is an efficient container solution with several native FreeBSD advantages like OpenZFS and NVMe support. That enables features like rapid async replication and extreme corruption protection. Also in terms of speed and overhead (resource usage) it works amazingly.

Another FreeBSD solution is Jails. I have never seen a more powerful container solution. It is lightweight and has unique features like hierarchical Jails. Management is done over the CLI or solutions like iocage. Jails and its ancestors have been developed since 1975. The modern Jails has been developed since 1998. It is an incredibly flexible and lightweight virtualization platform. Michael W Lucas wrote a very good book about it ("FreeBSD Mastery: Jails").


Sounds like FreeBSD could have made the DX much better even 20 years ago, before Docker was created, but opted out of taking that step further? Or didn't see the opportunity? Or some other specific reason?


Unix is not the main OS for most developers in terms of popularity. However, FreeBSD and Jails are used by the world's largest companies. Netflix is famous for it, but there are many examples. In my opinion the popularity of the OS among developers might be one of the reasons. Unix seems hardcore compared to Linux. I started to look at FreeBSD when I needed the network power; now I don't use anything else.

For starters; Try GhostBSD - FreeBSD with a MATE desktop.


Linux is certified UNIX...

Developers aren't "scared of UNIX". They are scared of things they don't know, and "don't know" in this context means Google search page doesn't explode with results when they search for the thing.

Anecdotally, people in storage business are much more familiar with FreeBSD due to FreeNAS and ZFS in general, but as some of my colleagues who previously worked for Dell EMC "storage division"^* recalled, the second most popular platform they had to adapt their storage solutions for was AIX... (and that trailed far, far behind Linux). It's also funny, in a way, since internally, they often did use FreeBSD in various departments because of FreeNAS. In a similar company, our home directory on dev. server was backed up by FreeNAS (we were also developing a storage product... for Linux).

----

* - There isn't really a "storage division" within Dell EMC afaik, just a bunch of different storage-related projects. I just had to describe those projects in some way.


Honestly, I couldn't understand your answer.

> Unix is not the main OS for most developers in terms of popularity.

Because FreeBSD didn't want to be helpful? It had the idea of Jails but no resources to make it usable for developers? What's your opinion, knowing the FreeBSD backstage?


Linux, MacOS and Windows are better environments if you want drop-in solutions. FreeBSD does not have a desktop environment (use GhostBSD for that or add something custom). FreeBSD does not support Docker because it has better solutions in place. It can be a headache to set things up when you are new to Unix. Don't expect that the latest version of some JetBrains IDE is 2 commands away; compiling and other solutions might be needed. The community of FreeBSD is very helpful. The forum and several mailing lists are the best way to go with questions.


BTW Podman has been ported to FreeBSD and works with jails. Although I have never tried it.


Isn't Bhyve a full-on hypervisor ?


I often compare Bhyve with Docker or Podman. It is a hypervisor and virtual machine manager. Basically you link an ISO and run it. It can be any OS.

There are several ways to run Bhyve:

- CLI
- libvirt and virt-manager
- vm-bhyve

A vm-bhyve template can look like this:

  loader="uefi"
  graphics="yes"
  xhci_mouse="yes"
  cpu=2
  cpu_sockets=1
  cpu_cores=2
  memory=4G
  network0_type="virtio-net"
  network0_switch="public"
  disk0_type="nvme"
  disk0_name="disk0.img"
  disk1_type="ahci-cd"
  disk1_dev="custom"
  disk1_name="/path/to/disc.iso"
Sample config: https://github.com/churchers/vm-bhyve/blob/master/sample-tem...

I use Jails to run applications like Postgres, Redis, and a Python API in an isolated environment. Jails are native FreeBSD, but isolated.


It is. The parent comment misses the point and misunderstands/focuses only on the "isolation" part of Podman/Docker, which is the foundation but hardly the selling point of those products.


Running containers without a daemon in rootful and rootless mode is better and more secure in many ways.

If you want to understand all of the security features of Podman take a look at chapters 10 and 11 of my book `Podman in Action`.

In the book I also have a nice comparison of features in Podman over Docker.

Podman 4.4 was just released with a new feature called quadlet, which makes running podman containers under systemd really easy. You should see blogs on this very soon.


> Thus, even limited users executing Docker commands are getting those commands fulfilled by a process with root privileges, a further security concern.

I would say: a complete security nightmare which makes all kinds of "smaller" security issues in other places (e.g. an IDE) WAY worse, and would put Docker on a ban list if the industry really did care about security.

Though you _can_ set up Docker without this vulnerability AFAIK; it's just not done by default in most setups, and I'm not sure how hard it is.


Docker is already on an informal ban list when it comes to US government container deployments in higher classification environments. Most of those situations require Podman-based solutions.


This is also related to the client/server model used by Docker versus the fork/exec model used by Podman.

Podman works closely with the HPC (High Performance Computing) world. Check out the article about how the fastest computers in the world, in the most secure facilities in the world, are using Podman.

https://www.nersc.gov/assets/Uploads/06-Containers-for-HPC-S...

https://opensource.com/article/23/1/hpc-containers-scale-usi...


But docker has also been able to run rootless for almost as long as podman has existed. Why would it still be on such a blacklist?


Because by default it does not, and defaults matter (a lot!).

Also license and cost aspects.

Like, ask 100 devs who run Linux and Docker; I would be surprised if more than 10 made sure that Docker _can only run_ without root rights (and there are two ways to do so, with different complexity and consequences).


Not so much a blacklist as just a cost saving measure, as the other advantage Podman has is you don’t have to pay the Mirantis bill, both in terms of money and IT overhead.


So there’s the answer: industry doesn’t care. Industry is all about ‘just turn off UAC, just run it all as administrator, just click to bypass the unsigned driver dialogs’ as long as it works.


Is there a way to run podman as non-root but with a privileged helper (or some caps) for proper bridged network setup?


No, but you can precreate a network namespace and allow rootless Podman to join the network namespace.


BTW Podman now supports pasta as well as slirp4netns. I am told pasta gives you better network performance in rootless mode, although I have never tried it.


I wonder if they could make the text smaller and make the contrast even lower? I don't think this passes WCAG accessibility guidelines (although I didn't check the exact size and hex color). Really disappointing that it's too difficult to read, because it looks interesting.


I think my main pain point would be switching from docker-compose to something that would work with podman...


The answer, ironically, is `docker-compose` with the `-H` flag pointing to your podman socket.
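Roughly, for a rootless setup (a sketch):

    # start the Podman API socket for your user
    systemctl --user enable --now podman.socket
    # point docker-compose at it, per invocation...
    docker-compose -H unix://$XDG_RUNTIME_DIR/podman/podman.sock up -d
    # ...or via the environment
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock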


Yes, I wrote that comment without researching the alternatives. Which is also a point: Ain't nobody (or at least somebody) got time for that.

Docker(compose) worked well for me for a long time. I'm chafing a little at the "run as root" problems, so I'm somewhat open to alternatives. I'm also a bit hesitant to relearn or run into problems I don't actually NEED to run into.


Like podman-compose? It seems to have the same CLI. I think you can install it with pip install podman-compose


docker-compose can work with podman running its socket service. There is also podman-compose, which is mostly feature-compatible. That said, the transition can be painful, especially from root to rootless.


Just make a pod.yaml and run it with

podman play kube pod.yaml

I think that’s it. I’m on mobile at the moment.
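If you already have containers running, Podman can write the YAML for you (a minimal sketch; the names are placeholders):

    podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
    podman generate kube web > pod.yaml
    podman rm -f web
    podman play kube pod.yaml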


Agree, I believe that the world should move to using Kubernetes YAML for running multi-container workloads, whether you are running on a single node or in a Kubernetes cluster. Docker Compose, while supported by Podman, does not allow you to easily migrate your applications to Kubernetes.


From my subjective experience, Podman to date remains an unfinished toy and Docker is still the gold standard for running containers.


Can you elaborate on what caused you to have that view?


Some of the cons of rootless/podman:

* last time I used systemd units to manage podman containers, it was an inconsistent failure and a catastrophe with containers failing to start.

* podman-compose is not yet officially supported.

* pods are confusing.

* any use of privileged ports (<1024) requires messing with sysctl values as workaround. yikes!

* no pre-defined apparmor/selinux profiles for common processes.

* folder/file permissions under user namespaces are a confusing mess.

* slirp4netns eats cpu and has awful performance.

* can't do GPU and other deeper HW related tasks.

I could go on and on...

Docker has none of these inadequacies.


Most of these are issues with running rootless, not necessarily with running podman. Podman in rootful mode does not have most of these issues.


Podman is default rootless and Docker is default rootful. It's a fair comparison.


Sure, and bottom line the OS/Kernel prevent you from doing some things in rootless mode, although we are always attempting to push the boundaries on what is allowed, in a secure way.

Rootless mode works for the great majority of containers, and in most cases users have work arounds for containers that do not work, like binding to ports < 1024. But I agree that understanding these limitations, sometimes requires users to learn new things.

But security often requires compromise; we don't run all processes as root in Linux for a reason. Running processes without privileged mode by default is way more secure.


I don't disagree with what you say. Generally, if you pick security over convention you are bound to face limitations for the sake of security. But podman as a product, compared to docker, looks far less mature to me (things like podman-compose should be included in the box 4 years on). I also get the feeling that people who compare podman to docker only run WordPress as a test, then call it a success without getting deep into what problems both podman and docker solve.


Thankfully I work with a compiled language, so I can run native binaries and not have to deal with this crap.


Do you statically link glibc? Everything needs an environment to run in. I wrap all my Rust projects in a container for ease of deployment.


> Everything needs an environment to run in

It's called the Operating System. (Obviously I am talking about production, not development.)


A self contained binary is fine, but if a program is going to clutter my system then I prefer it to be containerized.


That sounds like a development environment problem, not a production environment problem.


Are there any real world incidents where the container-as-root thing has become a problem? A meta-study even?

In these things I'm a bit helpless judging the cost vs benefit ratio.


There have been major vulnerabilities over the years where being root allowed the container to take over the system.

Check out https://www.stackrox.io/blog/the-runc-vulnerability-a-deep-d...



