There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.
But now I am curious: what are your counts? I would guess those of you running k8s would win out by pod scaling.
docker ps -q | wc -l
For those wanting a quick count. (The -q flag lists only container IDs, so the header line doesn't get counted as an extra container.)
I recently went from 0 to 1. Reinstalled my VPS under Debian, and decided to run my Forgejo instance with their rootless container. Mostly as a learning experience, but also to easily decouple the Forgejo version from whichever version my distro packages.
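For anyone curious, the rootless setup can be as small as one command. A minimal sketch from memory (the image tag, ports, and data path are assumptions; check the Forgejo docs before copying):

    # rootless image listens on 3000 (HTTP) and 2222 (SSH) by default
    docker run -d --name forgejo \
      -p 3000:3000 -p 2222:2222 \
      -v forgejo-data:/var/lib/gitea \
      codeberg.org/forgejo/forgejo:11-rootless

Pinning the tag is what gives you the decoupling: the version changes when you change the tag, not when the distro repo does.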
Right now I’m at 33, with 3 stopped ones I haven’t used in a while. Also got 3 VMs running. A handful are duplicates, e.g. Redis/PostgreSQL/Photon/Caddy.
40 containers behind Traefik, but I did just add the Sablier middleware to stop containers when idle and start them on first request. Electricity is not cheap for me. But I got lucky adding 64GB RAM to my NAS and 128GB RAM to my desktop last March, because prices have since gone crazy.
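For those asking how that works: the managed containers get Sablier labels, and a Traefik plugin middleware sits in front of them and asks Sablier to start/stop them. A rough sketch, with key names written from memory of the Sablier docs, so treat them as assumptions to verify:

    # labels on the container Sablier should manage (compose syntax):
    #   sablier.enable=true
    #   sablier.group=ondemand

    # Traefik dynamic config attaching the plugin middleware:
    http:
      middlewares:
        start-on-demand:
          plugin:
            sablier:
              sablierUrl: http://sablier:10000   # Sablier API endpoint
              group: ondemand                    # matches the label above
              sessionDuration: 5m                # stop after 5 idle minutes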
but I did just add the Sablier middleware to stop containers when idle and start them on first request.
Would you mind expounding on this? Electricity is fairly affordable in my locale; however, I’ve been on a mission to cut consumption when it’s not needed. Have you noticed an ROI?
31 containers in all. I have been up as high as ~60 and have pared it back, removing the things I wasn’t using.
I also tend to remove anything that uses appreciable CPU at idle, and I rarely run applications that require further containers in a stack just to boot; my needs aren’t that heavy.
How it started: 0
Max: 0
Now: 0
ISO 27002 and provenance validation goes brrrrr
My containers are running containers… At least 24.
Zero.
About 35 NixOS VMs though, each running either a single service (e.g. Paperless) or a suite (Sonarr and so on plus NZBGet, VPN,…).
There are additionally a couple of client VMs. All of those are distributed over 3 Proxmox hosts accessing the same iSCSI target for VM storage.
SSL and WireGuard are terminated at a physical firewall box running OPNsense, so with very few exceptions, the VMs do not handle any complicated network setup.
A lot of those VMs have zero state; for those that do, backup of just that state is automated to the NAS (simply via rsync), and from there everything is backed up again through Borg to an external storage box.
In the stateless case, deploying a new VM is a single command; in the stateful case, same command, wait for it to come up, SSH in (keys are part of the VM images), run restore-<whatever>. On an average day, I spend 0 minutes managing the homelab.
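Roughly, that backup chain boils down to two commands (all names and paths here are invented for illustration, not the actual setup):

    # 1) sync just the service's state to the NAS
    rsync -a --delete /var/lib/paperless/ nas:/backups/paperless/
    # 2) on the NAS: archive everything again to the external storage box
    borg create --stats ssh://user@storagebox/backups/borg::paperless-{now} /backups/paperless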
Is this in a repo somewhere we can have a look?
I’ll DM you… Not sure I want to link those two accounts publicly 😄
On an average day, I spend 0 minutes managing the homelab.
0 is the goal. Well done!
74 across 2 Proxmox nodes in a few LXCs.

64 containers in total, 60 running - the remaining 4 are Watchtowers that I run manually whenever I feel like it (and have time to fix things if something should break).
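For reference, one way to do the same update check manually is Watchtower's one-shot mode, something like:

    # check for updates once, apply them, then exit
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --run-once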
What tool is that screenshot from?
There is a post about getting overwhelmed by 15
I made the comment ‘Just 15’ in jest. It doesn’t matter to me. Run 1, run 100. The comment was just poking the bear, as it were. No harm or foul intended. Sorry if it was received differently.
I am like Oprah yelling “you get a container, you get a container, containers!!!” at my executables.
I create aliases using toolbox so I can run most utils easily and securely.
Toolbox?
Edit: Oh cool! Thanks for sharing.
https://github.com/containers/toolbox
Podman toolboxes, which layer a container over your user file system, allowing you to make toolbox-specific changes to the system that only affect that toolbox.
I think it’s originally meant for development of desktop environments and OS features, but you can put most command-line apps in them without much feature breakage.
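The basic flow looks like this (the container name here is arbitrary):

    toolbox create dev              # create a toolbox container
    toolbox enter dev               # interactive shell inside it
    sudo dnf install -y neovim      # installs inside the toolbox only

    # on the host, alias commands through the toolbox:
    alias nvim='toolbox run --container dev nvim'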
I always saw them pitched by Fedora as the blessed way to run CLI applications on an immutable host.
That’s why I use them, but they are missing the on-ramp to getting this working nicely for regular users.
E.g. how do I install neovim with toolbox and get Wayland clipboard working, without doing a bunch of manual work? It’s easy to add to my ostree, but that’s not really the way it should be.
I ended up making a bunch of scripts to manage this, but now I feel like I’m one step away from just using NixOS.
None; if it’s not in a Debian repo, I don’t deploy it on my stable server.
It’s not really about Docker itself, I just don’t think software has matured enough if it’s not packaged properly.
My Kubernetes cluster is sitting happily at 240, and technically those are pods, some of which have up to 3 or 4 containers, so who knows the full number.
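If you want the full number, counting containers rather than pods is a one-liner (this counts declared app containers across all namespaces; init containers not included):

    kubectl get pods -A -o jsonpath='{.items[*].spec.containers[*].name}' | wc -w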
35 stacks, 135 images, 71 containers.
All of you bragging about 100+ containers, please may I inquire as to what the fuck that’s about? What are you doing with all of those?
100 containers isn’t really a lot. Projects often use 2-3 containers. That’s only something like 30-50 services.
Not bragging. It is what it is. I run a plethora of things and that’s just on the production server. I probably have an additional 10 on the test server.
In my case, most things that I didn’t explicitly make public are running on Tailscale using their own Tailscale containers.
Doing it this way each one gets their own address and I don’t have to worry about port numbers. I can just type http://cars/ (Yes, I know. Not secure. Not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.
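In compose terms the pattern is a Tailscale sidecar whose network namespace the app joins; a minimal sketch (the LubeLogger image path and the auth key are placeholders from memory, verify upstream):

    services:
      tailscale:
        image: tailscale/tailscale
        hostname: cars                    # becomes the MagicDNS name
        environment:
          - TS_AUTHKEY=tskey-auth-...     # placeholder, generate your own
          - TS_STATE_DIR=/var/lib/tailscale
        volumes:
          - ./ts-state:/var/lib/tailscale
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun
      app:
        image: ghcr.io/hargata/lubelog    # image path from memory
        network_mode: service:tailscale   # share the sidecar's network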
On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud AIO alone spins up something like 5 or 6 containers in addition to the master container. Tends to inflate the container numbers.
Ironic that Nextcloud AIO spins up multiple…
Things and stuff. There is the web front end, the API to the back end, the database, the Redis cache, and MQTT message queues.
And that is just for one of my web crawlers.
/S