What do people use and recommend for this? I’ve read a bit about Portainer, but I’m still learning - and don’t know what the best solutions are.

Today I have a handful of self-hosted services running on my home machine - mostly installed directly, but a couple running as Docker containers. As the scale of my self-hosting has grown, I’ve realized that things would be a lot easier to manage if each service ran as its own container, so that services are isolated from each other.

The solution I’m looking for would make it easy (possibly via a web UI) to monitor, modify, update, and remove containerized services, including their networking and storage.

Edit: Also, I’d only want a FOSS solution.

  • Daniel Quinn@lemmy.ca

I’ve used FluxCD in the past and have looked into ArgoCD, but honestly, I’ve not seen any big benefit from either. I use k8s both at home and at work, and in both cases we do “imperative” deploys: you run helm install ... either directly or via the CI, and stuff is deployed.

So for example, at my last job, our GitLab CI just had a section, run only when merging into master, that ran helm install. We had three values.yaml files, one for each environment, and when we wanted to deploy a new version, the process was:

1. Create a tag for our release version (i.e. 1.2.3) and push it to the repo. This would trigger a build and push the resulting image into the container registry.
    2. Push an update to the repo with the new tag set in the appropriate Helm values file. If we wanted to deploy 1.2.3 to development but not yet to staging or production, then the tag: value in each of the environment files would look like this:
    • k8s/chart/environments/development.yaml: tag: 1.2.3
    • k8s/chart/environments/staging.yaml: tag: 1.2.2
    • k8s/chart/environments/production.yaml: tag: 1.2.2

    Once that change is pushed, the CI will automatically apply it with helm install ... and make sure that all three environments are what they’re supposed to be.
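To make that concrete, the deploy job can be as small as this (the job name, release name, and chart path are placeholders; helm upgrade --install is the idempotent form of helm install, and only: restricts the job to merges into master):

deploy:
  stage: deploy
  only:
    - master
  script:
    # Apply the chart with the environment-specific values file;
    # installs on the first run, upgrades on every run after that.
    - helm upgrade --install myapp ./k8s/chart -f k8s/chart/environments/development.yaml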

As for dependent services, those should all be in your Helm chart so they’re stood up and torn down together. The specific case you mention, where “Service A” depends on “Service B” but comes up before “Service B” is ready, is a classic problem, but it’s easily solved:

    The dependent service (“A” in this case) should have an entrypoint that checks for everything else before starting. Here’s what I’m using right now in a project:

#!/bin/sh

# Block until PostgreSQL accepts TCP connections; "nc -z" only
# probes the port without sending any data.
while ! nc -z postgres 5432; do
  echo "Waiting for postgres..."
  sleep 0.1
done
echo "PostgreSQL started"

# Drop a marker file that other tooling (e.g. a readiness probe) can check.
touch /tmp/ready

# Hand over to the container's real command.
exec "$@"
    
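The touch /tmp/ready line pairs naturally with an exec-based readiness probe on the container; a minimal sketch of what that could look like (only the /tmp/ready path comes from the script above, the timing is illustrative):

readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]
  periodSeconds: 5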

I’ve even got some code that checks that all the Django migrations have run first, for the same situation. The Kubernetes philosophy is that any container should be able to die at any time and eventually be brought back up, and that every container needs to be prepared for this. Typically this means that your containers should operate on the basis of “if I can’t work, die, and hope the problem is solved by the time Kubernetes redeploys me”.
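For the curious, a minimal version of that migration check (assuming Django ≥ 3.1, whose migrate --check exits non-zero while unapplied migrations remain):

#!/bin/sh

# Loop until all Django migrations have been applied by whatever
# runs them (an init job, another container, etc.).
while ! python manage.py migrate --check > /dev/null 2>&1; do
  echo "Waiting for migrations..."
  sleep 1
done
echo "Migrations applied"

exec "$@"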