My home infrastructure has gradually become more and more complex, mostly because I like testing things at home before using them in production. This post describes my multi-container workload handling in a bit more detail (an earlier post in 2024 covered it briefly too, but there have been a lot of developments since).

Background

A lot of the software I self-host consists of single containers. For those I have a rather nice pyinfra deployment script each, which sets up the container as a systemd unit, making it possible to start and stop the podman container on demand (and to start it on boot), handle upgrades, etc. in a unified way.
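To give an idea of the shape of this pattern, here is a minimal sketch; the service name, image, and template path are made up for illustration, and the real scripts handle more (upgrades and so on).

    # deploy_miniflux.py - minimal pyinfra sketch of the single-container
    # pattern. Service name, image, and template path are hypothetical.
    from pyinfra.operations import files, systemd

    # Render a systemd unit that wraps `podman run` for the container.
    unit = files.template(
        name="Create systemd unit for the container",
        src="templates/miniflux.service.j2",
        dest="/etc/systemd/system/miniflux.service",
        image="docker.io/miniflux/miniflux:latest",
    )

    # Enable and start the unit, restarting it if the unit file changed.
    systemd.service(
        name="Enable and start the container service",
        service="miniflux.service",
        running=True,
        enabled=True,
        daemon_reload=unit.changed,
        restarted=unit.changed,
    )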

Some of the software is more complex, with multiple interlinked containers. For that, docker compose (or podman-compose) is the typical solution. I am not personally a fan, though: I try to make the multi-container workloads highly available, and as easy to manage and as observable as possible (with minimal effort), and for that, sticking them in my Kubernetes cluster is the thing.

The problem with that is the infrastructure-as-code approach to my home infrastructure; I have yet to come up with a good plan for it. What follows mostly describes things that don't work, or are at least a bit painful to deal with, so caveat emptor.

Different ‘docker composes at home’

First off, my home infra IaC has two orchestrator pieces:

  • pyinfra for managing the servers at home and in the cloud (provisioned using Pulumi)
  • Pulumi for managing resources within the home Kubernetes cluster (set up with pyinfra) and cloud services (Cloudflare, Oracle)

Here’s what I produced when trying to set up CloudNativePG (using Pulumi) and combine it with a third-party docker compose fragment (with database credentials pointing at that PostgreSQL cluster).

pyinfra for multiple containers? eww.. (even with a utility library)

I tried managing a set of scripts for a number of containers (per service), but it got quite messy and was not highly available. So I backed off from it pretty soon (sometime in 2024, I think).

podman-compose?

I tried it briefly, but it handles only a subset of docker compose content, and I really wanted to use my Kubernetes cluster anyway, so it wasn’t really worth it.

Kompose (-> Helm chart) -> Python Pulumi

The ‘just scripts + glue’ approach with Kompose is what I tried first for this. Kompose can be used to convert a docker-compose.yml file to a Helm chart, which Pulumi can then ingest. The outcome looks relatively verbose, though.
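Roughly, the flow looks something like this (chart path and names are illustrative, not my exact layout):

    # First: kompose convert -f docker-compose.yml --chart -o ryot-chart/
    # Then Pulumi ingests the generated chart from the local path.
    import pulumi_kubernetes as k8s

    ryot = k8s.helm.v3.Chart(
        "ryot",
        k8s.helm.v3.LocalChartOpts(
            path="./ryot-chart",
            namespace="ryot",
        ),
    )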

Example: Ryot (container + cloud native pg)

  • 22-line docker-compose fragment (just the service, no pg)
  • 64 lines of Kompose’d Helm chart
  • 108 LoC of Python Pulumi to set up the CloudNativePG cluster (using a utility function elsewhere; a sketch of that part follows below) + instantiate the Helm chart

= about 5-10x overhead compared to just using docker-compose.yml. 😔
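The CloudNativePG part goes through a custom resource; a minimal sketch, assuming the CNPG operator is already installed in the cluster (names and sizes are made up, and my utility function wraps more than this):

    import pulumi_kubernetes as k8s

    # CloudNativePG Cluster custom resource; CNPG also creates a
    # "ryot-pg-app" Secret with credentials the app can reference.
    ryot_pg = k8s.apiextensions.CustomResource(
        "ryot-pg",
        api_version="postgresql.cnpg.io/v1",
        kind="Cluster",
        metadata={"name": "ryot-pg", "namespace": "ryot"},
        spec={
            "instances": 2,
            "storage": {"size": "5Gi"},
        },
    )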

Python Pulumi (manually converted from docker-compose.yml)

Example: Airtrail (again, one container and CloudNativePG)

148 LoC of Python Pulumi, setting up all the resources manually (a condensed sketch follows after the list):

  • ConfigMap to represent the .env file
  • PersistentVolumeClaim for persistent volume (uploads)
  • Deployment
  • Service
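In condensed form, the shape is something like this (image, port, and mount path are illustrative; the real file is the aforementioned 148 lines):

    import pulumi_kubernetes as k8s

    # .env contents as a ConfigMap (values are placeholders)
    env = k8s.core.v1.ConfigMap(
        "airtrail-env",
        data={"ORIGIN": "https://airtrail.example.com"},
    )

    # Persistent volume for uploads
    uploads = k8s.core.v1.PersistentVolumeClaim(
        "airtrail-uploads",
        spec={
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "1Gi"}},
        },
    )

    labels = {"app": "airtrail"}

    deployment = k8s.apps.v1.Deployment(
        "airtrail",
        spec={
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": "airtrail",
                        "image": "ghcr.io/johanohly/airtrail:latest",
                        "envFrom": [{"configMapRef": {"name": env.metadata["name"]}}],
                        "ports": [{"containerPort": 3000}],
                        "volumeMounts": [{"name": "uploads", "mountPath": "/app/uploads"}],
                    }],
                    "volumes": [{
                        "name": "uploads",
                        "persistentVolumeClaim": {"claimName": uploads.metadata["name"]},
                    }],
                },
            },
        },
    )

    service = k8s.core.v1.Service(
        "airtrail",
        spec={"selector": labels, "ports": [{"port": 80, "targetPort": 3000}]},
    )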

This turned out to be about as verbose as the Kompose solution, but potentially more reusable. Due to that, I like it a bit more (but still not much). 😐

Conclusion?

I don’t like any of these options.

I think I will continue with the Python Pulumi approach, changing it to have more reusable bits, turning it essentially into a somewhat more compact DSL (see the sketch below). But even then it will not be as expressive as docker-compose, so I will need to spend more effort than I would have if I had just copied docker-compose.yml and the .env file over to the host and run podman-compose up -d or so.
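As a rough idea, a single helper could expand one compose-style service into the usual boilerplate; the signature here is speculative, since I haven’t written it yet:

    import pulumi_kubernetes as k8s

    # Hypothetical helper: expand one compose-style service into the
    # usual ConfigMap + Deployment + Service boilerplate.
    def simple_app(name: str, image: str, port: int, env: dict | None = None):
        labels = {"app": name}
        container = {
            "name": name,
            "image": image,
            "ports": [{"containerPort": port}],
        }
        if env:
            cm = k8s.core.v1.ConfigMap(f"{name}-env", data=env)
            container["envFrom"] = [{"configMapRef": {"name": cm.metadata["name"]}}]
        k8s.apps.v1.Deployment(
            name,
            spec={
                "selector": {"matchLabels": labels},
                "template": {
                    "metadata": {"labels": labels},
                    "spec": {"containers": [container]},
                },
            },
        )
        return k8s.core.v1.Service(
            name,
            spec={"selector": labels, "ports": [{"port": port}]},
        )

    # Usage: roughly one line per compose service
    simple_app("airtrail", "ghcr.io/johanohly/airtrail:latest", 3000,
               env={"ORIGIN": "https://airtrail.example.com"})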

I suppose it is not too much of a bother, and I do get some benefits out of it (snapshotted and backed-up Kubernetes volumes, for example, as well as HA), but still, it feels like an extra chore.