A better Kamal deployment strategy?

An article, posted 7 months ago filed in deployment, docker, deploy, capistrano, server, debian, nginx, rails, ruby & proxy.

Kamal was introduced in 2023 (back then as MRSK) as an alternative way to deploy and manage containers on a server. It is marketed as Capistrano for containers, and as a big fan of the simplicity of Capistrano I was intrigued. I despise the political ideas of one of Kamal’s creators, but on the tech/implementation side he promotes solutions that I honestly think are good (including HTML over the wire). Kamal is ‘simply’ some tooling around running images using Docker on a server, with zero-downtime deployments.

Some preparation

This is how I prepared for my testing:

  1. I had to set up an SMTP server as my sendmail solution (that actually worked quite well for my smallish projects, no need for sendgrid or the like); see my post on getting Chasquid up and running on Debian.
  2. I installed docker from the Debian repo’s (and not Docker’s), so it is automatically kept up to date, while not being yet another thing to think about when doing a dist-upgrade. Kamal only uses a small subset of docker, so no need for advanced features.
  3. Verified that two security features that improve Docker isolation were enabled (both were on by default for me):
    1. Made sure seccomp was enabled
    2. Made sure AppArmor was enabled
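Both can be checked through Docker itself; one way to do it (the exact output will differ per system) is:

```shell
# List the security options the Docker daemon is running with;
# the output should mention both seccomp and apparmor.
docker info --format '{{.SecurityOptions}}'
```
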

Why am I considering dropping Capistrano

Capistrano has worked quite well for me, but every now and then I had to visit a server via SSH, and that is bad when you’re thinking about reproducibility. Reproducibility is important for scalability and resilience. Updating a few servers with Capistrano is still easy to do, but I needed something better.

Kamal deployment strategies considered

Some options I considered:

Option 1: Web > Kamal Proxy > Rails app (& supporting services)

This is the default approach: configure your server securely, point Kamal at it from your local machine and have it deploy. You’re live in little time, with TLS included via Let’s Encrypt.

I dismissed this option due to my cautious nature: Web > Kamal and then the Rails app server feels almost like being on the web naked. Currently all my apps are hosted via nginx, which I not only use to serve my apps (using Passenger), but also to monitor response codes, leading to automatic bans and additional traffic filtering. What I have seen being recommended is putting Cloudflare in front of it, but for my own projects I don’t like to lazily rely on big tech. The internet is supposed to be distributed, not dependent on a few giant nodes.

Option 2: Web > nginx > Kamal Proxy > Rails

Logically following from the first option, I put nginx in front of it. It works.

Contents of /etc/nginx/sites-enabled/example.com:

server {
	server_name example.com;

	root /var/www/html/example.com;

	# Added to make sure certbot can still do its thing
	location / {
		try_files $uri @backend;
	}

	location @backend {
		proxy_pass http://127.0.0.1:88;

		# Required, because Kamal needs this information
		proxy_set_header Host $host;

		# We're a trusted proxy
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	}

	# ... start with this and listening on :80, and let certbot do its thing; it will automatically rewrite some of this
	listen 80;
	listen [::]:80;
}

To make Kamal Proxy listen on port 88, make sure the file ~/.kamal/proxy/options on the server contains the following:

--publish 88:80 --log-opt max-size=10m
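After changing these options, the proxy container has to be recreated before they take effect. Assuming a standard Kamal 2 setup, that would be something like:

```shell
# Recreate the kamal-proxy container so it picks up the new options.
# Note: this briefly interrupts traffic to all apps behind the proxy.
kamal proxy reboot
```
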

Additionally, make sure you deploy the proxy with ssl set to false: we’re terminating the secure connection at nginx, and continue unencrypted thereafter.
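In config/deploy.yml that looks something like the fragment below (the host and app_port values are just example assumptions):

```yaml
# config/deploy.yml (fragment)
proxy:
  ssl: false          # nginx terminates TLS; the proxy speaks plain HTTP
  host: example.com
  app_port: 3000      # port the Rails container listens on
```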

Reflection on this solution

I don’t like this unencrypted channel. While TLS between the web and your very own server is good, another user on the same server might be able to listen in on localhost traffic, or worse, might be able to bind localhost:88 before the app boots, making it easy to spoof a login form for stealing credentials. Things are bad already when an unwanted person is able to run something on your server, but security is about layers.

Workarounds I considered:

  • Tunnel both port 80 and 443 traffic from nginx to Kamal Proxy; but that reduces our options to filter traffic, because nginx wouldn’t be able to read it.
  • Set up an encrypted tunnel between nginx and Kamal Proxy using self-signed certificates.
  • Use Unix domain sockets to connect to Kamal Proxy, but it doesn’t support them.

Option 3: Web > Kamal Proxy > nginx > Rails

Following up on option 2: switch Kamal Proxy and nginx, and let Kamal take care of deploying nginx (now running in a Docker container). While nginx is now part of the constellation, this is a no-go for me: I would have to maintain and monitor many nginx instances. That’s just crazy overhead.

Option 4: Separate deployment from apps

This is not an unusual configuration: a single definition of (all) servers, maintained centrally. But this does not match the intent of Kamal; the structure of the deploy.yml file, although it could probably be bent that way, seems mainly focussed on the deployment of a single app along with its ‘accessories’. To make this solution work, Kubernetes-based tooling would probably be the right choice.

Final solution: option 2

I decided to go with option 2, despite my reservations about ‘early’ TLS termination. The users active on the machine are either app users, maintainers (with sudo abilities) or root. Apps will be deployed with limited user capabilities, and hence will not be able to listen on the local network; regular users could perhaps do this once they have been given increased abilities, but if an intruder gained access to those accounts, we would have a problem anyway. Unix sockets, being more strictly tied to users, would have been nice security-wise, but on the other hand they only work for as long as we’re thinking single-machine deployment. In that case we could resort to tunnelling (using e.g. WireGuard), but that is for a later day. A bonus feature of option 2 for me personally is that I can start rolling out the new-style solution next to my “traditionally” deployed apps.

Option 4 is not a bad idea, but a bit in conflict with Kamal’s more application-centric deployment. I’m not tied to Kamal, but adopting a totally different server architecture (using e.g. Kubernetes) is beyond what I would like to manage at this point. The big migration for my applications currently is moving from Capistrano, which lets the code run directly on the (virtualised) host OS, to containers. When every app is nicely contained, it will be easier to migrate to other container-based solutions.

Enjoyed this? Follow me on Mastodon or add the RSS, euh ATOM feed to your feed reader.
