We recently had to deploy a Go HTTP API in production. We decided to set up continuous delivery for this API and to use systemd socket activation. This blog post presents what we have learnt.

Deploying a Go application with systemd socket activation

When it comes to deploying a Go application, one will find many blog posts and other resources on the topic. Nowadays, they often revolve around Docker, i.e. building a Docker image containing the application. Thorough people will even separate the Go build from the Docker build to slim down the resulting image. Cool.

Yet, using Docker in production is… not all sunshine, lollipops and rainbows. On the other hand, Go has had decent cross-compilation support since version 1.5, which means one can build a Go application targeting different platforms. In other words, it is possible to build an ELF binary for 64-bit Linux on a macOS laptop. The resulting binary is also the whole application, since Go compiles statically. Hence, there is little need for Docker here, at least for us.
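As an illustration, cross compiling the API for 64-bit Linux boils down to setting two environment variables when building (the binary name is the one we use later in this post):

```shell
# Build a Linux/amd64 binary, regardless of the host platform.
cd api/ && GOOS=linux GOARCH=amd64 go build -o crick-api-server
```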

Continuous delivery

We chose to let CircleCI build the Go application and use rsync to transfer it to the production server. The application is managed by systemd (more on that later). To roll out a new version, we chose a boring solution: we rsync the binary and reload the systemd configuration. Our circle.yml deployment configuration is quite straightforward:

    deployment:
      production:
        branch: master
        commands:
          - cd api/ && go build -o crick-api-server
          - rsync api/crick-api-server server:/path/to/crick-api/production/
          - ssh server 'sudo /bin/systemctl reload crick-api.service'

Both our production servers and the CircleCI servers run 64-bit Linux operating systems, hence no need to set the GOOS or GOARCH variables when building the Go binary. That is how we continuously deploy the Go API on each commit to the master branch, and there is nothing fancy here. For the sake of readability, we omitted the rsync and ssh options (checksums, identity, etc.).

Even if the API is only used by us right now, we are not happy with the downtime implied by such a solution. Reloading a Go HTTP server means closing the socket and opening a new one. This is usually very fast, but API users might be unable to reach the server in the meantime.

Systemd socket activation to the rescue!

Systemd and socket activation

Systemd is fantastic! Among all the features it provides, there is one called socket activation, which is neither a new concept nor a systemd invention. Because we are definitely not systemd experts, we can only give you the naive idea behind this feature, i.e. what we think it is.

The application uses a socket offered by systemd instead of creating one itself. That is super-great for at least two reasons:

  • if the application dies, it no longer listens on the socket. Makes sense, right? Now, what if I tell you that systemd is notified and spawns a new application instance? :tada:;
  • if the application dies (again), no information is lost since the socket remains open, because it is not the application's socket. This allows us to target zero-downtime deployments.

The go-systemd package has a nice example of how to set up this mechanism. You might also be interested in reading about readiness and liveness with systemd and Go. For the Go application I have been talking about since the beginning, we kept things simple and used pretty much the same Go code as the example provided by go-systemd:

listeners, err := activation.Listeners(true)
if err != nil {
    logger.Fatal("failed to get a socket", zap.Error(err))
}

if len(listeners) != 1 {
    logger.Fatal("unexpected number of socket activation fds")
}

logger.Fatal("server stopped", zap.Error(http.Serve(listeners[0], handler)))

As for the systemd configuration, our Ansible provisioning produces two configuration files: crick-api.service and crick-api.socket. The content of each file is followed by some explanations.

; Ansible managed - crick-api.service

ExecReload=/bin/kill -SIGINT $MAINPID


; <REDACTED> ... some security directives
; cf. https://www.darkcoding.net/software/the-joy-of-systemd/
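Because most of the unit file is redacted above, here is a hypothetical sketch of what a socket-activated service unit of this shape typically contains (the script name, DSN and extra directives are illustrative assumptions, not our actual values):

```
; crick-api.service (hypothetical sketch)
[Unit]
Description=Crick API
Requires=crick-api.socket

[Service]
ExecStart=/path/to/crick-api/production/run.sh
ExecReload=/bin/kill -SIGINT $MAINPID
Environment=CRICK_DSN=postgres://user:password@localhost/crick
Restart=on-failure

[Install]
WantedBy=multi-user.target
```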


First, ExecStart does not point directly to the Go application but to a shell script that execs the application (whose binary is named crick-api-server). We use this script to perform database migrations on the fly with migrate before running the Go application.

“Why?” I am glad you asked! Continuous delivery is one part of the answer. The other part lies in the way we manage the database credentials: only the database and the systemd configuration are aware of these credentials. The systemd Environment directives are used to pass environment variables to the service. By using an intermediary shell script, we have access to these variables, in particular CRICK_DSN, which we can give to migrate to perform the database migrations. Simple yet efficient, and sufficient for our current needs.
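Sketched out, such a wrapper script could look like the following (the script name, migrations path and migrate flags are assumptions; the migrate CLI flags in particular vary between versions):

```shell
#!/bin/sh
# Hypothetical wrapper: apply pending migrations, then exec the server
# so it keeps the PID systemd tracks as $MAINPID.
set -e

migrate -path ./migrations -database "$CRICK_DSN" up
exec ./crick-api-server
```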

Now you are wondering where these migrations come from. Still for the sake of readability, I removed a second rsync command from the circle.yml file, which sends the migration files next to the Go app. Now you know :wink:

The ExecReload directive is pretty explicit too. It is the command run when one does systemctl reload crick-api.service. It sends a SIGINT signal to the application, which traps it and cleans everything up before shutting down.

; Ansible managed - crick-api.socket


The ListenStream directive in this second file configures the exposed port of the service, bound to the socket that systemd creates and the Go application uses. A simple Nginx proxy configuration forwards api.crick.io to this exposed port.
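A socket unit of this kind is short; here is a hypothetical sketch (the port and description are illustrative, not our actual values):

```
; crick-api.socket (hypothetical sketch)
[Unit]
Description=Socket for the Crick API

[Socket]
ListenStream=127.0.0.1:8000

[Install]
WantedBy=sockets.target
```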

That’s it!

This deployment strategy was set up during our last Le lab session, a time-boxed hack week we organize every quarter at TailorDev. There is still room for improvement and we would be glad to hear from you. Have you ever deployed a Go API? How did you do it?