Setting Up Glitchsoc (with Docker)

Recently, as you may have heard, there’s been something of an exodus from Twitter to Mastodon. While I won’t get into the reasons here, I will link Mastodon’s official post if you want more information on the whats and whys of the situation (what Mastodon is, why people are leaving Twitter). This is a jump I’m also trying to make, though (as you may know, if you came here from the link I posted on Twitter) I’m not necessarily abandoning Twitter entirely.

Anyway, as part of that (partially for reasons related to where my old Mastodon account was, and partially because I just wanted to), I decided to set up my own Mastodon instance running glitch-soc. Rather than run Mastodon on my host directly, I decided to use Docker (specifically Yakumo Saki’s container). While this wasn’t too complicated, I did run into some issues, so I thought I’d take the time to write this out.

This is not really a guide on how to set it up; this is more akin to an errata document for a guide that doesn’t exist in the first place, listing out things I had to figure out myself from bits and pieces spread around the internet. If you want to follow along, you’ll need a basic understanding of Docker and docker-compose. The sections here are organized roughly in the order I did things while setting up, though I’ve moved some sections around where they make more sense earlier.


Network setup

Yakumo’s default compose file is pretty solid, but since I will eventually want to run another project on the server I’m using, I wanted the Postgres server on a different network than the glitch-soc containers. To accomplish this, I changed the name of the network it’s attached to to shared_internal by adding this block to the end of its compose file:

    name: shared_internal
    internal: true

I also listed this in the glitch-soc compose file, and attached any container that needed to connect to Postgres to this network. This tells docker-compose not to automatically generate a network name, but instead create or attach to the network with that specific name.
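Putting that together, the relevant pieces of each compose file might look something like this. This is a sketch, not Yakumo’s actual file: the service name, image tag, and network key name are all illustrative — only shared_internal itself comes from the setup above.

```yaml
# Mirrored in the glitch-soc compose file for any service
# that needs to reach Postgres.
services:
  db:
    image: postgres:14-alpine    # illustrative image tag
    networks:
      - internal_network

networks:
  internal_network:
    name: shared_internal   # fixed name, so both projects find the same network
    internal: true          # no access to or from outside this network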

Database setup

You may want to give Mastodon its own database account. If you do, you probably still want to make it a superuser–PgHero comes bundled with Mastodon, and it needs a superuser account to work effectively. You will need to make sure the user has CREATEDB, or the Mastodon setup will fail (even if the database already exists).
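Creating such an account might look like this — a sketch to run via psql as the postgres superuser; the role name and password are placeholders, not anything from the setup above.

```sql
-- SUPERUSER keeps PgHero working; CREATEDB lets mastodon:setup
-- create the database if it doesn't exist yet.
CREATE ROLE mastodon WITH LOGIN SUPERUSER CREATEDB PASSWORD 'change-me';
```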

You may also want to enable query stats, which will require editing your postgresql.conf file. If you’re running Postgres in Docker, you can find this file in the volume you mounted. PgHero will tell you the changes to make, but to save you some time:

  • Find shared_preload_libraries and add pg_stat_statements to it. Uncomment the line if necessary.
  • At the end of the config file, add pg_stat_statements.track = all.
  • Restart Postgres.
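After those edits, the relevant lines of postgresql.conf should look something like this:

```conf
shared_preload_libraries = 'pg_stat_statements'  # uncommented, with the library added

# added at the end of the file
pg_stat_statements.track = all
```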

Docker bind permissions

If you’re binding a folder to a container, some containers (like Postgres and Redis) will set their own permissions on startup. Other containers (like glitch-soc and elasticsearch) require you to set the permissions manually. For elasticsearch, you’ll want to make sure the folder is owned by UID 1000 and GID 1000, while glitch-soc will expect 991.
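In concrete terms, that works out to something like the following — the paths are assumptions, so adjust them to wherever your compose file binds the volumes.

```shell
# elasticsearch expects its data directory to be owned by UID/GID 1000;
# glitch-soc expects UID/GID 991 on its bind mounts.
mkdir -p ./es/data ./mastodon/public/system
sudo chown -R 1000:1000 ./es/data
sudo chown -R 991:991 ./mastodon/public/system
```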

Mastodon setup

Once you’ve got Postgres and Redis running, you’ll need to get Mastodon set up. I imagine you could do this just by starting the whole compose file and then docker execing bash inside the container, but I chose to use docker run --rm -it --entrypoint bash --network "shared_internal" yakumosaki/glitch-soc instead. Either way, you don’t need to save the .env.production file generated by the setup inside the container; you just need to copy it out. Once you’re inside, start the interactive setup wizard (RAILS_ENV=production bundle exec rake mastodon:setup); it’ll print the config out as well as saving it to .env.production.

If you want full-text search, make sure your elasticsearch container is also running, and add this to the generated config (ES_HOST should match your elasticsearch service’s name in the compose file):

    ES_ENABLED=true
    ES_HOST=es
    ES_PORT=9200
Then run RAILS_ENV=production bin/tootctl search deploy and full-text search should work once you’ve got the service running.


As-is, streaming does not quite work. Mastodon binds to 127.0.0.1 inside the container by default, which prevents anything from outside the container from communicating with it. You can fix this by adding the environment variable BIND to the streaming service, set to 0.0.0.0, to make sure it’s listening on all the attached networks.
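In compose terms, setting BIND to 0.0.0.0 might look like this — the service name streaming is an assumption based on typical Mastodon compose files, not necessarily what Yakumo’s file calls it.

```yaml
services:
  streaming:
    environment:
      # Listen on all interfaces instead of loopback, so other
      # containers (and the reverse proxy) can reach it.
      - BIND=0.0.0.0
```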

Firewall stuff

You may also want to prevent outside machines from directly communicating with your services. Rather than provide specific instructions for that, I’ll link to Jeff Geerling’s post on the issue. You may prefer to use REJECT over DROP, but either option will work here–so long as you don’t need to expose any ports from Docker to the internet.

Update: Ignore the above for now; this actually turns out to cause some issues by blocking communication from the services outbound as well. I’ll try to figure out a better approach.

Web service health check

Yakumo’s compose file has health checks for most of the services, but the health check for the web service doesn’t quite work (likely due to a Rails update since it was first created). The fix is easy: add --header 'Host: [your.domain]' to the wget command line. Without a host specified, Rails will reject the request to protect against DNS rebinding, but if you specify the host it’s looking for then it’ll work as expected. This is a minor fix, but it’s nice to see (healthy) in the output of docker ps.
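For reference, the adjusted health check might look something like this. The base wget command is a sketch modeled on Mastodon’s own compose file rather than copied from Yakumo’s, so treat everything except the added --header option as an assumption.

```yaml
services:
  web:
    healthcheck:
      test: ['CMD-SHELL', "wget -q --spider --proxy=off --header 'Host: your.domain' localhost:3000/health || exit 1"]
```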