This week I ended up propping up my first “other people are going to use this application” rails app in production mode at work, to help with normalizing and mapping some really ugly data. I’ve built a lot of half-baked tools for my own personal use, but nothing yet that I’ve been comfortable or confident enough with to ask others to use.

This part in itself was kind of exciting – but the thing that was far more exciting to me was the infrastructure I used to prop it up.

Setting the scene

  • I implemented a GitLab server here about a year ago. I use this for nearly all of my repo needs. Their recent versions have integrated CI (continuous integration) directly into their main repository management application, which opens up some interesting opportunities for automation. Basically, it’s GitHub + Travis-CI.

  • I’ve also been trying to learn Docker. I think Docker is a very hard concept to grok from scratch, but once you get it — you get it. At some point in the last month or so, I finally got it.

  • I had some ugly data that I wanted help mapping. So I built a rails app and wanted to launch it in production mode behind Phusion Passenger. Passenger acts as an application server (separate from your HTTP server). Lately I’ve been digging Nginx over Apache, so I planned on using it as my HTTP server.

  • I’m a big fan of using my on-premise servers over a cloud hosted service for the majority of my small-time apps. I’m comfortable in a shell, and I have the hardware available. For temporary things that only need access on our intranet, local hardware is just dandy.

I have an app, now what?

I got the application running in a state that I was pleased with and it was time to figure out how to deploy this thing. I’ve only deployed one or two other rails apps in production mode, and I did so very manually. I wanted to see if all that XP I dumped into my devops skill-tree would pay off. It looked like a cool opportunity to try out this auto nginx-reverse-proxy Docker container I’ve seen tossed about. That sounds so magical and cool.


After experimenting with a few image strategies in Docker, I came across this post talking about how to prop up a rails app behind an nginx proxy. Thank you, mystery writer. This was a seriously awesome experience. I made a few tweaks, since I wasn’t using any authentication, and the app is behind our firewall, so I opted to ignore SSL and self-signed certs – but if I wanted to, that ability is very much there.

First, let’s set up a Dockerfile for my app. I made a few tweaks to the Liberty Seed’s take on a Dockerfile to help optimize the rebuild time when I make changes. Essentially, I want my dependencies (Gemfile) to load and install before the rest of my application. Docker creates a “save state” (a cached layer) after each command in a Dockerfile. bundle install takes a long time, and my Gemfile rarely changes, so it makes sense to cache that portion of my app’s build process.

I’m going to leave my files overly commented for the sake of my own review, and maybe it will help you too.


# Dockerfile
# Adapted from
# Which was adapted from

# Let's use Passenger/Nginx to prop up my rails app.
FROM phusion/passenger-ruby22:0.9.18

# Set environment variables.
ENV HOME /root
ENV RAILS_ENV production

# Use baseimage-docker's init process, as described in the image's readme.
#   -
CMD ["/sbin/my_init"]

# Use port 80
EXPOSE 80

# Enable nginx by removing the "down" file. Not sure what's at play here and how it gets registered
#   But it's how the passenger image enables many different services: nginx, redis, sshd, etc.
RUN rm -f /etc/service/nginx/down

# Configure Nginx - Remove the default site, and add my own app's nginx config.
RUN rm /etc/nginx/sites-enabled/default
ADD docker/my-app.conf /etc/nginx/sites-enabled/my-app.conf

# Rails won't see my environment variables if nginx doesn't know to whitelist them.
# I created an env.conf file that is just a list of `env ENV_VAR_NAME;`
ADD ./docker/env.conf /etc/nginx/main.d/env.conf

# Install rails app
# Start with my dependencies files so that Docker can cache the image after this process is done, since this state rarely changes.

# Add the Gemfile, and Gemfile.lock
ADD ./Gemfile* /home/app/my-app/

# Take ownership from root -- not really sure if this is necessary as often as I do it, but hey - let's just make sure.
RUN chown -R app:app /home/app/my-app
WORKDIR /home/app/my-app

# Install deployment dependencies as user `app`
RUN sudo -u app bundle install --deployment

# Bring over the rest of the application now that Docker has built my container with dependencies
ADD . /home/app/my-app

## I was getting errors about the log file not previously existing. Apparently it doesn't auto create one? Or maybe a permissions issue? I have no clue, but this got me running. Maybe this could go further up for caching...
# Create log file, change r/w permissions, and take ownership from root to `app`
RUN touch /home/app/my-app/log/production.log
RUN chmod 0644 /home/app/my-app/log/production.log
RUN chown -R app:app /home/app/my-app

# Precompile the assets for zippy production fun
RUN sudo -u app RAILS_ENV=production bundle exec rake assets:precompile
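For reference, the env.conf mentioned above is nothing fancy — just a whitelist of environment variable names. A minimal sketch (any variable beyond SECRET_KEY_BASE here is a hypothetical example; list whichever vars your app actually reads):

```nginx
# docker/env.conf
# Tell nginx (and therefore Passenger) which environment
# variables to pass through to the Rails process.
env SECRET_KEY_BASE;
env DATABASE_URL;   # hypothetical example
```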


# Nginx site configuration for this app
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    # Domain name you want to map the application to

    root /home/app/my-app/public;

    # Passenger
    passenger_enabled on;
    passenger_user app;
    passenger_ruby /usr/bin/ruby2.2;
}

At this point, we’re ready for action.

Stand up the nginx reverse proxy

# Download (if needed) and run the Nginx reverse proxy of magic.
docker run --restart=always --name nginx-proxy -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
  • --restart=always: If the docker daemon restarts, restart this container along with it.
  • --name nginx-proxy: the name for my container
  • -d: run detached – i.e. in the background
  • -p 80:80: Map the host’s port 80 to the container’s internal port 80. Think Firewall NAT at the container level.
  • -v /path/to/host/file:/path/to/container/file: Maps a directory or file from the host into the container’s filesystem. Here it’s the Docker socket, mounted read-only (:ro) so the proxy can watch container events.
  • jwilder/nginx-proxy: The sweet nginx reverse proxy image I’ll be using.

After running this command, Docker will download and run the Nginx Reverse Proxy of Mystery. I call it that because it seriously feels like magic. It will act as the front man to port 80, redirecting to other containers I prop up based on domain names, using nothing more than an environment variable.
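The magic is less mysterious than it feels: thanks to that docker.sock mount, the proxy container watches Docker’s event stream (via docker-gen) and regenerates its nginx config every time a container with a VIRTUAL_HOST environment variable starts or stops, then reloads nginx. Roughly speaking, it writes out server blocks like this — a simplified sketch, not the proxy’s exact output, with a placeholder hostname and container IP:

```nginx
# Sketch of what jwilder/nginx-proxy generates per VIRTUAL_HOST
# (simplified; the real output also sets proxy headers, etc.)
upstream my-app.example.com {
    server 172.17.0.5:80;   # the app container's internal IP and exposed port
}
server {
    listen 80;
    server_name my-app.example.com;
    location / {
        proxy_pass http://my-app.example.com;
    }
}
```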

Stand up the application

# Build the image -- this will take a while the first time. After you've done it once, Docker only re-runs the steps that actually changed.
docker build -t map-app .

# Run it!!!!!!
docker run --restart=always --name rails-mapping-app --expose 80 -e SECRET_KEY_BASE -e VIRTUAL_HOST=my-app.example.com -d map-app
  • docker build -t map-app .
    • Build my image from the Dockerfile in this directory, and tag the final image (not container) as map-app
  • docker run ...
    • --restart=always: Restart whenever the docker daemon restarts
    • --name rails-mapping-app: The name of the container/process (not the image)
    • --expose 80: Exposes the container’s port 80 to other containers on the Docker network – unlike -p, nothing is published on the host. The EXPOSE in the Dockerfile already handles this, but being explicit here doesn’t hurt.
    • -e SECRET_KEY_BASE: -e sets env variables for the container. If you don’t assign a value in the run command, it passes through the host’s env var of that name.
    • -e VIRTUAL_HOST=my-app.example.com: The magic that makes the nginx proxy container work (the hostname here is a placeholder – use whatever domain you want the proxy to answer for). You could prop 100 of these up in your docker environment with different host names, and there they all are. Go ahead, try it with a Hello World web app container. Crazy.
    • -d: Detached / background
    • map-app: The name of the image I want to use for my container.
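That Hello World suggestion can be sketched like so — the image name and hostname are placeholders (any small image serving HTTP on port 80 will do), and it assumes the nginx-proxy container from earlier is already listening on the host’s port 80:

```shell
# Hypothetical demo: a second app behind the same proxy,
# routed purely by its VIRTUAL_HOST variable.
docker run -d --expose 80 --name hello-demo \
  -e VIRTUAL_HOST=hello.example.lan \
  tutum/hello-world   # placeholder image

# From a machine that resolves hello.example.lan to the Docker host
# (e.g. via an /etc/hosts entry), the proxy routes by Host header:
curl http://hello.example.lan/
```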

A lot of work went into getting this configuration to work – but now I had a remote server propping my app up with very little effort. If I had a change, I just had to pull the branch, rebuild the image (which re-runs only the changed steps), stop and remove the running container, and run a new one. It sounds like a lot, but it’s literally three lines.

$ git pull
$ docker build -t map-app .
$ docker stop rails-mapping-app && docker rm rails-mapping-app && docker run --restart=always --name rails-mapping-app --expose 80 -e SECRET_KEY_BASE -e VIRTUAL_HOST=my-app.example.com -d map-app

This looks like a sweet candidate for some GitLab-CI interaction. Especially since I already had a GitLab-CI Runner installed on the server I was using for this anyway. I translated this into a deploy stage in my .gitlab-ci.yml file:

# .gitlab-ci.yml
deploy:
  stage: deploy
  script:
    - docker build -t map-app .
    - docker stop rails-mapping-app && docker rm rails-mapping-app
    - docker run --restart=always --name rails-mapping-app --expose 80 -e SECRET_KEY_BASE -e VIRTUAL_HOST=my-app.example.com -d map-app
  only:
    - master
  tags:
    - qnap-ubuntu-ci

In GitLab I added my SECRET_KEY_BASE to Project/Settings/Variables and it worked great. When I pushed to master, the application updated almost immediately – with hardly noticeable downtime for all four of my internal users!

This felt like a very cool successful implementation of the things I’ve been wanting to learn more about the past few months: Docker, CI, and rails in production mode. More seasoned devops vets might be scoffing and squirming, but in the end, the folks here crushed 1,500 crappy looking data records in just a few hours, and I got to feel like a wizard!

Next up in the sometime/eventual/near future: Capistrano. Apparently I should be using that instead of driving deploys from .gitlab-ci.yml directly. But I don’t know why… yet.