My last post about getting HTTP/2 working on Ubuntu 14.04 involved building nginx from source to get ALPN support. Docker seemed like it could simplify things considerably, and the end result is that this page is now served from containers. This post describes how to use Docker to set up a simple statically generated site with HTTP/2, Let’s Encrypt, and nginx on a Digital Ocean droplet.
Disclaimer: This isn’t a guide for production systems and I’m not a Docker expert. I’m assuming this is being used to set up a test server, and feedback on how to improve it is welcome. Say hi on Twitter.
HTTP/2 with nginx requires a bleeding-edge version of OpenSSL (1.0.2) and nginx (1.9.11) that isn’t available in the stable Ubuntu or Debian package managers in early 2016. Using Docker, it’s possible to use existing images to avoid building from source or upgrading the host operating system.
- Docker (version 1.10.2, build c3959b1)
- The latest and greatest docker-machine (version 0.7.0-rc1, build 0fb68ca).
- Digital Ocean account (AWS and Azure are also options).
- Namecheap for a cheap .fail domain.
Insta-Docker in the cloud with Docker Machine
First, using Docker Machine, create a new Digital Ocean droplet (I called mine hip-h2). Full documentation of the Docker Machine options (like specifying the region or IPv6 support) is available on Docker’s web site.
docker-machine create --driver digitalocean \
  --digitalocean-access-token=digital-ocean-access-token \
  --digitalocean-ssh-key-fingerprint=existing::ssh::key::fingerprint \
  hip-h2
If all went well, it’s possible to see the IP of the host by running:
$ docker-machine ip hip-h2
If an existing SSH key was specified, the droplet can be accessed via SSH:
docker-machine ssh hip-h2
Next, use docker-machine env to set environment variables so subsequent Docker commands target the new host (in this case, hip-h2):
$ eval $(docker-machine env hip-h2)
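For reference, the eval above exports a handful of variables that point the local Docker client at the droplet. A sketch of what `docker-machine env hip-h2` prints (the IP and paths below are made-up placeholders; yours will differ):

```shell
# Hypothetical output of `docker-machine env hip-h2` -- values are placeholders:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://203.0.113.10:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/hip-h2"
export DOCKER_MACHINE_NAME="hip-h2"
```

Any `docker` command run in that shell afterwards talks to the droplet instead of a local daemon.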
OpenSSL 1.0.2g + nginx 1.9.12 on Alpine Linux in 30 seconds
After setting the Docker Machine environment, there’s a host ready to run containers. Using a single command, it’s possible to verify the nginx:mainline-alpine image supports HTTP/2, no building from source required:
$ docker run -it --rm nginx:mainline-alpine nginx -V
nginx version: nginx/1.9.12
built by gcc 5.3.0 (Alpine 5.3.0)
built with OpenSSL 1.0.2g  1 Mar 2016
TLS SNI support enabled
...
While this version of nginx will work perfectly for serving HTTP/2, a certificate still needs to be generated and the server configured. There’s a Docker image for that.
Make the droplet publicly accessible
Let’s Encrypt requires hosts requesting certificates to be reachable over the public internet with an associated domain name. Digital Ocean has more information on how to connect droplets to domains here. After a few minutes (or hours, depending on DNS propagation), confirm the new DNS setup is working by checking the domain’s nameservers and pinging the domain: it should resolve to the IP address of the droplet Docker Machine created.
$ host -t ns your-domain.fail
$ ping your-domain.fail
Let’s Encrypt and automagic nginx configuration using docker-gen
Browsing Docker Hub turns up a number of HTTP/2-ready images. The nginx-letsencrypt-proxy image has some nice features that make getting Let’s Encrypt running with HTTP/2 on nginx especially straightforward: the popular docker-gen library is used to automatically create configuration files for nginx virtual hosts, and a data container is used to store Let’s Encrypt certificates.
Following the documentation in the README, create a data container for Let’s Encrypt certificates:
docker run -it --name letsencrypt-data --entrypoint /bin/echo quay.io/letsencrypt/letsencrypt:v0.4.2 "Data only container"
Next, start the nginx proxy container. This will be used for getting certificates from Let’s Encrypt using the webroot technique and serving virtual hosts.
$ docker run -it -d --name nginx --volumes-from letsencrypt-data -p 80:80 -p 443:443 arnaudrebts/nginx-letsencrypt-proxy
Next, a docker-gen container is started. It automatically generates a new virtual host file and reloads nginx whenever a new nginx container starts, and it takes several complicated command-line options:
docker run -it -d --name docker-gen --volumes-from nginx \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/docker-gen --watch -only-exposed -notify-sighup nginx \
  /etc/nginx/conf.d/99-vhosts.conf.tmpl /etc/nginx/conf.d/99-vhosts.conf
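For context on what docker-gen produces here: the template in the image renders a server block per container that has VIRTUAL_HOST set. The following is a hand-written sketch of roughly what a generated vhost looks like, not the image’s actual template output (the paths and the upstream IP are illustrative assumptions):

```
# Hypothetical sketch of a docker-gen-generated vhost -- the real template differs:
server {
    listen 443 ssl http2;
    server_name your-domain.fail;

    ssl_certificate     /etc/letsencrypt/live/your-domain.fail/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.fail/privkey.pem;

    location / {
        proxy_pass http://172.17.0.5:80;  # internal IP of the backing container
    }
}
```

When a container with VIRTUAL_HOST starts or stops, docker-gen rewrites this file and sends nginx a SIGHUP so it reloads without dropping connections.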
Using the official Let’s Encrypt Docker client v0.4.2 (the latest tag had issues on my Mac running OS X 10.11), generate a new certificate for a domain and an associated email address. Issuance is strictly rate-limited per domain, so it’s a good idea to only do this once:
docker run -it --rm --name letsencrypt \
  --volumes-from letsencrypt-data --volumes-from nginx \
  quay.io/letsencrypt/letsencrypt:v0.4.2 certonly \
  --webroot-path /var/www/letsencrypt -a webroot --text --agree-tos \
  --renew-by-default -d your-domain.fail --email [email protected]
Finally, an nginx:mainline-alpine container (the one that serves the actual static content) is started with the VIRTUAL_HOST environment variable set, so docker-gen generates the virtual host and reloads nginx:
docker run --name=blog-nginx -it -d -e VIRTUAL_HOST=your-domain.fail nginx:mainline-alpine
If all goes well, the default welcome page for nginx on Alpine Linux, /etc/nginx/html/index.html, will now be available over TLS and HTTP/2 at https://your-domain.fail.
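To confirm the protocol negotiation from a client, curl can request the page with HTTP/2 (this assumes a curl build linked against nghttp2; substitute your own domain):

```shell
# Print just the status line; "HTTP/2 200" (or "HTTP/2.0 200" on older
# curl versions) indicates HTTP/2 was negotiated over ALPN.
curl -sI --http2 https://your-domain.fail | head -n 1
```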
Creating a volume for static blog content
For this blog, I created an additional data-only container holding static files generated with Hugo (using the publicly available image smithclay/hugo-blog).
docker run --name blog-data -it -v /home/git/blog-content/site:/usr/share/blog -v /etc/nginx/html smithclay/hugo-blog hugo -d /etc/nginx/html
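To sanity-check that Hugo wrote the generated site into the volume, the files can be listed from a throwaway container (a quick sketch; using the alpine image here is my assumption, any small image with ls would work):

```shell
# List the generated site in the blog-data volume without starting a server:
docker run --rm --volumes-from blog-data alpine ls /etc/nginx/html
```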
I’m still using a vanilla nginx container to serve static content, but it mounts the content from the blog-data container with the --volumes-from flag. The VIRTUAL_HOST environment variable is needed so docker-gen creates a virtual host for the container:
docker run --name=blog-nginx -it --volumes-from=blog-data -e VIRTUAL_HOST=clay.fail -d nginx:mainline-alpine
In total, clay.fail uses five Docker containers on a single host, visualized here using dockviz, where blue indicates a container is actively running:
- blog-data and letsencrypt-data are data volumes that hold static web site content and certificate data, respectively.
- nginx is responsible for fetching Let’s Encrypt certificates and serving site content through virtual hosts.
- blog-nginx is the web server that serves the static HTML files in the blog-data volume.
- docker-gen is responsible for creating nginx virtual hosts when new containers spin up.
Docker with HTTP/2: still lots of room for improvement
For someone new to Docker, this setup is still considerably complex for a static site. Using Caddy, a Go-based HTTP/2 server, is far easier. I’m optimistic the Let’s Encrypt tooling will improve (specifically with a better nginx plugin) and the certificate retrieval and configuration process will get easier. Unfortunately, with the setup described here, certificates won’t automatically renew.
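Until the tooling improves, renewal can be scripted by periodically re-running the client against the same data containers. A rough, untested sketch as a crontab entry (the schedule, domain, and SIGHUP step are my assumptions, not something from the image’s documentation):

```
# /etc/crontab sketch (hypothetical): retry issuance monthly, then reload nginx.
# Assumes the letsencrypt-data and nginx containers from above are running.
0 3 1 * * root docker run --rm --volumes-from letsencrypt-data --volumes-from nginx \
  quay.io/letsencrypt/letsencrypt:v0.4.2 certonly --webroot-path /var/www/letsencrypt \
  -a webroot --text --agree-tos --renew-by-default -d your-domain.fail \
  --email [email protected] && docker kill -s HUP nginx
```

Because --renew-by-default re-issues unconditionally, running this monthly stays safely under Let’s Encrypt’s 90-day certificate lifetime while avoiding the rate limits that more frequent runs could hit.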
In general, Docker images based on operating systems like Alpine 3.3 and Ubuntu Xenial that ship the right version of OpenSSL remain a compelling way to deploy HTTP/2 in production environments. It’s a step forward in making HTTP/2 deployment more straightforward where system administrators are already using Docker.
- Dead-simple HTTPS Set up with Docker and Let’s Encrypt
Disclaimer #2: I write about HTTP/2 occasionally on the New Relic blog, but this post does not in any way reflect the views of my employer.