So given a working local Docker configuration, as outlined last time, what does it take to go live on AWS? Actually not all that much, so long as you don’t yet worry about downtime.

The first step was to obtain a new EC2 instance. I know ECS is the real solution here, but I didn’t want to learn another config language this weekend. And I don’t care about downtime yet. So instead I launched a vanilla Amazon Linux instance and proceeded to install Docker per Amazon’s instructions.
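For reference, Amazon's instructions boil down to a few commands. This is a sketch from memory and the package details may have drifted since:

```shell
# Install and start Docker on a fresh Amazon Linux instance
sudo yum update -y
sudo yum install -y docker
sudo service docker start

# Let ec2-user run docker without sudo (takes effect on next login)
sudo usermod -a -G docker ec2-user
```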

Next was to get my Docker image onto the instance. Since I’m using Docker Hub as a repository, the instructions above covered this part too. Transporting the image to the cloud was a simple `docker push`, and getting it back down was easy given an SSH session to the EC2 instance: `docker login` and `docker pull`. A quick SCP to push my secret WordPress config onto the instance, and I was ready to `docker run`. Sure, my deployment process won’t scale, but at least I got it running! Sorta.
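The whole round trip, roughly, with the image and key names being made-up stand-ins for mine:

```shell
# On my dev machine: publish the image to Docker Hub
docker push myuser/wordpress-nginx

# Copy the secret WordPress config up to the instance
scp -i mykey.pem wp-config.php ec2-user@my-ec2-host:~/

# Then on the EC2 instance, over SSH:
docker login
docker pull myuser/wordpress-nginx
docker run -d -p 80:80 \
  -v /home/ec2-user/wp-config.php:/var/www/wordpress/wp-config.php \
  myuser/wordpress-nginx
```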

In reality, there is always more configuration that needs doing. Always. In this case I hadn’t yet configured the RDS database to allow connections from anywhere, least of all EC2. After much poking around the AWS Console, I found my target under the VPC Dashboard. Apparently there are lots of ways to manage Security Groups. Mine are VPC flavored, so the VPC Dashboard is where I need to manage them. Having found the right spot, it was easy to add an Inbound Rule to the RDS group that made it friendly to the EC2 group.
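The same rule can be added from the AWS CLI instead of the console. Something like this, where both group IDs are hypothetical placeholders for the real RDS and EC2 security groups:

```shell
# Allow MySQL traffic (port 3306) into the RDS security group,
# but only when it originates from the EC2 security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111rds00000 \
  --protocol tcp --port 3306 \
  --source-group sg-0bbb2222ec200000
```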

Now it works! But only on my development machine. Other computers around the apartment will load the original blog post, but the page is slow and the styles are gone. A glance at Chrome’s network debugger reveals the problem: WordPress is still trying to use the IP address of the local VM where I set it up. I guess it memorized that during configuration. And it kept working locally because I’d left that VM running and the cross-domain fetches actually worked. So I visited the local WordPress admin, pointed it at the AWS address, and tried again. At last everything worked.
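If you'd rather not click through the admin UI, the same fix can be made directly in the database. WordPress keeps both values in its options table; the table prefix, hostnames, and credentials below are assumptions:

```shell
# Point WordPress at its new home by updating the two URL options
mysql -h my-rds-endpoint.rds.amazonaws.com -u wpuser -p wordpress <<'SQL'
UPDATE wp_options SET option_value = 'http://my-ec2-host.example.com'
WHERE option_name IN ('siteurl', 'home');
SQL
```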

Since I already had a domain I wanted to point here, I also set up an Elastic IP and configured my registrar to use it. At least that part of the configuration went smoothly, though I did have to redo the WordPress URLs again. Always more configuration.
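The Elastic IP half of that is two CLI calls, if you prefer them to the console. The instance and allocation IDs here are invented:

```shell
# Allocate a new Elastic IP in the VPC, then attach it to the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0abc1234
```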

Hello, Experiment!

Not so long ago I had this idea that I wanted to build my own little web ecosystem. I dreamt of a minimal setup. A convenient place to host my ideas and a blog about how they came to be. Almost as if I was just running it all locally, with me as the main user.

So we’re talking a pretty minimal traffic load, which means it’s best if everything runs together and cheaply. And I want to control it like a local machine, so no black boxes like Heroku or AppEngine. Docker claims to be the right abstraction and, having not yet done devops for a living, I’m willing to believe them. At first I’ll host it all on AWS, but because it’s containers all the way down I can always move it later.

It’s a good dream. But let’s back up a bit, because I’m not quite there yet. What I’ve done so far is booted up WordPress-in-Docker and started yammering. On my local machine. It’s actually not much of an accomplishment at all. In fact, the internet is chock full of ways to get WordPress up and running quickly.

WordPress maintains their own Docker images, Docker has a nice example with Compose, and you could follow an Amazon tutorial and end up on AWS ASAP.

These are all great starting points. But I’m me, and if this is going to be like my localhost, then I’m going to want to do a few things differently.

To start, what if I don’t want to run my database in a container? The experts seem to think that data persistence in and around Docker is a pain point, so I don’t want to go there. Maybe I could make it work by mounting my DB-in-Docker onto some EBS-backed drives, but I chose not to fight that battle today. Instead I opted for a hosted solution, which for SQL-on-AWS means an RDS instance. Presumably I got that right and this blog will persist even as the containers running it come and go.

Next up, I got the idea to use NGINX instead of Apache. I think event-driven servers are great, and I think “engine-x” is a great name. I even read the intro docs and everything made sense, so I dove into best practices and examples. At which point I struggled with a lot of complexity that I think I could have avoided.

Before that happens to you, spin up a copy of their Docker image. Follow the examples and note that everything just works. Then open up a shell inside the container and navigate to /etc/nginx. There you will find a working default configuration, with the nginx.conf centerpiece that everyone else talks about.
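Concretely, that exploration looks something like this (the container name is just a label I picked):

```shell
# Run the stock NGINX image and poke at its default configuration
docker run -d --name nginx-peek nginx
docker exec -it nginx-peek sh -c 'ls /etc/nginx && cat /etc/nginx/nginx.conf'

# Clean up when done looking around
docker rm -f nginx-peek
```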

The trick for me was realizing that nginx.conf usually dispatches elsewhere for most of the per-installation configuration. The default uses both conf.d/ and sites-enabled/, which works with every tutorial I’ve seen. I imagine that sometimes you might actually need to tweak the root nginx.conf, but I don’t know when that is because I haven’t had to yet. That helped quite a lot in interpreting the various tutorials, though some were still better than others.
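You can see the dispatching for yourself without even keeping a container around, since the include directives are what wire nginx.conf to those directories:

```shell
# List the include directives in the stock image's root nginx.conf
docker run --rm nginx grep -n 'include' /etc/nginx/nginx.conf
```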

Unfortunately, the docs supplied by WordPress did not seem very good. They even contained a few not-so-best-practices according to the NGINX docs. Digital Ocean, on the other hand, has a nice write-up on How To Install WordPress with Nginx on Ubuntu 14.04 and a working config to match. It further links to an article on getting the prerequisite LEMP stack installed. In our case the requirement is just to run PHP-FPM as a service alongside the NGINX service.

So those lead pretty naturally to a mostly-working Dockerfile. But just mostly. The first problem is how to tell WordPress about the RDS credentials from earlier without baking them into the Docker image. For now I solved this by manually finalizing the WordPress config outside of Docker, stowing it someplace safe, and then mounting it into the container at launch. I will need to do something better if I want to automate the release, but fortunately I’m not there yet.
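The launch step, then, is a plain bind mount. Paths and the image name are stand-ins for my real ones:

```shell
# Keep the finalized wp-config.php (with the RDS credentials) out of the
# image entirely, and mount it read-only into the container at launch
docker run -d -p 80:80 \
  -v /secure/path/wp-config.php:/var/www/wordpress/wp-config.php:ro \
  myuser/wordpress-nginx
```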

The second problem was how to actually run two required services inside one container. The common answer appears to be supervisord, which will maintain the various processes for you. Since this was yet another tool to learn, I must admit that I borrowed my config from elsewhere.
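A minimal config in the spirit of the one I borrowed looks like this: supervisord itself stays in the foreground as the container's main process, and it keeps NGINX and PHP-FPM alive. The program paths are assumptions and vary by base image:

```shell
# Write a minimal supervisord.conf for running two services in one container
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=true

[program:php-fpm]
command=/usr/sbin/php-fpm -F
autorestart=true
EOF
```

Both programs are run in their own foreground modes (`daemon off;` and `-F`) so supervisord can watch them directly instead of losing track of a forked daemon.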

In fact, as it turns out, very early in this quest I found an existing and probably working example for Docker+NGINX+WordPress. And I didn’t understand it at all. The configs were alien, it involved extra tools like supervisord, and MySQL seemed very embedded. Then I started to understand it, and I didn’t like it. Supervisord felt like cheating at containers, the WordPress install needs a mounted volume, and MySQL still seemed embedded. And then I finally understood it and that I needed most of it. In fact I had already rebuilt most of it.

Oh well, at least I know what most of it means and I could borrow the supervisord config. I’ll polish up my implementation a bit and then go over it in an upcoming post.