Behind The Scenes: tools.maxcdn.com
April 14, 2014 | Dmitriy Akulov
At a high-level, we needed a flexible, easy-to-manage architecture to run performance tests from servers around the world. Here’s a summary of the major pieces:
The backend measures HTTP performance using node.js and the phantomas module, and measures ping performance with the standard Linux ping utility
We configure and deploy our backend servers with Docker
The user-facing website uses node.js with the forever module to keep it alive. The app runs as a daemon, and if anything goes wrong the script is automatically restarted. Our GitHub repo is synced with the front-end server, so changes are deployed automatically as they're made.
When a visitor kicks off a performance test, the request is handed to our backend from the front-end VPS.
There are a few advantages to this approach. First, it ensures a certain level of security and defines a known application workflow.
This arrangement also gives us flexibility with the infrastructure, letting us make adjustments on the fly. For example, we were able to use the /etc/hosts file on the frontend server to send requests to backend servers that didn’t yet have DNS entries. In general, having a single point of entry for all requests keeps the workflow predictable.
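As a concrete illustration, an /etc/hosts override on the frontend might look like the following (the hostnames and IPs here are hypothetical, not our actual servers):

```
# /etc/hosts on the front-end server
# Point backend hostnames that don't yet have DNS entries at their IPs directly
198.51.100.23   backend-tokyo.example.com
198.51.100.47   backend-london.example.com
```

Once DNS entries exist, the overrides can simply be removed and requests resolve normally.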
The front-end node.js app aggregates the results from each backend server, calculates averages and percentage differences, and displays the results in a table on the website.
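A minimal sketch of that aggregation step might look like this (the data shapes and function names are illustrative assumptions, not our actual code):

```javascript
// Average the load times reported by each backend server, and compare
// two measurements (e.g. CDN vs. no-CDN) as a percentage difference.
function average(values) {
  var sum = values.reduce(function (a, b) { return a + b; }, 0);
  return sum / values.length;
}

// results: { serverName: [loadTimeMs, loadTimeMs, ...], ... }
function aggregate(results) {
  return Object.keys(results).map(function (server) {
    return { server: server, avg: average(results[server]) };
  });
}

// Percentage difference of a comparison value relative to a baseline.
function percentDiff(baseline, comparison) {
  return ((comparison - baseline) / baseline) * 100;
}

var rows = aggregate({ 'us-east': [120, 130, 110], 'eu-west': [200, 220] });
console.log(rows);
console.log(percentDiff(100, 150)); // 50 — i.e. 50% slower than baseline
```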
This mimics the performance a real-world user would see when visiting a site in a regular browser, like Chrome. Because the test is run in a full browser environment, you can test performance in various scenarios, such as HTTP vs. HTTPS, CDN vs. no-CDN, or an entire page vs. individual assets.
When testing ping latency, each backend server gets a list of domains to ping (using the built-in ping command) and the individual results are aggregated on the frontend.
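Before those results can be aggregated, the raw ping output has to be turned into numbers. A small sketch of parsing the summary line that Linux (iputils) ping prints — treat the format handling as illustrative:

```javascript
// Parse the "rtt min/avg/max/mdev = ..." summary line printed by
// Linux ping, returning the timings in milliseconds.
function parsePingSummary(output) {
  var match = output.match(
    /rtt min\/avg\/max\/mdev = ([\d.]+)\/([\d.]+)\/([\d.]+)\/([\d.]+)/
  );
  if (!match) return null; // summary line missing (e.g. host unreachable)
  return {
    min: parseFloat(match[1]),
    avg: parseFloat(match[2]),
    max: parseFloat(match[3]),
    mdev: parseFloat(match[4])
  };
}

var sample = 'rtt min/avg/max/mdev = 11.234/12.567/14.890/1.234 ms';
console.log(parsePingSummary(sample).avg); // 12.567
```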
Server Configuration Management
The backend consists of many servers running the same code. Keeping that code in sync across machines around the globe isn't easy, especially when they may be running different virtualization environments, kernels, and OS distributions.
Docker lets you create a lightweight, VM-like container that packages your app and everything it needs to run, without worrying about the host environment. Every server can simply run the Docker image, and the app inside can ignore differences in the parent OS.
We built an image for our backend application, and deployed it to each server. Because the backend is a node.js program, we only needed the dependencies required to run a single app.js file.
The image is created from a Dockerfile, which has the commands to setup the environment and execute the app. Here’s ours:
FROM centos:6.4

# Enable EPEL for Node.js
RUN rpm -ivh http://download-i2.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

# Install Node.js and npm
RUN yum install nodejs npm -y
RUN yum install freetype freetype-devel fontconfig fontconfig-devel supervisor -y

RUN mkdir -p /var/log/supervisor
RUN rm -rf /etc/supervisord.conf
ADD supervisord.conf /etc/supervisord.conf

# Bundle app source
ADD . /src

# Install app dependencies
RUN cd /src; npm install
RUN npm install -g phantomas phantomjs
RUN npm update

EXPOSE 3405
CMD ["/usr/bin/supervisord"]
Let’s walk through it.
Our Docker image uses CentOS 6.4 as the base, and we install the EPEL repo (needed to install certain applications)
We install node.js and npm, along with software needed for phantomjs
We include supervisor, to run node.js in daemon mode and restart it if anything goes awry
We created a supervisor config file in the same directory as the Dockerfile. This file is copied into the environment using the ADD command, to /etc/supervisord.conf
We ADD everything in the current directory to the /src directory, which includes our app.js file and package.json
We then install the npm modules needed by our app
Now we’re ready to rock. The last step is to expose port 3405, where our node.js app will listen for commands. We kick off supervisord to run our app, and make sure it doesn’t go down.
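For reference, a supervisord.conf along these lines would do the job (the program name and log paths are assumptions; adapt them to your layout):

```
[supervisord]
; keep supervisord in the foreground so the Docker container stays up
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:app]
; the single app.js bundled into /src
command=/usr/bin/node /src/app.js
autorestart=true
stdout_logfile=/var/log/supervisor/app.log
redirect_stderr=true
```

The key line is nodaemon=true: since supervisord is the container's CMD, it must stay in the foreground or the container will exit immediately.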
We can now build the Docker image with:
docker build -t username/reponame .
This looks for a file named Dockerfile in the current directory, and builds a local image. We can then log in to our Docker account and publish the image to our repo:
docker login
(enter credentials)
docker push username/reponame
Done! Now the image is ready to be installed on our servers.
We use a collection of servers from DigitalOcean and VPS.NET to measure performance from around the world.
All the servers are running the latest version of Ubuntu. After setting them up, we used commando.io to simplify our management. Once the servers were in commando.io, we ran the following recipe:
sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
apt-get update -y
apt-get upgrade -y
apt-get install lxc-docker -y
docker pull username/reponame
host=`hostname | cut -f1 -d"-"`
docker run -d -t -h $host -p 0.0.0.0:3405:3405 username/reponame
This recipe adds the Docker repo, updates & upgrades the OS, and installs Docker itself. We then pull down the Docker image uploaded previously.
In order to run Docker (and our app inside), we pass a few flags. Specifically, we run in daemon mode, pass in the first part of the hostname, and publish port 3405 on the host machine, mapping it to port 3405 inside the container.
And that’s it! We’ve now deployed our backend code to all servers, inside a well-understood environment, and the app is ready to run performance tests. Check out tools.maxcdn.com to see the architecture in action for yourself.