We don’t want every web request to make it all the way to our application servers, since that would strain them unnecessarily. So, before any request reaches the application, it passes through several layers that offload cacheable traffic and balance the rest across multiple servers.
Content Delivery & Caching
After CloudFlare, any uncached request will go through one of two load balancers running HAProxy. These load balancers allow us to quickly add new application servers to the mix as the load increases. A lone HAProxy process can service many millions of requests per day, so we run only two of them: one to serve requests from the Internet, and one on standby, ready to take over if the hot server fails. This is called a “hot standby” configuration.
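The balancing half of this setup can be sketched in a few lines of haproxy.cfg. This is an illustrative fragment, not our actual config — the backend names and addresses are hypothetical, and the hot-standby failover between the two HAProxy boxes is typically handled by a separate tool (such as keepalived) rather than by HAProxy itself:

```
# Hypothetical haproxy.cfg fragment — names and addresses are illustrative
frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin            # spread requests evenly across app servers
    server app1 10.0.0.11:80 check   # 'check' enables health checks, so a
    server app2 10.0.0.12:80 check   # dead server is pulled out automatically
```

Adding a new application server to the rotation is then just another `server` line.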
Finally, HAProxy routes the request down to our NGINX and Phusion Passenger stack. NGINX is a web server that handles static requests (like images), while Phusion Passenger manages our Node processes and dynamic requests (like the comments page) from within NGINX. We really enjoy Passenger on the DevOps side of things since it makes Node process management dead simple.
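Wiring Passenger into NGINX amounts to a few directives in the server block. The paths and hostname below are placeholders, and a real install would also load the Passenger module at the http level; this sketch just shows the shape of it:

```
# Hypothetical NGINX server block with Passenger managing a Node app
server {
    listen 80;
    server_name example.com;              # placeholder hostname
    root /var/www/app/public;             # NGINX serves static files from here
    passenger_enabled on;                 # Passenger handles dynamic requests
    passenger_app_type node;              # run the app as a Node process
    passenger_startup_file app.js;        # illustrative entry point
}
```

With this in place, Passenger spawns and supervises the Node processes itself — no separate process manager required.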
One Command to Deploy Them All
SaltStack is a configuration management system for servers. It lets us, with a single command:
- Deploy a new DigitalOcean server
- Grant access to the database servers
- Add the new server to the load balancer so it can start handling requests
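The middle of that process — getting a new box into a known-good state — is described in Salt state files. Here is a hypothetical, abbreviated sketch; the state IDs, package list, and repository URL are all illustrative:

```yaml
# Hypothetical app-server state (app.sls) — names are illustrative
app-packages:
  pkg.installed:
    - pkgs:
      - nginx
      - nodejs

app-code:
  git.latest:
    - name: git@example.com:ourapp.git   # placeholder repo URL
    - target: /var/www/app
    - require:
      - pkg: app-packages                # install packages before deploying code
```

Because states are declarative, running them against an existing server is how code deployments work too: Salt converges the server on the latest revision rather than re-provisioning from scratch.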
It finishes this provisioning process for a new server in just a few minutes. Our process for code deployments is exactly the same, since SaltStack knows how to ship new code to existing servers as well.
In order to scale out horizontally, we needed a provider that not only spins up servers quickly, but can also do so programmatically. DigitalOcean delivers on both fronts, with data centers in multiple locations around the world. If we end up with more traffic than expected, more capacity is just a salt-cloud command away.
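Concretely, adding capacity looks something like the following — the profile and minion names here are hypothetical:

```shell
# Hypothetical commands — profile and server names are illustrative
salt-cloud -p do-2gb app-03        # provision a new DigitalOcean droplet
salt 'app-03' state.highstate      # apply our states to bring it into service
```

Once the highstate run finishes, the new server is configured, carrying the latest code, and registered with the load balancer.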
To create a website that’s ready to handle a great deal of traffic, you need a detailed plan. Not only do you need a caching layer, but you must also be able to spin up new servers quickly as more uncacheable requests flow in. The first step in this process is separating your stack into layers so that each can scale independently. The second is making use of tools like HAProxy, SaltStack, and DigitalOcean to bring it all together.