In this article, we’re going to show you the many different kinds of load balancing, how they differ from other traffic management options like failover and round robin, and how you can set up load balancing in just a few minutes.
Load balancing is commonly used to balance traffic across redundant systems, like web or application servers. So if one server is unavailable, there are multiple other servers ready to take over the traffic load.
Load balancing can do some pretty amazing things, like spreading traffic evenly across your servers, routing around outages automatically, and steering users to the closest or fastest resource.
If load balancing can do such amazing things, why isn’t everyone using it?! Good question. Some quick Googling suggests most people still believe it’s expensive, complicated, and requires dedicated hardware.
These points may have been true in the past, but DNS-based load balancing has taken massive strides in just the past few years.
With all that being said, you have to wonder where all the confusion is coming from. Why do people think load balancing is expensive and hard to set up? Because it used to be. Just a few years ago, load balancing was run on hardware that was costly to turn up and maintain.
So instead, people used failover as a primitive form of load balancing. Failover doesn’t require hardware and is a simple service you can set up through your DNS provider for just a few bucks a year.
Failover only uses the redundant systems if the primary is unavailable, while load balancing cycles through all of the IP addresses in the configuration.
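To make the contrast concrete, here’s a minimal sketch in Python of what failover logic boils down to. The IP addresses and the is_healthy check are made up for illustration; this is not the Constellix implementation, just the general idea.

```python
# Minimal failover sketch (illustrative only; IPs and the health check are made up).
# Failover always answers with the primary unless monitoring says it is down.

def failover_answer(primary: str, backups: list[str], is_healthy) -> str:
    """Return the primary IP if it is healthy, otherwise the first healthy backup."""
    if is_healthy(primary):
        return primary
    for ip in backups:
        if is_healthy(ip):
            return ip
    return primary  # nothing is healthy: fall back to the primary anyway

# Example: the primary is down, so the first healthy backup gets all the traffic.
down = {"203.0.113.10"}
print(failover_answer("203.0.113.10", ["203.0.113.20", "203.0.113.30"],
                      lambda ip: ip not in down))
```

Notice that every query is answered by a single system; the backups sit idle until the primary fails, which is exactly the scalability problem described next.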
The problem is that failover isn’t scalable. As your traffic loads increase, you can’t continue to rely on a single system. Let’s say you get on Shark Tank and all of a sudden your primary web server can’t handle your website traffic. It will crash, and your users will be sent to your backup web server. But now your backup is crumbling under the weight of the traffic… and you’ve just lost thousands of dollars in potential clients.
Load balancing handles these situations better by evenly spreading the traffic load across multiple systems. If a system is unavailable, it will stop sending traffic to it. Are you sold yet? Just wait, it gets even better. There are multiple kinds of load balancing.
Round robin is the simplest form of load balancing. It rotates through the IP addresses in the configuration with no regard for whether those servers are even up.
Since there is no way to tell if those systems are available or not, you could be sending traffic to a slow or unhealthy server.
Say you have three systems in your network. If one of them goes down, roughly a third of your traffic will still be pointed to that system.
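Here’s a rough sketch of plain round robin, again just for illustration (the IPs are placeholders): the rotation hands out every address in turn, with no idea whether the server behind it is up.

```python
# Plain round robin sketch (illustrative only). Every IP gets its turn,
# even if the server behind it is down.
from itertools import cycle

ips = ["203.0.113.10", "203.0.113.20", "203.0.113.30"]
rotation = cycle(ips)

# Six queries: each IP is returned twice. If 203.0.113.30 is down,
# roughly a third of your users still get sent to a dead server.
for _ in range(6):
    print(next(rotation))
```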
Round robin may have been great a decade ago, but it wasn’t meant to scale. No matter what you’re using load balancing for, the systems in your configuration will likely be very different from one another.
Some servers may be able to handle more traffic than others, while others may be on their way out. You can use weighted round robin to specify different weights for each of the servers in your network. Just increase the weight for the servers with greater capacity, and the IP of the “heavier” system will be returned more often in the round robin rotation.
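Here’s a hedged sketch of how weighted round robin could be modeled, with made-up weights and IPs: the higher the weight, the more often that IP is returned.

```python
# Weighted round robin sketch (illustrative only; weights and IPs are made up).
import random

weighted_ips = {
    "203.0.113.10": 5,  # high-capacity server, returned most often
    "203.0.113.20": 3,
    "203.0.113.30": 1,  # older server on its way out
}

def weighted_answer() -> str:
    """Return one IP, chosen in proportion to its weight."""
    ips, weights = zip(*weighted_ips.items())
    return random.choices(ips, weights=weights, k=1)[0]

# Over many queries, about 5 out of every 9 answers go to the first IP.
print(weighted_answer())
```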
We’re still missing the mark, though. Remember earlier when we talked about the differences between failover and load balancing? One of the key points we mentioned was the ability to remove a system from the round robin configuration if it becomes unavailable.
Constellix comes with failover already baked into the load balancing functionality.
You can attribute monitoring checks to each of the endpoints in your round robin record. That way, when our monitoring nodes detect the system as down, we will remove it from the rotation. Once that system is back online, we can return it to the rotation without a hiccup.
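Conceptually, that looks something like the sketch below. The is_up function stands in for a Sonar monitoring check; the real service works differently under the hood, so treat this as a rough approximation.

```python
# Round robin with health checks baked in (illustrative sketch, not the real service).
from itertools import cycle

endpoints = ["203.0.113.10", "203.0.113.20", "203.0.113.30"]

def healthy_rotation(is_up):
    """Yield endpoints in rotation, skipping any that monitoring reports as DOWN."""
    while True:
        up = [ip for ip in endpoints if is_up(ip)]
        for ip in up or endpoints:  # if everything is down, keep answering anyway
            yield ip

down = {"203.0.113.20"}  # pretend monitoring flagged this endpoint as DOWN
answers = healthy_rotation(lambda ip: ip not in down)
for _ in range(4):
    print(next(answers))  # only the two healthy IPs appear in the rotation
```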
Before we get into the next three kinds of load balancing, we need to get metaphorical.
Bob just moved to a new town. He asks his neighbor Terry where he can buy apples. Terry tells Bob he has two places to choose from:
Option A: the orchard he can get to in the least amount of time.
Option B: the orchard that is the shortest distance away.
But wait, there’s a catch. Option A doesn’t account for how close Bob is to the apple orchards. What if Bob only wants to eat locally grown apples? This method doesn’t account for distance, and could be sending Bob to an orchard that’s quick to get to, but not in his town.
Option B has its downsides, too. The closest orchard may be in a bigger town with more traffic or there could be a detour because of flooding. These kinds of delays could double or even triple the time it takes for Bob to get to the orchard.
So what does this have to do with network load balancing? Everything…
Bob’s options are the same ones you’ll have to choose from when you set up your load balancing configs. Do you want to send your users to the fastest responding server in your network? Or the closest one to each user?
This is Bob’s second option: the shortest distance.
This kind of load balancing is segmented into regions, typically five to seven depending on the provider’s network. The load balancer looks at which region the client is querying from and returns the IP of a resource in that region.
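Here’s a simplified sketch of that idea, with invented region names and IPs (real providers work out the client’s region from the resolver or EDNS Client Subnet data, which is glossed over here):

```python
# Region-based answering sketch (illustrative only; regions and IPs are made up).
REGIONAL_POOLS = {
    "us-east":  ["203.0.113.10"],
    "us-west":  ["203.0.113.20"],
    "europe":   ["198.51.100.10"],
    "asia-pac": ["198.51.100.20"],
}

def regional_answer(client_region: str) -> str:
    """Answer with an IP from the pool serving the client's region, with a default fallback."""
    pool = REGIONAL_POOLS.get(client_region, REGIONAL_POOLS["us-east"])
    return pool[0]

print(regional_answer("europe"))  # -> 198.51.100.10
```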
In some cases, the closest server can also deliver the fastest resolution time. Physics dictates that the further away something is, the longer it takes to reach it when traveling at a constant speed.
But, the internet is inherently volatile and traffic conditions can change in an instant. There could be an undersea cable cut or unusually heavy traffic conditions that could make the closest server the slowest.
You can also set up load balancing to use monitoring metrics that go beyond UP/DOWN statuses. You can actually measure the round trip time (RTT) of the systems in your load balancing configuration and only send traffic to the fastest responding system.
This is Bob’s first option, the least amount of time to reach the apple orchard.
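In code, the idea is as simple as picking the endpoint with the lowest measured round trip time. The RTT numbers below are invented; the real service feeds these from its monitoring network rather than a hard-coded dictionary.

```python
# Performance-based selection sketch (illustrative only; RTT values are made up).
rtt_ms = {
    "203.0.113.10": 42.0,
    "203.0.113.20": 18.5,  # fastest responder right now
    "203.0.113.30": 77.3,
}

def fastest_answer() -> str:
    """Answer with the endpoint that currently has the lowest round trip time."""
    return min(rtt_ms, key=rtt_ms.get)

print(fastest_answer())  # -> 203.0.113.20
```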
What if Bob wants both? The closest orchard in the shortest amount of time.
Just recently this became possible with Constellix’s smart load balancing service. You can create groups of endpoints that are only to be used in a specific region. You can create groups like this for every region available to ensure the highest level of accuracy and performance.
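Combining the two, the selection logic might look like the sketch below: endpoints are grouped by region, and within the client’s region the one with the lowest RTT wins. Again, the groups, IPs, and latencies are invented for illustration; this is not Constellix’s actual implementation.

```python
# Region groups + performance sketch (illustrative only).
REGION_ENDPOINTS = {
    "us-east": {"203.0.113.10": 22.0, "203.0.113.11": 35.5},  # IP -> RTT in ms
    "europe":  {"198.51.100.10": 19.8, "198.51.100.11": 41.2},
}

def closest_and_fastest(client_region: str) -> str:
    """Within the client's regional group, answer with the lowest-RTT endpoint."""
    candidates = REGION_ENDPOINTS.get(client_region, REGION_ENDPOINTS["us-east"])
    return min(candidates, key=candidates.get)

print(closest_and_fastest("europe"))  # -> 198.51.100.10
```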
In the near future, performance load balancing will be integrated with even more advanced monitoring metrics. Tools like Real-User Monitoring (RUM) give you real insight into how users are resolving your domains.
With the metrics from RUM, you’ll be able to create custom load balancing configurations that are tailored to each user’s network, location, and so much more.
We know what it does and how it works, but how do you set up load balancing for your managed DNS service?
A single DNS record can only point to one IP address, which on its own makes load balancing nearly impossible. What you can do is create multiple records with the same name (like www), each pointing to a different IP address.
When a user queries your www record, they will be answered with one of the IP addresses assigned to that name.
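If you want to see this from the client’s side, the sketch below queries a name’s A records with the dnspython package (assuming it is installed via `pip install dnspython` and that the name actually has multiple A records published; www.example.com is a placeholder).

```python
# Query all A records for a name (the domain is a placeholder).
import dns.resolver  # dnspython

answers = dns.resolver.resolve("www.example.com", "A")
for record in answers:
    print(record.address)  # the order of returned IPs can change from query to query
```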
You can also set up load balancing with record pools. Pools are groups of IP addresses or hostnames. After you create a pool, you can apply it to individual records. When that record is queried, the user will be answered by one or more of the IP addresses in the pool.
Pools have many advantages over using multiple records for load balancing. They are optimized for large accounts that use the same configuration for multiple domains. You can failover from one pool to another. And you can return multiple IPs in a pool at a time.
We recommend using pools for all load balancing configurations because they are easier to maintain. Pools also make it easier for you to switch from one kind of load balancing to another.
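To picture how a pool differs from a pile of individual records, here’s a loose sketch. The attribute names are hypothetical and not the Constellix data model; the point is that one pool can back many records and return more than one IP per answer.

```python
# Record pool sketch (illustrative only; attribute names are hypothetical).
import random
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    ips: list[str]
    return_count: int = 2  # how many IPs to hand back per query

    def answer(self) -> list[str]:
        """Return up to return_count IPs from the pool."""
        return random.sample(self.ips, k=min(self.return_count, len(self.ips)))

web_pool = Pool("web-servers", ["203.0.113.10", "203.0.113.20", "203.0.113.30"])

# The same pool backs records in two different domains.
records = {"www.example.com": web_pool, "www.example.net": web_pool}
print(records["www.example.com"].answer())  # e.g. ['203.0.113.20', '203.0.113.10']
```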
We made it easy to set up pools in Constellix. In the create a Pool modal, you’ll see the options for every kind of load balancing we talked about in this article.
1. Round Robin: Just enter the IPs you want in the pool.
2. Weighted Round Robin: Step 1 + adjust the number in the weight column.
3. Round Robin + Monitoring: Step 1 + 2 + click the Sonar Check dropdown to select the associated monitoring check. You can see if it is UP/DOWN in the Status column.
4. Performance Optimization: Steps 1 + 2 + 3 + Enable ITO.
5. Regional Optimization: Steps 1 + 2 + 3 + 4 + choose a monitoring region. Repeat for as many regions as you want.
So far, we’ve only talked about how to load balance across different resources. But what about providers?
You can actually manage traffic flow across resources offered by different vendors. The most common use case is called multi-CDN management. This is where you use CDN services from two or more providers.
You can set up your multi-CDN pools to only return the fastest responding CDN in a region, using the same pool options described above.
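The selection logic mirrors the regional and performance sketches from earlier, just with CDN hostnames instead of your own IPs. The CDN names and latencies below are made up for illustration.

```python
# Multi-CDN selection sketch (illustrative only; CDN hostnames and RTTs are made up).
CDN_RTT_BY_REGION = {
    "us-east": {"cdn-a.example.net": 24.1, "cdn-b.example.net": 31.7},
    "europe":  {"cdn-a.example.net": 48.9, "cdn-b.example.net": 22.3},
}

def fastest_cdn(client_region: str) -> str:
    """Per region, answer with the hostname of the fastest responding CDN."""
    candidates = CDN_RTT_BY_REGION.get(client_region, CDN_RTT_BY_REGION["us-east"])
    return min(candidates, key=candidates.get)

print(fastest_cdn("europe"))  # -> cdn-b.example.net
```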