How to Evaluate CDN Performance with Cedexis Radar
March 10, 2016 | Robert Gibb
This is the fifth post in a series that covers all 6 steps of the CDN Framework – a guide designed to help you acquire and maintain the best CDN solution possible.
Evaluating CDN performance is critical and must happen at multiple points throughout the lifecycle of the CDN Framework. In fact, CDN performance should be monitored continuously, because both a CDN's performance and your audience demographics can change over time.
In this post, we’ll show you what CDN performance metrics to measure and how to measure them. The tool we’ll be using to measure CDN performance is Cedexis Radar.
What to Know Before Testing
If you know where the bulk of your audience is located, start testing from that location. For instance, say your core audience is in North America and you learn that your CDN performs great in Europe. That data doesn't tell you much. You must measure CDN performance from where your audience actually is.
Your Use Case
Understanding the purpose of your CDN will help you measure things that matter. Say you’re a software company that needs to deliver new updates and downloads. For you, CDN throughput is probably the most important metric to keep an eye on. If you’re an e-commerce company, you probably care more about latency and page load time (PLT).
How to Test CDN Performance
The metrics you’ll want to focus on during CDN performance testing are latency, throughput, and availability. To gather data related to these metrics across various content delivery networks, we recommend using Cedexis Radar. This is a free tool specifically designed to compare the performance of CDNs.
The CDN performance reports below were generated using live data from March 6, 2016.
Latency is the measurement of raw speed. It impacts page load times as well as other types of delivery. In the example below, we’ve measured latency at the 75th percentile across four global CDNs. The measurements are restricted to North America, specifically the southwest region:
When measuring latency, lower is better. As you can see, MaxCDN has lower latency on average than the other three CDNs (though there is some crossover). You'll also notice a large latency spike for all four CDNs around noon, probably caused by congestion on major ISPs in that region.
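Radar aggregates real-user measurements for you, but it helps to understand what a "75th percentile" latency figure means. A minimal sketch of computing one from raw samples, using the nearest-rank method (the sample values below are hypothetical, not from the Radar reports):

```python
def percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    # Nearest-rank index: ceil(pct/100 * n) - 1, clamped to a valid index.
    k = max(0, min(len(ordered) - 1, -(-pct * len(ordered) // 100) - 1))
    return ordered[k]

# Hypothetical round-trip times in milliseconds from one test location.
latencies_ms = [42, 38, 55, 47, 61, 39, 44, 120, 41, 50]
print(f"p75 latency: {percentile(latencies_ms, 75)} ms")
```

The 75th percentile is a common choice because it reflects what a typical slower-than-average user experiences, without letting a single extreme outlier (like the 120 ms sample above) dominate the result the way a maximum or mean would.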
Throughput indicates how much data can be transmitted in a given amount of time. When measuring throughput, higher is better. For the example below, we restricted measurements to the southwest region of North America:
MaxCDN also won this particular test. But let’s run another throughput measurement and change one thing – location. Here is what happens when we measure throughput in Asia:
When this report was generated, Limelight won Asia for throughput. This is typical: different CDNs are more performant in different parts of the world. Performance also changes over time, and different metrics matter more than others depending on your use case.
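Radar's throughput numbers come from real user sessions, but you can run a rough spot check yourself: download a test object of known size and divide by the elapsed time. A minimal sketch, assuming each CDN exposes a test object at a known URL (the URL below is hypothetical):

```python
import time
import urllib.request

def throughput_kbps(num_bytes, seconds):
    """Convert a download of num_bytes over seconds into kilobits per second."""
    return num_bytes * 8 / seconds / 1000

def measure_throughput_kbps(url, timeout=10):
    """Download the object once and return a single throughput sample in kbps."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
    elapsed = time.monotonic() - start
    return throughput_kbps(len(body), elapsed)

# Example (hypothetical test-object URL):
# print(measure_throughput_kbps("https://cdn.example.com/100kb-test-object"))
```

A single sample from one machine is noisy; this is exactly why a tool like Radar, which aggregates many measurements from real users in your target regions, gives a far more trustworthy picture than any one-off probe.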
If availability is the most important metric for your user base (meaning no down time), then knowing which CDNs are the most available is crucial.
In this performance report, one CDN was only 92% available for several hours in Europe. This happens regularly and is referred to in the industry as a micro-outage. Micro-outages result from a peering relationship going south, or from a point of presence (POP) suffering some type of outage. In either case, users are typically routed to a different POP farther away from them. This isn't optimal, as latency increases and throughput decreases.
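An availability percentage like the 92% above is conceptually simple: the share of probes in a window that succeeded. A minimal sketch of that calculation (the probe counts below are hypothetical, chosen to reproduce a 92% dip):

```python
def availability_pct(probe_results):
    """Percentage of successful probes; 100.0 if no probes were recorded."""
    if not probe_results:
        return 100.0
    ok = sum(1 for result in probe_results if result)
    return 100.0 * ok / len(probe_results)

# 23 successes and 2 failures within one measurement window -> 92%,
# the kind of dip a micro-outage produces.
results = [True] * 23 + [False] * 2
print(f"availability: {availability_pct(results):.0f}%")
```

In practice each `True`/`False` would come from something like a timed HTTP request against a small object on the CDN, recorded at a regular interval from multiple vantage points.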
Take Testing a Step Further
You can take performance testing a step further by doing tests at the website and web application level. This will show you how CDN performance directly impacts metrics that are closer to your users and bottom line.
For instance, by using a performance monitoring tool like SOASTA mPulse, you could see how the micro-outage in the performance report above affected page load time and corresponding business metrics. This is incredibly powerful for understanding if things like micro-outages (availability) matter as much as you think they might.
If you have any questions about performance testing at the CDN and website/app level, feel free to leave a comment below. Our friends from Cedexis and SOASTA will be happy to answer them.