Faster Sites: Beyond PageSpeed Insights

Author: Benjamin Estes / Source: Moz

Google’s PageSpeed Insights is an easy-to-use tool that tests whether a web page might be slower than it needs to be. It gives a score to quantify page performance. Because that score is concrete, it is often used as a measure of site performance. Much like PageRank years ago, folks want to optimize this number simply because it exists. In fact, Moz has a popular article on this subject: How to Achieve 100/100 with the Google Page Speed Test Tool.

For small sites on common CMSes (think WordPress), this can be accomplished. If that’s you, PageSpeed Insights is a great place to start. For most sites, a perfect score isn’t realistic. So where do we start?

That’s what this post is about. I want to make three points:

  • Latency can hurt load times more than bandwidth
  • PageSpeed Insights scores shouldn’t be taken at face value
  • Improvement starts with measurement, goal setting, and prioritization

I’m writing with SEO practitioners in mind. I’ll skip over some of the more technical bits. You should walk away with enough perspective to start asking the right questions. And you may make better recommendations as a result.

Disclaimer: HTTP/2 improves some of the issues discussed in this post. Specifically, multiple requests to the same server are less problematic. It is not a panacea.

Latency can hurt load times more than bandwidth

A first look at PageSpeed Insights’ rules could make you think it’s all about serving fewer bytes to the user: minify, optimize, compress. But size is only half the story. It also takes time for your request simply to reach a server. And then it takes time for the server to respond to you!

What happens when you make a request?

If a user types a URL into a browser address bar and hits enter, a request is made. Lots of things happen when that request is made. The very last part of that is transferring the requested content. It’s only this last bit that is affected by bandwidth and the size of the content.

Fulfilling a request requires (more or less) these steps:

  1. Find the server
  2. Connect to the server
  3. Wait for a response
  4. Receive response

Each of these steps takes time, not just the last. The first three are independent of file size; they are effectively constant costs. These costs are incurred with each request regardless of whether the payload is a tiny, minified CSS file or a huge uncompressed image.
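If you want to see those phases for yourself, here’s a rough sketch in Python (standard library only) that times each one for a single HTTPS request. The hostname is a placeholder, and the “connect” step here bundles the TCP and TLS handshakes together, which browsers report separately.

```python
# Minimal sketch: time the four phases of one HTTPS request.
# The host and path are placeholders; swap in your own page.
import socket
import ssl
import time

host, path = "www.example.com", "/"

t0 = time.perf_counter()

# 1. Find the server (DNS lookup)
addr = socket.getaddrinfo(host, 443)[0][4][0]
t_dns = time.perf_counter()

# 2. Connect to the server (TCP handshake, then TLS handshake)
raw = socket.create_connection((addr, 443), timeout=10)
sock = ssl.create_default_context().wrap_socket(raw, server_hostname=host)
t_connect = time.perf_counter()

# 3. Wait for a response (send the request, block until the first byte arrives)
request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
sock.sendall(request.encode())
first_chunk = sock.recv(4096)
t_first_byte = time.perf_counter()

# 4. Receive the rest of the response
body = first_chunk
while chunk := sock.recv(4096):
    body += chunk
t_done = time.perf_counter()
sock.close()

print(f"DNS lookup:        {(t_dns - t0) * 1000:6.1f} ms")
print(f"Connect (TCP+TLS): {(t_connect - t_dns) * 1000:6.1f} ms")
print(f"Wait (TTFB):       {(t_first_byte - t_connect) * 1000:6.1f} ms")
print(f"Transfer:          {(t_done - t_first_byte) * 1000:6.1f} ms")
```

Run it against a few of your own pages and you’ll see that the first three numbers stay roughly the same no matter how small the response is.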

Why does it take time to get a response?

The factor we can’t avoid is that network signals can’t travel faster than the speed of light. That’s a theoretical maximum; in reality, it will take longer than that for data to transfer. For instance, it takes light about 40ms for a round trip between Paris and New York. If it takes twice that time for data to actually cross the Atlantic, then the minimum time it will take to get a response from a server is 80ms.
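To sanity-check that figure, here’s a quick back-of-the-envelope calculation; the ~5,840 km great-circle distance between Paris and New York is an approximation.

```python
# Back-of-the-envelope check of the Paris–New York round-trip figure.
distance_km = 5840             # Paris to New York, one way (approximate)
speed_of_light_km_s = 299_792  # in a vacuum

round_trip_ms = 2 * distance_km / speed_of_light_km_s * 1000
print(f"Theoretical round trip: {round_trip_ms:.0f} ms")     # ~39 ms

# Signals in fiber travel at roughly two-thirds of c, and routes are not
# straight lines, so doubling the theoretical figure is a fair estimate.
print(f"Realistic estimate:     {round_trip_ms * 2:.0f} ms")  # ~78 ms
```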

This is why CDNs are commonly used. CDNs put servers physically closer to users, which is the only way to reduce the time it takes to reach the server.

How much does this matter?

Check out this chart (from Chrome’s DevTools):

The life of a request, measured by Chrome’s DevTools.

All of the values in the red box are what I’m considering “latency.” They total about 220ms. The actual transfer of content took 0.7ms. No compression or reduction of file size could help here; the only way to reduce the time taken by this request is to reduce latency.
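To make the point concrete, here’s a toy model using the numbers from that chart. The 220ms and 0.7ms figures come from the measurement above; the 75% size reduction is purely illustrative.

```python
# Toy model of the request in the chart: ~220 ms of latency is a fixed cost,
# and only the ~0.7 ms of transfer time scales with payload size.
latency_ms = 220    # DNS + connect + wait (the red box in the chart)
transfer_ms = 0.7   # time spent actually downloading the content

def total_ms(size_reduction: float) -> float:
    """Total request time after shrinking the payload by `size_reduction` (0-1)."""
    return latency_ms + transfer_ms * (1 - size_reduction)

print(f"As measured:        {total_ms(0.0):.2f} ms")   # 220.70 ms
print(f"Payload cut by 75%: {total_ms(0.75):.2f} ms")  # 220.18 ms — ~0.5 ms saved
```

Shaving three-quarters of the bytes off this particular response would save about half a millisecond. That’s why the rest of this post focuses on latency rather than file size.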

Don’t we need…
