The One-Hour Guide to SEO: Technical SEO – Whiteboard Friday


We’ve arrived at one of the meatiest SEO topics in our series: technical SEO. In this fifth part of the One-Hour Guide to SEO, Rand covers essential technical topics from crawlability to internal link structure to subfolders and far more. Watch on for a firmer grasp of technical SEO fundamentals!

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome back to our special One-Hour Guide to SEO Whiteboard Friday series. This is Part V – Technical SEO. I want to be totally upfront. Technical SEO is a vast and deep discipline like any of the things we’ve been talking about in this One-Hour Guide.

There is no way in the next 10 minutes that I can give you everything that you’ll ever need to know about technical SEO, but we can cover many of the big, important, structural fundamentals. So that’s what we’re going to tackle today. You will come out of this having at least a good idea of what you need to be thinking about, and then you can go explore more resources from Moz and many other wonderful websites in the SEO world that can help you along these paths.

1. Every page on the website is unique & uniquely valuable

First off, every page on a website should be two things — unique, unique from all the other pages on that website, and uniquely valuable, meaning it provides some value that a user, a searcher would actually desire and want. Sometimes the degree to which it’s uniquely valuable may not be enough, and we’ll need to do some intelligent things.

So, for example, if we’ve got a page about X, Y, and Z versus a page that’s sort of, “Oh, this is a little bit of a combination of X and Y that you can get to through searching and then filtering this way. Oh, here’s another copy of that X and Y, but it’s a slightly different version. Here’s one with Y and Z. This is a page that has almost nothing on it, but we sort of need it to exist for this weird reason that has nothing to do with search, and no one would ever want to find it through a search engine.”

Okay, when you encounter these types of pages, as opposed to the unique and uniquely valuable ones, you want to think about: Should I be canonicalizing those, meaning pointing the near-duplicate back to the original for search engine purposes? Maybe YZ just isn’t different enough from Z to be a separate page in Google’s eyes and in searchers’ eyes. So I’m going to use something called the rel=canonical tag to point this YZ page back to Z.
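If you’ve never seen one, the tag itself is just a single line in the page’s head. A minimal sketch, using hypothetical URLs where /yz/ is the near-duplicate and /z/ is the version you want search engines to credit:

    <!-- In the <head> of the near-duplicate /yz/ page (hypothetical URLs) -->
    <link rel="canonical" href="https://example.com/z/" />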

Maybe I want to remove these pages entirely. Oh, this one is totally non-valuable to anyone? 404 it. Get it out of here. Maybe I want to block bots from accessing a section of our site. Maybe these are search results pages that make sense if you’ve performed a query on our site, but it doesn’t make any sense for them to be indexed in Google. I’ll keep Google out of them using the robots.txt file, the meta robots tag, or other methods.
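For example, if those internal search results lived under a hypothetical /search/ path, a robots.txt rule could keep crawlers out of the whole section, while a meta robots tag handles individual pages:

    # robots.txt (hypothetical path): keep all crawlers out of internal search results
    User-agent: *
    Disallow: /search/

    <!-- Or, in an individual page's <head>: allow crawling but block indexing -->
    <meta name="robots" content="noindex">

One caveat worth knowing: a page blocked in robots.txt can’t be crawled at all, so Google will never see a meta noindex tag on it. Pick one mechanism per page depending on whether you want to prevent crawling or just indexing.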

2. Pages are accessible to crawlers, load fast, and can be fully parsed in a text-based browser

Secondarily, pages should be accessible to crawlers, and they should load fast, as fast as you possibly can make them. There’s a ton of resources out there about optimizing images, optimizing server response times, optimizing first paint and first meaningful paint, and all the different things that go into speed.

But speed is good not only for technical SEO reasons, meaning Google can crawl your pages faster. Oftentimes when people speed up the load times of their pages, they find that Google crawls more pages from them and crawls them more frequently, which is a wonderful thing. Speed is also good because pages that load fast make users happier. When you make users happier, you make it more likely that they will link, amplify, share, come back, keep loading pages, and not click the back button, all these positive things, while avoiding all those negative things.
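To put a rough number on that, here is a minimal sketch in Python, with a hypothetical example.com URL, of timing how long a page takes to start responding and to fully download. Real audits would use tools like Lighthouse or WebPageTest, but even a crude check like this surfaces slow server responses:

    # Rough sketch: time a page's server response and full download.
    # The URL is a hypothetical placeholder.
    import time
    import urllib.request

    url = "https://example.com/"
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        ttfb = time.monotonic() - start   # headers received: rough time-to-first-byte
        body = resp.read()                # download the rest of the response
    total = time.monotonic() - start
    print(f"Response began after {ttfb:.2f}s; full download took {total:.2f}s ({len(body):,} bytes)")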

They should also be fully parseable in essentially a text browser. That is, even if a relatively unsophisticated browser is not doing a great job of processing JavaScript, post-loading of script events, or other types of content like Flash, a spider should be able to visit the page and still see all of the meaningful content, in text form, that you want to present.

Google still is not processing every image at the “I’m going to analyze everything in this image and extract the text from it” level, nor are they doing that with video, nor with many kinds of JavaScript and other scripts. So I would urge you, and I know many other SEOs would too, notably Barry Adams, a famous SEO who says that JavaScript is evil, which may be taking it a little bit far, but we catch his meaning, to make sure everything on these pages loads in HTML, in text.
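One simple way to sanity-check this is to fetch a page’s raw HTML, without executing any JavaScript, and confirm the content you care about is actually there. A minimal sketch, assuming a hypothetical URL and key phrase:

    # Minimal sketch: approximate what a non-JavaScript crawler sees.
    # The URL and phrase below are hypothetical placeholders.
    import urllib.request

    url = "https://example.com/some-valuable-page/"
    key_phrase = "a phrase your content must contain"

    raw_html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    if key_phrase.lower() in raw_html.lower():
        print("Key content is present in the raw HTML.")
    else:
        print("Key content missing from raw HTML; it may only appear after JavaScript runs.")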

3. Thin content, duplicate content, spider traps/infinite loops are eliminated

Thin content and duplicate content — thin content meaning content that doesn’t provide meaningfully useful, differentiated value,…
