Alright, this one is for those who really want to delve deep into the underbelly of Technical SEO.
And there is good reason to get this right.
Here is the thing.
After the Panda update, getting your website squeaky clean and ensuring you present a perfect user experience has become more important than ever.
Unlike back in the good ole days, when you could simply blast a website with links and rank, today a site full of technical errors will see its rankings dragged down no matter how many quality links point to it.
But once you get rid of technical errors it’s like a weight has been lifted off your site.
The handbrake is off and often you’ll see bigger jumps in your rankings than ever before.[feature_box style=”8″ only_advanced=”There%20are%20no%20title%20options%20for%20the%20choosen%20style” alignment=”center”]
GET A FREE TECHNICAL SEO AUDIT!
Get a full technical SEO Audit covering our official Technical Audit Checklist along with recommendations from our team of SEO Experts!
Click Here To Claim Your FREE Technical SEO Audit[/feature_box]
Take one of our clients for example.
They had an SEO company that had no idea about SEO doing their SEO… go figure!
Once we took them on board we completed a technical audit and fixed the mistakes, and over just a few days the site jumped from somewhere in the 200s to the second page for a VERY competitive keyword.
Check out the case study here:
So what I thought I would do is put together a full list of the technical aspects you should be watching out for on your website, so you can ensure your site is (and stays) squeaky clean.
Alright ready to get your nerd on?
Let’s delve into this…
Technical Website Audit Checklist
Indexing & Crawl-Ability
1) Check for pages with 4xx error status codes
4xx errors often point to a problem on a website. For example, if you have a broken link on a page and visitors click it, they may see a 4xx error. It’s important to regularly monitor these errors and investigate their causes, because they can have a negative impact and lower your site’s authority in users’ eyes.
The absence of pages with 4xx status codes does not guarantee that users and search bots will have absolutely no trouble navigating your website content. Make sure all pages are available and load properly, check your website for Pages with 5xx status codes and make sure your custom 404 error page is set up correctly.
2) Check for pages with 5xx status codes
5xx error messages are sent when the server is aware that it has a problem or error. It’s important to regularly monitor these errors and investigate their causes, because they can have a negative impact and lower your site’s authority in search engines’ eyes.
Same as with 4xx errors, the absence of pages with 5xx status codes does not mean that users and search bots will flawlessly navigate your website content.
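If you want to spot-check status codes yourself before running a full crawl, here’s a minimal Python sketch (not a full audit tool; the commented-out URL is a placeholder) that fetches a page and buckets the response into 4xx client errors and 5xx server errors:

```python
# Minimal status-code checker sketch using only the standard library.
from urllib import request, error

def classify_status(code: int) -> str:
    """Bucket an HTTP status code for audit reporting."""
    if 400 <= code < 500:
        return "client error (4xx)"
    if 500 <= code < 600:
        return "server error (5xx)"
    return "ok"

def check_url(url: str) -> str:
    """Fetch a URL and return its status bucket.

    urllib raises HTTPError for 4xx/5xx responses, so we catch it
    and classify the error code instead of crashing.
    """
    try:
        with request.urlopen(url) as resp:
            return classify_status(resp.status)
    except error.HTTPError as e:
        return classify_status(e.code)

# check_url("https://www.example.com/some-broken-link")  # placeholder URL
```

In practice you’d feed this a list of every URL found in your crawl and report anything that doesn’t come back “ok”.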
3) Ensure your 404 error page is set up correctly
Before we get into this one… in the spirit of keeping things as light as possible, here are a few pretty funny 404 pages I found… Hope these entertain you a little before we get back to the dry SEO stuff 😉
Alright with that out of the way… let’s get back to business…
A custom 404 error page can help you keep users on the website. In a perfect world, it should inform users that the page they are looking for doesn’t exist, and feature such elements as: HTML sitemap, navigation bar, and a search field.
But most importantly, a 404 error page should return 404 response code. This may sound obvious, but unfortunately it’s rarely so. See why it happens and what it can result in, according to Google Webmaster Tools:
Just because a page displays a 404 File Not Found message doesn’t mean that it’s a 404 page. It’s like a giraffe wearing a name tag that says “dog.” Just because it says it’s a dog, doesn’t mean it’s actually a dog. Similarly, just because a page says 404, doesn’t mean it’s returning a 404…
Returning a code other than 404 or 410 for a non-existent page… can be problematic. Firstly, it tells search engines that there’s a real page at that URL. As a result, that URL may be crawled and its content indexed. Because of the time Googlebot spends on non-existent pages, your unique URLs may not be discovered as quickly or visited as frequently and your site’s crawl coverage may be impacted.
We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page.
A customized, informative 404 error page can help you keep users on your website. And, most importantly, a 404 error page should return the 404 response code. So make sure it is set up correctly.
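On an Apache server, the custom 404 page is typically wired up in the .htaccess file. A minimal sketch, assuming your error page lives at /404.html (a placeholder path):

```apache
# Serve the custom page for any missing URL.
# A relative local path makes Apache return the real 404 status code;
# pointing ErrorDocument at a full URL would trigger a redirect instead,
# which is exactly the "soft 404" problem Google warns about above.
ErrorDocument 404 /404.html
```

After setting this up, request a nonsense URL on your site and confirm the response code really is 404, not 200 or 302.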
4) Ensure you have a robots.txt file available on your website
The robots.txt file is automatically checked by robots when they arrive at your website. This file should contain commands for robots, such as which pages should or should not be indexed. It must be well-formatted to ensure search engines can crawl and read it.
If you want to disallow indexing of some content (for example, pages with private or duplicate content), just use an appropriate rule in the robots.txt file.
For more information on such rules, check out robotstxt.org.
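As a reference point, here’s what a minimal robots.txt might look like. The paths below are placeholders for illustration, not recommendations for your specific site:

```
# robots.txt — a minimal sketch; replace the paths with your own
User-agent: *
Disallow: /admin/     # keep private areas out of the index
Disallow: /search?    # block thin internal-search result pages

Sitemap: https://www.example.com/sitemap.xml
```

The Sitemap line at the end is optional but handy, as it points crawlers straight at your XML sitemap (covered in point 5 below).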
Please note that commands placed in the robots.txt file are more like directives than absolute rules for robots to follow. There’s no guarantee that some disobedient robot won’t check the content you have disallowed. Therefore, if you’ve got any secret or sensitive content on your site, robots.txt is not a way to lock it away from the public.
The availability of a valid robots.txt file on your website does not guarantee that it’ll be quickly and correctly indexed by search engines.
First, the file itself must be set up correctly, so that it doesn’t exclude important content from indexing by mistake. Check Pages restricted from indexing to see what’s closed off from search engines on your site.
Next, to ensure fast indexation of your most important pages, you should also check the availability of an XML sitemap.
5) Make sure you have an XML sitemap on your website.
An XML sitemap should contain all of the website pages that you want to get indexed, and should be located in the root directory of the website (e.g. http://www.site.com/sitemap.xml). In general, it serves to aid indexing. It should be updated when new pages are added to the website, and needs to be correctly coded.
Besides, in this sitemap you can set the priority and change frequency of each page, telling search engines which pages they are supposed to crawl more often (i.e. which are more frequently updated).
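For illustration, here’s a bare-bones sitemap.xml with change frequency and priority set. The URLs and values are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/services/</loc>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Note that changefreq and priority are hints, not commands; search engines may weigh them lightly or ignore them.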
Learn how to create an .xml sitemap at sitemaps.org or watch this video here:[video_player type=”youtube” youtube_hide_controls=”Y” youtube_remove_logo=”Y” width=”460″ height=”259″ align=”center” margin_top=”0″ margin_bottom=”20″]aHR0cHM6Ly95b3V0dS5iZS9UTzhfbXc2cHUyNA==[/video_player]
A valid XML sitemap on your website does not guarantee that it will be flawlessly indexed by search engines. For better indexation, make sure the robots.txt file is also available in the root directory of your website, and make sure you haven’t closed off any important page from robots (see Pages restricted from indexing).
6) Ensure pages with useful content are not restricted from indexing and low quality pages are restricted from indexing.
A page can be restricted from indexing in several ways:
– in the robots.txt file
– by Noindex X-Robots tag
– by Noindex Meta tag.
The noindex meta tag is a line of HTML code, while the X-Robots-Tag is sent in the HTTP header; both tell crawlers how to handle specific pages on the site. Specifically, they tell crawlers whether they are allowed to index the page, follow its links, and/or archive its contents.
So make sure that pages with unique and useful content are available for indexing.
To make sure none of the website sections with valuable content are restricted from indexing, check the Disallow rules in your robots.txt file. The same goes for low quality pages: you can restrict those in your robots.txt file as well.
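To make the noindex variants concrete, here are two hedged examples (the robots.txt method is covered above). The meta tag goes in the page itself:

```html
<!-- Noindex meta tag: placed in the <head> of the page you want
     kept out of the index; "follow" still lets crawlers follow links -->
<meta name="robots" content="noindex, follow">
```

The X-Robots-Tag is useful for non-HTML files like PDFs, where there’s no head section to put a meta tag in. On Apache (assuming mod_headers is enabled; the file name is a placeholder):

```apache
# Send the noindex directive as an HTTP response header
<Files "private.pdf">
  Header set X-Robots-Tag "noindex"
</Files>
```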
7) Make sure the www and non-www versions of your URL are consolidated
Usually, websites are available with and without “www” in the domain name. This issue is quite common, and people link to both www and non-www versions. Fixing this will help you prevent search engines from indexing two versions of a website.
Although such indexation won’t cause a penalty, setting one version as the priority is best practice, especially because it consolidates link juice from links with and without www into one common version.
You can set up and view the primary www or non-www version for your site in the .htaccess file. Also, it is recommended to set the preferred domain in Google Webmaster Tools.
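Here’s a minimal .htaccess sketch for forcing the www version, assuming Apache with mod_rewrite enabled; example.com is a placeholder domain (flip the logic if you prefer non-www):

```apache
RewriteEngine On
# If the host has no www prefix, 301-redirect to the www version
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The R=301 flag makes it a permanent redirect, so link juice is passed to the preferred version.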
8) Remove duplicate http/https versions.
If the HTTP and HTTPS versions of your website are not set properly, both of them can get indexed by search engines and cause duplicate content issues. In order to fix these issues, it is recommended to set one version (either HTTP or HTTPS, depending on the content of the page) as a priority.
You can set up the primary HTTP or HTTPS versions of your pages in the .htaccess file, or using the rel=”canonical” tag.
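If HTTPS is your preferred version, a hedged .htaccess sketch (again assuming Apache with mod_rewrite) looks like this:

```apache
RewriteEngine On
# If the request did not arrive over HTTPS, 301-redirect it
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

Note that some hosting setups (e.g. behind a load balancer or CDN) report HTTPS differently, so test this on your own server before relying on it.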
9) Make sure 302 redirects are used correctly and justified.
302 redirects are temporary so they don’t pass any link juice. If you use them instead of 301s, search engines might continue to index the old URL, and disregard the new one as a duplicate, or they might divide the link popularity between the two versions, thus hurting search rankings.
That’s why it is not recommended to use 302 redirects if you are permanently moving a page or a website. Instead, stick to a 301 redirect to preserve link juice and avoid duplicate content issues.
The absence of 302 redirects does not guarantee that your website is free from redirect issues. To make sure all redirects are set up correctly, check your website for Pages with meta refresh and for Pages with rel = “canonical”.
10) Make sure 301 redirects are used correctly and justified
301 redirects are permanent and are usually used to solve problems with duplicate content, or if URLs are no longer necessary. The use of 301 redirects is absolutely legitimate, and it’s good for SEO because 301 redirect will funnel link juice from the old page to the new one. Just make sure you redirect old URLs to the most relevant pages.
If all of your 301 redirects are set up correctly, this does not guarantee that there are no other redirect issues on your website. To make sure you are not losing any link juice, check your website for Pages with 302 redirects, Pages with meta refresh and Pages with rel=”canonical”.
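For a single moved page, the simplest Apache form is one line in .htaccess; both paths below are placeholders:

```apache
# Permanently redirect the old URL to its most relevant replacement
Redirect 301 /old-page.html /new-page.html
```

The key word in the advice above is “relevant”: redirecting old URLs in bulk to the homepage wastes the topical link juice those pages earned.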
11) Remove Meta Refresh redirects from your website
Basically, meta refresh redirects may be seen as a violation of Google’s Quality Guidelines and are therefore not recommended from an SEO point of view.
As one of Google’s representatives points out: “In general, we recommend not using meta-refresh type redirects, as this can cause confusion with users (and search engine crawlers, who might mistake that for an attempted redirect)… This is currently not causing any problems with regards to crawling, indexing, or ranking, but it would still be a good idea to remove that.”
So stick to the permanent 301 redirect instead.
The absence of meta refresh redirects does not guarantee that your website is free from redirect issues. To make sure all redirects are set up correctly, check your website for Pages with 302 redirects and for Pages with rel=”canonical”.
12) Fix pages with incorrect rel=”canonical” tags
In most cases duplicate URLs are handled via 301 redirects. However, sometimes, for example when the same product appears in two categories with two different URLs and both need to be live, you can specify which page should be considered the priority with the help of rel=”canonical” tags. It should be correctly implemented within the <head> tag of the page and point to the version which should rank.
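For reference, the tag itself is a single line in the page’s head section; the URL below is a placeholder:

```html
<head>
  <!-- Points search engines at the version of this page that should rank -->
  <link rel="canonical" href="https://www.example.com/products/blue-widget/">
</head>
```

Both duplicate URLs should carry the same canonical tag, each pointing at the one preferred URL.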
To make sure there are no other redirect issues on your website, check it for Pages with 302 redirect and Pages with meta refresh.
Encoding and Technical Factors
13) Make sure your website is mobile friendly!
According to Google, the mobile-friendly algorithm affects mobile searches in all languages worldwide and has a significant impact on Google rankings. This algorithm works on a page-by-page basis: it is not about degrees of mobile-friendliness; each page is simply either mobile-friendly or not. The algo is based on criteria such as font sizes, tap targets/links, readable content, the viewport, etc.
Since the mobile-friendly algo works on a page-by-page basis, you should make sure that all your landing pages are mobile-friendly too – you can check them in the Page Audit module. Additionally, you can check the Mobile Usability report in Google Webmaster Tools -> Search Traffic -> Mobile Usability and fix potential mobile issues found on your site.
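One of the most common failures in that Mobile Usability report is a missing viewport declaration. The standard fix is one line in the head of every page:

```html
<!-- Tells mobile browsers to render at device width instead of
     a zoomed-out desktop layout -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Without it, mobile browsers typically render the desktop layout shrunk down, which triggers the small-font and tap-target issues mentioned above.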
Here’s Matt Cutts talking about mobile coding and best practice:[video_player type=”youtube” youtube_hide_controls=”Y” youtube_remove_logo=”Y” width=”460″ height=”259″ align=”center” margin_top=”0″ margin_bottom=”20″]aHR0cHM6Ly95b3V0dS5iZS9EMDN3UmI0czdNVQ==[/video_player]
14) Fix pages with duplicate rel=”canonical” tags.
Having duplicate rel=canonical code on a page happens frequently in conjunction with SEO plugins that often insert a default rel=canonical link, possibly unknown to the webmaster who installed the plugin. Double-checking the page’s source code will help correct the issue.
In cases of multiple declarations of rel=canonical, Google will likely ignore all the rel=canonical hints, so your efforts to avoid duplicate content issues may be wasted.
15) Avoid Pages With Frames Where Possible.
Frames allow displaying more than one HTML document in the same browser window. As a result, text and hyperlinks (the most important signals for search engines) can appear to be missing from such documents.
If you overuse frames, search engines will fail to properly index your valuable content and won’t rank your website highly.
If there are specific reasons for using Frames, consider adding the NOFRAMES tag, where you can insert your optimized content. But note that people will see such content only when Frames are not displayed in their browser (in this respect the NOFRAMES tag is similar to the ALT attribute of an image). In general, we recommend trying to avoid using Frames at all.
16) Fix pages with W3C errors and warnings
Validation is usually performed via the W3C Markup Validation Service. And although it’s not obligatory and has no direct SEO effect, bad code may be the cause of Google not indexing your important content properly.
We recommend checking your website pages for broken code to avoid issues with search engine spiders.
Search engine spiders find it easier to crawl semantically correct markup, which is why a site’s HTML markup should be valid and free of errors. The CSS used to control the design and formatting of the website, which makes the webpages lighter and easier to load, should be error-free too. If, for example, one of the tags has been left unclosed, the spiders may miss an entire chunk of content, thus reducing the value of the page.
17) Fix BIG and slow loading pages
Naturally, there’s a direct correlation between the size of a page and its loading speed, which in turn is one of the numerous ranking factors.
Basically, heavy pages take longer to load. That’s why the general rule of thumb is to keep your HTML page size under 256kB.
Of course, it’s not always possible. For example, if you have an e-commerce website with a large number of images, you can push this up to more kilobytes, but this can significantly impact page loading time for users with a slow connection speed.
Big pages can hurt user experience and even search engine rankings, so think about reducing the size of such pages and making them load faster.
18) Avoid the use of dynamic URL’s
URLs that contain dynamic characters like “?”, “_” and parameters are not user-friendly, because they are not descriptive and are harder to remember. To increase your pages’ chances to rank, it’s best to set up dynamic URLs so that they are descriptive and include keywords, not numbers in parameters.
As Google Webmaster Guidelines state, “URLs should be clean coded for best practice, and not contain dynamic characters.”
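A common way to serve clean URLs while keeping a dynamic backend is a rewrite rule. A hedged Apache sketch (the script name, parameter and URL pattern are placeholders for your own setup):

```apache
RewriteEngine On
# Serve /product/blue-widget from the underlying dynamic script,
# so visitors and search engines only ever see the clean URL
RewriteRule ^product/([a-z0-9-]+)/?$ product.php?slug=$1 [L,QSA]
```

The QSA flag preserves any extra query parameters (e.g. tracking tags) by appending them to the rewritten URL.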
To make sure all your website URLs are clean coded and search engine friendly, check your website for Too long URLs as well.
19) Avoid too long URL’s
URLs shorter than 115 characters are easier to read by end users and search engines, and will work to keep the website user-friendly.
To make sure all your website URLs are clean coded and search engine friendly, check your website for Dynamic URLs as well.
20) Fix broken internal links
Broken outgoing links can be a negative quality signal to search engines and users. If a site has many broken links, it is logical to conclude that it has not been updated for some time. As a result, the site’s rankings may be downgraded.
Although 1-2 broken links won’t cause a Google penalty, try to regularly check your website, fix any broken links, and make sure their number doesn’t grow. Besides, your users will like you more if you don’t show them broken links pointing to non-existent pages.
This is just one of the factors relating to the quality of links on your website. To make sure your linking is totally fine, you should also check your website for Pages with excessive number of outgoing links.
21) Fix pages with excessive number of links
According to Matt Cutts (head of Google’s Webspam team), “…there’s still a good reason to recommend keeping to under a hundred links or so: the user experience. If you’re showing well over 100 links per page, you could be overwhelming your users and giving them a bad experience. A page might look good to you until you put on your “user hat” and see what it looks like to a new visitor.”
Although Google frames this as a user experience issue, what too many links on a page can really hurt is its rankings. So the rule is simple: the fewer links on a page, the fewer problems with its rankings.
In fact, there’s nothing more to add here. Just try to stick to best practice and keep the number of outgoing links (internal and external) under 100.
This is just one of the factors relating to links on your website. To make sure your linking is totally fine, you should also check your website for Broken links.
On Page Factors
22) Fix empty title tags
If a page doesn’t have a title, or the title tag is empty (i.e. it just looks like <title></title> in the code), Google and other search engines will decide on their own what content to show on the results page. Thus if the page ranks on Google for a keyword and someone sees it in Google’s results for their search, they may not want to click on it simply because it says something absolutely unappealing.
No webmaster would want this, because in this case you cannot control what people see on Google when they find your page. Therefore, every time you are creating a webpage, don’t forget to add a meaningful title that would attract people.
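For reference, the title lives in the page’s head section; the wording below is just a made-up example:

```html
<head>
  <!-- Keep it unique per page, keyword-relevant, and under ~55 characters -->
  <title>Technical SEO Audit Checklist | Example Agency</title>
</head>
```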
This is only one thing to check about the titles and this does not mean you don’t have any other problems with page titles on your site. To be sure your titles are totally fine, also check your site for Duplicate titles, and Too long titles.
23) Fix duplicate titles
A page title is often treated as the most important on-page element. It is a strong relevancy signal for search engines because it tells them what the page is really about. It is of course important that the title includes your most important keyword. But beyond that, every page should have a unique title to ensure that search engines have no trouble determining which of the website’s pages is relevant for a given query. Pages with duplicate titles have fewer chances to rank high. Even more, if your site has pages with duplicate titles, other pages may be hard to get ranked as well.
The absence of duplicate titles does not guarantee there are no other issues with title tags on your website. To review them on all sides, you should also check your site for Empty title tags and Too long titles.
24) Fix too long titles
Every page should have a unique, keyword-rich title. At the same time, you should try to keep title tags from getting too long. Titles longer than 55 characters get truncated by search engines and will look unappealing in search results. You’re trying to get your pages ranked on page 1 in search engines, but if the title is shortened and incomplete, it won’t attract as many clicks as it deserves.
The fact that all of your titles are within the required length does not guarantee there are no other issues with title tags on your website. To eliminate any possible trouble, check your site for Empty title tags and Duplicate titles.
25) Fix empty META descriptions
Although meta descriptions don’t have direct influence on rankings, they are still important because they form the snippet people see in search results. The description should therefore “sell” the webpage to searchers and encourage them to click through.
If the meta description is empty, search engines will themselves decide what to include in a snippet. Most often it’ll be the first sentence on the page. As a result, such snippets may be unappealing and irrelevant.
That’s why you should write meta descriptions for each of your website pages (at least for the landing pages) and include marketing text that can lure a user to click.
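Like the title, the meta description sits in the page’s head; the copy below is a made-up example:

```html
<head>
  <!-- Unique per page, under ~155 characters, written to earn the click -->
  <meta name="description"
        content="Use this technical SEO checklist to find and fix the
                 crawl, redirect and on-page errors holding your site back.">
</head>
```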
The fact that meta descriptions are present on your website does not mean that they form an appealing snippet. To make sure your snippets will attract users, you should also check your website for Too long Meta descriptions.
26) Fix Duplicate META descriptions
According to Matt Cutts, it is better to have unique meta descriptions, or even no meta descriptions at all, than to show duplicate meta descriptions across pages.[video_player type=”youtube” youtube_hide_controls=”Y” youtube_remove_logo=”Y” width=”460″ height=”259″ align=”center” margin_top=”0″ margin_bottom=”20″]aHR0cHM6Ly95b3V0dS5iZS9XNGdyODhvSGItaw==[/video_player]
So make sure that your most important pages have unique and optimized descriptions.
The absence of duplicate descriptions does not guarantee there are no other issues with description tags on your website. To review them on all sides, you should also check your site for Empty META descriptions and Too long META descriptions.
27) Fix too long META descriptions
Although meta descriptions don’t have direct influence on rankings, they are still important because they form the snippet people see in search results. The description should therefore “sell” the webpage to searchers and encourage them to click through. If the meta description is too long, it’ll get cut off by the search engine and may look unappealing to users.
For a meta description, use a maximum of 155 characters, as longer meta descriptions get truncated by search engines, so people simply won’t see all the attractive marketing text you were planning to use to attract more clicks.
– – –
Whew… that’s a whole heap of technical SEO geek glory right there.
It’s the complete list of the exact areas we check as the very first step before we begin any of our own or our clients’ campaigns.
If all this went a little bit over your head, we also offer FREE audits of your website.
So if this is something you’re after and you’d like us to check your website against all of the above factors simply request your free SEO technical audit below.[feature_box style=”8″ only_advanced=”There%20are%20no%20title%20options%20for%20the%20choosen%20style” alignment=”center”]
GET A FREE TECHNICAL SEO AUDIT!
Get a full technical SEO Audit covering our official Technical Audit Checklist along with recommendations from our team of SEO Experts!
Click Here To Claim Your FREE Technical SEO Audit[/feature_box]
Hope you enjoyed this feast of geek SEO talk, and we’ll talk more next time.
– – –
WEBSITES THAT SELL OFFICE LOCATIONS
Websites That Sell provides SEO services to clients all across Australia. Their unique & proprietary Targeted SEO system helps local, national & international clients get found on Google. The company’s head office is located on the Sunshine Coast with satellite offices in Brisbane, Gold Coast, Melbourne, Adelaide, Hobart, Sydney and Perth. David Krauter, head of Strategy, operates from the Sunshine Coast office, which can be reached on 1300 188 662.