How To Solve SEO Errors from a Website Crawl: Part 2
In my previous article “How To Solve SEO Errors from a Website Crawl: Part 1”, I outlined a few common errors that SEO spiders recognize when crawling a website. This week, we’ll go over some other SEO errors that are commonly found. The solutions, however, can be more technical and difficult than those for the SEO errors we have previously discussed. Make sure that, before you start following the instructions below, you fully understand what you are doing.
Missing Meta Titles & Meta Descriptions
If a page is missing its meta title or meta description, its search result will look very empty, and the page may not even be considered for ranking on search engines. These are two of the most important elements of search engine optimization, and without them your website will perform poorly compared against websites that use them.
Content management systems such as WordPress and Drupal typically have default functionality that generates the meta title and meta description from the content on the page. However, this is not the best method to generate optimized meta titles and meta descriptions for search engines. A plug-in or extension for your content management system (such as Yoast SEO) allows you to optimize for search engines on a site-wide and page-by-page basis.
If you do not have a content management system, you will need to insert the meta title and meta description manually into each displayed website file. In HTML, a title tag looks like this: <title>This is where you put your title | Website</title>, and a meta description looks like this: <meta name="description" content="This is where you put your meta description, limited to 156 chars">. Make sure to follow SEO standards while adding these to your website. This can prove to be a long and monotonous task.
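As a sketch, a minimal <head> section with both tags in place might look like the following (the page title, description text, and site name are all placeholders, not recommendations for your site):

```html
<!-- Minimal <head> containing a hand-written meta title and meta description -->
<head>
  <meta charset="utf-8">
  <!-- Title tag: keep it concise and put the important keywords first -->
  <title>Blue Widgets for Small Businesses | Example Co.</title>
  <!-- Meta description: roughly 156 characters; often shown as the snippet in search results -->
  <meta name="description" content="Shop durable blue widgets built for small businesses. Free shipping on orders over $50 and a 30-day money-back guarantee.">
</head>
```

You would repeat this for every displayed page, writing a unique title and description each time — which is exactly why this manual approach gets monotonous on larger sites.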
Refer to Ralf’s article “Resources for Beginning SEO Practices” for more information on how to best optimize your meta description and meta title.
Duplicate Content
Duplicate content can show up in many forms in an SEO spider crawl. Sometimes you will see duplicate meta titles, duplicate meta descriptions, or just plain duplicate main-page content. In any of these cases, the issue should be resolved quickly. One of the more common causes of duplicate content is pagination: quite often, paginated pages will contain the exact same meta title or meta description. Other common causes include URL parameters and content management systems serving the same page under multiple URIs, each of which registers as a unique URL to search engines. There are a few options for solving these issues, and some may be better than others. It boils down to using your best judgement to provide the best experience possible for your users and the search engine crawlers.
One solution to duplicate content is a 301 redirect. 301 redirects are best used when two or more pages are exact copies, or when the exact same page is being served under multiple URIs by a content management system. A 301 redirect can be created via the .htaccess file on Apache, or via your server configuration file on other server operating systems. When editing server configuration files, you do risk creating internal server errors if you don’t know what you are doing. Some content management systems include default functionality or community plug-ins that let you create 301 redirects without editing server configuration files.
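On Apache, for example, a couple of hypothetical 301 rules in .htaccess could look like this (the paths and domain below are placeholders — substitute your own duplicate and preferred URLs):

```apache
# Permanently redirect a single duplicate page to the preferred URL
Redirect 301 /old-page/ https://www.example.com/new-page/

# Permanently redirect a whole duplicate URI pattern using mod_rewrite
RewriteEngine On
RewriteRule ^blog/archive/(.*)$ /blog/$1 [R=301,L]
```

A mistyped rule here can take the whole site down with an internal server error, which is why the warning above about understanding your server configuration applies before you touch this file.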
Another solution is to use a canonical tag, which looks like this: <link rel="canonical" href="https://www.digitalreachagency.com" />. This tag, which goes into the <head></head> section, tells the search engine what the preferred URL is for the content. It can be a good resolution for paginated pages.
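For instance, a hypothetical paginated category page could declare its preferred URL like this (the URLs are placeholders for illustration):

```html
<!-- Placed in the <head> of the paginated page, e.g. /category/widgets?page=2 -->
<link rel="canonical" href="https://www.example.com/category/widgets" />
```

Every paginated variant carries the same tag, so the search engine consolidates them under the one preferred URL instead of treating each page parameter as unique content.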
What if you don’t have the ability or the access needed to implement the solutions above? While it is the least preferred method, you can disallow the URL in your robots.txt file, which tells search engine robots not to crawl the URL at all. The robots.txt file lives in your website’s root directory; if you can’t find one, you should create one following this robots.txt guide.
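As a sketch, a robots.txt that blocks duplicate URLs might look like this (the paths are hypothetical examples, not rules you should copy as-is):

```
# robots.txt — lives at the site root, e.g. https://www.example.com/robots.txt
# Applies to all crawlers
User-agent: *
# Block duplicate paginated URLs generated by a page parameter
Disallow: /category/widgets?page=
# Block a duplicate printer-friendly version of the content
Disallow: /print-version/
```

Keep in mind this only stops crawling; it does not redirect users or consolidate ranking signals the way a 301 or canonical tag does, which is why it is the last resort.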
302 Redirects
A 302 redirect serves as a temporary redirect, but many people mistakenly use them to create permanent redirects on a website. This is most often an issue on Windows servers, where generating a 302 is much easier than a 301. Don’t take shortcuts, though, because a 302 redirect can have a negative impact on a page’s search results: since Google treats it as temporary, instead of replacing the indexed URL with the new one, Google will index both the old URL and the new URL. This is why it is important to change all 302 redirects over to 301 redirects.
Creating redirects can be complicated if you don’t have an easy 301 redirect generator included in your content management system. It involves editing server config files, such as .htaccess on Apache, which can lead to internal server errors if you are inexperienced with the configuration of the server. Make certain you understand how to edit your server config file and how the website’s server host is configured before continuing with edits.
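If your server does use .htaccess, converting a temporary redirect to a permanent one can be as simple as changing the status code in the rule (the URLs below are placeholders):

```apache
# Before: a temporary redirect — Google keeps the old URL in its index
# Redirect 302 /old-page/ https://www.example.com/new-page/

# After: a permanent redirect — signals that the old URL has moved for good
Redirect 301 /old-page/ https://www.example.com/new-page/
```

On other server setups the syntax will differ, but the principle is the same: find every rule returning a 302 status and change it to return a 301.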
In my next article, “How To Solve SEO Errors from a Website Crawl: Part 3”, I will go over the final SEO error: crawl errors. While it is last on our list, it is far from the least important! More to come.