Common Website Status Code Errors That Impact Your SEO

There are a variety of errors that your website or web pages could be experiencing, and it's not uncommon to see some of them every now and then. Familiarizing yourself with these issues will help you understand what they mean and, subsequently, the steps you need to take to fix them as soon as possible.

When you identify these types of errors, it is crucial to address them as soon as you can, since they negatively impact your SEO. When a website or web page returns a status code error, it likely hasn't been crawled or indexed since the error occurred, and in order for your website to be found via Google (or any other search engine), it requires regular crawling.

So, what are these common errors? What do they mean, and more importantly, how can you fix them? Well, let’s dive in!

Status Codes

Status codes are three-digit numbers, and each class of status code begins with a different digit, ranging from 1 to 5.

The classes can be seen below:

1xx – Informational responses

2xx – Everything is okay!

3xx – Redirections

4xx – Client errors

5xx – Server errors

Let's go over the most important status code errors to know for SEO and how to address them; these fall under classes 4 and 5. While classes 1 through 3 are not errors, it is important to also touch on class 3, since those codes relate to links and can have an impact on how Googlebots crawl your website.
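
Before diving into the individual codes, it helps to see one in practice. As a rough illustration, the short Python sketch below checks what status code a URL returns; the URL is just a placeholder, and the requests library is one common way to make the check.

import requests  # third-party HTTP library

response = requests.get("https://example.com/", timeout=10)  # placeholder URL
print(response.status_code)  # 200 means everything is okay; 4xx or 5xx signals an error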

Class 3 Status Codes

As mentioned, class 3 status codes aren’t errors, but rather provide information about a URL that has been redirected. There are two important status codes that relate to SEO: permanent redirect and temporary redirect.

HTTP Status Code 301: Permanent Redirect

This status code sends both Googlebots and visitors to a different URL than the one originally selected. It ultimately means that two or more URLs will take you to the same page. This could be done for a variety of reasons, including when creating a new website. This way, the older URL doesn't need to be deleted or changed, but when users (and Googlebots) go to that URL, it permanently takes them to the new one.

This status code is seen as the better of the two discussed in this section. This is because a 301 indicates that the web page doesn't lose any of its authority; a 301 allows the original URL's link equity to be passed along to the new URL.
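
How a 301 is set up depends on your server or CMS, but as a rough illustration, here is a minimal Python sketch using Flask, with the made-up paths /old-page and /new-page standing in for your real URLs.

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/old-page")
def old_page():
    # Permanently send visitors and Googlebots to the new URL,
    # passing along the original URL's link equity
    return redirect("/new-page", code=301)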

HTTP Status Code 302: Temporary Redirect

This status code is very similar to the 301 in that both Googlebots and visitors are sent to a different URL than the one originally selected; however, no link equity is passed along, because the redirect is seen as temporary rather than permanent.
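
If you are unsure which type of redirect a URL returns, you can check it directly. The sketch below uses the Python requests library with a placeholder URL; disabling automatic redirects lets you see the raw status code and where the redirect points.

import requests

r = requests.head("https://example.com/old-page", allow_redirects=False)  # placeholder URL
print(r.status_code)              # 301 (permanent) or 302 (temporary)
print(r.headers.get("Location"))  # the URL the redirect points to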

Class 4 Status Codes

Class 4 status codes are errors that appear when there seems to be a problem on the client’s end. Let’s examine the four common class 4 status errors.

HTTP Status Code 401: Authentication Required

When a page shows a 401, it means that authentication is required – the user needs to be logged in. This status code also applies if the authentication has failed, meaning the username and/or password is incorrect. The user will have to ensure that their credentials are correct or reset them.

HTTP Status Code 403: Forbidden Action

The 403 status code indicates that the user does not have access to the web page, either because they lack credentials or because the credentials they have do not grant permission to log in.

HTTP Status Code 404: Page Not Found

This status code indicates that the page being requested cannot be found by the browser, but it doesn't indicate whether this is permanent or temporary. In most cases, a 404 indicates that the content has been deleted.

Hyperlinks that point to web pages that have been deleted or moved, whether permanently or temporarily, are also referred to as orphan links or orphan pages. 404s should be fixed as soon as you are aware of them, since they not only eat up your crawl budget but also deter visitors. You can use a 301 redirect to send Googlebots and visitors to another page, or to the page's replacement.

You can monitor these errors using Google Search Console.

There are a number of factors that could be the cause of a 404 error; we will take a closer look at what these could be when we discuss crawl errors.
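
Alongside Google Search Console, you can also spot-check important URLs yourself. The Python sketch below loops over a hand-picked list of placeholder URLs and flags any that come back as a 404.

import requests

urls = [
    "https://example.com/",          # placeholder URLs -- swap in your own
    "https://example.com/old-post",
]

for url in urls:
    status = requests.get(url, timeout=10).status_code
    if status == 404:
        print("Broken page:", url)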

HTTP Status Code 410: Page Is Gone

This status code indicates that while a page used to exist at the given URL, it no longer exists and is permanently gone.

Class 5 Status Codes

Class 5 status codes are errors that indicate that the server failed to fulfill a request. In this case, there is really only one that's common: service unavailable.

HTTP Status Code 503: Service Unavailable

This status code indicates that the request for the web page cannot be handled due to an outage or overload. It essentially means that a web page is "down" due to immediate temporary issues, such as site maintenance. In this case, it is up to the user to try again later.

As a webmaster, try to give notice of when your website will be down for maintenance in order to give users a heads up.
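
You can also signal to Googlebots that the downtime is temporary. As a rough illustration, the Python sketch below (using Flask, with a made-up maintenance route) returns a 503 along with a Retry-After header suggesting when to come back.

from flask import Flask, Response

app = Flask(__name__)

@app.route("/")
def maintenance():
    # 503 tells crawlers the outage is temporary;
    # Retry-After (in seconds) hints when they should try again
    return Response("Down for maintenance", status=503,
                    headers={"Retry-After": "3600"})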

Crawl Errors

Crawl errors occur when Googlebots try to access a web page in order to crawl it but are unable to do so, which results in the web page not being indexed. Sometimes this is deliberate; for instance, a webmaster might use a robots.txt file to avoid wasting crawl budget on a web page that contains infinite spaces. Sometimes, however, it is unintended, and it's not uncommon for webmasters to experience these errors.

Webmasters are able to view the crawl errors on their website through Google Search Console.

There are two types of crawl errors: site errors and URL errors.

Site Errors

Site errors indicate that Googlebots are unable to access your website to crawl and index it. There are three common types of site errors, and each can be resolved without too much difficulty:

Server Error: This error is commonly indicated through an HTTP Status Code 503, which usually means that your site has too many visitors, making the bots unable to crawl it. It could also indicate that your page is having issues loading. To address an issue like this, you can add an HTTP Retry-After header to tell the bots when to attempt the re-crawl.

Robots.txt Issues: Ensure that your robots.txt is not set to Disallow the pages you want crawled – this will prevent Googlebots from crawling your web page. Further, make sure that it can be found by the Googlebots; it needs to be located in the root directory to be read properly. A quick way to check both is sketched after this list.

DNS Error: This simply means that Googlebots couldn't access your website due to issues that are usually temporary. If you are experiencing this, it is best to just wait, as it will likely resolve itself.
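
For the robots.txt item above, here is a minimal Python sketch, using the standard library's robot parser and a placeholder domain, to confirm that your robots.txt is reachable at the root and isn't blocking Googlebot from a page you care about.

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # must sit in the root directory
parser.read()  # fetch and parse the file

# True means Googlebot is allowed to crawl the page; False means a Disallow rule blocks it
print(parser.can_fetch("Googlebot", "https://example.com/some-page"))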

URL Errors

URL errors refer to crawl errors on a specific web page. Of all the types of URL errors that can occur, a 404 error is one of the most common. To address URL errors, there are a few things that you can do:

Noindex, Nofollow: Make sure that these directives are not written in your source code unintentionally, as they will prevent a page from being indexed or its links from being followed. A simple check is sketched after this list.

Sitemap: Update and refresh your sitemap. It may be the case that your sitemap is outdated and contains old URLs that are no longer in use.

Internal Links: Check to ensure that all of your internal links point to an up-to-date URL. If you have internal links that connect to a web page that has been deleted, or whose URL has changed, this could be the reason for a URL error.
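
For the noindex/nofollow item above, a crude but quick check is to fetch the page source and look for those directives. The Python sketch below uses a placeholder URL and only does a simple string match, so treat it as a starting point rather than a full audit.

import requests

r = requests.get("https://example.com/some-page", timeout=10)  # placeholder URL

page = r.text.lower()
robots_header = (r.headers.get("X-Robots-Tag") or "").lower()

# Directives can appear in a robots meta tag or in the X-Robots-Tag HTTP header
if "noindex" in page or "nofollow" in page or "noindex" in robots_header:
    print("Page carries a noindex/nofollow directive")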

Concluding Thought

Now that you know the most common status code errors and how to address them, make sure that you take action. The more quickly you address these issues, the faster your SEO will improve.

SEO is by no means a quick process; it requires careful attention to detail and thoughtful linking. To learn more about SEO and how to improve how your website and web pages rank on Google, be sure to download our free eBook, The Big Book of SEO.
