
Google’s Martin Splitt Shares Some Common JavaScript Errors That Can Harm SEO

In a live video session with Search Engine Land, Google's Search Developer Advocate Martin Splitt was asked about common JavaScript errors that both SEO professionals and developers should watch out for. In this blog post, we cover Martin's take on the question.

Don’t Blame JavaScript For Everything

The first thing Martin highlighted was that people often blame JavaScript when the problem lies elsewhere. If something goes wrong with your SEO, JavaScript is not necessarily the cause. The common assumption that JavaScript isn't search engine friendly is simply incorrect.

Common JavaScript Errors That Should Be Avoided

Martin mentioned some common JavaScript errors that SEOs and developers should watch out for while optimizing their websites:

- Blocking access to JavaScript files
- Poor user experience
- Incorrect implementation of lazy loading or infinite scrolling
- Relying too much on JavaScript

Let's understand them in detail.

Blocking Access To JavaScript Files

Martin mentioned that people often block Googlebot from accessing their external JavaScript files via robots.txt. If the crawler cannot fetch your JavaScript, it cannot render or crawl your pages efficiently, and people then attribute these crawling issues to the use of JavaScript itself, which is not the case. Let Googlebot and other user-agents access your external JavaScript files to ensure proper crawling and rendering.
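As an illustration (the /assets/js/ path is hypothetical), a robots.txt rule like the first one below blocks Googlebot from fetching scripts, while an explicit Allow keeps them crawlable:

```txt
# Problematic: blocks Googlebot from fetching any script under
# /assets/js/, so it may fail to render pages that depend on them.
User-agent: Googlebot
Disallow: /assets/js/

# Better: drop the rule above, or explicitly allow the script files.
# Google follows the most specific matching rule, so this longer
# Allow pattern overrides the broader Disallow for .js files.
User-agent: Googlebot
Allow: /assets/js/*.js
```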

Poor User Experience

“People break websites for users rather than for search engines,” said Martin. These websites are indexable and even get ranked; however, they deliver a poor user experience. For example, a website might load an enormous 5 MB JavaScript file just to display a simple, scrollable list of products, which slows the page down for every visitor.

Incorrect Implementation Of Lazy Loading Or Infinite Scrolling

Martin says that lazy loading and infinite scrolling are great techniques for displaying listings on a page, but only if implemented correctly. Improper implementation can leave Googlebot unable to crawl the page content properly, which is not desirable. He also added that Google is working on better guidelines for implementing infinite scrolling.
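As a rough sketch of the kind of approach Google's lazy-loading guidance describes, the snippet below uses an IntersectionObserver to load deferred images as they near the viewport, with a fallback that loads everything when the API is unavailable (the class name and data-src attribute are illustrative):

```js
// Minimal lazy-loading sketch: each image carries its real URL in a
// data-src attribute, e.g. <img class="lazy" data-src="/img/p1.jpg">.
const lazyImages = document.querySelectorAll('img.lazy');

if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.src; // swap in the real image
        obs.unobserve(img);        // each image only loads once
      }
    });
  });
  lazyImages.forEach((img) => observer.observe(img));
} else {
  // No IntersectionObserver (older browsers): load everything up
  // front so no content is hidden from users or crawlers.
  lazyImages.forEach((img) => { img.src = img.dataset.src; });
}
```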

Relying Too Much On JavaScript

Martin also added that people use JavaScript to implement functionality that can be achieved without it. Although he added the disclaimer that this isn't something you inherently need to avoid, he feels it is pointless.

For example, if you want to add links to your website, you can simply use a standard HTML anchor tag rather than JavaScript. Similarly, dropdowns, side navigation panels, and transitions can easily be implemented with CSS or native HTML elements, so why use JavaScript for them?
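To illustrate (the URLs are placeholders), a plain anchor tag is crawlable on its own, a script-driven pseudo-link is not, and a native details element gives you a dropdown without any JavaScript:

```html
<!-- Crawlable: a standard anchor with a resolvable href -->
<a href="/products/blue-widget">Blue Widget</a>

<!-- Not reliably crawlable: no href, navigation happens only in JS -->
<span onclick="location.href='/products/blue-widget'">Blue Widget</span>

<!-- A dropdown with zero JavaScript, using a native HTML element -->
<details>
  <summary>More categories</summary>
  <ul>
    <li><a href="/categories/widgets">Widgets</a></li>
    <li><a href="/categories/gadgets">Gadgets</a></li>
  </ul>
</details>
```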

The main takeaway here is to minimize the use of JavaScript, which also helps improve page load time.

We all know JavaScript can add numerous features to a website; however, it needs to be used correctly so it doesn't hamper the user experience. All in all, it is advisable to use simpler methods rather than complex JavaScript to introduce new functionality where possible. Where JavaScript is used, ensure it is accessible to crawlers and implemented without errors to avoid SEO problems down the line.
