SEO news by Clarity

Watch out, site-owners! It’s time to check your robots.txt file! Starting from September 1st, Google will stop supporting the noindex directive in the robots.txt file.

Let us start with a small disclaimer: all this is a bit technical, so if you find the explanation below confusing, start by checking our previous articles about crawlability, where we explain in detail everything you need to know about the robots.txt file.

Coming back to the news: if you have been following our articles for some time, you might already know that robots.txt is a file where you can tell Google’s bots which parts of your website they should not enter or crawl. This will not change, and you can keep using the file for that from September onwards.

So, what is changing? Some site-owners have been using the robots.txt file in a way it was never meant to be used: they were adding noindex directives to it. The noindex directive is a way to tell Google that you are OK with it crawling a page, you just don’t want the page added to the index. This directive was never supposed to go in the robots.txt file. So, starting from September 1st, Google will ignore any noindex directive it finds in a robots.txt file.
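As an illustration (the paths below are made up), a robots.txt file misused in this way might have looked something like this:

```
# Standard directives — these keep working after September 1st:
User-agent: *
Disallow: /admin/

# Unsupported after September 1st — Google will ignore lines like this:
Noindex: /private-page/
```

Only the crawl-control directives (like Disallow) are what robots.txt is actually for; the Noindex line is the part Google is dropping support for.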

What do you need to do?
Most likely nothing! If you have been following Google’s SEO best practices, you will not be affected by this change. But just in case, check your robots.txt file.
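If you want to automate that check, here is a minimal sketch in Python (the helper name and the sample file contents are made up for illustration) that scans the text of a robots.txt file for noindex directives:

```python
def find_noindex_lines(robots_txt: str) -> list[str]:
    """Return the lines of a robots.txt body that use the noindex directive."""
    return [
        line.strip()
        for line in robots_txt.splitlines()
        if line.strip().lower().startswith("noindex")
    ]

# Hypothetical robots.txt contents:
sample = """User-agent: *
Disallow: /admin/
Noindex: /private/
"""
print(find_noindex_lines(sample))  # ['Noindex: /private/']
```

If the function returns an empty list, your file does not use the directive and this change does not affect you.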

I found the “noindex” directive in my robots.txt file, what should I do?
First of all, delete it from there. Then add a noindex meta tag to the <head> of the page you don’t want indexed (you will have to edit the HTML, so if you are not sure how to do it, it’s probably best to ask for some professional help).
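For reference, the meta tag looks like this (the page title is just a placeholder):

```
<!-- This goes inside the <head> element of the page: -->
<head>
  <meta name="robots" content="noindex">
  <title>Page you want kept out of Google's index</title>
</head>
```

With this tag in place, Google can still crawl the page but will leave it out of its index, which is exactly what the robots.txt noindex line was being (mis)used for.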
Another way to keep a page hidden from the public and out of Google’s index is to password-protect it. Google’s bots will discover the page, see that a password is needed, and since robots still can’t guess passwords, Google will keep the page out of its index.
If the page you are trying to hide is old and you don’t need it anymore, let it return a 404 response. Google will then know that the page doesn’t exist and will, over time, drop it from its index.

In any case, don’t panic, and keep optimising your website!

For more information, bespoke strategies and efficient digital marketing solutions, just contact the Clarity girls at info@yourdigitalclarity.com or visit our website at www.yourdigitalclarity.com.