What Is robots.txt? A Beginner’s Guide to Nailing It with Examples
Ah, robots.txt — one teeny tiny file with big implications. This is one technical SEO element you don’t want to get wrong, folks.

In this article, I will explain why every website needs a robots.txt and how to create one (without causing problems for SEO). I’ll answer common FAQs and include examples of how to execute it properly for your website. I’ll also give you a downloadable guide that covers all the details.

Contents:

  • What Is robots.txt?
  • Why Is robots.txt Important?
  • But, Is robots.txt Necessary?
  • What Problems Can Occur with robots.txt?
  • How Does robots.txt Work?
  • Robots.txt Directives
  • Not Crawling vs. Not Indexing
  • Tips for Creating a robots.txt without Errors
  • The robots.txt Tester
  • Robots Exclusion Protocol Guide
  • Closing Thoughts
  • FAQ: How can I optimize my website’s performance with an effective robots.txt file?

What Is robots.txt?

Robots.txt is a text file that website publishers create and save at the root of their website. Its purpose is to tell automated web crawlers, such as search engine bots, which pages not to crawl on the website. This is also known as the robots exclusion protocol.

Robots.txt does not guarantee that excluded URLs won’t be indexed for search. That’s because search engine spiders can still discover that those pages exist through other webpages that link to them. Or, the pages may still be indexed from the past (more on that later).

Robots.txt also does not absolutely guarantee a bot won’t crawl an excluded page, since compliance is voluntary. It would be rare for major search engine bots not to adhere to your directives. But bad web robots, like spambots, malware, and spyware, often do not follow the rules.

Remember, the robots.txt file is publicly accessible. You can just add /robots.txt to the end of a domain URL to see its robots.txt file (like ours here). So do not include any files or folders that may include business-critical information. And do not rely on the robots.txt file to protect private or sensitive data from search engines.

OK, with those caveats out of the way, let’s go on…

Why Is robots.txt Important?

Search engine bots crawl and index webpages by default. With a robots.txt file, you can selectively exclude pages, directories, or the entire site from being crawled.

This can be handy in many different situations. Here are some cases where you’ll want to use your robots.txt:

  • To block certain pages or files that should not be crawled/indexed (such as unimportant or similar pages)
  • To stop crawling certain parts of the website while you’re updating them
  • To tell the search engines the location of your sitemap
  • To tell the search engines to ignore certain files on the site, like videos, audio files, images, PDFs, etc., and not have them show up in the search results
  • To help ensure your server is not overwhelmed with requests*

*Using robots.txt to block off unnecessary crawling is one way to reduce the strain on your server and help bots more efficiently find your good content. Google provides a handy chart here. Also, Bing supports the crawl-delay directive, which can help to prevent too many requests and avoid overwhelming the server.

Of course, there are many applications of robots.txt, and I’ll outline more of them in this article.

But, Is robots.txt Necessary?

Every website should have a robots.txt file, even if it is blank. When search engine bots come to your website, the first thing they look for is a robots.txt file.

If none exists, the spiders are served a 404 (not found) error. Although Google says Googlebot can go on and crawl the site even if there’s no robots.txt file, we believe it is better for the first file a bot requests to load successfully than to return a 404 error.
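A minimal allow-all file (the same pattern shown later in this article) is enough to avoid that 404:

User-agent: *
Disallow: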

What Problems Can Occur with robots.txt?

This simple little file can cause problems for SEO if you’re not careful. Here are a couple of situations to watch out for.

1. Blocking your whole site by accident

This gotcha happens more often than you’d think. Developers can use robots.txt to hide a new or redesigned section of the site while they’re developing it, but then forget to unblock it after launch. If it’s an existing site, this mistake can cause search engine rankings to suddenly tank.

It’s handy to be able to turn off crawling while you’re preparing a new site or site section for launch. Just remember to change that command in your robots.txt when the site goes live.
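As a sketch of that workflow, a site that is still in development might block all crawling:

User-agent: *
Disallow: /

When the site goes live, remember to change that rule so nothing is blocked:

User-agent: *
Disallow: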

2. Excluding pages that are already indexed

Blocking pages in robots.txt that are already indexed causes them to be stuck in Google’s index.

If you exclude pages that are already in the search engine’s index, they’ll stay there. In order to actually remove them from the index, you should set a meta robots “noindex” tag on the pages themselves and let Google crawl and process that. Once the pages are dropped from the index, then block them in robots.txt to prevent Google from requesting them in the future.
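For reference, the meta robots “noindex” tag is a single line placed in the page’s <head> section:

<meta name="robots" content="noindex">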

How Does robots.txt Work?

To create a robots.txt file, you can use a simple application like Notepad or TextEdit. Save it with the filename robots.txt and upload it to the root of your website as www.domain.com/robots.txt; this is where spiders will look for it.

A simple robots.txt file would look something like this:

User-agent: *
Disallow: /directory-name/

Google gives a good explanation of what the different lines in a group mean within the robots.txt file in its help file on creating robots.txt:

Each group consists of multiple rules or directives (instructions), one directive per line.

A group gives the following information:

  • Who the group applies to (the user agent)
  • Which directories or files that agent can access
  • Which directories or files that agent cannot access

I’ll explain more about the different directives in a robots.txt file next.

Robots.txt Directives

Common syntax used within robots.txt includes the following:

User-agent

User-agent refers to the bot to which you are giving the commands (for example, Googlebot or Bingbot). You can have multiple directives for different user agents. But when you use the * character (as shown in the previous section), that is a catch-all that means all user agents. You can see a list of user agents here.
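For illustration (the folder names here are hypothetical), a single robots.txt file can contain separate groups for individual bots plus a catch-all group:

User-agent: googlebot
Disallow: /google-only-blocked/

User-agent: bingbot
Disallow: /bing-only-blocked/

User-agent: *
Disallow: /blocked-for-everyone/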

Disallow

The Disallow rule specifies a file, a folder, or even the entire website to exclude from web robot access. Examples include the following:

Allow robots to spider the entire website:

User-agent: *
Disallow:

Disallow all robots from the entire website:

User-agent: *
Disallow: /

Disallow all robots from “/myfolder/” and all subdirectories of “myfolder”:

User-agent: *
Disallow: /myfolder/

Disallow all robots from accessing any file beginning with “myfile.html”:

User-agent: *
Disallow: /myfile.html

Disallow Googlebot from accessing files and folders beginning with “my”:

User-agent: googlebot
Disallow: /my

Allow

This directive tells a bot that it can access a subfolder or webpage even when its parent directory or webpage is disallowed. Google supports Allow, as do some other major crawlers, but not every bot honors it.

Take the following example: Disallow all robots from the /scripts/ folder except page.php:

User-agent: *
Disallow: /scripts/
Allow: /scripts/page.php

Crawl-delay

This tells bots how many seconds to wait between requests to your site. Websites might use this to preserve server bandwidth. Googlebot does not recognize this command, and Google asks that you change the crawl rate via Search Console instead. Avoid Crawl-delay if possible, or use it with care, as it can significantly slow the timely and effective crawling of a website.
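For crawlers that do support it, such as Bingbot, the directive looks like this (the 10-second value is only an illustration):

User-agent: bingbot
Crawl-delay: 10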

Sitemap

Tell search engine bots where to find your XML sitemap in your robots.txt file. Example:

User-agent: *
Disallow: /directory-name/
Sitemap: https://www.domain.com/sitemap.xml

To learn more about creating XML sitemaps, see this: What Is an XML Sitemap and How do I Make One?

Wildcard Characters

There are two characters that can help direct robots on how to handle specific URL types:

The * character. As mentioned earlier, it can be used in the User-agent line to apply one set of rules to all robots. Within a path, it matches any sequence of characters, which lets you disallow URLs that follow a pattern.

For example, the following rule would disallow Googlebot from accessing any URL containing “page”:

User-agent: googlebot
Disallow: /*page

The $ character. The $ marks the end of a URL, so a rule applies only to URLs that end with the pattern before it. For example, you might want to block the crawling of all PDFs on the website:

User-agent: *
Disallow: /*.pdf$

Note that you can combine the $ and * wildcard characters in both Allow and Disallow directives.

For example, Disallow all asp files:

User-agent: *
Disallow: /*asp$

  • Because the $ marks the end of the URL, this rule only excludes URLs that end in “asp”; it will not catch URLs with query strings or trailing folders
  • /pretty-wasp – excluded, because the * wildcard matches the characters before “asp” and the URL ends in “asp”
  • /login.asp – excluded, for the same reason
  • /login.asp?forgotten-password=1 – not excluded, because the URL ends with a query string (?forgotten-password=1) rather than “asp”

Not Crawling vs. Not Indexing

If you do not want Google to index a page, there are remedies other than the robots.txt file. As Google points out here:

Which method should I use to block crawlers?

  • robots.txt: Use it if crawling of your content is causing issues on your server. For example, you may want to disallow crawling of infinite calendar scripts. You should not use the robots.txt to block private content (use server-side authentication instead), or handle canonicalization. To make sure that a URL is not indexed, use the robots meta tag or X-Robots-Tag HTTP header instead.
  • robots meta tag: Use it if you need to control how an individual HTML page is shown in search results (or to make sure that it’s not shown).
  • X-Robots-Tag HTTP header: Use it if you need to control how non-HTML content is shown in search results (or to make sure that it’s not shown).

And here is more guidance from Google:

Blocking Google from crawling a page is likely to remove the page from Google’s index.
However, robots.txt Disallow does not guarantee that a page will not appear in results: Google may still decide, based on external information such as incoming links, that it is relevant. If you wish to explicitly block a page from being indexed, you should instead use the noindex robots meta tag or X-Robots-Tag HTTP header. In this case, you should not disallow the page in robots.txt, because the page must be crawled in order for the tag to be seen and obeyed.
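For non-HTML files such as PDFs, the X-Robots-Tag approach Google describes is delivered as an HTTP response header, roughly like this (how you configure it depends on your server):

X-Robots-Tag: noindex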

Tips for Creating a robots.txt without Errors

Here are some tips to keep in mind as you create your robots.txt file:

  • Commands are case-sensitive. You need a capital “D” in Disallow, for example.
  • Always include a space after the colon in the command.
  • When excluding an entire directory, put a forward slash before and after the directory name, like so: /directory-name/
  • All files not specifically excluded will be included for bots to crawl.

The robots.txt Tester

Always test your robots.txt file. It is more common than you might think for website publishers to get this wrong, which can destroy your SEO strategy (such as disallowing the crawling of important pages or the entire website).

Use Google’s robots.txt Tester tool. You can find information about that here.

Robots Exclusion Protocol Guide

If you need a deeper dive than this article, download our Robots Exclusion Protocol Guide. It’s a free PDF that you can save and print for reference to give you lots of specifics on how to build your robots.txt.

Closing Thoughts

Robots.txt is a seemingly simple file, but it allows website publishers to give complex directives on how they want bots to crawl a website. Getting it right is critical, as a mistake here could obliterate your SEO program.

Because there are so many nuances on how to use robots.txt, be sure to read Google’s introduction to robots.txt.

Do you have indexing problems or other issues that need technical SEO expertise? If you’d like a free consultation and services quote, contact us today.

FAQ: How can I optimize my website’s performance with an effective robots.txt file?

Ensuring your website’s optimal performance is paramount to success. A key aspect often overlooked is the strategic use of a robots.txt file. This unassuming text document wields the power to significantly impact your site’s search engine optimization (SEO) and overall performance.

At its core, a robots.txt file is a gatekeeper for search engine bots, guiding them on which parts of your website to crawl and index. By skillfully crafting this file, you can strategically control how search engines interact with your content. This optimization technique is vital for preventing unnecessary strain on your server, ensuring that valuable resources are allocated efficiently.

One essential application of robots.txt optimization is the ability to exclude specific pages or directories from being crawled. This is particularly useful for hiding unimportant or redundant pages, preventing search engines from wasting resources on irrelevant content. For instance, you can prevent video or audio files from being crawled, preserving your server’s bandwidth for more critical components.

Updating your website can be delicate, often requiring temporary withdrawal of specific pages. By utilizing robots.txt optimization, you can gracefully handle this situation without affecting SEO rankings. Temporarily blocking crawling on pages undergoing updates ensures that search engines won’t index incomplete or inconsistent content, maintaining your site’s credibility.

Moreover, robots.txt optimization empowers you to guide search engines toward your sitemap’s location. This simple step helps search engine bots navigate your site’s structure efficiently, ensuring no valuable content is overlooked. Strategically placing your sitemap in robots.txt enhances the discoverability of your most important pages.

While the benefits of robots.txt optimization are substantial, it’s crucial to proceed cautiously. Improper configuration can inadvertently block important pages, leading to declining search engine rankings. Therefore, seeking the guidance of SEO experts or referring to reputable resources, such as Google’s guidelines, is highly recommended before implementing changes.

A well-crafted robots.txt file is a powerful tool in your SEO arsenal. By optimizing this seemingly unassuming element, you can exert control over how search engines interact with your website, ultimately enhancing performance, resource allocation, and overall user experience.

Step-by-Step Procedure for robots.txt Optimization:

  1. Understand the role of robots.txt in SEO and website performance.
  2. Identify any pages or directories you would like to exclude from crawling.
  3. Create a robots.txt file using any plain-text editor like Notepad or TextEdit.
  4. Specify user-agent directives to target search engine bots (e.g., User-agent: Googlebot).
  5. Utilize the Disallow directive to block access to pages or directories you want to exclude (e.g., Disallow: /videos/).
  6. Implement the Allow directive for specific pages within blocked directories (e.g., Allow: /videos/index.html).
  7. Use the Crawl-delay directive to control the rate at which bots crawl your site, if necessary.
  8. Include the Sitemap directive to guide search engines to your XML sitemap (e.g., Sitemap: https://www.domain.com/sitemap.xml).
  9. Test your robots.txt file using Google’s robots.txt Tester tool to identify any issues or errors.
  10. Upload the robots.txt file to the root directory of your website via FTP or your content management system (CMS).
  11. Monitor your website’s performance and search engine rankings after implementing robots.txt optimization.
  12. Regularly update and refine your robots.txt file as your website’s structure and content evolve.
  13. Consult SEO experts or reputable resources for guidance on best practices and advanced optimization techniques.
  14. Review and analyze your website’s crawl and index statistics to ensure effective robots.txt optimization.
  15. Adjust directives as needed based on changes in your website’s content and goals.
  16. Avoid blocking critical pages that are essential for search engine visibility and user experience.
  17. Continuously stay informed about updates and changes to search engine algorithms that may impact robots.txt optimization.
  18. Prioritize user experience and ensure that any exclusions align with your website’s content strategy.
  19. Regularly audit and maintain your robots.txt file to ensure ongoing optimization and performance.
  20. Keep abreast of emerging trends and best practices in SEO and robots.txt optimization for sustained success.
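Putting steps 4 through 8 together, a file built from the hypothetical values above (the /videos/ paths, the 10-second delay, and the domain are placeholders for your own) might look like this:

User-agent: Googlebot
Disallow: /videos/
Allow: /videos/index.html

# Google ignores Crawl-delay; adjust Googlebot's crawl rate in Search Console instead
User-agent: bingbot
Crawl-delay: 10

User-agent: *
Disallow: /videos/

Sitemap: https://www.domain.com/sitemap.xml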
