Meta robots tool.
The robots meta tag controls page-level indexing behavior: whether the content is indexed, whether links are followed, whether images are indexed, and more.
Some examples of the HTML syntax for the meta robots tag:
- <meta name="robots" content="nofollow">
- <meta name="robots" content="noindex, nofollow">
- <meta name="googlebot" content="none" />
Use the meta robots tool to check:
- Does the page have a robots meta tag?
- Are there multiple robots meta tags on the page?
- Which indexing and serving directives are specified in the robots meta tag?
For each page, the tool reports either "No Robots tag found" or "Yes, a Robots meta tag is found", along with the directives it detected.
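The checks described above can be sketched in a few lines of Python using only the standard library. This is an illustration of what a meta robots check involves, not the tool's actual implementation; the class and function names are my own.

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content values of robots-related meta tags in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # Match the generic "robots" name and bot-specific names like "googlebot".
        if attrs.get("name", "").lower() in ("robots", "googlebot"):
            self.directives.append(attrs.get("content", ""))


def check_meta_robots(html):
    """Return a list of robots directives found in the page's markup."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives


page = '<html><head><meta name="robots" content="noindex, nofollow"></head><body></body></html>'
print(check_meta_robots(page))  # ['noindex, nofollow']
```

An empty list means no robots meta tag was found; a list with more than one entry corresponds to the "multiple robots meta tags" case above.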
This appears to only detect whether the page has a meta tag in the <head> section of the page which blocks robots.
This is useful if that is all you want, but it doesn't check for other means of blocking, such as:
- using an X-Robots-Tag response header
- using a robots.txt file
There may be other ways of blocking robots too. Please be sure you’ve properly researched robot blocking techniques before using this tool, as it may not cover all of your needs!
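To illustrate the robots.txt case mentioned above, Python's standard library can parse robots.txt rules directly. This is a minimal offline sketch with a made-up rule set, not output from the tool; checking the X-Robots-Tag response header would additionally require inspecting the HTTP response itself.

```python
import urllib.robotparser

# Parse an example robots.txt (supplied inline here so no network is needed).
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A URL under /private/ is blocked for all user agents; other paths are allowed.
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/public/page"))   # True
```

A page can be allowed by robots.txt yet still carry a noindex meta tag (or vice versa), which is why checking only one mechanism can give an incomplete picture.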
Hi Harvey,
This tool isn’t designed to check all robots index directives.
It checks the robots meta tag settings for a specific page; that's why the tool is called the "Bulk meta robots checker" ;-)
Update: 16 December 2016
- Solved a design issue caused by the recent redesign (feedback from the tool wasn't presented the right way).