Robots.txt Generator
Generate a robots.txt file — control which search engine crawlers can access which parts of your site.
Global rules (all bots)
Specific bot rules
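A generated file typically combines one global group with optional per-bot groups. A minimal sketch (the paths and sitemap URL are illustrative, not defaults of this tool):

```
# Global rules: apply to any crawler that has no group of its own
User-agent: *
Disallow: /admin/

# Specific bot rules: Googlebot obeys this group INSTEAD of the * group
User-agent: Googlebot
Disallow: /private/

Sitemap: https://yourdomain.com/sitemap.xml
```

Note that a crawler picks the most specific matching group and ignores the rest, so in this sketch Googlebot is not bound by the `/admin/` rule unless it is repeated in its own group.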
About robots.txt
The robots.txt file implements the Robots Exclusion Protocol: a plain-text file at the root of your domain that tells search engine crawlers which URLs they may or may not request. It is advisory only: well-behaved crawlers honor it, malicious bots ignore it, and it does not prevent access to disallowed URLs (use server-side authentication for that). The file must live at the root of the host, e.g. https://yourdomain.com/robots.txt; crawlers do not look for it anywhere else. Note that blocking a URL in robots.txt does not remove it from search results; use the noindex meta tag for that.
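The matching behavior described above can be checked with Python's standard-library parser. This is a small sketch with hypothetical rules and bot names, showing that a bot with its own group ignores the global `*` group:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration
rules = """
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# An unnamed bot falls through to the * group, so /admin/ is blocked
print(parser.can_fetch("MyBot", "https://example.com/admin/page"))      # False
print(parser.can_fetch("MyBot", "https://example.com/index.html"))      # True

# Googlebot matches its own group, which only blocks /private/
print(parser.can_fetch("Googlebot", "https://example.com/private/x"))   # False
print(parser.can_fetch("Googlebot", "https://example.com/admin/page"))  # True
```

In production you would call `parser.set_url(".../robots.txt")` and `parser.read()` instead of parsing an inline string.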
FAQ
Does blocking in robots.txt prevent indexing?
No. Blocking a URL in robots.txt prevents Googlebot from crawling it, but Google can still index the URL if other pages link to it; the result simply appears without any page content. To prevent indexing, use a noindex meta tag or an X-Robots-Tag response header, both of which require the page to be crawlable so that Google can see the directive. In other words, to keep a page out of the index entirely, do not block it in robots.txt; allow crawling and serve noindex instead.
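As a sketch, the meta-tag form goes in the page's `<head>` (and, as noted above, the page must not be blocked in robots.txt or crawlers will never see it):

```html
<head>
  <!-- Tells compliant crawlers not to include this page in search results -->
  <meta name="robots" content="noindex">
</head>
```

For non-HTML resources such as PDFs, where a meta tag is impossible, the server can send the equivalent HTTP response header instead: `X-Robots-Tag: noindex`.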