The robots.txt file tells search engine crawlers which parts of a site they may crawl and which they should skip, giving site owners control over crawler behaviour.
A well-formed set of directives (User-agent, Allow, Disallow, Sitemap) simplifies crawl and indexing management and helps keep administrative pages from being crawled and surfaced in search results.
This tool generates a properly structured robots.txt tailored to common scenarios, which is useful when launching a new site or restructuring existing content.
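For illustration, a minimal robots.txt covering a typical case might look like the sketch below; the paths and sitemap URL are placeholders, not output of this tool:

```
# Applies to all crawlers
User-agent: *
# Keep private and administrative areas out of the crawl
Disallow: /admin/
Disallow: /cgi-bin/
# Explicitly allow a public subdirectory under a blocked path
Allow: /admin/help/
# Point crawlers to the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Note that Disallow only discourages crawling; a page blocked this way can still appear in search results if other sites link to it, so sensitive pages should also be protected by authentication or a noindex directive.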