robots.txt is a plain-text file placed in the root folder of your website that tells search engine crawlers which parts of the site they may visit. Search engines such as Google use crawlers (also called robots) that review the content on your website. There may be parts of your site that you do not want to appear in search results, such as an admin page; you can list those pages in the file so that compliant crawlers skip them. robots.txt files follow the Robots Exclusion Protocol. This website generates the file for you from a list of pages to be excluded.
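As a minimal sketch, a generated file that excludes a single admin page might look like the following (the /admin/ path is a placeholder for whatever page you want excluded):

    # Rules below apply to all crawlers
    User-agent: *
    # Ask crawlers not to fetch the admin section
    Disallow: /admin/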
Search engines use robots (so-called user agents) to crawl your pages. The robots.txt file is a text file that defines which parts of a domain may be crawled by a robot. In addition, the robots.txt file can include a link to the XML sitemap.
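A slightly fuller sketch combining per-user-agent rules with a sitemap reference; the "BadBot" agent name and the sitemap URL are placeholders, not real values:

    # Block one specific crawler from the entire site
    User-agent: BadBot
    Disallow: /

    # All other crawlers may visit everything (empty Disallow = no restriction)
    User-agent: *
    Disallow:

    # Point crawlers at the XML sitemap
    Sitemap: https://www.example.com/sitemap.xml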