If you wish to prevent certain pages or directories from showing up in our sitemap and to keep search engines from indexing those pages altogether, you can block our crawler and other search engine crawlers by adding exclusion rules to your public_html/robots.txt file.
For example, to block all crawlers from indexing and storing http://yourdomain.com/private, add the following lines to your robots.txt file:
User-agent: *
Disallow: /private
The User-agent line specifies which crawlers the rules apply to; * matches all crawlers (Attracta, Google, Bing, Yahoo, etc.). Each Disallow line names one folder or file on your site to block, so use a separate Disallow line for each path. For example, to block both /private and /staff/bios.html:
User-agent: *
Disallow: /private
Disallow: /staff/bios.html
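If you want to check how crawlers will interpret your rules before uploading the file, here is a minimal sketch using Python's standard-library urllib.robotparser. The rules string mirrors the example above; the URLs are placeholders for your own domain.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content matching the example above.
rules = """\
User-agent: *
Disallow: /private
Disallow: /staff/bios.html
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# can_fetch(user_agent, url) returns True if that crawler may fetch the URL.
print(parser.can_fetch("*", "http://yourdomain.com/private"))          # False: blocked
print(parser.can_fetch("*", "http://yourdomain.com/staff/bios.html"))  # False: blocked
print(parser.can_fetch("*", "http://yourdomain.com/index.html"))       # True: allowed
```

This only simulates how a well-behaved crawler reads the file; the live rules take effect once the file is saved at public_html/robots.txt.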
More detailed information about the robots.txt file is available at http://www.robotstxt.org/robotstxt.html