Was reading an article about indexing .. It turns out there are XML-based standards that let the webmaster of a site help web crawlers index its pages. This helps visitors and search engine bots find pages on the site and ensures that all links are reachable by the crawler. This is especially important if a site serves content dynamically, e.g. through Adobe Flash or JavaScript menus that do not include plain HTML links.
A good place to start reading is of course Wikipedia :-)
http://en.wikipedia.org/wiki/Site_map
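The most common of these standards is the XML Sitemap format. A minimal sketch might look like this (the domain, dates, and values are placeholders, not from any real site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page the crawler should know about -->
  <url>
    <!-- required: the full URL of the page -->
    <loc>http://www.example.com/</loc>
    <!-- optional hints for the crawler -->
    <lastmod>2011-06-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

The file is usually placed at the site root (e.g. http://www.example.com/sitemap.xml) and submitted to the search engines, or referenced from robots.txt.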
Another way to help the crawlers is the robots.txt file,
http://en.wikipedia.org/wiki/Robots.txt
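A small robots.txt sketch (paths and domain are placeholders) which allows all crawlers, keeps them out of one directory, and points them at the sitemap:

```text
# applies to all crawlers
User-agent: *
# don't crawl this directory
Disallow: /private/
# tell crawlers where the sitemap lives
Sitemap: http://www.example.com/sitemap.xml
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access control mechanism.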