Every Search Engine Robot Needs Validation


Your website is ready. Your content is in place and your pages are optimized. What is the last thing you should do before uploading your hard work? Validate. It is surprising how many people do not validate the source code of their web pages before putting them online.
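To see what validation catches, here is a minimal sketch of one kind of check a validator performs: making sure every opened tag is eventually closed. It uses only Python's standard `html.parser` module and is an illustration of the idea, not a substitute for a real validator such as the W3C's service. The class and function names are my own invention.

```python
from html.parser import HTMLParser

# Void elements never take a closing tag, so they are skipped.
VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
             "input", "link", "meta", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Rough well-formedness check: every opened tag must be closed in order."""
    def __init__(self):
        super().__init__()
        self.stack = []    # currently open tags
        self.errors = []   # mismatches found so far

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

def check(html):
    """Return a list of problems; an empty list means the tags balance."""
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.close()
    return checker.errors + [f"unclosed <{t}>" for t in checker.stack]

print(check("<html><body><p>ok</p></body></html>"))  # []
print(check("<div><p>oops</div>"))                   # mismatch reported
```

A real validator checks far more (attributes, nesting rules, doctype conformance), but even this crude pass flags the kind of broken markup that trips up a robot.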

Search engine robots are automated programs that traverse the web, indexing page content and following links. Robots are basic, and robots are definitely not smart. Robots have the functionality of early generation browsers: they don't understand frames; they can't do client-side image maps; many types of dynamic pages are beyond them; they know nothing of JavaScript. Robots can't really interact with your pages: they can't click on buttons, and they can't enter passwords. In fact, they can only do the simplest of things on your website: look at text and follow links. Your human visitors need clear, easy-to-understand content and navigation on your pages; search engine robots need that same kind of clarity.
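Since robots only look at text and follow links, it can be instructive to approximate their view of a page. The sketch below, again using only Python's standard `html.parser`, keeps visible text and `href` targets while discarding scripts and styles; real crawlers are far more sophisticated, and the sample page is invented for illustration.

```python
from html.parser import HTMLParser

class RobotView(HTMLParser):
    """Approximates what a simple crawler keeps: visible text and links."""
    def __init__(self):
        super().__init__()
        self.text = []
        self.links = []
        self.skip = 0  # depth inside <script>/<style>, whose content robots ignore

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.text.append(data.strip())

page = """<html><body>
<h1>Widgets</h1>
<p>Hand-made widgets, shipped worldwide.</p>
<a href="/catalog.html">Browse the catalog</a>
<script>showPopup();</script>
</body></html>"""

viewer = RobotView()
viewer.feed(page)
print(viewer.text)   # the headline, paragraph, and link text survive
print(viewer.links)  # ['/catalog.html']
```

Anything that only exists behind JavaScript, a form, or an image map never makes it into `text` or `links`, which is exactly the blindness described above.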

Looking at what your visitors and the robots need, you can easily see how making your website "search engine friendly" also makes it visitor friendly.

For example, one project I worked on had many validation problems. The source code generated so many errors that the search engine robots could not index the page, including a section of text containing keyword phrases chosen specifically for that page. Ironically, human visitors had problems with the page as well. Because humans are smart, they could work around the problems, but the robots could not. Fixing the source code corrected the situation for both human and automated visitors.

As I said before, what works for your website visitors works for the search engine robots. Usability is the key for both your human visitors and automated robots. Why not provide the best chance for optimum viewing by both?
