The Myth of W3C Compliance?
The past few years have seen a huge increase in the number of search engine optimisers preaching about the vital importance of W3C Compliance as part of any effective web promotion effort. But is compliant code really the 'Magic SEO Potion' so many promoters make it out to be?
For those of you not familiar with the term: a W3C compliant web site is one which adheres to the coding standards laid down by the World Wide Web Consortium (W3C), an organisation comprising over 400 members, including all the major search engines and global corporations such as AT&T, HP and Toshiba. Headed by Sir Tim Berners-Lee, the inventor of the World Wide Web, the W3C has been working since its inception in 1994 to provide a set of standards designed to keep the web's continuing evolution on a single, coherent track.
Whilst the W3C has been a fact of life on the web ever since, general industry awareness of the benchmarks laid down by the Consortium has taken some time to filter through to all quarters. Indeed, it is only within the past 24 to 36 months that the term 'W3C Compliance' has emerged from general obscurity to become a major buzzword in the web design and SEO industries.
Although I have personally been a staunch supporter of the Consortium's standards for a long time, I cannot help but feel that their importance has been somewhat overplayed by a certain faction within the SEO sector, which praises code compliance as a 'cure-all' for poor search engine performance.
Is standards compliance really the universal panacea it is commonly claimed to be these days?
Let's take a quick look at some of the arguments most commonly used by SEOs and web designers:
- Browsers such as Firefox, Opera and Lynx will not display your pages properly.
Browser compatibility is possibly the most frequently cited reason for standards compliance, with Firefox being the usual target of these claims. Speaking from personal experience, Firefox will usually display all but the most broken code with reasonable success. In fact, this browser's main weakness seems to lie less in its handling of broken code than in its occasional failure to interpret the exact onscreen position of layers (div tags) correctly, often causing text overlap, even when the positioning is expressed correctly.
What about Lynx? Interestingly, whilst it is somewhat more fragile than Firefox, most of the problems encountered by this text-only browser seem to stem from improper content semantics (paragraphs out of sequence) rather than from poor code structure.
- Search engines will have problems indexing your site.
Some SEOs actively claim that search engine spiders have trouble indexing non-compliant web pages. Whilst, again speaking from personal experience, there is an element of truth to these claims, it is not the sheer number of errors which causes a search engine spider to have a 'nervous breakdown', but the type of error encountered. So long as the W3C Code Validator is able to parse* a page's source code from top to bottom, a search engine will likely be able to index it and classify its content. On the whole, indexing problems arise from code errors which prevent a page from being parsed altogether, rather than from non-critical errors which allow the process to continue.
* To parse is to process a file in order to extract the desired information. Linguistic parsing may recognise words and phrases or even speech patterns in textual content.
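The distinction between critical and non-critical errors can be illustrated with a short sketch. Python's standard `html.parser` module is used here purely for illustration; real spiders use their own (similarly tolerant) parsers, and the sample markup is hypothetical:

```python
# Sketch: extracting indexable text from non-compliant HTML, much as a
# tolerant search engine spider would. Non-critical errors do not stop
# the parse, so the text content remains fully recoverable.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the plain-text content of a page, in source order."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

# Deliberately broken markup: unclosed <p> tags, uppercase tag names,
# no </body> or </html>. All non-critical - parsing continues to the end.
broken = "<HTML><BODY><P>First paragraph<P>Second paragraph"

extractor = TextExtractor()
extractor.feed(broken)
print(extractor.chunks)  # ['First paragraph', 'Second paragraph']
```

Despite four distinct validation errors, every word of content survives extraction, which is precisely why pages like this still get indexed.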
- Disabled internet users will not be able to use your site.
The inevitable, but somewhat weak, counter-argument to this point is that only an infinitesimally small percentage of internet users are visually or aurally impaired. However, it is a fact that tools such as Lynx and the JAWS screen reader (no, not the shark) view a web page's code in much the same way as a search engine spider. From this perspective, we once again return to the difference between critical and non-critical W3C compliance errors: as long as whatever tool, browser or spider is used to extract text content from a page's code is able to complete its allotted task, the user is likely to be able to view the page in a satisfactory manner.
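The "paragraphs out of sequence" problem mentioned earlier is worth a quick sketch of its own. CSS positioning can rearrange blocks visually, but a linear reader (Lynx, a screen reader, a spider) only ever sees the underlying source order. The markup below is a hypothetical example, again using Python's `html.parser` for illustration:

```python
# Sketch: why source order matters to linear readers. The two divs
# below are visually correct on screen (Step one sits above Step two,
# thanks to the top: offsets), but they appear reversed in the source.
from html.parser import HTMLParser

class LinearReader(HTMLParser):
    """Reads a page the way a text-only browser does: top to bottom."""
    def __init__(self):
        super().__init__()
        self.order = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.order.append(text)

page = (
    '<div style="position:absolute; top:150px">Step two: switch it on</div>'
    '<div style="position:absolute; top:50px">Step one: plug it in</div>'
)

reader = LinearReader()
reader.feed(page)
print(reader.order)  # ['Step two: switch it on', 'Step one: plug it in']
```

A sighted visitor sees the steps in the right order; a Lynx or screen-reader user hears them backwards. Note that this page could be perfectly W3C compliant and still fail its users, which is exactly why compliance alone is no guarantee of accessibility.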
Interestingly, one of my fellow designer/SEOs over in Japan has just run an experiment entitled "W3C Validation; Who cares?", testing the overall importance of W3C compliance to long-term web promotion efforts. Whilst the results of this experiment, conducted on what is claimed to be the world's most non-compliant web page, do initially indicate that compliance does not make much of a difference to a search engine's ability to index and classify a page, I rather suspect that further research will be needed to establish its long-term effects.
At the time of writing, however, the page ranks well with Google for the following two non-specific search terms: 'Does Google care about validation' and 'Google care validation' - not bad for a page which is supposed to be utterly and completely un-indexable.
What then is the answer to the W3C compliance conundrum?
In conclusion, I would say that ignoring the World Wide Web Consortium's standards at this stage may well have negative consequences in the long term, as the internet's continuing evolution is likely to place ever greater emphasis on good coding practices. Having said this, I would also say that the current value of W3C compliance has been overplayed by some professionals in the web design and SEO industries.