Saturday, November 24, 2012

Prevent duplicate content by blocking indexing of archive pages

Search engines don't like archive pages for a simple reason: they duplicate content. If both archive pages and individual posts are indexed, the same content shows up in two indexed "entries" (the archive and the actual post), and search engines penalize duplicate content.

You can prevent duplicate content by telling search engines not to index archive pages. This can be achieved by adding a "noindex" robots meta tag to the archive pages.

Go to Dashboard -> Design -> Edit HTML.

Find the <head> tag and add the following code below it:

<b:if cond='data:blog.pageType == &quot;archive&quot;'>
<meta content='noindex,noarchive' name='robots'/>
</b:if>
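Archive pages are not the only source of duplicate content on a Blogger blog: label (search) result pages repeat post content too. If you want to block those as well, a possible extension is shown below. Note that the `data:blog.searchLabel` check is an assumption about the data tags your particular template exposes, so verify it against your own template before relying on it:

```xml
<b:if cond='data:blog.pageType == &quot;archive&quot;'>
<meta content='noindex,noarchive' name='robots'/>
</b:if>
<!-- Assumption: data:blog.searchLabel is set only on label result pages -->
<b:if cond='data:blog.searchLabel'>
<meta content='noindex,noarchive' name='robots'/>
</b:if>
```

Keep the tag out of the homepage and item (single post) pages, which is exactly what these conditions do: only archive and label pages receive the noindex directive.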

After a month, your archive pages will disappear from the search indexes. You will be happy, and so will your SEO conscience.