Duplicate Content Can Mean Lost Results


Google recently posted a useful video about duplicate content and multiple-site issues, a talk which raised some very interesting points for webmasters to consider.

We, and other SEOs around the globe (not just London PPC agencies!), regularly advise that duplicate content needs to be managed carefully, especially on bigger sites, as it can affect how the search engines rank your pages. Duplicate content across different domains is particularly troublesome, and Google works hard to weed out the more spammy sites that copy content from other producers – something it generally does very well.


But what of other types of duplicate content?


Some sites have two pages with the same content because they offer a specialised “text only” or “print” version of their articles, as Wikipedia does. Others have user-friendly navigation that leads to the same content through different addresses – on a shopping site, for example, one visitor might browse shirts and then football shirts, while another goes through football and then clicks on shirts, producing two different URLs (so each user can retrace their steps) for the same content. Another example is duplicate content across multiple domains – .com, .co.uk, .de, .fr and so on.


All these examples need to be managed very carefully, as Googlebot will not necessarily be able to differentiate between them on its own, and you may very well end up with two pages carrying the same content but each with only half the link equity. Remember link consolidation? It’s a similar theory – don’t dilute PageRank across too many pages.


The old way of approaching such an issue is the 301 (permanent) redirect, which takes the user from the URL they initially clicked on through to the key central page you want to receive the link equity instead. This works fine for certain cases, such as a new homepage or a similar relocation of content, but it doesn’t apply to the examples above – in those cases you don’t want the user redirected, because that defeats the point of having the separate URLs in the first place.
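As a rough sketch of the old approach, a 301 redirect on an Apache server might be set up in an .htaccess file something like this (the file paths here are purely hypothetical):

```apache
# Permanently (301) redirect the old homepage to its replacement,
# so visitors and link equity are both sent to the new URL.
Redirect 301 /old-home.html http://www.example.com/new-home.html
```

Note how this sends the visitor away from the original URL entirely – exactly the behaviour you don’t want for print versions or alternative navigation paths.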

So what can you do? Well, Google has established support for a new HTML link element, commonly called the “rel=canonical” tag, which can help resolve these problems!
The “canonical” page is the primary page you want the link equity to be passed on to – though it must be on the same domain. To use it, place the following tag within the page’s head:

<link rel="canonical" href="YOUR CANONICAL PAGE">


This will enable those pages to remain separate and independent, but the chosen page will take all the credit in Google – ideal for giving that page a boost in the rankings!
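To make this concrete, here is a minimal sketch of what the head of a print-version page might look like – the URLs and page names are hypothetical:

```html
<!-- Hypothetical print version at example.com/articles/blue-widgets/print -->
<!-- The canonical link tells Google to credit the main article page instead. -->
<html>
<head>
  <title>Blue Widgets (print version)</title>
  <link rel="canonical" href="http://www.example.com/articles/blue-widgets">
</head>
<body>
  <!-- same article content as the main page -->
</body>
</html>
```

The visitor still sees the print version at its own URL, but the link equity is consolidated onto the main article page.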


Prioritising pages is one thing, but when you have several domains with the same content – for example a US and a UK version of the same article – you may need the two pages to be prioritised equally. Unfortunately, Google will not differentiate between the two: it will choose what it deems to be the more important page, index that, and filter out the duplicates from its results. It is important to be aware of this – it may be that it suits your site and its geographical targeting, but all the same you should keep track of how the multiple domains are progressing.


Wikipedia – The International Encyclopedia!

Different languages are brilliant, as translated pages don’t raise duplicate content issues – but with regionalisation within one language (a UK and a US version, say) you have to be very careful.



So, in summary, Google’s interpretation of duplicate content is a bit of a minefield for large multi-domain sites, but if you keep to the simple rules above and maintain a close eye on how your pages perform in Google, you shouldn’t have any major issues. The easiest solution is not to get too lazy with your writing!