Over my time as an SEO, I’ve noticed that duplicate content is one of the most misunderstood concepts among clients, with some still pursuing the old-style belief that “more pages submitted means I’ll do the Google better”. This is not the case, and in many scenarios our clients come to us completely unaware of their duplicate content issues, or of the danger they pose to their rankings.
Duplicate content can come in many forms, from an automated list of pages generated from a database, to boilerplate content and meta information. The list of root causes is endless, but the impact is always the same: an inevitable drop in rankings once search engines believe your website is attempting to rank pages maliciously with doorway or spam content, as opposed to content tailored for the best user experience possible.
Boilerplate content is content which can be reused in different contexts, usually by injecting certain database fields. A good example of this is location-based pages, in which the copy stays the same but the location changes. Location pages, and other similar variants, are good pages to have if you offer a service in different geographical locations; but in order to stay competitive for these terms, individual optimisation of each page, and strong competitor landing page analysis, should be considered. Just having a page with the keyword present should never be the be-all and end-all.
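To illustrate just how similar templated location pages can end up, here is a minimal sketch (the template copy and locations are entirely made up) that measures the overlap between two such pages using Python’s built-in difflib. Copy this close to identical is a strong candidate for being treated as a near-duplicate.

```python
from difflib import SequenceMatcher

# Hypothetical boilerplate template with an injected location field
TEMPLATE = (
    "Looking for reliable boiler repair in {location}? Our engineers "
    "cover the whole of {location} and offer same-day callouts."
)

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of how similar two pages' copy is."""
    return SequenceMatcher(None, a, b).ratio()

page_leeds = TEMPLATE.format(location="Leeds")
page_york = TEMPLATE.format(location="York")

# The two pages differ only by the injected location
print(round(similarity(page_leeds, page_york), 2))
```

Running a check like this across a set of templated pages is a quick way to spot which ones need individual optimisation rather than shared boilerplate.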
Every e-commerce website should have filtering functionality if its base product range spans many variations, e.g. colours, sizes, additions etc., but it is always important, from a technical SEO perspective, to ensure that these are handled correctly, and that robots can distinguish this content from the source (canonical) page. If left unchecked, this can lead to an abundance of URLs containing parameters, which can dilute the relevance of your most valued pages.
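As a sketch of how filtered URLs can be mapped back to a single canonical version, the snippet below strips a set of known filter parameters using Python’s standard urllib.parse. The parameter names and the example store URL are assumptions for illustration; in practice the list would come from your own site’s faceted navigation.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical filter parameters that generate duplicate listing pages
FILTER_PARAMS = {"colour", "size", "sort", "page"}

def canonicalise(url: str) -> str:
    """Strip known filter parameters so variant URLs map to one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in FILTER_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(canonicalise("https://shop.example.com/shirts?colour=red&size=m&page=2"))
# -> https://shop.example.com/shirts
```

The same logic is what a rel="canonical" tag communicates to search engines: however many filter combinations exist, there is one source page that should accumulate the ranking signals.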
(URL Parameter tool in Google Search Console)
Back before the web made its huge push to SSL (back in the days when you could sit with Firesheep open in a coffee shop and see who your pal was texting), HTTP content was everywhere. With the migration over to HTTPS, and a great many people opening online stores, ensuring all data was encrypted and secured was the next big step, giving users peace of mind against any potential unlawful activity. Unfortunately, once HTTPS hit the mainstream, many websites unintentionally created duplicate content, owing to both an unsecured (http) and a secured (https) version of the site being available.
There are many ways this still occurs, most commonly when both protocol versions of a page resolve with a 200 status instead of the insecure version permanently redirecting to the secure one:
(Ayima’s redirect path Chrome extension)
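A crawl can surface this kind of duplication directly. The sketch below (the crawled URLs are invented for illustration) groups URLs that differ only by protocol or a www. prefix, which is exactly the pattern left behind by an incomplete HTTPS migration.

```python
from collections import defaultdict
from urllib.parse import urlsplit

def duplicate_groups(urls):
    """Group URLs that differ only by protocol or a www. prefix."""
    groups = defaultdict(list)
    for url in urls:
        parts = urlsplit(url)
        host = parts.netloc.removeprefix("www.")  # Python 3.9+
        groups[(host, parts.path)].append(url)
    # Keep only pages where more than one variant was crawled
    return {key: found for key, found in groups.items() if len(found) > 1}

crawled = [
    "http://example.com/about",
    "https://example.com/about",
    "https://www.example.com/about",
    "https://example.com/contact",
]
for key, found in duplicate_groups(crawled).items():
    print(key, found)
```

Any group with more than one entry is a page competing against itself; the fix is a single preferred version with permanent (301) redirects from the rest.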
Truth be told, if there is a duplicate content issue, or you believe one may be materialising, there is no quantifiable “safe” threshold. Every site is different, and depending upon its severity, duplicate content can affect sites in different ways; from devaluing certain pages to, in worst-case scenarios, affecting the overall performance of the site in SERPs.
If you think that you have been impacted by a duplicate content penalty, or if you are worried that your website may be generating this content without your knowledge, there are a myriad of ways this can be fixed, from noindexation and canonicalisation of URLs, to permanent redirects, or protocol site migrations (to fix URL structures).
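As a sketch of auditing the canonicalisation option, the snippet below uses Python’s built-in html.parser to pull the rel="canonical" URL out of a page’s markup (the example page is hypothetical). Pages missing a canonical tag, or pointing it at the wrong URL, are where duplicate variants tend to leak into the index.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of any <link rel="canonical"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

def find_canonical(html: str):
    """Return the declared canonical URL, or None if the page has none."""
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical

page = '<html><head><link rel="canonical" href="https://example.com/shirts"></head></html>'
print(find_canonical(page))  # https://example.com/shirts
```

Run across a crawl, a check like this quickly separates pages that already consolidate their signals from those that need a canonical tag, a noindex, or a permanent redirect.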
At Colewood, we have worked with a veritable cornucopia of clients, all with technical issues manifesting within their index coverage (the pages indexed within search engines). Time and time again we see new challenges to strategise for, and new problems to solve, and we’re always happy to do so.
If you are worried that duplicate content may be affecting your rankings, or you would like professional advice on how to improve your SEO from a well-versed team, get in touch at firstname.lastname@example.org! We’re always happy to welcome new clients on board, and to prepare them to thrive in the organic-search environment.