
Why Technical SEO is Important

When you have a website, there are several areas that need attention to make sure its online visibility is as high as possible. After all, what’s the point in having a website that users never see? It’s important for business owners to understand that there’s more to a website than what is shown to users. There’s a backbone that holds everything together, and no website can stand up straight if it doesn’t have a strong one. Alongside design and content, there’s one very specific element that can’t be forgotten about. And that’s technical SEO.

Technical Search Engine Optimisation Explained

Without search engine optimisation, your website won’t be found, and even a well-designed site will struggle if its technical SEO is poor. You don’t have to be tech-savvy to understand the concept of technical SEO, but it does help to have a basic understanding of how Google works.

All SEOs should be familiar with on-page and off-page ranking factors. Technical SEO covers the work that falls outside both of those: the effort made to ensure a website complies with search engine guidelines, along with improvements to how spiders crawl and index the site.

Technical aspects include but are not limited to:

  • Robots.txt file
  • Site structure
  • Load speed
  • XML sitemap
  • Error pages
  • Redirects
  • Canonicalisation
  • Structured data and schema markup
  • URL structure
  • Broken internal links
  • External links from detrimental sources
  • Keyword cannibalisation

How Much Does it Matter?

It matters, and that’s all you need to know. There’s no point in having well-written content on a nice-looking website if what’s going on behind the scenes isn’t working. Let’s take a closer look at some of the most common mistakes people make and what you can do to make sure you don’t fall into the same traps.

Slow Website and Page Load Speed

Whether on a desktop or a mobile device, users demand quick content. And if they think your website takes too long to show them what they’re after, they’ll be going back to the results page and clicking on the next available link. According to research, 47% of people expect a website to load in less than 2 seconds, which doesn’t sound very long at all. But on the world wide web, anything over that can feel like a lifetime.

Google rolled out its mobile-first indexing update in 2018, and its Speed Update the same year made page speed a ranking factor for mobile searches. In other words, your rankings can be harmed if your website fails to load quickly on mobile devices. And with more and more people turning to their smartphones and tablets, it’s essential that your website and page load times are kept to a minimum.

Incorrect Robots.txt Rules

A robots.txt file is a plain text file that webmasters upload to the root of their website to tell search engine robots which parts of the site they can and can’t crawl. It does this through ‘allow’ and ‘disallow’ rules set out for particular user agents by the person who created the file. It’s worth spending time making sure the file is written and uploaded correctly, because an invalid or overly broad rule can be potentially harmful. You can view your robots.txt by simply adding ‘/robots.txt’ to the end of your domain in the address bar.

Some examples of a robots.txt file are:

Blocking all web crawlers from all content

User-agent: *
Disallow: /

Allowing all web crawlers to access all content

User-agent: *
Disallow:

Blocking a specific crawler from a specific folder

User-agent: Googlebot
Disallow: /example-subfolder/

Blocking a specific crawler from a specific file

User-agent: Googlebot
Disallow: /file.html

Blocking a specific crawler from a specific page

User-agent: Googlebot
Disallow: /example-subfolder/blocked-page.html

Not Submitting an XML Sitemap

An XML sitemap is essentially a roadmap for Google. It acts as a pathway and allows the search engine to easily find all of the pages on your website. Sitemaps are beneficial to SEO because they mean Google can quickly find the important areas of your site. Although Google can also follow internal links, some pages may not have any pointing to them, or the links may be broken. By submitting an XML sitemap to Google Search Console, you can make sure that the search engine fully understands your site’s structure.

It’s important to mention that not every page on your website needs to be included in the XML sitemap. If there’s a page you don’t necessarily want users to land on, for example an image that opens out into a new page and therefore creates a new URL of its own, it doesn’t need to be added to the file.

There are several resources that make it easy to create an XML sitemap, including https://www.xml-sitemaps.com/. All you need to do is follow the instructions provided and it will generate a file, which you then upload to your site and submit through Google Search Console. Search Console will then give you more useful insights into your website’s performance.
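To give you an idea of what Google is reading, a very basic XML sitemap looks something like this (the URLs and dates here are just placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.co.uk/</loc>
    <lastmod>2018-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.co.uk/products/</loc>
    <lastmod>2018-05-20</lastmod>
  </url>
</urlset>

Each <url> entry simply lists a page you want search engines to find, with an optional <lastmod> date showing when it was last updated.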

Not Using “Nofollow” and “Noindex” Tags

Remember that image page we mentioned earlier? As well as leaving it out of the XML sitemap, you may also want to add a “noindex” or “nofollow” tag to that page. Simply leaving it out of the sitemap doesn’t mean it won’t be indexed. A “noindex” tag tells Google not to include the page in its search results, while a “nofollow” tag tells crawlers not to follow the links on it, so if you really don’t want users organically landing on that page, these tags are the way to go.
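In practice, both are usually added as a single robots meta tag inside the <head> of the page you want to keep out of the index, along these lines:

<meta name="robots" content="noindex, nofollow">

If you only want to stop the page being indexed but are happy for its links to be followed, content="noindex" on its own will do the job.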

Getting those Redirects Wrong

If you change the structure of your website, a URL redirect allows you to map one page to another. It’s a simple instruction that tells both browsers and search engines that the old page has moved and where to find the new one, rather than showing the old page to the user. Redirects are just one family of HTTP status codes; there’s a wide range to consider, but these are six of the most common ones:

  • 200 – everything is working fine with the page
  • 301 – the page has been permanently moved
  • 302 – the page has been temporarily moved
  • 404 – the page no longer exists
  • 500 – a server error exists and no content is accessible
  • 503 – a temporarily unavailable status code that informs both crawlers and users to come back later (i.e. the site is under maintenance)

Getting redirects right is essential for both users and crawlers. Using the wrong one can be detrimental and negatively affect the crawlability and rankability of your website.
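As a rough example of how a permanent redirect is put in place, on an Apache server a single line in the site’s .htaccess file is usually enough (the paths below are placeholders):

Redirect 301 /old-page/ https://www.example.co.uk/new-page/

The exact method depends on your server or CMS, but the principle is the same: the old URL answers with a 301 and points both visitors and crawlers at the new one.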

Canonicalisation in its Simplest Form

Duplicate content is bad, we all know that. But sometimes webmasters simply can’t avoid it. However, if you use a canonical tag correctly, you can prevent it from becoming an issue.

Let’s use https://www.exampleproductpage.co.uk/product/ as page A and https://www.exampleproductpage.co.uk/product as page B. Search engines can treat these as two different pages, while users only ever see the same product and content. If the version with the trailing slash has more pageviews and conversions than the one without, it’s likely you’re going to want this as the primary page. Therefore, SEO best practice suggests adding a canonical tag to page B, telling search engines to ignore it and focus on page A instead.

A canonical tag looks something like this: <link rel="canonical" href="https://www.exampleproductpage.co.uk/product/" />
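To apply it to the example above, the tag would sit in the <head> of page B (the version without the trailing slash) and point at page A:

<head>
  <link rel="canonical" href="https://www.exampleproductpage.co.uk/product/" />
</head>

Page A doesn’t strictly need a tag, although many sites add a self-referencing canonical to every page to keep things consistent.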

How to Avoid Issues

From a technical perspective, there are many areas that need to be looked into if you want to make a website fully search engine friendly, and if we were to talk about all of them in depth, we’d be here for what would feel like forever. We’ve run through some of the most common issues, many of which should be fairly quick and straightforward to resolve.

In an ideal world, we would all love for search engines to know and understand what we want and how we want it shown. However, despite technological advances over recent years, we’re not quite there yet. So, as website owners, we need to do all we can to help them out.

Correctly building and developing a website from scratch takes time and an understanding of how search engines work. Without this, it’s likely your website won’t rank or perform well, and all your efforts could be wasted. Once you believe your website is ready, or if you want to check any changes you’ve made, you can use something like Screaming Frog, a tool that crawls your website and flags any potential problems. You can also contact an agency to do all the hard work for you. Take a look at Colewood’s main website to find out more about our SEO and digital marketing services.