27 SEO Mistakes That Hurt Websites In 2021 (SEMrush)

The struggle to keep pace with Google's continual algorithm updates and avoid technical SEO issues is part of everyday life for webmasters and SEO experts.

Many SEO errors can harm your site's ranking on search engines. Even if you are already aware of a number of issues with your website, it can be difficult to maintain its health in the ever-changing world of SEO.

With a solid understanding of the most common (and potentially harmful) SEO mistakes, however, you can give yourself a chance to keep your technical issues to a minimum and boost your website's performance to the max.

This guide from data-driven search engine optimization (SEO) specialist SEMrush gives you a comprehensive website audit checklist that will help you do just that, regardless of the size of your site.

Try SEMrush for free

How SEMrush collected the data

SEMrush audited 250,000 websites across a range of niches, including health, travel, sports, and science, using its Site Audit tool to find the most common SEO mistakes of the year.

In total, SEMrush analyzed:

  • 310,161,067 webpages
  • 28,561,137,301 links
  • 6,910,489,415 images

This scope of analysis yielded enough insight to create a comprehensive site audit template that webmasters and SEO experts can use to avoid errors themselves, SEMrush says.

Creating a research-backed website audit template

It is clear that a properly conducted website audit is a task that takes time.

SEMrush's study revealed 27 common mistakes that obviously can't be fixed all at once. So the team broke the list down into digestible chunks to use as an actionable template:

Ignoring HTTP status and server issues

The most critical technical issues with a website are often related to its HTTP status.

These include status codes such as error 404 (page not found), which indicate the server's response to a request from a client, such as a browser or search engine.

When the dialogue between a client and a server (in other words, between a user and your website) breaks down, so does the user's trust in the site.

Serious server issues can not only lead to lost traffic due to inaccessible content, but they can also damage your long-term rankings if they leave Google unable to find suitable results on your site for the searcher.
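
If you want to spot-check these issues yourself outside of a dedicated audit tool, a short script is often enough. Here is a minimal Python sketch (the URLs are placeholders, swap in pages from your own site) that reports the HTTP status code and response time for a handful of pages, covering the first two errors listed below:

```python
# Minimal HTTP status / response-time check for a handful of URLs.
# The URLs below are placeholders; swap in pages from your own site.
import requests

URLS = [
    "https://example.com/",
    "https://example.com/old-page",
]

for url in URLS:
    try:
        # HEAD keeps the check lightweight; follow redirects to see the final status.
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = response.status_code
        elapsed = response.elapsed.total_seconds()
        flag = ""
        if status >= 400:
            flag = "  <-- 4xx/5xx error"
        elif elapsed > 5:
            flag = "  <-- response slower than 5 seconds"
        print(f"{url}: {status} in {elapsed:.2f}s{flag}")
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
```

Anything returning a 4xx or 5xx code, or taking longer than five seconds to respond, is worth investigating first.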

SEO errors affecting your HTTP status:

1. 4xx Errors

4xx codes mean that a page is broken and cannot be reached. They can also apply to working pages when something prevents them from being crawled, or when the pages have been deleted.

2. Uncrawled Pages

This occurs when a page cannot be reached for one of two reasons: 1) your website's response time is longer than five seconds; or 2) your server has denied access to the page.

3. Broken Internal Links

These are internal links on your site that lead users to another page on your site. Once broken, these links can damage your site's UX and SEO.

4. Broken External Links

These are outbound links on your site that point to external web pages that no longer exist, which sends negative signals to search engines.

5. Broken internal images

This is reported when an image file no longer exists or its URL is misspelled.

Other common HTTP status errors include:

  • Permanent redirects
  • Temporary redirects

Meta tag under-optimization

Your meta tags help search engines identify topics on your pages to link them to the keywords and phrases used by searchers.

Creating the right title tags means choosing relevant keywords to form a unique, click-worthy link for users in search engine results pages (SERPs).

Meta descriptions give you additional opportunities to include related keywords and phrases.

They should be as unique and personalized as possible. If you don't write your own, Google will automatically generate them based on keywords in users' queries, which can sometimes lead to a mismatch between the search terms and the snippets shown.

Optimized title tags and meta descriptions should include the most appropriate keywords, be the right length, and avoid duplicates as much as possible.

If you optimize your metadata and make it as unique as possible, you give your site the best chance of maximizing its impact in the SERPs.
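
As a rough illustration, here is a minimal Python sketch (it assumes the requests and beautifulsoup4 libraries and uses a placeholder URL) that flags several of the meta tag issues listed below on a single page:

```python
# On-page meta audit for a single page: title length, meta description,
# H1 tags, and image ALT attributes. Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/"  # placeholder

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

title = soup.title.string.strip() if soup.title and soup.title.string else ""
description = soup.find("meta", attrs={"name": "description"})
h1_tags = soup.find_all("h1")
images_without_alt = [img.get("src") for img in soup.find_all("img") if img.get("alt") is None]

if not title:
    print("Missing <title> tag")
elif len(title) > 60:
    print(f"Title is {len(title)} characters; it may be truncated in the SERPs")

if not description or not (description.get("content") or "").strip():
    print("Missing meta description")

if len(h1_tags) == 0:
    print("Missing H1 tag")
elif len(h1_tags) > 1:
    print(f"{len(h1_tags)} H1 tags found; consider keeping a single H1")

for src in images_without_alt:
    print(f"Image without ALT attribute: {src}")
```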

The most common meta tag SEO mistakes that can hurt your rankings:

6. Duplicate title tags and meta descriptions

Two or more pages with the same titles and descriptions make it difficult for search engines to correctly determine relevance and, in turn, rankings.

7. Missing H1 tags

H1 tags help search engines determine what your content is about. If they are missing, there will be gaps in Google's understanding of your website.

8. Missing meta descriptions

Well-written meta descriptions help Google understand relevance and encourage users to click on your result. If they are missing, click-through rates can drop.

9. Missing ALT attributes

ALT attributes provide search engines and visually impaired people with descriptions of the images in your content. Without them, relevance is lost and engagement can suffer.

10. Duplicate H1 Tags and Title Tags

When the H1 tag and title tag are identical on a given page, the page can look over-optimized, and it may mean that ranking opportunities for other relevant keywords have been missed.

Other common meta tag errors include:

  • Short/long title elements
  • Multiple H1 tags

Duplicate content

Duplicate content can damage your rankings, potentially for a long time.

You should avoid duplicating any type of content from any type of site, whether it is a direct competitor or not.

Look for duplicate descriptions, paragraphs and entire sections of copy, duplicate H1 tags across multiple pages, and URL issues, such as www and non-www versions of the same page.

Pay attention to the uniqueness of every detail to ensure that a page is not only rankable in the eyes of Google, but also clickable in the eyes of users. Use an online plagiarism tool to check your content.

The most common duplication issues that hold sites back:

11. Duplicate Content

The Site Audit tool flags duplicate content when pages on your website share the same copy (for example, www and non-www versions of the same page). It can be solved by adding a rel="canonical" link to one of the duplicates, or by using a 301 redirect.
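
To illustrate the fix, here is a small Python sketch (placeholder URLs, assuming the www version is your preferred one) that checks whether a duplicate variant 301-redirects to the preferred URL and whether the preferred page declares a rel="canonical" link:

```python
# Check that a duplicate URL variant 301-redirects to the preferred version
# and that the preferred page declares a rel="canonical" link.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

PREFERRED = "https://www.example.com/page"   # placeholder canonical URL
DUPLICATE = "https://example.com/page"       # placeholder non-www variant

# 1) The duplicate should answer with a single permanent redirect to the preferred URL.
resp = requests.get(DUPLICATE, allow_redirects=False, timeout=10)
if resp.status_code == 301 and resp.headers.get("Location") == PREFERRED:
    print("Duplicate correctly 301-redirects to the preferred URL")
else:
    print(f"Unexpected response: {resp.status_code} -> {resp.headers.get('Location')}")

# 2) The preferred page should declare itself as canonical.
soup = BeautifulSoup(requests.get(PREFERRED, timeout=10).text, "html.parser")
canonical = next(
    (link for link in soup.find_all("link") if "canonical" in (link.get("rel") or [])),
    None,
)
if canonical and canonical.get("href") == PREFERRED:
    print('rel="canonical" points to the preferred URL')
else:
    print('Missing or mismatched rel="canonical" link')
```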

Other common duplication errors include:

  • Duplicate H1 tags and title tags
  • Duplicate meta descriptions

Neglecting internal and external link optimization

The links that guide visitors around your site (navigation between pages), as well as incoming links (backlinks), can harm your site's overall user experience and, consequently, your search performance if they are not handled well. Google simply won't rank sites that provide a poor user experience.

The study found that nearly half of the sites scanned with the Site Audit tool had issues with both internal and external linking, suggesting that their link architectures are not optimized.

Some of the links themselves have underscores in their URLs, contain nofollow attributes, or point to HTTP instead of HTTPS, all of which can impact rankings.

You can find broken links on your site with SEMrush's Site Audit tool (or a similar SEO tool). The next step is to identify the links that have the greatest effect on user engagement and fix them in order of priority.
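
As a starting point for that kind of prioritization, the following Python sketch (placeholder URL, requests and beautifulsoup4 assumed) surfaces some of the link-level issues described below on a single page:

```python
# Surface common link-level issues on a single page: links that point back to
# HTTP, URLs containing underscores, and nofollow attributes.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

PAGE = "https://example.com/"  # placeholder

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")

for a in soup.find_all("a", href=True):
    href = urljoin(PAGE, a["href"])          # resolve relative links
    parsed = urlparse(href)
    rel = a.get("rel") or []

    if parsed.scheme == "http":
        print(f"HTTP link on an HTTPS page: {href}")
    if "_" in parsed.path:
        print(f"URL contains underscores: {href}")
    if "nofollow" in rel:
        print(f"Nofollow link: {href}")
```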

The most common linking issues that can impact your SEO rankings:

12. Links that lead to HTTP pages on an HTTPS site

Links to old HTTP pages can lead to insecure communication between users and the server, so be sure to check that all of your links are up to date.

13. URLs Containing Underscores

Search engines can misinterpret underscores and index your site incorrectly. Use hyphens instead.

Other common linking SEO mistakes include:

  • Broken internal links
  • Broken external links
  • Nofollow attributes in external links
  • Pages with a single internal link
  • Page crawl depths of more than 3 clicks

Making things difficult for crawlers

Alongside indexability, crawlability is one of the crucial health indicators of a website.

There is ground to lose and gain in the SERPs when it comes to the crawlability of your site.

If you ignore crawling issues from a technical SEO perspective, some pages on your site may not be as visible to Google as they should be.

However, if you fix crawl issues, Google will be more likely to identify the right links for the right users in the SERPs.

You can avoid technical issues by evaluating your site for broken or blocked elements that limit its crawlability.

Kevin Indig, VP of SEO & Content at G2.com, highlights the importance of the synergy between sitemaps and robots.txt:

What surprised me was that many XML sitemaps are not referenced in the robots.txt file. It seems like a standard to me. What is not surprising is the high number of sites with pages that have only a single internal link, or even orphan pages. This is a classic site structure problem that only SEOs are aware of.

The absence of a sitemap.xml reference in your robots.txt file, for example, can lead search engine crawlers to misinterpret your site's architecture, as Matt Jones, SEO and CRO Manager at Rise at Seven, explains:

As sitemap.xml files can help search engine crawlers identify and find URLs that exist on your website, allowing them to crawl them [is] definitely a fantastic way to help search engines gain a deep understanding of your website and in turn get higher rankings for more relevant terms.

The most common problems faced by web crawlers:

14. Nofollow attributes in outgoing internal links

Internal links that contain the nofollow attribute prevent any potential link equity from flowing through your site.

15. Incorrect pages found in sitemap.xml

Your sitemap.xml should not contain any broken pages. Check for redirect chains and non-canonical pages, and make sure the listed URLs return a 200 status code.
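
A minimal Python sketch of this check (placeholder sitemap URL, assuming a standard urlset sitemap) might look like this:

```python
# Verify that every URL listed in sitemap.xml answers with a 200 status
# and is not redirected. Requires: pip install requests
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

for url in urls:
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if resp.status_code != 200:
        print(f"{url}: returned {resp.status_code} (expected 200)")
```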

16. Sitemap.xml Not Found

A missing sitemap makes it harder for search engines to discover, crawl, and index your site's pages.

17. Sitemap.xml not specified in robots.txt

Without a link to your sitemap.xml in your robots.txt file, search engines will not be able to fully understand your site structure.
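
You can verify this in seconds. The sketch below (placeholder URL) simply looks for a Sitemap: directive in robots.txt:

```python
# Check whether robots.txt declares at least one Sitemap: directive.
# Requires: pip install requests
import requests

ROBOTS_URL = "https://example.com/robots.txt"  # placeholder

robots_txt = requests.get(ROBOTS_URL, timeout=10).text
sitemaps = [
    line.split(":", 1)[1].strip()
    for line in robots_txt.splitlines()
    if line.lower().startswith("sitemap:")
]

if sitemaps:
    print("Sitemaps declared in robots.txt:", ", ".join(sitemaps))
else:
    print("No Sitemap: directive found in robots.txt")
```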

Other crawling-based SEO mistakes include:

  • Uncrawled pages
  • Broken internal images
  • Broken internal links
  • URLs containing underscores
  • 4xx errors
  • Resources formatted as page links
  • External resources blocked in robots.txt
  • Nofollow attributes in outgoing external links
  • Blocked from crawl
  • Pages with a single internal link
  • Orphan sitemap pages
  • Page crawl depths of more than 3 clicks
  • Temporary redirects

Ignoring indexability by search engines

Good indexing indicators are vital for SEO. Simply put, if a page isn't indexed, it won't be seen by a search engine, so it won't be seen by users either.

Many factors can prevent your website from getting indexed, even if you don't seem to have crawlability issues.

Metadata and duplicate content, for example, can make it difficult for search engines to identify which pages to rank for certain similar search terms.

You can see from the SEMrush research above that nearly half of the sites audited suffer from indexing issues caused by duplicate title tags, descriptions, and body content.

This can mean that Google is forced to make decisions about which pages to rank, even though webmasters can anticipate issues like these and tell Google what to do.

A range of different issues can affect your site's indexability, from low word counts to hreflang gaps or conflicts for multilingual websites.

The most common problems with non-indexable websites:

18. Short/Long Title Tags

Titles longer than about 60 characters get truncated in the SERPs, while noticeably shorter titles may represent missed opportunities for further optimization.

19. Hreflang conflicts within page source code

Multilingual websites can confuse search engines if the hreflang attribute conflicts with a given page's source code.

20. Problems with incorrect hreflang links

Broken hreflang links can create indexing issues if, for example, relative URLs are used instead of absolute ones: /blog/article instead of https://yoomweb.com/blog/article.
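
A quick way to catch this is to check that every hreflang alternate on a page uses an absolute URL. A minimal Python sketch (placeholder URL, requests and beautifulsoup4 assumed):

```python
# Flag hreflang alternates that use relative URLs instead of absolute ones.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

PAGE = "https://example.com/blog/article"  # placeholder

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")

for link in soup.find_all("link", hreflang=True):
    rel = link.get("rel") or []
    if "alternate" not in rel:
        continue
    href = link.get("href", "")
    if not urlparse(href).scheme:
        print(f"hreflang '{link['hreflang']}' uses a relative URL: {href}")
```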

21. Low Word Count

SEMrush's Site Audit tool (or a similar tool) can flag pages that seem lacking in content, so it's worth reviewing them to make sure they're as informative as possible.

22. Missing hreflang and lang attributes

This issue is triggered when a page on a multilingual site is missing the links or tags that tell search engines which version to serve users in each region.

23. AMP HTML Issues

This issue affects mobile users of your website and is reported when the page's HTML is not AMP-compliant.

Other common indexing errors include:

  • Duplicate H1 tags
  • Duplicate content
  • Duplicate title tags
  • Duplicate meta descriptions
  • Missing H1 tags
  • Multiple H1 tags
  • hreflang language mismatch issues

Forgetting to make your site mobile-friendly

Making your site mobile-friendly is an essential part of on-page SEO.

We know that mobile-friendliness is a default Google ranking criterion for both mobile and desktop.

This means that, as a webmaster, you need to ensure that your site complies with Google's guidelines.
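
Google's own mobile-friendly testing tools are the authoritative check here, but one basic signal you can verify yourself is the presence of a viewport meta tag. A minimal Python sketch (placeholder URL):

```python
# Check one basic mobile-friendliness signal: the presence of a viewport meta tag.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

PAGE = "https://example.com/"  # placeholder

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")
viewport = soup.find("meta", attrs={"name": "viewport"})

if viewport and viewport.get("content"):
    print("Viewport meta tag found:", viewport["content"])
else:
    print("No viewport meta tag; the page may not render well on mobile devices")
```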

Not managing website performance

Page load time is becoming more and more important in SEO. The slower your site, the fewer users will have the patience to wait for it to load and engage with it.

You can get page speed suggestions for mobile and desktop directly from Google. Learn how to measure page speed and identify opportunities to make your site faster.
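
Page speed data can also be pulled programmatically. The sketch below queries what is, at the time of writing, the v5 PageSpeed Insights endpoint; heavier usage may require an API key, and the placeholder URL should be replaced with your own:

```python
# Query Google's PageSpeed Insights API (v5) for a mobile performance score.
# Heavier usage may require an API key passed as a "key" parameter.
# Requires: pip install requests
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://example.com/", "strategy": "mobile"}  # placeholder URL

data = requests.get(API, params=params, timeout=60).json()
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(f"Mobile performance score: {score * 100:.0f}/100")
```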

Gerry White, Director of SEO at Rise at Seven, suggests that minifying code is a quick win when it comes to site performance and user experience:

One of the things that stands out from the data is the number of quick wins for page speed. It's not just about rankings, but about users and conversions. Simple quick wins that can usually be achieved without too much development effort are where I would focus my efforts on this front. Tasks like compressing JavaScript and CSS take minutes, but can make huge improvements on many websites. This should be combined with ensuring that HTTPS is enabled with HTTP/2.

The most common issues with website performance:

25. Slow Page Loading Speed (HTML)

The time it takes for a browser to fully render a page should be as short as possible, since speed directly affects your rankings.

26. Uncached JavaScript and CSS Files

This issue may be related to your page loading speed and occurs if browser caching is not specified in the response header.
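
A quick way to spot this is to inspect the response headers of your static assets. The Python sketch below (placeholder asset URLs) flags files that send neither a Cache-Control nor an Expires header:

```python
# Check whether JavaScript and CSS resources send caching headers.
# Requires: pip install requests
import requests

ASSETS = [
    "https://example.com/static/app.js",    # placeholder asset URLs
    "https://example.com/static/style.css",
]

for url in ASSETS:
    headers = requests.head(url, timeout=10).headers
    cache_control = headers.get("Cache-Control")
    expires = headers.get("Expires")
    if not cache_control and not expires:
        print(f"No caching headers on {url}")
    else:
        print(f"{url}: Cache-Control={cache_control!r}, Expires={expires!r}")
```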

27. Unminified JavaScript and CSS Files

This issue concerns unminified JavaScript and CSS. Remove unnecessary lines, comments, and whitespace to improve page loading speed.
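
There is no single test for minification, but a rough heuristic is the share of whitespace in the file. The sketch below (placeholder asset URLs) flags assets that look unminified:

```python
# Rough heuristic for unminified assets: a high share of whitespace and
# many newlines usually mean the file has not been minified.
# Requires: pip install requests
import requests

ASSETS = [
    "https://example.com/static/app.js",    # placeholder asset URLs
    "https://example.com/static/style.css",
]

for url in ASSETS:
    body = requests.get(url, timeout=10).text
    if not body:
        continue
    whitespace_ratio = sum(ch.isspace() for ch in body) / len(body)
    if whitespace_ratio > 0.15 or body.count("\n") > 200:
        print(f"{url} looks unminified (whitespace ratio {whitespace_ratio:.0%})")
```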

Multi-category SEO issues

In some cases, the errors, warnings, and notices detected by the site audit tool will fall into more than one category.

This means that they can cause a range of problems for your website, so it is recommended that you address them first.

The Importance of Leveraging Site Audit Tools, Tips & Tricks

Committing any of these SEO mistakes can prevent your website from reaching its full potential, so it's essential that you stay on top of them as a webmaster with regular site audits.

Whether you have crawling issues that prevent pages from being indexed or duplicate content issues that risk a penalty, you can use this checklist to prevent molehills from becoming mountains.

Make a habit of taking care of your SEO and UX health with tools like SEMrush's Site Audit tool, and you'll be rewarded with the kind of search visibility and user engagement that has a positive impact on your results.
