Lately there has been a lot of noise about CTR and its impact on SEO: whether manipulating CTR works, what seems to work, and what simply does not…
For neophytes: CTR (click-through rate) in SEO is a ratio that measures the effectiveness of a result on the Google results page (SERP). It is the number of times a result is clicked divided by the number of times it appears (impressions). This percentage can easily be checked in the "Search Analytics" report in Search Console (formerly Google Webmaster Tools).
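To make the definition concrete, here is a minimal sketch in Python; the function and figures are illustrative, not from any real report:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks divided by impressions, as a percentage."""
    if impressions == 0:
        return 0.0
    return 100 * clicks / impressions

# Hypothetical example: a result shown 2,000 times and clicked 140 times.
print(f"CTR = {ctr(140, 2000):.1f}%")  # CTR = 7.0%
```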
When the results of an experiment are not reproducible, it is because the experiment is poorly designed or the hypothesis is wrong. In other words, the result is not measurable, and you cannot accept the hypothesis you want to prove as valid (in this case, whether CTR affects rankings or not). On top of that, in the SEO world we have the handicap of the volatility of the universe we experiment in, where we cannot isolate all the variables.
As a consequence of misinterpreting this series of experiments, and others that have been around for a long time, two camps have emerged:
- Those who say that Google does not take CTR into account in its ranking algorithms.
- Those who say that it is taken into account and that it can be manipulated with bots and the like.
CTR is necessary for Google
CTR, along with other user behavior metrics (very well explained by Dan Petrovic on Moz), is what Google and other search engines use to learn and improve results (machine learning).
These are metrics through which search engines receive feedback on which results meet users' needs (a priori, in the SERP). A site can be very good at the on-page level and/or have authority for X reasons, but it may still not give the answer users need.
As Ricardo Baeza-Yates (Yahoo) explains in the Web Retrieval chapter of his book Modern Information Retrieval (a book you have to read if you are an SEO), click-through data is even better than PageRank for determining the quality of a result. It is also the best metric for evaluating the effectiveness of an algorithm when it is tested. So much so that there is a Yahoo patent that talks about replacing PageRank with this user behavior data, and which cites this Microsoft study that also proposes improvements. Google also has its own patents on click-through feedback, and here is the latest. Even Gary Illyes (Google) confirmed at the last SMX that it is used as a ranking signal, but that it sometimes generates noise, which is why it is only used in some cases. Paul Haahr confirmed it again in 2016.
The artificial manipulation of CTR in SEO
Let me tell you up front that it is an absolute waste of time, and if one day it did work, it would be temporary and only for queries with little or no competition. If you think a search engine like Google can be manipulated with "hey, search for this and click here", "mini jobs" or "pandabots", then either you have not been in this game for long or you have not bothered to understand the algorithm behind search.
Reasons why artificially manipulating the CTR of a result does not work
- CTR is not binary: getting clicked is not always good, and not getting clicked is not always bad. It is used in combination with other stored data, such as the history of that query (time spent on the result, query abandonment, bounces back to the SERP, clicks on other results...). How it is calculated is a secret each search engine keeps, and that behavior is something you, "human or bot", cannot replicate.
- CTR is biased: it does not depend only on the position of the result; the design of the interface (SERP configuration, device used, clickbait…) can skew it, since the SERP is not homogeneous. This bias is corrected by very complex algorithms (see the sketch after this list) that you, "human or bot", cannot reverse-engineer. And in the end, today, it is the human evaluators who decide whether clickbait exists.
- CTR generates noise: as mentioned above, Gary Illyes said that CTR is only used sometimes because it generates noise. You, "human or bot", cannot know for which queries Google uses it to calculate rankings and for which it does not.
- User profile: one of the pieces of data search engines store with each click is the profile of the user who performs it. Why? Because it is one of the ways to personalize results for subsequent searches by that user or by users with a similar profile. The moment you tell a group of friends or a private network to click on a result, you have screwed up the experiment. So you, "human or bot", cannot manipulate it.
- Dwell time: search engines know whether a result is interesting from the CTR, but then, depending on whether a user bounces back to the SERP and how they do it, they know whether that result is good or not ("dwell time"). This metric is key, and it is something search engines try to fix because bounces are bad for them and for users (due to resource costs and because they are an indicator of poor result quality). Ask already tried to mitigate this effect in 2004 by releasing a results preview called «binocular», which Google later released as well (remember the preview that had a bug and doubled visits in Analytics?). Yahoo also published a patent on this in 2008. So, do you know what dwell time Google considers good for each query? Me neither, so you, "human or bot", cannot tell.
- A URL ≠ a keyword: a URL is not reached through only one query (right now I am looking at a URL with more than 1,000 different entry keywords). CTR is also used as an indicator at the URL level, and even at the domain level. Are you able to simulate all of the above, and for more than one query? No, "human or bot" friend, you cannot fake it.
- Search intent: search engines still have trouble identifying the intent behind a keyword, so they use serendipity and disambiguation methods. This causes the SERP to change, which once again skews the CTR. In addition, Google may detect an intent for one of your results that is not the one you are trying to rank for. Are you able to know the intent Google assigns to one of your results? Michael Knight can't, and neither can you, "human or bot".
- Click fraud detection: Google has extensive experience fighting click spam. Do you think that what is applied with great results in AdWords is not used for organic results? If you think it isn't, then you, "human or bot", were born yesterday.
- Machine learning: CTR and the rest of the user metrics are used within an iterative learning process. As soon as a result stops being clicked, it is considered to be of little relevance and is downgraded (or not, depending on the rest of the results). This is done, once again, because of PageRank's deficiencies in calculating the relevance of a result. A URL can be very authoritative but fail to respond to the current context of the search (date, device, location...). And the day you manage to manipulate it, will you be able to do it forever? If so, "human or bot", tell me who your dealer is, because I want that shit.
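To give an idea of why raw clicks mean nothing without bias correction (the second point above), here is a minimal sketch of "clicks over expected clicks" (COEC), a position-bias correction published in the click-modeling literature. This is not Google's algorithm, and the position priors are invented numbers; it only illustrates the kind of normalization involved:

```python
# Sketch of "clicks over expected clicks" (COEC), a position-bias correction
# from the click-modeling literature. NOT Google's algorithm; the position
# priors below are invented for illustration.

# Hypothetical baseline CTR for each SERP position.
POSITION_PRIOR = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def coec(observations: list[tuple[int, int, int]]) -> float:
    """observations: (position, impressions, clicks) tuples for one result.

    Returns clicks / expected clicks: ~1.0 means the result performs as
    expected for its position, >1.0 better than expected, <1.0 worse.
    """
    clicks = sum(c for _, _, c in observations)
    expected = sum(imp * POSITION_PRIOR[pos] for pos, imp, _ in observations)
    return clicks / expected if expected else 0.0

# A result shown at positions 2 and 3 with hypothetical counts:
print(round(coec([(2, 1000, 200), (3, 500, 40)]), 2))  # 1.2 -> beats its position
```

A score like this is comparable across positions, which a raw CTR is not, and it is only one of many corrections applied, which is precisely why you cannot reproduce the computation from the outside.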
Ways to improve CTR naturally
Just because CTR can't be artificially manipulated doesn't mean you shouldn't improve it naturally. It would be foolish not to try, because it is a way to get more traffic, even in a world where Google did not take it into account for ranking.
In January 2013 I explained part of an experiment I did on CTR to understand why users clicked on one result and not another. From that experiment I drew a list of natural factors that can impact this metric:
- Intentionality
- Brand recognition
- Keywords in the title, description and URL
- Persuasive snippets and/or snippets with a call to action
- Sitelinks
- Rich snippets (rating, breadcrumb…)
In the example from the tweet, from the moment the title was changed to include the number of items in each of a site's categories, the average position begins to rise. Why does it work? Categories usually target a query plus its long tail, the effect on CTR is constant over time, and I am not working on just one URL but on several pages at the same time. With this simple change, thousands of URLs whose keywords range from 1K to 22K monthly searches (in exact match) went from the top 10 to the top 5, with a more than considerable impact on traffic. How much time and how many resources (friends, money on mini jobs, proxies, bots…) would it take to achieve that artificially, even for one day?
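As a trivial illustration of the kind of tweak described (a hypothetical template, not the actual site's code), the change amounts to surfacing each category's item count in the title tag:

```python
def category_title(name: str, count: int, brand: str = "ExampleShop") -> str:
    """Hypothetical title-tag template that surfaces the category's item count."""
    return f"{name}: {count} products | {brand}"

# Applied across thousands of category pages from a (hypothetical) catalog:
print(category_title("Running shoes", 342))  # Running shoes: 342 products | ExampleShop
```

The point is not the template itself but the scale: one change propagated across thousands of category URLs, each carrying its own long tail of queries.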
Conclusion
As I always say, don't waste your time trying to fool search engines; spend it understanding them. It is much more productive and enriching to study how search engines work, because you will start rowing with the current instead of against it.