Google’s Webmaster Trends Analyst also touched on topics such as the capabilities of the Indexing API, whether there are inherent penalties for disavowing links, and how clustering of duplicate pages works.
Google analyst Gary Illyes, who has worked at Google for over eight years exclusively on search-related topics, took part in an AMA (Ask Me Anything) on Reddit. During the conversation with webmasters, he covered a wide range of topics: from robots.txt to RankBrain, from behavioral signals to image and video search. We will save you from having to wade through all of it by sharing the most important takeaways here.
Google respects robots.txt. Googlebot will respect the directives you give it in your robots.txt. If you give Googlebot conflicting directives or put incorrect syntax in your robots.txt file, it may do something you did not intend. But if you use the robots.txt file correctly, Google will obey it.
Gary wrote: “Robots.txt is respected for what it is meant to do.”
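As an illustration, here is a minimal robots.txt sketch (the paths are hypothetical). Note that Googlebot obeys only the most specific user-agent group that matches it, so a stray Googlebot group can silently replace your wildcard rules: one common source of “conflicting directives.”

```
# Rules for all crawlers: keep /private/ out of the crawl
User-agent: *
Disallow: /private/

# Googlebot matches this more specific group INSTEAD of the one above,
# so /private/ stays crawlable for Googlebot unless repeated here
User-agent: Googlebot
Disallow: /drafts/
Allow: /drafts/published-page.html
```

The Allow line works because, within a group, Google follows the rule with the longest matching path, so a specific Allow can carve an exception out of a broader Disallow.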
ccTLD, gTLD, and Search Console settings affect ranking. Gary said these signals can help Google rank content higher than competing content for local queries: the domain type and the geotargeting settings in Search Console give Google information that helps it determine that a site is more relevant to people in a particular country or city. For example, if we write in this article that the company Futureinaps does SEO promotion in the city of Kazan, then Google, based on the above, should take note and slightly boost our ranking for Kazan. We will check it out!
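Search Console geotargeting itself is a dashboard setting rather than code, but a site on a generic TLD can also mark up its regional and language variants directly with hreflang annotations, which is one code-level way to send Google a similar locality signal. A minimal sketch (the URLs are hypothetical):

```html
<!-- In the <head> of every variant of the page, list all alternates,
     including a self-referencing one -->
<link rel="alternate" hreflang="ru" href="https://example.com/ru/seo-kazan/" />
<link rel="alternate" hreflang="en" href="https://example.com/en/seo-kazan/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/seo-kazan/" />
```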
RankBrain explained. RankBrain is an AI-based query interpretation system that helps Google better understand a query and thus rank more relevant pages for it. Here is how Gary Illyes explained it in the AMA:
“RankBrain is a PR-sexy machine learning ranking component that uses historical search data to predict what a user would most likely click on for a previously unseen query,” he said. “It is a really cool piece of engineering that saved our butts countless times, whenever traditional algorithms were like ‘oh look, a “not” in the query! Let’s just ignore it!’ But it relies on (sometimes months-old) data about what happened on the results page itself, not on the landing page.”
It seems that RankBrain is useful to Google, but it is limited by the fact that the click data it learns from can be months out of date.
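Google has not published how RankBrain actually works, but the idea Gary describes, using historical result-page behavior to guess what a searcher would click for a never-before-seen query, can be sketched in a toy form. Everything below (the click log, the token-overlap similarity) is our own illustrative assumption, not Google’s method:

```python
from collections import Counter

# Hypothetical historical data: query -> clicks each result URL received.
# In Gary's description, this kind of data can be months old.
CLICK_LOG = {
    "buy winter tires": Counter({"/shop/tires": 90, "/blog/tire-guide": 10}),
    "how to change a tire": Counter({"/blog/tire-guide": 80, "/shop/tires": 20}),
}

def similarity(q1: str, q2: str) -> float:
    """Toy similarity: share of overlapping tokens (Jaccard index)."""
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b)

def predict_clicks(unseen_query: str) -> Counter:
    """Map an unseen query to the closest historical query and
    reuse its click distribution as the prediction."""
    best = max(CLICK_LOG, key=lambda q: similarity(unseen_query, q))
    return CLICK_LOG[best]

print(predict_clicks("where to buy snow tires"))
# -> Counter({'/shop/tires': 90, '/blog/tire-guide': 10})
```

The toy also shows the weakness Gary hints at: the prediction is only as fresh as the click log it reuses.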
UX and behavioral signals. One of the most controversial topics around Google’s ranking factors is whether the search engine uses UX and behavioral signals for ranking. Google has always denied using them for direct ranking. In the AMA, Gary, referring to one person in the industry who has repeatedly claimed the opposite, reiterated that Google does not use them.
“Dwell time, CTR, whatever Fishkin’s new theory is, those are generally made-up crap. Search is much more simple than people think,” Gary said.
Evaluators and live tests. Gary then went deeper into how Google uses click data and other user data: not as direct ranking signals, but to evaluate search results. He spoke about search quality evaluators and how they assess Google’s search results. He also described live experiments, in which Google tests how different scenarios affect searcher behavior. But none of this directly affects rankings.
“When we want to launch a new algorithm or an update to the core one, we have to test it,” he said. “The same goes for UX features, such as changing the color of the green links. For the former, we have two ways of testing: (1) with evaluators, which is described in painful detail in the evaluator guidelines, and (2) through live experiments.”
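Gary did not describe the mechanics, but live experiments of this kind are commonly run with deterministic traffic bucketing: each user is hashed into a control or an experiment group, only the experiment group sees the new algorithm, and behavioral metrics are then compared between the groups. A minimal sketch of that general pattern (the function names and the 1% share are assumptions, not Google’s infrastructure):

```python
import hashlib

EXPERIMENT_SHARE = 0.01  # expose 1% of users to the candidate algorithm

def bucket(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to 'control' or 'experiment'.
    Hashing user_id together with the experiment name keeps assignments
    stable per user but independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    fraction = int.from_bytes(digest[:8], "big") / 2**64
    return "experiment" if fraction < EXPERIMENT_SHARE else "control"

def current_algorithm(query: str) -> list[str]:
    return [f"baseline results for: {query}"]   # stand-in for the live ranker

def candidate_algorithm(query: str) -> list[str]:
    return [f"candidate results for: {query}"]  # stand-in for the new ranker

def rank(query: str, user_id: str) -> list[str]:
    """Serve the candidate algorithm only to the experiment bucket;
    behavioral metrics for the two buckets are compared afterwards."""
    if bucket(user_id, "new-core-update") == "experiment":
        return candidate_algorithm(query)
    return current_algorithm(query)

print(rank("best pizza near me", "user-42"))
```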
Writing content using machine learning. Usually, Google does not approve of machines and computers writing content. In fact, its guidelines require webmasters to block automatically generated content from being indexed (a minimal example of such a block is shown below). But with machine learning and artificial intelligence, this technology may eventually make content even better than human-written content. If so, what will Google say?
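The standard mechanism for the blocking mentioned above is the robots meta tag (or the equivalent X-Robots-Tag HTTP header) on the generated pages, for example:

```html
<!-- Place in the <head> of any automatically generated page
     that should stay out of Google's index -->
<meta name="robots" content="noindex" />
```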
