The information retrieval process works in three phases:
Crawling, Indexing, Ranking – The Holy Trinity of SEO
During the first phase, crawling, the focus is discovery. It is a complicated process carried out by software programs known as spiders or web crawlers. The best-known crawler is Googlebot.
The first thing a crawler does is fetch a web page. From there, it follows the trail of links on that page. Those pages are fetched in turn, their links are followed, and the process repeats. This is how pages are discovered for indexing. The crawler does not render pages; its focus is on the source code. It extracts the URLs found in the markup, and it can validate hyperlinks and HTML code along the way.
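The fetch-and-follow loop described above is essentially a breadth-first traversal. Here is a minimal sketch in Python; the toy `pages` dictionary stands in for real HTTP fetches, which a production crawler would do over the network:

```python
from html.parser import HTMLParser
from collections import deque

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(fetch, seed):
    """Breadth-first crawl: fetch a page, queue its links, repeat.
    `fetch` is any callable that returns HTML for a URL."""
    queue, seen = deque([seed]), set()
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(fetch(url))
        queue.extend(parser.links)
    return seen

# Hypothetical "web" for illustration only.
pages = {
    "/a": '<a href="/b">b</a><a href="/c">c</a>',
    "/b": '<a href="/a">a</a>',
    "/c": "",
}
print(sorted(crawl(lambda u: pages.get(u, ""), "/a")))  # ['/a', '/b', '/c']
```

Note that, matching the description above, the sketch only parses source markup for links; it performs no rendering.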
It’s important to remember that when you, or someone looking for a product or service your website offers, search on Google, you are not searching the web. You are searching Google’s index of the web. The index is what was created during that crawling process.
These two phases work almost in tandem. When the crawler finds something, it sends it to the indexer. The indexer, in turn, feeds more URLs to the crawler. At the same time, the indexer prioritizes URLs based on their value.
When this process is complete and no errors are reported in Search Console, the ranking process starts. This is where time and effort must be dedicated to quality content and website optimization, and where viable link building takes place, following Google’s quality guidelines.
Tip 1 – Avoid Accidental Cloaking
As we mentioned, what a user sees when they look at your website and what a search bot sees are two different things. Your goal should be to minimize the difference between the two. You increase your risk if there are things implemented in the SSR output that are only for Google – that is, adding content to the page, for SEO purposes, that users will never see.
For example, you might add extra keyword-targeted copy or remove aggressive ads. Avoid the trap of adding these features or text to the “SEO version.” As we discussed, dynamic rendering is not cloaking. However, using dynamic rendering to create differences in content with the goal of influencing your ranking definitely is.
A number of problems start when people confuse Googlebot with Caffeine, which is part of the indexing process. The distinction is simple: the crawler does not render content; the indexer does. The crawler’s job is simply to fetch the content. One reason this concept gets muddled is that some people say the crawler assists Google in indexing the content. That’s not altogether true.
Tip 2 – Schedule Crawls for Monitoring SSR Issues
A number of monitoring services can help you catch changes to your server-rendered pages that you might not notice right away when visiting the site yourself.
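If you prefer to roll your own scheduled check, the core idea is a fingerprint comparison: snapshot the server-rendered HTML, then periodically re-fetch and compare. A minimal sketch, with the fetching step left out (in practice a cron job or scheduler would fetch each URL and call `changed`):

```python
import hashlib

def html_fingerprint(html):
    """Stable hash of server-rendered HTML, used for change detection."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def changed(stored_fingerprint, html):
    """True if the served markup differs from the last snapshot."""
    return html_fingerprint(html) != stored_fingerprint

# Hypothetical snapshots for illustration.
baseline = html_fingerprint("<h1>Products</h1>")
print(changed(baseline, "<h1>Products</h1>"))  # False – SSR output unchanged
print(changed(baseline, "<h1></h1>"))          # True – SSR dropped content
```

An unexpected `True` between deploys is your cue to inspect what the server is now sending to crawlers.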
Tip 3 – Look Out for Design and CSS Failures in SSR