Google’s Search Relations team answered several questions about webpage indexing on the latest episode of the ‘Search Off The Record’ podcast.
The topics discussed were how to block Googlebot from crawling specific sections of a page and how to prevent Googlebot from accessing a site altogether.
Google’s John Mueller and Gary Illyes answered the questions examined in this article.
Blocking Googlebot From Specific Web Page Sections
When asked how to stop Googlebot from crawling specific webpage sections, such as “also bought” areas on product pages, Mueller said it’s impossible.
“The short version is that you can’t block crawling of a specific section on an HTML page,” Mueller said.
He went on to offer two potential strategies for dealing with the issue, neither of which, he stressed, is an ideal solution.
Mueller suggested using the data-nosnippet HTML attribute to prevent text from appearing in a search snippet.
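In practice, that means wrapping the section in an element that carries the attribute. The markup below is a minimal sketch of this approach; the class names and product content are placeholders, not examples from the podcast.

```html
<!-- Main product content: eligible to appear in search snippets -->
<div class="product-description">
  <p>Hand-thrown ceramic mug, 350 ml, dishwasher safe.</p>
</div>

<!-- "Also bought" recommendations: excluded from snippets via data-nosnippet.
     The content is still crawled and indexed; it simply won't be shown
     as snippet text in search results. -->
<section data-nosnippet class="also-bought">
  <h2>Customers also bought</h2>
  <ul>
    <li>Ceramic saucer</li>
    <li>Tea infuser</li>
  </ul>
</section>
```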
Alternatively, you could serve the content through an iframe or JavaScript with the source file blocked by robots.txt, although he cautioned that this isn’t a good idea.
“Using a robotted iframe or JavaScript file can cause problems in crawling and indexing that are hard to diagnose and resolve,” Mueller stated.
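For context, the setup Mueller cautioned against generally looks something like the sketch below: the section is served from a separate URL that robots.txt disallows, and the product page pulls it in with an iframe. The paths shown are hypothetical, not taken from the episode.

```
# robots.txt - blocks crawling of the fragment URL (hypothetical path)
User-agent: *
Disallow: /fragments/also-bought
```

```html
<!-- The product page embeds the blocked fragment. Because Googlebot cannot
     fetch the iframe source, the section's content stays out of the page's
     indexed text - but, as Mueller warns, this can cause crawling and
     indexing problems that are hard to diagnose and resolve. -->
<iframe src="/fragments/also-bought?product=123" title="Customers also bought"></iframe>
```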