Join the Google SEO Office Hours in November

November 29, 2022

Google just wrapped up the November edition of its SEO office hours, in which members of the Google Search team answered questions from the community. The full recording is available on YouTube, and a transcript of the questions and answers is published in this blog post. The team is experimenting with different formats for the office hours to make episodes easier to publish and to let more team members contribute; the current goal is a monthly audio-only format. If you would like to submit a question for next month's edition, you can do so via this form. If you have a specific question about your own website, however, the Google Search Central Forum is the better place to ask.

First up was a question about specifying multiple values in one schema markup field using comma separation, specifically for the GTIN- and ISBN-specific properties. The advice: best practice is to specify one value per field. If an item has both a GTIN and an ISBN, use the gtin property for the GTIN value and the isbn property for the ISBN value, so that each value applies to the correct field. Feedback on this blog post or on the YouTube video can be left via the "Send Feedback" button.
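
As a minimal illustration (the product name and identifier values are made up), the markup could look like this, with one value per field rather than comma-separated values crammed into a single field:

  <!-- One value per field: gtin13 and isbn each get their own property -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Hardcover Novel",
    "gtin13": "9780123456789",
    "isbn": "9780123456789"
  }
  </script>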

John and Duy answered two separate questions, from Pierre and Latha. To Pierre's question, "The disavow feature in Search Console is currently unavailable for domain properties. What are the options then?", John suggested verifying the site at the URL-prefix level, which should not require any additional verification tokens. He also cautioned against wasting time disavowing random links that some tool has flagged, and suggested using the disavow tool only in situations where you have paid for links and cannot get them removed afterwards.

Latha asked how the helpful content update affects sites that accept guest posts for money. Duy answered that Google's systems can identify low-quality or low-value content created primarily for search engines, and that such sites risk lower rankings as a result, not only from this update but also from other systems already in place.

John and Gary answered questions about Google's canonical tag and how it affects crawling and indexing of webpage content. Canonicalization is based on more than just the link rel="canonical" element: Google's systems take redirects, sitemaps, internal links, external links, and more into account when deciding which URL best represents the content. It is important that all of these signals align so that the proper URL is chosen. The canonicalization process does not affect a page's ranking, only which URL is shown.
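
As a sketch of what aligned signals look like (the URLs are placeholders): the parameterized duplicate declares the canonical, and the sitemap lists only that canonical URL.

  <!-- On the duplicate variant, e.g. https://example.com/shoes?sessionid=123 -->
  <link rel="canonical" href="https://example.com/shoes">

  <!-- In the XML sitemap, list only the canonical URL -->
  <url>
    <loc>https://example.com/shoes</loc>
  </url>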

Gary was asked whether Google is more likely to crawl and index pages with short content, since they would be cheaper to store in the index. The answer: storage cost does not drive crawling in this way; whether a page gets crawled depends on other signals as well, such as the internal and external links that point to a specific page or directory.

Paul asked an interesting question: could dynamic sorting of listings be a reason for product images not being indexed? Alan explained that this is unlikely. He suggested referencing product images from the product description pages themselves, and creating a sitemap file or providing a Google Merchant Center feed, so that Google can find all product pages without depending on listings pages.
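
For example, an image sitemap entry can point Google at both a product page and its image directly, independent of how listings are sorted (the URLs below are placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
    <url>
      <!-- The product description page, not the sortable listings page -->
      <loc>https://example.com/products/widget</loc>
      <image:image>
        <image:loc>https://example.com/images/widget.jpg</image:loc>
      </image:image>
    </url>
  </urlset>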

Sergey posed a question about the timeline of a site migration: after four months, there are still no signs of the new domain inheriting the SERP positions of the old one. Ranking fluctuations are normal, especially in the middle of a migration. Moving only one section of a site is not necessarily treated as an entire site move, and it is hard to suggest next steps without seeing the specific site. For more focused advice, Sergey was encouraged to post in the help forums with the details of his specific situation.

Flavio asked whether HTTP/3 could improve SEO thanks to its better performance. Google currently uses HTTP/3 neither as a factor in its ranking algorithms nor as part of its crawling process. Performance gains from the protocol would likely be negligible for the Core Web Vitals metrics used in the page experience ranking factor, so HTTP/3 would only help SEO indirectly, if at all, through signals that search algorithms do use, such as better user experience and content relevance.

In response to Andrea's question about why Google continues to use backlinks as a ranking factor when link building campaigns are disallowed, Duy first noted that backlinks now have much less impact on rankings than when Google Search first started out. He added that full link building campaigns count as link spam under the company's spam policy and are detected by its algorithms at scale and nullified accordingly. Spammers or SEOs spending money on links therefore have no way of knowing whether the money was well spent, since the links may already have been nullified by Google's systems.

John and Lizzi answered two questions about on-page SEO: does it matter if the vast majority of anchors for internal content links are identical, and if you add website schema, should you add software application schema too?

John explained that for menus and products it is normal to link with the same anchor text every time, so there is no need to vary anchors for SEO in this instance.

Lizzi said that whether one should also add software application schema depends on the website, but advised that structured data be nested so that only one WebSite node appears on the homepage. Google has updated its documentation to add website schema guidance for brands [1].
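
A minimal sketch of what that nesting might look like on a homepage, with a single WebSite node and an Organization nested inside it (the names and URL are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "publisher": {
      "@type": "Organization",
      "name": "Example Brand Inc."
    }
  }
  </script>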

[1] https://developers.google.com/search/docs/data-types/corporate-contact

In response to a question about how much of an issue a large number of noindex pages might be, Gary responded that noindex is a very powerful tool and that having many such pages will not influence how Google crawls and indexes a site. In response to Yasser's question about whether having URLs in a different language than the page content affects ranking, Alan said there is no negative effect from an SEO point of view, though users may care when they share the URL with others.

Kristen asked how content creators should respond to sites that scrape and modify their content, then outrank them in search results. Google has algorithms to detect and demote such sites, and encourages reporting them via the spam report form: https://support.google.com/webmasters/answer/93713

In response to Ridwan's question, "Is it true that Google rotates indexed pages?", the answer is no: Google does not rotate the index based on the day of the week.

John and Alan answered two questions, from Anton and Joydev, about the Search Console index reports, a website's crawl budget, and enabling content for Discover.

Anton asked whether there is a specific ratio between indexed and non-indexed pages to watch out for in order to recognize crawl budget potentially wasted on non-indexed pages. John responded that there is no magic ratio, and that small to medium-sized sites don't need to worry much about crawl budget. Removing unnecessary internal links is more a site hygiene topic than an SEO one.

Joydev asked how to enable the Discover feature. Alan responded that no action is needed to make content eligible for Discover; this happens automatically. Google also uses different criteria when deciding whether to show content in Discover versus search results, so receiving traffic from Search does not guarantee traffic from Discover.

Gary and Lizzi answered two SEO questions, one about noindex and one about thin content in relation to longer articles. Sam asked if having many noindex pages linked from spammy sites affects crawl budget. Gary said that noindex is designed to keep certain pages out of the index and doesn't come with unintended consequences such as affecting crawl budget. In response to Lalindra's question about whether it counts as thin content when an article covering a lengthy topic is broken into smaller, interlinked pieces, Lizzi said it depends on the content of each page and what is most helpful for users; the focus should be on providing sufficient value on each page for whatever topic it covers.

Gary and Alan answer two questions related to web development: the impact of 404 errors on crawling and processing URLs, and the status of Key Moments video markup.

Gary addressed Michelle's question about the effect of having a lot of 404 errors on a website. 404s are a normal part of the web, and having lots of linked 404 pages won't affect overall crawling of a site; it is other conditions, such as server errors, that can cause Google to slow or stop crawling at a site level.

Alan replied to Iman's question about the current state of Key Moments video markup. It is live and used by many providers other than YouTube, such as Vimeo. If the online documentation does not provide enough information, users can also search public forums for answers.

John addressed a user's question about why their newly launched website, "Weird All", does not appear in searches for the site's full name. This is likely due to the name's similarity to that of the singer "Weird Al": search engines may interpret "Weird All" as a typo and guide people toward what they are presumably looking for, especially since many searchers will be familiar with Weird Al's work. John advised against using names that are typos of well-known entities, as it is difficult to differentiate yourself from them.

Anonymous asked whether it is possible to get FAQ rich results with plain HTML, without schema markup. Lizzi responded that, currently, FAQ schema markup is required for that enhancement to appear in search results. It is worth checking the documentation for the specific feature in question, though, since some features can pick up plain HTML without schema markup.
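
For reference, FAQ markup for that rich result follows this shape (the question and answer text are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
      "@type": "Question",
      "name": "Do you ship internationally?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, we ship to most countries."
      }
    }]
  }
  </script>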

Esben asked John for his opinion on self-referencing canonicals and whether they help with deduplication. In John's view they do little on their own, but they become useful when a page is reachable under other URLs, for example with UTM tracking parameters appended. As always, it is worth checking the relevant documentation when using schema markup and canonicals for optimization.

Alan, Duy, and John answered several questions related to search engine optimization.

The first question was whether there should be some percentage of randomness in the search results. The answer is no: the primary purpose of a search engine is to provide the most useful results to the people performing searches.

Kunal asked why Google is not taking action on copied or spun Web Stories. The answer is that Google is aware of these attempts and is looking into them; sites with spammy, scraped content violate Google's spam policy, and its algorithms are designed to demote them in search results.

Tom asked whether moderated blog comment pages internally linked with rel="ugc" get deindexed. The answer is no; if you want to keep a page out of the index, use the noindex robots meta tag instead.
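
The two mechanisms look like this side by side (the URL is a placeholder): rel="ugc" only labels the nature of a link, while noindex on the target page is what keeps it out of the index.

  <!-- rel="ugc" labels a user-generated link; it does not deindex the target -->
  <a href="https://example.com/blog/comments/42" rel="ugc">comment permalink</a>

  <!-- To keep a page out of the index, put this in that page's <head> -->
  <meta name="robots" content="noindex">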

Abdul Rahim asked how Google evaluates product reviews for non-physical products, such as digital products, that don't have affiliate links. An affiliate link is not necessary when reviewing a product; what is recommended is providing useful links to other resources that may help the reader.

Siddiq also asked if there are any shortcuts for writing descriptions for 10,000+ pages on his website. Generating meta descriptions programmatically can be a good idea, especially for larger database-driven sites, as long as the descriptions remain unique and relevant to each page; Google's documentation has guidance on this subject. John asked for best practices for purging 10,000 pages of thin blog content. The best practice is simply to delete them, though doing so does not by itself make the site more valuable.
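
As a hypothetical sketch of the programmatic approach: a server-side template fills in fields from the product database, so each page still gets a unique, relevant description (the {product.*} placeholders are invented for illustration):

  <!-- Rendered server-side per page; each placeholder comes from that page's database record -->
  <meta name="description"
        content="{product.name} by {product.brand}: {product.summary}. {product.availability}.">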

A user who is new to SEO asked Duy whether backlinks are powerful or whether they should focus on maximizing the quality of their site. Duy answered that while links have been a predominant signal over the past 20 years, many algorithms have since launched that nullify link spam, so money is better spent on creating a great website with a great user experience and helpful content than on link spamming.

John answered a question about whether a page needs to have a cached copy to appear in the search results. The answer is no: the caching system is independent of Search's indexing and ranking, and the absence of a cached copy is not an indication of quality.

Esben Rasmussen wanted to know why Google Search Console reports an unknown referring URL for 400-series error URLs. This may be because Google does not index every page it crawls; if the referring page never got indexed, it can be reported as unknown.

Yusuf asked whether WordPress websites or blogs should mark automatically created pagination pages such as abc.com/page/one and page/two as nofollow. It would be better to use robots.txt disallow rules instead: they give greater control over crawling of those pages and require less effort than adding nofollow to every URL pointing to them.
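
A minimal robots.txt sketch, assuming the pagination lives under /page/ (note that disallow rules control crawling, not indexing):

  # Block crawling of auto-generated pagination pages
  User-agent: *
  Disallow: /page/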

John and Lizzi took two questions, one on the Search Console Performance report and one on paywalled content. In response to Bobby's question about confusing metrics in the Performance report, this is normal: there is a difference between looking at the data by query and by page, and some privacy filtering occurs in the per-query data. Trends tend to be the same either way, and more information is available in the blog post "A Deep Dive Into Performance Data".

In response to Michal's question about paywalled content in search results, the first step is to make sure the paywalled content markup is implemented correctly. This helps prevent unsatisfied users who are not paying for access from expecting to see the full content.
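
For reference, the paywalled content markup identifies the gated section like this (the headline and CSS class are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example subscriber-only article",
    "isAccessibleForFree": false,
    "hasPart": {
      "@type": "WebPageElement",
      "isAccessibleForFree": false,
      "cssSelector": ".paywalled-section"
    }
  }
  </script>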

Two further topics were keeping paywalled content out of Search and whether a "spam score" affects an individual page's ranking on Google. Usama asked if there is a way to improve the spam score for a website, outside of disavowing links. John replied that Google does not use third-party SEO tools' scores for individual pages, but that doesn't make such tools useless: try to understand what the tool is telling you and whether there is something actionable behind it. And to prevent paywalled content from appearing in Search entirely, consider using noindex on those pages.

Alan and John answered questions about product pages and hreflang tags. Alan suggested that when products are sold out and will not be restocked, it is acceptable from a search perspective to delete the page, but from a usability perspective it may be better to keep the page up or redirect it, in case there are external references or customers who have bookmarked it. John addressed Tomas' question about hreflang tags with missing return tags: any valid hreflang annotations will be taken into account, while broken ones are simply ignored. If some tags work and some don't, the working ones are used and the broken ones dropped, with no negative effect other than those pages being excluded from hreflang. John still recommends fixing broken annotations for peace of mind.

Damian asked how a website can implement hreflang when there is no control over brand sites in many countries. Gary suggested using sitemaps as an easy way to control all the hreflang annotations from a single location; more information on setting this up can be found on sitemaps.org. John added that hreflang sitemaps can be placed in any folder location: they are ordinary sitemap files with additional hreflang annotations, so they can be positioned like any other sitemap file and submitted via robots.txt or via Search Console for verified sites. The same applies to sitemaps with image or video annotations.
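
A minimal hreflang sitemap sketch (the domains are placeholders); note that each URL entry repeats the full set of alternates, including itself, so the return annotations stay intact:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
    <url>
      <loc>https://example.com/en/widgets</loc>
      <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/widgets"/>
      <xhtml:link rel="alternate" hreflang="de" href="https://example.de/de/widgets"/>
    </url>
    <url>
      <loc>https://example.de/de/widgets</loc>
      <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/widgets"/>
      <xhtml:link rel="alternate" hreflang="de" href="https://example.de/de/widgets"/>
    </url>
  </urlset>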

Alan and John provided advice on two topics related to website indexing. First, Alan said that if a website is not being indexed, he would check Google Search Console for errors blocking crawlers from the site. He also advised ensuring the content is high quality, as Google does not index everything on the web.

John then responded to Amy's question about what to do with the x-default hreflang when adding a new language. It is up to the site owner to decide which language should be the default for each page set, and it does not have to be the same across the entire website; focus on what would be useful for users when choosing the default language, rather than on internal processes.

Alan, Gary, and Lizzi discussed three frequently asked questions about product reviews, removing old content, and text in images influencing ranking in image search.

Lucy asked, "Why do the product review updates impact non-review content?" Alan suggested that broad impacts across a site are not likely due to the product reviews update, but to another, unrelated update.

Anonymous asked, "What's the best way to remove old content and stop it from being indexed?" Gary suggested either deleting the page completely and serving a 404 or 410 status code for its URL, or redirecting it to a similar page that still helps the user accomplish their goal.
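
As a sketch, a retired page can be made to return 410 at the server; this example assumes nginx, and the path is invented:

  # nginx: serve 410 Gone for a permanently retired page
  location = /old-product {
      return 410;
  }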

Finally, Sean B asked whether text in images influences ranking in Image Search, specifically for t-shirt printing. Lizzi said that ultimately it depends on what makes sense for the user rather than for search engines.

Akshay Kumar Sharma and Alfonso asked two related questions about improving SEO: is it beneficial for local businesses to use many local listing websites, and is a query parameter added to a site URL bad for SEO?

Alan answered that while it's not necessary to use local listing sites to improve SEO, they can be used to get more traffic unrelated to Search.

John answered that query parameters added to URLs are not, on their own, bad for SEO, as long as they don't cause the number of URLs found on the website to explode. For sites larger than medium size, he suggested keeping the number of parameters added to a minimum.
