In a recent episode of Google’s Search Off The Record podcast, team members got hands-on with Gemini to explore creating SEO-related content.
However, their experiment raised concerns over factual inaccuracies when relying on AI tools without proper vetting.
The discussion involved Lizzi Harvey, Gary Illyes, and John Mueller taking turns utilizing Gemini to write sample social media posts on technical SEO concepts.
As they analyzed Gemini’s output, Illyes highlighted a limitation shared by all AI tools:
“My bigger problem with pretty much all generative AI is the factuality – you always have to fact check whatever they are spitting out. That kind of scares me that now we are just going to read it live, and maybe we are going to say stuff that is not even true.”
The concerns stemmed from an AI-generated tweet suggesting the use of rel="prev"/rel="next" for pagination – markup that Google has deprecated.
Gemini suggested publishing the following tweet:
“Pagination causing duplicate content headaches? Use rel=prev, rel=next to guide Google through your content sequences. #technicalSEO, #GoogleSearch.”
Harvey immediately identified the advice as outdated, and Mueller confirmed that rel="prev" and rel="next" remain unsupported:
“It’s gone. It’s gone. Well, I mean, you can still use it. You don’t have to make it gone. It’s just ignored.”
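For readers unfamiliar with the markup in question, this is roughly what the now-ignored pagination hints looked like (the URLs here are illustrative):

```html
<!-- Formerly placed in the <head> of each paginated page.
     Google announced in 2019 that it no longer uses these hints for indexing. -->
<link rel="prev" href="https://example.com/articles?page=1">
<link rel="next" href="https://example.com/articles?page=3">
```

As Mueller notes, leaving the tags in place doesn't cause harm – Google simply ignores them.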
Earlier in the podcast, Harvey had warned that inaccuracies could result from outdated information in the AI's training data:
“If there’s enough myth circulating or a certain thought about something or even outdated information that has been blogged about a lot, it might come up in our exercise today, potentially.”
Sure enough, it took only a short time for outdated information to come up.
While the Google Search Relations team saw the potential for AI-generated content, their discussion stressed the need for human fact-checking.
Illyes’ concerns reflect the broader discourse around responsible AI adoption. Human oversight is necessary to prevent the spread of misinformation.
As generative AI use increases, remember that its output can’t be blindly trusted without verification from subject matter experts.
While AI-powered tools can potentially aid in content creation and analysis, as Google’s own team illustrated, a healthy degree of skepticism is warranted.
Blindly deploying generative AI to create content can result in publishing outdated or harmful information that could negatively impact your SEO and reputation.
Hear the full podcast episode below:
Using AI-generated content for your website can be risky for SEO because the AI might include outdated or incorrect information.
Search engines like Google favor high-quality, accurate content, so publishing unverified AI-produced material can hurt your website’s search rankings. For example, if the AI promotes outdated practices like using rel="prev"/rel="next" tags for pagination, it can mislead your audience and search engines, damaging your site’s credibility and authority.
It’s essential to carefully fact-check and validate AI-generated content with experts to ensure it follows current best practices.
To ensure the accuracy of AI-generated content, companies should:
- Fact-check claims against current official documentation before publishing.
- Have subject matter experts review drafts, as the Search Relations team did during the episode.
- Watch for outdated practices that are still widely blogged about, such as deprecated markup.
Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, April 2024.