Google-Extended User Agent - Google Bot Details | CL SEO

Google-Extended

Google · Since 2023
AI · Respects robots.txt
#ai #bard #google #training

What is Google-Extended?

Google-Extended is a crawler control introduced by Google in September 2023 to give website owners granular control over how their content is used for AI training. It governs data collection for training Google's Bard chatbot (since rebranded Gemini) and Vertex AI generative APIs, separately from traditional Google Search indexing. Google-Extended was Google's response to publisher concerns about AI training: it lets sites remain in Google Search while opting out of AI model training. Website owners can block Google-Extended through robots.txt without affecting their Google Search visibility, giving a clear separation between search indexing and AI training data collection.
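As a minimal sketch of that separation, a robots.txt file can keep Googlebot indexing the site while opting out of AI training:

# Keep Google Search indexing
User-agent: Googlebot
Allow: /

# Opt out of Gemini (Bard) / Vertex AI training
User-agent: Google-Extended
Disallow: /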

User Agent String

Note: Google's own documentation states that Google-Extended does not send a separate HTTP user agent; it is a robots.txt token honored by Google's existing crawlers. The string below is how some tools represent the bot, so treat user-agent-based detection as best-effort.

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Extended/1.0; +https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers)

How to Control Google-Extended

Block Completely

To prevent Google-Extended from accessing your entire website, add this to your robots.txt file:

# Block Google-Extended
User-agent: Google-Extended
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Google-Extended
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: Google's crawlers ignore Crawl-delay entirely, so this directive only affects other bots that honor it):

User-agent: Google-Extended
Crawl-delay: 10

How to Verify Google-Extended

Verification Method:
Check the requesting IP address against Google's published crawler IP ranges, and confirm it with a reverse DNS lookup followed by a forward lookup of the returned hostname.

Learn more in the official documentation.
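A minimal sketch of the reverse/forward DNS check in Python follows; the function name and the sample IP are illustrative, and in practice you would pass the IP of a live request:

import socket

def is_verified_google_crawler(ip: str) -> bool:
    """Reverse DNS check, then forward-confirm the hostname."""
    try:
        host = socket.gethostbyaddr(ip)[0]  # reverse DNS lookup
    except socket.herror:
        return False
    # Google's crawlers resolve to google.com or googlebot.com hostnames
    if not host.endswith(('.google.com', '.googlebot.com')):
        return False
    try:
        # Forward-confirm: the hostname must resolve back to the same IP
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return False
    return ip in forward_ips

print(is_verified_google_crawler('66.249.66.1'))  # sample IP from Google's crawler range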

⚠️ AI Training Notice
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.

Detection Patterns

Multiple ways to detect Google-Extended in your application:

Basic Pattern

/Google\-Extended/i

Strict Pattern

/^Mozilla\/5\.0 AppleWebKit\/537\.36 \(KHTML, like Gecko; compatible; Google\-Extended\/1\.0; \+https:\/\/developers\.google\.com\/search\/docs\/crawling\-indexing\/google\-common\-crawlers\)$/

Flexible Pattern

/Google\-Extended[\s\/]?[\d\.]*/i

Vendor Match

/.*Google.*Google\-Extended/i
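As an illustrative sketch, the basic and flexible patterns above can be exercised in Python; the sample user agent string simply mirrors the one shown earlier and is not guaranteed to appear in real traffic:

import re

# The detection patterns from above, translated to Python syntax
BASIC = re.compile(r'Google-Extended', re.IGNORECASE)
FLEXIBLE = re.compile(r'Google-Extended[\s/]?[\d.]*', re.IGNORECASE)

sample_ua = ('Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; '
             'Google-Extended/1.0; +https://developers.google.com/search/docs/'
             'crawling-indexing/google-common-crawlers)')

print(bool(BASIC.search(sample_ua)))        # True
print(FLEXIBLE.search(sample_ua).group())   # 'Google-Extended/1.0'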

Implementation Examples

// PHP Detection for Google-Extended
function detect_google_extended() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Google\-Extended/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Google-Extended detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: Serve cached version
        if (file_exists('cache/' . md5($_SERVER['REQUEST_URI']) . '.html')) {
            readfile('cache/' . md5($_SERVER['REQUEST_URI']) . '.html');
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for Google-Extended
import re

from flask import request

BOT_PATTERN = re.compile(r'Google-Extended', re.IGNORECASE)

def detect_google_extended():
    user_agent = request.headers.get('User-Agent', '')
    return bool(BOT_PATTERN.search(user_agent))

# Attach cache headers to responses served to the bot
# (assumes a Flask app instance named `app`)
@app.after_request
def add_bot_headers(response):
    if detect_google_extended():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django Middleware
class GoogleExtendedMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(BOT_PATTERN.search(user_agent))

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. log it or serve a cached page)
            pass
        return self.get_response(request)
// JavaScript/Node.js Detection for Google-Extended
const express = require('express');
const app = express();

// Middleware to detect Google-Extended
function detectGoogleExtended(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Google-Extended/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Google-Extended detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Google-Extended';
    }
    next();
}

app.use(detectGoogleExtended);
# Apache .htaccess rules for Google-Extended

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Google\-Extended [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Google\-Extended [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "Google\-Extended" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /Google\-Extended/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Google-Extended

# Map user agent to variable
map $http_user_agent $is_google_extended {
    default 0;
    ~*Google\-Extended 1;
}

server {
    # Block the bot completely
    if ($is_google_extended) {
        return 403;
    }

    # Or serve cached content (try_files is not allowed inside "if",
    # so rewrite bot requests into the cached tree instead)
    location / {
        if ($is_google_extended) {
            rewrite ^(.*)$ /cached$1 last;
        }
        try_files $uri @backend;
    }

    location /cached/ {
        root /var/www;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_google_extended) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation    | Reasoning
E-commerce       | Limit Access      | Protect pricing and inventory data from AI training
Blog/News        | Consider Blocking | Your content may be used for AI training without compensation
SaaS Application | Block             | No benefit for application interfaces; preserve resources
Documentation    | Selective         | Allow for public docs, block for internal docs
Corporate Site   | Limit             | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Google-Extended
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

# Blocking AI training bot
User-agent: Google-Extended
Disallow: /

SaaS/Application Configuration

User-agent: Google-Extended
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
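To check that a configuration like this behaves as intended, Python's standard library robots.txt parser can be pointed at a site; https://example.com and the paths below are placeholders:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser('https://example.com/robots.txt')  # placeholder URL
rp.read()

for path in ('/app/dashboard', '/pricing/', '/docs/intro'):
    allowed = rp.can_fetch('Google-Extended', 'https://example.com' + path)
    print(path, '->', 'allowed' if allowed else 'blocked')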

Quick Reference

User Agent Match: Google-Extended
Robots.txt Name: Google-Extended
Category: AI
Respects robots.txt: Yes