Google-Extended was introduced by Google in September 2023 to give website owners granular control over how their content is used for AI training. It governs data collection for training Google's Gemini (formerly Bard) models and Vertex AI generative APIs, separate from traditional Google Search indexing. Google-Extended is Google's response to publisher concerns about AI training: it lets sites remain in Google Search while opting out of AI model training. Website owners can block Google-Extended through robots.txt without affecting their Google Search visibility, keeping a clear separation between search indexing and AI training data collection.
User Agent String
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Extended/1.0; +https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers)
How to Control Google-Extended
Block Completely
To prevent Google-Extended from accessing your entire website, add this to your robots.txt file:
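User-agent: Google-Extended
Disallow: /
This opt-out applies only to AI training; Googlebot continues to crawl and index the site for Search as usual.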
⚠️ AI Training Notice
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.
Detection Patterns
Multiple ways to detect Google-Extended in your application:
Basic Pattern
/Google\-Extended/i
Strict Pattern
/^Mozilla\/5\.0 AppleWebKit\/537\.36 \(KHTML, like Gecko; compatible; Google\-Extended\/1\.0; \+https:\/\/developers\.google\.com\/search\/docs\/crawling\-indexing\/google\-common\-crawlers\)$/
Flexible Pattern
/Google\-Extended[\s\/]?[\d.]*/i
Vendor Match
/.*Google.*Google\-Extended/i
Implementation Examples
// PHP Detection for Google-Extended
function detect_google_extended() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Google\\-Extended/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Google-Extended detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version of the page
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
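A minimal way to wire this up is to call the function early in a shared entry point, before any page output; the include file name below is illustrative.
// Example usage in a front controller or header include (hypothetical file name)
require_once __DIR__ . '/bot-detection.php';

if (detect_google_extended()) {
    // Cache headers have already been sent; a cached copy may have been served and exited.
    // Anything below runs only when no cached copy exists.
}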
# Python/Flask Detection for Google-Extended
import re
from flask import request, make_response

def detect_google_extended():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'Google-Extended'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response with caching headers
        # (apply/return these headers on the response you actually send)
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False

# Django Middleware
class GoogleExtendedMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (log it, add headers, or serve a cached page)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        # Check the User-Agent header for the Google-Extended token
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'Google-Extended', user_agent, re.IGNORECASE))
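If you go the Django route, the middleware above has to be registered in settings.py; the dotted path below is a placeholder for wherever the class actually lives.
# settings.py (module path is illustrative)
MIDDLEWARE = [
    # ... Django's default middleware ...
    'yourapp.middleware.GoogleExtendedMiddleware',
]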
// JavaScript/Node.js Detection for Google-Extended
const express = require('express');
const app = express();

// Middleware to detect Google-Extended
function detectGoogleExtended(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /Google-Extended/i;

  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('Google-Extended detected from IP:', req.ip);

    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });

    // Mark request as bot
    req.isBot = true;
    req.botName = 'Google-Extended';
  }

  next();
}

app.use(detectGoogleExtended);
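Downstream handlers can then branch on the req.isBot flag the middleware sets; the route below is a hypothetical sketch of refusing AI-training access while serving normal visitors.
// Hypothetical route that reacts to the flag set by detectGoogleExtended
app.get('/articles/:slug', (req, res) => {
  if (req.isBot && req.botName === 'Google-Extended') {
    // Refuse access for AI training, or swap in a lightweight cached page instead
    return res.status(403).send('Content not available for AI training crawlers.');
  }
  res.send('Full article content');
});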
# Apache .htaccess rules for Google-Extended

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Google\-Extended [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Google\-Extended [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "Google\-Extended" is_bot=1

# Add cache headers for this bot (<If> requires Apache 2.4+ and mod_headers)
<If "%{HTTP_USER_AGENT} =~ /Google\-Extended/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
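The SetEnvIfNoCase rule above exposes an is_bot variable to the application layer; under mod_php it usually appears in $_SERVER (possibly with a REDIRECT_ prefix when rewrite rules have run), so treat the key name in this sketch as an assumption to verify for your setup.
// Reading the environment variable set by SetEnvIfNoCase (mod_php; key name may vary)
$is_bot = !empty($_SERVER['is_bot']) || !empty($_SERVER['REDIRECT_is_bot']);
if ($is_bot) {
    // Treat the request as Google-Extended traffic (e.g. skip personalization, serve cached HTML)
}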
# Nginx configuration for Google-Extended

# Map user agent to variable
map $http_user_agent $is_google_extended {
    default 0;
    ~*Google\-Extended 1;
}

server {
    # Block the bot completely
    if ($is_google_extended) {
        return 403;
    }

    # Or serve cached content (try_files is not allowed inside an if block,
    # so only the document root is switched for bot requests)
    location / {
        if ($is_google_extended) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_google_extended) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
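The $is_google_extended variable from the map block can also be reused elsewhere in the configuration, for example to keep a separate access log of Google-Extended requests (the log path below is illustrative, and the if= parameter of access_log requires nginx 1.7.0+).
# Optional: log Google-Extended requests separately for auditing
access_log /var/log/nginx/google-extended.log combined if=$is_google_extended;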
Should You Block This Bot?
Recommendations based on your website type; a sample robots.txt for the partial-block cases follows the table:
| Site Type | Recommendation | Reasoning |
|---|---|---|
| E-commerce | Limit Access | Protect pricing and inventory data from AI training |
| Blog/News | Consider Blocking | Your content may be used for AI training without compensation |
| SaaS Application | Block | No benefit for application interfaces; preserve resources |
| Documentation | Selective | Allow for public docs, block for internal docs |
| Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets |
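For the "Selective" and "Limit" rows, the robots.txt opt-out doesn't have to be all-or-nothing; the rules can be scoped to the sections you want kept out of AI training. The paths below are placeholders:
User-agent: Google-Extended
Disallow: /internal/
Disallow: /pricing/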