GPTBot is OpenAI's official web crawler, used to collect publicly available internet content for training and improving GPT models, including ChatGPT. Launched in August 2023, the bot respects robots.txt directives, giving website owners control over whether their content is crawled for AI training going forward. GPTBot identifies itself clearly in server logs and follows conventional crawling etiquette, including crawl delays and rate limits. Website owners who block GPTBot opt out of having their content used to train future GPT models, which may affect how well those models understand and reference their content. The bot targets high-quality, publicly accessible content and, according to OpenAI, filters out paywalled content, personally identifiable information, and content that violates its policies.
User Agent String
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.0; +https://openai.com/gptbot)
How to Control GPTBot
Block Completely
To prevent GPTBot from accessing your entire website, add this to your robots.txt file:
# Block GPTBot
User-agent: GPTBot
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
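For example, to keep GPTBot out of selected directories while leaving the rest of the site crawlable (the directory names below are placeholders; substitute your own paths):
# Block GPTBot from specific directories
User-agent: GPTBot
Disallow: /private/
Disallow: /internal/
Allow: /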
⚠️ AI Training Notice
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.
Detection Patterns
Multiple ways to detect GPTBot in your application:
Basic Pattern
/GPTBot/i
Strict Pattern
/^Mozilla\/5\.0 AppleWebKit\/537\.36 \(KHTML, like Gecko; compatible; GPTBot\/1\.0; \+https:\/\/openai\.com\/gptbot\)$/
Flexible Pattern
/GPTBot[\s\/]?[\d.]*/i
Vendor Match
/.*OpenAI.*GPTBot/i
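To sanity-check these patterns, here is a small Python sketch (illustrative only, not part of any official tooling) that tests the basic and flexible patterns against the official user agent string:
import re

# Official GPTBot user agent string, as shown above
UA = ('Mozilla/5.0 AppleWebKit/537.36 '
      '(KHTML, like Gecko; compatible; GPTBot/1.0; '
      '+https://openai.com/gptbot)')

patterns = {
    'basic': re.compile(r'GPTBot', re.IGNORECASE),
    'flexible': re.compile(r'GPTBot[\s/]?[\d.]*', re.IGNORECASE),
}

for name, pattern in patterns.items():
    # Both patterns should match the official string
    print(name, bool(pattern.search(UA)))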
Implementation Examples
// PHP Detection for GPTBot
function detect_gptbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/GPTBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('GPTBot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for GPTBot
import re

from flask import Flask, request

app = Flask(__name__)

def detect_gptbot():
    # True when the current request's User-Agent identifies GPTBot
    user_agent = request.headers.get('User-Agent', '')
    return bool(re.search(r'GPTBot', user_agent, re.IGNORECASE))

@app.after_request
def add_bot_headers(response):
    # Attach caching headers to responses served to GPTBot
    if detect_gptbot():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django Middleware
class GPTBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. log it or serve a cached page)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'GPTBot', user_agent, re.IGNORECASE))
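To activate the middleware, register it in settings.py. The module path below is an assumption; use whatever path the class actually lives at in your project:
MIDDLEWARE = [
    # ... Django's default middleware ...
    'yourapp.middleware.GPTBotMiddleware',  # hypothetical module path
]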
// JavaScript/Node.js Detection for GPTBot
const express = require('express');
const app = express();

// Middleware to detect GPTBot
function detectGPTBot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /GPTBot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('GPTBot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'GPTBot';
    }

    next();
}

app.use(detectGPTBot);
# Apache .htaccess rules for GPTBot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} GPTBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} GPTBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "GPTBot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /GPTBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for GPTBot

# Map the user agent to a variable
map $http_user_agent $is_gptbot {
    default 0;
    ~*GPTBot 1;
}

server {
    # Block the bot completely
    if ($is_gptbot) {
        return 403;
    }

    # Or serve cached content; note that "root" may be set inside
    # an "if" block, but "try_files" may not, so it stays outside
    location / {
        if ($is_gptbot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_gptbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
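Because any client can spoof the user agent, you may also want to confirm that traffic claiming to be GPTBot originates from OpenAI's published IP ranges (see https://openai.com/gptbot). Below is a minimal Python sketch assuming the ranges are published as JSON with a Googlebot-style "prefixes" list; the exact endpoint and format are assumptions, so verify them against OpenAI's current documentation:
import ipaddress
import json
import urllib.request

# Assumed endpoint; check OpenAI's documentation for the current one
RANGES_URL = 'https://openai.com/gptbot.json'

def load_gptbot_networks():
    with urllib.request.urlopen(RANGES_URL) as resp:
        data = json.load(resp)
    # Assumed format: {"prefixes": [{"ipv4Prefix": "x.x.x.x/24"}, ...]}
    return [ipaddress.ip_network(p['ipv4Prefix'])
            for p in data.get('prefixes', []) if 'ipv4Prefix' in p]

def is_verified_gptbot(client_ip, networks):
    # True only when the claimed GPTBot IP falls in a published range
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in networks)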
Should You Block This Bot?
Recommendations based on your website type:
| Site Type        | Recommendation    | Reasoning                                                     |
|------------------|-------------------|---------------------------------------------------------------|
| E-commerce       | Limit Access      | Protect pricing and inventory data from AI training           |
| Blog/News        | Consider Blocking | Your content may be used for AI training without compensation |
| SaaS Application | Block             | No benefit for application interfaces; preserve resources     |
| Documentation    | Selective         | Allow for public docs, block for internal docs                |
| Corporate Site   | Limit             | Allow for public pages, block sensitive areas like intranets  |