Claude-Web User Agent - Anthropic Bot Details | CL SEO

Claude-Web

Anthropic · Since 2024 · AI · Respects robots.txt
#ai #claude #anthropic #crawler

What is Claude-Web?

Claude-Web is Anthropic's web crawler that enables Claude AI to browse and retrieve real-time information from the internet during conversations with users. Similar to ChatGPT's browsing capability, this user agent represents Claude's ability to access current web content when users request up-to-date information or ask for analysis of specific web pages.

The crawler is designed with a strong emphasis on safety and ethical web access, following Anthropic's constitutional AI principles. It respects robots.txt directives and website preferences, allowing site owners to control whether Claude can access their content. When Claude-Web visits a site, it's typically in response to a specific user query requiring current information, making each visit purposeful rather than speculative. The bot operates with careful rate limiting to avoid overwhelming servers and maintains transparent identification.

For website owners, allowing Claude-Web access means their content can be referenced and discussed in Claude conversations, potentially increasing their visibility in AI-assisted research and analysis workflows.

User Agent String

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Claude-Web/1.0; +https://www.anthropic.com)

How to Control Claude-Web

Block Completely

To prevent Claude-Web from accessing your entire website, add this to your robots.txt file:

# Block Claude-Web
User-agent: Claude-Web
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Claude-Web
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: Claude-Web
Crawl-delay: 10

How to Verify Claude-Web

Verification Method:
Verify requests come from Anthropic IP ranges

Learn more in the official documentation.
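As a minimal sketch of what IP-based verification can look like, the Python below checks a request's source address against a list of crawler ranges. The ranges shown are RFC 5737 placeholder addresses, not real Anthropic ranges; substitute the list published in the official documentation.

import ipaddress

# Placeholder ranges (RFC 5737 documentation addresses), NOT real
# Anthropic ranges; load the published list from the official docs.
ANTHROPIC_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_anthropic_ip(remote_addr):
    """True if remote_addr falls inside a known Anthropic range."""
    try:
        ip = ipaddress.ip_address(remote_addr)
    except ValueError:
        return False
    return any(ip in net for net in ANTHROPIC_RANGES)

def verify_claude_web(user_agent, remote_addr):
    # The user agent alone is trivially spoofed; require both the UA
    # token and a source IP inside the published ranges.
    return "Claude-Web" in user_agent and is_anthropic_ip(remote_addr)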

⚠️ AI Training Notice
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.

Detection Patterns

Multiple ways to detect Claude-Web in your application:

Basic Pattern

/Claude\-Web/i

Strict Pattern

/^Mozilla\/5\.0 AppleWebKit\/537\.36 \(KHTML, like Gecko; compatible; Claude\-Web\/1\.0; \+https:\/\/www\.anthropic\.com\)$/

Flexible Pattern

/Claude\-Web[\s\/]?[\d\.]*/i

Vendor Match

/.*Anthropic.*Claude\-Web/i
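A quick way to sanity-check the first three patterns is shown below in Python (the regex bodies are the same, minus the /.../i delimiters). Note the vendor pattern expects the vendor name to precede the product token, so it will not match the UA string shown above, where anthropic.com appears after Claude-Web.

import re

UA = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; "
      "compatible; Claude-Web/1.0; +https://www.anthropic.com)")

PATTERNS = {
    "basic": r"Claude-Web",
    "strict": r"^Mozilla/5\.0 AppleWebKit/537\.36 \(KHTML, like Gecko; "
              r"compatible; Claude-Web/1\.0; \+https://www\.anthropic\.com\)$",
    "flexible": r"Claude-Web[\s/]?[\d.]*",
}

for name, pattern in PATTERNS.items():
    matched = bool(re.search(pattern, UA, re.IGNORECASE))
    print(f"{name}: {'match' if matched else 'no match'}")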

Implementation Examples

// PHP Detection for Claude-Web
function detect_claude_web() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Claude\-Web/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Claude-Web detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for Claude-Web
import re
from flask import Flask, request

app = Flask(__name__)
CLAUDE_WEB_RE = re.compile(r'Claude-Web', re.IGNORECASE)

def detect_claude_web():
    user_agent = request.headers.get('User-Agent', '')
    return bool(CLAUDE_WEB_RE.search(user_agent))

# Set cache headers on the response actually sent to the bot
@app.after_request
def add_bot_headers(response):
    if detect_claude_web():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django Middleware
class ClaudeWebMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (log, throttle, or serve cached pages)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(CLAUDE_WEB_RE.search(user_agent))
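To activate the Django middleware, register it in settings; the module path below is hypothetical, so use wherever ClaudeWebMiddleware actually lives in your project.

# settings.py (module path "myapp.middleware" is illustrative)
MIDDLEWARE = [
    # ... Django's default middleware ...
    "myapp.middleware.ClaudeWebMiddleware",
]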
// JavaScript/Node.js Detection for Claude-Web
const express = require('express');
const app = express();

// Middleware to detect Claude-Web
function detectClaudeWeb(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Claude-Web/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Claude-Web detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Claude-Web';
    }
    next();
}

app.use(detectClaudeWeb);
# Apache .htaccess rules for Claude-Web

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Claude\-Web [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Claude\-Web [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "Claude\-Web" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /Claude\-Web/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Claude-Web

# Map user agent to variable
map $http_user_agent $is_claude_web {
    default 0;
    ~*Claude\-Web 1;
}

server {
    # Block the bot completely
    if ($is_claude_web) {
        return 403;
    }

    # Or serve cached content. Note: try_files is not valid inside an
    # "if" block, so bot traffic is rewritten to an internal location
    # that serves the cached copies instead.
    location / {
        if ($is_claude_web) {
            rewrite ^ /cached$uri last;
        }
        try_files $uri @backend;
    }

    location /cached/ {
        internal;
        root /var/www;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_claude_web) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation    | Reasoning
E-commerce       | Limit Access      | Protect pricing and inventory data from AI training
Blog/News        | Consider Blocking | Your content may be used for AI training without compensation
SaaS Application | Block             | No benefit for application interfaces; preserve resources
Documentation    | Selective         | Allow for public docs, block for internal docs
Corporate Site   | Limit             | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Claude-Web
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

# Blocking AI training bot
User-agent: Claude-Web
Disallow: /

SaaS/Application Configuration

User-agent: Claude-Web
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
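Before deploying rules like these, you can sanity-check them with Python's stdlib robots.txt parser. One caveat: it implements the original robots.txt rules, so it does not understand wildcard paths like /*?sort= from the e-commerce example; treat this as a rough check only.

import urllib.robotparser

RULES = """\
User-agent: Claude-Web
Disallow: /app/
Disallow: /api/
Allow: /docs/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

print(rp.can_fetch("Claude-Web", "/app/settings"))  # False (disallowed)
print(rp.can_fetch("Claude-Web", "/docs/intro"))    # True (allowed)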

Quick Reference

User Agent Match: Claude-Web
Robots.txt Name: Claude-Web
Category: AI
Respects robots.txt: Yes