CCBot User Agent - Common Crawl Bot Details | CL SEO

CCBot

Operated by Common Crawl · Active since 2011
Categories: AI, Other · Respects robots.txt
#dataset #ai-training #crawler #open-data

What is CCBot?

CCBot is the web crawler operated by Common Crawl, a non-profit organization that builds and maintains open datasets of web crawl data. Since 2011, Common Crawl has been creating monthly snapshots of billions of web pages, making this data freely available to researchers, companies, and individuals. The datasets produced by CCBot have become fundamental resources for training large language models and other AI systems, with major tech companies and research institutions relying on Common Crawl data. The bot follows ethical crawling practices, respects robots.txt, and operates at a scale that captures a significant portion of the public web. By making web data accessible in a structured format, Common Crawl democratizes access to web-scale datasets that would otherwise be available only to large corporations.

User Agent String

CCBot/2.0 (https://commoncrawl.org/faq/)

How to Control CCBot

Block Completely

To prevent CCBot from accessing your entire website, add this to your robots.txt file:

# Block CCBot
User-agent: CCBot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: CCBot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: CCBot
Crawl-delay: 10
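Before deploying any of these rules, you can sanity-check how a compliant parser reads them. A minimal sketch using Python's standard urllib.robotparser (the directives mirror the examples above; note this parser implements the original robots.txt spec, so wildcard rules are not expanded):

# Sanity-check robots.txt rules with Python's standard library.
import urllib.robotparser

rules = """\
User-agent: CCBot
Crawl-delay: 10
Disallow: /admin/
Disallow: /private/
Allow: /public/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Check a few representative paths against the rules.
for path in ("/admin/page", "/public/page", "/blog/post"):
    print(path, parser.can_fetch("CCBot", path))
print("Crawl delay:", parser.crawl_delay("CCBot"))
# Expected: /admin/page False, /public/page True, /blog/post True, delay 10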

How to Verify CCBot

Verification Method:
Check user agent string and behavior patterns

Learn more in the official documentation.
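One practical way to review behavior patterns is to scan your access logs for requests that claim the CCBot user agent and check whether the request rate per IP looks like a well-behaved crawler. A minimal sketch, assuming a combined-format access log at the hypothetical path /var/log/nginx/access.log:

# Count requests per IP that claim the CCBot user agent.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumption: adjust to your setup
hits = Counter()

with open(LOG_PATH) as log:
    for line in log:
        if re.search(r"CCBot", line, re.IGNORECASE):
            ip = line.split(" ", 1)[0]  # first field in combined log format
            hits[ip] += 1

for ip, count in hits.most_common(10):
    print(f"{ip}: {count} requests")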

⚠️ AI Training Notice
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.

Detection Patterns

Multiple ways to detect CCBot in your application:

Basic Pattern

/CCBot/i

Strict Pattern

/^CCBot\/2\.0 \(https:\/\/commoncrawl\.org\/faq\/\)$/

Flexible Pattern

/CCBot[\s\/]?[\d.]*/i

Vendor Match

/CCBot.*commoncrawl\.org/i
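A quick way to validate these patterns is to run them against the official user agent string. A minimal Python sketch (the vendor pattern matches the commoncrawl.org URL because the stock string never contains the literal words "Common Crawl"):

# Test all four detection patterns against the official CCBot string.
import re

UA = "CCBot/2.0 (https://commoncrawl.org/faq/)"

patterns = {
    "basic":    re.compile(r"CCBot", re.IGNORECASE),
    "strict":   re.compile(r"^CCBot/2\.0 \(https://commoncrawl\.org/faq/\)$"),
    "flexible": re.compile(r"CCBot[\s/]?[\d.]*", re.IGNORECASE),
    "vendor":   re.compile(r"CCBot.*commoncrawl\.org", re.IGNORECASE),
}

for name, pattern in patterns.items():
    print(name, bool(pattern.search(UA)))
# All four should print True for the stock CCBot string.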

Implementation Examples

// PHP Detection for CCBot
function detect_ccbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/CCBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('CCBot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for CCBot
import re

from flask import request, make_response

def detect_ccbot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'CCBot'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class CCBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. log it or serve a cached page)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return re.search(r'CCBot', user_agent, re.IGNORECASE) is not None
// JavaScript/Node.js Detection for CCBot
const express = require('express');
const app = express();

// Middleware to detect CCBot
function detectCCBot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /CCBot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('CCBot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'CCBot';
    }
    next();
}

app.use(detectCCBot);
# Apache .htaccess rules for CCBot
# Note: the approaches below are alternatives; use one.

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} CCBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} CCBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "CCBot" is_bot=1

# Add cache headers for this bot (requires Apache 2.4+)
<If "%{HTTP_USER_AGENT} =~ /CCBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for CCBot
# Note: options 1 and 2 below are alternatives; use one.

# Map the user agent to a variable
map $http_user_agent $is_ccbot {
    default 0;
    ~*CCBot 1;
}

server {
    # Option 1: block the bot completely
    if ($is_ccbot) {
        return 403;
    }

    # Option 2: serve cached content. try_files is not valid inside
    # "if", so rewrite bot traffic to a dedicated internal location.
    location / {
        if ($is_ccbot) {
            rewrite ^ /cached$uri last;
        }
        try_files $uri @backend;
    }

    location /cached/ {
        internal;
        root /var/www;  # cached pages live under /var/www/cached/
        add_header Cache-Control "public, max-age=3600";
        add_header X-Robots-Tag "noarchive";
        try_files $uri $uri.html $uri/index.html @backend;
    }

    location @backend {
        proxy_pass http://backend;
    }
}
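After wiring up any of the server rules above, it is worth confirming the behavior end to end by sending a request that spoofs the CCBot user agent. A minimal sketch using only the Python standard library (https://example.com/ is a placeholder; use your own site):

# Verify server-level bot rules by spoofing the CCBot user agent.
import urllib.error
import urllib.request

req = urllib.request.Request(
    "https://example.com/",  # placeholder: replace with your own URL
    headers={"User-Agent": "CCBot/2.0 (https://commoncrawl.org/faq/)"},
)

try:
    with urllib.request.urlopen(req) as resp:
        print("Status:", resp.status)
        print("Cache-Control:", resp.headers.get("Cache-Control"))
        print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag"))
except urllib.error.HTTPError as err:
    # A 403 here means a block rule is working.
    print("Blocked with status:", err.code)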

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation    | Reasoning
E-commerce       | Limit Access      | Protect pricing and inventory data from AI training
Blog/News        | Consider Blocking | Your content may be used for AI training without compensation
SaaS Application | Block             | No benefit for application interfaces; preserve resources
Documentation    | Selective         | Allow for public docs, block for internal docs
Corporate Site   | Limit             | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: CCBot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

# Blocking AI training bot
User-agent: CCBot
Disallow: /

SaaS/Application Configuration

User-agent: CCBot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
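One subtlety in configurations that mix Allow and Disallow: precedence differs between parsers (the original robots.txt spec applies the first matching rule, while Google-style parsers prefer the longest match), so test the paths you care about. A minimal sketch with Python's standard parser, using a subset of the SaaS rules above:

# Check Allow/Disallow precedence for the SaaS configuration.
import urllib.robotparser

rules = """\
User-agent: CCBot
Disallow: /app/
Disallow: /api/
Allow: /
Allow: /pricing/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

for path in ("/app/dashboard", "/pricing/", "/docs/intro"):
    print(path, parser.can_fetch("CCBot", path))
# Expected: /app/dashboard False, /pricing/ True, /docs/intro True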

Quick Reference

User Agent Match

CCBot

Robots.txt Name

CCBot

Category

AI, Other

Respects robots.txt

Yes