CCBot
What is CCBot?
CCBot is the web crawler operated by Common Crawl, a non-profit organization that builds and maintains open datasets of web crawl data. Since 2011, Common Crawl has been creating monthly snapshots of billions of web pages, making this data freely available to researchers, companies, and individuals. The datasets produced by CCBot have become fundamental resources for training large language models and other AI systems, with major tech companies and research institutions relying on Common Crawl data. The bot follows ethical crawling practices, respects robots.txt, and operates at a scale that captures a significant portion of the public web. By making web data accessible in a structured format, Common Crawl democratizes access to web-scale datasets that would otherwise be available only to large corporations.
User Agent String
CCBot/2.0 (https://commoncrawl.org/faq/)
How to Control CCBot
Block Completely
To prevent CCBot from accessing your entire website, add this to your robots.txt file:
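```
# Blocks CCBot from every page on the site.
User-agent: CCBot
Disallow: /
```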
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
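```
# The directory names below are placeholders; replace them with the
# paths you want to keep out of Common Crawl's snapshots.
User-agent: CCBot
Disallow: /private/
Disallow: /members/
# Paths not listed here remain crawlable by default.
```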
Set Crawl Delay
To slow down the crawl rate (note: not all bots respect this directive):
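```
# Asks CCBot to wait between requests; the 10-second value is illustrative.
User-agent: CCBot
Crawl-delay: 10
```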
How to Verify CCBot
Check the user agent string and the crawler's request behavior patterns to confirm its identity.
Learn more in the official documentation at https://commoncrawl.org/faq/.
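One simple check is to scan your server's access log for the CCBot user agent. Below is a minimal sketch in Python, assuming a combined-format log at a hypothetical path:

```python
import re
from collections import Counter

# Hypothetical access log path; adjust for your server setup.
LOG_PATH = "/var/log/nginx/access.log"

# Case-insensitive match on the CCBot user agent token.
CCBOT = re.compile(r"CCBot", re.IGNORECASE)

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if CCBOT.search(line):
            # In the common combined log format, the client IP is the first field.
            hits[line.split()[0]] += 1

for ip, count in hits.most_common(10):
    print(f"{ip}: {count} requests with a CCBot user agent")
```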
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.
Detection Patterns
Multiple ways to detect CCBot in your application:
Basic Pattern
/CCBot/i
Strict Pattern
/^CCBot\/2\.0 \(https:\/\/commoncrawl\.org\/faq\/\)$/
Flexible Pattern
/CCBot[\s\/]?[\d.]*/i
Vendor Match
/CCBot.*commoncrawl\.org/i
Implementation Examples
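A minimal sketch in Python that applies the flexible pattern above to a User-Agent header value; the helper name and usage are illustrative and not tied to any particular framework:

```python
import re

# The "flexible" pattern from above: "CCBot", optionally followed by a version.
CCBOT_RE = re.compile(r"CCBot[\s/]?[\d.]*", re.IGNORECASE)

def is_ccbot(user_agent: str) -> bool:
    """Return True if a User-Agent header value looks like CCBot."""
    return bool(user_agent and CCBOT_RE.search(user_agent))

# Example usage with the documented user agent string.
print(is_ccbot("CCBot/2.0 (https://commoncrawl.org/faq/)"))   # True
print(is_ccbot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False
```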
Should You Block This Bot?
Recommendations based on your website type:
| Site Type | Recommendation | Reasoning |
|---|---|---|
| E-commerce | Limit Access | Protect pricing and inventory data from AI training |
| Blog/News | Consider Blocking | Your content may be used for AI training without compensation |
| SaaS Application | Block | No benefit for application interfaces; preserve resources |
| Documentation | Selective | Allow for public docs, block for internal docs |
| Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets |
Advanced robots.txt Configurations
E-commerce Site Configuration
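One possible starting point, in line with the "Limit Access" recommendation above; the paths are placeholders for a typical store layout and should be adapted to your own URL structure.

```
User-agent: CCBot
# Placeholder paths; adjust to match your store's URL structure.
Disallow: /pricing/
Disallow: /inventory/
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
# Paths not listed here remain crawlable by default.
```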
Publishing/Blog Configuration
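If you decide to block CCBot entirely, as suggested for publishers above, a blanket rule is enough:

```
User-agent: CCBot
# Keeps all articles and posts out of Common Crawl's snapshots.
Disallow: /
```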
SaaS/Application Configuration
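A sketch in line with the recommendation above: either block the whole site, or at minimum block the application routes. The route names shown are placeholders.

```
User-agent: CCBot
# Placeholder application routes; adjust to your app's URL structure.
Disallow: /app/
Disallow: /dashboard/
Disallow: /api/
Disallow: /login/
# Use "Disallow: /" instead to block the entire site.
```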
Quick Reference
| Field | Value |
|---|---|
| User Agent Match | CCBot |
| Robots.txt Name | CCBot |
| Category | ai, other |
| Respects robots.txt | Yes |