Twitterbot User Agent - X (Twitter) Bot Details | CL SEO

Twitterbot

X (Twitter) Since 2012
Respects robots.txt
#social #twitter #x #cards

What is Twitterbot?

Twitterbot is the crawler used by X (formerly Twitter) to generate Twitter Cards, the rich media previews that appear when links are shared on the platform. The bot reads Twitter Card meta tags and uses them to build enhanced previews with images, summaries, and other media. These previews matter for engagement on X, since posts with card previews tend to attract more clicks than bare links. Twitterbot also handles video previews and app cards for mobile applications. Publishers and developers should make sure their Twitter Card markup is implemented correctly so Twitterbot can generate optimal previews.
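If you want to confirm that a page exposes the meta tags Twitterbot reads, a small script can parse the markup and report which of the common card properties (twitter:card, twitter:title, twitter:description, twitter:image) are present. The sketch below is illustrative only, uses the Python standard library, and the URL and function name are placeholders:

# Minimal sketch (not X tooling): list which common Twitter Card meta tags a page exposes.
from html.parser import HTMLParser
from urllib.request import Request, urlopen

CARD_TAGS = {"twitter:card", "twitter:title", "twitter:description", "twitter:image"}

class CardTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # Cards are usually declared via name="twitter:..." (sometimes property=)
        name = attrs.get("name") or attrs.get("property")
        if name in CARD_TAGS:
            self.found[name] = attrs.get("content", "")

def check_twitter_card(url):
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    parser = CardTagParser()
    parser.feed(html)
    missing = CARD_TAGS - parser.found.keys()
    return parser.found, missing

if __name__ == "__main__":
    found, missing = check_twitter_card("https://example.com/")  # placeholder URL
    print("Found:", found)
    print("Missing:", missing)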

User Agent String

Twitterbot/1.0

How to Control Twitterbot

Block Completely

To prevent Twitterbot from accessing your entire website, add this to your robots.txt file. Note that if Twitterbot cannot crawl a page, it cannot read its Twitter Card tags, so links shared on X will lose their rich previews:

# Block Twitterbot
User-agent: Twitterbot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Twitterbot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: Twitterbot
Crawl-delay: 10

How to Verify Twitterbot

Verification Method:
Use Twitter's Card Validator tool

Learn more in the official documentation.
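The Card Validator is the authoritative check. An informal complement is to request a page with Twitterbot's User-Agent string and compare the response with what a normal browser receives; this only shows what your own server returns to that UA, not whether a request genuinely came from X. A minimal sketch (the URL is a placeholder):

# Rough check: compare what the server returns to Twitterbot's UA vs. a browser UA.
from urllib.request import Request, urlopen

def fetch(url, user_agent):
    req = Request(url, headers={"User-Agent": user_agent})
    with urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
        return resp.status, "twitter:card" in body

if __name__ == "__main__":
    url = "https://example.com/some-article"  # placeholder
    for ua in ("Twitterbot/1.0", "Mozilla/5.0"):
        status, has_card = fetch(url, ua)
        print(f"{ua}: HTTP {status}, twitter:card present: {has_card}")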

Detection Patterns

Multiple ways to detect Twitterbot in your application:

Basic Pattern

/Twitterbot/i

Strict Pattern

/^Twitterbot\/1\.0$/

Flexible Pattern

/Twitterbot[\s\/]?[\d.]*/i

Vendor Match

/.*X \(Twitter\).*Twitterbot/i
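To see how these patterns behave in practice, the short harness below (illustrative only) applies them to a few sample User-Agent strings; the samples are made up for the test, not verbatim traffic. Note that the canonical UA is simply Twitterbot/1.0, so the vendor pattern only matches longer UA variants that include vendor text:

# Apply the patterns above to sample UA strings and print which ones match.
import re

PATTERNS = {
    "basic":    re.compile(r"Twitterbot", re.I),
    "strict":   re.compile(r"^Twitterbot/1\.0$"),
    "flexible": re.compile(r"Twitterbot[\s/]?[\d.]*", re.I),
    "vendor":   re.compile(r".*X \(Twitter\).*Twitterbot", re.I),
}

SAMPLES = [
    "Twitterbot/1.0",
    "Mozilla/5.0 (compatible; Twitterbot/1.0)",   # hypothetical longer variant
    "Mozilla/5.0 (X11; Linux x86_64) Chrome/120.0",
]

for ua in SAMPLES:
    hits = [name for name, pat in PATTERNS.items() if pat.search(ua)]
    print(f"{ua!r} -> {hits or ['no match']}")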

Implementation Examples

// PHP Detection for Twitterbot
function detect_twitterbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Twitterbot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Twitterbot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: Serve cached version
        if (file_exists('cache/' . md5($_SERVER['REQUEST_URI']) . '.html')) {
            readfile('cache/' . md5($_SERVER['REQUEST_URI']) . '.html');
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for Twitterbot
import re
from flask import request, make_response

def detect_twitterbot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'Twitterbot'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create response with caching
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False

# Django Middleware
class TwitterbotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'Twitterbot', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for Twitterbot
const express = require('express');
const app = express();

// Middleware to detect Twitterbot
function detectTwitterbot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Twitterbot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Twitterbot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Twitterbot';
    }

    next();
}

app.use(detectTwitterbot);
# Apache .htaccess rules for Twitterbot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Twitterbot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Twitterbot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "Twitterbot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /Twitterbot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Twitterbot

# Map user agent to variable
map $http_user_agent $is_twitterbot {
    default      0;
    ~*Twitterbot 1;
}

# Pick a document root based on the bot flag. try_files is not allowed
# inside "if" blocks, so the cached root is selected via a map instead.
map $is_twitterbot $docroot {
    default /var/www/html;    # regular site root (adjust to your setup)
    1       /var/www/cached;  # pre-rendered pages served to the bot
}

server {
    # Block the bot completely
    if ($is_twitterbot) {
        return 403;
    }

    # Or serve cached content
    location / {
        root $docroot;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_twitterbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        # Requires an "upstream backend { ... }" block defined elsewhere
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type | Recommendation | Reasoning
E-commerce | Optional | Evaluate based on bandwidth usage vs. benefits
Blog/News | Allow | Increases content reach and discoverability
SaaS Application | Block | No benefit for application interfaces; preserve resources
Documentation | Selective | Allow for public docs, block for internal docs
Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Twitterbot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: Twitterbot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: Twitterbot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/

Quick Reference

User Agent Match: Twitterbot

Robots.txt Name: Twitterbot

Category: social

Respects robots.txt: Yes