Baiduspider User Agent - Baidu Bot Details | CL SEO

Baiduspider

Operator: Baidu · Since: 2000
Category: Search · Respects robots.txt: Yes
#search #baidu #chinese #crawler

What is Baiduspider?

Baiduspider is the web crawler for Baidu, China's dominant search engine with over 70% market share in the country. The crawler is optimized for Chinese language content and follows Chinese internet regulations while indexing web pages. Baiduspider supports both simplified and traditional Chinese content, making it essential for reaching Chinese-speaking audiences globally. The bot respects robots.txt and has evolved to handle JavaScript-heavy sites and mobile content. For businesses targeting the Chinese market, ensuring Baiduspider can properly crawl and index their content is crucial for online visibility.

User Agent String

Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)

How to Control Baiduspider

Block Completely

To prevent Baiduspider from accessing your entire website, add this to your robots.txt file:

# Block Baiduspider
User-agent: Baiduspider
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Baiduspider
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow the crawl rate (note: not all bots honor this directive, and Baidu's own crawl-frequency controls live in its Webmaster Tools platform):

User-agent: Baiduspider
Crawl-delay: 10

How to Verify Baiduspider

Verification Method:
Run a reverse DNS lookup on the requesting IP and confirm the hostname ends in .baidu.com or .baidu.jp, then resolve that hostname forward to check that it maps back to the same IP. This double lookup defeats user-agent spoofing.

Learn more in the official documentation.
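
As a minimal sketch of that double DNS check (the function name and example IP are illustrative, not part of any official API), the Python below reverse-resolves the requesting IP, checks the hostname suffix, and then forward-resolves the hostname to confirm it maps back to the same address:

# Hypothetical helper: verify a claimed Baiduspider hit via double DNS lookup
import socket

def is_verified_baiduspider(ip):
    """Reverse-resolve the IP, check the hostname suffix, then forward-confirm."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
        if not hostname.endswith(('.baidu.com', '.baidu.jp')):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP
        _, _, addresses = socket.gethostbyname_ex(hostname)
        return ip in addresses
    except (socket.herror, socket.gaierror):
        return False

# Example call with a hypothetical IP:
# is_verified_baiduspider('123.125.66.120')

Because DNS lookups are slow, cache the verdict per IP rather than resolving on every request.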

Detection Patterns

Multiple ways to detect Baiduspider in your application:

Basic Pattern

/Baiduspider/i

Strict Pattern

/^Mozilla\/5\.0 \(compatible; Baiduspider\/2\.0; \+http:\/\/www\.baidu\.com\/search\/spider\.html\)$/

Flexible Pattern

/Baiduspider[\s\/]?[\d.]*/i

Vendor Match

/Baiduspider.*baidu\.com/i
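
Regex escaping differs between languages, so treat the patterns above as templates. As a quick sanity check, this Python snippet (delimiters and flags stripped, slashes unescaped per Python syntax) runs each pattern against the documented user agent string; all four should print True:

# Sanity-check the detection patterns against the official UA string
import re

UA = ('Mozilla/5.0 (compatible; Baiduspider/2.0; '
      '+http://www.baidu.com/search/spider.html)')

PATTERNS = {
    'basic': r'Baiduspider',
    'strict': (r'^Mozilla/5\.0 \(compatible; Baiduspider/2\.0; '
               r'\+http://www\.baidu\.com/search/spider\.html\)$'),
    'flexible': r'Baiduspider[\s/]?[\d.]*',
    'vendor': r'Baiduspider.*baidu\.com',
}

for name, pattern in PATTERNS.items():
    print(name, bool(re.search(pattern, UA, re.IGNORECASE)))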

Implementation Examples

// PHP Detection for Baiduspider
function detect_baiduspider() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Baiduspider/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Baiduspider detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for Baiduspider
import re

from flask import request, make_response

def detect_baiduspider():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'Baiduspider'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response carrying caching headers for the bot
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False

# Django Middleware
class BaiduspiderMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. serve a cached page, add headers)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        # Minimal detection helper for the middleware above
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'Baiduspider', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for Baiduspider
const express = require('express');
const app = express();

// Middleware to detect Baiduspider
function detectBaiduspider(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Baiduspider/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Baiduspider detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Baiduspider';
    }

    next();
}

app.use(detectBaiduspider);
# Apache .htaccess rules for Baiduspider

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Baiduspider [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Baiduspider [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "Baiduspider" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /Baiduspider/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Baiduspider

# Map the user agent to a flag variable
map $http_user_agent $is_baiduspider {
    default       0;
    ~*Baiduspider 1;
}

server {
    # Option 1: block the bot completely (uncomment to use)
    # if ($is_baiduspider) {
    #     return 403;
    # }

    # Option 2: serve pre-rendered content to the bot.
    # try_files is not valid inside "if", so bot traffic is routed
    # to a named location via error_page/return instead.
    location / {
        error_page 418 = @cached;
        if ($is_baiduspider) {
            return 418;
        }
        try_files $uri @backend;
    }

    location @cached {
        root /var/www/cached;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests (assumes an "upstream backend" block)
    location @backend {
        if ($is_baiduspider) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation | Reasoning
E-commerce       | Allow          | Essential for product visibility in search results
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserves resources
Documentation    | Allow          | Improves documentation discoverability for developers
Corporate Site   | Allow          | Allow public pages; block sensitive areas such as intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Baiduspider
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: Baiduspider
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: Baiduspider
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
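
Before deploying a ruleset, it can help to test it locally. Below is a small sketch using Python's standard urllib.robotparser against an abridged copy of the SaaS config above; note that this parser does not implement wildcard (*) path rules, so it suits this config but not the e-commerce one:

# Test which paths the SaaS ruleset allows for Baiduspider
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: Baiduspider
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for path in ('/', '/pricing/', '/app/', '/api/v1/users'):
    allowed = rp.can_fetch('Baiduspider', 'https://example.com' + path)
    print(path, '->', 'allowed' if allowed else 'blocked')
# Expected: / and /pricing/ allowed; /app/ and /api/v1/users blocked.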

Quick Reference

User Agent Match: Baiduspider
Robots.txt Name: Baiduspider
Category: Search
Respects robots.txt: Yes