Scrapy User Agent - Scrapy Bot Details | CL SEO

Scrapy

Since 2008 · Category: Other · Respects robots.txt
#scraping #python #framework #crawler

What is Scrapy?

Scrapy is a widely used open-source web scraping framework for Python. Unlike simple HTTP libraries, it provides a complete framework for large-scale crawling, with concurrent requests, automatic retries, and middleware support. Projects generated with scrapy startproject enable robots.txt compliance (ROBOTSTXT_OBEY = True) by default, although operators can disable it. Scrapy powers many legitimate data collection operations, including price monitoring, research projects, and business intelligence, and it ships with built-in support for polite crawling practices such as throttling and HTTP caching.
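To make the framework's shape concrete, here is a minimal spider sketch. The target site (quotes.toscrape.com, a public scraping sandbox) and the CSS selectors are illustrative choices, not part of this page:

# Minimal Scrapy spider sketch; save as quotes_spider.py and run with:
#   scrapy runspider quotes_spider.py -o quotes.json
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    # Honor robots.txt for this spider (the startproject template
    # enables this setting by default for whole projects).
    custom_settings = {"ROBOTSTXT_OBEY": True}

    def parse(self, response):
        # Yield one item per quote block on the page
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }

Scrapy schedules the requests, applies throttling and retries, and serializes the yielded items; that pipeline is the "complete framework" the description above refers to.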

User Agent String

Scrapy/2.11.0 (+https://scrapy.org)
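The version number tracks whichever Scrapy release the operator has installed, so other versions appear in the wild. The whole string comes from Scrapy's USER_AGENT setting and can be freely overridden, so treat it as a self-declared label rather than proof of identity. A sketch of how an operator changes it (the bot name and contact URL here are placeholders):

# settings.py in a Scrapy project: replacing the default user agent,
# which is "Scrapy/<version> (+https://scrapy.org)".
BOT_NAME = "examplebot"  # placeholder name

# Identify the crawl and give site owners a way to reach you.
USER_AGENT = "examplebot/1.0 (+https://example.com/bot-info)"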

How to Control Scrapy

Block Completely

To prevent Scrapy from accessing your entire website, add this to your robots.txt file:

# Block Scrapy
User-agent: Scrapy
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Scrapy
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To request a slower crawl rate (note: Crawl-delay is a non-standard directive and not all bots respect it; Scrapy in particular throttles through its own settings, as sketched below):

User-agent: Scrapy
Crawl-delay: 10
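On the client side, Scrapy's robots.txt middleware enforces Allow/Disallow rules but does not, to our knowledge, apply the Crawl-delay directive; polite operators throttle via Scrapy's own settings instead. A sketch of the relevant settings.py entries:

# settings.py: client-side throttling a polite Scrapy operator sets;
# these apply regardless of any Crawl-delay in robots.txt.
DOWNLOAD_DELAY = 10                 # seconds between requests per domain
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # one request at a time per domain

# Alternatively, let Scrapy adapt the delay to observed latency.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0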

How to Verify Scrapy

Verification Method:
Scrapy runs from whatever machines its operators choose, so there are no published IP ranges to verify against. The only available signal is the version string in the user agent, and because the USER_AGENT setting is trivially changed, a match identifies a default-configured Scrapy install, not a specific operator.

Learn more in the official documentation at https://docs.scrapy.org.
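Since there is no IP-based verification, the most you can automate is checking whether the user agent matches the stock format and extracting the claimed version. A minimal Python sketch:

# Parse a user agent claiming to be Scrapy; returns the claimed
# version for the stock string, or None. A match is self-declared,
# not proof of anything.
import re

SCRAPY_UA = re.compile(r"^Scrapy/(\d+(?:\.\d+)*) \(\+https://scrapy\.org\)$")

def parse_scrapy_ua(user_agent):
    match = SCRAPY_UA.match(user_agent or "")
    return match.group(1) if match else None

print(parse_scrapy_ua("Scrapy/2.11.0 (+https://scrapy.org)"))  # 2.11.0
print(parse_scrapy_ua("Mozilla/5.0 (compatible)"))             # None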

Detection Patterns

Multiple ways to detect Scrapy in your application:

Basic Pattern

/Scrapy/i

Strict Pattern

/^Scrapy\/2\.11\.0 \(\+https:\/\/scrapy\.org\)$/

Flexible Pattern

/Scrapy(?:[\s\/][\d.]+)?/i

Vendor Match

/Scrapy.*scrapy\.org/i
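A quick way to sanity-check the four patterns is to run them against the stock user agent in Python (re syntax rather than the slash-delimited notation above):

# Exercise the detection patterns against the stock Scrapy UA.
import re

UA = "Scrapy/2.11.0 (+https://scrapy.org)"

patterns = {
    "basic":    re.compile(r"Scrapy", re.I),
    "strict":   re.compile(r"^Scrapy/2\.11\.0 \(\+https://scrapy\.org\)$"),
    "flexible": re.compile(r"Scrapy(?:[\s/][\d.]+)?", re.I),
    "vendor":   re.compile(r"Scrapy.*scrapy\.org", re.I),
}

for name, pattern in patterns.items():
    print(name, bool(pattern.search(UA)))  # all four print True

Note that the strict pattern pins one release and stops matching as soon as the operator upgrades; the flexible pattern is usually the better default.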

Implementation Examples

// PHP detection for Scrapy
function detect_scrapy() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Scrapy/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Scrapy detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask detection for Scrapy
import re
from flask import request

SCRAPY_PATTERN = re.compile(r'Scrapy', re.IGNORECASE)

def detect_scrapy():
    user_agent = request.headers.get('User-Agent', '')
    return bool(SCRAPY_PATTERN.search(user_agent))

# Flask after_request hook: set cache headers on bot responses.
# Register with app.after_request(add_bot_headers).
def add_bot_headers(response):
    if detect_scrapy():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django middleware
class ScrapyMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (throttle, serve a cached copy, etc.)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(SCRAPY_PATTERN.search(user_agent))
// JavaScript/Node.js detection for Scrapy
const express = require('express');
const app = express();

// Middleware to detect Scrapy
function detectScrapy(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Scrapy/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Scrapy detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot for downstream handlers
        req.isBot = true;
        req.botName = 'Scrapy';
    }

    next();
}

app.use(detectScrapy);
# Apache .htaccess rules for Scrapy

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Scrapy [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Scrapy [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "Scrapy" is_bot=1

# Add cache headers for this bot (requires mod_headers; <If> needs Apache 2.4+)
<If "%{HTTP_USER_AGENT} =~ /Scrapy/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Scrapy

# Map the user agent to a variable
map $http_user_agent $is_scrapy {
    default  0;
    ~*Scrapy 1;
}

# Pick a document root per bot status; nginx does not allow
# try_files inside an "if" block, so a second map is the safe pattern.
map $is_scrapy $docroot {
    0 /var/www/html;    # normal visitors
    1 /var/www/cached;  # pre-rendered pages for the bot
}

server {
    # Option 1: block the bot completely
    if ($is_scrapy) {
        return 403;
    }

    # Option 2: serve cached content from a separate root
    location / {
        root $docroot;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests ("backend" must be a defined upstream)
    location @backend {
        if ($is_scrapy) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserve resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Scrapy
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: Scrapy
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: Scrapy
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
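Before deploying any of these, it is worth checking that the rules behave as intended. Python's standard-library robotparser can evaluate paths against the Scrapy user agent; this sketch mirrors the SaaS example above (note the stdlib parser does not understand wildcard rules like /*?sort=):

# Sanity-check robots.txt rules against the Scrapy user agent.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Scrapy
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Allow: /
Allow: /pricing/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for path in ["/pricing/", "/app/", "/api/v1/users"]:
    print(path, parser.can_fetch("Scrapy", path))
# Expected: /pricing/ True, /app/ False, /api/v1/users False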

Quick Reference

User Agent Match: Scrapy
Robots.txt Name: Scrapy
Category: Other
Respects robots.txt: Yes