
Screaming Frog SEO Spider

By Screaming Frog · Since 2010
Category: SEO · Respects robots.txt: Yes
#seo #audit #desktop #spider

What is Screaming Frog SEO Spider?

Screaming Frog SEO Spider is a desktop-based website crawler widely used by SEO professionals for technical audits. Unlike cloud-based crawlers, it runs directly from the user's computer, providing immediate insight into technical SEO issues. The spider analyzes page titles, meta descriptions, headers, status codes, and much more, and is particularly valued for visualizing site architecture and surfacing technical problems quickly. By default its requests identify themselves with the user agent string below, although operators can change this in the app's configuration. The tool has become an industry standard for technical SEO audits.

User Agent String

Screaming Frog SEO Spider/20.0
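The version segment changes with each release (20.0 is only the current example), and operators can replace the whole string from the app's configuration. A minimal Python sketch for pulling the version out of a UA string, assuming the default format above:

import re

def screaming_frog_version(user_agent):
    # Extract the version from the default UA format; returns None when
    # the string has been customised away from "Screaming Frog SEO Spider/x.y".
    match = re.search(r"Screaming Frog SEO Spider/([\d.]+)", user_agent)
    return match.group(1) if match else None

print(screaming_frog_version("Screaming Frog SEO Spider/20.0"))  # 20.0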

How to Control Screaming Frog SEO Spider

Block Completely

To prevent Screaming Frog SEO Spider from accessing your entire website, add this to your robots.txt file:

# Block Screaming Frog SEO Spider
User-agent: Screaming Frog SEO Spider
Disallow: /
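To sanity-check a rule like this before deploying it, Python's built-in robots.txt parser is enough for simple prefix rules (it does not implement wildcard matching). A minimal sketch with the rules inlined:

from urllib.robotparser import RobotFileParser

# Parse the block-all rule and confirm it applies only to this bot.
rp = RobotFileParser()
rp.parse("""
User-agent: Screaming Frog SEO Spider
Disallow: /
""".splitlines())

print(rp.can_fetch("Screaming Frog SEO Spider", "/any/page"))  # False
print(rp.can_fetch("SomeOtherBot", "/any/page"))               # True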

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Screaming Frog SEO Spider
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate, add a Crawl-delay directive. A value of 10 asks compliant crawlers to wait ten seconds between requests, which caps them at 8,640 requests per day; note that not all bots respect this directive:

User-agent: Screaming Frog SEO Spider
Crawl-delay: 10
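Because the directive is advisory, a server-side throttle is the reliable fallback. A minimal sketch of a per-client minimum-interval check, assuming you key clients by user agent or IP:

import time

# Track the last request time per client key and reject requests that
# arrive faster than the allowed interval (10 s, matching the rule above).
last_seen = {}

def allow_request(client_key, min_interval=10.0):
    now = time.monotonic()
    previous = last_seen.get(client_key)
    last_seen[client_key] = now
    return previous is None or (now - previous) >= min_interval

print(allow_request("Screaming Frog SEO Spider"))  # True (first request)
print(allow_request("Screaming Frog SEO Spider"))  # False (too soon)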

How to Verify Screaming Frog SEO Spider

Verification Method:
The SEO Spider is a desktop application, so requests originate from whichever machine runs it. There is no published IP range or reverse-DNS record to check against, and the operator can change the user agent string, so a matching string cannot be verified conclusively.

Learn more in the official documentation.
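In practice the most you can do is treat a matching user agent as an unverified claim and log it for manual review, as in this minimal sketch:

import re

def looks_like_screaming_frog(user_agent, remote_addr):
    # The UA string is the only signal and can be spoofed or customised,
    # so treat a match as a hint, not proof.
    if re.search(r"Screaming Frog SEO Spider", user_agent, re.IGNORECASE):
        print(f"Possible Screaming Frog crawl from {remote_addr}: {user_agent}")
        return True
    return False

looks_like_screaming_frog("Screaming Frog SEO Spider/20.0", "203.0.113.7")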

Detection Patterns

Multiple ways to detect Screaming Frog SEO Spider in your application:

Basic Pattern

/Screaming Frog SEO Spider/i

Strict Pattern

/^Screaming Frog SEO Spider\/20\.0$/

Flexible Pattern

/Screaming Frog SEO Spider(?:\/[\d.]+)?/i

Vendor Match

/Screaming Frog/i
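A quick way to compare the patterns is to run them against sample strings. The versioned sample below is illustrative; real crawls may carry any release number:

import re

# Check each detection pattern against sample user agents.
samples = [
    "Screaming Frog SEO Spider/20.0",
    "Screaming Frog SEO Spider/19.4",
    "Mozilla/5.0 (compatible; Googlebot/2.1)",
]

patterns = {
    "basic":    re.compile(r"Screaming Frog SEO Spider", re.IGNORECASE),
    "strict":   re.compile(r"^Screaming Frog SEO Spider/20\.0$"),
    "flexible": re.compile(r"Screaming Frog SEO Spider(?:/[\d.]+)?", re.IGNORECASE),
    "vendor":   re.compile(r"Screaming Frog", re.IGNORECASE),
}

for ua in samples:
    hits = [name for name, p in patterns.items() if p.search(ua)]
    print(ua, "->", hits or ["no match"])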

Implementation Examples

<?php
// PHP Detection for Screaming Frog SEO Spider
function detect_screaming_frog_seo_spider() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Screaming Frog SEO Spider/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Screaming Frog SEO Spider detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for Screaming Frog SEO Spider
import re
from flask import request, make_response

def detect_screaming_frog_seo_spider():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'Screaming Frog SEO Spider'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class ScreamingFrogSEOSpiderMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'Screaming Frog SEO Spider', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for Screaming Frog SEO Spider
const express = require('express');
const app = express();

// Middleware to detect Screaming Frog SEO Spider
function detectScreamingFrogSEOSpider(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Screaming Frog SEO Spider/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Screaming Frog SEO Spider detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Screaming Frog SEO Spider';
    }
    next();
}

app.use(detectScreamingFrogSEOSpider);
# Apache .htaccess rules for Screaming Frog SEO Spider
# (the user agent contains spaces, so the match patterns must be quoted)

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "Screaming Frog SEO Spider" [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} "Screaming Frog SEO Spider" [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "Screaming Frog SEO Spider" is_bot=1

# Add cache headers for this bot (requires Apache 2.4+)
<If "%{HTTP_USER_AGENT} =~ /Screaming Frog SEO Spider/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Screaming Frog SEO Spider

# Map user agent to variable (quoted because the pattern contains spaces)
map $http_user_agent $is_screaming_frog_seo_spider {
    default 0;
    "~*Screaming Frog SEO Spider" 1;
}

server {
    # Block the bot completely
    if ($is_screaming_frog_seo_spider) {
        return 403;
    }

    # Or serve cached content (try_files is not allowed inside "if",
    # so the cached root is switched instead)
    location / {
        if ($is_screaming_frog_seo_spider) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_screaming_frog_seo_spider) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Low risk; enables routine technical audits of content
SaaS Application | Block          | No benefit for application interfaces; preserves resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Screaming Frog SEO Spider
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml
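The /*?sort= style rules rely on wildcard matching, a de-facto extension of the original robots.txt standard rather than part of it (Python's built-in parser, for example, ignores it). Below is a sketch of the matching convention major crawlers document; whether Screaming Frog applies exactly the same rules is an assumption:

import re

def robots_rule_matches(rule, path):
    # Translate a robots.txt rule into a regex: '*' matches any run of
    # characters, a trailing '$' anchors the end, everything else is literal.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path) is not None

print(robots_rule_matches("/*?sort=", "/products/?sort=price"))  # True
print(robots_rule_matches("/*?sort=", "/products/"))             # False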

Publishing/Blog Configuration

User-agent: Screaming Frog SEO Spider
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: Screaming Frog SEO Spider
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/

Quick Reference

User Agent Match

Screaming Frog SEO Spider

Robots.txt Name

Screaming Frog SEO Spider

Category

SEO

Respects robots.txt

Yes