SerpstatBot
Serpstat • Since 2016
What is SerpstatBot?
SerpstatBot is the web crawler for Serpstat, a comprehensive SEO platform offering keyword research, competitor analysis, site auditing, and backlink tracking. The bot crawls web pages to discover and analyze backlinks, which powers Serpstat's link analysis features. It respects robots.txt directives and identifies itself clearly in server logs.
User Agent String
serpstatbot/2.1 (advanced backlink tracking bot; https://serpstatbot.com/; abuse@serpstatbot.com)
How to Control SerpstatBot
Block Completely
To prevent SerpstatBot from accessing your entire website, add this to your robots.txt file:
# Block SerpstatBot
User-agent: SerpstatBot
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
User-agent: SerpstatBot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
Set Crawl Delay
To slow down the crawl rate (note: not all bots respect this directive):
User-agent: SerpstatBot
Crawl-delay: 10
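
Because Crawl-delay is only advisory, heavy crawling can also be throttled at the application layer. The following is a minimal Python sketch of a per-IP sliding-window limiter; the 10-requests-per-minute threshold, the in-memory store, and the allow_bot_request helper are illustrative assumptions, not anything SerpstatBot documents.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60         # length of the sliding window
MAX_REQUESTS = 10           # illustrative cap; tune to your capacity

_hits = defaultdict(deque)  # hypothetical in-memory store, keyed by client IP

def allow_bot_request(ip):
    """Return True if this bot IP is still under the rate limit."""
    now = time.time()
    window = _hits[ip]
    # Drop timestamps that have fallen out of the sliding window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # caller should respond with HTTP 429
    window.append(now)
    return True

A request handler would call allow_bot_request(client_ip) only after the user agent has already matched SerpstatBot, so normal visitors are never throttled by this path.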
How to Verify SerpstatBot
Verification Method:
Check the User-Agent string for the "serpstatbot" identifier.
Learn more in the official documentation.
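
In practice that check is a case-insensitive match on the "serpstatbot" token. A minimal Python sketch (the function name is just for illustration):

import re

def is_serpstatbot(user_agent):
    """Case-insensitive check for the 'serpstatbot' token in a User-Agent header."""
    return bool(re.search(r'serpstatbot', user_agent or '', re.IGNORECASE))

# is_serpstatbot('serpstatbot/2.1 (advanced backlink tracking bot; ...)')  -> True
# is_serpstatbot('Mozilla/5.0 (Windows NT 10.0; Win64; x64)')              -> False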
Detection Patterns
Multiple ways to detect SerpstatBot in your application:
Basic Pattern
/SerpstatBot/i
Strict Pattern
/^serpstatbot\/2\.1 \(advanced backlink tracking bot; https:\/\/serpstatbot\.com\/; abuse@serpstatbot\.com\)$/
Flexible Pattern
/SerpstatBot[\s\/]?[\d\.]*/i
Vendor Match
/.*Serpstat.*SerpstatBot/i
Implementation Examples
// PHP Detection for SerpstatBot
function detect_serpstatbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/SerpstatBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('SerpstatBot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for SerpstatBot
import re

from flask import request, make_response

def detect_serpstatbot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'SerpstatBot'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response with caching headers; return it from your
        # view (or copy the headers onto your own response object)
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class SerpstatBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. log, add headers, or throttle)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'SerpstatBot', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for SerpstatBot
const express = require('express');
const app = express();

// Middleware to detect SerpstatBot
function detectSerpstatBot(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /SerpstatBot/i;

  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('SerpstatBot detected from IP:', req.ip);

    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });

    // Mark request as bot
    req.isBot = true;
    req.botName = 'SerpstatBot';
  }
  next();
}

app.use(detectSerpstatBot);
# Apache .htaccess rules for SerpstatBot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} SerpstatBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} SerpstatBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "SerpstatBot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /SerpstatBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for SerpstatBot

# Map the user agent to a variable
map $http_user_agent $is_serpstatbot {
    default 0;
    ~*SerpstatBot 1;
}

server {
    # Block the bot completely
    if ($is_serpstatbot) {
        return 403;
    }

    # Or serve cached content (root may be switched inside "if";
    # try_files itself must stay outside the "if" block)
    location / {
        if ($is_serpstatbot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_serpstatbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
Should You Block This Bot?
Recommendations based on your website type:
| Site Type | Recommendation | Reasoning |
|---|---|---|
| E-commerce | Optional | Evaluate based on bandwidth usage vs. benefits |
| Blog/News | Allow | Increases content reach and discoverability |
| SaaS Application | Block | No benefit for application interfaces; preserve resources |
| Documentation | Selective | Allow for public docs, block for internal docs |
| Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets |
Advanced robots.txt Configurations
E-commerce Site Configuration
User-agent: SerpstatBot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml
Publishing/Blog Configuration
User-agent: SerpstatBot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /
SaaS/Application Configuration
User-agent: SerpstatBot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
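
To confirm that rules like these behave as intended before deploying them, you can test them with Python's standard-library robots.txt parser. The sketch below uses a trimmed rule set and placeholder URLs; note that urllib.robotparser does not understand the * wildcard rules from the e-commerce example, so only plain path prefixes are exercised here.

from urllib.robotparser import RobotFileParser

# A trimmed-down version of the e-commerce rules above
rules = """User-agent: SerpstatBot
Crawl-delay: 5
Disallow: /cart/
Allow: /products/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("SerpstatBot", "https://example.com/cart/basket"))    # False
print(rp.can_fetch("SerpstatBot", "https://example.com/products/shoe"))  # True
print(rp.crawl_delay("SerpstatBot"))                                     # 5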
Quick Reference
User Agent Match: SerpstatBot
Robots.txt Name: SerpstatBot
Category: SEO
Respects robots.txt: Yes
