AddSearchBot User Agent - AddSearch Bot Details | CL SEO

AddSearchBot

Operator: AddSearch
Active since: 2014
Category: Search
Respects robots.txt: Yes
#site-search #enterprise-search #crawler

What is AddSearchBot?

AddSearchBot is the web crawler for AddSearch, a site search solution that provides search functionality for business websites. The bot indexes website content to power on-site search results. AddSearch is used by enterprises and mid-size businesses that need a reliable search experience on their websites. The bot respects robots.txt directives and only crawls sites that have subscribed to AddSearch's service.

User Agent String

AddSearchBot

How to Control AddSearchBot

Block Completely

To prevent AddSearchBot from accessing your entire website, add this to your robots.txt file:

# Block AddSearchBot
User-agent: AddSearchBot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: AddSearchBot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
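Directory rules like these can be sanity-checked locally before deploying, for example with Python's standard `urllib.robotparser`. A minimal sketch; the example.com URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rules as AddSearchBot would see them
rules = [
    "User-agent: AddSearchBot",
    "Disallow: /admin/",
    "Disallow: /private/",
    "Disallow: /wp-admin/",
    "Allow: /public/",
]

rp = RobotFileParser()
rp.modified()   # mark the rules as loaded; can_fetch() denies everything otherwise
rp.parse(rules)

# Disallowed prefixes are rejected; everything else is allowed
print(rp.can_fetch("AddSearchBot", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("AddSearchBot", "https://example.com/public/page"))     # True
```

This only verifies your rule syntax against one parser's interpretation; the crawler itself may resolve edge cases (e.g. overlapping Allow/Disallow prefixes) differently.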

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: AddSearchBot
Crawl-delay: 10
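You can confirm how a standards-following parser reads the delay with `urllib.robotparser`, which exposes it via `crawl_delay()`. A small sketch (the second bot name is a hypothetical example):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.modified()   # mark the rules as loaded so lookups work
rp.parse([
    "User-agent: AddSearchBot",
    "Crawl-delay: 10",
])

# crawl_delay() returns the delay for a matching agent, or None if unset
print(rp.crawl_delay("AddSearchBot"))   # 10
print(rp.crawl_delay("SomeOtherBot"))   # None
```

As the note above says, Crawl-delay is non-standard and not all bots honor it, so treat this as a request rather than a guarantee.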

How to Verify AddSearchBot

Verification Method:
Check user agent string for AddSearchBot identifier

Learn more in the official documentation.

Detection Patterns

Multiple ways to detect AddSearchBot in your application:

Basic Pattern

/AddSearchBot/i

Strict Pattern

/^AddSearchBot$/

Flexible Pattern

/AddSearchBot[\s\/]?[\d\.]*?/i

Vendor Match

/.*AddSearch.*AddSearchBot/i
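The patterns above translate directly to Python's `re` module; a quick sanity check against sample user-agent strings (the "Mozilla/5.0 (compatible; ...)" string is a made-up example, not a documented AddSearch UA):

```python
import re

patterns = {
    "basic":    re.compile(r"AddSearchBot", re.IGNORECASE),
    "strict":   re.compile(r"^AddSearchBot$"),
    "flexible": re.compile(r"AddSearchBot[\s/]?[\d.]*?", re.IGNORECASE),
}

ua_exact = "AddSearchBot"
ua_noise = "Mozilla/5.0 (compatible; addsearchbot)"

print(bool(patterns["basic"].search(ua_noise)))    # True: substring, case-insensitive
print(bool(patterns["strict"].search(ua_noise)))   # False: strict needs an exact match
print(bool(patterns["flexible"].search("AddSearchBot/1.0")))  # True
```

The strict pattern is the safest for allow-lists (it can't be spoofed by embedding the token in a longer string), but it will miss any future versioned UA such as a hypothetical "AddSearchBot/2.0"; the flexible pattern trades that precision for coverage.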

Implementation Examples

// PHP Detection for AddSearchBot
function detect_addsearchbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/AddSearchBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('AddSearchBot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve cached version
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for AddSearchBot
import re
from flask import request, make_response

def detect_addsearchbot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'AddSearchBot'
    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create response with caching
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class AddSearchBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'AddSearchBot', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for AddSearchBot
const express = require('express');
const app = express();

// Middleware to detect AddSearchBot
function detectAddSearchBot(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /AddSearchBot/i;

  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('AddSearchBot detected from IP:', req.ip);

    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });

    // Mark request as bot
    req.isBot = true;
    req.botName = 'AddSearchBot';
  }
  next();
}

app.use(detectAddSearchBot);
# Apache .htaccess rules for AddSearchBot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} AddSearchBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} AddSearchBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "AddSearchBot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /AddSearchBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for AddSearchBot

# Map user agent to variable
map $http_user_agent $is_addsearchbot {
    default 0;
    ~*AddSearchBot 1;
}

server {
    # Block the bot completely
    if ($is_addsearchbot) {
        return 403;
    }

    # Or serve cached content. Note: try_files is not valid inside "if",
    # so switch the document root for bot requests and share one try_files.
    location / {
        if ($is_addsearchbot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_addsearchbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

| Site Type | Recommendation | Reasoning |
| --- | --- | --- |
| E-commerce | Allow | Essential for product visibility in search results |
| Blog/News | Allow | Increases content reach and discoverability |
| SaaS Application | Block | No benefit for application interfaces; preserve resources |
| Documentation | Allow | Improves documentation discoverability for developers |
| Corporate Site | Allow | Allow public pages; block sensitive areas like intranets |

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: AddSearchBot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: AddSearchBot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: AddSearchBot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /public/
Allow: /pricing/
Allow: /features/
Allow: /docs/
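This configuration mixes a broad `Allow: /` with specific `Disallow` prefixes, so the outcome depends on how a parser resolves overlaps. Python's `urllib.robotparser` applies rules in file order (first match wins); crawlers using longest-prefix-wins semantics reach the same answers for these paths. A quick check with placeholder example.com URLs:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.modified()   # mark the rules as loaded so can_fetch() evaluates them
rp.parse([
    "User-agent: AddSearchBot",
    "Disallow: /app/",
    "Disallow: /api/",
    "Disallow: /dashboard/",
    "Disallow: /settings/",
    "Allow: /",
])

# Application paths are excluded; marketing pages stay crawlable
print(rp.can_fetch("AddSearchBot", "https://example.com/app/workspace"))  # False
print(rp.can_fetch("AddSearchBot", "https://example.com/pricing/"))       # True
```

If a path must never be indexed regardless of parser behavior, list its `Disallow` rule explicitly rather than relying on rule-resolution order.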

Quick Reference

User Agent Match

AddSearchBot

Robots.txt Name

AddSearchBot

Category

search

Respects robots.txt

Yes