DeepCrawl User Agent - Lumar Bot Details | CL SEO

DeepCrawl

Lumar | Since 2013 | SEO | Respects robots.txt
#seo #technical #audit #crawler

What is DeepCrawl?

DeepCrawl, now rebranded as Lumar, is an enterprise technical SEO platform that performs comprehensive website crawls to identify technical issues affecting search performance. The platform specializes in large-scale website analysis, capable of crawling millions of pages while providing actionable insights. Lumar helps SEO teams understand site architecture, identify crawl budget waste, find duplicate content, and monitor technical health. With features like JavaScript rendering and log file analysis integration, it provides a complete view of how search engines see websites.

User Agent String

Mozilla/5.0 (compatible; Deepcrawl/3.5; +https://www.lumar.io/)

How to Control DeepCrawl

Block Completely

To prevent DeepCrawl from accessing your entire website, add this to your robots.txt file:

# Block DeepCrawl
User-agent: Deepcrawl
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Deepcrawl
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: Deepcrawl
Crawl-delay: 10
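Because Crawl-delay is a nonstandard directive that not every crawler honors, a server-side throttle can serve as a fallback. Below is a minimal sketch in Flask; the in-memory timestamp store and the route are assumptions of this example (a real deployment would use Redis or similar), not anything Lumar provides.

# Sketch: server-side fallback for bots that ignore Crawl-delay.
# The in-memory store is an assumption; use Redis or similar in production.
import re
import time

from flask import Flask, request

app = Flask(__name__)

BOT_PATTERN = re.compile(r"Deepcrawl", re.IGNORECASE)
MIN_INTERVAL = 10  # seconds, mirroring the Crawl-delay above
_last_seen = {}    # client IP -> timestamp of last bot request

@app.before_request
def throttle_deepcrawl():
    user_agent = request.headers.get("User-Agent", "")
    if not BOT_PATTERN.search(user_agent):
        return  # not the bot; handle the request normally
    now = time.time()
    last = _last_seen.get(request.remote_addr, 0)
    if now - last < MIN_INTERVAL:
        # Short-circuit with 429 and a Retry-After hint
        return "Too Many Requests", 429, {"Retry-After": str(MIN_INTERVAL)}
    _last_seen[request.remote_addr] = now

@app.route("/")
def index():
    return "Hello"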

How to Verify DeepCrawl

Verification Method:
Lumar (formerly DeepCrawl) publishes its own crawler verification guidance; see the official documentation for the current method.
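The official documentation is the authority here. As a sketch, crawler verification usually combines a user-agent match with a check of the source IP against the vendor's published ranges. The network below is a placeholder (TEST-NET-3), not a real Lumar range; substitute the addresses from the docs.

# Sketch: verify a DeepCrawl/Lumar request by user agent plus source IP.
# ALLOWED_NETWORKS is a placeholder; use the ranges from Lumar's official docs.
import ipaddress
import re

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),  # placeholder (TEST-NET-3)
]
UA_PATTERN = re.compile(r"Deepcrawl", re.IGNORECASE)

def is_verified_deepcrawl(user_agent: str, remote_ip: str) -> bool:
    """Return True only if both the UA and the source IP check out."""
    if not UA_PATTERN.search(user_agent):
        return False
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

# Example (matches only because of the placeholder range):
# is_verified_deepcrawl(
#     "Mozilla/5.0 (compatible; Deepcrawl/3.5; +https://www.lumar.io/)",
#     "203.0.113.7",
# )  # -> True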

Detection Patterns

Multiple ways to detect DeepCrawl in your application:

Basic Pattern

/DeepCrawl/i

Strict Pattern

/^Mozilla\/5\.0 \(compatible; Deepcrawl\/3\.5; \+https:\/\/www\.lumar\.io\/\)$/

Flexible Pattern

/DeepCrawl[\s\/]?[\d.]*/i

Vendor Match

/Deepcrawl.*lumar\.io/i
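As a quick sanity check, the harness below (illustrative, not part of any library) runs Python equivalents of these patterns against the documented user-agent string; each should match.

# Quick check: each detection pattern should match the documented UA string.
import re

UA = "Mozilla/5.0 (compatible; Deepcrawl/3.5; +https://www.lumar.io/)"

patterns = {
    "basic":    re.compile(r"DeepCrawl", re.IGNORECASE),
    "strict":   re.compile(r"^Mozilla/5\.0 \(compatible; Deepcrawl/3\.5; "
                           r"\+https://www\.lumar\.io/\)$"),
    "flexible": re.compile(r"DeepCrawl[\s/]?[\d.]*", re.IGNORECASE),
    "vendor":   re.compile(r"Deepcrawl.*lumar\.io", re.IGNORECASE),
}

for name, pattern in patterns.items():
    assert pattern.search(UA), f"{name} pattern failed"
    print(f"{name}: OK")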

Implementation Examples

// PHP detection for DeepCrawl
function detect_deepcrawl() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/DeepCrawl/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('DeepCrawl detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask detection for DeepCrawl
import re

from flask import request, make_response

def detect_deepcrawl():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'DeepCrawl'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False

# Django middleware
class DeepCrawlMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g., log it or serve cached content)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'DeepCrawl', user_agent, re.IGNORECASE))
// JavaScript/Node.js detection for DeepCrawl
const express = require('express');
const app = express();

// Middleware to detect DeepCrawl
function detectDeepCrawl(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /DeepCrawl/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('DeepCrawl detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'DeepCrawl';
    }

    next();
}

app.use(detectDeepCrawl);
# Apache .htaccess rules for DeepCrawl

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} DeepCrawl [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} DeepCrawl [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "DeepCrawl" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /DeepCrawl/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for DeepCrawl

# Map user agent to a variable
map $http_user_agent $is_deepcrawl {
    default 0;
    ~*DeepCrawl 1;
}

server {
    # Option 1: block the bot completely
    if ($is_deepcrawl) {
        return 403;
    }

    # Option 2: serve cached content
    # (try_files is not allowed inside "if", so rewrite to an
    # internal location instead)
    location / {
        if ($is_deepcrawl) {
            rewrite ^(.*)$ /cached$1 last;
        }
        try_files $uri @backend;
    }

    location /cached/ {
        internal;
        root /var/www;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_deepcrawl) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type | Recommendation | Reasoning
E-commerce | Optional | Evaluate based on bandwidth usage vs. benefits
Blog/News | Allow | Increases content reach and discoverability
SaaS Application | Block | No benefit for application interfaces; preserve resources
Documentation | Selective | Allow for public docs, block for internal docs
Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Deepcrawl
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: Deepcrawl
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: Deepcrawl
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
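Before deploying one of these configurations, it can be worth testing it locally. The sketch below uses Python's standard urllib.robotparser against the SaaS rules above; note that this parser ignores nonstandard wildcard rules such as /*?sort=, so the e-commerce configuration would need a wildcard-aware tester.

# Sketch: test a robots.txt policy with Python's standard library.
# urllib.robotparser does not support wildcard rules (e.g. /*?sort=),
# so only plain-path rules are checked here.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: Deepcrawl
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Allow: /
Allow: /pricing/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for path in ("/pricing/", "/app/main", "/api/v1/users", "/features/"):
    allowed = parser.can_fetch("Deepcrawl", f"https://example.com{path}")
    print(f"{path}: {'allowed' if allowed else 'blocked'}")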

Quick Reference

User Agent Match: DeepCrawl
Robots.txt Name: Deepcrawl
Category: SEO
Respects robots.txt: Yes