
DuckDuckBot

Operator: DuckDuckGo
Since: 2008
Category: Search
Respects robots.txt: Yes
Tags: #search #privacy #duckduckgo #crawler

What is DuckDuckBot?

DuckDuckBot is the web crawler for DuckDuckGo, the privacy-focused search engine that doesn't track users or store personal information. While DuckDuckGo sources most of its results from partners such as Bing, DuckDuckBot supplements them with direct crawling, discovering and updating content in line with DuckDuckGo's privacy principles. The bot respects robots.txt and identifies itself transparently, helping DuckDuckGo maintain independent, high-quality results for its growing user base.

User Agent String

DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)

How to Control DuckDuckBot

Block Completely

To prevent DuckDuckBot from accessing your entire website, add this to your robots.txt file:

# Block DuckDuckBot
User-agent: DuckDuckBot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: DuckDuckBot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate, set the minimum number of seconds the bot should wait between successive requests (note: not all bots respect this directive):

User-agent: DuckDuckBot
Crawl-delay: 10

How to Verify DuckDuckBot

Verification Method:
Check the user agent string first, then confirm the source IP. User agent strings are trivially spoofed, so a request claiming to be DuckDuckBot should be validated by matching its IP address against the list of official DuckDuckBot IPs that DuckDuckGo publishes.

Learn more in the official documentation.
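A minimal verification sketch in Python, assuming you keep a local copy of DuckDuckGo's published IP list (the addresses below are placeholders from the RFC 5737 documentation range, not real DuckDuckBot IPs):

import re

# Placeholder addresses; replace with the current official list from
# DuckDuckGo's documentation. These are NOT real DuckDuckBot IPs.
DUCKDUCKBOT_IPS = {
    "192.0.2.10",
    "192.0.2.11",
}

UA_PATTERN = re.compile(r"DuckDuckBot", re.IGNORECASE)

def is_verified_duckduckbot(user_agent, remote_ip):
    """Return True only when both the UA string and the source IP match."""
    if not UA_PATTERN.search(user_agent or ""):
        return False
    # The UA alone proves nothing; the IP membership check does the verifying
    return remote_ip in DUCKDUCKBOT_IPS

Example usage from a web handler: is_verified_duckduckbot(request.headers.get('User-Agent', ''), request.remote_addr).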

Detection Patterns

Multiple ways to detect DuckDuckBot in your application:

Basic Pattern

/DuckDuckBot/i

Strict Pattern

/^DuckDuckBot\/1\.0; \(\+http:\/\/duckduckgo\.com\/duckduckbot\.html\)$/

Flexible Pattern

/DuckDuckBot(?:\/[\d.]+)?/i

Vendor Match

/DuckDuckBot.*duckduckgo\.com/i
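Each pattern can be sanity-checked against the documented user agent string. A quick sketch in Python; the /-delimited patterns above translate directly once the delimiters are dropped and the i flag becomes re.IGNORECASE:

import re

UA = "DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)"

PATTERNS = {
    "basic": r"DuckDuckBot",
    "strict": r"^DuckDuckBot/1\.0; \(\+http://duckduckgo\.com/duckduckbot\.html\)$",
    "flexible": r"DuckDuckBot(?:/[\d.]+)?",
    "vendor": r"DuckDuckBot.*duckduckgo\.com",
}

for name, pattern in PATTERNS.items():
    # Every pattern should match the official UA string
    assert re.search(pattern, UA, re.IGNORECASE), f"{name} failed"
print("all patterns match")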

Implementation Examples

// PHP Detection for DuckDuckBot
function detect_duckduckbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/DuckDuckBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('DuckDuckBot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for DuckDuckBot
import re

from flask import request, make_response

BOT_PATTERN = re.compile(r'DuckDuckBot', re.IGNORECASE)

def detect_duckduckbot():
    user_agent = request.headers.get('User-Agent', '')
    if BOT_PATTERN.search(user_agent):
        # Create a response with caching headers for bot traffic
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class DuckDuckBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. flag the request for the view layer)
            request.is_bot = True
        return self.get_response(request)

    def detect_bot(self, request):
        # Django exposes the user agent via request.META
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return BOT_PATTERN.search(user_agent) is not None
// JavaScript/Node.js Detection for DuckDuckBot
const express = require('express');
const app = express();

// Middleware to detect DuckDuckBot
function detectDuckDuckBot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /DuckDuckBot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('DuckDuckBot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'DuckDuckBot';
    }

    next();
}

app.use(detectDuckDuckBot);
# Apache .htaccess rules for DuckDuckBot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} DuckDuckBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} DuckDuckBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "DuckDuckBot" is_bot=1

# Add cache headers for this bot (the <If> directive requires Apache 2.4+)
<If "%{HTTP_USER_AGENT} =~ /DuckDuckBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for DuckDuckBot

# Map user agent to variable
map $http_user_agent $is_duckduckbot {
    default 0;
    ~*DuckDuckBot 1;
}

# try_files is not allowed inside "if", so choose the document root
# via a second map instead of branching in the location block
map $is_duckduckbot $docroot {
    0 /var/www/html;
    1 /var/www/cached;
}

server {
    # Option 1: block the bot completely (uncomment to use)
    #if ($is_duckduckbot) {
    #    return 403;
    #}

    # Option 2: serve cached content when it exists
    location / {
        root $docroot;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        # add_header is permitted in an "if in location" context
        if ($is_duckduckbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type | Recommendation | Reasoning
E-commerce | Allow | Essential for product visibility in search results
Blog/News | Allow | Increases content reach and discoverability
SaaS Application | Block | No benefit for application interfaces; preserves resources (see the SaaS configuration below for a partial approach)
Documentation | Allow | Improves documentation discoverability for developers
Corporate Site | Allow | Allow public pages; block sensitive areas such as intranets (see the corporate configuration below)

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: DuckDuckBot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: DuckDuckBot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: DuckDuckBot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
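
Corporate Site Configuration

A minimal sketch for the corporate-site recommendation in the table above; the blocked directories are illustrative placeholders, so substitute the internal areas of your own site:

User-agent: DuckDuckBot
Disallow: /intranet/
Disallow: /internal/
Disallow: /staging/
Allow: /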

Quick Reference

User Agent Match: DuckDuckBot
Robots.txt Name: DuckDuckBot
Category: Search
Respects robots.txt: Yes