yacybot User Agent - YaCy Bot Details | CL SEO

yacybot

YaCy Since 2006
Search Respects robots.txt
#search #decentralized #peer-to-peer #open-source

What is yacybot?

yacybot is the web crawler for YaCy, a free, decentralized, peer-to-peer search engine. Unlike traditional search engines, YaCy operates on a distributed network of nodes that collaboratively crawl and index the web without central control. Each YaCy node contributes to the shared search index. The bot respects robots.txt directives and is operated by individual users running YaCy nodes on their own hardware.

User Agent String

yacybot (/global; amd64 Linux 5.4.0; java 11.0.11; Etc/en) http://yacy.net/bot.html
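The fields in this string (network mode, platform, Java version, locale, info URL) can be pulled apart with a simple regular expression. A minimal sketch in Python; the capture-group layout is an assumption inferred from the sample string above, not a documented format:

```python
import re

UA = "yacybot (/global; amd64 Linux 5.4.0; java 11.0.11; Etc/en) http://yacy.net/bot.html"

# Capture the parenthesised fields and the info URL.
# Field order is assumed from the sample UA above.
pattern = re.compile(
    r"^yacybot \((?P<network>[^;]+); (?P<platform>[^;]+); "
    r"java (?P<java>[^;]+); (?P<locale>[^)]+)\) (?P<url>\S+)$"
)

m = pattern.match(UA)
if m:
    print(m.group("platform"))  # amd64 Linux 5.4.0
    print(m.group("java"))      # 11.0.11
```

Parsing these fields out is mainly useful for logging which node platforms and YaCy builds are hitting your site.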

How to Control yacybot

Block Completely

To prevent yacybot from accessing your entire website, add this to your robots.txt file:

# Block yacybot
User-agent: yacybot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: yacybot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: yacybot
Crawl-delay: 10
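Since not every crawler honors Crawl-delay, pacing can also be enforced server-side. A minimal in-memory throttle sketch in Python; the class and method names are illustrative, not from any particular framework:

```python
import time

class BotThrottle:
    """Reject requests from a user agent that arrive faster than
    min_interval seconds apart (mirrors a Crawl-delay of min_interval)."""

    def __init__(self, min_interval=10.0):
        self.min_interval = min_interval
        self.last_seen = {}  # user-agent -> timestamp of last allowed request

    def allow(self, user_agent, now=None):
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(user_agent)
        if last is not None and now - last < self.min_interval:
            return False  # too soon: a sensible response is HTTP 429
        self.last_seen[user_agent] = now
        return True

throttle = BotThrottle(min_interval=10.0)
```

In production you would key on IP as well as user agent and use a shared store (e.g. Redis) instead of a process-local dict, but the pacing logic is the same.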

How to Verify yacybot

Verification method: check the user-agent string for the yacybot identifier. Because YaCy nodes are run by independent operators on their own hardware, there is no central IP range or reverse-DNS hostname to verify against, so user-agent matching is the practical check.

Learn more in the official documentation.

Detection Patterns

Multiple ways to detect yacybot in your application:

Basic Pattern

/yacybot/i

Strict Pattern

/^yacybot \(\/global; amd64 Linux 5\.4\.0; java 11\.0\.11; Etc\/en\) http:\/\/yacy\.net\/bot\.html$/

Flexible Pattern

/yacybot[\s\/]?[\d.]*/i

Vendor Match

/yacybot.*yacy\.net/i
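The patterns above can be sanity-checked against the sample user-agent string. A quick Python harness; the strict pattern is transcribed with its delimiter slashes removed, since Python's `re` does not need them escaped:

```python
import re

UA = "yacybot (/global; amd64 Linux 5.4.0; java 11.0.11; Etc/en) http://yacy.net/bot.html"

patterns = {
    "basic": re.compile(r"yacybot", re.IGNORECASE),
    "strict": re.compile(
        r"^yacybot \(/global; amd64 Linux 5\.4\.0; java 11\.0\.11; "
        r"Etc/en\) http://yacy\.net/bot\.html$"
    ),
    "flexible": re.compile(r"yacybot[\s/]?[\d.]*", re.IGNORECASE),
}

for name, pat in patterns.items():
    print(name, bool(pat.search(UA)))  # all three print True
```

Note the trade-off: the strict pattern breaks as soon as a node runs a different OS, kernel, or Java version, so the basic pattern is usually the right default.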

Implementation Examples

// PHP Detection for yacybot
function detect_yacybot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/yacybot/i';
    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('yacybot detected from IP: ' . $_SERVER['REMOTE_ADDR']);
        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');
        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for yacybot
import re

from flask import request

YACYBOT_PATTERN = re.compile(r'yacybot', re.IGNORECASE)

def detect_yacybot():
    user_agent = request.headers.get('User-Agent', '')
    return bool(YACYBOT_PATTERN.search(user_agent))

def add_bot_headers(response):
    # Attach caching headers when the request came from yacybot
    response.headers['Cache-Control'] = 'public, max-age=3600'
    response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django Middleware
class YacybotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g. add cache headers, skip analytics)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(YACYBOT_PATTERN.search(user_agent))
// JavaScript/Node.js Detection for yacybot
const express = require('express');
const app = express();

// Middleware to detect yacybot
function detectYacybot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /yacybot/i;
    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('yacybot detected from IP:', req.ip);
        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });
        // Mark request as bot
        req.isBot = true;
        req.botName = 'yacybot';
    }
    next();
}

app.use(detectYacybot);
# Apache .htaccess rules for yacybot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} yacybot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} yacybot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "yacybot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /yacybot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for yacybot

# Map user agent to variable
map $http_user_agent $is_yacybot {
    default   0;
    ~*yacybot 1;
}

server {
    # Block the bot completely
    if ($is_yacybot) {
        return 403;
    }

    # Or serve cached content (try_files is not valid inside "if",
    # so only the document root is switched per user agent)
    location / {
        if ($is_yacybot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_yacybot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

| Site Type | Recommendation | Reasoning |
| --- | --- | --- |
| E-commerce | Allow | Essential for product visibility in search results |
| Blog/News | Allow | Increases content reach and discoverability |
| SaaS Application | Block | No benefit for application interfaces; preserves resources |
| Documentation | Allow | Improves documentation discoverability for developers |
| Corporate Site | Allow | Allow public pages; block sensitive areas such as intranets |

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: yacybot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: yacybot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: yacybot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
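Before deploying rules like these, it is worth checking how they actually evaluate for the yacybot user agent. Python's standard-library `urllib.robotparser` can parse a rule set directly (note it does not understand wildcard patterns like `/*?sort=`, so this sketch uses the wildcard-free SaaS-style rules):

```python
from urllib.robotparser import RobotFileParser

# SaaS-style rules, fed directly into the stdlib parser
rules = """\
User-agent: yacybot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Allow: /
Allow: /docs/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("yacybot", "/docs/"))      # True
print(rp.can_fetch("yacybot", "/app/login"))  # False
print(rp.can_fetch("yacybot", "/pricing"))    # True
```

Keep in mind that the stdlib parser uses first-match semantics rather than the longest-match rule some crawlers apply, so for complex rule sets it is a sanity check, not a guarantee of how every bot will interpret them.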

Quick Reference

User Agent Match

yacybot

Robots.txt Name

yacybot

Category

search

Respects robots.txt

Yes