linkdexbot User Agent - Linkdex Bot Details | CL SEO

linkdexbot

Operated by Linkdex · Active since 2012 · Category: SEO · Respects robots.txt
#seo #linkdex #link-analysis #crawler

What is linkdexbot?

linkdexbot is the web crawler for Linkdex, an enterprise SEO platform used for link analysis, content optimization, and site auditing. The bot crawls web pages to discover links, analyze page content, and build Linkdex's SEO intelligence database. It respects robots.txt directives and follows standard web crawling practices.

User Agent String

Mozilla/5.0 (compatible; linkdexbot/2.0; +http://www.linkdex.com/bots/)
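
If you need to pull the crawler version or the info URL out of this string, a small parser along these lines works; the regex below is an illustration derived from the string above, not an official format specification:

import re

ua = "Mozilla/5.0 (compatible; linkdexbot/2.0; +http://www.linkdex.com/bots/)"

# Capture the version number and the info URL from the comment section
match = re.search(r"linkdexbot/(?P<version>[\d.]+);\s*\+(?P<info>[^)]+)\)", ua)
if match:
    print(match.group("version"))  # 2.0
    print(match.group("info"))     # http://www.linkdex.com/bots/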

How to Control linkdexbot

Block Completely

To prevent linkdexbot from accessing your entire website, add this to your robots.txt file:

# Block linkdexbot
User-agent: linkdexbot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: linkdexbot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
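
Before deploying rules like the ones above, you can sanity-check them locally with Python's standard-library robots.txt parser. A minimal sketch (the sample paths are placeholders):

import urllib.robotparser

rules = """
User-agent: linkdexbot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Check which sample paths linkdexbot would be allowed to fetch
for path in ("/", "/public/page.html", "/admin/users", "/private/report.pdf"):
    verdict = "allowed" if parser.can_fetch("linkdexbot", path) else "blocked"
    print(path, "->", verdict)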

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: linkdexbot
Crawl-delay: 10
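
Crawl-delay is not part of the original robots.txt standard, which is why support varies by crawler. If you want to confirm how a parser reads the value, Python's standard library exposes it (a minimal sketch):

import urllib.robotparser

rules = """
User-agent: linkdexbot
Crawl-delay: 10
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Returns the delay in seconds, or None if no Crawl-delay applies
print(parser.crawl_delay("linkdexbot"))  # 10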

How to Verify linkdexbot

Verification Method:
Check the User-Agent string for the linkdexbot identifier. Keep in mind that User-Agent strings can be spoofed, so this check only confirms that a client claims to be linkdexbot.

Learn more in the official documentation.
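
Because User-Agent matching alone cannot rule out impersonation, you may want an extra signal. Unlike Googlebot, Linkdex does not document a reverse-DNS verification procedure, so the sketch below treats the rDNS hostname as material for manual review rather than proof; the function name is our own:

import re
import socket

def looks_like_linkdexbot(user_agent, remote_ip=None):
    # Documented method: match the bot name in the User-Agent string
    if not re.search(r"linkdexbot", user_agent, re.IGNORECASE):
        return False
    # Optional heuristic: log the reverse-DNS hostname for manual review.
    # Linkdex publishes no official hostname suffix to match against
    # (assumption), so this lookup is informational only.
    if remote_ip:
        try:
            hostname, _, _ = socket.gethostbyaddr(remote_ip)
            print(f"rDNS for {remote_ip}: {hostname}")
        except OSError:
            pass
    return True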

Detection Patterns

Multiple ways to detect linkdexbot in your application:

Basic Pattern

/linkdexbot/i

Strict Pattern

/^Mozilla\/5\.0 \(compatible; linkdexbot\/2\.0; \+http:\/\/www\.linkdex\.com\/bots\/\)$/

Flexible Pattern

/linkdexbot[\s\/]?[\d.]*/i

Vendor Match

/linkdexbot.*linkdex\.com/i
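
To confirm the four patterns behave as intended, you can test them against the official user agent string. Here they are translated from PCRE delimiters to Python's re syntax (a quick sanity check, not production code):

import re

ua = "Mozilla/5.0 (compatible; linkdexbot/2.0; +http://www.linkdex.com/bots/)"
patterns = {
    "basic":    re.compile(r"linkdexbot", re.I),
    "strict":   re.compile(r"^Mozilla/5\.0 \(compatible; linkdexbot/2\.0; "
                           r"\+http://www\.linkdex\.com/bots/\)$"),
    "flexible": re.compile(r"linkdexbot[\s/]?[\d.]*", re.I),
    "vendor":   re.compile(r"linkdexbot.*linkdex\.com", re.I),
}
for name, pattern in patterns.items():
    print(name, bool(pattern.search(ua)))  # all four print True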

Implementation Examples

// PHP detection for linkdexbot
function detect_linkdexbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/linkdexbot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('linkdexbot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask detection for linkdexbot
import re
from flask import request, make_response

def detect_linkdexbot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'linkdexbot'
    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django middleware
class LinkdexbotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        # Match the bot name in the request's User-Agent header
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'linkdexbot', user_agent, re.IGNORECASE))
// JavaScript/Node.js detection for linkdexbot
const express = require('express');
const app = express();

// Middleware to detect linkdexbot
function detectLinkdexbot(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /linkdexbot/i;

  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('linkdexbot detected from IP:', req.ip);

    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });

    // Mark request as bot
    req.isBot = true;
    req.botName = 'linkdexbot';
  }
  next();
}

app.use(detectLinkdexbot);
# Apache .htaccess rules for linkdexbot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} linkdexbot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} linkdexbot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "linkdexbot" is_bot=1

# Add cache headers for this bot (requires Apache 2.4+)
<If "%{HTTP_USER_AGENT} =~ /linkdexbot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for linkdexbot

# Map the user agent to a variable (place in the http block)
map $http_user_agent $is_linkdexbot {
    default      0;
    ~*linkdexbot 1;
}

server {
    # Option 1: block the bot completely
    if ($is_linkdexbot) {
        return 403;
    }

    # Option 2: serve cached content (remove the block above first)
    location / {
        # Switch the document root for bot requests; try_files cannot
        # appear inside an "if" block, so it stays at location level
        if ($is_linkdexbot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests that reach the backend
    location @backend {
        if ($is_linkdexbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserves resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: linkdexbot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: linkdexbot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: linkdexbot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/

Quick Reference

User Agent Match: linkdexbot
Robots.txt Name: linkdexbot
Category: SEO
Respects robots.txt: Yes