What is MJ12bot?
MJ12bot is the web crawler operated by Majestic, a UK-based company specializing in backlink analysis and link intelligence. The bot has been crawling the web since 2004 and has built one of the largest link databases available; Majestic's metrics, such as Trust Flow and Citation Flow, are derived from MJ12bot's crawl data. The crawler runs as a distributed network of nodes, which makes it one of the most extensive crawling operations on the web. Some sites block MJ12bot because its data primarily serves SEO analysis, but the crawler respects robots.txt and provides valuable link data for SEO professionals.
User Agent String
Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/)
How to Control MJ12bot
Block Completely
To prevent MJ12bot from accessing your entire website, add this to your robots.txt file:
User-agent: MJ12bot
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
User-agent: MJ12bot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
Set Crawl Delay
To slow down the crawl rate (not every crawler honors this directive; a server-side fallback is sketched after the example):
User-agent: MJ12bot
Crawl-delay: 10
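If a crawler ignores Crawl-delay, you can also throttle it at the application layer. The snippet below is a minimal, hypothetical sketch for a Flask app; the 10-second minimum interval simply mirrors the robots.txt value above and is an assumption, not an MJ12bot requirement.
import time
from flask import Flask, request, abort

app = Flask(__name__)
_last_mj12bot_hit = 0.0

@app.before_request
def throttle_mj12bot():
    global _last_mj12bot_hit
    ua = request.headers.get('User-Agent', '')
    if 'mj12bot' in ua.lower():
        now = time.monotonic()
        if now - _last_mj12bot_hit < 10:
            # 429 asks the crawler to back off and retry later
            abort(429)
        _last_mj12bot_hit = now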
How to Verify MJ12bot
Verification Method: check the requesting IP address against the published list of crawler IPs.
Learn more in the official documentation.
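As a rough sketch, that check can be automated by loading the published ranges into your application and testing each request IP against them. The file name and one-entry-per-line format below are assumptions; substitute whatever format the published list actually uses.
import ipaddress

def load_known_ranges(path='mj12bot_ips.txt'):
    # One IP address or CIDR range per line; blank lines and # comments ignored
    ranges = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith('#'):
                ranges.append(ipaddress.ip_network(line, strict=False))
    return ranges

def is_verified_mj12bot(remote_ip, ranges):
    # True only when the remote address falls inside a published range
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in ranges)
Pair the IP check with the user-agent check below; either signal on its own can be spoofed or go stale.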
Detection Patterns
Multiple ways to detect MJ12bot in your application; a quick test of each pattern follows the list:
Basic Pattern
/MJ12bot/i
Strict Pattern
/^Mozilla\/5\.0 \(compatible; MJ12bot\/v1\.4\.8; http:\/\/mj12bot\.com\/\)$/
Flexible Pattern
/MJ12bot(?:\/v?[\d.]+)?/i
Vendor Match
/Majestic|MJ12bot/i
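To sanity-check the patterns, you can run them against the documented user-agent string. The snippet below rewrites them for Python's re module (so the PCRE delimiters and slash escaping are dropped); treat it as a quick illustration rather than a drop-in detector.
import re

UA = 'Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/)'

patterns = {
    'basic': r'MJ12bot',
    'strict': r'^Mozilla/5\.0 \(compatible; MJ12bot/v1\.4\.8; http://mj12bot\.com/\)$',
    'flexible': r'MJ12bot(?:/v?[\d.]+)?',
    'vendor': r'Majestic|MJ12bot',
}

for name, pattern in patterns.items():
    matched = bool(re.search(pattern, UA, re.IGNORECASE))
    print(name, matched)  # every pattern should report True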
Implementation Examples
PHP
function detect_mj12bot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/MJ12bot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('MJ12bot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        if (file_exists('cache/' . md5($_SERVER['REQUEST_URI']) . '.html')) {
            readfile('cache/' . md5($_SERVER['REQUEST_URI']) . '.html');
            exit;
        }

        return true;
    }

    return false;
}
Python
import re
from flask import request, make_response

# Compile once and reuse in both the Flask helpers and the Django middleware
BOT_PATTERN = re.compile(r'MJ12bot', re.IGNORECASE)

def detect_mj12bot():
    """Return True when the current Flask request comes from MJ12bot."""
    user_agent = request.headers.get('User-Agent', '')
    return bool(BOT_PATTERN.search(user_agent))

def bot_cache_response(body=''):
    """Build a response carrying caching headers for detected bot traffic."""
    response = make_response(body)
    response.headers['Cache-Control'] = 'public, max-age=3600'
    response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django-style middleware variant
class MJ12botMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def detect_bot(self, request):
        return bool(BOT_PATTERN.search(request.META.get('HTTP_USER_AGENT', '')))

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic here (log it, add headers, serve a cached page)
            pass
        return self.get_response(request)
JavaScript
const express = require('express');
const app = express();

// Middleware to detect MJ12bot
function detectMJ12bot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /MJ12bot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('MJ12bot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark the request as bot traffic for downstream handlers
        req.isBot = true;
        req.botName = 'MJ12bot';
    }

    next();
}

app.use(detectMJ12bot);
.htaccess
RewriteEngine On

# Option 1: block MJ12bot entirely (use on its own; the options below
# never run if this rule is active)
RewriteCond %{HTTP_USER_AGENT} MJ12bot [NC]
RewriteRule .* - [F,L]

# Option 2: serve MJ12bot a static copy of each page instead
RewriteCond %{HTTP_USER_AGENT} MJ12bot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Option 3: allow the crawl but add cache headers (requires Apache 2.4+).
# The environment variable can also be reused in log or access conditions.
SetEnvIfNoCase User-Agent "MJ12bot" is_bot=1
<If "%{HTTP_USER_AGENT} =~ /MJ12bot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
Nginx
# Flag MJ12bot requests once, then reuse the variable below
map $http_user_agent $is_mj12bot {
    default 0;
    ~*MJ12bot 1;
}

server {
    # Option 1: block MJ12bot entirely (leave commented out if you prefer
    # to serve cached pages instead, as below)
    # if ($is_mj12bot) {
    #     return 403;
    # }

    # Option 2: serve pre-rendered pages from a cache directory.
    # try_files is not allowed inside "if", so bot traffic is routed to a
    # named location via error_page.
    location / {
        error_page 418 = @mj12bot_cache;
        if ($is_mj12bot) {
            return 418;
        }
        try_files $uri @backend;
    }

    location @mj12bot_cache {
        root /var/www/cached;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    location @backend {
        if ($is_mj12bot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
Should You Block This Bot?
Recommendations based on your website type:
Site Type         Recommendation   Reasoning
E-commerce        Optional         Evaluate based on bandwidth usage vs. benefits
Blog/News         Allow            Increases content reach and discoverability
SaaS Application  Block            No benefit for application interfaces; preserve resources
Documentation     Selective        Allow for public docs, block for internal docs
Corporate Site    Limit            Allow for public pages, block sensitive areas like intranets
Advanced robots.txt Configurations
E-commerce Site Configuration
User-agent: MJ12bot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml
Publishing/Blog Configuration
User-agent: MJ12bot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /
SaaS/Application Configuration
User-agent: MJ12bot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
Quick Reference
User Agent Match: MJ12bot
Robots.txt Name: MJ12bot
Category: SEO
Respects robots.txt: Yes