FacebookBot is Meta's dedicated web crawler for collecting training data for its AI models, including LLaMA. It is distinct from facebookexternalhit, which only fetches pages to generate link previews when URLs are shared on Facebook; FacebookBot crawls more broadly to gather diverse content for model training and improvement. Because it respects robots.txt directives, website owners can opt out of having their content used for Meta's AI training while still allowing link previews via facebookexternalhit.
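For example, a robots.txt policy that opts out of AI training crawling while leaving link previews intact might look like this (a minimal sketch; facebookexternalhit is allowed by default, but the explicit rule makes the intent clear):

User-agent: FacebookBot
Disallow: /

User-agent: facebookexternalhit
Allow: /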
User Agent String
FacebookBot
How to Control FacebookBot
Block Completely
To prevent FacebookBot from accessing your entire website, add this to your robots.txt file:
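User-agent: FacebookBot
Disallow: /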
⚠️ AI Training Notice
This bot may collect and use your website content for AI model training. Consider whether you want your content used for this purpose before allowing access.
Detection Patterns
Multiple ways to detect FacebookBot in your application:
Basic Pattern
/FacebookBot/i
Strict Pattern
/^FacebookBot$/
Flexible Pattern
/FacebookBot(?:\/[\d.]+)?/i
Vendor Match
/Meta.*FacebookBot/i
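To sanity-check these patterns, you can run them against a sample user-agent string. The Python sketch below assumes a hypothetical FacebookBot user agent; the exact string Meta sends may differ:

import re

# Hypothetical user-agent string for illustration only; the exact
# string Meta sends may differ.
sample_ua = "Mozilla/5.0 (compatible; FacebookBot/1.0)"

patterns = {
    "basic":    re.compile(r"FacebookBot", re.IGNORECASE),
    "strict":   re.compile(r"^FacebookBot$"),
    "flexible": re.compile(r"FacebookBot(?:/[\d.]+)?", re.IGNORECASE),
}

for name, pattern in patterns.items():
    # The strict pattern only matches when the UA is exactly
    # "FacebookBot", so it fails against the longer sample above.
    print(name, bool(pattern.search(sample_ua)))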
Implementation Examples
// PHP Detection for FacebookBot
function detect_facebookbot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/FacebookBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('FacebookBot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
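Because PHP's header() calls only take effect before any output is sent, call detect_facebookbot() early in the request, before the page body starts rendering.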
# Python/Flask Detection for FacebookBot
import re

from flask import request

def detect_facebookbot(response=None):
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'FacebookBot'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Set caching headers on the outgoing response, if one was passed in
        if response is not None:
            response.headers['Cache-Control'] = 'public, max-age=3600'
            response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False

# Django Middleware
class FacebookBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g., serve a cached page)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'FacebookBot', user_agent, re.IGNORECASE))
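To activate the middleware, it also needs to be registered in settings.py; the dotted path below is a placeholder for wherever the class actually lives:

MIDDLEWARE = [
    # ... Django's default middleware ...
    'myapp.middleware.FacebookBotMiddleware',  # hypothetical module path
]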
// JavaScript/Node.js Detection for FacebookBot
const express = require('express');
const app = express();

// Middleware to detect FacebookBot
function detectFacebookBot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /FacebookBot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('FacebookBot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'FacebookBot';
    }

    next();
}

app.use(detectFacebookBot);
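Because Express middleware runs before the route handlers, downstream code can branch on req.isBot (set above) to serve cached pages or skip analytics for bot traffic.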
# Apache .htaccess rules for FacebookBot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} FacebookBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} FacebookBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "FacebookBot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /FacebookBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
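These rule sets are alternatives, not a stack: the outright [F] block stops requests before the rewrite or header rules would ever apply, so pick the one that matches your policy. Note also that the <If> container requires Apache 2.4 or later; on older versions, the SetEnvIfNoCase variable can drive the same headers, e.g. Header set Cache-Control "public, max-age=3600" env=is_bot.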
# Nginx configuration for FacebookBot

# Map the user agent to a variable
map $http_user_agent $is_facebookbot {
    default 0;
    ~*FacebookBot 1;
}

server {
    # Block the bot completely
    if ($is_facebookbot) {
        return 403;
    }

    # Or serve cached content ("try_files" is not valid inside "if",
    # so route bot traffic to a named location via error_page instead)
    location / {
        error_page 418 = @bot_cache;
        if ($is_facebookbot) {
            return 418;
        }
        try_files $uri @backend;
    }

    location @bot_cache {
        root /var/www/cached;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_facebookbot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
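Two caveats: the map block is only valid at the http level of the configuration, not inside server, and as with the Apache rules, the outright 403 block and the cached-content approach are alternatives rather than layers.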
Should You Block This Bot?
Recommendations based on your website type:
Site Type        | Recommendation     | Reasoning
E-commerce       | Limit Access       | Protect pricing and inventory data from AI training
Blog/News        | Consider Blocking  | Your content may be used for AI training without compensation
SaaS Application | Block              | No benefit for application interfaces; preserve resources
Documentation    | Selective          | Allow for public docs, block for internal docs
Corporate Site   | Limit              | Allow for public pages, block sensitive areas like intranets
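For the selective recommendations above, robots.txt path rules can express the split. A documentation site, for instance, might allow public docs while blocking an internal area (the /internal/ path below is purely illustrative):

User-agent: FacebookBot
Disallow: /internal/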