Chrome-Lighthouse is the user agent token for Google's Lighthouse, an open-source automated tool for improving web page quality. It audits performance, accessibility, progressive web apps, SEO, and more. Lighthouse runs in Chrome DevTools, from the command line, or as a Node module, and when run in automated environments or CI/CD pipelines it identifies itself with the user agent string shown below. The tool has become essential for web developers, providing actionable insights into the performance and user-experience metrics that feed into search rankings. Lighthouse also powers various Google tools, including PageSpeed Insights.
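For example, here is a minimal sketch of running Lighthouse programmatically as a Node module. It assumes the lighthouse and chrome-launcher npm packages are installed and an ES module context (Lighthouse 10+ ships as an ES module, so require-style imports may not work):

// Minimal sketch: auditing a page with the Lighthouse Node module.
// Assumes the "lighthouse" and "chrome-launcher" npm packages are installed.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
    port: chrome.port,  // talk to the Chrome instance launched above
    output: 'json',
});

// Category scores are reported in the 0..1 range
console.log('Performance score:', result.lhr.categories.performance.score);
await chrome.kill();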
User Agent String
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Chrome-Lighthouse
How to Control Chrome-Lighthouse
Block Completely
To prevent Chrome-Lighthouse from accessing your entire website, add this to your robots.txt file:
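User-agent: Chrome-Lighthouse
Disallow: /

Keep in mind that Lighthouse loads pages directly in a Chrome instance rather than crawling them, so a robots.txt rule expresses a preference but may not prevent an on-demand audit; the server-side rules later on this page are the more reliable control.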
Detection Patterns
There are multiple ways to detect Chrome-Lighthouse in your application:
Basic Pattern
/Chrome-Lighthouse/i
Strict Pattern
/^Mozilla\/5\.0 \(X11; Linux x86_64\) AppleWebKit\/537\.36 \(KHTML, like Gecko\) Chrome\/120\.0\.0\.0 Safari\/537\.36 Chrome-Lighthouse$/
Note that this pattern pins Chrome 120, so it will stop matching as Lighthouse's bundled Chrome version changes.
Flexible Pattern
/Chrome-Lighthouse(?:[\s\/][\d.]+)?/i
Vendor Match
/Google.*Chrome-Lighthouse/i
The stock user agent string shown above contains no "Google" token, so this pattern only matches variants that include one; prefer the basic pattern unless you know such a variant is in play.
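As a quick sanity check, this small sketch (plain Node.js, no dependencies) tests the first three patterns against the sample user agent string from above; the vendor pattern is omitted because, as noted, the stock string contains no "Google" token:

// Sample Chrome-Lighthouse user agent string from above
const ua = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ' +
    '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Chrome-Lighthouse';

const patterns = {
    basic: /Chrome-Lighthouse/i,
    strict: /^Mozilla\/5\.0 \(X11; Linux x86_64\) AppleWebKit\/537\.36 \(KHTML, like Gecko\) Chrome\/120\.0\.0\.0 Safari\/537\.36 Chrome-Lighthouse$/,
    flexible: /Chrome-Lighthouse(?:[\s\/][\d.]+)?/i,
};

for (const [name, re] of Object.entries(patterns)) {
    console.log(name, re.test(ua));  // each line should print "true"
}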
Implementation Examples
// PHP Detection for Chrome-Lighthouse
function detect_chrome_lighthouse() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Chrome-Lighthouse/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Chrome-Lighthouse detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for Chrome-Lighthouse
import re

from flask import Flask, request

app = Flask(__name__)

def detect_chrome_lighthouse():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'Chrome-Lighthouse'
    return re.search(pattern, user_agent, re.IGNORECASE) is not None

@app.after_request
def add_bot_headers(response):
    # Set cache headers on responses served to Chrome-Lighthouse
    if detect_chrome_lighthouse():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django Middleware
class ChromeLighthouseMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return re.search(r'Chrome-Lighthouse', user_agent, re.IGNORECASE) is not None

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic, e.g. flag the request for downstream views
            request.is_bot = True
        return self.get_response(request)

# Register the middleware in settings.py, e.g.:
# MIDDLEWARE = [..., 'yourapp.middleware.ChromeLighthouseMiddleware']
// JavaScript/Node.js Detection for Chrome-Lighthouse
const express = require('express');
const app = express();

// Middleware to detect Chrome-Lighthouse
function detectChromeLighthouse(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Chrome-Lighthouse/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Chrome-Lighthouse detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Chrome-Lighthouse';
    }

    next();
}

app.use(detectChromeLighthouse);
# Apache .htaccess rules for Chrome-Lighthouse

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Chrome-Lighthouse [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Chrome-Lighthouse [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "Chrome-Lighthouse" is_bot=1

# Add cache headers for this bot (requires Apache 2.4+ and mod_headers)
<If "%{HTTP_USER_AGENT} =~ /Chrome-Lighthouse/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Chrome-Lighthouse

# Map the user agent to a flag variable
map $http_user_agent $is_chrome_lighthouse {
    default 0;
    ~*Chrome-Lighthouse 1;
}

# Pick a document root based on that flag. try_files cannot be used
# inside an "if" block, so the bot/non-bot split is done via map instead.
# (/var/www/html as the normal root is an assumption; adjust to your setup.)
map $is_chrome_lighthouse $docroot {
    0 /var/www/html;
    1 /var/www/cached;
}

server {
    # Option 1: block the bot completely
    if ($is_chrome_lighthouse) {
        return 403;
    }

    # Option 2 (use one or the other): serve pre-rendered cached content
    location / {
        root $docroot;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_chrome_lighthouse) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
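Once a rule like the above is in place, a quick way to verify it is to send a request with the Lighthouse user agent and inspect the response. A minimal sketch using the built-in fetch of Node 18+ in an ES module context (example.com stands in for your own host):

// Spoof the Chrome-Lighthouse user agent and inspect the server's response
const ua = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ' +
    '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Chrome-Lighthouse';

const res = await fetch('https://example.com/', {
    headers: { 'User-Agent': ua },
});

// Expect 403 if blocking, or your cache headers if serving cached content
console.log(res.status, res.headers.get('cache-control'));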
Should You Block This Bot?
Recommendations based on your website type:
Site Type        | Recommendation | Reasoning
---------------- | -------------- | ----------------------------------------------------------------
E-commerce       | Optional       | Weigh bandwidth usage against the value of performance audits
Blog/News        | Allow          | Audits support the performance work behind search visibility and reach
SaaS Application | Block          | No benefit for authenticated application interfaces; preserves resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas such as intranets