What is WebPageTest?
WebPageTest is the user agent of the popular open-source web performance testing tool. Unlike a search crawler, it fetches pages on demand when someone runs a test against your site. Originally developed at AOL and now maintained by Catchpoint, WebPageTest reports detailed performance metrics, including Core Web Vitals, waterfall charts, and filmstrip views. Its test agents can simulate various connection speeds, devices, and geographic locations, which makes it popular with performance engineers and developers who need detailed metrics from multiple global locations. The service offers both public and private testing instances.
User Agent String
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 WebPageTest
How to Control WebPageTest
Block Completely
To prevent WebPageTest from accessing your entire website, add this to your robots.txt file:
User-agent: WebPageTest
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
User-agent: WebPageTest
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
Set Crawl Delay
To slow down the crawl rate (note: not all bots respect this directive):
User-agent: WebPageTest
Crawl-delay: 10
How to Verify WebPageTest
Verification Method:
WebPageTest identifies itself via the "WebPageTest" token at the end of its user agent string.
Learn more in the official documentation.
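Because the only verification signal is this user agent token, a check reduces to a substring match. Below is a minimal sketch in Python (the sample UA is the one published above); since any client can send this string, treat a match as identification, not proof of origin:

def is_webpagetest(user_agent):
    # The "WebPageTest" token is the only stable marker in the UA
    return 'webpagetest' in (user_agent or '').lower()

ua = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
      '(KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 WebPageTest')
print(is_webpagetest(ua))                                    # True
print(is_webpagetest('Mozilla/5.0 (compatible; Googlebot)')) # False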
Detection Patterns
Multiple ways to detect WebPageTest in your application:
Basic Pattern
/WebPageTest/i
Strict Pattern
/^Mozilla\/5\.0 \(X11; Linux x86_64\) AppleWebKit\/537\.36 \(KHTML, like Gecko\) Chrome\/125\.0\.0\.0 Safari\/537\.36 WebPageTest$/
Flexible Pattern
/WebPageTest[\s\/]?[\d.]*/i
Vendor Match
/WebPageTest$/i
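As a quick sanity check, the patterns above can be exercised against the published user agent string. A minimal sketch in Python, with the patterns transcribed from the list above into Python regex syntax:

import re

UA = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
      '(KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 WebPageTest')

PATTERNS = {
    'basic': r'WebPageTest',
    'strict': r'^Mozilla/5\.0 \(X11; Linux x86_64\) AppleWebKit/537\.36 '
              r'\(KHTML, like Gecko\) Chrome/125\.0\.0\.0 Safari/537\.36 WebPageTest$',
    'flexible': r'WebPageTest[\s/]?[\d.]*',
    'vendor': r'WebPageTest$',
}

for name, pattern in PATTERNS.items():
    # re.IGNORECASE mirrors the /i flag on the patterns above
    print(name, bool(re.search(pattern, UA, re.IGNORECASE)))
# All four patterns should print True for the sample UA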
Implementation Examples
PHP
function detect_webpagetest() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/WebPageTest/i';
    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('WebPageTest detected from IP: ' . $_SERVER['REMOTE_ADDR']);
        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');
        // Optional: serve a cached copy of the requested URI if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
Python
import re
from flask import request, make_response

def detect_webpagetest():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'WebPageTest'
    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response carrying cache headers for the bot
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django-style middleware variant
class WebPageTestMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g., add cache headers or serve a cached page)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'WebPageTest', user_agent, re.IGNORECASE))
JavaScript
const express = require('express');
const app = express();

// Middleware to detect WebPageTest
function detectWebPageTest(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /WebPageTest/i;
    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('WebPageTest detected from IP:', req.ip);
        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });
        // Mark request as bot
        req.isBot = true;
        req.botName = 'WebPageTest';
    }
    next();
}

app.use(detectWebPageTest);
.htaccess
# Option 1: block WebPageTest entirely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} WebPageTest [NC]
RewriteRule .* - [F,L]

# Option 2: serve a static copy instead (use in place of Option 1)
RewriteCond %{HTTP_USER_AGENT} WebPageTest [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Option 3: allow the bot but set cache headers (requires mod_headers)
SetEnvIfNoCase User-Agent "WebPageTest" is_bot=1
<If "%{HTTP_USER_AGENT} =~ /WebPageTest/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
Nginx
map $http_user_agent $is_webpagetest {
    default 0;
    ~*WebPageTest 1;
}

# Prefix used to look up pre-rendered copies for the bot
map $is_webpagetest $cache_prefix {
    0 "";
    1 "/cached";
}

server {
    # Option 1: block WebPageTest entirely (delete this block to use
    # the cached/header options below instead)
    if ($is_webpagetest) {
        return 403;
    }

    root /var/www/html;

    location / {
        # Option 2: serve a pre-rendered copy from /var/www/html/cached
        # when one exists; try_files is not valid inside "if", so the
        # lookup prefix comes from the $cache_prefix map instead
        try_files $cache_prefix$uri $cache_prefix$uri.html $uri @backend;
    }

    location @backend {
        # Option 3: pass the request through but mark it cacheable
        if ($is_webpagetest) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
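Whichever server configuration you choose, you can confirm it behaves as intended by replaying the WebPageTest user agent against your site. A minimal sketch using Python's requests library (https://example.com/ is a placeholder; substitute a page on your own site):

import requests

URL = 'https://example.com/'  # placeholder; use a page on your site
WPT_UA = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
          '(KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 WebPageTest')

resp = requests.get(URL, headers={'User-Agent': WPT_UA}, timeout=10)
# A blocking rule should return 403; a header-only rule should return 200
# with the cache headers set by the configuration.
print(resp.status_code)
print(resp.headers.get('Cache-Control'))
print(resp.headers.get('X-Robots-Tag'))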
Should You Block This Bot?
Recommendations based on your website type:
Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserve resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets
Advanced robots.txt Configurations
E-commerce Site Configuration
User-agent: WebPageTest
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml
Publishing/Blog Configuration
User-agent: WebPageTest
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /
SaaS/Application Configuration
User-agent: WebPageTest
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
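Before deploying any of these configurations, you can sanity-check the rules with Python's standard-library robots.txt parser. A minimal sketch using the SaaS configuration above (note that urllib.robotparser treats rule paths as literal prefixes, so wildcard rules such as Disallow: /*?sort= in the e-commerce example need separate testing):

from urllib.robotparser import RobotFileParser

RULES = """\
User-agent: WebPageTest
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

for path in ('/pricing/', '/docs/getting-started', '/app/login', '/api/v1/users'):
    print(path, parser.can_fetch('WebPageTest', path))
# Expected: the first two paths are allowed, the last two are disallowed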
Quick Reference
User Agent Match: WebPageTest
Robots.txt Name: WebPageTest
Category: Monitor
Respects robots.txt: May not respect