The HubSpot crawler supports features across HubSpot's marketing, sales, and service platform. It fetches website content for social media monitoring, competitor analysis, and content strategy tools, and it powers HubSpot's Website Grader and lead intelligence features by gathering information about prospect websites. For HubSpot users, the crawler enables social media publishing previews, website performance analysis, and automated lead enrichment. The bot operates respectfully and underpins many HubSpot marketing automation workflows.
User Agent String
HubSpot Crawler 1.0 (+https://www.hubspot.com/)
How to Control HubSpot
Block Completely
To prevent HubSpot from accessing your entire website, add this to your robots.txt file:
# Block HubSpot
User-agent: HubSpot
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
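For example, the following robots.txt rules let HubSpot crawl the rest of the site but keep it out of selected directories (the paths below are placeholders; substitute your own):
# Allow HubSpot everywhere except these directories
User-agent: HubSpot
Disallow: /private/
Disallow: /admin/
Disallow: /internal/
If robots.txt alone is not enough, you can also detect the crawler server-side and decide how to handle its requests, as the following snippets illustrate.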
// PHP Detection for HubSpot
function detect_hubspot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/HubSpot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('HubSpot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
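One simple way to use this function is to call it at the top of a front controller, before any output is sent. A minimal sketch (the file name is hypothetical):
// index.php (hypothetical front controller)
require __DIR__ . '/detect_hubspot.php';
detect_hubspot(); // sets headers and may serve a cached page for the bot
// ... normal page rendering continues for everyone else ...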
# Python/Flask Detection for HubSpot
import re

from flask import request, make_response

def detect_hubspot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'HubSpot'
    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create response with caching
        # (attach these headers to the actual response, e.g. in an
        # after_request hook, for them to take effect)
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class HubSpotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (log it, set headers, serve a cached page, ...)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'HubSpot', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for HubSpot
const express = require('express');
const app = express();

// Middleware to detect HubSpot
function detectHubSpot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /HubSpot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('HubSpot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'HubSpot';
    }

    next();
}

app.use(detectHubSpot);
# Apache .htaccess rules for HubSpot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} HubSpot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} HubSpot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "HubSpot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /HubSpot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
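If you use the SetEnvIfNoCase rule above, application code can read the flag instead of re-parsing the user agent. A minimal PHP sketch, assuming Apache exposes the variable in $_SERVER (after an internal redirect it may show up as REDIRECT_is_bot):
// Read the flag set by Apache's SetEnvIfNoCase rule
$is_bot = !empty($_SERVER['is_bot']) || !empty($_SERVER['REDIRECT_is_bot']);
if ($is_bot) {
    // Serve a lighter or cached page to the HubSpot crawler
}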
# Nginx configuration for HubSpot

# Map user agent to variable
map $http_user_agent $is_hubspot {
    default 0;
    ~*HubSpot 1;
}

server {
    # Block the bot completely
    if ($is_hubspot) {
        return 403;
    }

    # Or serve cached content. Note that nginx does not allow try_files
    # inside an "if" block, so only the document root is switched here.
    location / {
        if ($is_hubspot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_hubspot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
Should You Block This Bot?
Recommendations based on your website type:

Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserve resources
Documentation    | Selective      | Allow for public docs, block for internal docs (see the example below)
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets
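For the Selective and Limit rows, a robots.txt along these lines is one way to keep public sections crawlable while excluding internal areas (the paths are illustrative only):
# Allow public pages, keep HubSpot out of internal areas
User-agent: HubSpot
Disallow: /intranet/
Disallow: /internal-docs/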