Googlebot User Agent - Google Bot Details | CL SEO

Googlebot

Vendor: Google
Since: 1996
Category: Search
Respects robots.txt: Yes
Tags: #search #google #crawler #indexing

What is Googlebot?

Googlebot is Google's primary web crawling bot, responsible for discovering and indexing the billions of web pages that appear in Google Search results. As one of the most important crawlers on the internet, Googlebot uses sophisticated algorithms to decide which sites to crawl, how often, and how many pages to fetch from each. It operates from IP addresses that can be verified through reverse DNS lookups, and it respects robots.txt directives, meta robots tags, and X-Robots-Tag HTTP headers.

Googlebot actually consists of two crawlers: a desktop crawler that simulates a user on desktop, and a mobile crawler that simulates a user on a smartphone. Since 2019, Googlebot has used an evergreen Chromium rendering engine, so it can execute modern JavaScript and render pages much as users see them.

Website owners should keep their sites accessible to Googlebot for optimal search visibility, while using robots.txt and other directives to control which content should not be indexed. Blocking Googlebot removes pages from Google Search results, which can significantly reduce organic traffic.
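Because the two crawlers announce themselves with different user agent strings (Google's documented smartphone UA carries Android and Mobile tokens; the desktop UA does not), the header alone is enough to tell them apart. A minimal Python sketch of that split; the classify_googlebot helper is hypothetical, and the exact UA strings (notably the Chrome version token) change over time:

import re

def classify_googlebot(user_agent):
    """Return 'smartphone', 'desktop', or None for non-Googlebot UAs."""
    if not re.search(r'Googlebot/\d', user_agent, re.IGNORECASE):
        return None
    # Google's documented smartphone-crawler UA carries Android/Mobile tokens
    if 'Android' in user_agent or 'Mobile' in user_agent:
        return 'smartphone'
    return 'desktop'

print(classify_googlebot(
    'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
))  # -> desktop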

User Agent String

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

How to Control Googlebot

Block Completely

To prevent Googlebot from accessing your entire website, add this to your robots.txt file:

# Block Googlebot
User-agent: Googlebot
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: Googlebot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow the crawl rate for crawlers that honor this directive (note: Googlebot ignores Crawl-delay entirely; ways to slow Googlebot itself follow the example below):

User-agent: Googlebot
Crawl-delay: 10
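Since Googlebot does not support Crawl-delay, its crawl rate is instead governed by how your server responds: Google documents that sustained 429 or 5xx responses cause Googlebot to back off. A minimal single-process Flask sketch of that idea, assuming a 10-second target interval; the in-memory timestamp is illustrative, not production-ready:

import re
import time

from flask import Flask, request

app = Flask(__name__)

MIN_INTERVAL = 10.0        # assumed target: at most one Googlebot fetch per 10 s
_last_googlebot_hit = 0.0  # single-process demo state

@app.before_request
def throttle_googlebot():
    global _last_googlebot_hit
    user_agent = request.headers.get('User-Agent', '')
    if re.search(r'Googlebot', user_agent, re.IGNORECASE):
        now = time.monotonic()
        if now - _last_googlebot_hit < MIN_INTERVAL:
            # 429 tells Googlebot to slow down; Retry-After is advisory
            return 'Too Many Requests', 429, {'Retry-After': '10'}
        _last_googlebot_hit = now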

How to Verify Googlebot

Verification Method:
Run a reverse DNS lookup on the requesting IP address; the hostname should end in googlebot.com or google.com. Then run a forward DNS lookup on that hostname and confirm it resolves back to the original IP.
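A minimal Python sketch of this forward-confirmed reverse DNS check (the verify_googlebot helper is hypothetical; gethostbyname_ex covers IPv4 only, and Google also publishes official Googlebot IP range lists you can match against instead):

import socket

def verify_googlebot(ip):
    """Forward-confirmed reverse DNS check for a claimed Googlebot IP."""
    try:
        # 1. Reverse lookup: the PTR hostname must belong to Google
        host = socket.gethostbyaddr(ip)[0]
        if not host.endswith(('.googlebot.com', '.google.com')):
            return False
        # 2. Forward lookup: the hostname must resolve back to the same IP
        return ip in socket.gethostbyname_ex(host)[2]
    except (socket.herror, socket.gaierror):
        return False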

Learn more in the official documentation.

Detection Patterns

Multiple ways to detect Googlebot in your application:

Basic Pattern

/Googlebot/i

Strict Pattern

/^Mozilla\/5\.0 \(compatible; Googlebot\/2\.1; \+http:\/\/www\.google\.com\/bot\.html\)$/

Flexible Pattern

/Googlebot[\s\/]?[\d.]*/i

Vendor Match

/Google.*bot/i
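A quick sanity check is to run all four patterns against the canonical user agent string shown above; each should match. A small Python harness (the pattern names are just labels for this page's variants):

import re

UA = 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'

patterns = {
    'basic':    r'Googlebot',
    'strict':   r'^Mozilla/5\.0 \(compatible; Googlebot/2\.1; '
                r'\+http://www\.google\.com/bot\.html\)$',
    'flexible': r'Googlebot[\s/]?[\d.]*',
    'vendor':   r'Google.*bot',
}

for name, pattern in patterns.items():
    matched = bool(re.search(pattern, UA, re.IGNORECASE))
    print(f'{name:8} -> {matched}')  # all four should print True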

Implementation Examples

// PHP detection for Googlebot
function detect_googlebot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Googlebot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Googlebot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask detection for Googlebot
import re

from flask import request, make_response

def detect_googlebot():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'Googlebot'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create a response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False

# Django middleware
class GooglebotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.headers.get('User-Agent', '')
        return bool(re.search(r'Googlebot', user_agent, re.IGNORECASE))
// JavaScript/Node.js detection for Googlebot
const express = require('express');
const app = express();

// Middleware to detect Googlebot
function detectGooglebot(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /Googlebot/i;

  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('Googlebot detected from IP:', req.ip);

    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });

    // Mark request as bot
    req.isBot = true;
    req.botName = 'Googlebot';
  }

  next();
}

app.use(detectGooglebot);
# Apache .htaccess rules for Googlebot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "Googlebot" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /Googlebot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for Googlebot

# Map the user agent to a variable
map $http_user_agent $is_googlebot {
    default 0;
    ~*Googlebot 1;
}

server {
    # Block the bot completely
    if ($is_googlebot) {
        return 403;
    }

    # Or route the bot to pre-rendered files ("try_files" is not valid
    # inside "if", so rewrite to an internal location instead)
    location / {
        if ($is_googlebot) {
            rewrite ^(.*)$ /cached$1 last;
        }
        try_files $uri @backend;
    }

    location /cached/ {
        internal;
        alias /var/www/cached/;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_googlebot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type         Recommendation   Reasoning
E-commerce        Allow            Essential for product visibility in search results
Blog/News         Allow            Increases content reach and discoverability
SaaS Application  Block            No benefit for application interfaces; preserves resources
Documentation     Allow            Improves documentation discoverability for developers
Corporate Site    Allow            Allow public pages; block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: Googlebot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: Googlebot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: Googlebot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
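Before deploying prefix-only rule sets like the SaaS one above, you can sanity-check them with Python's standard library parser. Note that urllib.robotparser does not understand Google's wildcard extensions (such as /*?sort= in the e-commerce example), so this is only a rough check:

from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Googlebot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

for path in ('/', '/pricing/', '/app/board', '/api/v1/users'):
    verdict = 'allow' if rp.can_fetch('Googlebot', path) else 'block'
    print(f'{path:15} -> {verdict}')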

Quick Reference

User Agent Match: Googlebot
Robots.txt Name: Googlebot
Category: search
Respects robots.txt: Yes