What is Gigabot?
Gigabot is the crawler for Gigablast, an open-source search engine created by former Infoseek engineer Matt Wells. Gigablast is notable for being one of the few independent search engines with its own web index. The crawler and search engine are designed for transparency and efficiency, with the source code publicly available. While smaller than major search engines, Gigablast represents an important alternative in the search ecosystem and demonstrates that independent search engines are still viable.
User Agent String
Gigabot/3.0 (http://www.gigablast.com/spider.html)
How to Control Gigabot
Block Completely
To prevent Gigabot from accessing your entire website, add this to your robots.txt file:
User-agent: Gigabot
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
User-agent: Gigabot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
Set Crawl Delay
To slow down the crawl rate (note: not all bots respect this directive):
User-agent: Gigabot
Crawl-delay: 10
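Since Crawl-delay is advisory, some sites also enforce pacing server-side. Below is a minimal sketch of a per-IP throttle using Flask; the 10-second window mirrors the Crawl-delay above, and the in-memory store and 429 response are illustrative choices, not anything Gigablast documents:

import time

from flask import Flask, abort, request

app = Flask(__name__)

# Last-request timestamp per client IP. An in-memory dict is fine for a
# sketch; a real deployment would use shared storage such as Redis.
last_seen = {}
MIN_INTERVAL = 10  # seconds, mirroring the Crawl-delay directive above

@app.before_request
def throttle_gigabot():
    if 'gigabot' not in request.headers.get('User-Agent', '').lower():
        return
    ip = request.remote_addr
    now = time.monotonic()
    if ip in last_seen and now - last_seen[ip] < MIN_INTERVAL:
        abort(429)  # Too Many Requests: asks the crawler to back off
    last_seen[ip] = now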
How to Verify Gigabot
Gigablast documents its crawler on the spider information page referenced in the user-agent string (http://www.gigablast.com/spider.html). No dedicated verification service is documented, so match the user-agent string first; a forward-confirmed reverse-DNS lookup, sketched below, is a common secondary check.
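A minimal sketch of that check in Python. The .gigablast.com hostname suffix is an assumption (Gigablast does not publish official crawler hostnames), so treat a match as a heuristic rather than proof:

import socket

# Assumed hostname suffix; Gigablast does not document official crawler hostnames
EXPECTED_SUFFIX = '.gigablast.com'

def verify_gigabot_ip(ip_address):
    """Forward-confirmed reverse DNS check for a client claiming to be Gigabot."""
    try:
        # Step 1: reverse lookup, IP -> hostname
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except OSError:
        return False
    if not hostname.endswith(EXPECTED_SUFFIX):
        return False
    try:
        # Step 2: forward-confirm, hostname -> IPs must include the original IP
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
    return ip_address in forward_ips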
Detection Patterns
Multiple ways to detect Gigabot in your application:
Basic Pattern
/Gigabot/i
Strict Pattern
/^Gigabot\/3\.0 \(http:\/\/www\.gigablast\.com\/spider\.html\)$/
Flexible Pattern
/Gigabot[\s\/]?[\d.]*/i
Vendor Match
/Gigabot.*gigablast\.com/i
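To sanity-check these patterns, the short script below runs them against the documented user-agent string. The regexes are transcribed into Python syntax, where the surrounding / delimiters and their escaping are not needed:

import re

# Documented Gigabot user-agent string
UA = 'Gigabot/3.0 (http://www.gigablast.com/spider.html)'

PATTERNS = {
    'basic': re.compile(r'Gigabot', re.IGNORECASE),
    'strict': re.compile(r'^Gigabot/3\.0 \(http://www\.gigablast\.com/spider\.html\)$'),
    'flexible': re.compile(r'Gigabot[\s/]?[\d.]*', re.IGNORECASE),
    'vendor': re.compile(r'Gigabot.*gigablast\.com', re.IGNORECASE),
}

for name, pattern in PATTERNS.items():
    # All four should report True for the documented string
    print(name, bool(pattern.search(UA)))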
Implementation Examples
PHP
function detect_gigabot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Gigabot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Gigabot detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
Python
import re

from flask import request

GIGABOT_PATTERN = re.compile(r'Gigabot', re.IGNORECASE)

def detect_gigabot():
    # Flask helper: check the current request's User-Agent header
    user_agent = request.headers.get('User-Agent', '')
    return bool(GIGABOT_PATTERN.search(user_agent))

# Django-style middleware variant
class GigabotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        if GIGABOT_PATTERN.search(user_agent):
            # Serve bot traffic with cache-friendly headers
            response['Cache-Control'] = 'public, max-age=3600'
            response['X-Robots-Tag'] = 'noarchive'
        return response
JavaScript
const express = require('express');
const app = express();

// Middleware to detect Gigabot
function detectGigabot(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Gigabot/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Gigabot detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark the request as bot traffic for downstream handlers
        req.isBot = true;
        req.botName = 'Gigabot';
    }
    next();
}

app.use(detectGigabot);
.htaccess
RewriteEngine On

# Option 1: block Gigabot entirely (returns 403 Forbidden)
RewriteCond %{HTTP_USER_AGENT} Gigabot [NC]
RewriteRule .* - [F,L]

# Option 2: serve Gigabot a static cached copy (use in place of Option 1)
RewriteCond %{HTTP_USER_AGENT} Gigabot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Option 3: allow Gigabot but mark responses as cacheable
# (requires Apache 2.4+ for <If> and mod_headers for Header)
SetEnvIfNoCase User-Agent "Gigabot" is_bot=1
<If "%{HTTP_USER_AGENT} =~ /Gigabot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
Nginx
map $http_user_agent $is_gigabot {
    default 0;
    ~*Gigabot 1;
}

server {
    # Option 1: block Gigabot entirely (uncomment to enable)
    # if ($is_gigabot) {
    #     return 403;
    # }

    # Option 2: serve Gigabot from a pre-rendered static cache.
    # Note: try_files is not valid inside "if", so only the root is
    # switched conditionally and a single try_files is used.
    location / {
        if ($is_gigabot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    location @backend {
        if ($is_gigabot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
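Whichever Apache or Nginx option you enable, you can confirm the behavior by replaying a request with Gigabot's user-agent string. A quick check using Python's standard library (http://example.com/ stands in for your own host):

from urllib import error, request

UA = 'Gigabot/3.0 (http://www.gigablast.com/spider.html)'

req = request.Request('http://example.com/', headers={'User-Agent': UA})
try:
    with request.urlopen(req) as resp:
        # Expect the bot-specific caching headers if Option 2 or 3 is active
        print(resp.status, resp.headers.get('Cache-Control'))
except error.HTTPError as exc:
    # A 403 here means the blocking rule matched
    print(exc.code)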
Should You Block This Bot?
Recommendations based on your website type:
Site Type        | Recommendation | Reasoning
E-commerce       | Allow          | Helps surface products in search results
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserves server resources
Documentation    | Allow          | Improves documentation discoverability for developers
Corporate Site   | Allow          | Allow public pages; block sensitive areas such as intranets
Advanced robots.txt Configurations
E-commerce Site Configuration
User-agent: Gigabot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml
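To verify a configuration like this before deploying it, Python's standard urllib.robotparser can evaluate sample paths. Note that robotparser implements the original robots.txt specification, so the wildcard rules above (/*?sort= and similar) are omitted here because it does not evaluate them the way major crawlers do:

from urllib import robotparser

ROBOTS_TXT = """\
User-agent: Gigabot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Allow: /products/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Paths Gigabot should and should not be allowed to fetch
for path in ('/products/widget', '/cart/'):
    print(path, rp.can_fetch('Gigabot', path))

# Crawl-delay as parsed for this agent (Python 3.6+)
print('crawl delay:', rp.crawl_delay('Gigabot'))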
Publishing/Blog Configuration
User-agent: Gigabot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /
SaaS/Application Configuration
User-agent: Gigabot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
Quick Reference
User Agent Match: Gigabot
Robots.txt Name: Gigabot
Category: Search
Respects robots.txt: Yes