ICC-Crawler User Agent - NICT Bot Details | CL SEO

ICC-Crawler

Operator: NICT
Active since: 2016
Category: Other
Respects robots.txt: Yes
Tags: #research #multilingual #nlp #crawler #japanese

What is ICC-Crawler?

ICC-Crawler is operated by NICT (National Institute of Information and Communications Technology), a Japanese research institution. The bot collects web pages for research into advanced information processing, including multilingual natural language processing and machine translation. It respects robots.txt directives and is used strictly for academic research purposes.

User Agent String

ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html)

How to Control ICC-Crawler

Block Completely

To prevent ICC-Crawler from accessing your entire website, add this to your robots.txt file:

# Block ICC-Crawler
User-agent: ICC-Crawler
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: ICC-Crawler
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: ICC-Crawler
Crawl-delay: 10

How to Verify ICC-Crawler

Verification Method:
Verify that requests claiming to be ICC-Crawler originate from NICT IP ranges (a reverse-DNS sketch follows below).

Learn more in the official documentation.
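
A common way to apply the IP-range requirement is a forward-confirmed reverse DNS lookup. The sketch below is not taken from NICT's documentation; the nict.go.jp hostname suffix and the is_verified_icc_crawler helper name are assumptions made here for illustration, so confirm the authoritative IP ranges or hostnames with NICT before enforcing this check.

# Sketch: forward-confirmed reverse DNS check for a request claiming to be ICC-Crawler.
# Assumption: legitimate crawler hosts resolve under nict.go.jp (verify against NICT's docs).
import socket

def is_verified_icc_crawler(ip_address: str) -> bool:
    try:
        # Reverse lookup: IP -> hostname
        hostname, _, _ = socket.gethostbyaddr(ip_address)
        if not hostname.endswith('.nict.go.jp'):
            return False
        # Forward confirmation: the hostname must resolve back to the same IP
        _, _, forward_ips = socket.gethostbyname_ex(hostname)
        return ip_address in forward_ips
    except OSError:
        # No reverse record, or the forward lookup failed
        return False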

Detection Patterns

Multiple ways to detect ICC-Crawler in your application:

Basic Pattern

/ICC\-Crawler/i

Strict Pattern

/^ICC\-Crawler\/3\.0 \(Mozilla\-compatible; ; https:\/\/ucri\.nict\.go\.jp\/en\/icccrawler\.html\)$/

Flexible Pattern

/ICC\-Crawler[\s\/]?[\d\.]*/i

Vendor Match

/ICC\-Crawler.*nict\.go\.jp/i
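
As a quick sanity check (a sketch added here, not part of the page's pattern list), the patterns above can be exercised in Python against the documented user agent string:

# Sketch: test the detection patterns against the documented ICC-Crawler UA string.
import re

UA = 'ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html)'

patterns = {
    'basic': r'ICC\-Crawler',
    'strict': r'^ICC\-Crawler/3\.0 \(Mozilla\-compatible; ; https://ucri\.nict\.go\.jp/en/icccrawler\.html\)$',
    'flexible': r'ICC\-Crawler[\s/]?[\d.]*',
    'vendor': r'ICC\-Crawler.*nict\.go\.jp',
}

for name, pattern in patterns.items():
    # The strict pattern is anchored and case-sensitive; the others use /i semantics
    flags = 0 if name == 'strict' else re.IGNORECASE
    print(name, bool(re.search(pattern, UA, flags)))

All four checks should print True for the user agent string above; anything else points to a typo in the pattern.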

Implementation Examples

// PHP Detection for ICC-Crawler
function detect_icc_crawler() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/ICC\-Crawler/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('ICC-Crawler detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: Serve cached version
        if (file_exists('cache/' . md5($_SERVER['REQUEST_URI']) . '.html')) {
            readfile('cache/' . md5($_SERVER['REQUEST_URI']) . '.html');
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for ICC-Crawler
import re

from flask import request, make_response


def detect_icc_crawler():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'ICC-Crawler'

    if re.search(pattern, user_agent, re.IGNORECASE):
        # Build a response with caching headers for the bot
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True

    return False


# Django Middleware
class ICCCrawlerMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'ICC-Crawler', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for ICC-Crawler
const express = require('express');
const app = express();

// Middleware to detect ICC-Crawler
function detectICCCrawler(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /ICC-Crawler/i;

  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('ICC-Crawler detected from IP:', req.ip);

    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });

    // Mark request as bot
    req.isBot = true;
    req.botName = 'ICC-Crawler';
  }

  next();
}

app.use(detectICCCrawler);
# Apache .htaccess rules for ICC-Crawler

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ICC\-Crawler [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} ICC\-Crawler [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "ICC\-Crawler" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /ICC\-Crawler/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for ICC-Crawler

# Map user agent to variable
map $http_user_agent $is_icc_crawler {
    default 0;
    ~*ICC\-Crawler 1;
}

server {
    # Block the bot completely
    if ($is_icc_crawler) {
        return 403;
    }

    # Or serve cached content (try_files is not allowed inside "if",
    # so bot traffic is routed to a named location instead)
    location / {
        error_page 418 = @bot_cache;
        if ($is_icc_crawler) {
            return 418;
        }
        try_files $uri @backend;
    }

    location @bot_cache {
        root /var/www/cached;
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests (assumes an upstream named "backend")
    location @backend {
        if ($is_icc_crawler) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserve resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: ICC-Crawler
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/

Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: ICC-Crawler
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: ICC-Crawler
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/

Quick Reference

User Agent Match: ICC-Crawler
Robots.txt Name: ICC-Crawler
Category: Other
Respects robots.txt: Yes