IAS Crawler User Agent - Integral Ad Science Bot Details | CL SEO

IAS Crawler

Operator: Integral Ad Science
Active since: 2015
Category: Other
Respects robots.txt: Yes
#advertising #brand-safety #verification #crawler

What is IAS Crawler?

IAS Crawler is operated by Integral Ad Science, a global leader in digital ad verification. The crawler analyzes web pages where ads might appear to ensure brand safety, protecting advertisers from association with inappropriate content. IAS helps verify that ads appear in suitable contexts, are free from fraud, and are viewable by real humans. The crawler is part of a comprehensive platform that helps advertisers ensure their ads run only alongside appropriate, brand-safe content.

User Agent String

IAS Crawler/1.0 (+https://integralads.com/ias-crawler/)
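The token layout above can be pulled apart with a small regular expression; this is a minimal sketch, assuming the exact string shown here:

```python
import re

# Split the IAS Crawler user-agent string into name, version, and info URL.
ua = "IAS Crawler/1.0 (+https://integralads.com/ias-crawler/)"

match = re.match(
    r"^(?P<name>IAS Crawler)/(?P<version>[\d.]+) \(\+(?P<url>https?://\S+)\)$",
    ua,
)
if match:
    print(match.group("name"))     # IAS Crawler
    print(match.group("version"))  # 1.0
    print(match.group("url"))      # https://integralads.com/ias-crawler/
```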

How to Control IAS Crawler

Block Completely

To prevent IAS Crawler from accessing your entire website, add this to your robots.txt file:

# Block IAS Crawler
User-agent: IAS Crawler
Disallow: /
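To sanity-check a rule like this, Python's standard-library robots.txt parser can be fed the same directives; a minimal sketch:

```python
from urllib.robotparser import RobotFileParser

# Feed the blocking rule above to the stdlib parser and confirm it
# denies IAS Crawler while leaving other agents unaffected.
rules = """
User-agent: IAS Crawler
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("IAS Crawler", "https://example.com/any-page"))    # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/any-page"))   # True
```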

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: IAS Crawler
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: IAS Crawler
Crawl-delay: 10

How to Verify IAS Crawler

Verification Method:
User-agent string matching (see the detection patterns below).

Learn more in the official documentation.
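Because user-agent strings are trivially spoofed, a common generic approach is a reverse-DNS lookup followed by a forward confirmation. This is only a sketch: the `integralads.com` hostname suffix is an assumption for illustration, not a documented IAS verification hostname, so check the official documentation for the authoritative method.

```python
import socket

# Two-step verification sketch: reverse-DNS the client IP, check the
# hostname, then forward-resolve the hostname and confirm it maps back.
# NOTE: "integralads.com" is an illustrative assumption, not a
# documented IAS verification hostname.
def verify_crawler_ip(ip, expected_suffix="integralads.com"):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)           # reverse (PTR) lookup
        if not hostname.endswith(expected_suffix):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward confirmation
        return ip in forward_ips
    except OSError:
        return False
```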

Detection Patterns

Multiple ways to detect IAS Crawler in your application:

Basic Pattern

/IAS Crawler/i

Strict Pattern

/^IAS Crawler\/1\.0 \(\+https:\/\/integralads\.com\/ias-crawler\/\)$/

Flexible Pattern

/IAS Crawler[\s\/]?[\d.]*/i

Vendor Match

The vendor name itself does not appear in the user-agent string, so match on the vendor URL instead:

/integralads\.com\/ias-crawler/i
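A quick way to sanity-check these patterns is to run them against the published user-agent string; a small harness:

```python
import re

# Run the detection patterns above against the published UA string.
ua = "IAS Crawler/1.0 (+https://integralads.com/ias-crawler/)"

patterns = {
    "basic":    re.compile(r"IAS Crawler", re.I),
    "strict":   re.compile(r"^IAS Crawler/1\.0 \(\+https://integralads\.com/ias-crawler/\)$"),
    "flexible": re.compile(r"IAS Crawler[\s/]?[\d.]*", re.I),
}

for name, pattern in patterns.items():
    print(f"{name}: {bool(pattern.search(ua))}")  # all three print True
```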

Implementation Examples

// PHP Detection for IAS Crawler
function detect_ias_crawler() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/IAS Crawler/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('IAS Crawler detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for IAS Crawler
import re
from flask import request, make_response

def detect_ias_crawler():
    user_agent = request.headers.get('User-Agent', '')
    pattern = r'IAS Crawler'
    if re.search(pattern, user_agent, re.IGNORECASE):
        # Create response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

# Django Middleware
class IASCrawlerMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'IAS Crawler', user_agent, re.IGNORECASE))
// JavaScript/Node.js Detection for IAS Crawler
const express = require('express');
const app = express();

// Middleware to detect IAS Crawler
function detectIASCrawler(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /IAS Crawler/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('IAS Crawler detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'IAS Crawler';
    }
    next();
}

app.use(detectIASCrawler);
# Apache .htaccess rules for IAS Crawler
# (the pattern contains a space, so it must be quoted)

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "IAS Crawler" [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} "IAS Crawler" [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "IAS Crawler" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /IAS Crawler/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for IAS Crawler

# Map the user agent to a variable (quotes needed because of the space)
map $http_user_agent $is_ias_crawler {
    default          0;
    "~*IAS Crawler"  1;
}

server {
    # Block the bot completely
    if ($is_ias_crawler) {
        return 403;
    }

    # Or serve cached content (try_files is not allowed inside "if",
    # so switch the document root instead)
    location / {
        root /var/www/html;
        if ($is_ias_crawler) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_ias_crawler) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
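Whichever server configuration you choose, you can test it by sending a request with a spoofed IAS Crawler user agent; a sketch using Python's standard library (`https://example.com/` is a placeholder for your own site):

```python
from urllib.request import Request, urlopen

# Spoof the IAS Crawler user agent and inspect the server's response
# (e.g. expect a 403 if you applied the blocking rules above).
req = Request(
    "https://example.com/",  # placeholder: use your own site
    headers={"User-Agent": "IAS Crawler/1.0 (+https://integralads.com/ias-crawler/)"},
)

try:
    with urlopen(req, timeout=10) as resp:
        print(resp.status, resp.headers.get("Cache-Control"))
except OSError as exc:
    # HTTPError/URLError (e.g. a 403 block, or no network) land here
    print("request failed:", exc)
```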

Should You Block This Bot?

Recommendations based on your website type:

Site Type         Recommendation  Reasoning
E-commerce        Optional        Evaluate bandwidth cost vs. the benefit of ad verification
Blog/News         Allow           Lets advertisers verify brand safety on ad-supported pages
SaaS Application  Block           No benefit for application interfaces; preserves resources
Documentation     Selective       Allow for public docs, block for internal docs
Corporate Site    Limit           Allow public pages, block sensitive areas like intranets
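The action column of the table above can be expressed as a small lookup helper; a sketch (the site-type keys are informal labels, not an IAS taxonomy):

```python
# Lookup helper mirroring the recommendation table above.
RECOMMENDATIONS = {
    "e-commerce":       "Optional",
    "blog/news":        "Allow",
    "saas application": "Block",
    "documentation":    "Selective",
    "corporate site":   "Limit",
}

def recommend(site_type):
    # Fall back to "Review" for site types the table does not cover.
    return RECOMMENDATIONS.get(site_type.lower().strip(), "Review")

print(recommend("Blog/News"))  # Allow
```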

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: IAS Crawler
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: IAS Crawler
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: IAS Crawler
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/

Quick Reference

User Agent Match

IAS Crawler

Robots.txt Name

IAS Crawler

Category

Other

Respects robots.txt

Yes