PaperLiBot
Paper.li • Since 2010
What is PaperLiBot?
PaperLiBot is the web crawler for Paper.li, a content curation service that automatically creates online newspapers from social media and web content. The bot crawls URLs shared on social media to extract content for curation into themed publications. It respects robots.txt directives.
User Agent String
Mozilla/5.0 (compatible; PaperLiBot/2.1; https://support.paper.li/)
How to Control PaperLiBot
Block Completely
To prevent PaperLiBot from accessing your entire website, add this to your robots.txt file:
# Block PaperLiBot
User-agent: PaperLiBot
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
User-agent: PaperLiBot
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
Set Crawl Delay
To slow the crawl rate, set a Crawl-delay (note: Crawl-delay is a non-standard directive, and not all crawlers honor it):
User-agent: PaperLiBot
Crawl-delay: 10
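Before deploying any of these rules, you can sanity-check them with Python's standard-library robotparser; a minimal sketch (example.com is a placeholder):
# Minimal sketch: evaluate robots.txt rules against PaperLiBot's token
import urllib.robotparser

rules = """
User-agent: PaperLiBot
Crawl-delay: 10
Disallow: /private/
Allow: /public/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())
print(rp.can_fetch('PaperLiBot', 'https://example.com/private/page'))  # False
print(rp.can_fetch('PaperLiBot', 'https://example.com/public/page'))   # True
print(rp.crawl_delay('PaperLiBot'))  # 10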
How to Verify PaperLiBot
Verification Method:
Check the user agent string for the PaperLiBot identifier.
Learn more in the official documentation.
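A simple way to confirm the bot is actually visiting is to scan your access logs for the identifier. A minimal Python sketch, assuming a hypothetical nginx log path:
# Minimal sketch: count PaperLiBot hits in an access log
# (/var/log/nginx/access.log is a hypothetical path; adjust for your server)
import re

hits = 0
with open('/var/log/nginx/access.log', encoding='utf-8', errors='replace') as log:
    for line in log:
        if re.search(r'PaperLiBot', line, re.IGNORECASE):
            hits += 1
print(f'PaperLiBot requests: {hits}')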
Detection Patterns
Multiple ways to detect PaperLiBot in your application:
Basic Pattern
/PaperLiBot/i
Strict Pattern
/^Mozilla\/5\.0 \(compatible; PaperLiBot\/2\.1; https:\/\/support\.paper\.li\/\)$/
Flexible Pattern
/PaperLiBot[\s\/]?[\d.]*/i
Vendor Match
/PaperLiBot.*paper\.li/i
Implementation Examples
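As a quick sanity check, the patterns above can be tested against the documented user agent string; a minimal Python sketch:
# Minimal sketch: test the detection patterns against the documented UA string
import re

UA = 'Mozilla/5.0 (compatible; PaperLiBot/2.1; https://support.paper.li/)'
patterns = {
    'basic': re.compile(r'PaperLiBot', re.IGNORECASE),
    'strict': re.compile(r'^Mozilla/5\.0 \(compatible; PaperLiBot/2\.1; https://support\.paper\.li/\)$'),
    'flexible': re.compile(r'PaperLiBot[\s/]?[\d.]*', re.IGNORECASE),
    'vendor': re.compile(r'PaperLiBot.*paper\.li', re.IGNORECASE),
}
for name, pattern in patterns.items():
    print(name, bool(pattern.search(UA)))  # all four print True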
// PHP Detection for PaperLiBot
function detect_paperlibot() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/PaperLiBot/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('PaperLiBot detected from IP: ' . ($_SERVER['REMOTE_ADDR'] ?? 'unknown'));

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for PaperLiBot
import re

from flask import request

def detect_paperlibot():
    # Return True if the current request comes from PaperLiBot
    user_agent = request.headers.get('User-Agent', '')
    return bool(re.search(r'PaperLiBot', user_agent, re.IGNORECASE))

def add_bot_headers(response):
    # Attach caching headers when the requester is PaperLiBot;
    # register with app.after_request so the headers reach the client
    if detect_paperlibot():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response

# Django Middleware
class PaperLiBotMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Tag the request so views can handle bot traffic
            request.is_bot = True
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(re.search(r'PaperLiBot', user_agent, re.IGNORECASE))
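To activate the middleware, register it in your project's settings; the module path below is hypothetical and depends on where the class lives:
# settings.py (hypothetical module path; adjust to your project layout)
MIDDLEWARE = [
    # ... Django's default middleware ...
    'myapp.middleware.PaperLiBotMiddleware',
]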
// JavaScript/Node.js Detection for PaperLiBot
const express = require('express');
const app = express();

// Middleware to detect PaperLiBot
function detectPaperLiBot(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  const pattern = /PaperLiBot/i;
  if (pattern.test(userAgent)) {
    // Log bot detection
    console.log('PaperLiBot detected from IP:', req.ip);
    // Set cache headers
    res.set({
      'Cache-Control': 'public, max-age=3600',
      'X-Robots-Tag': 'noarchive'
    });
    // Mark request as bot
    req.isBot = true;
    req.botName = 'PaperLiBot';
  }
  next();
}

app.use(detectPaperLiBot);
# Apache .htaccess rules for PaperLiBot

# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} PaperLiBot [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} PaperLiBot [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set an environment variable for PHP
SetEnvIfNoCase User-Agent "PaperLiBot" is_bot=1

# Add cache headers for this bot (the <If> directive requires Apache 2.4+)
<If "%{HTTP_USER_AGENT} =~ /PaperLiBot/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for PaperLiBot

# Map the user agent to a variable
map $http_user_agent $is_paperlibot {
    default 0;
    ~*PaperLiBot 1;
}

server {
    # Option 1: block the bot completely (use this or Option 2, not both)
    if ($is_paperlibot) {
        return 403;
    }

    # Option 2: serve cached content instead.
    # try_files is not allowed inside "if", so switch the root instead.
    location / {
        root /var/www/html;
        if ($is_paperlibot) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_paperlibot) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
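To confirm either server configuration is live, send a request with PaperLiBot's user agent string and check the status code; a minimal Python sketch (example.com is a placeholder):
# Minimal sketch: request a page as PaperLiBot and inspect the response
import urllib.error
import urllib.request

UA = 'Mozilla/5.0 (compatible; PaperLiBot/2.1; https://support.paper.li/)'
req = urllib.request.Request('https://example.com/', headers={'User-Agent': UA})
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get('X-Robots-Tag'))
except urllib.error.HTTPError as err:
    print(err.code)  # 403 if the block rule is active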
Should You Block This Bot?
Recommendations based on your website type:
| Site Type | Recommendation | Reasoning |
|---|---|---|
| E-commerce | Optional | Evaluate based on bandwidth usage vs. benefits |
| Blog/News | Allow | Increases content reach and discoverability |
| SaaS Application | Block | No benefit for application interfaces; preserve resources |
| Documentation | Selective | Allow for public docs, block for internal docs |
| Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets |
Advanced robots.txt Configurations
E-commerce Site Configuration
User-agent: PaperLiBot
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml
Publishing/Blog Configuration
User-agent: PaperLiBot
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /
SaaS/Application Configuration
User-agent: PaperLiBot
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
Quick Reference
User Agent Match
PaperLiBot
Robots.txt Name
PaperLiBot
Category
other
Respects robots.txt
Yes
