What is Tiny Tiny RSS?
Tiny Tiny RSS (tt-rss) is a popular self-hosted, open-source RSS/Atom feed reader and aggregator written in PHP. It provides a full-featured web-based interface for managing and reading feeds, with support for feed categories, article scoring, plugins, and mobile apps. When users run their own instances, the built-in updater fetches subscribed feeds on their behalf, and it respects robots.txt directives.
User Agent String
Tiny Tiny RSS/24.03 (http://tt-rss.org/)
How to Control Tiny Tiny RSS

Block Completely

To prevent Tiny Tiny RSS from accessing your entire website, add this to your robots.txt file:
User-agent: Tiny Tiny RSS
Disallow: /
Block Specific Directories

To restrict access to certain parts of your site while allowing others:
User-agent: Tiny Tiny RSS
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/
Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):
User-agent: Tiny Tiny RSS
Crawl-delay: 10
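Before deploying, the directives above can be checked with Python's standard urllib.robotparser; this sketch combines the blocking, directory, and crawl-delay rules into one file (the paths and sample URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt combining the directives shown above
robots_txt = """\
User-agent: Tiny Tiny RSS
Crawl-delay: 10
Disallow: /admin/
Allow: /public/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

ua = "Tiny Tiny RSS"
print(parser.can_fetch(ua, "https://example.com/admin/secret"))    # False
print(parser.can_fetch(ua, "https://example.com/public/feed.xml")) # True
print(parser.crawl_delay(ua))                                      # 10
```

Note that robotparser matches the user-agent token case-insensitively against the part of the bot's user agent before the first slash, so "Tiny Tiny RSS/24.03 (...)" matches the group above.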
How to Verify Tiny Tiny RSS
Verification Method:
Check the user agent string for the Tiny Tiny RSS identifier. Because tt-rss is self-hosted, requests originate from many unrelated servers, so there is no official IP range or reverse-DNS check; the user agent string is the only signal.
Learn more in the official documentation.
Detection Patterns

Multiple ways to detect Tiny Tiny RSS in your application:
Basic Pattern
/Tiny Tiny RSS/i
Strict Pattern
/^Tiny Tiny RSS\/24\.03 \(http:\/\/tt-rss\.org\/\)$/
Flexible Pattern
/Tiny Tiny RSS[\s\/]?[\d.]*/i
Vendor Match
/Tiny Tiny RSS.*tt-rss\.org/i

Implementation Examples
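Before wiring any of the patterns into an implementation, they can be sanity-checked in Python against the published user-agent string (a sketch; the strict pattern is rewritten here with the regex delimiters escaped for Python, and it assumes the version stays at 24.03):

```python
import re

UA = "Tiny Tiny RSS/24.03 (http://tt-rss.org/)"

basic = re.compile(r"Tiny Tiny RSS", re.IGNORECASE)
strict = re.compile(r"^Tiny Tiny RSS/24\.03 \(http://tt-rss\.org/\)$")
flexible = re.compile(r"Tiny Tiny RSS[\s/]?[\d.]*", re.IGNORECASE)

print(basic.search(UA) is not None)   # True
print(strict.match(UA) is not None)   # True
print(flexible.search(UA).group())    # Tiny Tiny RSS/24.03
```

The basic pattern is usually sufficient; the strict pattern will break on every version bump.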
PHP
Python
JavaScript
.htaccess
Nginx
function detect_tiny_tiny_rss() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/Tiny Tiny RSS/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('Tiny Tiny RSS detected from IP: ' . ($_SERVER['REMOTE_ADDR'] ?? 'unknown'));

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: serve a cached version if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }

        return true;
    }

    return false;
}
import re
from flask import request, make_response

BOT_PATTERN = re.compile(r'Tiny Tiny RSS', re.IGNORECASE)

def detect_tiny_tiny_rss():
    user_agent = request.headers.get('User-Agent', '')
    if BOT_PATTERN.search(user_agent):
        # Create a response with caching headers
        response = make_response()
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
        return True
    return False

class TinyTinyRSSMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (e.g., serve cached content)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        return BOT_PATTERN.search(request.headers.get('User-Agent', '')) is not None
const express = require('express');
const app = express();

// Middleware to detect Tiny Tiny RSS
function detectTinyTinyRSS(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /Tiny Tiny RSS/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('Tiny Tiny RSS detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'Tiny Tiny RSS';
    }

    next();
}

app.use(detectTinyTinyRSS);
# Option 1: block Tiny Tiny RSS entirely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "Tiny Tiny RSS" [NC]
RewriteRule .* - [F,L]

# Option 2: serve a static cached copy instead (remove Option 1 if using this)
RewriteCond %{HTTP_USER_AGENT} "Tiny Tiny RSS" [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Option 3: allow the bot but add caching headers (Apache 2.4+)
SetEnvIfNoCase User-Agent "Tiny Tiny RSS" is_bot=1
<If "%{HTTP_USER_AGENT} =~ /Tiny Tiny RSS/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
map $http_user_agent $is_tiny_tiny_rss {
    default 0;
    "~*Tiny Tiny RSS" 1;
}

server {
    # Option 1: block the bot entirely (remove this if using Option 2 below)
    if ($is_tiny_tiny_rss) {
        return 403;
    }

    # Option 2: serve pre-rendered files to the bot from a cache directory.
    # try_files is not allowed inside "if", so switch the root instead.
    location / {
        if ($is_tiny_tiny_rss) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    location @backend {
        if ($is_tiny_tiny_rss) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
Should You Block This Bot?

Recommendations based on your website type:

Site Type          Recommendation  Reasoning
E-commerce         Optional        Evaluate based on bandwidth usage vs. benefits
Blog/News          Allow           Increases content reach and discoverability
SaaS Application   Block           No benefit for application interfaces; preserve resources
Documentation      Selective       Allow for public docs, block for internal docs
Corporate Site     Limit           Allow for public pages, block sensitive areas like intranets
Advanced robots.txt Configurations

E-commerce Site Configuration
User-agent: Tiny Tiny RSS
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml
Publishing/Blog Configuration
User-agent: Tiny Tiny RSS
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /
SaaS/Application Configuration
User-agent: Tiny Tiny RSS
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
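Rule precedence varies by crawler: some (like Googlebot) pick the most specific matching rule, while Python's standard urllib.robotparser applies rules in file order, first match wins. Under the first-match interpretation the SaaS configuration above behaves as intended, which this sketch checks (the URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

saas_robots = """\
User-agent: Tiny Tiny RSS
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/
"""

parser = RobotFileParser()
parser.parse(saas_robots.splitlines())

ua = "Tiny Tiny RSS"
print(parser.can_fetch(ua, "https://example.com/app/login"))  # False
print(parser.can_fetch(ua, "https://example.com/pricing/"))   # True
print(parser.can_fetch(ua, "https://example.com/blog/post"))  # True, via Allow: /
```

Because the Disallow lines precede the catch-all Allow: /, the application paths stay blocked under either interpretation.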
Quick Reference

User Agent Match: Tiny Tiny RSS
Robots.txt Name: Tiny Tiny RSS
Category: other
Respects robots.txt: Yes
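To gauge how much traffic the bot actually generates before deciding whether to block it, access logs can be scanned for the user-agent match. This is a sketch assuming the common combined log format; the sample lines are illustrative:

```python
import re
from collections import Counter

BOT_RE = re.compile(r"Tiny Tiny RSS", re.IGNORECASE)

# Illustrative combined-format access log lines
sample_log = [
    '203.0.113.5 - - [01/Mar/2024:10:00:01 +0000] "GET /feed.xml HTTP/1.1" 200 5120 "-" "Tiny Tiny RSS/24.03 (http://tt-rss.org/)"',
    '198.51.100.7 - - [01/Mar/2024:10:00:05 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
    '203.0.113.5 - - [01/Mar/2024:11:30:00 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Tiny Tiny RSS/24.03 (http://tt-rss.org/)"',
]

# Count bot requests per client IP (first field in the combined format)
hits = Counter()
for line in sample_log:
    if BOT_RE.search(line):
        hits[line.split()[0]] += 1

print(hits.most_common())  # [('203.0.113.5', 2)]
```

A handful of conditional (304) fetches per instance per hour is typical feed-reader behavior; sustained high-volume fetching from one IP may justify a Crawl-delay or a block.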