NextCloud Server Crawler is used by Nextcloud instances to generate link previews and index content shared through Nextcloud servers. Nextcloud is a popular open-source, self-hosted cloud platform, and installations worldwide use this crawler to fetch external content for preview generation when users share links. The crawler helps create rich previews within Nextcloud's interface, enhancing the user experience for file sharing and collaboration. Because each Nextcloud instance may operate its own crawler, requests arrive in a distributed pattern rather than from a single operator.
User Agent String
NextCloud Server Crawler
How to Control NextCloud Server Crawler
Block Completely
To prevent NextCloud Server Crawler from accessing your entire website, add this to your robots.txt file:
# Block NextCloud Server Crawler
User-agent: NextCloud Server Crawler
Disallow: /
Block Specific Directories
To restrict access to certain parts of your site while allowing others:
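For example (the paths below are placeholders; substitute your own directories):
# Keep NextCloud Server Crawler out of selected directories only
User-agent: NextCloud Server Crawler
Disallow: /private/
Disallow: /admin/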
Detecting NextCloud Server Crawler
There are multiple ways to detect NextCloud Server Crawler in your application:
Basic Pattern
/NextCloud Server Crawler/i
Strict Pattern
/^NextCloud Server Crawler$/
Flexible Pattern
/NextCloud Server Crawler[\s\/]?[\d.]*/i
Vendor Match
/Nextcloud/i
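If you want to sanity-check these patterns before deploying them, a short Python sketch behaves as follows (the sample user agent is illustrative, not an observed string):
# Sanity check for the detection patterns above (Python re syntax)
import re

sample_ua = 'NextCloud Server Crawler'  # illustrative sample

patterns = {
    'basic':    re.compile(r'NextCloud Server Crawler', re.IGNORECASE),
    'strict':   re.compile(r'^NextCloud Server Crawler$'),
    'flexible': re.compile(r'NextCloud Server Crawler[\s/]?[\d.]*', re.IGNORECASE),
    'vendor':   re.compile(r'Nextcloud', re.IGNORECASE),
}

for name, pattern in patterns.items():
    # All four should report True for the sample string above
    print(f'{name}: {bool(pattern.search(sample_ua))}')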
Implementation Examples
// PHP Detection for NextCloud Server Crawler
function detect_nextcloud_server_crawler() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/NextCloud Server Crawler/i';
    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('NextCloud Server Crawler detected from IP: ' . $_SERVER['REMOTE_ADDR']);
        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');
        // Optional: serve a cached copy of the page if one exists
        $cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        if (file_exists($cache_file)) {
            readfile($cache_file);
            exit;
        }
        return true;
    }
    return false;
}
# Python/Flask Detection for NextCloud Server Crawler
import re
from flask import request

BOT_PATTERN = re.compile(r'NextCloud Server Crawler', re.IGNORECASE)

def detect_nextcloud_server_crawler():
    # Inspect the User-Agent header of the current request
    user_agent = request.headers.get('User-Agent', '')
    return bool(BOT_PATTERN.search(user_agent))
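The caching headers can then be attached in one place with Flask's after_request hook; this sketch assumes an existing Flask application object named app:
# Attach bot cache headers globally (assumes a Flask app object named "app")
@app.after_request
def add_bot_headers(response):
    if detect_nextcloud_server_crawler():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response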
# Django Middleware
import re

class NextCloudServerCrawlerMiddleware:
    BOT_PATTERN = re.compile(r'NextCloud Server Crawler', re.IGNORECASE)

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic (log, set headers, serve cached content, etc.)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(self.BOT_PATTERN.search(user_agent))
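To activate the middleware, register it in settings.py; the dotted path below is an example and should match wherever the class actually lives:
# settings.py ("myproject.middleware" is a placeholder module path)
MIDDLEWARE = [
    # ... Django's default middleware ...
    'myproject.middleware.NextCloudServerCrawlerMiddleware',
]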
// JavaScript/Node.js Detection for NextCloud Server Crawler
const express = require('express');
const app = express();

// Middleware to detect NextCloud Server Crawler
function detectNextCloudServerCrawler(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /NextCloud Server Crawler/i;
    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('NextCloud Server Crawler detected from IP:', req.ip);
        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });
        // Mark request as bot
        req.isBot = true;
        req.botName = 'NextCloud Server Crawler';
    }
    next();
}

app.use(detectNextCloudServerCrawler);
# Apache .htaccess rules for NextCloud Server Crawler
# Block completely
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "NextCloud Server Crawler" [NC]
RewriteRule .* - [F,L]
# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} "NextCloud Server Crawler" [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]
# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "NextCloud Server Crawler" is_bot=1
# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /NextCloud Server Crawler/i">
Header set Cache-Control "public, max-age=3600"
Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for NextCloud Server Crawler
# Map user agent to variable
map $http_user_agent $is_nextcloud_server_crawler {
    default 0;
    "~*NextCloud Server Crawler" 1;
}
server {
    # Block the bot completely
    if ($is_nextcloud_server_crawler) {
        return 403;
    }
    # Or serve cached content (try_files cannot be used inside "if",
    # so only the document root is switched for bot requests)
    location / {
        if ($is_nextcloud_server_crawler) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }
    # Add headers for bot requests
    location @backend {
        if ($is_nextcloud_server_crawler) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}
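Once either the Apache or Nginx rules are in place, a quick Python check can confirm they behave as intended; https://example.com is a placeholder for your own site, and the snippet requires the third-party requests package:
# Compare how the server treats the bot UA vs. a regular browser UA
import requests

for ua in ('NextCloud Server Crawler', 'Mozilla/5.0'):
    r = requests.get('https://example.com/', headers={'User-Agent': ua})
    # Expect 403 (if blocking) or cache headers (if caching) for the bot UA
    print(ua, '->', r.status_code, r.headers.get('Cache-Control'))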
Should You Block This Bot?
Recommendations based on your website type:
Site Type        | Recommendation | Reasoning
E-commerce       | Optional       | Evaluate based on bandwidth usage vs. benefits
Blog/News        | Allow          | Increases content reach and discoverability
SaaS Application | Block          | No benefit for application interfaces; preserve resources
Documentation    | Selective      | Allow for public docs, block for internal docs
Corporate Site   | Limit          | Allow for public pages, block sensitive areas like intranets