NextCloud Server Crawler User Agent - Nextcloud Bot Details | CL SEO

NextCloud Server Crawler

Vendor: Nextcloud
Active since: 2016
Category: Other
Respects robots.txt: Yes
Tags: #cloud #preview #self-hosted #crawler

What is NextCloud Server Crawler?

NextCloud Server Crawler is used by Nextcloud instances to generate previews and index content shared through Nextcloud servers. Because Nextcloud is a popular open-source, self-hosted cloud platform, installations worldwide use this crawler to fetch external content and build previews when users share links. The crawler helps create rich previews within Nextcloud's interface, improving the experience for file sharing and collaboration. Since each Nextcloud instance may operate its own crawler, requests follow a distributed crawling pattern rather than originating from a single operator.
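
Because every self-hosted instance can run the crawler, requests arrive from many unrelated IP addresses rather than from a single published range. A minimal way to see that distribution is to count distinct client IPs presenting this user agent in your access log. The sketch below assumes a combined-format log at access.log; both the path and the format are assumptions to adjust for your server:

import re
from collections import Counter

UA_PATTERN = re.compile(r'NextCloud Server Crawler', re.IGNORECASE)
IP_PATTERN = re.compile(r'^(\S+)')  # first field of a combined-format line is the client IP

def crawler_ips(log_path='access.log'):
    """Count requests per client IP that identify as NextCloud Server Crawler."""
    counts = Counter()
    with open(log_path, encoding='utf-8', errors='replace') as handle:
        for line in handle:
            if UA_PATTERN.search(line):
                match = IP_PATTERN.match(line)
                if match:
                    counts[match.group(1)] += 1
    return counts

if __name__ == '__main__':
    for ip, hits in crawler_ips().most_common(10):
        print(f'{ip}: {hits} requests')

A long tail of IPs with only a handful of requests each is consistent with the distributed pattern described above.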

User Agent String

NextCloud Server Crawler

How to Control NextCloud Server Crawler

Block Completely

To prevent NextCloud Server Crawler from accessing your entire website, add this to your robots.txt file:

# Block NextCloud Server Crawler
User-agent: NextCloud Server Crawler
Disallow: /

Block Specific Directories

To restrict access to certain parts of your site while allowing others:

User-agent: NextCloud Server Crawler
Disallow: /admin/
Disallow: /private/
Disallow: /wp-admin/
Allow: /public/

Set Crawl Delay

To slow down the crawl rate (note: not all bots respect this directive):

User-agent: NextCloud Server Crawler
Crawl-delay: 10

How to Verify NextCloud Server Crawler

Verification Method:
Varies by Nextcloud instance. Because every self-hosted installation runs its own crawler, there is no single published IP range or reverse-DNS hostname to check the user agent against.

Learn more in the official documentation.
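
Since there is no official IP list to check against, a practical fallback is to record the requesting IP whenever this user agent appears and review it, for example with a reverse-DNS lookup. This is a heuristic sketch only, not an official verification procedure, and the function names are illustrative:

import socket

def reverse_dns(ip):
    """Best-effort PTR lookup; returns None when nothing resolves."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

def review_claimed_crawler(ip, user_agent):
    """Flag requests claiming to be NextCloud Server Crawler for manual review."""
    if 'nextcloud server crawler' in user_agent.lower():
        hostname = reverse_dns(ip) or 'no PTR record'
        print(f'Claimed NextCloud Server Crawler from {ip} ({hostname})')
        return True
    return False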

Detection Patterns

Multiple ways to detect NextCloud Server Crawler in your application:

Basic Pattern

/NextCloud Server Crawler/i

Strict Pattern

/^NextCloud Server Crawler$/

Flexible Pattern

/NextCloud Server Crawler[\s\/]?[\d\.]*?/i

Vendor Match

/Nextcloud/i
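
Before wiring any of these patterns into production code, it is worth a quick sanity check of what each one actually matches. The sample user-agent strings below are made up for illustration, not captured traffic:

import re

PATTERNS = {
    'basic': re.compile(r'NextCloud Server Crawler', re.IGNORECASE),
    'strict': re.compile(r'^NextCloud Server Crawler$'),
    'flexible': re.compile(r'NextCloud Server Crawler[\s\/]?[\d\.]*?', re.IGNORECASE),
    'vendor': re.compile(r'Nextcloud', re.IGNORECASE),
}

SAMPLES = [
    'NextCloud Server Crawler',                             # exact match
    'Mozilla/5.0 (compatible; NextCloud Server Crawler)',   # embedded in a longer UA
    'Mozilla/5.0 (X11; Linux x86_64)',                      # ordinary browser, should not match
]

for ua in SAMPLES:
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(ua)]
    print(f'{ua!r} -> {hits or ["no match"]}')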

Implementation Examples

// PHP Detection for NextCloud Server Crawler
function detect_nextcloud_server_crawler() {
    $user_agent = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $pattern = '/NextCloud Server Crawler/i';

    if (preg_match($pattern, $user_agent)) {
        // Log the detection
        error_log('NextCloud Server Crawler detected from IP: ' . $_SERVER['REMOTE_ADDR']);

        // Set cache headers
        header('Cache-Control: public, max-age=3600');
        header('X-Robots-Tag: noarchive');

        // Optional: Serve cached version
        if (file_exists('cache/' . md5($_SERVER['REQUEST_URI']) . '.html')) {
            readfile('cache/' . md5($_SERVER['REQUEST_URI']) . '.html');
            exit;
        }

        return true;
    }

    return false;
}
# Python/Flask Detection for NextCloud Server Crawler
import re

from flask import Flask, request

app = Flask(__name__)
BOT_PATTERN = re.compile(r'NextCloud Server Crawler', re.IGNORECASE)


def detect_nextcloud_server_crawler():
    user_agent = request.headers.get('User-Agent', '')
    return bool(BOT_PATTERN.search(user_agent))


@app.after_request
def add_bot_headers(response):
    # Attach caching headers when the crawler is detected
    if detect_nextcloud_server_crawler():
        response.headers['Cache-Control'] = 'public, max-age=3600'
        response.headers['X-Robots-Tag'] = 'noarchive'
    return response


# Django Middleware
class NextCloudServerCrawlerMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if self.detect_bot(request):
            # Handle bot traffic here (e.g. serve a cached page)
            pass
        return self.get_response(request)

    def detect_bot(self, request):
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        return bool(BOT_PATTERN.search(user_agent))
// JavaScript/Node.js Detection for NextCloud Server Crawler
const express = require('express');
const app = express();

// Middleware to detect NextCloud Server Crawler
function detectNextCloudServerCrawler(req, res, next) {
    const userAgent = req.headers['user-agent'] || '';
    const pattern = /NextCloud Server Crawler/i;

    if (pattern.test(userAgent)) {
        // Log bot detection
        console.log('NextCloud Server Crawler detected from IP:', req.ip);

        // Set cache headers
        res.set({
            'Cache-Control': 'public, max-age=3600',
            'X-Robots-Tag': 'noarchive'
        });

        // Mark request as bot
        req.isBot = true;
        req.botName = 'NextCloud Server Crawler';
    }

    next();
}

app.use(detectNextCloudServerCrawler);
# Apache .htaccess rules for NextCloud Server Crawler

# Block completely (patterns containing spaces must be quoted)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "NextCloud Server Crawler" [NC]
RewriteRule .* - [F,L]

# Or redirect to a static version
RewriteCond %{HTTP_USER_AGENT} "NextCloud Server Crawler" [NC]
RewriteCond %{REQUEST_URI} !^/static/
RewriteRule ^(.*)$ /static/$1 [L]

# Or set environment variable for PHP
SetEnvIfNoCase User-Agent "NextCloud Server Crawler" is_bot=1

# Add cache headers for this bot
<If "%{HTTP_USER_AGENT} =~ /NextCloud Server Crawler/i">
    Header set Cache-Control "public, max-age=3600"
    Header set X-Robots-Tag "noarchive"
</If>
# Nginx configuration for NextCloud Server Crawler

# Map user agent to variable (quote the regex because it contains spaces)
map $http_user_agent $is_nextcloud_server_crawler {
    default 0;
    "~*NextCloud Server Crawler" 1;
}

server {
    # Block the bot completely
    if ($is_nextcloud_server_crawler) {
        return 403;
    }

    # Or serve cached content (remove the block above first);
    # the root is switched inside "if", try_files stays outside it
    location / {
        if ($is_nextcloud_server_crawler) {
            root /var/www/cached;
        }
        try_files $uri $uri.html $uri/index.html @backend;
    }

    # Add headers for bot requests
    location @backend {
        if ($is_nextcloud_server_crawler) {
            add_header Cache-Control "public, max-age=3600";
            add_header X-Robots-Tag "noarchive";
        }
        proxy_pass http://backend;
    }
}

Should You Block This Bot?

Recommendations based on your website type:

Site Type | Recommendation | Reasoning
E-commerce | Optional | Evaluate based on bandwidth usage vs. benefits
Blog/News | Allow | Increases content reach and discoverability
SaaS Application | Block | No benefit for application interfaces; preserve resources
Documentation | Selective | Allow for public docs, block for internal docs
Corporate Site | Limit | Allow for public pages, block sensitive areas like intranets

Advanced robots.txt Configurations

E-commerce Site Configuration

User-agent: NextCloud Server Crawler
Crawl-delay: 5
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /api/
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /*&page=
Allow: /products/
Allow: /categories/
Sitemap: https://example.com/sitemap.xml

Publishing/Blog Configuration

User-agent: NextCloud Server Crawler
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /drafts/
Disallow: /preview/
Disallow: /*?replytocom=
Allow: /

SaaS/Application Configuration

User-agent: NextCloud Server Crawler
Disallow: /app/
Disallow: /api/
Disallow: /dashboard/
Disallow: /settings/
Allow: /
Allow: /pricing/
Allow: /features/
Allow: /docs/

Quick Reference

User Agent Match: NextCloud Server Crawler
Robots.txt Name: NextCloud Server Crawler
Category: Other
Respects robots.txt: Yes