
Screaming Frog SEO Spider CLI

Modern technical SEO is no longer just about crawling HTML and counting status codes. Increasingly, you are enriching crawls with external data sources, joining Search Console, GA4, PageSpeed, link metrics, custom extractions, embeddings generation, and now LLM and AI provider signals into a single, repeatable dataset.

The Screaming Frog CLI lets you wire crawls into n8n, PowerShell, Task Scheduler, CI pipelines, or downstream Python and SQL workflows without the main SF UI ever being opened.

I'm still in the process of learning how to control the Screaming Frog CLI, so take the commands below as guidance. This is as much for my own reference as it is for sharing.


Everything below assumes you are using a Windows PC.


1. Navigate to the install directory​


The most common location on Windows is:

Code:
cd "C:\Program Files (x86)\Screaming Frog SEO Spider"


2. The single most important command​

Before anything else, understand this.

Code:
.\ScreamingFrogSEOSpiderCli.exe --help



This is the source of truth for your setup. When Screaming Frog versions change and new features are added, this is the first place to check.



Running --help tells you what is available, what it is called, and what will actually execute. That is especially important once you start wiring Screaming Frog into automation, scheduled jobs, or n8n pipelines where there is no UI to sanity check things for you.


3. Discover valid export and report names​

When you move Screaming Frog into automation, these CLI commands become useful. Export tabs, bulk exports, and reports all rely on exact string matches. If a name is wrong, outdated, or copied from a different version, the crawl will still complete but the export will not run. There is no warning and no partial failure to alert you.

These commands let you query the installed version directly and see what is actually available. They are the quickest way to confirm valid names before you wire anything into a script, scheduler, or n8n workflow.


Export tabs​

Code:
ScreamingFrogSEOSpiderCli.exe --help export-tabs


Bulk exports​

Code:
ScreamingFrogSEOSpiderCli.exe --help bulk-export


Reports​

Code:
ScreamingFrogSEOSpiderCli.exe --help save-report


Custom crawl overview exports​

Code:
ScreamingFrogSEOSpiderCli.exe --help export-custom-summary


Practical advice:
Copy the output of these into versioned documentation. When Screaming Frog updates, diff it. That is how you can keep your pipelines stable.
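If you want to automate that, here is a minimal PowerShell sketch that captures each help listing into a dated folder so you can diff it after an upgrade. The folder paths are just examples, and it assumes the CLI writes its help output to stdout on your setup.

Code:
# Capture the current export/report names into dated reference files (example paths)
$dir = "C:\Crawls\cli-reference\$(Get-Date -Format 'yyyy-MM-dd')"
New-Item -ItemType Directory -Force -Path $dir | Out-Null
cd "C:\Program Files (x86)\Screaming Frog SEO Spider"
.\ScreamingFrogSEOSpiderCli.exe --help export-tabs           > "$dir\export-tabs.txt"
.\ScreamingFrogSEOSpiderCli.exe --help bulk-export           > "$dir\bulk-export.txt"
.\ScreamingFrogSEOSpiderCli.exe --help save-report           > "$dir\save-report.txt"
.\ScreamingFrogSEOSpiderCli.exe --help export-custom-summary > "$dir\export-custom-summary.txt"
# After an update, compare against the previous snapshot, e.g.:
# Compare-Object (Get-Content "C:\Crawls\cli-reference\2025-01-01\export-tabs.txt") (Get-Content "$dir\export-tabs.txt")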



4. Basic crawl commands​


Crawl a single site​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com


By default, this opens the main SF UI.


Headless crawl​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless


Headless should be your default when you are comfortable with the CLI.
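Since Task Scheduler was mentioned earlier, here is a rough sketch of registering a daily headless crawl as a scheduled task. The task name, schedule, URL, and output folder are placeholder assumptions; adjust them to your environment.

Code:
# Register a daily headless crawl (may require an elevated PowerShell prompt)
$exe    = "C:\Program Files (x86)\Screaming Frog SEO Spider\ScreamingFrogSEOSpiderCli.exe"
$sfArgs = '--crawl https://www.example.com --headless --output-folder "C:\Crawls\Example" --export-tabs "Internal:All"'
$action  = New-ScheduledTaskAction -Execute $exe -Argument $sfArgs
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "SF Daily Crawl" -Action $action -Trigger $trigger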



5. Output control and file management​


Define output folder​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --output-folder "C:\Crawls\Example"


Timestamped output folders​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --timestamped-output


This is useful for scheduled crawls where historical comparison matters or when piping data downstream.
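For downstream scripts, a small sketch like this picks up the most recent timestamped folder. It assumes everything under the parent folder is a crawl output directory, and that the Internal:All tab exports as internal_all.csv; check the actual filename in your own output folder.

Code:
# Find the newest timestamped crawl folder under the output parent (example path)
$latest = Get-ChildItem "C:\Crawls\Example" -Directory |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
# Peek at the first few rows of the Internal:All export (filename is an assumption)
Import-Csv (Join-Path $latest.FullName "internal_all.csv") | Select-Object -First 5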


Overwrite existing files​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --overwrite


If you do not plan to save every crawl, this will overwrite the previous crawl export.



6. Saving and re-loading crawls​


Save a crawl to disk​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --save-crawl


This produces a .seospider or .dbseospider file depending on your storage settings.


Load an existing crawl file​

Code:
ScreamingFrogSEOSpiderCli.exe --headless --load-crawl "C:\Crawls\example.dbseospider"
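In principle, --load-crawl can be combined with the export flags to re-export from a saved crawl without re-crawling. I have not tested every combination, so treat this as a sketch and confirm against --help on your version.

Code:
ScreamingFrogSEOSpiderCli.exe --headless --load-crawl "C:\Crawls\example.dbseospider" --output-folder "C:\Crawls\Example" --export-format csv --export-tabs "Internal:All"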


7. Configuration and authentication​


Apply a saved crawl configuration​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --config "C:\Configs\default.seospiderconfig"


Loads a preconfigured crawl config. This is how you guarantee parity between manual and automated crawls.


Apply authentication settings​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --auth-config "C:\Configs\auth.seospiderauthconfig"


This is useful for staging, web bot auth, gated content, and internal platforms.



8. Exporting data​


Export specific tabs and filters​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --export-tabs "Internal:All,Response Codes:Client Error (4xx)"


The strings must exactly match the UI labels. One character off and nothing exports.
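One way to guard against that in automation is to check the name against the installed version's own listing before running the crawl. A rough sketch, assuming the tab names appear verbatim in the --help export-tabs output:

Code:
# Fail fast if an export tab name is not recognised by the installed version
$tab = "Response Codes:Client Error (4xx)"
$known = & ".\ScreamingFrogSEOSpiderCli.exe" --help export-tabs
if (-not ($known | Select-String -SimpleMatch $tab)) {
    throw "Export tab '$tab' not found in this Screaming Frog version"
}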

Other useful commands

Code:
--arg-file <path to file>


Code:
--crawl-sitemap <sitemap url>


Code:
--crawl-sitemap-list <sitemap list file>


Code:
--crawl-google-sheet <google drive account> <google sheet url>


Code:
--email-on-complete <email addresses>


Code:
--consolidate-spreadsheets


Code:
--save-logs <path/to/output/dir>

Run bulk exports​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --bulk-export "All Inlinks,Response Codes:Redirection (3xx)"


Bulk exports are where Screaming Frog earns its keep in technical audits.


Save reports​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --save-report "Redirects:Redirect Chains,Canonical Errors"


This saves and outputs individual reports. Use them to validate the underlying data.
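These flags combine, so a single crawl can produce tab exports, bulk exports, and reports in one pass. A minimal example using only flags already shown above:

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --output-folder "C:\Crawls\Example" --timestamped-output --export-tabs "Internal:All,Response Codes:Client Error (4xx)" --bulk-export "All Inlinks" --save-report "Redirects:Redirect Chains"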



9. Export formats​


Set export format​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --export-format csv


Supported values include csv, xls, xlsx, and gsheet. CSV should be your default unless there is a strong reason otherwise. You can convert the output files into JSON or load them into SQL downstream.
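As a rough sketch of that downstream step, PowerShell can convert one of the CSV exports to JSON without any extra tooling. The internal_all.csv filename is an assumption; check what your export actually produces.

Code:
# Convert a crawl export to JSON for downstream tooling (filenames are examples)
Import-Csv "C:\Crawls\Example\internal_all.csv" |
    ConvertTo-Json -Depth 3 |
    Set-Content "C:\Crawls\Example\internal_all.json"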



10. XML sitemaps​


Create an XML sitemap​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --create-sitemap


These respect your crawl configuration. Garbage in, garbage out.



11. API integrations during crawl​


Google Search Console (GSC)​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --use-google-search-console "Google Account" "https://www.example.com/"


Google Analytics 4 (GA4)​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --use-google-analytics-4 "Google Account" "Account" "Property" "Data Stream"


PageSpeed Insights (PSI)​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --use-pagespeed


Link metrics providers​

Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --use-ahrefs


Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --use-majestic


Code:
ScreamingFrogSEOSpiderCli.exe --crawl https://www.example.com --headless --use-mozscape


I will keep this guide updated as I learn more and as Screaming Frog continues to evolve. The CLI surface area is clearly expanding, and new flags do get added, particularly around data sources and integrations. When that happens, I will fold them in and call out where they materially change what is possible.



Right now, I am actively working on using the Screaming Frog CLI inside n8n workflow pipelines.

Something like this:
Code:
powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "Write-Output 'SF CLI starting'; New-Item -ItemType Directory -Force -Path 'C:\Users\Administrator\Desktop\sf_crawl_data' | Out-Null; cd 'C:\Program Files (x86)\Screaming Frog SEO Spider'; .\ScreamingFrogSEOSpiderCli.exe --crawl https://chrisleverseo.com/ --headless --output-folder 'C:\Users\Administrator\Desktop\sf_crawl_data' --export-format csv --overwrite --export-tabs 'Internal:All'; Write-Output 'SF CLI finished'"
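The same thing is easier to maintain as a small .ps1 file that the n8n Execute Command node calls, rather than one long quoted string. A sketch of the equivalent script, using the same flags as above:

Code:
# sf_crawl.ps1 - same crawl as the one-liner above, just readable
$out = 'C:\Users\Administrator\Desktop\sf_crawl_data'
New-Item -ItemType Directory -Force -Path $out | Out-Null

Set-Location 'C:\Program Files (x86)\Screaming Frog SEO Spider'
.\ScreamingFrogSEOSpiderCli.exe --crawl https://chrisleverseo.com/ --headless `
    --output-folder $out --export-format csv --overwrite --export-tabs 'Internal:All'

Write-Output 'SF CLI finished'

The Execute Command node then only needs: powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\sf_crawl.ps1" (the script path is just an example).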

The focus will eventually be on scraping at scale, enriching crawls with external data and APIs, and generating embeddings on the fly, so crawl output can flow straight into downstream analysis rather than sitting in a spreadsheet waiting for manual intervention.

Used this way, Screaming Frog stops being a crawler you run and starts being a component you orchestrate: part of a broader system that collects, joins, and transforms data in a repeatable way. These workflows turn it into a data join engine, not just a crawler.

I will share more as those pipelines mature.
 