Compare commits

...

43 commits

Author SHA1 Message Date
ai.robots.txt
aa8519ec10 Update from Dark Visitors 2025-12-21 01:07:06 +00:00
dark-visitors
83485effdb Update from Dark Visitors 2025-12-20 00:58:49 +00:00
ai.robots.txt
8b8bf9da5d Update from Dark Visitors 2025-12-06 00:58:16 +00:00
dark-visitors
f1c752ef12 Update from Dark Visitors 2025-12-05 01:00:44 +00:00
Adam Newbold
51afa7113a
Update ai_robots_update.yml with rebase command to fix scheduled run 2025-12-03 20:15:10 -05:00
ai.robots.txt
7598d77e4a Update from Dark Visitors 2025-12-04 01:10:02 +00:00
dark-visitors
45b071b29f Update from Dark Visitors 2025-12-04 01:00:27 +00:00
dark-visitors
f61b3496f7 Update from Dark Visitors 2025-12-03 01:00:32 +00:00
dark-visitors
8363d4fdd4 Update from Dark Visitors 2025-12-02 01:25:24 +00:00
Adam Newbold
2ccd443581
Update ai_robots_update.yml with workflow_dispatch
Adding workflow_dispatch to enable manual triggers of this schedule job (for testing)
2025-12-01 20:24:41 -05:00
Adam Newbold
6d75f3c1c9
Update robots.py to address error on line 57
Attempting to work around an error that prevents parsing the Dark Visitors site
2025-12-01 20:18:29 -05:00
Glyn Normington
56010ef913
Merge pull request #205 from fiskhandlarn/fix/editorconfig
Fix/editorconfig
2025-11-29 10:02:08 +00:00
ai.robots.txt
3fadc88a23 Merge pull request #206 from newbold/main
Adding LAIONDownloader
2025-11-29 10:00:52 +00:00
Glyn Normington
47c077a8ef
Merge pull request #206 from newbold/main
Adding LAIONDownloader
2025-11-29 10:00:42 +00:00
Adam Newbold
f5d7ccb243
Fixed invalid JSON 2025-11-28 14:10:38 -05:00
Adam Newbold
30d719a09a
Adding LAIONDownloader 2025-11-28 14:08:18 -05:00
fiskhandlarn
05bbdebeaa feat: disallow final newline for files generated by python
if any of these files have ending newlines the tests will fail
2025-11-28 10:47:22 +01:00
fiskhandlarn
c6ce9329a1 fix: ensure whitespace as defined in .editorconfig 2025-11-28 10:46:22 +01:00
ai.robots.txt
10d5ae2870 Merge pull request #200 from glyn/deepseek
Clarify that DeepSeekBot does not respect robots.txt
2025-11-27 14:00:50 +00:00
Glyn Normington
4467002298
Merge pull request #200 from glyn/deepseek
Clarify that DeepSeekBot does not respect robots.txt
2025-11-27 14:00:40 +00:00
Glyn Normington
4a159a818f
Merge pull request #199 from glyn/editorconfig
Standardise editor options
2025-11-27 13:54:03 +00:00
Glyn Normington
3d6b33a71a
Merge pull request #202 from glyn/formatting
Tidy README
2025-11-27 12:42:31 +00:00
Glyn Normington
c26c8c0911 Tidy README 2025-11-27 12:41:37 +00:00
Glyn Normington
91959fe791
Merge pull request #201 from Anshita-18H/add-requirements-file
Add requirements.txt with project dependencies
2025-11-27 12:38:28 +00:00
Glyn Normington
b75163e796
Ensure Python3 is used 2025-11-27 12:38:12 +00:00
Glyn Normington
f46754d280
Order deps 2025-11-27 12:37:35 +00:00
Glyn Normington
c3f2fe758e
Whitespace 2025-11-27 12:37:21 +00:00
Glyn Normington
7521c3af50
Fix link 2025-11-27 12:37:07 +00:00
Anshita-18H
4302fd1aca Improve contributing section and fix formatting in README 2025-11-27 17:56:47 +05:30
Anshita-18H
9ca5033927 Deduplicate requirements.txt and add installation instructions to README 2025-11-27 17:44:47 +05:30
Anshita-18H
9cc3dbc05f Add requirements.txt with project dependencies 2025-11-27 17:06:59 +05:30
Glyn Normington
c322d7852d
Merge pull request #130 from maxheadroom/main
Create traefik-manual-setup.md
2025-11-26 12:08:11 +00:00
Glyn Normington
35588b2ddb
Typo. 2025-11-26 12:07:39 +00:00
Glyn Normington
ab24e41106 Clarify that DeepSeekBot does not respect robots.txt
Fixes https://github.com/ai-robots-txt/ai.robots.txt/issues/198
2025-11-26 12:02:42 +00:00
Glyn Normington
b681c0c0d8 Standardise editor options
This was motivated by @fiskhandlarn's comment:
https://github.com/ai-robots-txt/ai.robots.txt/pull/195#issuecomment-3576331322
2025-11-26 11:54:10 +00:00
Glyn Normington
4e7e28335f
Merge pull request #195 from fiskhandlarn/patch-2
feat: allow robots access to `/robots.txt` in nginx
2025-11-26 11:50:15 +00:00
fiskhandlarn
a6cf6b204b test: update test nginx conf 2025-11-25 16:47:04 +01:00
fiskhandlarn
2679fcad34 feat: update nginx generator 2025-11-25 16:46:31 +01:00
fiskhandlarn
ef8eda4fe6 chore: normalize quote style 2025-11-25 16:39:35 +01:00
fiskhandlarn
0b3266b35f feat: allow robots access to /robots.txt in nginx 2025-11-25 16:22:14 +01:00
Falko Zurell
6c3ae6eb20
Merge branch 'ai-robots-txt:main' into main 2025-05-16 13:52:02 +02:00
Falko Zurell
684d11d889 moved traefik manual setup into docs
moved the traefik manual setup into the docs directory and linked to it from the README.md
2025-05-16 13:50:58 +02:00
Falko Zurell
9b5f75e2f3
Create traefik-manual-setup.md 2025-05-16 13:27:31 +02:00
14 changed files with 340 additions and 54 deletions

.editorconfig (new file, +9 lines)

@@ -0,0 +1,9 @@
root = true

[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[{Caddyfile,haproxy-block-ai-bots.txt,nginx-block-ai-bots.conf}]
insert_final_newline = false
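The section override exists because the generated blocklists are compared byte-for-byte by the tests, and those files are emitted without a trailing newline. A hypothetical helper (not the repository's actual test code) illustrating the rule:

```python
def final_newline_ok(filename: str, text: str) -> bool:
    # Generated blocklists must NOT end with a newline, per the
    # .editorconfig override above; every other file must end with one.
    generated = {
        "Caddyfile",
        "haproxy-block-ai-bots.txt",
        "nginx-block-ai-bots.conf",
    }
    if filename in generated:
        return not text.endswith("\n")
    return text.endswith("\n")
```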

ai_robots_update.yml

@@ -2,6 +2,7 @@ name: Updates for AI robots files
 on:
   schedule:
     - cron: "0 0 * * *"
+  workflow_dispatch:
 jobs:
   dark-visitors:
@@ -10,7 +11,7 @@ jobs:
     steps:
       - uses: actions/checkout@v4
         with:
-          fetch-depth: 2
+          fetch-depth: 0
       - run: |
          pip install beautifulsoup4 requests
          git config --global user.name "dark-visitors"
@@ -22,6 +23,7 @@ jobs:
          git add -A
          if ! git diff --cached --quiet; then
            git commit -m "Update from Dark Visitors"
+           git rebase origin/main
            git push
          else
            echo "No changes to commit."
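The push step above can be sketched as a command sequence (a hypothetical helper, not the workflow's actual code): commit only when the generated files changed, then rebase onto origin/main before pushing, so a remote that moved since checkout does not reject the push as non-fast-forward. The rebase is also why `fetch-depth` changes from 2 to 0: rebasing needs the full history of origin/main.

```python
def update_commands(changed: bool) -> list:
    # Build the git command sequence the scheduled job runs after
    # regenerating the files. No commands when nothing changed.
    if not changed:
        return []
    return [
        ["git", "commit", "-m", "Update from Dark Visitors"],
        ["git", "rebase", "origin/main"],  # replay on top of the current remote tip
        ["git", "push"],
    ]
```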

.htaccess

@@ -1,3 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AddSearchBot|AI2Bot|Ai2Bot\-Dolma|aiHitBot|AmazonBuyForMe|atlassian\-bot|amazon\-kendra|Amazonbot|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|LinerBot|Linguee\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|wpbot|YaK|YandexAdditional|YandexAdditionalBot|YouBot) [NC]
RewriteCond %{HTTP_USER_AGENT} (AddSearchBot|AI2Bot|AI2Bot\-DeepResearchEval|Ai2Bot\-Dolma|aiHitBot|amazon\-kendra|Amazonbot|AmazonBuyForMe|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|atlassian\-bot|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|Channel3Bot|ChatGLM\-Spider|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawl4AI|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iAskBot|iaskspider|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|imageSpider|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|KunatoCrawler|laion\-huggingface\-processor|LAIONDownloader|LCC|LinerBot|Linguee\ Bot|LinkupBot|Manus\-User|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poggio\-Citations|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|Spider|TavilyBot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|TwinAgent|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|webzio\-extended|wpbot|WRTNBot|YaK|YandexAdditional|YandexAdditionalBot|YouBot|ZanistaBot) [NC]
RewriteRule !^/?robots\.txt$ - [F]

Caddyfile

@@ -1,3 +1,3 @@
@aibots {
header_regexp User-Agent "(AddSearchBot|AI2Bot|Ai2Bot\-Dolma|aiHitBot|AmazonBuyForMe|atlassian\-bot|amazon\-kendra|Amazonbot|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|LinerBot|Linguee\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|wpbot|YaK|YandexAdditional|YandexAdditionalBot|YouBot)"
header_regexp User-Agent "(AddSearchBot|AI2Bot|AI2Bot\-DeepResearchEval|Ai2Bot\-Dolma|aiHitBot|amazon\-kendra|Amazonbot|AmazonBuyForMe|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|atlassian\-bot|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|Channel3Bot|ChatGLM\-Spider|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawl4AI|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iAskBot|iaskspider|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|imageSpider|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|KunatoCrawler|laion\-huggingface\-processor|LAIONDownloader|LCC|LinerBot|Linguee\ Bot|LinkupBot|Manus\-User|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poggio\-Citations|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|Spider|TavilyBot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|TwinAgent|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|webzio\-extended|wpbot|WRTNBot|YaK|YandexAdditional|YandexAdditionalBot|YouBot|ZanistaBot)"
}

README.md

@@ -44,15 +44,22 @@ Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/
 middleware plugin for [Traefik](https://traefik.io/traefik/) to automatically add rules of [robots.txt](./robots.txt)
 file on-the-fly.
 - Alternatively you can [manually configure Traefik](./docs/traefik-manual-setup.md) to centrally serve a static `robots.txt`.

 ## Contributing

 A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, `.htaccess` and `nginx-block-ai-bots.conf`.

-You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3 and issuing:
+You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3, installing the dependencies:
+
+```console
+pip install -r requirements.txt
+```
+
+and then issuing:

 ```console
 code/tests.py
 ```

+The `.editorconfig` file provides standard editor options for this project. See [EditorConfig](https://editorconfig.org/) for more information.
+
 ## Releasing

 Admins may ship a new release `v1.n` (where `n` increments the minor version of the current release) as follows:
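The generation step described in the Contributing note can be sketched in a few lines (a simplified illustration; the real logic lives in robots.py and covers all output formats):

```python
import json

def json_to_robots_txt(robot_json: dict) -> str:
    # Simplified sketch: one User-agent line per bot name from
    # robots.json, followed by a blanket Disallow rule.
    lines = [f"User-agent: {name}" for name in robot_json]
    lines.append("Disallow: /")
    return "\n".join(lines)

# Usage: feed it a parsed robots.json
robots = json.loads('{"GPTBot": {}, "CCBot": {}}')
text = json_to_robots_txt(robots)
```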

robots.py

@@ -52,7 +52,16 @@ def updated_robots_json(soup):
         for agent in section.find_all("a", href=True):
             name = agent.find("div", {"class": "agent-name"}).get_text().strip()
             name = clean_robot_name(name)
-            desc = agent.find("p").get_text().strip()
+            # The line below occasionally throws: AttributeError: 'NoneType' object has no attribute 'get_text'
+            # desc = agent.find("p").get_text().strip()
+            # Guard against the missing tag to avoid the error:
+            p_tag = agent.find("p")
+            if p_tag is not None:
+                desc = p_tag.get_text().strip()
+            else:
+                desc = "Description unavailable from darkvisitors.com"
             default_values = {
                 "Unclear at this time.",
@@ -175,7 +184,7 @@ def json_to_htaccess(robot_json):
 def json_to_nginx(robot_json):
     # Creates an Nginx config file. This config snippet can be included in
     # nginx server{} blocks to block AI bots.
-    config = f"if ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n return 403;\n}}"
+    config = f"set $block 0;\n\nif ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n set $block 1;\n}}\n\nif ($request_uri = \"/robots.txt\") {{\n set $block 0;\n}}\n\nif ($block) {{\n return 403;\n}}"
     return config
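The `p_tag` guard above can be exercised without BeautifulSoup; `FakeTag` here is a hypothetical stand-in for a bs4 element, used only so the pattern is testable in isolation:

```python
class FakeTag:
    # Minimal stand-in for a BeautifulSoup <p> element.
    def __init__(self, text):
        self._text = text

    def get_text(self):
        return self._text

def extract_description(p_tag):
    # Same guard as in robots.py above: find("p") returns None when the
    # agent entry has no <p> element, so only call get_text() on a real tag.
    if p_tag is not None:
        return p_tag.get_text().strip()
    return "Description unavailable from darkvisitors.com"
```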

nginx test configuration

@@ -1,3 +1,13 @@
set $block 0;
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)") {
set $block 1;
}
if ($request_uri = "/robots.txt") {
set $block 0;
}
if ($block) {
return 403;
}
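The three-step `$block` logic above can be simulated in Python to show why `/robots.txt` always gets through (abbreviated user-agent pattern for illustration only; the real pattern covers the full bot list):

```python
import re

# Abbreviated pattern; the generated config matches the full bot list.
BOT_RE = re.compile(r"(GPTBot|ClaudeBot|CCBot|Bytespider)", re.IGNORECASE)

def should_block(user_agent: str, request_uri: str) -> bool:
    block = bool(BOT_RE.search(user_agent))  # if ($http_user_agent ~* "...") { set $block 1; }
    if request_uri == "/robots.txt":         # if ($request_uri = "/robots.txt") { set $block 0; }
        block = False
    return block                             # if ($block) { return 403; }
```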

docs/traefik-manual-setup.md

@@ -0,0 +1,36 @@
# Intro
If you're using Traefik as your reverse proxy in your docker setup, you might also want it to centrally serve the ```/robots.txt``` for all your Traefik-fronted services.
This can be achieved by configuring a single lightweight service to serve static files and defining a high-priority Traefik HTTP router rule.
# Setup
Define a single service to serve the one robots.txt to rule them all. I'm using a lean nginx:alpine docker image in this example:
```yaml
services:
robots:
image: nginx:alpine
container_name: robots-server
volumes:
- ./static/:/usr/share/nginx/html/:ro
labels:
- "traefik.enable=true"
# Router for all /robots.txt requests
- "traefik.http.routers.robots.rule=Path(`/robots.txt`)"
- "traefik.http.routers.robots.entrypoints=web,websecure"
- "traefik.http.routers.robots.priority=3000"
- "traefik.http.routers.robots.service=robots"
- "traefik.http.routers.robots.tls.certresolver=letsencrypt"
- "traefik.http.services.robots.loadbalancer.server.port=80"
networks:
- external_network
networks:
external_network:
name: traefik_external_network
external: true
```
The Traefik HTTP router rule deliberately contains no hostname. Traefik will print a warning about this for the TLS setup, but it will work. The high priority of 3000 should ensure this rule is evaluated first for incoming requests.
Place your robots.txt in the local `./static/` directory and NGINX will serve it for all services behind your Traefik proxy.

robots.txt

@@ -1,16 +1,17 @@
AddSearchBot
AI2Bot
AI2Bot-DeepResearchEval
Ai2Bot-Dolma
aiHitBot
AmazonBuyForMe
atlassian-bot
amazon-kendra
Amazonbot
AmazonBuyForMe
Andibot
Anomura
anthropic-ai
Applebot
Applebot-Extended
atlassian-bot
Awario
bedrockbot
bigsur.ai
@@ -19,6 +20,8 @@ Brightbot 1.0
BuddyBot
Bytespider
CCBot
Channel3Bot
ChatGLM-Spider
ChatGPT Agent
ChatGPT-User
Claude-SearchBot
@@ -30,6 +33,7 @@ CloudVertexBot
cohere-ai
cohere-training-data-crawler
Cotoyogi
Crawl4AI
Crawlspace
Datenbank Crawler
DeepSeekBot
@@ -53,16 +57,25 @@ GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iAskBot
iaskspider
iaskspider/2.0
IbouBot
ICC-Crawler
ImagesiftBot
imageSpider
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
KlaviyoAIBot
KunatoCrawler
laion-huggingface-processor
LAIONDownloader
LCC
LinerBot
Linguee Bot
LinkupBot
Manus-User
meta-externalagent
Meta-ExternalAgent
meta-externalfetcher
@@ -86,6 +99,7 @@ Perplexity-User
PerplexityBot
PetalBot
PhindBot
Poggio-Citations
Poseidon Research Crawler
QualifiedBot
QuillBot
@@ -96,15 +110,21 @@ SemrushBot-OCOB
SemrushBot-SWA
ShapBot
Sidetrade indexer bot
Spider
TavilyBot
TerraCotta
Thinkbot
TikTokSpider
Timpibot
TwinAgent
VelenPublicWebCrawler
WARDBot
Webzio-Extended
webzio-extended
wpbot
WRTNBot
YaK
YandexAdditional
YandexAdditionalBot
YouBot
ZanistaBot
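The reshuffled entries above (amazon-kendra before Amazonbot before AmazonBuyForMe) follow case-insensitive alphabetical order, which `sorted()` with a casefold key reproduces (an assumption inferred from the diff; the canonical ordering lives in robots.json):

```python
def sort_agents(names):
    # Case-insensitive ordering, as used by the generated lists above.
    return sorted(names, key=str.casefold)
```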

nginx-block-ai-bots.conf

@@ -1,3 +1,13 @@
if ($http_user_agent ~* "(AddSearchBot|AI2Bot|Ai2Bot\-Dolma|aiHitBot|AmazonBuyForMe|atlassian\-bot|amazon\-kendra|Amazonbot|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|LinerBot|Linguee\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|wpbot|YaK|YandexAdditional|YandexAdditionalBot|YouBot)") {
set $block 0;
if ($http_user_agent ~* "(AddSearchBot|AI2Bot|AI2Bot\-DeepResearchEval|Ai2Bot\-Dolma|aiHitBot|amazon\-kendra|Amazonbot|AmazonBuyForMe|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|atlassian\-bot|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|Channel3Bot|ChatGLM\-Spider|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawl4AI|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iAskBot|iaskspider|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|imageSpider|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|KunatoCrawler|laion\-huggingface\-processor|LAIONDownloader|LCC|LinerBot|Linguee\ Bot|LinkupBot|Manus\-User|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poggio\-Citations|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|Spider|TavilyBot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|TwinAgent|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|webzio\-extended|wpbot|WRTNBot|YaK|YandexAdditional|YandexAdditionalBot|YouBot|ZanistaBot)") {
set $block 1;
}
if ($request_uri = "/robots.txt") {
set $block 0;
}
if ($block) {
return 403;
}

requirements.txt (new file, +3 lines)

@@ -0,0 +1,3 @@
beautifulsoup4
lxml
requests

robots.json

@@ -13,12 +13,19 @@
"frequency": "No information provided.",
"description": "Explores 'certain domains' to find web content."
},
"AI2Bot-DeepResearchEval": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/ai2bot-deepresearcheval"
},
"Ai2Bot-Dolma": {
"description": "Explores 'certain domains' to find web content.",
"frequency": "No information provided.",
"function": "Content is used to train open language models.",
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes"
"respect": "Yes",
"function": "Content is used to train open language models.",
"frequency": "No information provided.",
"description": "Explores 'certain domains' to find web content."
},
"aiHitBot": {
"operator": "[aiHit](https://www.aihitdata.com/about)",
@@ -27,20 +34,6 @@
"frequency": "No information provided.",
"description": "Scrapes data for AI systems."
},
"AmazonBuyForMe": {
"operator": "[Amazon](https://amazon.com)",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "No information provided.",
"description": "Buy For Me is an AI agent that helps buy products at the direction of customers."
},
"atlassian-bot": {
"operator": "[Atlassian](https://www.atlassian.com)",
"respect": "[Yes](https://support.atlassian.com/organization-administration/docs/connect-custom-website-to-rovo/#Editing-your-robots.txt)",
"function": "AI search, assistants and agents",
"frequency": "No information provided.",
"description": "atlassian-bot is a web crawler used to index website content for its AI search, assistants and agents available in its Rovo GenAI product."
},
"amazon-kendra": {
"operator": "Amazon",
"respect": "Yes",
@@ -55,6 +48,13 @@
"frequency": "No information provided.",
"description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
},
"AmazonBuyForMe": {
"operator": "[Amazon](https://amazon.com)",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "No information provided.",
"description": "Buy For Me is an AI agent that helps buy products at the direction of customers."
},
"Andibot": {
"operator": "[Andi](https://andisearch.com/)",
"respect": "Unclear at this time",
@@ -90,6 +90,13 @@
"frequency": "Unclear at this time.",
"description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
},
"atlassian-bot": {
"operator": "[Atlassian](https://www.atlassian.com)",
"respect": "[Yes](https://support.atlassian.com/organization-administration/docs/connect-custom-website-to-rovo/#Editing-your-robots.txt)",
"function": "AI search, assistants and agents",
"frequency": "No information provided.",
"description": "atlassian-bot is a web crawler used to index website content for its AI search, assistants and agents available in its Rovo GenAI product."
},
"Awario": {
"operator": "Awario",
"respect": "Unclear at this time.",
@@ -146,6 +153,20 @@
"frequency": "Monthly at present.",
"description": "Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers)."
},
"Channel3Bot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/channel3bot"
},
"ChatGLM-Spider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/chatglm-spider"
},
"ChatGPT Agent": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
@@ -223,6 +244,13 @@
"frequency": "No information provided.",
"description": "Scrapes data for AI training in Japanese language."
},
"Crawl4AI": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/crawl4ai"
},
"Crawlspace": {
"operator": "[Crawlspace](https://crawlspace.dev)",
"respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
@@ -239,7 +267,7 @@
},
"DeepSeekBot": {
"operator": "DeepSeek",
"respect": "Unclear at this time.",
"respect": "No",
"function": "Training language models and improving AI products",
"frequency": "Unclear at this time.",
"description": "DeepSeekBot is a web crawler used by DeepSeek to train its language models and improve its AI products."
@@ -384,6 +412,20 @@
"frequency": "No information.",
"description": "Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies."
},
"iAskBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskbot"
},
"iaskspider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskspider"
},
"iaskspider/2.0": {
"description": "Used to provide answers to user queries.",
"frequency": "Unclear at this time.",
@@ -412,6 +454,13 @@
"operator": "[ImageSift](https://imagesift.com)",
"respect": "[Yes](https://imagesift.com/about)"
},
"imageSpider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/imagespider"
},
"img2dataset": {
"description": "Downloads large sets of images into datasets for LLM training or other purposes.",
"frequency": "At the discretion of img2dataset users.",
@@ -440,6 +489,34 @@
"frequency": "Indexes based on 'change signals' and user configuration.",
"description": "Indexes content to tailor AI experiences, generate content, answers and recommendations."
},
"KunatoCrawler": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/kunatocrawler"
},
"laion-huggingface-processor": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/laion-huggingface-processor"
},
"LAIONDownloader": {
"operator": "[Large-scale Artificial Intelligence Open Network](https://laion.ai/)",
"respect": "[No](https://laion.ai/faq/)",
"function": "AI tools and models for machine learning research.",
"frequency": "Unclear at this time.",
"description": "LAIONDownloader is a bot by LAION, a non-profit organization that provides datasets, tools and models to liberate machine learning research."
},
"LCC": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/lcc"
},
"LinerBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
@@ -454,6 +531,20 @@
"frequency": "Unclear at this time.",
"description": "Linguee Bot is a web crawler used by Linguee to gather training data for its AI powered translation service."
},
"LinkupBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/linkupbot"
},
"Manus-User": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/manus-user"
},
"meta-externalagent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes",
@@ -609,11 +700,18 @@
"description": "Operated by Huawei to provide search and AI assistant services."
},
"PhindBot": {
"description": "Company offers an AI agent that uses AI and generate extra web query on the fly",
"frequency": "No explicit frequency provided.",
"function": "AI-enhanced search engine.",
"operator": "[phind](https://www.phind.com/)",
"respect": "Unclear at this time."
"respect": "Unclear at this time.",
"function": "AI-enhanced search engine.",
"frequency": "No explicit frequency provided.",
"description": "Company offers an AI agent that uses AI and generate extra web query on the fly"
},
"Poggio-Citations": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/poggio-citations"
},
"Poseidon Research Crawler": {
"operator": "[Poseidon Research](https://www.poseidonresearch.com)",
@@ -644,11 +742,11 @@
"respect": "Unclear at this time."
},
"SBIntuitionsBot": {
"description": "AI development and information analysis",
"operator": "[SB Intuitions](https://www.sbintuitions.co.jp/en/)",
"respect": "[Yes](https://www.sbintuitions.co.jp/en/bot/)",
"frequency": "No information.",
"function": "Uses data gathered in AI development and information analysis.",
"operator": "[SB Intuitions](https://www.sbintuitions.co.jp/en/)"
"frequency": "No information.",
"description": "AI development and information analysis"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
@@ -685,6 +783,20 @@
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"Spider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/spider"
},
"TavilyBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/tavilybot"
},
"TerraCotta": {
"operator": "[Ceramic AI](https://ceramic.ai/)",
"respect": "[Yes](https://github.com/CeramicTeam/CeramicTerracotta)",
@@ -713,12 +825,19 @@
"frequency": "No information.",
"description": "Makes data available for training AI models."
},
"TwinAgent": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/twinagent"
},
"VelenPublicWebCrawler": {
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\"",
"frequency": "No information.",
"function": "Scrapes data for business data sets and machine learning models.",
"operator": "[Velen Crawler](https://velen.io)",
"respect": "[Yes](https://velen.io)"
"respect": "[Yes](https://velen.io)",
"function": "Scrapes data for business data sets and machine learning models.",
"frequency": "No information.",
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\""
},
"WARDBot": {
"operator": "WEBSPARK",
@@ -734,6 +853,13 @@
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"webzio-extended": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"wpbot": {
"operator": "[QuantumCloud](https://www.quantumcloud.com)",
"respect": "Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9)",
@@ -741,6 +867,13 @@
"frequency": "Unclear at this time.",
"description": "wpbot is a used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of customer models, data collection and customer support."
},
"WRTNBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/wrtnbot"
},
"YaK": {
"operator": "[Meltwater](https://www.meltwater.com/en/suite/consumer-intelligence)",
"respect": "Unclear at this time.",
@@ -768,5 +901,12 @@
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
},
"ZanistaBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/zanistabot"
}
}
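The robots.json entries above all share the same five metadata fields. A minimal validator can check that invariant before the generated file is committed; this is a hedged sketch (the names `REQUIRED_FIELDS` and `validate_robots_json` are illustrative, not the repository's actual tooling):

```python
import json

# The five fields every entry in the diff above carries.
REQUIRED_FIELDS = {"operator", "respect", "function", "frequency", "description"}

def validate_robots_json(text: str) -> list[str]:
    """Return a list of problems found in a robots.json document."""
    problems = []
    for name, entry in json.loads(text).items():
        missing = REQUIRED_FIELDS - set(entry)
        if missing:
            problems.append(f"{name}: missing {sorted(missing)}")
    return problems

# Hypothetical entry for illustration only.
sample = json.dumps({"ExampleBot": {
    "operator": "Unclear at this time.",
    "respect": "Unclear at this time.",
    "function": "AI Agents",
    "frequency": "Unclear at this time.",
    "description": "Hypothetical entry.",
}})
print(validate_robots_json(sample))  # → []
```

A check like this would also catch the key-reordering churn visible in the PhindBot and SBIntuitionsBot hunks, since it parses the JSON rather than diffing text.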


@@ -1,16 +1,17 @@
User-agent: AddSearchBot
User-agent: AI2Bot
User-agent: AI2Bot-DeepResearchEval
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: AmazonBuyForMe
User-agent: atlassian-bot
User-agent: amazon-kendra
User-agent: Amazonbot
User-agent: AmazonBuyForMe
User-agent: Andibot
User-agent: Anomura
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: atlassian-bot
User-agent: Awario
User-agent: bedrockbot
User-agent: bigsur.ai
@@ -19,6 +20,8 @@ User-agent: Brightbot 1.0
User-agent: BuddyBot
User-agent: Bytespider
User-agent: CCBot
User-agent: Channel3Bot
User-agent: ChatGLM-Spider
User-agent: ChatGPT Agent
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
@@ -30,6 +33,7 @@ User-agent: CloudVertexBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawl4AI
User-agent: Crawlspace
User-agent: Datenbank Crawler
User-agent: DeepSeekBot
@@ -53,16 +57,25 @@ User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iAskBot
User-agent: iaskspider
User-agent: iaskspider/2.0
User-agent: IbouBot
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: imageSpider
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: KlaviyoAIBot
User-agent: KunatoCrawler
User-agent: laion-huggingface-processor
User-agent: LAIONDownloader
User-agent: LCC
User-agent: LinerBot
User-agent: Linguee Bot
User-agent: LinkupBot
User-agent: Manus-User
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
@@ -86,6 +99,7 @@ User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poggio-Citations
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
@@ -96,16 +110,22 @@ User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SWA
User-agent: ShapBot
User-agent: Sidetrade indexer bot
User-agent: Spider
User-agent: TavilyBot
User-agent: TerraCotta
User-agent: Thinkbot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: TwinAgent
User-agent: VelenPublicWebCrawler
User-agent: WARDBot
User-agent: Webzio-Extended
User-agent: webzio-extended
User-agent: wpbot
User-agent: WRTNBot
User-agent: YaK
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
User-agent: ZanistaBot
Disallow: /
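The generated robots.txt above uses one rule group: every listed user agent, followed by a single `Disallow: /` that blocks the whole site. A site owner can sanity-check the effect with the standard library's `urllib.robotparser`; this is a sketch using a two-agent excerpt, not the full generated file:

```python
from urllib.robotparser import RobotFileParser

# Excerpt of the generated file: many User-agent lines form one group,
# and the single Disallow applies to all of them.
robots_txt = """\
User-agent: GPTBot
User-agent: CCBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/page"))        # False
print(parser.can_fetch("UnlistedBot", "https://example.com/page"))   # True
```

Because there is no `User-agent: *` group, agents not on the list remain allowed, which is the intended behavior of an opt-out blocklist.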


@@ -2,17 +2,18 @@
|------|----------|-----------------------|----------|------------------|-------------|
| AddSearchBot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | AddSearchBot is a web crawler that indexes website content for AddSearch's AI-powered site search solution, collecting data to provide fast and accurate search results. More info can be found at https://darkvisitors.com/agents/agents/addsearchbot |
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| AI2Bot\-DeepResearchEval | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/ai2bot-deepresearcheval |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| aiHitBot | [aiHit](https://www.aihitdata.com/about) | Yes | A massive, artificial intelligence/machine learning, automated system. | No information provided. | Scrapes data for AI systems. |
| AmazonBuyForMe | [Amazon](https://amazon.com) | Unclear at this time. | AI Agents | No information provided. | Buy For Me is an AI agent that helps buy products at the direction of customers. |
| atlassian\-bot | [Atlassian](https://www.atlassian.com) | [Yes](https://support.atlassian.com/organization-administration/docs/connect-custom-website-to-rovo/#Editing-your-robots.txt) | AI search, assistants and agents | No information provided. | atlassian-bot is a web crawler used to index website content for its AI search, assistants and agents available in its Rovo GenAI product. |
| amazon\-kendra | Amazon | Yes | Collects data for AI natural language search | No information provided. | Amazon Kendra is a highly accurate intelligent search service that enables your users to search unstructured data using natural language. It returns specific answers to questions, giving users an experience that's close to interacting with a human expert. It is highly scalable and capable of meeting performance demands, tightly integrated with other AWS services such as Amazon S3 and Amazon Lex, and offers enterprise-grade security. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| AmazonBuyForMe | [Amazon](https://amazon.com) | Unclear at this time. | AI Agents | No information provided. | Buy For Me is an AI agent that helps buy products at the direction of customers. |
| Andibot | [Andi](https://andisearch.com/) | Unclear at this time | Search engine using generative AI, AI Search Assistant | No information provided. | Scrapes website and provides AI summary. |
| Anomura | [Direqt](https://direqt.ai) | Yes | Collects data for AI search | No information provided. | Anomura is Direqt's search crawler; it discovers and indexes pages on their customers' websites. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| atlassian\-bot | [Atlassian](https://www.atlassian.com) | [Yes](https://support.atlassian.com/organization-administration/docs/connect-custom-website-to-rovo/#Editing-your-robots.txt) | AI search, assistants and agents | No information provided. | atlassian-bot is a web crawler used to index website content for its AI search, assistants and agents available in its Rovo GenAI product. |
| Awario | Awario | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Awario is an AI data scraper operated by Awario. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/awario |
| bedrockbot | [Amazon](https://amazon.com) | [Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector) | Data scraping for custom AI applications. | Unclear at this time. | Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application. |
| bigsur\.ai | Big Sur AI that fetches website content to enable AI-powered web agents, sales assistants, and content marketing solutions for businesses | Unclear at this time. | AI Assistants | Unclear at this time. | bigsur.ai is a web crawler operated by Big Sur AI that fetches website content to enable AI-powered web agents, sales assistants, and content marketing solutions for businesses. More info can be found at https://darkvisitors.com/agents/agents/bigsur-ai |
@@ -21,6 +22,8 @@
| BuddyBot | [BuddyBotLearning](https://www.buddybotlearning.com) | Unclear at this time. | AI Learning Companion | Unclear at this time. | BuddyBot is a voice-controlled AI learning companion targeted at childhood STEM education. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| Channel3Bot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/channel3bot |
| ChatGLM\-Spider | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/chatglm-spider |
| ChatGPT Agent | [OpenAI](https://openai.com) | Yes | AI Agents | Unclear at this time. | ChatGPT Agent is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/chatgpt-agent |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-SearchBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. | No information provided. | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. |
@@ -32,9 +35,10 @@
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-training\-data\-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
| Cotoyogi | [ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/) | Yes | AI LLM Scraper. | No information provided. | Scrapes data for AI training in Japanese language. |
| Crawl4AI | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/crawl4ai |
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
| Datenbank Crawler | Datenbank | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Datenbank Crawler is an AI data scraper operated by Datenbank. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/datenbank-crawler |
| DeepSeekBot | DeepSeek | Unclear at this time. | Training language models and improving AI products | Unclear at this time. | DeepSeekBot is a web crawler used by DeepSeek to train its language models and improve its AI products. |
| DeepSeekBot | DeepSeek | No | Training language models and improving AI products | Unclear at this time. | DeepSeekBot is a web crawler used by DeepSeek to train its language models and improve its AI products. |
| Devin | Devin AI | Yes | AI Assistants | Unclear at this time. | Devin is a collaborative AI teammate built to help ambitious engineering teams achieve more. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
@@ -55,16 +59,25 @@
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. |
| iAskBot | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskbot |
| iaskspider | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskspider |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| IbouBot | Ibou | Yes | Search result generation. | Unclear at this time. | Ibou.io operates a crawler service named IbouBot which fuels and updates their graph representation of the World Wide Web. This database and all the metrics are used to provide a search engine. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support their suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Their web intelligence products use this index to enable search and retrieval of similar images. |
| imageSpider | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/imagespider |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| KlaviyoAIBot | [Klaviyo](https://www.klaviyo.com) | [Yes](https://help.klaviyo.com/hc/en-us/articles/40496146232219) | AI Search Crawlers | Indexes based on 'change signals' and user configuration. | Indexes content to tailor AI experiences, generate content, answers and recommendations. |
| KunatoCrawler | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/kunatocrawler |
| laion\-huggingface\-processor | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/laion-huggingface-processor |
| LAIONDownloader | [Large-scale Artificial Intelligence Open Network](https://laion.ai/) | [No](https://laion.ai/faq/) | AI tools and models for machine learning research. | Unclear at this time. | LAIONDownloader is a bot by LAION, a non-profit organization that provides datasets, tools and models to liberate machine learning research. |
| LCC | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/lcc |
| LinerBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | LinerBot is the web crawler used by Liner AI assistant to gather information from academic sources and websites to provide accurate answers with line-by-line source citations for research and scholarly work. More info can be found at https://darkvisitors.com/agents/agents/linerbot |
| Linguee Bot | [Linguee](https://www.linguee.com) | No | AI powered translation service | Unclear at this time. | Linguee Bot is a web crawler used by Linguee to gather training data for its AI powered translation service. |
| LinkupBot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/linkupbot |
| Manus\-User | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/manus-user |
| meta\-externalagent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalAgent | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent |
| meta\-externalfetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
@@ -88,6 +101,7 @@
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| PhindBot | [phind](https://www.phind.com/) | Unclear at this time. | AI-enhanced search engine. | No explicit frequency provided. | Company offers an AI agent that uses AI and generates extra web queries on the fly |
| Poggio\-Citations | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/poggio-citations |
| Poseidon Research Crawler | [Poseidon Research](https://www.poseidonresearch.com) | Unclear at this time. | AI research crawler | No explicit frequency provided. | Lab focused on scaling the interpretability research necessary to make better AI systems possible. |
| QualifiedBot | [Qualified](https://www.qualified.com) | Unclear at this time. | Company offers AI agents and other related products; usage can be assumed to support said products. | No explicit frequency provided. | Operated by Qualified as part of their suite of AI product offerings. |
| QuillBot | [Quillbot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
@@ -98,15 +112,21 @@
| SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for SEO Writing Assistant. | Roughly once every 10 seconds. | Data collected is used for the SEO Writing Assistant tool to check if URL is accessible. |
| ShapBot | [Parallel](https://parallel.ai) | [Yes](https://docs.parallel.ai/features/crawler) | Collects data for Parallel's web APIs. | Unclear at this time. | ShapBot helps discover and index websites for Parallel's web APIs. |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| Spider | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/spider |
| TavilyBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/tavilybot |
| TerraCotta | [Ceramic AI](https://ceramic.ai/) | [Yes](https://github.com/CeramicTeam/CeramicTerracotta) | AI Agents | Unclear at this time. | Downloads data to train LLMs. |
| Thinkbot | [Thinkbot](https://www.thinkbot.agency) | No | Insights on AI integration and automation. | Unclear at this time. | Collects data for analysis on AI usage and automation. |
| TikTokSpider | ByteDance | Unclear at this time. | LLM training. | Unclear at this time. | Downloads data to train LLMs, as per Bytespider. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| TwinAgent | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/twinagent |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| WARDBot | WEBSPARK | Unclear at this time. | AI Data Scrapers | Unclear at this time. | WARDBot is an AI data scraper operated by WEBSPARK. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/wardbot |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| webzio\-extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| wpbot | [QuantumCloud](https://www.quantumcloud.com) | Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9) | Live chat support and lead generation. | Unclear at this time. | wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of customer models, data collection and customer support. |
| WRTNBot | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/wrtnbot |
| YaK | [Meltwater](https://www.meltwater.com/en/suite/consumer-intelligence) | Unclear at this time. | According to the [Meltwater Consumer Intelligence page](https://www.meltwater.com/en/suite/consumer-intelligence) 'By applying AI, data science, and market research expertise to a live feed of global data sources, we transform unstructured data into actionable insights allowing better decision-making'. | Unclear at this time. | Retrieves data used for Meltwater's AI-enabled consumer intelligence suite. |
| YandexAdditional | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YandexAdditionalBot | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
| ZanistaBot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/zanistabot |
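The Markdown table above escapes `-` and `.` in agent names (e.g. `AI2Bot\-Dolma`, `bigsur\.ai`) but leaves the other cells untouched. A hedged sketch of rendering one robots.json entry as a table row, assuming that convention (the helper name `table_row` is illustrative, not the repository's generator script):

```python
def table_row(name: str, entry: dict) -> str:
    """Render one robots.json entry as a Markdown table row."""
    # Escape "-" and "." only in the agent-name column, as the table does.
    escaped = name.replace("-", "\\-").replace(".", "\\.")
    cells = [escaped, entry["operator"], entry["respect"],
             entry["function"], entry["frequency"], entry["description"]]
    return "| " + " | ".join(cells) + " |"

# Hypothetical entry for illustration only.
row = table_row("Example-Bot", {
    "operator": "Unclear at this time.",
    "respect": "Unclear at this time.",
    "function": "AI Agents",
    "frequency": "Unclear at this time.",
    "description": "Hypothetical entry.",
})
print(row)
```

Generating the row from the JSON keeps the three files (robots.json, robots.txt, the table) consistent with one another, which is why the diffs above move in lockstep.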