Compare commits

...

107 commits

Author SHA1 Message Date
ai.robots.txt
aa8519ec10 Update from Dark Visitors 2025-12-21 01:07:06 +00:00
dark-visitors
83485effdb Update from Dark Visitors 2025-12-20 00:58:49 +00:00
ai.robots.txt
8b8bf9da5d Update from Dark Visitors 2025-12-06 00:58:16 +00:00
dark-visitors
f1c752ef12 Update from Dark Visitors 2025-12-05 01:00:44 +00:00
Adam Newbold
51afa7113a
Update ai_robots_update.yml with rebase command to fix scheduled run 2025-12-03 20:15:10 -05:00
ai.robots.txt
7598d77e4a Update from Dark Visitors 2025-12-04 01:10:02 +00:00
dark-visitors
45b071b29f Update from Dark Visitors 2025-12-04 01:00:27 +00:00
dark-visitors
f61b3496f7 Update from Dark Visitors 2025-12-03 01:00:32 +00:00
dark-visitors
8363d4fdd4 Update from Dark Visitors 2025-12-02 01:25:24 +00:00
Adam Newbold
2ccd443581
Update ai_robots_update.yml with workflow_dispatch
Adding workflow_dispatch to enable manual triggers of this schedule job (for testing)
2025-12-01 20:24:41 -05:00
Adam Newbold
6d75f3c1c9
Update robots.py to address error on line 57
Attempting to work around an error that prevents parsing the Dark Visitors site
2025-12-01 20:18:29 -05:00
Glyn Normington
56010ef913
Merge pull request #205 from fiskhandlarn/fix/editorconfig
Fix/editorconfig
2025-11-29 10:02:08 +00:00
ai.robots.txt
3fadc88a23 Merge pull request #206 from newbold/main
Adding LAIONDownloader
2025-11-29 10:00:52 +00:00
Glyn Normington
47c077a8ef
Merge pull request #206 from newbold/main
Adding LAIONDownloader
2025-11-29 10:00:42 +00:00
Adam Newbold
f5d7ccb243
Fixed invalid JSON 2025-11-28 14:10:38 -05:00
Adam Newbold
30d719a09a
Adding LAIONDownloader 2025-11-28 14:08:18 -05:00
fiskhandlarn
05bbdebeaa feat: disallow final newline for files generated by python
if any of these files have ending newlines the tests will fail
2025-11-28 10:47:22 +01:00
fiskhandlarn
c6ce9329a1 fix: ensure whitespace as defined in .editorconfig 2025-11-28 10:46:22 +01:00
ai.robots.txt
10d5ae2870 Merge pull request #200 from glyn/deepseek
Clarify that DeepSeekBot does not respect robots.txt
2025-11-27 14:00:50 +00:00
Glyn Normington
4467002298
Merge pull request #200 from glyn/deepseek
Clarify that DeepSeekBot does not respect robots.txt
2025-11-27 14:00:40 +00:00
Glyn Normington
4a159a818f
Merge pull request #199 from glyn/editorconfig
Standardise editor options
2025-11-27 13:54:03 +00:00
Glyn Normington
3d6b33a71a
Merge pull request #202 from glyn/formatting
Tidy README
2025-11-27 12:42:31 +00:00
Glyn Normington
c26c8c0911 Tidy README 2025-11-27 12:41:37 +00:00
Glyn Normington
91959fe791
Merge pull request #201 from Anshita-18H/add-requirements-file
Add requirements.txt with project dependencies
2025-11-27 12:38:28 +00:00
Glyn Normington
b75163e796
Ensure Python3 is used 2025-11-27 12:38:12 +00:00
Glyn Normington
f46754d280
Order deps 2025-11-27 12:37:35 +00:00
Glyn Normington
c3f2fe758e
Whitespace 2025-11-27 12:37:21 +00:00
Glyn Normington
7521c3af50
Fix link 2025-11-27 12:37:07 +00:00
Anshita-18H
4302fd1aca Improve contributing section and fix formatting in README 2025-11-27 17:56:47 +05:30
Anshita-18H
9ca5033927 Deduplicate requirements.txt and add installation instructions to README 2025-11-27 17:44:47 +05:30
Anshita-18H
9cc3dbc05f Add requirements.txt with project dependencies 2025-11-27 17:06:59 +05:30
Glyn Normington
c322d7852d
Merge pull request #130 from maxheadroom/main
Create traefik-manual-setup.md
2025-11-26 12:08:11 +00:00
Glyn Normington
35588b2ddb
Typo. 2025-11-26 12:07:39 +00:00
Glyn Normington
ab24e41106 Clarify that DeepSeekBot does not respect robots.txt
Fixes https://github.com/ai-robots-txt/ai.robots.txt/issues/198
2025-11-26 12:02:42 +00:00
Glyn Normington
b681c0c0d8 Standardise editor options
This was motivated by @fiskhandlarn's comment:
https://github.com/ai-robots-txt/ai.robots.txt/pull/195#issuecomment-3576331322
2025-11-26 11:54:10 +00:00
Glyn Normington
4e7e28335f
Merge pull request #195 from fiskhandlarn/patch-2
feat: allow robots access to `/robots.txt` in nginx
2025-11-26 11:50:15 +00:00
fiskhandlarn
a6cf6b204b test: update test nginx conf 2025-11-25 16:47:04 +01:00
fiskhandlarn
2679fcad34 feat: update nginx generator 2025-11-25 16:46:31 +01:00
fiskhandlarn
ef8eda4fe6 chore: normalize quote style 2025-11-25 16:39:35 +01:00
Adam Newbold
a29102f0fc
Merge pull request #196 from glyn/releasing
Document how to ship a new release
2025-11-25 10:34:25 -05:00
fiskhandlarn
0b3266b35f feat: allow robots access to /robots.txt in nginx 2025-11-25 16:22:14 +01:00
Glyn Normington
6225e3e98e Document how to ship a new release 2025-11-23 04:01:43 +00:00
ai.robots.txt
be4d74412c Merge pull request #194 from glyn/193-kendra
delete extraneous hyphen
2025-11-21 16:32:11 +00:00
Cory Dransfeldt
729be4693a
Merge pull request #194 from glyn/193-kendra
delete extraneous hyphen
2025-11-21 08:32:01 -08:00
Glyn Normington
efb4d260da delete extraneous hyphen
Fixes https://github.com/ai-robots-txt/ai.robots.txt/issues/193
2025-11-20 10:23:48 +00:00
ai.robots.txt
e2726ac160 Merge pull request #192 from ai-robots-txt/cdransf/notebooklm-klaviyo
chore: adds NotebookLM and KlaviyoAIBot agents
2025-11-14 21:01:25 +00:00
Glyn Normington
663d030f96
Merge pull request #192 from ai-robots-txt/cdransf/notebooklm-klaviyo
chore: adds NotebookLM and KlaviyoAIBot agents
2025-11-14 21:01:13 +00:00
Cory Dransfeldt
28b45ea08d
chore: adds NotebookLM and KlaviyoAIBot agents 2025-11-14 11:03:03 -08:00
ai.robots.txt
443dd27527 Merge pull request #189 from ai-robots-txt/cdransf/atlassian-amazon-bots
chore(robots.json): add AmazonBuyForMe and atlassian-bot
2025-11-05 23:50:59 +00:00
Glyn Normington
60b6a0829d
Merge pull request #189 from ai-robots-txt/cdransf/atlassian-amazon-bots
chore(robots.json): add AmazonBuyForMe and atlassian-bot
2025-11-05 23:50:48 +00:00
Cory Dransfeldt
00bf2b0e13
chore(robots.json): add AmazonBuyForMe and atlassian-bot 2025-11-05 14:02:37 -08:00
ai.robots.txt
e87eb706e3 Merge pull request #188 from ai-robots-txt/cdransf/buddybot
chore: adds BuddyBot
2025-11-04 05:16:28 +00:00
Glyn Normington
3d41350256
Merge pull request #188 from ai-robots-txt/cdransf/buddybot
chore: adds BuddyBot
2025-11-04 05:16:15 +00:00
Cory Dransfeldt
808451055c
chore: adds BuddyBot 2025-11-03 13:39:07 -08:00
ai.robots.txt
5cad0ee389 Merge pull request #187 from ai-robots-txt/cdransf/add-Linguee-Bot
chore: adds Linguee Bot
2025-10-24 18:05:01 +00:00
Glyn Normington
d55c9980cd
Merge pull request #187 from ai-robots-txt/cdransf/add-Linguee-Bot
chore: adds Linguee Bot
2025-10-24 19:04:42 +01:00
Cory Dransfeldt
192b0a2eef
chore: adds Linguee Bot 2025-10-24 10:35:04 -07:00
dark-visitors
97e19445ce Update from Dark Visitors 2025-10-24 00:53:30 +00:00
ai.robots.txt
0bc2361be8 Merge pull request #186 from ai-robots-txt/third-party
Use third party in text, rather than first
2025-10-21 15:23:46 +00:00
Cory Dransfeldt
511d8c955d
Merge pull request #186 from ai-robots-txt/third-party
Use third party in text, rather than first
2025-10-21 08:23:28 -07:00
Glyn Normington
b89a9eae6a Use third party in text, rather than first 2025-10-21 01:11:32 +01:00
ai.robots.txt
646ab08e15 Merge pull request #185 from ai-robots-txt/cdransf/add-IbouBot
chore: adds IbouBot
2025-10-21 00:09:24 +00:00
Glyn Normington
5c3da1c1af
Merge pull request #185 from ai-robots-txt/cdransf/add-IbouBot
chore: adds IbouBot
2025-10-21 01:09:06 +01:00
Glyn Normington
d22b8dfd7b
Description is third party, not first 2025-10-21 01:08:42 +01:00
Cory Dransfeldt
19c1d346c3
chore: adds IbouBot 2025-10-20 09:28:31 -07:00
ai.robots.txt
2fa0e9119c Merge pull request #183 from ai-robots-txt/cdransf/bot-additions-cloudflare
chore: add amazon-kendra-, Anomura, Cloudflare-AutoRAG and Bravebot
2025-10-20 14:34:43 +00:00
Glyn Normington
0874a92503
Merge pull request #183 from ai-robots-txt/cdransf/bot-additions-cloudflare
chore: add amazon-kendra-, Anomura, Cloudflare-AutoRAG and Bravebot
2025-10-20 15:34:30 +01:00
Cory Dransfeldt
28d2d09633
chore: add amazon-kendra-, Anomura, Cloudflare-AutoRAG and Bravebot 2025-10-19 15:57:28 -07:00
ai.robots.txt
260f5029fe Update from Dark Visitors 2025-10-16 00:56:33 +00:00
dark-visitors
91bf905fa9 Update from Dark Visitors 2025-10-15 00:56:30 +00:00
dark-visitors
56d03d46fb Update from Dark Visitors 2025-09-26 00:54:30 +00:00
ai.robots.txt
38d60b928c Merge pull request #179 from ai-robots-txt/deepseekbot
chore(robots.json): add DeepSeekBot
2025-09-25 10:17:38 +00:00
Glyn Normington
e2266bbc1d
Merge pull request #179 from ai-robots-txt/deepseekbot
chore(robots.json): add DeepSeekBot
2025-09-25 11:17:23 +01:00
Cory Dransfeldt
bf347bdf91
chore(robots.json): add DeepSeekBot 2025-09-24 09:42:01 -07:00
nisbet-hubbard
c6e7d69dd5
Update README.md 2025-09-24 09:39:21 -07:00
ai.robots.txt
8906f6b447 Update from Dark Visitors 2025-09-12 00:53:06 +00:00
dark-visitors
2fd93029ca Update from Dark Visitors 2025-09-11 00:55:06 +00:00
ai.robots.txt
b6338ddc73 Merge pull request #176 from emersion/TerraCotta
Add TerraCotta
2025-09-10 14:11:16 +00:00
Cory Dransfeldt
7ffbf33baf
Merge pull request #176 from emersion/TerraCotta
Add TerraCotta
2025-09-10 07:11:04 -07:00
Simon Ser
50870ba911 Add TerraCotta
Example server log:

    X.X.X.X - - [10/Sep/2025:00:06:17 +0000] "GET /archives/dri-devel/2023-November/430875.html HTTP/1.1" 200 7388 "-" "TerraCotta https://github.com/CeramicTeam/CeramicTerracotta"
2025-09-10 12:04:08 +02:00
dark-visitors
dd391bf960 Update from Dark Visitors 2025-09-10 00:53:56 +00:00
ai.robots.txt
229d1b4dbc Merge pull request #174 from karolyi/master
Update Brightbot operator and details; add meta-webindexer entry
2025-09-09 02:45:32 +00:00
Glyn Normington
4d506ca322
Merge pull request #174 from karolyi/master
Update Brightbot operator and details; add meta-webindexer entry
2025-09-09 03:45:23 +01:00
László Károlyi
ec508ab434
Update Brightbot operator and details; add meta-webindexer entry
- Update Brightbot operator to https://brightdata.com/brightbot.
- Change Brightbot frequency to "At least one per minute."
- Expand Brightbot description with disguise tactics link.
- Add new entry for meta-webindexer under Meta operator.
- Set meta-webindexer respect to "Unclear at this time."
- Define meta-webindexer function as "AI Assistants."
- Set meta-webindexer frequency to "Unhinged, more than 1 per second."
- Include meta-webindexer description on improving Meta AI search.
2025-09-08 21:42:06 +02:00
dark-visitors
0ed29412c9 Update from Dark Visitors 2025-08-28 00:55:47 +00:00
ai.robots.txt
2ad1c3e831 Merge pull request #168 from ai-robots-txt/google-firebase-shapbot
chore: add Google-Firebase and ShapBot
2025-08-27 10:13:17 +00:00
Glyn Normington
1677278c5a
Merge pull request #168 from ai-robots-txt/google-firebase-shapbot
chore: add Google-Firebase and ShapBot
2025-08-27 11:13:04 +01:00
Cory Dransfeldt
1a8edfa84a
chore: add Google-Firebase and ShapBot 2025-08-26 19:39:42 -07:00
dark-visitors
cf073d49f2 Update from Dark Visitors 2025-08-15 01:01:42 +00:00
ai.robots.txt
6d3f3e1712 Merge pull request #167 from ai-robots-txt/OpenAI-bot
chore: adds OpenAI but user agent
2025-08-14 01:45:05 +00:00
Glyn Normington
784b8440a5
Merge pull request #167 from ai-robots-txt/OpenAI-bot
chore: adds OpenAI but user agent
2025-08-14 02:44:53 +01:00
Cory Dransfeldt
0e687a5b58
chore: adds OpenAI but user agent 2025-08-13 09:01:55 -07:00
ai.robots.txt
ff9fc26404 Update from Dark Visitors 2025-08-01 01:12:53 +00:00
dark-visitors
146a229662 Update from Dark Visitors 2025-07-31 01:05:21 +00:00
ai.robots.txt
64f9d6ce9c Update from Dark Visitors 2025-07-30 01:05:21 +00:00
dark-visitors
085dd1071e Update from Dark Visitors 2025-07-29 01:11:26 +00:00
ai.robots.txt
9565c11d4c Merge pull request #166 from ai-robots-txt/summaly-bot
fix: remove summaly
2025-07-28 19:19:15 +00:00
Glyn Normington
8869442615
Merge pull request #166 from ai-robots-txt/summaly-bot
fix: remove summaly
2025-07-28 20:19:06 +01:00
ai.robots.txt
27420d6fed Merge pull request #164 from GlitzSmarter/main
Adding YaK
2025-07-28 18:29:57 +00:00
Glyn Normington
44e58a5ece
Merge pull request #164 from GlitzSmarter/main
Adding YaK
2025-07-28 19:29:48 +01:00
Glyn Normington
12c5368e04
Fix syntax 2025-07-28 19:29:19 +01:00
Glitz Smarter
c15065544a
Rewriting function in YaK
Rewrote the function to make it clear the quote is according to the page on the Clearwater website.
2025-07-28 12:54:25 -04:00
Cory Dransfeldt
9171625db6
fix: remove summaly 2025-07-28 08:56:21 -07:00
Gregory
8b188d0612 Added YaK user-agent to list of robots
Added Meltwater's AI to list of bots that use AI and scrape websites.
2025-07-26 23:50:11 -04:00
Falko Zurell
6c3ae6eb20
Merge branch 'ai-robots-txt:main' into main 2025-05-16 13:52:02 +02:00
Falko Zurell
684d11d889 moved traefik manual setup into docs
moved the traefik manual setup into the docs directory and linked to it from the README.md
2025-05-16 13:50:58 +02:00
Falko Zurell
9b5f75e2f3
Create traefik-manual-setup.md 2025-05-16 13:27:31 +02:00
14 changed files with 587 additions and 52 deletions

.editorconfig (new file, 9 lines)

@@ -0,0 +1,9 @@
root = true
[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[{Caddyfile,haproxy-block-ai-bots.txt,nginx-block-ai-bots.conf}]
insert_final_newline = false


@@ -2,6 +2,7 @@ name: Updates for AI robots files
on:
schedule:
- cron: "0 0 * * *"
workflow_dispatch:
jobs:
dark-visitors:
@@ -10,7 +11,7 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
fetch-depth: 0
- run: |
pip install beautifulsoup4 requests
git config --global user.name "dark-visitors"
@@ -22,6 +23,7 @@ jobs:
git add -A
if ! git diff --cached --quiet; then
git commit -m "Update from Dark Visitors"
git rebase origin/main
git push
else
echo "No changes to commit."


@@ -1,3 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|Awario|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Datenbank\ Crawler|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|SummalyBot|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot) [NC]
RewriteCond %{HTTP_USER_AGENT} (AddSearchBot|AI2Bot|AI2Bot\-DeepResearchEval|Ai2Bot\-Dolma|aiHitBot|amazon\-kendra|Amazonbot|AmazonBuyForMe|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|atlassian\-bot|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|Channel3Bot|ChatGLM\-Spider|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawl4AI|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iAskBot|iaskspider|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|imageSpider|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|KunatoCrawler|laion\-huggingface\-processor|LAIONDownloader|LCC|LinerBot|Linguee\ Bot|LinkupBot|Manus\-User|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poggio\-Citations|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|Spider|TavilyBot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|TwinAgent|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|webzio\-extended|wpbot|WRTNBot|YaK|YandexAdditional|YandexAdditionalBot|YouBot|ZanistaBot) [NC]
RewriteRule !^/?robots\.txt$ - [F]
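The rewritten rules above are generated by `json_to_htaccess` in `robots.py`, which relies on a `list_to_pcre` helper whose implementation isn't shown in this diff. The following is a minimal Python stand-in (the repo's actual escaping may differ slightly, e.g. how spaces are escaped):

```python
import re

def list_to_pcre(names):
    # Hypothetical stand-in for robots.py's list_to_pcre: escape each
    # user-agent token and join them into a single PCRE alternation.
    return "(" + "|".join(re.escape(n) for n in names) + ")"

def json_to_htaccess(robot_names):
    # Mirrors the shape of the generated .htaccess above: match any
    # listed agent case-insensitively, but exempt /robots.txt itself.
    return (
        "RewriteEngine On\n"
        f"RewriteCond %{{HTTP_USER_AGENT}} {list_to_pcre(robot_names)} [NC]\n"
        "RewriteRule !^/?robots\\.txt$ - [F]"
    )
```

Escaping each name before joining is what lets literal-metacharacter agents like `Brightbot 1.0` or `panscient.com` match safely in the alternation.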


@@ -1,3 +1,3 @@
@aibots {
header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|Awario|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Datenbank\ Crawler|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|SummalyBot|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)"
header_regexp User-Agent "(AddSearchBot|AI2Bot|AI2Bot\-DeepResearchEval|Ai2Bot\-Dolma|aiHitBot|amazon\-kendra|Amazonbot|AmazonBuyForMe|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|atlassian\-bot|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|Channel3Bot|ChatGLM\-Spider|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawl4AI|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iAskBot|iaskspider|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|imageSpider|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|KunatoCrawler|laion\-huggingface\-processor|LAIONDownloader|LCC|LinerBot|Linguee\ Bot|LinkupBot|Manus\-User|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poggio\-Citations|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|Spider|TavilyBot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|TwinAgent|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|webzio\-extended|wpbot|WRTNBot|YaK|YandexAdditional|YandexAdditionalBot|YouBot|ZanistaBot)"
}


@@ -4,7 +4,7 @@
This list contains AI-related crawlers of all types, regardless of purpose. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).
A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com) and we appreciate the ongoing effort they put in to track these crawlers.
A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com) and we appreciate the ongoing effort they put in to track these crawlers.
If you'd like to add information about a crawler to the list, please make a pull request with the bot name added to `robots.txt`, `ai.txt`, and any relevant details in `table-of-bot-metrics.md` to help people understand what's crawling.
@@ -44,15 +44,34 @@ Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/
middleware plugin for [Traefik](https://traefik.io/traefik/) to automatically add the rules from the [robots.txt](./robots.txt) file on the fly.
- Alternatively you can [manually configure Traefik](./docs/traefik-manual-setup.md) to centrally serve a static `robots.txt`.
## Contributing
A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, `.htaccess` and `nginx-block-ai-bots.conf`.
You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3 and issuing:
You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3, installing the dependencies:
```console
pip install -r requirements.txt
```
and then issuing:
```console
code/tests.py
```
The `.editorconfig` file provides standard editor options for this project. See [EditorConfig](https://editorconfig.org/) for more information.
## Releasing
Admins may ship a new release `v1.n` (where `n` increments the minor version of the current release) as follows:
* Navigate to the [new release page](https://github.com/ai-robots-txt/ai.robots.txt/releases/new) on GitHub.
* Click `Select tag`, choose `Create new tag`, enter `v1.n` in the pop-up, and click `Create`.
* Enter a suitable release title (e.g. `v1.n: adds user-agent1, user-agent2`).
* Click `Generate release notes`.
* Click `Publish release`.
A GitHub action will then add the asset `robots.txt` to the release. That's it.
## Subscribe to updates
You can subscribe to list updates via RSS/Atom with the releases feed:
@@ -65,6 +84,13 @@ You can subscribe with [Feedly](https://feedly.com/i/subscription/feed/https://g
Alternatively, you can also subscribe to new releases with your GitHub account by clicking the ⬇️ on the "Watch" button at the top of this page, clicking "Custom" and selecting "Releases".
## License content with RSL
It is also possible to license your content to AI companies in `robots.txt` using
the [Really Simple Licensing](https://rslstandard.org) standard, with an option of
collective bargaining. A [plugin](https://github.com/Jameswlepage/rsl-wp) currently
implements RSL as well as payment processing for WordPress sites.
## Report abusive crawlers
If you use [Cloudflare's hard block](https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click) alongside this list, you can report abusive crawlers that don't respect `robots.txt` [here](https://docs.google.com/forms/d/e/1FAIpQLScbUZ2vlNSdcsb8LyTeSF7uLzQI96s0BKGoJ6wQ6ocUFNOKEg/viewform).


@@ -52,7 +52,16 @@ def updated_robots_json(soup):
for agent in section.find_all("a", href=True):
name = agent.find("div", {"class": "agent-name"}).get_text().strip()
name = clean_robot_name(name)
desc = agent.find("p").get_text().strip()
# This line below occasionally throws this error: AttributeError: 'NoneType' object has no attribute 'get_text'
#desc = agent.find("p").get_text().strip()
# Attempting a different way to handle to avoid errors:
p_tag = agent.find("p")
if p_tag is not None:
desc = p_tag.get_text().strip()
else:
desc = "Description unavailable from darkvisitors.com"
default_values = {
"Unclear at this time.",
@@ -108,10 +117,10 @@ def clean_robot_name(name):
# This was specifically spotted in "Perplexity-User"
# Looks like a non-breaking hyphen introduced by the HTML rendering software
# Reading the source page for Perplexity: https://docs.perplexity.ai/guides/bots
# You can see the bot is listed several times as "Perplexity-User" with a normal hyphen,
# You can see the bot is listed several times as "Perplexity-User" with a normal hyphen,
# and it's only the Row-Heading that has the special hyphen
#
# Technically, there's no reason there wouldn't someday be a bot that
#
# Technically, there's no reason there wouldn't someday be a bot that
# actually uses a non-breaking hyphen, but that seems unlikely,
# so this solution should be fine for now.
result = re.sub(r"\u2011", "-", name)
@@ -173,9 +182,9 @@ def json_to_htaccess(robot_json):
return htaccess
def json_to_nginx(robot_json):
# Creates an Nginx config file. This config snippet can be included in
# Creates an Nginx config file. This config snippet can be included in
# nginx server{} blocks to block AI bots.
config = f"if ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n return 403;\n}}"
config = f"set $block 0;\n\nif ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n set $block 1;\n}}\n\nif ($request_uri = \"/robots.txt\") {{\n set $block 0;\n}}\n\nif ($block) {{\n return 403;\n}}"
return config
@@ -225,7 +234,7 @@ def conversions():
file_name="./Caddyfile",
converter=json_to_caddy,
)
update_file_if_changed(
file_name="./haproxy-block-ai-bots.txt",
converter=json_to_haproxy,
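Two fixes in this file can be sketched in isolation: the U+2011 normalization in `clean_robot_name`, and the `NoneType` guard added above (the `safe_description` helper name below is mine for illustration, not the repo's):

```python
import re

def clean_robot_name(name):
    # As in robots.py: replace U+2011 (a non-breaking hyphen introduced
    # by the Dark Visitors HTML rendering, spotted in "Perplexity-User")
    # with a normal ASCII hyphen.
    return re.sub(r"\u2011", "-", name)

def safe_description(p_text):
    # Equivalent of the guard added above: agent.find("p") can return
    # None, so fall back to a placeholder instead of raising
    # AttributeError on .get_text().
    if p_text is not None:
        return p_text.strip()
    return "Description unavailable from darkvisitors.com"
```

The placeholder string matters downstream: it shows up verbatim in the new `robots.json` entries (see `AI2Bot-DeepResearchEval` in the diff below) whenever Dark Visitors provides no description.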


@@ -1,3 +1,13 @@
set $block 0;
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)") {
set $block 1;
}
if ($request_uri = "/robots.txt") {
set $block 0;
}
if ($block) {
return 403;
}
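The `$block` toggle above is emitted by the new `json_to_nginx` in `robots.py`. A simplified sketch of that generator follows (escaping is omitted for brevity; the real code runs the agent list through `list_to_pcre` first):

```python
def json_to_nginx(robot_names):
    # Simplified version of robots.py's json_to_nginx: flag matching
    # user agents, then clear the flag for /robots.txt so that blocked
    # crawlers can still read the policy they are being asked to respect.
    pattern = "|".join(robot_names)
    return (
        "set $block 0;\n\n"
        f'if ($http_user_agent ~* "({pattern})") {{\n'
        "    set $block 1;\n}\n\n"
        'if ($request_uri = "/robots.txt") {\n'
        "    set $block 0;\n}\n\n"
        "if ($block) {\n    return 403;\n}"
    )
```

The two-step flag is needed because nginx `if` blocks don't nest; evaluating the URI check after the user-agent check lets the later directive win.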


@@ -0,0 +1,36 @@
# Intro
If you're using Traefik as the reverse proxy in your Docker setup, you might want it to centrally serve `/robots.txt` for all your Traefik-fronted services.
This can be achieved by configuring a single lightweight service to serve static files and defining a high-priority Traefik HTTP router rule.
# Setup
Define a single service to serve the one robots.txt to rule them all. I'm using a lean nginx:alpine docker image in this example:
```
services:
robots:
image: nginx:alpine
container_name: robots-server
volumes:
- ./static/:/usr/share/nginx/html/:ro
labels:
- "traefik.enable=true"
# Router for all /robots.txt requests
- "traefik.http.routers.robots.rule=Path(`/robots.txt`)"
- "traefik.http.routers.robots.entrypoints=web,websecure"
- "traefik.http.routers.robots.priority=3000"
- "traefik.http.routers.robots.service=robots"
- "traefik.http.routers.robots.tls.certresolver=letsencrypt"
- "traefik.http.services.robots.loadbalancer.server.port=80"
networks:
- external_network
networks:
external_network:
name: traefik_external_network
external: true
```
The Traefik HTTP router rule deliberately omits a hostname. Traefik will print a warning about this for the TLS setup, but it works. The high priority of 3000 should ensure this rule is evaluated first for incoming requests.
Place your robots.txt in the local `./static/` directory and NGINX will serve it for all services behind your Traefik proxy.


@@ -1,26 +1,42 @@
AddSearchBot
AI2Bot
AI2Bot-DeepResearchEval
Ai2Bot-Dolma
aiHitBot
amazon-kendra
Amazonbot
AmazonBuyForMe
Andibot
Anomura
anthropic-ai
Applebot
Applebot-Extended
atlassian-bot
Awario
bedrockbot
bigsur.ai
Bravebot
Brightbot 1.0
BuddyBot
Bytespider
CCBot
Channel3Bot
ChatGLM-Spider
ChatGPT Agent
ChatGPT-User
Claude-SearchBot
Claude-User
Claude-Web
ClaudeBot
Cloudflare-AutoRAG
CloudVertexBot
cohere-ai
cohere-training-data-crawler
Cotoyogi
Crawl4AI
Crawlspace
Datenbank Crawler
DeepSeekBot
Devin
Diffbot
DuckAssistBot
@@ -34,29 +50,47 @@ FriendlyCrawler
Gemini-Deep-Research
Google-CloudVertexBot
Google-Extended
Google-Firebase
Google-NotebookLM
GoogleAgent-Mariner
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iAskBot
iaskspider
iaskspider/2.0
IbouBot
ICC-Crawler
ImagesiftBot
imageSpider
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
KlaviyoAIBot
KunatoCrawler
laion-huggingface-processor
LAIONDownloader
LCC
LinerBot
Linguee Bot
LinkupBot
Manus-User
meta-externalagent
Meta-ExternalAgent
meta-externalfetcher
Meta-ExternalFetcher
meta-webindexer
MistralAI-User
MistralAI-User/1.0
MyCentralAIScraperBot
netEstate Imprint Crawler
NotebookLM
NovaAct
OAI-SearchBot
omgili
omgilibot
OpenAI
Operator
PanguBot
Panscient
@@ -65,6 +99,7 @@ Perplexity-User
PerplexityBot
PetalBot
PhindBot
Poggio-Citations
Poseidon Research Crawler
QualifiedBot
QuillBot
@@ -73,15 +108,23 @@ SBIntuitionsBot
Scrapy
SemrushBot-OCOB
SemrushBot-SWA
ShapBot
Sidetrade indexer bot
SummalyBot
Spider
TavilyBot
TerraCotta
Thinkbot
TikTokSpider
Timpibot
TwinAgent
VelenPublicWebCrawler
WARDBot
Webzio-Extended
webzio-extended
wpbot
WRTNBot
YaK
YandexAdditional
YandexAdditionalBot
YouBot
YouBot
ZanistaBot


@@ -1,3 +1,13 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|Awario|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Datenbank\ Crawler|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|SummalyBot|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)") {
set $block 0;
if ($http_user_agent ~* "(AddSearchBot|AI2Bot|AI2Bot\-DeepResearchEval|Ai2Bot\-Dolma|aiHitBot|amazon\-kendra|Amazonbot|AmazonBuyForMe|Andibot|Anomura|anthropic\-ai|Applebot|Applebot\-Extended|atlassian\-bot|Awario|bedrockbot|bigsur\.ai|Bravebot|Brightbot\ 1\.0|BuddyBot|Bytespider|CCBot|Channel3Bot|ChatGLM\-Spider|ChatGPT\ Agent|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|Cloudflare\-AutoRAG|CloudVertexBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawl4AI|Crawlspace|Datenbank\ Crawler|DeepSeekBot|Devin|Diffbot|DuckAssistBot|Echobot\ Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini\-Deep\-Research|Google\-CloudVertexBot|Google\-Extended|Google\-Firebase|Google\-NotebookLM|GoogleAgent\-Mariner|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iAskBot|iaskspider|iaskspider/2\.0|IbouBot|ICC\-Crawler|ImagesiftBot|imageSpider|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|KlaviyoAIBot|KunatoCrawler|laion\-huggingface\-processor|LAIONDownloader|LCC|LinerBot|Linguee\ Bot|LinkupBot|Manus\-User|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|meta\-webindexer|MistralAI\-User|MistralAI\-User/1\.0|MyCentralAIScraperBot|netEstate\ Imprint\ Crawler|NotebookLM|NovaAct|OAI\-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poggio\-Citations|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|ShapBot|Sidetrade\ indexer\ bot|Spider|TavilyBot|TerraCotta|Thinkbot|TikTokSpider|Timpibot|TwinAgent|VelenPublicWebCrawler|WARDBot|Webzio\-Extended|webzio\-extended|wpbot|WRTNBot|YaK|YandexAdditional|YandexAdditionalBot|YouBot|ZanistaBot)") {
set $block 1;
}
if ($request_uri = "/robots.txt") {
set $block 0;
}
if ($block) {
return 403;
}
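The `$block` flag pattern above works around nginx's lack of nested `if` conditions: the first `if` marks a matching User-Agent, the second un-blocks requests for `/robots.txt`, and the last returns 403. A minimal Python sketch of the same decision logic, using a hypothetical subset of the agent list for illustration:

```python
import re

# Illustrative subset of the user agents in the nginx pattern above.
BLOCKED_UA_PATTERN = re.compile(
    r"(GPTBot|ClaudeBot|CCBot|Bytespider|PerplexityBot)", re.IGNORECASE
)

def is_blocked(user_agent: str, request_uri: str) -> bool:
    """Mirror the nginx logic: block matching agents, except on /robots.txt."""
    block = bool(BLOCKED_UA_PATTERN.search(user_agent))
    if request_uri == "/robots.txt":
        block = False  # crawlers must always be able to read robots.txt
    return block

print(is_blocked("Mozilla/5.0 (compatible; GPTBot/1.0)", "/"))            # True
print(is_blocked("Mozilla/5.0 (compatible; GPTBot/1.0)", "/robots.txt"))  # False
```

Exempting `/robots.txt` matters: a crawler that is 403'd on the robots file itself can never learn it has been disallowed.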

requirements.txt (new file)

@ -0,0 +1,3 @@
beautifulsoup4
lxml
requests
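These three dependencies support the `robots.py` updater that pulls agent data from Dark Visitors: `requests` fetches the page, and `beautifulsoup4` (with `lxml` as its parser backend) extracts agent entries. A rough sketch of the extraction step, using hypothetical markup (the real Dark Visitors HTML differs and has changed over time, hence the robots.py fix in this commit range) and the stdlib `html.parser` backend so `lxml` is not required to run it:

```python
from bs4 import BeautifulSoup  # provided by beautifulsoup4

# Hypothetical HTML standing in for a fetched agent-listing page.
html = """
<section>
  <a class="agent" href="/agents/gptbot">GPTBot</a>
  <a class="agent" href="/agents/claudebot">ClaudeBot</a>
</section>
"""

# The repo pins lxml as a faster parser backend; html.parser works for a sketch.
soup = BeautifulSoup(html, "html.parser")
agents = [a.get_text(strip=True) for a in soup.select("a.agent")]
print(agents)  # ['GPTBot', 'ClaudeBot']
```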

robots.json

@ -1,4 +1,11 @@
{
"AddSearchBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "AddSearchBot is a web crawler that indexes website content for AddSearch's AI-powered site search solution, collecting data to provide fast and accurate search results. More info can be found at https://darkvisitors.com/agents/agents/addsearchbot"
},
"AI2Bot": {
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes",
@ -6,12 +13,19 @@
"frequency": "No information provided.",
"description": "Explores 'certain domains' to find web content."
},
"AI2Bot-DeepResearchEval": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/ai2bot-deepresearcheval"
},
"Ai2Bot-Dolma": {
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes",
"function": "Content is used to train open language models.",
"frequency": "No information provided.",
"description": "Explores 'certain domains' to find web content."
},
"aiHitBot": {
"operator": "[aiHit](https://www.aihitdata.com/about)",
@ -20,6 +34,13 @@
"frequency": "No information provided.",
"description": "Scrapes data for AI systems."
},
"amazon-kendra": {
"operator": "Amazon",
"respect": "Yes",
"function": "Collects data for AI natural language search",
"frequency": "No information provided.",
"description": "Amazon Kendra is a highly accurate intelligent search service that enables your users to search unstructured data using natural language. It returns specific answers to questions, giving users an experience that's close to interacting with a human expert. It is highly scalable and capable of meeting performance demands, tightly integrated with other AWS services such as Amazon S3 and Amazon Lex, and offers enterprise-grade security."
},
"Amazonbot": {
"operator": "Amazon",
"respect": "Yes",
@ -27,6 +48,13 @@
"frequency": "No information provided.",
"description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
},
"AmazonBuyForMe": {
"operator": "[Amazon](https://amazon.com)",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "No information provided.",
"description": "Buy For Me is an AI agent that helps buy products at the direction of customers."
},
"Andibot": {
"operator": "[Andi](https://andisearch.com/)",
"respect": "Unclear at this time",
@ -34,6 +62,13 @@
"frequency": "No information provided.",
"description": "Scrapes website and provides AI summary."
},
"Anomura": {
"operator": "[Direqt](https://direqt.ai)",
"respect": "Yes",
"function": "Collects data for AI search",
"frequency": "No information provided.",
"description": "Anomura is Direqt's search crawler; it discovers and indexes pages on their customers' websites."
},
"anthropic-ai": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
@ -55,6 +90,13 @@
"frequency": "Unclear at this time.",
"description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
},
"atlassian-bot": {
"operator": "[Atlassian](https://www.atlassian.com)",
"respect": "[Yes](https://support.atlassian.com/organization-administration/docs/connect-custom-website-to-rovo/#Editing-your-robots.txt)",
"function": "AI search, assistants and agents",
"frequency": "No information provided.",
"description": "atlassian-bot is a web crawler used to index website content for its AI search, assistants and agents available in its Rovo GenAI product."
},
"Awario": {
"operator": "Awario",
"respect": "Unclear at this time.",
@ -69,12 +111,33 @@
"frequency": "Unclear at this time.",
"description": "Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application."
},
"bigsur.ai": {
"operator": "Big Sur AI",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "bigsur.ai is a web crawler operated by Big Sur AI that fetches website content to enable AI-powered web agents, sales assistants, and content marketing solutions for businesses. More info can be found at https://darkvisitors.com/agents/agents/bigsur-ai"
},
"Bravebot": {
"operator": "https://safe.search.brave.com/help/brave-search-crawler",
"respect": "Yes",
"function": "Collects data for AI search",
"frequency": "Unclear at this time.",
"description": "Brave search has a crawler to discover new pages and index their content."
},
"Brightbot 1.0": {
"operator": "https://brightdata.com/brightbot",
"respect": "Unclear at this time.",
"function": "LLM/AI training.",
"frequency": "At least one per minute.",
"description": "Scrapes data to train LLMs and AI products focused on website customer support, [uses residential IPs and legit-looking user-agents to disguise itself](https://ksol.io/en/blog/posts/brightbot-not-that-bright/)."
},
"BuddyBot": {
"operator": "[BuddyBotLearning](https://www.buddybotlearning.com)",
"respect": "Unclear at this time.",
"function": "AI Learning Companion",
"frequency": "Unclear at this time.",
"description": "BuddyBot is a voice-controlled AI learning companion targeted at childhood STEM education."
},
"Bytespider": {
"operator": "ByteDance",
@ -90,6 +153,27 @@
"frequency": "Monthly at present.",
"description": "Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers)."
},
"Channel3Bot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/channel3bot"
},
"ChatGLM-Spider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/chatglm-spider"
},
"ChatGPT Agent": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "ChatGPT Agent is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/chatgpt-agent"
},
"ChatGPT-User": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
@ -125,6 +209,20 @@
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"Cloudflare-AutoRAG": {
"operator": "[Cloudflare](https://developers.cloudflare.com/autorag)",
"respect": "Yes",
"function": "Collects data for AI search",
"frequency": "Unclear at this time.",
"description": "AutoRAG is an all-in-one AI search solution."
},
"CloudVertexBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "CloudVertexBot is a Google-operated crawler available to site owners to request targeted crawls of their own sites for AI training purposes on the Vertex AI platform. More info can be found at https://darkvisitors.com/agents/agents/cloudvertexbot"
},
"cohere-ai": {
"operator": "[Cohere](https://cohere.com)",
"respect": "Unclear at this time.",
@ -146,6 +244,13 @@
"frequency": "No information provided.",
"description": "Scrapes data for AI training in Japanese language."
},
"Crawl4AI": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/crawl4ai"
},
"Crawlspace": {
"operator": "[Crawlspace](https://crawlspace.dev)",
"respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
@ -160,12 +265,19 @@
"frequency": "Unclear at this time.",
"description": "Datenbank Crawler is an AI data scraper operated by Datenbank. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/datenbank-crawler"
},
"DeepSeekBot": {
"operator": "DeepSeek",
"respect": "No",
"function": "Training language models and improving AI products",
"frequency": "Unclear at this time.",
"description": "DeepSeekBot is a web crawler used by DeepSeek to train its language models and improve its AI products."
},
"Devin": {
"operator": "Devin AI",
"respect": "Yes",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Devin is a collaborative AI teammate built to help ambitious engineering teams achieve more."
},
"Diffbot": {
"operator": "[Diffbot](https://www.diffbot.com/)",
@ -251,19 +363,33 @@
"frequency": "No information.",
"description": "Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search."
},
"Google-Firebase": {
"operator": "Google",
"respect": "Unclear at this time.",
"function": "Used as part of AI apps developed by users of Google's Firebase AI products.",
"frequency": "Unclear at this time.",
"description": "Supports Google's Firebase AI products."
},
"Google-NotebookLM": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Google-NotebookLM is an AI-powered research and note-taking assistant that helps users synthesize information from their own uploaded sources, such as documents, transcripts, or web content. It can generate summaries, answer questions, and highlight key themes from the materials you provide, acting like a personalized research companion built on Google's Gemini model. Google-NotebookLM fetches source URLs when users add them to their notebooks, enabling the AI to access and analyze those pages for context and insights. More info can be found at https://darkvisitors.com/agents/agents/google-notebooklm"
},
"GoogleAgent-Mariner": {
"operator": "Google",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "GoogleAgent-Mariner is an AI agent created by Google that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/googleagent-mariner"
},
"GoogleOther": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
"function": "Scrapes data.",
"frequency": "No information.",
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\""
},
"GoogleOther-Image": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
@ -286,6 +412,20 @@
"frequency": "No information.",
"description": "Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies."
},
"iAskBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskbot"
},
"iaskspider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskspider"
},
"iaskspider/2.0": {
"description": "Used to provide answers to user queries.",
"frequency": "Unclear at this time.",
@ -293,6 +433,13 @@
"operator": "iAsk",
"respect": "No"
},
"IbouBot": {
"operator": "Ibou",
"respect": "Yes",
"function": "Search result generation.",
"frequency": "Unclear at this time.",
"description": "Ibou.io operates a crawler service named IbouBot which fuels and updates their graph representation of the World Wide Web. This database and all the metrics are used to provide a search engine."
},
"ICC-Crawler": {
"operator": "[NICT](https://nict.go.jp)",
"respect": "Yes",
@ -301,12 +448,19 @@
"description": "Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business."
},
"ImagesiftBot": {
"description": "Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Their web intelligence products use this index to enable search and retrieval of similar images.",
"frequency": "No information.",
"function": "ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support their suite of web intelligence products",
"operator": "[ImageSift](https://imagesift.com)",
"respect": "[Yes](https://imagesift.com/about)"
},
"imageSpider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/imagespider"
},
"img2dataset": {
"description": "Downloads large sets of images into datasets for LLM training or other purposes.",
"frequency": "At the discretion of img2dataset users.",
@ -328,6 +482,69 @@
"frequency": "Unclear at this time.",
"description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
},
"KlaviyoAIBot": {
"operator": "[Klaviyo](https://www.klaviyo.com)",
"respect": "[Yes](https://help.klaviyo.com/hc/en-us/articles/40496146232219)",
"function": "AI Search Crawlers",
"frequency": "Indexes based on 'change signals' and user configuration.",
"description": "Indexes content to tailor AI experiences, generate content, answers and recommendations."
},
"KunatoCrawler": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/kunatocrawler"
},
"laion-huggingface-processor": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/laion-huggingface-processor"
},
"LAIONDownloader": {
"operator": "[Large-scale Artificial Intelligence Open Network](https://laion.ai/)",
"respect": "[No](https://laion.ai/faq/)",
"function": "AI tools and models for machine learning research.",
"frequency": "Unclear at this time.",
"description": "LAIONDownloader is a bot by LAION, a non-profit organization that provides datasets, tools and models to liberate machine learning research."
},
"LCC": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/lcc"
},
"LinerBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "LinerBot is the web crawler used by Liner AI assistant to gather information from academic sources and websites to provide accurate answers with line-by-line source citations for research and scholarly work. More info can be found at https://darkvisitors.com/agents/agents/linerbot"
},
"Linguee Bot": {
"operator": "[Linguee](https://www.linguee.com)",
"respect": "No",
"function": "AI powered translation service",
"frequency": "Unclear at this time.",
"description": "Linguee Bot is a web crawler used by Linguee to gather training data for its AI powered translation service."
},
"LinkupBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/linkupbot"
},
"Manus-User": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/manus-user"
},
"meta-externalagent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes",
@ -356,6 +573,13 @@
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts when they need to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"meta-webindexer": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/)",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unhinged, more than 1 per second.",
"description": "As per their documentation, \"The Meta-WebIndexer crawler navigates the web to improve Meta AI search result quality for users. In doing so, Meta analyzes online content to enhance the relevance and accuracy of Meta AI. Allowing Meta-WebIndexer in your robots.txt file helps us cite and link to your content in Meta AI's responses.\""
},
"MistralAI-User": {
"operator": "Mistral",
"respect": "Unclear at this time.",
@ -384,6 +608,13 @@
"frequency": "Unclear at this time.",
"description": "netEstate Imprint Crawler is an AI data scraper operated by netEstate. If you think this is incorrect or can provide additional detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/netestate-imprint-crawler"
},
"NotebookLM": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "NotebookLM is an AI-powered research and note-taking assistant that helps users synthesize information from their own uploaded sources, such as documents, transcripts, or web content. It can generate summaries, answer questions, and highlight key themes from the materials you provide, acting like a personalized research companion built on Google's Gemini model. NotebookLM fetches source URLs when users add them to their notebooks, enabling the AI to access and analyze those pages for context and insights. More info can be found at https://darkvisitors.com/agents/agents/google-notebooklm"
},
"NovaAct": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
@ -412,6 +643,13 @@
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"OpenAI": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Unclear at this time.",
"frequency": "Unclear at this time.",
"description": "The purpose of this bot is unclear at this time but it is a member of OpenAI's suite of crawlers."
},
"Operator": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
@ -462,11 +700,18 @@
"description": "Operated by Huawei to provide search and AI assistant services."
},
"PhindBot": {
"operator": "[phind](https://www.phind.com/)",
"respect": "Unclear at this time.",
"function": "AI-enhanced search engine.",
"frequency": "No explicit frequency provided.",
"description": "Company offers an AI agent that uses AI and generates extra web queries on the fly"
},
"Poggio-Citations": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/poggio-citations"
},
"Poseidon Research Crawler": {
"operator": "[Poseidon Research](https://www.poseidonresearch.com)",
@ -497,11 +742,11 @@
"respect": "Unclear at this time."
},
"SBIntuitionsBot": {
"operator": "[SB Intuitions](https://www.sbintuitions.co.jp/en/)",
"respect": "[Yes](https://www.sbintuitions.co.jp/en/bot/)",
"function": "Uses data gathered in AI development and information analysis.",
"frequency": "No information.",
"description": "AI development and information analysis"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
@ -524,6 +769,13 @@
"frequency": "Roughly once every 10 seconds.",
"description": "Data collected is used for the SEO Writing Assistant tool to check if URL is accessible."
},
"ShapBot": {
"operator": "[Parallel](https://parallel.ai)",
"respect": "[Yes](https://docs.parallel.ai/features/crawler)",
"function": "Collects data for Parallel's web APIs.",
"frequency": "Unclear at this time.",
"description": "ShapBot helps discover and index websites for Parallel's web APIs."
},
"Sidetrade indexer bot": {
"description": "AI product training.",
"frequency": "No information.",
@ -531,12 +783,26 @@
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"Spider": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/spider"
},
"TavilyBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/tavilybot"
},
"TerraCotta": {
"operator": "[Ceramic AI](https://ceramic.ai/)",
"respect": "[Yes](https://github.com/CeramicTeam/CeramicTerracotta)",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Downloads data to train LLMs."
},
"Thinkbot": {
"operator": "[Thinkbot](https://www.thinkbot.agency)",
@ -559,12 +825,19 @@
"frequency": "No information.",
"description": "Makes data available for training AI models."
},
"TwinAgent": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/twinagent"
},
"VelenPublicWebCrawler": {
"operator": "[Velen Crawler](https://velen.io)",
"respect": "[Yes](https://velen.io)",
"function": "Scrapes data for business data sets and machine learning models.",
"frequency": "No information.",
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\""
},
"WARDBot": {
"operator": "WEBSPARK",
@ -580,6 +853,13 @@
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"webzio-extended": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"wpbot": {
"operator": "[QuantumCloud](https://www.quantumcloud.com)",
"respect": "Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9)",
@ -587,6 +867,20 @@
"frequency": "Unclear at this time.",
"description": "wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of custom models, data collection and customer support."
},
"WRTNBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/wrtnbot"
},
"YaK": {
"operator": "[Meltwater](https://www.meltwater.com/en/suite/consumer-intelligence)",
"respect": "Unclear at this time.",
"function": "According to the [Meltwater Consumer Intelligence page](https://www.meltwater.com/en/suite/consumer-intelligence) 'By applying AI, data science, and market research expertise to a live feed of global data sources, we transform unstructured data into actionable insights allowing better decision-making'.",
"frequency": "Unclear at this time.",
"description": "Retrieves data used for Meltwater's AI-enabled consumer intelligence suite."
},
"YandexAdditional": {
"operator": "[Yandex](https://yandex.ru)",
"respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
@ -607,5 +901,12 @@
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
},
"ZanistaBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/zanistabot"
}
}
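`robots.json` is the source from which the other artifacts are generated, so each entry is expected to carry the same five fields (this commit range includes a "Fixed invalid JSON" fix). A small validation sketch; the field set is taken from the entries above, but the helper and sample data are illustrative, not the repo's actual test suite:

```python
import json

REQUIRED_FIELDS = {"operator", "respect", "function", "frequency", "description"}

def validate_robots_json(text: str) -> list:
    """Return the names of entries missing any expected field.

    json.loads also raises ValueError on syntactically invalid JSON,
    catching problems like a missing comma between keys.
    """
    data = json.loads(text)
    return [name for name, entry in data.items() if REQUIRED_FIELDS - set(entry)]

sample = json.dumps({
    "GPTBot": {
        "operator": "[OpenAI](https://openai.com)",
        "respect": "Yes",
        "function": "Scrapes data to train OpenAI's products.",
        "frequency": "No information.",
        "description": "Data is used to train current and future models.",
    },
    "BrokenBot": {"operator": "Unclear at this time."},
})
print(validate_robots_json(sample))  # ['BrokenBot']
```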

robots.txt

@ -1,26 +1,42 @@
User-agent: AddSearchBot
User-agent: AI2Bot
User-agent: AI2Bot-DeepResearchEval
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: amazon-kendra
User-agent: Amazonbot
User-agent: AmazonBuyForMe
User-agent: Andibot
User-agent: Anomura
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: atlassian-bot
User-agent: Awario
User-agent: bedrockbot
User-agent: bigsur.ai
User-agent: Bravebot
User-agent: Brightbot 1.0
User-agent: BuddyBot
User-agent: Bytespider
User-agent: CCBot
User-agent: Channel3Bot
User-agent: ChatGLM-Spider
User-agent: ChatGPT Agent
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: Claude-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: Cloudflare-AutoRAG
User-agent: CloudVertexBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawl4AI
User-agent: Crawlspace
User-agent: Datenbank Crawler
User-agent: DeepSeekBot
User-agent: Devin
User-agent: Diffbot
User-agent: DuckAssistBot
@ -34,29 +50,47 @@ User-agent: FriendlyCrawler
User-agent: Gemini-Deep-Research
User-agent: Google-CloudVertexBot
User-agent: Google-Extended
User-agent: Google-Firebase
User-agent: Google-NotebookLM
User-agent: GoogleAgent-Mariner
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iAskBot
User-agent: iaskspider
User-agent: iaskspider/2.0
User-agent: IbouBot
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: imageSpider
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: KlaviyoAIBot
User-agent: KunatoCrawler
User-agent: laion-huggingface-processor
User-agent: LAIONDownloader
User-agent: LCC
User-agent: LinerBot
User-agent: Linguee Bot
User-agent: LinkupBot
User-agent: Manus-User
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
User-agent: Meta-ExternalFetcher
User-agent: meta-webindexer
User-agent: MistralAI-User
User-agent: MistralAI-User/1.0
User-agent: MyCentralAIScraperBot
User-agent: netEstate Imprint Crawler
User-agent: NotebookLM
User-agent: NovaAct
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: OpenAI
User-agent: Operator
User-agent: PanguBot
User-agent: Panscient
@ -65,6 +99,7 @@ User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poggio-Citations
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
@ -73,16 +108,24 @@ User-agent: SBIntuitionsBot
User-agent: Scrapy
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SWA
User-agent: ShapBot
User-agent: Sidetrade indexer bot
User-agent: SummalyBot
User-agent: Spider
User-agent: TavilyBot
User-agent: TerraCotta
User-agent: Thinkbot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: TwinAgent
User-agent: VelenPublicWebCrawler
User-agent: WARDBot
User-agent: Webzio-Extended
User-agent: webzio-extended
User-agent: wpbot
User-agent: WRTNBot
User-agent: YaK
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
User-agent: ZanistaBot
Disallow: /
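The block above lists every flagged agent under a single group closed by a blanket `Disallow: /`. As a quick sanity check of how a compliant crawler interprets that layout, here is a minimal sketch using Python's standard `urllib.robotparser`; the agent names are taken from the list above, and the URL and the unlisted agent name are illustrative:

```python
from urllib import robotparser

# Parse a small excerpt of the generated robots.txt: several
# User-agent lines followed by one Disallow form a single group.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "User-agent: ClaudeBot",
    "Disallow: /",
])

# Listed agents fall under the group and are blocked everywhere.
print(rp.can_fetch("GPTBot", "https://example.com/page"))        # False
# An unlisted agent matches no group (and there is no "*" group
# in this excerpt), so it is allowed by default.
print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))  # True
```

Note that `robotparser` matches agent tokens case-insensitively, which is why the list can safely carry case variants such as `meta-externalagent` and `Meta-ExternalAgent` for crawlers that do exact-string matching.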


@ -1,29 +1,45 @@
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|------|----------|-----------------------|----------|------------------|-------------|
| AddSearchBot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | AddSearchBot is a web crawler that indexes website content for AddSearch's AI-powered site search solution, collecting data to provide fast and accurate search results. More info can be found at https://darkvisitors.com/agents/agents/addsearchbot |
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| AI2Bot\-DeepResearchEval | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/ai2bot-deepresearcheval |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| aiHitBot | [aiHit](https://www.aihitdata.com/about) | Yes | A massive automated artificial intelligence/machine learning system. | No information provided. | Scrapes data for AI systems. |
| amazon\-kendra | Amazon | Yes | Collects data for AI natural language search | No information provided. | Amazon Kendra is a highly accurate intelligent search service that enables your users to search unstructured data using natural language. It returns specific answers to questions, giving users an experience that's close to interacting with a human expert. It is highly scalable and capable of meeting performance demands, tightly integrated with other AWS services such as Amazon S3 and Amazon Lex, and offers enterprise-grade security. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| AmazonBuyForMe | [Amazon](https://amazon.com) | Unclear at this time. | AI Agents | No information provided. | Buy For Me is an AI agent that helps buy products at the direction of customers. |
| Andibot | [Andi](https://andisearch.com/) | Unclear at this time | Search engine using generative AI, AI Search Assistant | No information provided. | Scrapes website and provides AI summary. |
| Anomura | [Direqt](https://direqt.ai) | Yes | Collects data for AI search | No information provided. | Anomura is Direqt's search crawler; it discovers and indexes pages on their customers' websites. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| atlassian\-bot | [Atlassian](https://www.atlassian.com) | [Yes](https://support.atlassian.com/organization-administration/docs/connect-custom-website-to-rovo/#Editing-your-robots.txt) | AI search, assistants and agents | No information provided. | atlassian-bot is a web crawler used to index website content for its AI search, assistants and agents available in its Rovo GenAI product. |
| Awario | Awario | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Awario is an AI data scraper operated by Awario. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/awario |
| bedrockbot | [Amazon](https://amazon.com) | [Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector) | Data scraping for custom AI applications. | Unclear at this time. | Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application. |
| Brightbot 1\.0 | Browsing.ai | Unclear at this time. | LLM/AI training. | Unclear at this time. | Scrapes data to train LLMs and AI products focused on website customer support. |
| bigsur\.ai | Big Sur AI | Unclear at this time. | AI Assistants | Unclear at this time. | bigsur.ai is a web crawler operated by Big Sur AI that fetches website content to enable AI-powered web agents, sales assistants, and content marketing solutions for businesses. More info can be found at https://darkvisitors.com/agents/agents/bigsur-ai |
| Bravebot | [Brave](https://safe.search.brave.com/help/brave-search-crawler) | Yes | Collects data for AI search | Unclear at this time. | Brave Search has a crawler to discover new pages and index their content. |
| Brightbot 1\.0 | https://brightdata.com/brightbot | Unclear at this time. | LLM/AI training. | At least one per minute. | Scrapes data to train LLMs and AI products focused on website customer support, [uses residential IPs and legit-looking user-agents to disguise itself](https://ksol.io/en/blog/posts/brightbot-not-that-bright/). |
| BuddyBot | [BuddyBotLearning](https://www.buddybotlearning.com) | Unclear at this time. | AI Learning Companion | Unclear at this time. | BuddyBot is a voice-controlled AI learning companion targeted at childhood STEM education. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| Channel3Bot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/channel3bot |
| ChatGLM\-Spider | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/chatglm-spider |
| ChatGPT Agent | [OpenAI](https://openai.com) | Yes | AI Agents | Unclear at this time. | ChatGPT Agent is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/chatgpt-agent |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-SearchBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. | No information provided. | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. |
| Claude\-User | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. | No information provided. | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. |
| Claude\-Web | Anthropic | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Cloudflare\-AutoRAG | [Cloudflare](https://developers.cloudflare.com/autorag) | Yes | Collects data for AI search | Unclear at this time. | AutoRAG is an all-in-one AI search solution. |
| CloudVertexBot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | CloudVertexBot is a Google-operated crawler available to site owners to request targeted crawls of their own sites for AI training purposes on the Vertex AI platform. More info can be found at https://darkvisitors.com/agents/agents/cloudvertexbot |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-training\-data\-crawler | Cohere | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
| Cotoyogi | [ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/) | Yes | AI LLM Scraper. | No information provided. | Scrapes data for AI training in Japanese language. |
| Crawl4AI | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/crawl4ai |
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
| Datenbank Crawler | Datenbank | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Datenbank Crawler is an AI data scraper operated by Datenbank. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/datenbank-crawler |
| Devin | Devin AI | Unclear at this time. | AI Assistants | Unclear at this time. | Devin is an AI assistant operated by Devin AI. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/devin |
| DeepSeekBot | DeepSeek | No | Training language models and improving AI products | Unclear at this time. | DeepSeekBot is a web crawler used by DeepSeek to train its language models and improve its AI products. |
| Devin | Devin AI | Yes | AI Assistants | Unclear at this time. | Devin is a collaborative AI teammate built to help ambitious engineering teams achieve more. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
| Echobot Bot | Echobox | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Echobot Bot is an AI data scraper operated by Echobox. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/echobot-bot |
@ -36,29 +52,47 @@
| Gemini\-Deep\-Research | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Gemini-Deep-Research is the agent responsible for collecting and scanning resources used in Google Gemini's Deep Research feature, which acts as a personal research assistant. More info can be found at https://darkvisitors.com/agents/agents/gemini-deep-research |
| Google\-CloudVertexBot | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Build and manage AI models for businesses employing Vertex AI | No information. | Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleAgent\-Mariner | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | GoogleAgent-Mariner is an AI agent created by Google that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/googleagent-mariner |
| Google\-Firebase | Google | Unclear at this time. | Used as part of AI apps developed by users of Google's Firebase AI products. | Unclear at this time. | Supports Google's Firebase AI products. |
| Google\-NotebookLM | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Google-NotebookLM is an AI-powered research and note-taking assistant that helps users synthesize information from their own uploaded sources, such as documents, transcripts, or web content. It can generate summaries, answer questions, and highlight key themes from the materials you provide, acting like a personalized research companion built on Google's Gemini model. Google-NotebookLM fetches source URLs when users add them to their notebooks, enabling the AI to access and analyze those pages for context and insights. More info can be found at https://darkvisitors.com/agents/agents/google-notebooklm |
| GoogleAgent\-Mariner | Google | Unclear at this time. | AI Agents | Unclear at this time. | GoogleAgent-Mariner is an AI agent created by Google that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/googleagent-mariner |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. |
| iAskBot | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskbot |
| iaskspider | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/iaskspider |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| IbouBot | Ibou | Yes | Search result generation. | Unclear at this time. | Ibou.io operates a crawler service named IbouBot which fuels and updates their graph representation of the World Wide Web. This database and all the metrics are used to provide a search engine. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support their suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Their web intelligence products use this index to enable search and retrieval of similar images. |
| imageSpider | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/imagespider |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| KlaviyoAIBot | [Klaviyo](https://www.klaviyo.com) | [Yes](https://help.klaviyo.com/hc/en-us/articles/40496146232219) | AI Search Crawlers | Indexes based on 'change signals' and user configuration. | Indexes content to tailor AI experiences, generate content, answers and recommendations. |
| KunatoCrawler | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/kunatocrawler |
| laion\-huggingface\-processor | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/laion-huggingface-processor |
| LAIONDownloader | [Large-scale Artificial Intelligence Open Network](https://laion.ai/) | [No](https://laion.ai/faq/) | AI tools and models for machine learning research. | Unclear at this time. | LAIONDownloader is a bot by LAION, a non-profit organization that provides datasets, tools and models to liberate machine learning research. |
| LCC | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/lcc |
| LinerBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | LinerBot is the web crawler used by Liner AI assistant to gather information from academic sources and websites to provide accurate answers with line-by-line source citations for research and scholarly work. More info can be found at https://darkvisitors.com/agents/agents/linerbot |
| Linguee Bot | [Linguee](https://www.linguee.com) | No | AI powered translation service | Unclear at this time. | Linguee Bot is a web crawler used by Linguee to gather training data for its AI powered translation service. |
| LinkupBot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/linkupbot |
| Manus\-User | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/manus-user |
| meta\-externalagent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalAgent | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent |
| meta\-externalfetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| meta\-webindexer | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/) | Unclear at this time. | AI Assistants | Unhinged, more than 1 per second. | As per their documentation, "The Meta-WebIndexer crawler navigates the web to improve Meta AI search result quality for users. In doing so, Meta analyzes online content to enhance the relevance and accuracy of Meta AI. Allowing Meta-WebIndexer in your robots.txt file helps us cite and link to your content in Meta AI's responses." |
| MistralAI\-User | Mistral | Unclear at this time. | AI Assistants | Unclear at this time. | MistralAI-User is an AI assistant operated by Mistral. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/mistralai-user |
| MistralAI\-User/1\.0 | Mistral AI | Yes | Takes action based on user prompts. | Only when prompted by a user. | MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. |
| MyCentralAIScraperBot | Unclear at this time. | Unclear at this time. | AI data scraper | Unclear at this time. | Operator and data use are unclear at this time. |
| netEstate Imprint Crawler | netEstate | Unclear at this time. | AI Data Scrapers | Unclear at this time. | netEstate Imprint Crawler is an AI data scraper operated by netEstate. If you think this is incorrect or can provide additional detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/netestate-imprint-crawler |
| NotebookLM | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | NotebookLM is an AI-powered research and note-taking assistant that helps users synthesize information from their own uploaded sources, such as documents, transcripts, or web content. It can generate summaries, answer questions, and highlight key themes from the materials you provide, acting like a personalized research companion built on Google's Gemini model. NotebookLM fetches source URLs when users add them to their notebooks, enabling the AI to access and analyze those pages for context and insights. More info can be found at https://darkvisitors.com/agents/agents/google-notebooklm |
| NovaAct | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io. |
| OpenAI | [OpenAI](https://openai.com) | Yes | Unclear at this time. | Unclear at this time. | The purpose of this bot is unclear at this time, but it is a member of OpenAI's suite of crawlers. |
| Operator | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator |
| PanguBot | Huawei | Unclear at this time. | AI Data Scrapers | Unclear at this time. | PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot |
| Panscient | [Panscient](https://panscient.com) | [Yes](https://panscient.com/faq.htm) | Data collection and analysis using machine learning and AI. | The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address. | Compiles data on businesses and business professionals that is structured using AI and machine learning. |
@ -67,6 +101,7 @@
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| PhindBot | [phind](https://www.phind.com/) | Unclear at this time. | AI-enhanced search engine. | No explicit frequency provided. | Company offers an AI agent that generates extra web queries on the fly. |
| Poggio\-Citations | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Description unavailable from darkvisitors.com More info can be found at https://darkvisitors.com/agents/agents/poggio-citations |
| Poseidon Research Crawler | [Poseidon Research](https://www.poseidonresearch.com) | Unclear at this time. | AI research crawler | No explicit frequency provided. | Lab focused on scaling the interpretability research necessary to make better AI systems possible. |
| QualifiedBot | [Qualified](https://www.qualified.com) | Unclear at this time. | Company offers AI agents and other related products; usage can be assumed to support said products. | No explicit frequency provided. | Operated by Qualified as part of their suite of AI product offerings. |
| QuillBot | [Quillbot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
@ -75,15 +110,23 @@
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| SemrushBot\-OCOB | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | Data collected is used for the ContentShake AI tool reports. |
| SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for SEO Writing Assistant. | Roughly once every 10 seconds. | Data collected is used for the SEO Writing Assistant tool to check if URL is accessible. |
| ShapBot | [Parallel](https://parallel.ai) | [Yes](https://docs.parallel.ai/features/crawler) | Collects data for Parallel's web APIs. | Unclear at this time. | ShapBot helps discover and index websites for Parallel's web APIs. |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| Spider | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com. More info can be found at https://darkvisitors.com/agents/agents/spider |
| SummalyBot | Unclear at this time. | Unclear at this time. | Extracts data to provide summaries for user; additional usages unclear. | No information. | AI input summarization. |
| TavilyBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Description unavailable from darkvisitors.com. More info can be found at https://darkvisitors.com/agents/agents/tavilybot |
| TerraCotta | [Ceramic AI](https://ceramic.ai/) | [Yes](https://github.com/CeramicTeam/CeramicTerracotta) | AI Agents | Unclear at this time. | Downloads data to train LLMs. |
| Thinkbot | [Thinkbot](https://www.thinkbot.agency) | No | Insights on AI integration and automation. | Unclear at this time. | Collects data for analysis on AI usage and automation. |
| TikTokSpider | ByteDance | Unclear at this time. | LLM training. | Unclear at this time. | Downloads data to train LLMs, as per Bytespider. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| TwinAgent | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com. More info can be found at https://darkvisitors.com/agents/agents/twinagent |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| WARDBot | WEBSPARK | Unclear at this time. | AI Data Scrapers | Unclear at this time. | WARDBot is a data scraper operated by WEBSPARK. It is not currently confirmed to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. More info can be found at https://darkvisitors.com/agents/agents/wardbot |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| webzio\-extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Description unavailable from darkvisitors.com. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| wpbot | [QuantumCloud](https://www.quantumcloud.com) | Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9) | Live chat support and lead generation. | Unclear at this time. | wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of custom models, data collection, and customer support. |
| WRTNBot | Unclear at this time. | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Description unavailable from darkvisitors.com. More info can be found at https://darkvisitors.com/agents/agents/wrtnbot |
| YaK | [Meltwater](https://www.meltwater.com/en/suite/consumer-intelligence) | Unclear at this time. | According to the [Meltwater Consumer Intelligence page](https://www.meltwater.com/en/suite/consumer-intelligence), 'By applying AI, data science, and market research expertise to a live feed of global data sources, we transform unstructured data into actionable insights allowing better decision-making'. | Unclear at this time. | Retrieves data used for Meltwater's AI-enabled consumer intelligence suite. |
| YandexAdditional | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YandexAdditionalBot | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
| ZanistaBot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Description unavailable from darkvisitors.com. More info can be found at https://darkvisitors.com/agents/agents/zanistabot |
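The agent names in this table are meant to be dropped into a site's `robots.txt` as `User-agent` rules. A minimal sketch, using a handful of agents from the table above (the selection here is illustrative, not a recommendation), might look like:

```txt
# Block a sample of the AI crawlers listed above from the entire site
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
User-agent: Timpibot
User-agent: YouBot
Disallow: /
```

Note that `robots.txt` compliance is voluntary: only agents with a link or "Yes" in the respects-robots.txt column are documented as honoring these rules, so blocking the others may additionally require server-side measures such as user-agent filtering.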