Completely Block Dragonbot

If you want to be sure that Dragonbot can never crawl your site under any circumstances, we recommend using the robots.txt file. Other methods of blocking Dragonbot are not recommended.

Robots.txt (recommended)

Dragonbot respects robots.txt directives. You can prevent Dragonbot from crawling specific areas of your site by writing directives for the user agent "Dragonbot".

To prevent Dragonbot from crawling any pages on your site, add the following directives to the robots.txt file on your root domain and on any other subdomains you wish to block from being crawled.

User-agent: Dragonbot
Disallow: /
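
If your robots.txt already contains rules for other crawlers, the Dragonbot group can sit alongside them, since directives in one user-agent group do not affect the others. A minimal sketch (the /private/ path and the wildcard group are purely illustrative):

# Block Dragonbot from the entire site
User-agent: Dragonbot
Disallow: /

# All other crawlers are only kept out of /private/
User-agent: *
Disallow: /private/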

Learn more about blocking Dragonbot by robots.txt

User agent (not recommended)

One method for blocking Dragonbot that we do not recommend is identifying the crawler by its user agent "Dragonbot" and denying its requests. Because a denied request does not tell Dragonbot to stop crawling, this will not actually stop Dragonbot from crawling your site. It will continue to make requests that your server will need to continue to deny.

Therefore we do not recommend blocking Dragonbot by User Agent.

IP Address (not recommended)

Because Dragonbot uses dynamic IP addresses, it cannot be reliably identified by IP. As with blocking by user agent, a denied request does not tell Dragonbot to stop crawling, so this will not actually stop Dragonbot from crawling your site. It will continue to make requests that your server will need to continue to deny.

Therefore we do not recommend blocking Dragonbot by IP address.

Stop Dragonbot from crawling specific areas of your site

If you would like to allow crawling but limit which URLs are crawled, there are several options for accomplishing this. Learn more about restricting crawled URLs
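
One of those options, staying with robots.txt, is to disallow only the paths you want excluded while leaving the rest of the site open to Dragonbot. A sketch using purely illustrative paths (/internal/ and /staging/):

User-agent: Dragonbot
Disallow: /internal/
Disallow: /staging/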

Disable crawling

If you have created a Dragon Metrics campaign for your site, the easiest way to stop Dragonbot from crawling it is to disable crawling in Crawler settings, listed under Campaign Settings in the bottom left of the navigation.

Since this method does not require any server-side updates, it may be the easiest way to stop site crawls. However, it does not guarantee that your site will not be crawled. Another campaign can be created (either by your organization or a competitor) with crawling enabled. Therefore, if you need to ensure your site is not crawled, please use the robots.txt file.
