Dragonbot respects the robots.txt protocol. You can prevent Dragonbot from crawling specific areas of your site by writing directives for the user agent "Dragonbot" (user-agent matching in robots.txt is case-insensitive).

Dragonbot uses the same open-source robots.txt parser that Google does, so you can be confident that any rules Google understands will be interpreted the same way by Dragonbot.

For example, to restrict Dragon Metrics from crawling any page on your site, add the following rules to the robots.txt file at the root of the domain (or of any other subdomain Dragon Metrics is set to crawl):

User-agent: Dragonbot
Disallow: /

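If you want to verify a rule like this before deploying it, Python's standard-library urllib.robotparser can simulate how a crawler reads the file. This is a generic sketch, not Dragonbot itself, and example.com is a placeholder domain:

```python
from urllib.robotparser import RobotFileParser

# Feed the rules above to Python's built-in parser and check
# which URLs the "Dragonbot" user agent may fetch.
rules = """
User-agent: Dragonbot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Dragonbot is blocked from every page on the site...
print(parser.can_fetch("Dragonbot", "https://example.com/any/page"))
# ...but crawlers not named in this group are unaffected by it.
print(parser.can_fetch("Otherbot", "https://example.com/any/page"))
```

Note that urllib.robotparser only supports basic prefix rules, not Google-style wildcards, so use it only for simple checks like this one.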
To restrict Dragon Metrics from crawling a directory on your site called "private", use the following code:

User-agent: Dragonbot
Disallow: /private
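Keep in mind that Disallow rules are prefix matches: "Disallow: /private" blocks /private, everything under /private/, and also any path that merely begins with "/private". A quick sketch with Python's standard-library urllib.robotparser (example.com is a placeholder domain):

```python
from urllib.robotparser import RobotFileParser

# The rule above is a prefix match on the URL path.
rules = """
User-agent: Dragonbot
Disallow: /private
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("Dragonbot", "https://example.com/private/report"))  # blocked
print(parser.can_fetch("Dragonbot", "https://example.com/private-files"))   # also blocked
print(parser.can_fetch("Dragonbot", "https://example.com/public"))          # allowed
```

To block only the directory itself and its contents, add a trailing slash: Disallow: /private/.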

Dragonbot also supports wildcard characters. For example, to restrict Dragonbot from crawling any page whose URL path contains the word "user", use the following code:

User-agent: Dragonbot
Disallow: /*user

To restrict Dragonbot from crawling any page whose URL contains a query string (URL parameters), use the following code:

User-agent: Dragonbot
Disallow: /*?
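Python's built-in urllib.robotparser does not understand these wildcards, so as a rough illustration of how Google-style matching treats them, here is a minimal, simplified matcher. It is a sketch of the matching semantics only; Google's actual open-source parser handles many more edge cases:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Simplified Google-style robots.txt pattern match:
    '*' matches any run of characters, and a trailing '$'
    anchors the end of the URL. Everything else is literal."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None

# "Disallow: /*user" matches any path containing "user"...
print(robots_pattern_matches("/*user", "/account/user/settings"))  # True
print(robots_pattern_matches("/*user", "/about"))                  # False

# ...and "Disallow: /*?" matches any URL with a query string.
print(robots_pattern_matches("/*?", "/search?q=dragon"))  # True
print(robots_pattern_matches("/*?", "/search"))           # False
```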