There could be many reasons why Dragonbot does not find all of the pages on your site:

No Links to the Page

Dragonbot works the same way other web crawlers do: it crawls all of the links on your home page and visits them, then it crawls all of the links on each of those pages, and so on. If there are no links pointing to the pages you want crawled, Dragonbot will not be able to find them.
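
As a rough illustration, here is a minimal sketch of this discovery process in Python. It is not Dragonbot's actual implementation; fetch_page and extract_links are hypothetical helpers standing in for the download and parsing steps.

    from collections import deque

    def crawl(home_page_url, fetch_page, extract_links):
        """Breadth-first link discovery, the way most web crawlers work."""
        visited = set()
        queue = deque([home_page_url])   # start from the home page
        while queue:
            url = queue.popleft()        # visit pages in the order discovered
            if url in visited:
                continue
            visited.add(url)
            for link in extract_links(fetch_page(url)):
                queue.append(link)       # a page no one links to is never
        return visited                   # queued, so it is never found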

Links in JavaScript, iFrames, Java, or Flash

Dragonbot only crawls content and links in plain text. Therefore, if links are embedded in JavaScript, iFrames, Java, Flash, or any format other than plain text, Dragonbot will not be able to discover or follow them.
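
To illustrate why, here is a sketch of plain-text link extraction using Python's standard html.parser. A link that only exists after JavaScript runs never appears in the raw HTML source, so a parser like this never sees it.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects href values from <a> tags found in the raw HTML source."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    html = ('<a href="/found.html">Found</a>'
            '<script>document.write(\'<a href="/hidden.html">Hidden</a>\')</script>')
    parser = LinkExtractor()
    parser.feed(html)
    print(parser.links)   # ['/found.html'] - the JavaScript link is invisible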

Page Size Too Large

Dragonbot only downloads the first 500KB of each page. If you have a very large page with links at the bottom, these links may not be discovered or crawled by Dragonbot.
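
As a sketch of what a size cap like this means in practice (the 500KB figure is from this article; the code itself is illustrative, using the third-party requests library):

    import requests

    MAX_BYTES = 500 * 1024  # the 500KB per-page cap

    def fetch_capped(url):
        """Download at most the first MAX_BYTES of a page, then stop."""
        body = b""
        with requests.get(url, stream=True, timeout=10) as response:
            for chunk in response.iter_content(chunk_size=8192):
                body += chunk
                if len(body) >= MAX_BYTES:
                    break               # everything after this point, including
        return body[:MAX_BYTES]         # links near the bottom, is never seen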

Too Many Links on Page

Dragonbot follows a maximum of 500 links per page. Links above this limit will not be followed or crawled.
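
In other words, only the first 500 links in the page source are kept, roughly like this hypothetical sketch (extract_links stands in for the parsing step shown above):

    MAX_LINKS_PER_PAGE = 500  # the per-page link limit

    def links_to_follow(html, extract_links):
        """Keep only the first 500 links found in the page source."""
        return extract_links(html)[:MAX_LINKS_PER_PAGE]

Links you want crawled should therefore appear early in the page source.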

Outside of Website Scope

The website address you add when creating the campaign determines the scope of the crawl. Dragonbot will not crawl outside this scope. The table below shows some examples:

Website: http://example.com or example.com
Scope: All subdomains and all subdirectories

Website: http://www.example.com or www.example.com
Scope: All subdirectories on the www subdomain and its subdomains (e.g. sub.www.example.com)

Website: http://www.example.com/products or www.example.com/products
Scope: The products subdirectory on the www subdomain and its subdomains

Website: https://example.com
Scope: All subdomains and all subdirectories under the https://example.com domain (http://example.com will not be crawled)
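
As a rough model of these rules, here is a sketch of a scope check in Python. It approximates the table above and is not Dragon Metrics' actual code; candidate URLs are assumed to be absolute.

    from urllib.parse import urlparse

    def in_scope(campaign_url, candidate_url):
        """Approximate the scope rules shown in the table above."""
        if "://" not in campaign_url:
            campaign_url = "http://" + campaign_url
        scope, page = urlparse(campaign_url), urlparse(candidate_url)
        # The host must equal the campaign host or be a subdomain of it.
        if page.hostname != scope.hostname and \
           not (page.hostname or "").endswith("." + scope.hostname):
            return False
        # The path must fall under the campaign subdirectory, if one was given.
        if scope.path and not page.path.startswith(scope.path):
            return False
        # An explicit https:// campaign URL excludes http:// pages.
        if scope.scheme == "https" and page.scheme != "https":
            return False
        return True

    print(in_scope("www.example.com/products",
                   "http://sub.www.example.com/products/widget"))  # True
    print(in_scope("https://example.com", "http://example.com/"))  # False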

Redirecting Based on IP Address

Some multinational websites detect a user's IP address and redirect them to a different domain based on their location. (For example, users from the UK may be redirected to www.example.co.uk when they try to visit www.example.com, while users from China may be redirected to www.example.cn.)

When this happens, depending on how the redirects are implemented, in many cases Dragonbot will not be able to crawl the site.
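
You can check whether your server issues such redirects by requesting a page without following them and inspecting the Location header, for example with Python's requests library:

    import requests

    # Ask the server for the page, but do not follow any redirect it sends.
    response = requests.get("http://www.example.com/", allow_redirects=False)
    print(response.status_code)              # e.g. 301 or 302 if redirecting
    print(response.headers.get("Location"))  # e.g. a country-specific domain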

Outside Crawl Limits

Dragonbot is limited by the number of crawl credits assigned to each campaign. For example, if the crawl limit is set to 10,000, Dragonbot will only crawl 10,000 URLs on your site. URLs blocked by the robots.txt file do not count towards this limit.

Some sites may be larger than the crawl limit. For these sites, Dragonbot crawls URLs prioritized by depth (link distance from the home page), so pages that are several links away from the home page may not be crawled.
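
Building on the breadth-first sketch earlier, here is a hedged illustration of how a crawl budget and depth prioritization might interact. The is_blocked_by_robots function is a hypothetical helper, and the details of Dragonbot's scheduling are not public.

    from collections import deque

    def crawl_with_budget(home_page_url, crawl_limit,
                          fetch_page, extract_links, is_blocked_by_robots):
        """Breadth-first crawl that stops when the credit budget runs out."""
        visited, credits = set(), crawl_limit
        queue = deque([home_page_url])
        while queue and credits > 0:
            url = queue.popleft()        # FIFO order crawls shallow pages first
            if url in visited:
                continue
            visited.add(url)
            if is_blocked_by_robots(url):
                continue                 # blocked URLs spend no crawl credit
            credits -= 1
            for link in extract_links(fetch_page(url)):
                queue.append(link)       # deeper pages join the back of the
        return visited                   # queue, so they are the first to be
                                         # dropped when credits run out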

Disallowed by Robots.txt

Dragonbot follows robots.txt directives. If this file blocks Dragonbot, it will not crawl the site, or the parts of the site, that are blocked.
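
You can test what a robots.txt file allows for a given crawler with Python's standard urllib.robotparser. The "Dragonbot" user agent string below is an assumption for illustration; check the Dragon Metrics documentation for the exact token.

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("http://www.example.com/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file

    # True if this user agent may fetch the URL under the site's rules.
    print(parser.can_fetch("Dragonbot",
                           "http://www.example.com/private/page.html"))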

Content Behind Login

Any content that users must log in to view will not be accessible to Dragonbot.

Cookies

Sites that use cookies to redirect users may confuse Dragonbot, preventing it from crawling the site effectively.

Server Issues

On occasion, a site may be temporarily down when Dragonbot crawls it. In this case, you will usually see these pages listed with an HTTP 503, 500, or other error status.

If you're experiencing an issue not listed above, please contact Dragon Metrics support.
