Crawl Stats

Intermediate metrics

Definition

Detailed data on how Googlebot crawls your site: request frequency, download volume, and server resources consumed.

Crawl Stats are available in Google Search Console and detail how Googlebot interacts with your site. They include the total number of crawl requests, total download size, average server response time, a breakdown by file type (HTML, images, CSS, JavaScript), the HTTP response codes received, and the destination of each request (pages or resources). These statistics are essential for understanding whether Google can crawl your site efficiently and for diagnosing crawl budget issues. A site with high server response times or an abnormally low number of crawl requests may see indexing delays for new pages.

Also known as: Crawl statistics, Exploration statistics, Google exploration stats

Key Points

  • Google Search Console data on how Googlebot crawls your site
  • Includes crawl frequency, server response time, file types, and HTTP response codes
  • Essential for diagnosing crawl budget and indexing issues

Practical Examples

Crawl budget diagnosis

Crawl Stats show that Googlebot requests only 50 pages/day on a 10,000-page site. An average server response time of 3.2s reveals a saturated server. Upgrading to better hosting raises the crawl rate to 800 pages/day.
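As a quick cross-check of the average response time reported in Crawl Stats, you can time a few representative URLs yourself. Below is a minimal Python sketch, assuming the requests library is installed; the sample URLs and the 1 s warning threshold are illustrative, not values from the report.

    import requests  # third-party library: pip install requests

    # Illustrative sample of URLs to spot-check; replace with your own pages.
    SAMPLE_URLS = [
        "https://www.example.com/",
        "https://www.example.com/category/shoes",
        "https://www.example.com/product/123",
    ]

    def average_response_time(urls, timeout=10):
        """Mean time until response headers are received, in seconds."""
        timings = []
        for url in urls:
            response = requests.get(url, timeout=timeout)
            # response.elapsed measures request sent -> response headers parsed
            timings.append(response.elapsed.total_seconds())
        return sum(timings) / len(timings)

    if __name__ == "__main__":
        avg = average_response_time(SAMPLE_URLS)
        print(f"Average response time: {avg:.2f}s")
        if avg > 1.0:  # illustrative threshold: sustained slow responses lower Googlebot's crawl rate
            print("Responses are slow; the server may be limiting the crawl rate.")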

Detecting parasitic pages

Crawl Stats analysis reveals 60% of crawl requests target parameter pages (filters, sorting) with no SEO value. Implementing robots.txt directives frees up crawl budget for important pages.
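A minimal robots.txt sketch for this scenario, assuming the parasitic URLs are built from query parameters named filter and sort (hypothetical names; adapt the patterns to your own URL structure). Googlebot supports the * wildcard in Disallow rules.

    User-agent: *
    # Block crawling of filtered and sorted listing variants (parameter names are examples)
    Disallow: /*?filter=
    Disallow: /*&filter=
    Disallow: /*?sort=
    Disallow: /*&sort=

Note that robots.txt only stops crawling: blocked URLs that are linked from elsewhere can still appear in results, so canonical tags or noindex remain complementary options for parameter pages that need to stay crawlable.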

Frequently Asked Questions

Where can I find Crawl Stats in Google Search Console?

In Google Search Console, go to Settings > Crawl Stats. You will find graphs of crawl requests, total download size, and average server response time over the last 90 days.

How many crawl requests per day is normal?

It depends on your site's size. A small 50-page site crawled 10 times/day is normal. A 50,000-page site crawled only 50 times/day has a problem. If Google does not crawl your site enough, new pages and changes will not be indexed quickly.
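To check this against the Crawl Stats report, you can count Googlebot requests per day in your own server logs. Below is a minimal Python sketch, assuming a combined-format access log at the hypothetical path access.log; for real audits, confirm hits via reverse DNS, since the Googlebot user agent can be spoofed.

    import re
    from collections import Counter
    from datetime import datetime

    LOG_PATH = "access.log"  # hypothetical path; point this at your real access log
    # Matches the date part of a combined-log timestamp, e.g. [07/Feb/2026:10:12:01 +0000]
    DATE_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4})")

    hits_per_day = Counter()
    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "Googlebot" in line:  # naive filter; verify with reverse DNS in production
                match = DATE_RE.search(line)
                if match:
                    hits_per_day[match.group(1)] += 1

    for day, hits in sorted(hits_per_day.items(),
                            key=lambda item: datetime.strptime(item[0], "%d/%b/%Y")):
        print(f"{day}: {hits} Googlebot requests")

Comparing these daily counts with your page count and publishing pace gives a concrete sense of whether Googlebot's crawl rate is keeping up with your site.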

Go Further with LemmiLink

Discover how LemmiLink can help you put these SEO concepts into practice.

Last updated: 2026-02-07