
Troubleshooting: scans, crawls, tokens, alerts

Quick fixes for the most common issues.

Written by Jakka Pranav swaroop Naidu

My scan failed

Open Quality Lab > Test Runs and click the failed run. The Logs drawer shows the cause.

The most common causes:

- robots.txt blocked the crawler: allow Jakka-bot (see the example below) or verify the domain.
- The site timed out: try a slower crawl speed in Project Settings > Crawl Settings.
- Authentication was required: enable Password Protection or check your credentials.
- Your origin returned 5xx responses: check your server health and retry.
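If robots.txt is the blocker and you control the site, the fix on the site side is an explicit allow group for the crawler. A minimal sketch, assuming Jakka-bot is the exact user-agent token (confirm it in your crawl settings):

    User-agent: Jakka-bot
    Allow: /

Crawlers that follow the Robots Exclusion Protocol obey the most specific User-agent group that matches them, so this group takes precedence over a blanket User-agent: * block.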

Pages are missing from my crawl

Check three causes, in order:

1. Are the pages discoverable? Open Quality Lab > Pages and look at the Discovery filter. Pages that are sitemap-only with no internal links show as Orphaned; add internal links or manually add the URLs. You can also diff your sitemap against the crawl yourself (see the sketch after this list).
2. Are they blocked by robots.txt? Filter for Blocked.
3. Is your plan limiting the crawl? Check the Pages Per Scan setting and your monthly quota.
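One quick way to find sitemap-only pages is to compare the URLs in your sitemap against an export of the pages the crawl reached. A minimal Python sketch, assuming a local sitemap.xml and a one-URL-per-line export named crawled.txt (both file names are hypothetical):

    import xml.etree.ElementTree as ET

    # Namespace used by standard sitemap.xml files
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    # URLs listed in the sitemap
    sitemap_urls = {
        loc.text.strip()
        for loc in ET.parse("sitemap.xml").findall(".//sm:loc", NS)
        if loc.text
    }

    # URLs the crawl actually reached, one per line (hypothetical export)
    with open("crawled.txt") as f:
        crawled_urls = {line.strip() for line in f if line.strip()}

    # Sitemap entries the crawl never reached: orphaned or blocked candidates
    for url in sorted(sitemap_urls - crawled_urls):
        print(url)

Anything this prints is worth checking against the Orphaned and Blocked filters.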

I am out of tokens

Click Top up in the user dropdown to buy a bundle, or upgrade your plan. When your balance hits zero, scheduled scans pause and manual scans fail until you add tokens. The balance is visible in three places: the org chip, the user dropdown, and the project Overview.

I am getting too many alerts

Open Project Settings > Notifications. Turn off the alert types that are too noisy. Set Quiet Hours to receive a daily digest instead of real-time pings outside work hours. Critical alerts still bypass quiet hours unless you explicitly disable them.

An AI recommendation seems wrong

Click Not Helpful on the recommendation and pick a reason (False positive, Not relevant, Duplicate, etc.); that feedback feeds directly into model quality. If a banned word slipped through, double-check that Brand Kit > Writing Style > Banned Words actually contains it.

JavaScript-rendered content is missing

Open Project Settings > Crawl Settings and turn on Execute JavaScript. Set Wait Time to 2000ms (or higher if your SPA is slow). Re-run the scan. This is required for React, Vue, Angular, and similar single-page apps.
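To confirm that content really is JavaScript-rendered (and not missing for another reason), fetch the raw HTML your server returns and look for text you can see in the browser. A minimal Python sketch, with a hypothetical URL and marker string:

    from urllib.request import Request, urlopen

    URL = "https://example.com/pricing"   # hypothetical page to test
    MARKER = "Compare plans"              # text visible in the browser

    req = Request(URL, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")

    # If the marker is missing from the raw HTML, the page injects it
    # client-side and the crawler needs Execute JavaScript enabled.
    print("server-rendered" if MARKER in html else "JavaScript-rendered")

If the marker only appears in the browser, Execute JavaScript is the right fix.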

Robots.txt is blocking my own staging site

Verify the domain first in Project Settings > Domain Verification. Once verified, you can uncheck "Respect robots.txt and crawl-delay directives" in Crawl Settings.
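If you would rather keep respecting robots.txt, the alternative is an explicit allow group on the staging site, so it stays closed to everyone else. A sketch, again assuming the Jakka-bot user-agent token:

    User-agent: Jakka-bot
    Allow: /

    User-agent: *
    Disallow: /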

When to contact support

If none of the above resolves your issue, click the Intercom Messenger icon in the bottom right of the dashboard. Include your project ID (proj_<number>) and the run ID if relevant. Response time depends on your plan.
