Is your robots.txt blocking AI crawlers?
Paste your robots.txt below. We check GPTBot, ClaudeBot, PerplexityBot, Googlebot-Extended, CCBot, and anthropic-ai: the six crawlers that control whether AI systems can find and cite your business.
Nothing is uploaded. Parsing happens in your browser.
Why this matters
If the crawlers that train, fetch, or verify AI answers cannot reach your site, your business loses a direct path into AI search results. That means fewer chances to be cited in ChatGPT, Claude, Perplexity, and Google AI Overviews when people ask high-intent questions in your market.
Most blocks are accidental. A site launch template, an old SEO plugin setting, or a wildcard rule in robots.txt can quietly shut out the crawlers that matter. This checker shows exactly which AI crawlers are affected and the lines to add so they can access your site again.
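As a rough illustration of the kind of check a browser-side tool like this can run, here is a minimal sketch that answers one question: is a given crawler blocked from the site root? This is not the checker's actual code; real robots.txt parsers also handle path wildcards and longest-match precedence, and the function name is illustrative.

```javascript
// Minimal sketch: is `agent` blocked from the site root ("/")?
// Handles only the simple cases: User-agent groups plus
// root-level Allow/Disallow rules.
function isBlocked(robotsTxt, agent) {
  const rules = {}; // agent name (lowercased) -> { allow, disallow }
  let currentAgents = []; // agents named by the group being read
  let inRules = false;    // true once the group's rule lines start

  for (const raw of robotsTxt.split("\n")) {
    const m = raw.trim().match(/^(user-agent|allow|disallow)\s*:\s*(.*)$/i);
    if (!m) continue;
    const field = m[1].toLowerCase();
    const value = m[2].trim();

    if (field === "user-agent") {
      // A User-agent line after rule lines starts a new group.
      if (inRules) currentAgents = [];
      inRules = false;
      const key = value.toLowerCase();
      currentAgents.push(key);
      if (!rules[key]) rules[key] = { allow: false, disallow: false };
    } else {
      inRules = true;
      for (const a of currentAgents) {
        if (field === "disallow" && value === "/") rules[a].disallow = true;
        if (field === "allow" && value === "/") rules[a].allow = true;
      }
    }
  }

  // A crawler follows its own named group if one exists,
  // otherwise the wildcard group.
  const group = rules[agent.toLowerCase()] || rules["*"];
  if (!group) return false; // no applicable rules: allowed
  return group.disallow && !group.allow;
}
```

With a file containing `User-agent: *` / `Disallow: /` plus a dedicated `User-agent: GPTBot` / `Allow: /` group, this returns true for ClaudeBot (caught by the wildcard) and false for GPTBot (its own group applies).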
We can review your robots.txt, schema, citations, and content signals, then give you a prioritized fix list for AI search.
Book a free GEO review →

Frequently asked questions
What is a robots.txt file?
A robots.txt file is a plain text file at the root of your website that tells web crawlers which pages they can and cannot access. Misconfigured robots.txt files are one of the most common reasons businesses don't appear in AI search results. Nearly 21% of the top 1,000 websites accidentally block GPTBot.
Which AI crawlers should I allow in my robots.txt?
The six AI crawlers that matter most are GPTBot (ChatGPT), ClaudeBot (Claude/Anthropic), PerplexityBot (Perplexity), Googlebot-Extended (Google AI Overviews), CCBot (Common Crawl, used by many AI models), and anthropic-ai (Anthropic's secondary crawler). All six should be explicitly allowed.
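A robots.txt block that explicitly allows all six could look like this (one group per crawler; adapt it to fit alongside your site's existing rules):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Googlebot-Extended
Allow: /

User-agent: CCBot
Allow: /

User-agent: anthropic-ai
Allow: /
```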
How do I fix a blocked AI crawler in my robots.txt?
Add a User-agent line for each blocked crawler, followed on the next line by Allow: /. For example, the pair User-agent: GPTBot and Allow: / (each on its own line) explicitly permits GPTBot to crawl your entire site. A blanket Disallow: / under User-agent: * does not apply to crawlers that have their own User-agent group, so give each AI crawler a dedicated group rather than relying on rule order.
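As a concrete sketch, the fix for one blocked crawler is a two-line group like the one below; repeat the pair for each crawler you want to unblock:

```
User-agent: GPTBot
Allow: /
```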
Why do some websites accidentally block AI crawlers?
Many website builders and SEO plugins ship robots.txt rules that were written before AI crawlers existed. A common culprit is the rule Disallow: / under User-agent: * (block all crawlers), which blocks AI crawlers unless each one has its own User-agent group: crawlers follow the most specific group that names them, so a dedicated Allow group overrides the wildcard rule.
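For example, a hypothetical template-generated file like the one below blocks every crawler by default, while the dedicated GPTBot group carves out an exception:

```
# Blanket rule added by an old template: blocks all crawlers
User-agent: *
Disallow: /

# Dedicated group: GPTBot follows this instead of the wildcard
User-agent: GPTBot
Allow: /
```

Without groups like the second one, every AI crawler listed above is silently caught by the wildcard rule.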