Developers aren't reading your docs
At least, not in the way they used to. They're asking an AI assistant in their IDE instead. Here's how to make that work for you.
A developer needs to set up authentication using your SDK.
Two years ago, they'd open your docs site and read the guide. Today, they type "How do I authenticate with this API?" in Claude Code. The AI answers, writes the code, and moves on. Your docs site never sees a pageview.
Top engineers at Anthropic and OpenAI report that AI now writes 100% of their code. If developers aren't even writing code themselves, they're definitely not browsing your docs to figure out how to write it.
We're seeing this in the documentation projects we work on. AI-driven traffic to docs sites is growing while human visits drop. Mintlify reports that almost half of documentation site traffic now comes from AI agents, not humans. Your docs have a new primary reader. It doesn't browse. It queries.
From reading to asking
Documentation always competed with Stack Overflow, GitHub issues, and asking a coworker on Slack. AI coding assistants just won that competition.
When you can describe your problem in natural language and get working code back, in context, inside your editor, there's no reason to open a docs site.
But that doesn't mean docs matter less. If anything, accuracy matters more now. A human reading a confusing paragraph will slow down, re-read, maybe search for other sources. An AI will just pick an interpretation and ship it.
The problem with public AI models
When a developer asks an AI about your product, the model either relies on training data or searches the web. Both are broken.
Training data is stale. If you shipped a new SDK version last month, the model doesn't know. It'll suggest deprecated methods with full confidence. Web search relies on SEO. The model might find a three-year-old Medium article instead of your current API reference. Your content's discoverability depends on how well you rank, not on how good your docs are.
Neither approach has a direct line to your documentation. The model doesn't know your custom error codes or the migration guide you published last Tuesday.
The result: AI generates code based on outdated examples and wrong parameter names. The developer doesn't even know the answer was wrong until something breaks.
It gets worse for internal docs
If public docs have a discoverability problem, internal docs are completely invisible.
Your internal wiki, your Confluence spaces, your private Notion pages. AI models can't see any of it. A new engineer joins your team, asks their AI assistant "How do I deploy to staging?", and gets nothing useful. The model wasn't trained on your deployment runbook. It can't search your internal network.
Your internal documentation is invisible to every AI tool your developers use daily. That's a real problem.
Connect your docs to the tools developers actually use
There's a fix for this. Instead of hoping AI models happen to find your docs, you can connect your documentation directly to the AI tools developers are already using.
This is what the Model Context Protocol (MCP) enables. It's an open standard that lets AI assistants connect to external data sources. Think of it as a bridge between an AI coding assistant and your documentation.
With an MCP server pointing at your docs, the assistant fetches answers directly from your content. Not from training data. Not from Google. From your docs, in real time.
And here's the part most people miss: you register a description of what your docs contain. The AI uses that description to decide when to query your docs. If a developer is working with your SDK and hits an authentication problem, the assistant knows your docs are the right place to look. It's not a keyword search. The AI decides based on context.
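The mechanics boil down to two things: a description the AI reads to decide when to ask, and a query it can call to get content back. Here is a toy sketch of that idea in plain Python. Everything in it is hypothetical: a real MCP server would use an official SDK and search your actual documentation, not a hardcoded dict, and the tool description, topics, and keywords are all invented for illustration.

```python
# The description the AI reads to decide WHEN to call this tool.
# (Hypothetical product name; in MCP this is part of the tool's metadata.)
TOOL_DESCRIPTION = (
    "Searches the Acme SDK documentation. Use for questions about "
    "Acme SDK setup, authentication, error codes, and deployment."
)

# In-memory docs store standing in for your real content.
DOCS = {
    "authentication": "To authenticate, pass your API key: Client(api_key=...).",
    "deployment": "To deploy to staging, follow the deploy-staging runbook.",
}

# Keywords the naive matcher uses to pick a topic.
KEYWORDS = {
    "authentication": {"authenticate", "auth", "login", "api", "key", "token"},
    "deployment": {"deploy", "staging", "release", "ship"},
}

def search_docs(query: str) -> str:
    """Return the docs section that best matches the query (naive keyword overlap)."""
    words = set(query.lower().replace("?", "").split())
    best_topic = max(DOCS, key=lambda topic: len(words & KEYWORDS[topic]))
    return DOCS[best_topic]

print(search_docs("How do I authenticate with this API?"))
# → To authenticate, pass your API key: Client(api_key=...).
```

The point isn't the retrieval logic, which a real server would do far better. It's the split of responsibilities: your description tells the assistant when your docs are relevant, and the query returns your current content instead of whatever the model remembers.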
What this means for documentation teams
Your docs go from a website people browse to a data source AI tools query. That changes things.
Content quality matters more, not less. Every ambiguity, every outdated example, every missing step gets amplified. Bad docs produce bad AI answers at scale. Good docs produce answers that make developers trust your product.
Internal docs become reachable. That deployment runbook, your team's coding standards, your architecture decision records. MCP can expose them to AI assistants inside your organization.
You also get better feedback. When AI assistants query your docs, you can see what questions are being asked. That's direct signal about what developers are trying to do. Better than pageview analytics ever was.
What this doesnât fix
MCP doesnât fix bad docs. If your content is incomplete or outdated, AI will serve incomplete, outdated answers. Just faster.
And it doesn't eliminate hallucinations. But an AI grounded in the right content gets things wrong far less often. Your docs site is still the canonical source of truth. When AI gives a wrong answer, that's where developers go to verify. MCP just makes your content reachable from more places.
Where to start
Start thinking about how AI assistants consume your content. Not how humans read it. How machines query it.
Structured content, clear headings, explicit examples. These matter even more now. Your new reader doesn't browse pages. It parses them.
Look into how MCP can work for your docs. Connect it to the tools your team already uses. See what questions come in.
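Registering a docs server with a client is usually a small configuration step. The fragment below follows the `mcpServers` convention used by Claude Desktop and similar clients; the server name, command, and script path are placeholders, and the exact file location and format vary by client, so check your client's documentation.

```json
{
  "mcpServers": {
    "acme-docs": {
      "command": "python",
      "args": ["path/to/docs_mcp_server.py"]
    }
  }
}
```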
Your docs are already being consumed by AI. The question is whether you're serving the right answers or letting the model guess.