Analytics should answer, not display
Every dashboard is a guess. Whoever built it had to decide, in advance, which questions would matter, and arrange the answers on a single screen. If your real question is among them, the dashboard helps. If it isn't, it has nothing to say. Stephen Few's 2006 definition, still the one most vendors build against, is explicit about this: "a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance."
This was unavoidable for most of computing's history. Software could not take a natural-language question and return a specific answer over arbitrary data, so the next best thing was to precompute a reasonable set of answers and put them in a layout someone could scan. Dashboards were the compromise.
The compromise did not improve much with investment. Gartner has measured the same narrow band for twenty-five years: BI adoption inside organizations sat at 19% of employees in 1998, 22% in 2009, 35% in 2019, and 29% today. OLAP cubes, self-service BI, semantic layers, embedded analytics. Each generation of tooling reached the same ceiling. The dashboards that did get built largely went unread; Atlan's 2026 sprawl analysis found that around 40% of Tableau reports hadn't been opened in six months. Benn Stancil, surveying twenty years of BI, concluded flatly: "we didn't get that much better at making decisions."
The long-running assumption behind that investment was that the tools weren't good enough yet. A better explanation is that the format itself was doing the wrong job. Pirolli and Card's information foraging work at Xerox PARC supplied the mechanism decades ago: people spend time on an information source in proportion to the value they expect over the cost of extraction. Dashboards keep the extraction cost high. You scan, interpret, pivot, and remember which filter is current. When the cost exceeds the value of any individual answer, attention falls off.
When language models arrived, the industry's response was to add a chat box to the dashboard. ThoughtSpot, Tableau Ask Data, Salesforce Einstein, Power BI Copilot, a decade of text-to-SQL startups. None of them moved the adoption number. Holistics's 2026 review names the structural reason:
"Text-to-SQL is fast to implement but fragile — the AI has no understanding of business definitions, so 'revenue' might mean gross revenue in one query and net revenue in the next."
The language model was guessing at schema and semantics while the BI tool still owned the data model and the dashboard still owned the surface. Chat-with-your-BI gave users a different input method on the same machine.
The Model Context Protocol (MCP), released by Anthropic in November 2024, lets an assistant invoke typed, documented operations exposed by external systems. A data source declares the operations it supports; the assistant calls them with arguments; the server returns authoritative values. The definition of a metric lives with the vendor that owns it, which is the part text-to-SQL could never deliver.
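The shape of the exchange can be sketched in plain TypeScript. Everything here is illustrative: `query_metric` is a hypothetical tool name, and the real MCP SDK carries its own server and transport machinery. The division of labor is the point: the server owns the metric's definition, and the assistant supplies only a name and arguments.

```typescript
// Illustrative sketch of the pattern, not the actual MCP SDK.
// A server declares typed, documented operations; the assistant
// calls them by name; the server returns authoritative values.
type ToolDef = {
  description: string;
  // JSON-Schema-style parameter descriptions the assistant reads
  params: Record<string, { type: string; description: string }>;
  handler: (args: Record<string, unknown>) => unknown;
};

const tools: Record<string, ToolDef> = {
  // Hypothetical analytics operation: what "net_revenue" means
  // is decided server-side, never guessed by the model.
  query_metric: {
    description: "Return the value of a named metric over a date range",
    params: {
      metric: { type: "string", description: "e.g. 'net_revenue', defined by the server" },
      from: { type: "string", description: "ISO date" },
      to: { type: "string", description: "ISO date" },
    },
    handler: (args) => ({ metric: args.metric, value: 42000 }), // stub value
  },
};

// An assistant's tool call reduces to: dispatch by name, typed result back.
function callTool(name: string, args: Record<string, unknown>): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}

const result = callTool("query_metric", {
  metric: "net_revenue",
  from: "2025-01-01",
  to: "2025-01-31",
});
console.log(result);
```

The assistant never sees SQL or a schema; it sees a catalog of named operations with documented parameters, which is why "revenue" cannot silently change meaning between calls.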
The result is a sequence the dashboard could not supply. Inside the same thread that shipped a change, an assistant can query the affected metric, read the result, and create the alert that watches for the regression. When a question wasn't anticipated by whoever built the tools, it doesn't land on a blank screen; the assistant composes the call from whatever operations exist. The scanning, pivoting, and interpretation that the dashboard pushed onto its reader happen inside the assistant instead.
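That query-read-alert loop can be sketched as a function over an injected tool-call channel. The tool names (`query_metric`, `create_alert`) and the 10% guardrail are assumptions for illustration, standing in for whatever operations a real server exposes:

```typescript
// Sketch of the loop: query the affected metric, read the result,
// create the alert that watches for the regression, all in one thread.
// Tool names here are hypothetical.
async function regressionWatch(
  call: (name: string, args: object) => Promise<any>,
): Promise<void> {
  // 1. Query the metric the change could have affected.
  const before = await call("query_metric", {
    metric: "signup_conversion",
    window: "7d",
  });

  // 2. The assistant reads the value and derives a guardrail from it.
  // 3. Create the alert that fires if the metric regresses.
  await call("create_alert", {
    metric: "signup_conversion",
    condition: "below",
    threshold: before.value * 0.9, // alert on a 10% drop
  });
}
```

Each step consumes the previous step's output, which is exactly what a grid of precomputed tiles cannot do.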
Clamp applies this to web analytics. An SDK captures pageviews and custom events. An MCP server exposes them as typed tools the assistant calls inside Cursor, Claude Code, or Copilot. The conversation is the interface.
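The capture side might look something like the following. The class and method names are assumptions for illustration, not Clamp's actual SDK API; the point is only that an event is a name plus typed properties, and the MCP server later answers queries over what was captured.

```typescript
// Hypothetical shape of an analytics capture SDK, for illustration only.
type AnalyticsEvent = {
  name: string;
  timestamp: number;
  properties: Record<string, string | number | boolean>;
};

class Capture {
  private queue: AnalyticsEvent[] = [];

  // Record an arbitrary custom event with typed properties.
  track(name: string, properties: AnalyticsEvent["properties"] = {}): void {
    this.queue.push({ name, timestamp: Date.now(), properties });
  }

  // Pageviews are just a well-known event name.
  pageview(path: string): void {
    this.track("pageview", { path });
  }

  // A real SDK would flush batches to an ingestion endpoint;
  // here flush just drains the in-memory queue.
  flush(): AnalyticsEvent[] {
    const batch = this.queue;
    this.queue = [];
    return batch;
  }
}

const c = new Capture();
c.pageview("/pricing");
c.track("signup", { plan: "pro" });
console.log(c.flush().map((e) => e.name)); // -> [ 'pageview', 'signup' ]
```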
The first people to reach for Clamp are mainly builders, because they're already running an MCP client. The SDK install is a single command in the place the work happens, and the two ends of the loop, shipping a change and asking what the change did, already sit in the same window. The gap that closes first is the one between code and the behavior of the code.
The same gap exists for everyone else involved in the product. A founder asking about sign-ups, a marketer reading a referral source, a support lead checking whether a refund spike followed a release. They're asking against the same typed tools, the same metric definitions. The answer has the same shape.
The next thing on the roadmap closes that gap for them: a chat interface at clamp.sh, where talking to your analytics doesn't require an MCP client at all. It runs on the same tools the MCP server exposes; nothing underneath changes. The chat box bolted onto a BI dashboard failed because the model was guessing at what "revenue" meant. Clamp's model isn't guessing, whether the conversation starts in Cursor or in a browser tab.
There is no analytics dashboard at clamp.sh, and there will not be one. Every Clamp surface, the MCP server today and the chat interface next, sits on the same typed tools the assistant calls. A grid of tiles next to an answer-shaped interface would be hedging on the argument.
Clamp is live. npm i @clamp-sh/analytics.