Entry-tier · Free

AVS Exec Brief

A real one-day measurement of how AI is talking about your business — built using the same methodology as the full AVS. Not a sample.

Free, but not no-strings. We need a form filled out, a 30-minute call upfront so we can build the prompt flow properly, and a 30-minute run-through when we deliver the brief.

Last updated 7 May 2026

What an AVS Exec Brief actually is

Most agencies offering free GEO audits run them at 25 prompts and call it done. The output is a sample. It looks like measurement; it isn’t.

Our Exec Brief is a real one-day measurement. Same methodology as the full AVS Annual. We design a tailored prompt set for your business, run it across ChatGPT, Google AI Overviews and Perplexity, and score the responses by hand. You get a visibility score, the headline findings, and a named-competitor view of who AI is recommending instead of you.

It’s the closest thing to the real AVS Report you’ll get without commissioning one.

What we need from you

No financial commitment. But the time commitment is real, because the time is what makes the brief useful.

Step 1
Fill in the form
Tell us who you are, what you sell, who your competitors are, and which queries matter to you. Five minutes.
Step 2
30-min discovery call
Upfront. We use it to build the prompt flow properly — the difference between a real measurement and a generic sample.
Step 3
30-min run-through
When we deliver the brief, we walk through it with you. So you actually understand what AI is saying about you.

That’s it. No invoice. No retainer. No upsell pressure. We’re doing it because it’s the right way to give people a feel for how we work — and so we get a feel for whether we’re right for you too.

What you get

A real one-day measurement: your visibility score, the headline findings, and a named-competitor view of who AI is recommending instead of you, walked through with you on a 30-minute call.

What you don’t get

The full 12-pillar breakdown, the Top 12 prioritised recommendations, the full Supporting Document audit trail, and the strategic consultation that come with a paid AVS engagement.

If you want those, the Exec Brief is the gateway. Once you’ve seen the real measurement, you’ll know whether the full AVS is worth commissioning.

Why we changed the way we do this

We used to run a free Flash report. It wasn’t doing K&C or clients justice: it gave the wrong impression of what we do, and we kept getting prospects who filled in a form, took the report, and disappeared.

The Exec Brief is more work for us. We do the extra work because it’s how we filter for genuinely interested clients — and it’s the only way the brief is actually useful. Read why we killed the Flash report →

If that sounds like the way you’d want to engage

Fill in the form. Book the call. We’ll take it from there.

Get in touch
FAQ · Model coverage

How we measure — frequently asked

Before you fill the form, the methodology questions we get asked most. The honest version.

Which AI search platforms do you measure?

The standard AVS measures three engines: ChatGPT, Google AI Overviews, and Perplexity. We measure the consumer tier of each, the version a regular buyer sees when they open the app or run a Google search. We do not measure paid Pro tiers or pinned API model versions by default. The scope is deliberate: three platforms, the consumer experience, the same read your prospects actually get. Clients who want additional engines can scope them in. The standard read is the buyer read.

Why those three?

They cover roughly 95% of buyer-side AI search today. ChatGPT carries about 78% of AI chatbot referrals to websites and serves 800 million weekly users (Similarweb, 2026). Google AI Overviews appears on around half of all Google searches and is projected to hit 75% by 2028 (McKinsey, 2026). Perplexity is the AI-first research tool, the platform a buyer opens specifically to research a category. Together these three are where the actual buyer is. Adding more engines lifts coverage by single digits.

Why not measure every model and every version?

Six major platforms, multiple versions each, weekly drift. Measuring every permutation costs more credits per cycle, lengthens the run, and adds noise without proportional signal. The marginal engine ends up measuring the same brands. BrightEdge’s 2026 cross-engine study found pairwise brand-recommendation overlap of 36 to 55%, against pairwise source overlap of 16 to 59%. Engines mostly agree on brands. They disagree on sources. Three engines is the cleanest read on what a buyer actually sees. We can extend the engine set if a client needs more.

What about ChatGPT 5.2 or future versions?

Less variance than people fear. The same BrightEdge data shows engines disagree about which articles to cite far more than they disagree about which brands to recommend. Versions of the same engine sit even closer together. Brands win in AI search by being the ones an engine holds high confidence in, and high confidence stays stable across versions. The consensus shortlist your buyer sees does not flip when ChatGPT 5.2 ships. AVS reads the consensus, and that is the signal that holds.

Can you measure additional engines if a client wants them?

Yes. Claude, Gemini, DeepSeek, and version-pinned API engines are available as part of the Enhanced tier or as a one-off Discovery engagement. If you have a specific platform or model version that matters to your category, tell us at scoping and we will add it to your prompt set. The standard three are the consumer baseline. Anything else is a deliberate addition for a specific commercial reason. Talk to us about scope before you sign.

Sources: Similarweb, AI Chatbot Market Share, March 2026. McKinsey, cited in AirOps Playbook, March 2026. BrightEdge, Why AI Engines Cite Different Sources but Recommend the Same Brands, 2026. K&C, AVS Scoring Methodology v1.0, 6 May 2026.

For the full methodology, see the methodology page.