Focus. Measure. Plan. Deliver. Repeat.

Five steps. One continuous programme. That's how you get — and stay — known and cited.

Focus
People — a conversation between us and you
Define where you want to show up in AI — the conversations and narratives that matter to your business. A focused scope produces a focused strategy.
Measure
Tech — our AI research platform runs structured research waves
Your position across 12 pillars of AI visibility, on multiple platforms, tracked over structured research waves. We track daily over seven days per wave because single-day snapshots are too volatile to act on.
Plan
People + Tech — AI proposes, our specialists tailor
The right moves in the right order — content, PR, authority signals — prioritised by impact vs effort. A small focused programme beats a big unfocused one every time.
Deliver
People — clear costs, you pick what we do vs DIY
Authoritative content, earned media, PR, thought leadership — the things AI actually draws from. We can do it for you, or hand you the plan. Your call.
Repeat
People — we go again, starting with a conversation
Re-measure every 3, 6, or 12 months to track progress, adapt to LLM changes, and refine the strategy. AI visibility isn't a one-off project — it's a programme.

What is AI Visibility Strategy?

AI Visibility Strategy (AVS) is our end-to-end approach to understanding and improving how AI platforms cite and recommend your business. We don't just measure — we build the strategy to improve it, with the content, PR, and media coverage that AI actually draws from.

At the heart of every paid AVS report is a visibility score out of 100 — backed by a 12-pillar breakdown, competitor benchmarking and concrete recommendations. You can see what's driving the number and what to do about it. Reports are built from structured queries across ChatGPT, Google AI Overviews and Perplexity, scored against a consistent framework that lets us track meaningful change over time. Each paid AVS report draws on 6,300+ data points.

Complex businesses — multiple product lines, multi-market operations, or deeper diagnostic needs — are scoped as bespoke engagements, priced case by case. Get in touch.

How is AI visibility measured?

We design a tailored set of queries across your sector, in any market and any language, and run them across multiple AI platforms. We don't take a one-off snapshot. We run structured research waves: daily tracking over seven days per report, because a single day's responses are too volatile to score reliably. We analyse every response and place your business in one of four measurable bands, with Known and Cited as the named end goal.
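Mechanically, a wave looks something like the sketch below. The platform names are the three from this page; the query examples, the run_query stub, and the record shape are illustrative assumptions, not our production pipeline.

import datetime

# Illustrative sketch of one structured research wave: the same tailored
# query set runs against each platform once a day for seven days, and
# every response is kept for scoring. Platform names are from this page;
# the queries and the run_query stub are placeholders.

PLATFORMS = ["ChatGPT", "Google AI Overviews", "Perplexity"]
QUERIES = [
    "best <category> providers in <market>",    # real sets are tailored
    "which <category> company should I use?",   # per sector and language
]

def run_query(platform: str, query: str) -> str:
    # Stub standing in for the Authoritas-backed collection call.
    return f"[response from {platform} for: {query}]"

def research_wave(days: int = 7) -> list[dict]:
    responses = []
    for day in range(1, days + 1):              # daily tracking, 7 days
        for platform in PLATFORMS:
            for query in QUERIES:
                responses.append({
                    "day": day,
                    "platform": platform,
                    "query": query,
                    "response": run_query(platform, query),
                    "collected": datetime.date.today().isoformat(),
                })
    return responses  # analysed and banded against the 12 pillars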

The four AVS bands

Every business falls into one of four bands based on its composite AVS score.

Your AVS score is benchmarked against competitors in your sector. The four bands describe where you are today. They move with the score, in either direction, at each measurement cycle.
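The band names and cut-offs are defined in each report rather than reproduced here, so the sketch below uses invented placeholders. It only illustrates the mechanic: the band is a pure function of the composite score, so it moves up or down as the score does at each cycle.

# Hypothetical sketch: band names and thresholds below are invented
# placeholders, not the real AVS bands. The point is the mechanic:
# band = f(score), re-evaluated at every measurement cycle.

BANDS = [
    (75, "Band 4 (placeholder)"),
    (50, "Band 3 (placeholder)"),
    (25, "Band 2 (placeholder)"),
    (0,  "Band 1 (placeholder)"),
]

def band_for(score: float) -> str:
    for cutoff, name in BANDS:
        if score >= cutoff:
            return name
    return BANDS[-1][1]  # scores below every cutoff sit in the bottom band

# A score rising from 48 to 57 between cycles moves up a band;
# falling back to 48 moves it back down.
print(band_for(48.0), "->", band_for(57.0))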

The end goal: Known and Cited

Known and Cited is the K&C standard. It is not a fifth band. It is the recognition we give to brands that have not just achieved high citation density, but sustained authoritative surfacing across the field over multiple measurement cycles.

A brand qualifies for Known and Cited recognition when all of the following are true:

The four bands describe where you are. Known and Cited describes what you are working toward. The mantra "Be Known. Be Cited." is the journey between them.
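The qualifying criteria themselves live in the methodology and are not listed on this page. Purely as a hypothetical sketch of the shape of the check (every field name and threshold below is invented), the "all of the following are true, sustained over multiple cycles" logic looks like this:

# Hypothetical sketch only: the real K&C criteria are not reproduced
# here. Invented fields and thresholds illustrate the shape of the
# check: every criterion true, sustained across multiple cycles.

MIN_CYCLES = 3  # invented stand-in for "multiple measurement cycles"

def qualifies_known_and_cited(cycles: list[dict]) -> bool:
    recent = cycles[-MIN_CYCLES:]
    if len(recent) < MIN_CYCLES:
        return False  # not enough history to call it sustained
    return all(
        c["citation_density"] >= 0.6          # invented threshold
        and c["authoritative_surfacing"]      # authority held this cycle
        for c in recent
    )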

What are the 12 citation pillars?

The 12 Key Factor Pillars are the categories we test across on every paid AVS report (Annual, Bi-Annual and Quarterly). They're designed to cover every dimension of how AI models evaluate and cite brands in your sector. Each pillar is customised to your specific industry — a SaaS platform gets different questions than a hotel chain or a financial services firm.

Pillar 01 · Direct Mentions: How often does your business appear across all AI platform responses?

Pillar 02 · Recommendation Rate: In what percentage of relevant queries is your business actively recommended?

Pillar 03 · Sentiment & Framing: When mentioned, is your business framed positively, neutrally, or negatively?

Pillar 04 · Source Authority: How many authoritative third-party sources cite or reference your business?

Pillar 05 · Narrative Consistency: Is your positioning consistent across different AI platforms and query types?

Pillar 06 · Competitor Gap: How does your visibility compare to tracked competitors in your sector?

Pillar 07 · Query Coverage: Across how many query categories and topics does your business appear?

Pillar 08 · Multi-LLM Consistency: Is your visibility consistent across ChatGPT, Google AI Overviews, and Perplexity?

Pillar 09 · Feature & Service Attribution: Are your specific services and features correctly attributed to your business by AI?

Pillar 10 · Geographic Relevance: Does your business appear for location-relevant queries in your target markets?

Pillar 11 · Temporal Freshness: Is your content being picked up by current AI indexing and retrieval systems?

Pillar 12 · Category Leadership: Is your business positioned as a leader or authority in its category by AI?

These pillars provide a comprehensive framework, but they may be adapted depending on the specific business's market, sector, and competitive landscape. Not every pillar carries equal weight for every business — and that's by design.
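As a concrete illustration of that design choice, the sketch below weights the twelve pillars for a hypothetical hotel chain. The pillar names are the twelve above; the weight values are invented for illustration and are not K&C's actual weightings.

# Hypothetical illustration only: pillar names are the 12 above, but
# these weights are invented; actual weightings are set per client.

HOTEL_CHAIN_WEIGHTS = {
    "Direct Mentions": 0.10,
    "Recommendation Rate": 0.12,
    "Sentiment & Framing": 0.10,
    "Source Authority": 0.08,
    "Narrative Consistency": 0.06,
    "Competitor Gap": 0.08,
    "Query Coverage": 0.08,
    "Multi-LLM Consistency": 0.06,
    "Feature & Service Attribution": 0.06,
    "Geographic Relevance": 0.12,   # location queries matter more for hotels
    "Temporal Freshness": 0.06,
    "Category Leadership": 0.08,
}

assert abs(sum(HOTEL_CHAIN_WEIGHTS.values()) - 1.0) < 1e-9

def pillar_weighted_score(pillar_scores: dict[str, float]) -> float:
    # Blend 0-100 pillar scores using the sector-specific weights.
    return sum(HOTEL_CHAIN_WEIGHTS[p] * s for p, s in pillar_scores.items())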

The three AVS dimensions

Your overall AVS score is built from three weighted dimensions, contributing 40%, 30% and 30% of the final number (a worked sketch follows the list):

AI Visibility (40%): Your mention rate and citation frequency across all engines, benchmarked against competitors. This is the primary indicator of how visible you are in conversational AI.

Source Quality (30%): How many of the citations come from authoritative sources that mention or link to your business. This measures the strength of the underlying evidence base that AI models are drawing from.

Narrative Alignment (30%): How well your business's story is represented across the 12 pillars. This measures whether AI models understand and represent your positioning accurately.
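A minimal sketch of the weighted blend, assuming only the 40/30/30 weights stated above; the sub-scores are illustrative 0-100 values, and the real per-dimension scoring behind them is more involved.

# Sketch of the composite AVS score, using the 40/30/30 weights above.
# The three 0-100 sub-scores are illustrative inputs; how each is
# derived from the underlying data is the methodology's job.

WEIGHTS = {
    "ai_visibility": 0.40,
    "source_quality": 0.30,
    "narrative_alignment": 0.30,
}

def composite_avs(sub_scores: dict[str, float]) -> float:
    return round(sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS), 1)

# Strong visibility on a weaker evidence base:
print(composite_avs({
    "ai_visibility": 72.0,
    "source_quality": 55.0,
    "narrative_alignment": 61.0,
}))  # 28.8 + 16.5 + 18.3 = 63.6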

How we use Authoritas

We use Authoritas — a measurement platform that enables structured query collection across multiple AI engines simultaneously, with support for localised queries in any language. Authoritas provides the data infrastructure; we provide the methodology, analysis, and strategic interpretation. You can learn more at authoritas.com.

How often should you audit AI visibility?

Quarterly is our recommended minimum. AI platforms change constantly — models update, training data shifts, competitor content changes, and new features are released. A single audit is a photograph; quarterly re-measurement turns it into a video. The real value is the longitudinal data: seeing what's shifting over time and whether your actions are working. You can catch emerging trends, competitive movements, and the impact of your own content efforts far more effectively with quarterly tracking.

Is this a solved science?

No — and anyone who says it is, is overselling. GEO is a new discipline. LLM mechanisms are opaque and changing fast. What we offer is structured measurement, informed recommendations, and ongoing tracking. We're confident about our methodology, not about guaranteed results. Focus. Measure. Plan. Deliver. Repeat.

As standard, every AVS measures ChatGPT, Google AI Overviews / AI Mode, and Perplexity — the three major AI platforms that expose measurable search-like surfaces. Other engines (Claude, Gemini, Copilot) can be added on request, scoped case by case.

How We Deliver Your Strategy

Every AVS report comes with prioritised strategic recommendations. AVS Bi-Annual delivers the full 12-pillar strategic plan. AVS Quarterly adds multi-market scope with localised recommendations per market. Each recommendation is tagged so you know exactly what's involved:

[K&C Content] — we write it. Blog posts, website pages, thought leadership, FAQs. Competitively priced — get in touch and we'll happily share our rates.

[K&C PR Connect] — we connect you with the right PR specialist for your market and sector, with an indicative scope. The PR specialist works directly with you — K&C provides the strategic brief and handover; they handle the execution.

[Internal] — your team does it. We'll tell you what and how, with an estimated time for your in-house team.

[Internal — AI Opportunity] — if it's technical or document work, we'll flag it: "Talk to us about AI — we may be able to help improve this process."

[Combined] — part us, part you. We'll show both estimates so you can see what we handle and what your team owns.

Every recommendation includes a full breakdown in the supporting document — so you can brief your agency, plan your internal resources, or hand the whole lot to us.
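For teams that track recommendations in their own backlog, here is a hypothetical sketch of what a tagged recommendation record might look like. The tag names come from the list above; the fields and the impact and effort scales are invented.

from dataclasses import dataclass
from enum import Enum

class Tag(Enum):
    # Tag names from the list above; everything else here is invented.
    KC_CONTENT = "K&C Content"
    KC_PR_CONNECT = "K&C PR Connect"
    INTERNAL = "Internal"
    INTERNAL_AI_OPPORTUNITY = "Internal — AI Opportunity"
    COMBINED = "Combined"

@dataclass
class Recommendation:
    pillar: str   # which of the 12 pillars it targets
    action: str   # what to do
    tag: Tag      # who does it
    impact: int   # 1-5, invented scale for impact-vs-effort ordering
    effort: int   # 1-5, invented scale

# Recommendations sort by highest impact first, then by lowest effort.
def prioritise(recs: list[Recommendation]) -> list[Recommendation]:
    return sorted(recs, key=lambda r: (-r.impact, r.effort))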

FAQ · Model coverage

How we measure — frequently asked

The honest version. Three engines, the consumer scraping tier, and the reasoning behind the scope.

Which AI search platforms do you measure?

The standard AVS measures three engines: ChatGPT, Google AIO, and Perplexity. We measure them at the consumer scraping tier, which is the version a regular buyer sees when they open the app or run a Google search. We do not measure paid Pro tiers or pinned API model versions by default. The scope is deliberate: three platforms, the consumer experience, the same read your prospects actually get. Clients who want additional engines can scope them in. The standard read is the buyer read.

Why those three?

They cover roughly 95% of buyer-side AI search today. ChatGPT carries about 78% of AI chatbot referrals to websites and serves 800 million weekly users (Similarweb, 2026). Google AI Overviews appears on around half of all Google searches and is projected to hit 75% by 2028 (McKinsey, 2026). Perplexity is the AI-first research tool, the platform a buyer opens specifically to research a category. Together these three are where the actual buyer is. Adding more engines lifts coverage by single digits.

Why not measure every model and every version?

Six major platforms, multiple versions each, weekly drift. Measuring every permutation costs more credits per cycle, lengthens the run, and adds noise without proportional signal. The marginal engine ends up measuring the same brands. BrightEdge's 2026 cross-engine study found pairwise brand-recommendation overlap of 36 to 55%, against pairwise source overlap of 16 to 59%. Engines mostly agree on brands. They disagree on sources. Three engines is the cleanest read on what a buyer actually sees. We can extend the scope if a client needs additional engines.

What about ChatGPT 5.2 or future versions?

Less variance than people fear. The same BrightEdge data shows engines disagree about which articles to cite far more than they disagree about which brands to recommend. Versions of the same engine sit even closer together. Brands win in AI search by being the ones an engine holds high confidence in, and high confidence stays stable across versions. The consensus shortlist your buyer sees does not flip when ChatGPT 5.2 ships. AVS reads the consensus, and that is the signal that holds.

Can you measure additional engines if a client wants them?

Yes. Claude, Gemini, DeepSeek, and version-pinned API engines are available as part of the Enhanced tier or as a one-off Discovery engagement. If you have a specific platform or model version that matters to your category, tell us at scoping and we will add it to your prompt set. The standard three are the consumer baseline. Anything else is a deliberate addition for a specific commercial reason. Talk to us about scope before you sign.

Sources: Similarweb, AI Chatbot Market Share, March 2026. McKinsey, cited in AirOps Playbook, March 2026. BrightEdge, Why AI Engines Cite Different Sources but Recommend the Same Brands, 2026. K&C, AVS Scoring Methodology v1.1, 12 May 2026.

More questions? See our full FAQ page.

Ready to understand the full picture?

Start with an AVS Exec Brief. Or talk to us about AVS Annual, Bi-Annual or Quarterly for the full measurement framework across one country or many.

Get in touch
Have questions? See our FAQs

Methodology in practice

Our methodology applied — the framework, the worked example, and the AI-led PR delivery that turns audit into action.