A curated library of research and resources about how AI citations work. We gather the latest GEO research from international sources, analyse what it means for your business, and share what we learn. The current best frame for understanding why some businesses appear in AI answers and others don’t is signal architecture — entity authority, third-party validation, and community discussion. This page is a living resource, updated as new research lands. Bookmark it.
GEO (Generative Engine Optimisation) and AI Search refer to the same thing — how businesses appear in AI-powered search tools like ChatGPT, Perplexity, and Google AI. We use both terms interchangeably.
Generative Engine Optimisation is moving rapidly. There's a lot of noise. Not enough data shared openly. We think businesses deserve research they can actually rely on.
Known & Cited doesn't run its own research programme yet. Instead, we curate high-quality GEO research from international sources — academic studies, industry analysis, and original observations from our audits — and share what it means for your business. This page brings together what the community is learning and what we've observed in our measurement work.
We combine published research with findings from our AVS audits. Our goal is to help you understand the GEO landscape — what's working, what's uncertain, and where the field is still figuring things out. We're honest about confidence levels. Where research is solid, we'll say so. Where it's emerging or contested, we'll flag that too.
We want to start our own data-led international research programme — measuring how AI citations work across countries and languages — but that's future work. For now, this is our curation hub.
Until now, nobody had a credible number for how long it takes ChatGPT and Claude to start citing newly published content. Josh Blyskal at Profound now has one. The dataset is small. The method is honest. The finding is genuinely useful.
cited within 37 days
cited within 7 days
6.81 days typical time to first citation
Profound’s data measures first citation: how quickly a page shows up in an AI answer at all. That is one clock, and it is the one PR and content teams have been desperate for.
Our own measurement focuses on a different clock: sustained citation across the volatility window. AI answers wobble. Appearing once is not the same as being part of the answer set. Our tech partner’s data shows around 45% of brands appear only once in a 7-day window on unbranded prompts. So while Profound tells you how fast the door opens, AVS tells you whether it stays open. Same game. Different clocks. You need both.
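To make the two-clocks distinction concrete, here is a minimal sketch of the persistence metric described above: the share of brands cited exactly once inside a 7-day window of repeated unbranded prompts. The data shape, brand names and dates are invented for illustration; this is not our production pipeline.

```python
from collections import Counter
from datetime import date

# One row per brand citation observed in an AI answer on a given day.
# Brands and dates are invented for illustration.
observations = [
    ("acme-accounting.co.uk", date(2026, 5, 1)),
    ("acme-accounting.co.uk", date(2026, 5, 4)),
    ("nordlicht-software.de", date(2026, 5, 2)),
    ("brightpath.io", date(2026, 5, 6)),
]

def one_off_share(observations, window_start, window_days=7):
    """Share of brands cited exactly once inside the window.

    A high value means a volatile answer set: the door opened once
    for many brands and never opened again.
    """
    start = window_start.toordinal()
    counts = Counter(
        brand
        for brand, day in observations
        if start <= day.toordinal() < start + window_days
    )
    if not counts:
        return 0.0
    return sum(1 for n in counts.values() if n == 1) / len(counts)

print(one_off_share(observations, date(2026, 5, 1)))  # 2 of 3 brands -> ~0.67
```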
Source: Profound (Josh Blyskal), LinkedIn, 11 May 2026. Caveats from the thread credited to Rodolfo Sabino (pipeline reframe) and Garrett Smith (emergent topics).
Read our full take on this: AI cites you in 6.81 days, if everything else is already working · the longer blog piece on what the Profound clock changes for PR teams, and why first citation is only one of two clocks worth watching.
The official chiefmartec annual. Brinker has spent two decades mapping the marketing technology landscape; this is the report every CMO, vendor and analyst will be quoting for the next twelve months. The 2026 edition does something the field has been waiting for: it formally renames the SEO subcategory of the MartechMap to SEO/AEO/GEO, names the new tool category in print, and frames the measurement gap that AVS was built to close.
The 63.1% / 13.6% gap is the entire reason a service like AVS exists. Brinker frames the gap as transitional opacity that the tools will solve. We think it’s structural. Most teams will never close it on their own because the work isn’t in the publishing layer (which they own); it’s in the answer layer (which they cannot see). Even with llms.txt files, schema markup, structured FAQs and content rebuilt for machines, you still need somebody running queries, capturing answers, scoring citations, and tracking competitor presence over time. That isn’t a feature your CMS will ship in a release note. It’s a process, run by a person who knows what to look for.
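To show what that process looks like at its simplest, here is a deliberately minimal sketch of the answer-layer loop: run a fixed prompt set, capture each answer, record whether your domain is cited. `ask_model` is a hypothetical stand-in for whichever platform API or capture method you use, and the prompts are placeholders.

```python
from datetime import date

def ask_model(platform: str, prompt: str) -> dict:
    """Hypothetical stand-in for a platform API call or captured session.

    Assumed to return {"answer": str, "citations": [list of URLs]}.
    """
    raise NotImplementedError

PROMPTS = [
    "best payroll provider for UK small businesses",   # illustrative
    "which accounting platforms do accountants recommend",
]
PLATFORMS = ["chatgpt", "perplexity", "gemini"]

def run_answer_layer_audit(our_domain: str) -> list[dict]:
    rows = []
    for platform in PLATFORMS:
        for prompt in PROMPTS:
            result = ask_model(platform, prompt)
            rows.append({
                "day": date.today().isoformat(),
                "platform": platform,
                "prompt": prompt,
                # Crude presence check: real scoring also has to catch
                # unlinked brand mentions and competitor citations.
                "cited": any(our_domain in url for url in result["citations"]),
            })
    return rows
```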
The named tool list (AirOps, Bluefish, Daydream, Evertune, Profound, Scrunch, plus Semrush and Ahrefs) is all US‑based product companies. K&C isn’t there yet, and that’s the honest read. Tooling tells you whether you appeared in a query. It does not tell you whether you appeared in the right queries, whether your competitors appeared more, or whether the citation pattern across a category is moving toward or away from you. That layer (query design, sector benchmarks, scored methodology) is where AVS sits. It’s a different shape of product. You can’t buy it off a shelf yet.
The MartechMap rename matters more than it sounds. Brinker has put the field’s name in print, in the report every CMO and vendor will read for the next year. The category has a shape. The question now is who owns the methodology layer for the businesses Brinker’s tool list won’t reach: UK SMEs, mid‑market B2B, charities, ecommerce operators without a US product team on retainer. That’s the K&C shape. Be Known. Be Cited.
A 17-page report from a US content-operations platform, built on one of the largest published GEO datasets to date. The AirOps team turned twelve months of citation analysis into a practitioner-facing playbook — the partner piece to their larger 2026 State of AI Search companion dataset with Kevin Indig. The data is the strongest bit. The frameworks and case studies are where it tilts into sales pitch.
The data is real. The frameworks are sales pitches dressed as research: every featured case study (Carta, Webflow, Chime, Docebo, Klaviyo, LegalZoom) is an AirOps customer, and methodology disclosure is non-existent — no published prompt set, no model versions, no country or language coverage. Worth reading. Worth questioning. Cite the customer numbers as “AirOps reports their customer X saw…”, not as independent benchmarks.
The headline frame is ~15 million data points across 21,000 brands — roughly 700 prompts per brand on average. As industry-wide field reading, that’s plenty. As a measurement of your business, it’s thin and generic. AVS Annual measures 6,000+ prompts per brand, every prompt designed for that brand’s sector and buyers. Roughly 8× the per-brand depth — tailored, not generic.
The product K&C sells is the judgement on top of the data, not the data itself. Anyone with a budget can buy AI search numbers now. The bit you can’t buy off a shelf is somebody who knows what those numbers mean for your buyers, which findings matter, and which battles to pick first. Generic benchmarks tell you which way the field is moving. They don’t tell you what to do.
The most rigorous piece of GEO research the field has produced so far. Seer queried six LLMs across every category in the 2026 Winter Olympics — broadcast partners, sportswear, equipment manufacturers, ticket platforms, host-city hospitality — and measured which brands appeared in AI answers and which did not. The headline finding is a 7.8× outcome gap between brands with strong “signal architecture” and brands without.
The temptation, reading this study, is to walk away thinking “we need a Wikipedia page and a Reddit campaign.” Resist it. The Olympics is one slice — broadcast media, sportswear, equipment manufacturers, ticket platforms. That slice happens to lean on Wikipedia and Reddit. Most other categories don’t, in the same proportions or at all.
Some sectors live on trade press and analyst reports. Some live on accreditation bodies and government registers. Some live on LinkedIn and founder presence. Some genuinely live on Reddit. The right move is to map your sector’s signal architecture first — work out which sources AI is actually pulling from for queries in your category — and only then decide where to invest.
The method generalises. The specific sources don’t. We’ve written a longer take on what this study means for K&C and our clients — including the bits agencies will pretend they always knew.
Signal architecture is the three-layer model Seer Interactive built their GEO Olympics Study around. It’s a useful frame. It is not a checklist. The layers Seer identified — entity authority (who is the brand?), third-party validation (who validates it?), and community discussion (who talks about it?) — are best treated as a diagnostic lens. Strong on all three, you appear in AI answers. Weak on any of them, you start losing ground. Weak on all three, you fall off Seer’s “Binary Cliff”.
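Treated strictly as a reading aid, the lens could be encoded like this. The 0-to-1 scores and the 0.3 threshold are ours, invented for illustration; Seer publishes no such scoring rubric.

```python
from dataclasses import dataclass

@dataclass
class SignalArchitecture:
    entity_authority: float        # who is the brand?
    third_party_validation: float  # who validates it?
    community_discussion: float    # who talks about it?

    def diagnose(self, weak_below: float = 0.3) -> str:
        layers = (self.entity_authority,
                  self.third_party_validation,
                  self.community_discussion)
        weak = sum(1 for score in layers if score < weak_below)
        if weak == 0:
            return "strong on all three layers"
        if weak == 3:
            return "binary cliff: weak on all three layers"
        return f"losing ground: weak on {weak} layer(s)"

print(SignalArchitecture(0.8, 0.2, 0.1).diagnose())  # losing ground: weak on 2 layer(s)
```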
Where this gets misread is on the specific tactics. Wikipedia, for example, is downstream of notability. It is not a tactic you can run. A Wikipedia page is the downstream effect of significant coverage in reliable third-party sources you don’t control — trade press, analyst reports, books, peer-reviewed work. Paid placement doesn’t count. Sponsored content doesn’t count. Your own blog doesn’t count. Your LinkedIn doesn’t count. If you are not already independently notable, do not try to write your own page. The community will spot it, delete it, and the brand will end up on the talk page as a cautionary example. Plenty of household names don’t have a Wikipedia entry and won’t get one until somebody else writes it.
Reddit and equivalent communities are a measurement signal, not a manipulation channel. K&C doesn’t run Reddit campaigns and won’t pretend to. Astroturfing kills brands and the sub-bans follow you. What is on offer is monitoring (what the relevant subs are saying about you), authentic engagement under real names from staff and founders who actually belong in those subs, and earned mention through products and content good enough that people share them organically. If a category doesn’t lean on Reddit, don’t invest there. Most categories don’t.
The diagnostic value is the work. Knowing whether your sector’s AI answers are being shaped by trade press, accreditation bodies, Wikipedia, Reddit, LinkedIn or analyst reports tells you where to put effort. Most businesses skip this step and invest in the wrong place — running content programmes for sources their category’s AI answers don’t pull from, and ignoring the ones that matter.
That mapping is the bit AVS is built to do. Three LLMs, a tailored prompt set for your business, the actual citations behind each answer, scored across our twelve pillars. If you want to know what your category’s signal architecture looks like, that is what the AVS Exec Brief is for — or read our manifesto piece on why AI search is about architecture, not campaigns.
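Stripped to its core, the mapping step is a tally: take the citations behind each captured answer in your category and count which domains, and which layers, they feed. A minimal sketch, assuming you already have the citation URLs; the URLs and the domain-to-layer mapping are illustrative, not a published classification.

```python
from collections import Counter
from urllib.parse import urlparse

# Citation URLs captured from AI answers to category queries (invented).
citations = [
    "https://en.wikipedia.org/wiki/Example_plc",
    "https://www.reddit.com/r/UKPersonalFinance/comments/abc123/",
    "https://www.ft.com/content/example-analysis",
    "https://www.ft.com/content/another-piece",
    "https://find-and-update.company-information.service.gov.uk/company/01234567",
]

# Our own illustrative mapping of domains onto the three layers.
LAYER_OF = {
    "en.wikipedia.org": "entity authority",
    "www.ft.com": "third-party validation",
    "www.reddit.com": "community discussion",
}

domain_counts = Counter(urlparse(url).netloc for url in citations)
layer_counts = Counter()
for domain, n in domain_counts.items():
    layer_counts[LAYER_OF.get(domain, "other: classify manually")] += n

print(domain_counts.most_common())
print(layer_counts.most_common())
```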
Drawing on our own audit data, published academic research, and analysis from across the GEO landscape, here are the patterns we're seeing. We've tried to be honest about confidence levels — some of this is well-established, some is emerging, and some is informed speculation.
ChatGPT, Perplexity, Gemini, Claude, and Bing Copilot do not cite the same businesses for the same queries. Our audits consistently show that a business scoring well on one platform can be invisible on another. Perplexity tends to cite more sources explicitly. ChatGPT is more likely to synthesise without attribution. Google's AI Overviews favour content already ranking in traditional search. This means a single-platform GEO strategy is inherently fragile.
Businesses that produce clear, well-structured content that directly answers common questions tend to be cited more often. This includes FAQ pages, "how it works" explainers, and content that uses schema markup. The Georgia Tech GEO research found that adding citations, quotations from authoritative sources, and statistics to content improved LLM citation rates by up to 40%. Content that reads like it was written for AI extraction — clear, factual, attributable — performs better than content written purely for human engagement.
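As a concrete example of machine-readable structure, here is minimal schema.org FAQPage markup, generated from Python for consistency with the other sketches on this page. The question and answer text are placeholders.

```python
import json

# Minimal schema.org FAQPage markup. Embed the JSON output in a
# <script type="application/ld+json"> tag on the FAQ page itself.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What does an AI visibility audit measure?",  # placeholder
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Whether and how AI platforms cite your business "
                    "for the queries your buyers actually ask.",
        },
    }],
}

print(json.dumps(faq_markup, indent=2))
```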
AI models don't just read your website. They've been trained on the entire web. Businesses that appear on trusted third-party domains — industry publications, review sites, Wikipedia, professional directories — tend to be cited more consistently. In our audits, businesses with strong earned media presence almost always outperform businesses that only invest in their own domain, even when the owned content is excellent. PR, in the traditional sense, may be one of the strongest GEO signals.
Run the same query on ChatGPT today and tomorrow, and you may get different brands cited. LLMs are probabilistic, not deterministic. Citation scores fluctuate weekly. This creates a measurement challenge: a single snapshot can be misleading. Meaningful trends only emerge over quarterly periods. Anyone claiming to offer real-time GEO tracking is measuring noise, not signal. This is why our methodology uses structured query frameworks across multiple time windows.
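A toy illustration of why snapshots mislead: the same weekly citation-rate series read as single points versus as a rolling mean. The numbers are invented.

```python
# Weekly citation rate for one brand on one platform (invented numbers).
weekly = [0.30, 0.05, 0.35, 0.10, 0.40, 0.15, 0.45, 0.20, 0.50, 0.25]

def rolling_mean(series, window=4):
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

# Any single snapshot can read as collapse or breakthrough...
print(weekly[1], weekly[2])                        # 0.05, then 0.35
# ...while the smoothed series shows the slow underlying climb.
print([round(x, 3) for x in rolling_mean(weekly)])
# [0.2, 0.225, 0.25, 0.275, 0.3, 0.325, 0.35]
```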
Our multi-country audits show that the same business can be recommended in the UK but invisible in Germany, or cited in the US but not in France. Language matters. Local sources matter. Regional search behaviour patterns influence AI training data. Most GEO services operate in English only, in a single market. This leaves a massive blind spot for any business operating internationally. We believe international GEO research is one of the most underexplored areas in the field.
When a business tells the same story across multiple sources — website, press, industry directories, LinkedIn, review sites — AI models appear to develop a stronger "understanding" of what that business does and who it's for. Businesses with fragmented or contradictory positioning across different channels tend to receive weaker, less specific citations. The implication: GEO is partly a business consistency exercise.
The GEO space is changing rapidly. Two years ago, none of these services existed. Now there's an emerging field of tools, agencies, and methodologies — all trying to solve the same problem from different angles.
Dashboard tools like Otterly.ai, Sight, and Peec.ai offer self-serve LLM monitoring. PR agencies like Hotwire, Brands2Life, and Ambitious PR are adding GEO to retainer services. Analytics platforms like Authoritas provide the underlying data infrastructure. Agencies such as Muck Rack (with its Generative Pulse offering), Impression Digital, and C8 Consulting are developing proprietary GEO frameworks.
What's notably absent from most of these approaches is regular, data-led international research. Most operate in a single market. Most focus on English-language queries. Most publish case studies rather than ongoing research. The field needs more data, shared openly, with honest confidence levels.
That's the gap we're aiming to fill.
We're developing ongoing international GEO research tracking how AI answers vary across countries, languages, and platforms. More to come.
Our ongoing GEO research programme is designed around the questions that matter most to businesses. These are the themes we're actively investigating through our audit data and dedicated research queries.
How do ChatGPT, Perplexity, Gemini, Claude, and Bing Copilot differ in what they cite and how? Which platforms are most volatile? Which are most consistent? When one platform starts citing a business, do others follow?
How do AI recommendations differ between the UK, US, Germany, France, and other markets? Do localised queries produce fundamentally different business recommendations? How much does language affect citation?
What types of content correlate most strongly with AI citation? How important are schema markup, FAQ pages, structured data, and authoritative third-party mentions? Can we isolate individual signals?
How quickly does new content get picked up by AI platforms? How long does a citation last? Is there a "half-life" for AI visibility? What causes a business to drop out of AI recommendations?
Does GEO work differently in professional services vs. consumer businesses? How do B2B and B2C citation patterns differ? Are some sectors more "GEO-ready" than others?
How directly does traditional PR activity translate to AI citations? Is earned media the strongest GEO signal? How long after publication does a press mention start appearing in AI answers?
We're not an academic institution. We're a commercial business that runs AI visibility audits. We curate GEO research from a specific perspective: we want to understand this landscape well enough to give our clients genuinely useful advice. That means sourcing rigorous work and being honest about what's proven versus what's still emerging.
We monitor GEO research from academic institutions, industry platforms, and fellow practitioners. We analyse what's relevant to our clients — how AI citations work, what signals matter, how measurement should work. We combine published findings with observations from our own audit data. We prioritise international research and cross-platform analysis.
Every finding includes a confidence level. "High" means it's been observed consistently and backed by multiple sources. "Medium" means we've seen it in our audit data or industry analysis, but it's not yet settled. "Low" means it's emerging or contested. We're honest about what we actually know.
We don't claim to lead GEO research — that's community work. What we do is analyse high-quality research, measure how it plays out across platforms and countries, and share observations that help you navigate the GEO landscape. This curation is part of our service to clients.
Understanding the GEO landscape is part of what makes Known & Cited different. We curate and analyse international research so you don't have to. We measure how it works in your market. We share what we learn so you can make better decisions about your AI visibility. Find out why that matters →
A growing reference of the most-cited GEO and AI search statistics, drawn from independent and academic research. Every stat attributed. Every source linkable. Updated as new research lands.
Browse the Latest GEO Stats →
Book an AVS Exec Brief — a quick snapshot of your AI visibility across five platforms.
Explore our methodology and case studies to understand how we measure AI visibility and what it means for your market.