
AI cites you in 6.81 days, if everything else is already working

The first proper clock on AI citation, and what it changes for PR.

Josh Blyskal at Profound has done the work nobody else managed to do. He has put a real number on how fast AI starts citing new content.

The headline: 90% of newly published marketing pages were cited by ChatGPT or Claude within 37 days of going live. Half were cited inside seven days. The median time to first citation was 6.81 days.

This feels pretty mega. It is also the first time anyone in our field has had a credible answer to the question every PR director and content lead has been asking for two years: how long does this actually take to work?

The sample is around 900 newly published marketing pages, observed across ChatGPT and Claude agent logs over a roughly 60-day window from March to May 2026. Small dataset, narrow content type, two LLMs, English-language web. Caveats noted. The finding still matters.

The Profound stat in full, with K&C’s commentary and the two-clocks framing, lives on our GEO Research 2026 page as a featured-research block. This piece is the longer take.

Day 37 is your benchmark

If you publish a page today and it is still not appearing in any AI answer 37 days from now, something is wrong. Not maybe wrong. Wrong.

The Profound data is saying that for nine out of ten pages, citation happens within five and a bit weeks. That does not mean every page is “in” within 37 days. It means the pipeline (publish, index, retrieve, cite) typically completes inside that window when the rest of the setup is working.

Which leads to the most useful reframe from the LinkedIn thread underneath the post.

It is a pipeline benchmark, not a content benchmark

Rodolfo Sabino pointed out, in the comments, that AI citation only happens after Google has indexed the page and the model’s retrieval layer has fanned out to pick it up. The 6.81-day number is not measuring how fast AI “understands” your content. It is measuring how fast the whole pipeline (indexing, retrieval, citation) completes.

This is the bit most teams miss. If your page is not being cited, the first question is rarely “is the writing good enough?” It is usually “did Google index it, and is it being retrieved at all?”

When we audit a brand that is failing to appear in AI answers, the answer is almost always upstream of the content. Pages that ought to rank do not rank. Pages that rank do not get retrieved. Pages that get retrieved do not have the supporting signal architecture (third-party validation, entity authority, community discussion) that would push them into the answer set.

Profound’s clock now gives that diagnostic process a real deadline. Day 37 is your trigger to look at the pipeline.
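As a minimal sketch of what that trigger can look like in practice, assume you keep a simple log of publish dates and first-observed citation dates. The field names and the sample data below are illustrative assumptions, not Profound’s schema:

```python
from datetime import date

# Hypothetical tracking records: publish date plus the first date an AI
# citation was observed (None if the page has not been cited yet).
pages = [
    {"url": "/blog/new-feature", "published": date(2026, 3, 1), "first_cited": date(2026, 3, 6)},
    {"url": "/blog/category-guide", "published": date(2026, 3, 10), "first_cited": None},
]

BENCHMARK_DAYS = 37  # Profound: 90% of pages were first cited within 37 days


def pipeline_flags(pages, today):
    """Return pages past the day-37 benchmark with no citation yet.

    These are pipeline suspects: check indexing and retrieval before
    questioning the content itself.
    """
    flagged = []
    for page in pages:
        if page["first_cited"] is None:
            age = (today - page["published"]).days
            if age > BENCHMARK_DAYS:
                flagged.append((page["url"], age))
    return flagged


print(pipeline_flags(pages, today=date(2026, 5, 1)))
# [('/blog/category-guide', 52)] -> audit indexing and retrieval first
```

The point of the flag is direction, not diagnosis: a page past day 37 sends you to the indexing and retrieval checks first, not back to the draft.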

Emergent topics get cited faster than mature ones

The other useful observation in the thread came from Garrett Smith: in his empirical work, emergent topics are cited faster than crowded, established ones. The model is hungrier for new ground than for re-coverage of well-trodden subjects.

This has a practical consequence for content planning. If your team is fighting for citation share on a saturated category page, you are choosing the slow lane. If you can carve out an emergent sub-topic where the answer set is still forming, you are choosing the fast one.

Most B2B content programmes are still optimising for the well-trodden subjects competitors already own. Profound’s data plus Garrett’s observation says the same thing: stop. Look for where the answer set is open.

Two clocks. One game.

Here is the bit the Profound data does not measure, and where K&C’s work fits in.

Profound is measuring first citation. How quickly a page appears in an AI answer at all. That is one clock, and it is the one PR teams have been desperate for.

AVS measures a different clock: sustained citation across the volatility window. AI answers wobble. Run the same prompt on seven consecutive days and the citation set will shift. Our tech partner’s data shows roughly 45% of brands appear only once in a 7-day window on unbranded prompts. That means many of the brands “winning” on first citation are actually showing up, falling out, and showing up again.
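To make the wobble concrete, here is a minimal sketch of how a stat like “appears only once in a 7-day window” can be computed from daily citation sets. The data is invented for illustration; the real AVS methodology is K&C’s own:

```python
from collections import Counter

# Hypothetical daily citation sets for one unbranded prompt across a
# 7-day window: which brands each day's AI answer cited.
daily_citations = [
    {"BrandA", "BrandB"},            # day 1
    {"BrandA", "BrandC"},            # day 2
    {"BrandA"},                      # day 3
    {"BrandA", "BrandB", "BrandD"},  # day 4
    {"BrandA"},                      # day 5
    {"BrandA", "BrandE"},            # day 6
    {"BrandA", "BrandB"},            # day 7
]

# Count how many days each brand was cited across the window.
appearances = Counter(brand for day in daily_citations for brand in day)

one_off = [brand for brand, days in appearances.items() if days == 1]
share_one_off = len(one_off) / len(appearances)

print(sorted(one_off))           # ['BrandC', 'BrandD', 'BrandE']
print(f"{share_one_off:.0%}")    # 60% of cited brands appeared on only one day
```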

Two clocks. One game. You need to know both.

What this changes for PR teams

For the first time, comms functions have a real clock against which to measure AI visibility work. Not “we’ll see if it lands” but “if this page is not cited by day 37, we have a pipeline problem and here is where to look.”

That changes the conversation with leadership. AI visibility moves from soft-marketing-claim territory (“our content is being seen by AI”) into something every PR director can actually report on. Cited or not cited. By day 37. On these prompts. In these models. Against these competitors.

Combine first-citation timing with sustained-citation measurement, and you have, for the first time, an honest performance dashboard for AI visibility. Not someday. Now.
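A minimal sketch of what one row of that dashboard could look like, combining both clocks. The field names and the 7-day window are assumptions for illustration, not a published K&C schema:

```python
def visibility_row(days_to_first_citation, days_cited_in_window, window=7, benchmark=37):
    """One honest dashboard row per page: both clocks, no soft claims."""
    first_clock = (
        "not yet cited" if days_to_first_citation is None
        else f"cited on day {days_to_first_citation}"
    )
    on_benchmark = days_to_first_citation is not None and days_to_first_citation <= benchmark
    persistence = days_cited_in_window / window  # the sustained-citation clock
    return {
        "first_citation": first_clock,
        "within_day_37": on_benchmark,
        "persistence": f"{persistence:.0%} of days in the {window}-day window",
    }


print(visibility_row(days_to_first_citation=6, days_cited_in_window=2))
# {'first_citation': 'cited on day 6', 'within_day_37': True,
#  'persistence': '29% of days in the 7-day window'}
```

A page can pass the first clock and still fail the second, which is exactly the showing-up-and-falling-out pattern described above.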

The job of PR is changing. The job of measurement is catching up.

Be Known. Be Cited.

Sources: Profound (Josh Blyskal) on LinkedIn, 11 May 2026. Pipeline reframe credited to Rodolfo Sabino. Emergent-topic observation credited to Garrett Smith. Sustained-citation stat from K&C’s tech partner data, 2026.

Want to know where you stand on both clocks?

An AVS Exec Brief measures whether AI is citing you at all, and whether the citation is holding. First citation is one measure. Sustained citation is the harder, more useful one.

Start AVS Exec Brief