The Pre-Sold Visitor: Why Agentic Commerce Changes What CRO Is Actually Solving.
I've been noticing a pattern across a few brand accounts I look after. Sessions arriving via direct to a product detail page, usually mobile, with engagement time that's shorter than you'd expect from a cold visitor but a conversion rate that's higher. Not dramatically higher in a way that flags automatically. Just consistently higher than the same device type arriving from paid search or organic, and doing it across brands that don't share an audience.
My first instinct was attribution drift. UTM parameters not being passed through, branded search being misclassified, that kind of thing. I checked, and the usual suspects weren't responsible. What I think I'm seeing (this is a hypothesis, not a conclusion) is the early signature of AI agent referral traffic landing in GA4 as direct.
The mechanism, if I'm right, is that AI shopping agents strip referral headers when they send users through to a brand's site. The session starts clean, no source, no medium. It lands as direct. And the visitor it's sending is not a cold visitor. They've already had a comparison conversation. They've already had product questions answered. They arrive having, in some meaningful sense, already decided. They are landing ‘pre-sold’: the point of persuasion now sits before the website.
I'm calling this the pre-sold visitor. And if that's what I'm seeing, then a significant part of what we do in CRO needs updating.
The CRO science that built this discipline.
I spent several years at Conversion Rate Experts (CRE), the agency that created CRO as a rigorous practice. Karl Blanks and Ben Jesson built the methodology the rest of the industry has been borrowing from ever since: the idea that conversion optimisation is a science, not a set of best practices, and that every change on a page should be driven by a diagnosed problem and tested against measurable outcomes rather than applied because someone read that it worked for another brand. Their book Making Websites Win is probably the most complete codification of that thinking in one place. I recommend a read.
The framework I work from still reflects a lot of what I learned there. The idea that visitors arrive with objections, and that the job of the page is to neutralise those objections before they exit. Those objections are identified through ‘voice of visitor’ research: customer surveys, post-purchase interviews, on-site polls. This data tells you what's stopping people buying, in their own words. The idea that different personality types need different persuasion approaches, which is why CRE use long-form sales copy: address everything, so the methodical visitor finds their evidence, the humanistic visitor finds their social proof, the competitive visitor finds the key information fast, and the spontaneous visitor gets closed before their enthusiasm fades. The idea that almost every conversion problem has a specific diagnosis rather than a generic fix.
That framework is still correct. But it was built for a world where almost all visitors arrive cold, with low or unformed intent.
What changes when the visitor arrives pre-sold.
A cold visitor arrives with low or medium intent and lands on a product page carrying a question. Usually an implicit one: is this the right product for me, and can I trust this brand enough to spend money here? The page's job is to answer that question — aligning a clear value proposition with the right prospect and countering objections derived from voice of visitor data. Most CRO work is the work of doing that more effectively.
A pre-sold visitor has already had that conversation with an AI agent, before they arrived. The comparison was done. The question was answered. What they're arriving to do is confirm what they've decided and complete the transaction.
That's a fundamentally different job for the page.
The failure mode for a pre-sold visitor isn't that they don't know enough. It's that something on the page contradicts what the AI agent told them, or fails to confirm it quickly enough that their decision energy holds. The price is different to what was cited. The product description doesn't match the attributes the AI summarised. The size they need is showing as available but the specific variant page is broken or slow to load. Small discrepancies that would be background noise to a cold visitor land much harder when the visitor arrived expecting confirmation.
I've started thinking about this in terms of the verification moment. The pre-sold visitor needs a fast, frictionless confirmation that what they came to buy matches what they came expecting to buy. Steve Krug's principle ‘don't make me think’ was written for usability, but it applies here in a different register. For a cold visitor, "don't make me think" means don't confuse me about the interface. For a pre-sold visitor, it means don't make me reconsider. Any friction that reintroduces the question they'd already answered is friction that can lose the sale. If the page delivers fast verification, conversion is high. If it introduces doubt or contradiction, the decision collapses at the last moment. And because they arrived via what looks like direct traffic, you can't see the attribution clearly enough to diagnose it through the normal channels.
The four buyer types in a pre-sold state.
The buyer persona framework holds, but the specific needs shift. Long-form sales copy works for cold traffic because it gives every buyer type what they need in a single page. The methodical visitor reads the full specification, the spontaneous visitor gets caught by a headline and a strong CTA before they scroll past.
The pre-sold visitor didn't arrive for the copy. They arrived because the question was already answered.
The methodical buyer arriving cold needs detailed specifications, reviews, comparison data, evidence. The methodical buyer arriving pre-sold already has that. What they need is not to have their objections countered with voice-of-visitor copy, but to verify quickly: a fast match between what the AI cited and what the page shows. The schema in your product feed, the material composition, the sizing information and the technical attributes all need to match precisely. If the AI told them 320 gsm and your PDP says "heavyweight cotton," there's a discrepancy that will cost you. The methodical buyer notices it.
The humanistic buyer arriving cold needs brand confidence, social proof, the sense that other people have had a good experience. The humanistic buyer arriving pre-sold needs one clear confirming signal. Usually a recent review that mirrors the use case the AI described. If that review is there, visible early on the PDP, the decision holds. If your review section is buried or showing aggregate ratings without accessible individual reviews, that confirming signal isn't available quickly enough.
The competitive buyer has the least patience in both states, but arriving pre-sold they have even less. They came to buy, not to be persuaded. Anything that makes them stop and think, such as a slow variant selector, an unclear stock status, or an ambiguous call to action, introduces a delay. A blocked artery. These buyers are the most sensitive to checkout friction, and the most likely to exit without completing if something minor slows them down.
The spontaneous buyer's decision energy decays. Arriving cold, you build that energy through the page experience. Arriving pre-sold, the energy is already at its peak when they land. The job is not to build it. It’s to not let it decay. Speed matters here more than content richness. Every second of friction is decision energy they're spending without moving toward completion.
The Agentic Trust Layer connection.
This connects to something I've been building across this series. The Agentic Trust Layer is the infrastructure that determines how confidently an AI agent can recommend your brand and your products: entity signals, structured data, schema markup, product feed completeness, and review ecosystem health.
What I'd missed in framing this primarily as a discovery problem is how directly it affects conversion. If the Agentic Trust Layer is weak, the AI agent either doesn't recommend you or recommends you with less specificity. But even if you're being recommended, if the product data the AI cited (your GMC feed data, your schema, your product attributes) doesn't match what the visitor finds when they arrive, you've created the exact contradiction that collapses a pre-sold decision.
The fit, the composition, the sizing convention, the availability, the price. If these are accurate and specific in your product feed, the AI cites them accurately, and the visitor arrives expecting exactly what they find. If they're thin or inconsistent, the AI either doesn't cite them or the visitor finds a discrepancy.
Entity infrastructure and conversion rate are connected at this point. The same data that determines recommendation quality determines verification quality for the visitor who arrives expecting confirmation. These are not two separate problems that happen to touch the same feed. They're the same problem, measured at different points in the same journey.
How to use this data to your advantage.
A few practical things worth checking, in rough order.
First, build the GA4 segment and look at it. Direct sessions landing on PDPs, mobile and desktop, conversion rate and average session duration compared against your paid and organic baselines. I'm not saying the numbers will be dramatic. I'm saying the pattern is worth understanding before it gets larger. If the segment shows sessions with shorter dwell time but higher conversion than your cold traffic baselines, that's the signal worth investigating.
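To make the check concrete, here's a minimal sketch of the comparison in Python. The session rows, field names, and numbers are all illustrative assumptions, not GA4's own schema; in practice you'd pull these rows from your GA4 BigQuery export or a reporting API and compare the same three metrics per channel.

```python
from dataclasses import dataclass

# Illustrative session rows. Field names are assumptions for this sketch,
# not GA4's export schema.
@dataclass
class Session:
    channel: str       # "direct", "paid_search", "organic"
    landing_page: str  # first page of the session
    duration_s: float  # engagement time in seconds
    converted: bool

def segment_stats(sessions, channel):
    """Conversion rate and mean duration for sessions landing on a PDP."""
    seg = [s for s in sessions
           if s.channel == channel and s.landing_page.startswith("/products/")]
    if not seg:
        return None
    return {
        "sessions": len(seg),
        "conv_rate": sum(s.converted for s in seg) / len(seg),
        "avg_duration_s": sum(s.duration_s for s in seg) / len(seg),
    }

sessions = [
    Session("direct", "/products/jacket", 45, True),
    Session("direct", "/products/jacket", 50, True),
    Session("direct", "/products/tee", 40, False),
    Session("paid_search", "/products/jacket", 120, False),
    Session("paid_search", "/products/tee", 150, True),
    Session("organic", "/products/jacket", 130, False),
]

direct = segment_stats(sessions, "direct")
paid = segment_stats(sessions, "paid_search")

# The pre-sold signature: shorter dwell, higher conversion than cold baselines.
if direct and paid:
    if (direct["conv_rate"] > paid["conv_rate"]
            and direct["avg_duration_s"] < paid["avg_duration_s"]):
        print("Worth investigating: direct PDP sessions convert higher on less dwell.")
```

The point isn't the arithmetic, which is trivial; it's agreeing on the segment definition (direct entry, PDP landing) before you start comparing it against baselines.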
Second, ensure maximum product card coverage in Google Merchant Centre. Every product that should be surfaced in AI-powered shopping experiences needs to be approved, attributed, and complete in your GMC feed. Missing attributes, disapproved items, or thin product data limits the quality and specificity of what AI agents can cite about you. Coverage before quality. If the product isn't in the feed properly, none of the rest of this matters.
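A coverage audit like this can be sketched as a simple pass over your feed export. The attribute names below loosely mirror Merchant Centre's product data spec, but the required list, the `status` field, and the sample rows are assumptions for illustration; adapt them to whatever your feed actually contains.

```python
# Minimal coverage audit over a product feed export.
# REQUIRED is an illustrative choice; "status" is assumed to come from
# your own disapprovals export, not invented by the feed itself.
REQUIRED = ("title", "price", "availability", "material", "size")

def audit_feed(items):
    """Return, per product id, the gaps that limit what an AI agent can cite."""
    problems = {}
    for item in items:
        missing = [attr for attr in REQUIRED if not item.get(attr)]
        if item.get("status") == "disapproved":
            missing.append("(disapproved)")
        if missing:
            problems[item["id"]] = missing
    return problems

feed = [
    {"id": "SKU1", "title": "Heavyweight Tee", "price": "35 GBP",
     "availability": "in_stock", "material": "320 gsm cotton", "size": "M",
     "status": "approved"},
    {"id": "SKU2", "title": "Heavyweight Tee", "price": "35 GBP",
     "availability": "in_stock", "material": "", "size": "L",
     "status": "disapproved"},
]

print(audit_feed(feed))  # flags SKU2: empty material, disapproved status
```

Run on a real export, the output is a worklist: every product the rest of this section can't help until its feed entry is complete and approved.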
Third, run a verification audit on your hero PDPs. Not a full CRO audit. A verification audit. Go and ask an AI shopping agent for your product category and look at what the product card says about your hero products. Then compare that against what your PDP actually shows. Look for the specific discrepancies: price, material, sizing information, availability, product highlights. Those discrepancies are the failure points for pre-sold visitors, and they're invisible in standard analytics.
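The comparison step of that audit is mechanical once you've collected both sides by hand. A sketch, with field names and values that are purely illustrative:

```python
# Compare what an AI product card cited against what the PDP renders.
# Both dicts are hand-collected for this sketch; the fields are the
# failure points named above: price, material, sizing, availability.
def verification_diff(ai_card, pdp):
    """Return the fields where the PDP contradicts what the agent cited."""
    return {
        field: (cited, pdp.get(field))
        for field, cited in ai_card.items()
        if pdp.get(field) != cited
    }

ai_card = {"price": "£35", "material": "320 gsm cotton",
           "availability": "in stock", "sizing": "true to size"}
pdp = {"price": "£38", "material": "heavyweight cotton",
       "availability": "in stock", "sizing": "true to size"}

# Each mismatch is a candidate point where a pre-sold decision collapses.
for field, (cited, shown) in verification_diff(ai_card, pdp).items():
    print(f"{field}: agent cited {cited!r}, PDP shows {shown!r}")
```

Exact string matching is deliberately strict here; in practice you'd normalise currencies and units first, but a strict pass is a reasonable way to surface every candidate discrepancy before deciding which ones matter.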
Fourth, review your checkout for impatience. Competitive and spontaneous buyers arriving pre-sold have no tolerance for unnecessary steps, unclear progress indicators, or form fields that don't need to be there. This is standard CRO work, but it's worth doing with the pre-sold visitor specifically in mind because the reason they're abandoning is not that they weren't persuaded. It's that something introduced friction or doubt at the moment of commitment.
The CRO discipline was built on the premise that you know roughly how a visitor is arriving and what they need. The page is optimised to answer questions they're carrying when they land.
That premise is changing. The question has been answered before they get there. The job of the page is confirmation and frictionless completion rather than persuasion.
I'm not suggesting CRO stops applying or the fundamentals go away. Most traffic is still cold. Most conversion problems are still cold visitor problems. But there's a growing segment where the existing diagnostic approach is asking the wrong question.
The session looks short, the entry point is unusual, and nothing in the standard conversion analysis explains why it behaves the way it does.
The brands that connect the infrastructure work to the conversion work rather than treating entity signals, product feed completeness, and CRO as separate projects owned by separate people will have built something that compounds over the next couple of years. Good entity infrastructure produces more accurate product data, which produces higher quality pre-sold visitors, which arrive at pages built to confirm rather than persuade.
That loop exists whether you've noticed it or not. The question is whether you're building for it deliberately.