The New Shortlist Nobody Talks About
Before a buyer fills out your demo form, something else happens first.
They open ChatGPT and type something like "what's the best project management tool for a distributed engineering team." Or they search Perplexity for "HubSpot alternatives for early-stage SaaS." Or they ask Google and get an AI Overview summarising the top options before a single link appears.
AI gives them a shortlist. Two, maybe three products get named. The rest don't exist.
73% of B2B buyers now use AI tools in their research process, yet most SaaS companies are still optimising exclusively for the old version of search — the one where you rank for a keyword and someone clicks your link. That playbook still matters, but it no longer covers the full picture of how your buyers find you.
This post is about the other half. How AI systems actually decide which products to recommend, and what you can do to be one of them.
First, Understand That Each Platform Works Differently
One of the biggest mistakes SaaS teams make when approaching AI visibility is treating ChatGPT, Perplexity, and Google AI Overviews as interchangeable.
They're not. They have meaningfully different citation architectures, and a strategy optimised for one won't automatically work for the others.
An analysis of 680 million citations across all three platforms found that only 11% of domains are cited by both ChatGPT and Perplexity. Eleven percent. That's not overlap — that's two almost entirely separate ecosystems.
Here's how they break down:
ChatGPT draws heavily from its training data, which includes Wikipedia, licensed publisher content, and Reddit threads with meaningful engagement. When browsing is enabled, it pulls from Bing's index. It tends to favour encyclopedic, structured, authoritative content. Brand mentions have a stronger correlation with ChatGPT visibility than backlinks — brands with high branded search volume get cited more consistently.
Perplexity is a live web retrieval engine at its core. Every query triggers a real-time search, and Reddit is its most-cited domain at 46.7% of top citations. Perplexity users skew toward researchers and technical buyers who want verifiable answers with sources. It cites nearly twice as many real-time sources as ChatGPT and is significantly more responsive to fresh, community-sourced content.
Google AI Overviews are the most conservative of the three. They have the strongest correlation with traditional search rankings — around 93% of citations link back to content that already ranks in Google's top results. Getting into AI Overviews is largely an extension of getting your traditional SEO right, with additional emphasis on structured data, schema, and content that directly answers specific questions.
Understanding these differences matters because it tells you where to prioritise effort based on where your buyers actually spend their research time.
The Consensus Signal: Why Multi-Source Presence Is Everything
Here's the underlying logic that ties all three platforms together.
AI systems don't recommend brands based on a single strong signal. They look for consensus — multiple independent, credible sources converging on the same answer.
Analyses of AI citation patterns consistently show that when your product appears across Reddit discussions, G2 reviews, industry roundups, YouTube content, and third-party publications — all with similar positioning — AI systems gain confidence in recommending you. When you only exist on your own website, AI treats your claims with skepticism. There's no independent verification, so it defaults to recommending brands with broader presence.
This is why brands with strong community presence, review volume, and press mentions get cited far more than brands with technically superior websites and stronger domain authority.
The practical implication: getting recommended by AI is fundamentally an off-site problem, not an on-site one.
How to Get ChatGPT to Recommend You
ChatGPT's recommendations are shaped primarily by what it learned during training, supplemented by live web search when browsing is enabled.
For training-based visibility, the highest-leverage actions are brand mentions across authoritative sources. Wikipedia inclusion matters significantly for ChatGPT — it's the single most-cited domain in ChatGPT responses. For most SaaS companies, a Wikipedia entry is only feasible once you have meaningful press coverage and third-party references, but it's worth tracking as a longer-term goal.
More immediately actionable: getting your brand mentioned in high-authority industry publications, comparison content on established tech sites, and analyst reports in your category. These are the types of sources that end up in training data and influence how ChatGPT's parametric knowledge — its baseline understanding of your category — represents your brand.
For live browsing visibility, the logic shifts toward traditional SEO. If your content ranks well in Bing (which ChatGPT's search mode uses), you'll surface in ChatGPT responses more frequently. This means clean technical SEO, strong backlinks to your key pages, and content that directly answers buyer questions.
Reddit is also disproportionately weighted in ChatGPT's training data. Threads about your category on r/SaaS, r/entrepreneur, and vertical subreddits — where your product gets mentioned in context — feed both training-time and retrieval-time visibility.
How to Get Perplexity to Recommend You
Perplexity is the most transparent of the three platforms for optimisation purposes. Because it shows its sources inline, you can actually see what it's pulling from when it answers a question about your category.
The single most effective thing you can do for Perplexity visibility is build authentic presence in relevant Reddit communities. Perplexity cites Reddit at 46.7% of its top citations — no other platform comes close. Active, helpful participation in subreddits where your buyers ask questions is directly traceable to Perplexity recommendations in a way that's hard to replicate through any other channel.
Beyond Reddit, Perplexity responds well to fresh, recently updated content. Because it retrieves in real time, content freshness matters more here than in ChatGPT. Pages that were last updated two years ago compete poorly against recently refreshed content on the same topic.
Perplexity also rewards citation density in your own content. Including links to credible external sources — research studies, industry data, authoritative publications — signals to Perplexity that your content is well-sourced and trustworthy enough to pass on to its users.
Clear attribution matters here too. Named authors with verifiable credentials, linked LinkedIn profiles, and clear expertise signals all help Perplexity assess whether your content is worth citing.
How to Get Into Google AI Overviews
Google AI Overviews are the most closely tied to traditional search performance, which is both good and bad news depending on where your SEO stands.
The good news: if you already rank well on Google, you have a real path into AI Overviews without starting from scratch.
The bad news: Google AI Overviews are selective. They typically cite only three to five sources per query, and they strongly favour established domain authority. Smaller or newer SaaS brands often find AI Overviews the hardest AI platform to break into, while Perplexity and ChatGPT offer more opportunity.
What actually moves the needle for AI Overviews specifically:
Schema markup. FAQ schema, HowTo schema, and Article schema help Google extract and understand your content for AI summarisation. Pages with structured data are significantly more likely to be cited.
Answer-first content structure. Google's extraction system pulls the most quotable, self-contained passages from a page. If your most useful information is buried in the middle of a long article, it gets skipped. Lead with the direct answer, then support it.
E-E-A-T signals. Experience, Expertise, Authoritativeness, and Trustworthiness are built into Google's evaluation framework. Real author bylines, credentials, original research, and external citations in your content all reinforce these signals.
Comparison and listicle content. Listicle-format content accounts for 59.5% of all URLs cited by AI search engines in many SaaS categories. "Best X for Y" and "Top tools for Z" content gets pulled disproportionately into AI Overviews compared to other formats.
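The schema markup point above is easiest to see concretely. As a minimal sketch, FAQ schema is usually embedded as a JSON-LD payload in a page's head; here it's built as a Python dict so the structure is explicit. The question and answer copy is illustrative, not a template.

```python
import json

# Minimal FAQPage JSON-LD, the structured-data type Google documents
# for question-and-answer content. Question/answer text is a placeholder.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best project management tool for distributed teams?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "For distributed engineering teams, prioritise async-first "
                        "workflows, timezone-aware scheduling, and deep Git integration.",
            },
        }
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

The same pattern extends to HowTo and Article schema: one top-level `@type`, then nested entities Google can parse without guessing at your page layout.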
The Content Architecture That Works Across All Three
Despite the differences between platforms, there are content principles that improve your visibility across all three simultaneously.
Lead with the answer. Every piece of content should open with a direct, clear answer to the question it's addressing. Don't make AI systems work to extract your key point — put it front and centre. Pages with what researchers call "answer capsules" — tight, self-contained paragraphs that directly answer a query — achieve significantly higher citation rates across all platforms.
Include specific data. Original statistics, named outcomes, concrete numbers. Specific data is one of the most reliable signals of citability across all AI systems. Content with original research or proprietary data consistently outperforms generic content on the same topic.
Write in natural language. AI systems have become very good at detecting content written for keyword density rather than human comprehension. Conversational, specific, direct writing outperforms over-optimised prose. Write the way your best sales engineer explains something to a prospect.
Keep paragraphs extractable. Structure your content so that individual paragraphs of 40–60 words can stand alone as answers. AI retrieval works at the passage level — a dense 800-word wall of text is much harder to cite than the same information broken into clean, discrete sections.
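The 40–60 word guideline above is easy to audit mechanically. A rough sketch, assuming paragraphs are separated by blank lines and the thresholds are taken as soft guidance rather than hard rules:

```python
def flag_paragraphs(text, lo=40, hi=60):
    """Flag paragraphs outside the rough 'answer capsule' word range."""
    results = []
    # Paragraphs are assumed to be separated by blank lines.
    for i, para in enumerate(p for p in text.split("\n\n") if p.strip()):
        words = len(para.split())
        if words < lo:
            results.append((i, words, "too short to stand alone"))
        elif words > hi:
            results.append((i, words, "consider splitting"))
    return results
```

Run it over a draft (`flag_paragraphs(open("draft.md").read())`) and treat the flags as prompts to review, not errors: an intentionally short transition paragraph is fine.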
Use clear headings that match buyer questions. Your H2s and H3s function as signals to AI systems about what each section covers. Format them as questions or direct descriptors — not creative titles that don't communicate what the content answers.
The Off-Site Work That Actually Moves the Needle
Given that 85% of AI citations come from third-party sources, the highest-leverage GEO work for most SaaS teams happens off your own domain.
G2 and Capterra reviews. These platforms are heavily cited by AI systems for SaaS category queries. A consistent stream of genuine reviews — not just a burst campaign — builds the kind of sustained presence AI systems interpret as social proof.
Industry roundups and comparison articles. Find the "best X tools" articles that already rank well and are already being cited by AI in your category. Getting included in those articles is more valuable than publishing a new article yourself. Direct outreach to the authors with a clear value proposition for why you belong in the list is a perfectly reasonable approach.
Press and media mentions. According to Muck Rack's research, up to 89% of citations within LLMs come from earned media. Product launches, data studies, and thought leadership that gets picked up by industry publications build the third-party mention footprint that AI systems treat as credibility.
YouTube. It's underused by SaaS brands and increasingly important for AI visibility. Ahrefs analysis of 75,000 brands found YouTube mentions have the highest correlation with AI visibility — a 0.737 correlation coefficient — across all platforms tracked. Even basic tutorial content, product walkthroughs, and thought leadership videos build a presence that AI systems weight heavily.
Reddit participation. Covered in depth elsewhere on this blog, but essential for Perplexity in particular. Authentic community presence in relevant subreddits creates a citation pipeline that compounds over time.
How Long Does This Take?
Most SaaS teams begin seeing results from AI citation optimisation within four to eight weeks of implementing the structural changes — schema markup, content restructuring, robots.txt configuration for AI crawlers.
The community-building and earned media work takes longer. Expect six to twelve months of consistent Reddit participation and PR activity before it reliably translates into AI citations.
The freshness-sensitive platforms — particularly Perplexity — reward you faster. A well-structured piece of content that earns Reddit traction can surface in Perplexity answers within days.
The key metric to track in the meantime is branded search volume. When AI visibility is working, more people encounter your name in AI answers and then Google you directly. That branded search lift typically shows up in Google Search Console before any other measurable signal.
Start Here If You're Starting From Zero
If this is new territory for your team, do these four things before anything else:
Run a baseline audit. Search for ten of your most important buyer questions across ChatGPT, Perplexity, and Google. Write down who appears, who doesn't, and what sources get cited. That's your map.
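The baseline audit above works best when you log observations in a consistent shape you can rerun monthly. A minimal sketch, assuming a hand-run process where you ask each buyer question in each platform and record what you saw (the filename, column set, and sample row are all hypothetical):

```python
import csv, os
from datetime import date

# Hypothetical column set for the baseline audit; adjust to taste.
FIELDS = ["date", "platform", "query", "brands_named", "sources_cited", "we_appear"]

def log_audit_rows(path, rows):
    """Append audit observations to a CSV, writing the header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerows(rows)

# One observation per platform per query; rerun on a schedule to see movement.
log_audit_rows("ai_visibility_audit.csv", [{
    "date": date.today().isoformat(),
    "platform": "perplexity",
    "query": "best CRM for early-stage SaaS",   # hypothetical buyer question
    "brands_named": "HubSpot; Pipedrive",
    "sources_cited": "reddit.com; g2.com",
    "we_appear": "no",
}])
```

A spreadsheet does the same job; the point is that the same queries get logged the same way every time, so changes in who appears are visible.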
Check whether AI crawlers can access your site. Look at your robots.txt file and make sure you're not blocking GPTBot, ClaudeBot, PerplexityBot, or Google-Extended. Plenty of SaaS sites inadvertently block AI crawlers and wonder why they don't appear in AI answers.
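The robots.txt check above can be done with Python's standard-library robot parser rather than reading the file by eye. The sample robots.txt below is illustrative; point the same function at your own file's contents.

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def check_ai_crawlers(robots_txt: str) -> dict:
    """Return {user_agent: allowed?} for the site root, given robots.txt text."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, "/") for bot in AI_CRAWLERS}

# Illustrative robots.txt that blocks GPTBot but leaves everyone else alone.
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(check_ai_crawlers(sample))
```

Any `False` in the output for a crawler you want indexing you is the problem to fix.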
Fix your entity footprint. Make sure your brand description is consistent across your website, LinkedIn, Crunchbase, G2, and any major directories. Inconsistency is a fast fix that has an outsized impact on AI recognition.
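One rough way to spot the inconsistency described above: paste each profile's brand description into a dict and compare them pairwise. The descriptions, names, and 0.8 threshold below are all hypothetical; this is a quick triage sketch, not a scoring system.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical brand descriptions collected by hand from each profile.
descriptions = {
    "website": "Acme is an incident management platform for engineering teams.",
    "linkedin": "Acme is an incident management platform for engineering teams.",
    "crunchbase": "Acme helps DevOps teams respond to outages faster.",
}

def inconsistent_pairs(descriptions, threshold=0.8):
    """Flag profile pairs whose descriptions diverge sharply in wording."""
    flagged = []
    for a, b in combinations(descriptions, 2):
        ratio = SequenceMatcher(None, descriptions[a], descriptions[b]).ratio()
        if ratio < threshold:  # 0.8 is an arbitrary cut-off, not a standard
            flagged.append((a, b, round(ratio, 2)))
    return flagged

print(inconsistent_pairs(descriptions))
```

Here the Crunchbase copy would be flagged against both of the others, which is exactly the kind of drift worth aligning.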
Identify your biggest citation gap. Find the one or two articles or comparison pages AI is already citing most often in your category. If you're not in them, that's your highest-priority outreach.
Everything else builds from there.