AI Visibility Report
How to launch a run, interpret metrics, and turn insights into actions.
AI Visibility Report shows how often your brand appears in generative AI answers, in what context it is mentioned, and how it compares to competitors.
Core principle: AI answers are stochastic. The same question can produce different outputs, so the report uses repeated sampling and probability-based metrics instead of a single fixed rank.
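To make this concrete, here is a minimal sketch of probability-based measurement. `ask_llm` is a hypothetical stand-in for whatever provider call a run actually makes; it is not part of the product.

```python
def mention_probability(ask_llm, prompt: str, brand: str, n_samples: int = 20) -> float:
    """Estimate how often a brand appears across repeated samples of one prompt."""
    hits = 0
    for _ in range(n_samples):
        answer = ask_llm(prompt)  # stochastic: the same prompt can yield different text
        if brand.lower() in answer.lower():
            hits += 1
    return hits / n_samples  # a probability estimate, not a single fixed rank
```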
When to use it
- Before a campaign, to capture baseline visibility.
- After content, PR, or product updates, to measure impact.
- For regular competitor benchmarking by market and language.
- For client-facing AI SEO / GEO reporting.
What to prepare
- Brand: target brand to analyze.
- Locale: market/language.
- Base intent: natural user-style prompt.
- Competitors: ideally 3-10 direct competitors.
- LLM sources: providers/models to include.
- Sample profile: speed vs depth mode.
Tip: write Base intent as a real user question, not a keyword list. A configuration sketch follows below.
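Taken together, these inputs form a run configuration. The sketch below is illustrative only; the field names and example values are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RunConfig:
    brand: str              # target brand to analyze
    locale: str             # market/language, e.g. "en-US"
    base_intent: str        # natural user-style prompt, not a keyword list
    competitors: list[str]  # ideally 3-10 direct competitors
    llm_sources: list[str]  # providers/models to include
    profile: str = "balanced"  # "fast" | "balanced" | "deep"

config = RunConfig(
    brand="Acme",
    locale="en-US",
    base_intent="What is the best project management tool for a small team?",
    competitors=["Asana", "Trello", "Basecamp"],
    llm_sources=["openai:gpt-4o", "anthropic:claude"],
)
```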
How to launch a run
- Open AI Visibility.
- Fill in the Launch a run form.
- Choose profile:
- Fast: quick diagnostic check.
- Balanced: default trade-off.
- Deep: maximum depth, longer runtime.
- Click Run report.
- Track progress in Run history, then open run detail.
Typical runtime is around 10-20 minutes, depending on profile and selected sources; the sketch below shows one way to picture why.
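Deeper profiles draw more samples per source, which tightens the estimates but lengthens the runtime. The sample budgets below are invented for illustration; the real per-profile counts are not documented here.

```python
# Hypothetical per-profile sample budgets (illustrative numbers, not documented values).
PROFILE_SAMPLES = {"fast": 5, "balanced": 15, "deep": 40}

def total_samples(profile: str, n_sources: int) -> int:
    """A run's total draw: per-source budget times selected sources."""
    return PROFILE_SAMPLES[profile] * n_sources

print(total_samples("deep", 3))  # 120 samples: tighter estimates, longer runtime
```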
How to read the run detail page
KPI block
- Share of voice: the target brand's share of all brand mentions.
- Confidence: consistency of answers across samples.
- Volatility: variance across sampled outputs.
- Sample coverage: how complete and parseable the sample set is.
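Under common definitions (an assumption; the product may compute these differently), the four KPIs above can be sketched from a list of parsed samples:

```python
from statistics import mean, pstdev

def kpis(samples: list[dict], target: str) -> dict:
    """samples: one dict per sampled answer, e.g.
    {"parsed": True, "mentions": {"Acme": 2, "Asana": 1}} (hypothetical shape)."""
    parsed = [s for s in samples if s["parsed"]]
    coverage = len(parsed) / len(samples) if samples else 0.0

    total = sum(sum(s["mentions"].values()) for s in parsed)
    target_hits = sum(s["mentions"].get(target, 0) for s in parsed)
    sov = target_hits / total if total else 0.0

    # Per-sample indicator: did the target appear at all in this answer?
    appears = [1.0 if s["mentions"].get(target) else 0.0 for s in parsed]
    confidence = mean(appears) if appears else 0.0             # consistency across samples
    volatility = pstdev(appears) if len(appears) > 1 else 0.0  # spread across outputs

    return {"share_of_voice": sov, "confidence": confidence,
            "volatility": volatility, "sample_coverage": coverage}
```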
Provider summary
Shows per-source contribution:
- Sample count
- Parsed count
- Source-level SoV
- Target mention volume
Use it to identify where your brand is strongest or weakest.
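The provider summary is the same computation grouped by source. A sketch, assuming each sample records a `source` key (an assumed field, not a documented one):

```python
from collections import defaultdict

def provider_summary(samples: list[dict], target: str) -> dict:
    """Per-source sample count, parsed count, SoV, and target mention volume."""
    by_source: dict[str, list[dict]] = defaultdict(list)
    for s in samples:
        by_source[s["source"]].append(s)

    summary = {}
    for source, group in by_source.items():
        parsed = [s for s in group if s["parsed"]]
        total = sum(sum(s["mentions"].values()) for s in parsed)
        target_vol = sum(s["mentions"].get(target, 0) for s in parsed)
        summary[source] = {
            "samples": len(group),
            "parsed": len(parsed),
            "sov": target_vol / total if total else 0.0,
            "target_mentions": target_vol,
        }
    return summary
```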
Competitors table
Shows mention volume, share, and average confidence for each competitor, plus newly discovered competitors surfaced during extraction.
Use this to prioritize real market competition in AI answers, not just your initial competitor list.
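Newly discovered competitors amount to a set difference between extracted brand names and your initial list. A sketch under that assumption:

```python
from collections import Counter

def competitor_table(samples: list[dict], target: str, known: set[str]):
    """Tally every extracted brand; flag brands missing from the initial list."""
    counts: Counter = Counter()
    for s in samples:
        if s["parsed"]:
            counts.update(s["mentions"])
    discovered = {b for b in counts if b != target and b not in known}
    return counts, discovered  # feed `discovered` into the next run's competitor list
```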
Sentiment and sources
- Sentiment: tone around target-brand mentions.
- Top sources/domains: domains AI responses cite most often.
If visibility rises but sentiment drops, this is a reputation issue, not a pure growth signal.
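The top-sources view reduces to counting cited domains. This sketch assumes each sample carries a `citations` list of URLs, which is an assumption about the data shape, not a documented field:

```python
from collections import Counter
from urllib.parse import urlparse

def top_domains(samples: list[dict], k: int = 10) -> list[tuple[str, int]]:
    """Count the domains cited most often across sampled answers."""
    counts: Counter = Counter()
    for s in samples:
        for url in s.get("citations", []):  # "citations" is an assumed field
            counts[urlparse(url).netloc] += 1
    return counts.most_common(k)
```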
Verbatims
Representative snippets let you quickly validate quality and context without reading every sample.
How to interpret warnings
Warnings are analytical signals, not just technical errors.
Common cases:
- No/low samples: widen sources or simplify base intent.
- Target not detected: verify brand spelling, locale, and prompt framing.
- Low confidence: rerun with Balanced or Deep, then compare.
- New competitors found: include them explicitly in the next run.
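Read as rules, these warnings are threshold checks over the run's own metrics. The cutoffs below are invented for illustration, not the product's actual thresholds:

```python
def run_warnings(k: dict, discovered: set[str]) -> list[str]:
    """Map run metrics to the signals above; thresholds here are illustrative only."""
    warnings = []
    if k["sample_coverage"] < 0.5:
        warnings.append("No/low samples: widen sources or simplify base intent.")
    if k["share_of_voice"] == 0.0:
        warnings.append("Target not detected: verify spelling, locale, prompt framing.")
    if k["confidence"] < 0.3:
        warnings.append("Low confidence: rerun with Balanced or Deep, then compare.")
    if discovered:
        warnings.append(f"New competitors found: include {sorted(discovered)} next run.")
    return warnings
```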
Recommended workflow
- Run baseline on priority locales.
- Implement content/reputation updates.
- Rerun with comparable settings.
- Compare SoV, confidence, sentiment, and source mix.
- Update content plan, FAQ, comparisons, and PR priorities.
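Comparing a baseline to a rerun comes down to KPI deltas between runs launched with comparable settings. A minimal sketch, reusing the `kpis` output from the earlier example:

```python
def compare_runs(baseline: dict, rerun: dict) -> dict:
    """KPI deltas between two runs launched with comparable settings."""
    metrics = ("share_of_voice", "confidence", "volatility", "sample_coverage")
    return {m: round(rerun[m] - baseline[m], 3) for m in metrics}

# A rising share_of_voice with stable confidence reads as a real gain;
# a rise with falling sentiment is the reputation case flagged above.
```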
Important limitations
- Results are probabilistic, not deterministic ranks.
- One run is directional; a trend across multiple runs is more reliable.
- Different AI sources can produce different market pictures.
Export and sharing
Once a run completes, export a PDF from the run detail page for client updates and internal reviews.