The rise of AI-generated search and discovery forces marketers to measure the visibility of their products on those platforms. Many search optimizers reach for traditional metrics such as genAI referral traffic and ranking position in answers. Both fall short.
Traffic. Focusing on traffic obscures the purpose of AI responses: to satisfy a need on the spot, not to generate clicks.
AI-generated answers usually do not link to brand sites. Google's AI Overviews, for example, sometimes link product names to organic search listings rather than to the brand itself.
Visibility, then, does not equal traffic. A merchant's products can appear in an AI response and receive no clicks.
Brand names cited in Google’s AI Overviews often link to organic search listings, as in this example of North Face hiking boots.
Leaderboards. AI responses often contain ranked lists, and many sellers try to track those lists to stay at or near the top. However, reliably tracking such rankings is impossible.
AI responses are nondeterministic. A recent study by SparkToro found that AI platforms recommend different brands, in different orders, every time the same person asks the same question.
Better AI metrics
Here are better metrics for measuring AI visibility.
Product or brand placement in LLM training data
Training data is critical to AI visibility because large language models default to what they already know. Even when querying Google or other sources, LLMs often use their training data to shape their search terms.
It is therefore essential to monitor what LLMs have retained about your brand and competitors and, crucially, what is missing, incorrect, or outdated. Then focus on publishing the missing or corrected information on your website and across all owned channels.
Manual prompts in ChatGPT, Claude, and Gemini (at a minimum) will help identify gaps. Prompts could include:
- “What do you know about (MY PRODUCT)?”
- “Compare (MY PRODUCT) vs (MY COMPETITOR’S PRODUCT).”
Profound, Peec AI, and other AI visibility trackers can run these prompts on a schedule to monitor product placement over time.
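Before paying for a tracker, the audit prompts above can be generated programmatically and reused across platforms. A minimal sketch, assuming hypothetical product and competitor names:

```python
# Sketch: build the recurring gap-audit prompts for one product.
# "Acme Trail Runner" and "Summit Pro X" are placeholder names, not real data.

def build_audit_prompts(product: str, competitors: list[str]) -> list[str]:
    """Return the knowledge-check and comparison prompts for a product."""
    prompts = [f"What do you know about {product}?"]
    prompts += [f"Compare {product} vs. {rival}." for rival in competitors]
    return prompts

prompts = build_audit_prompts("Acme Trail Runner", ["Summit Pro X"])
for p in prompts:
    print(p)
```

The same prompt list can then be pasted into each chatbot manually or sent through each platform's API on a schedule.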
When using such visibility tools, keep in mind:
- AI trackers query LLMs through their APIs. Individuals often see different results owing to personalization and differences among AI models. API results are better for auditing training data because LLMs likely answer from that data (rather than live search) to save resources.
- The tools’ visibility scores depend entirely on the prompts. Keep branded prompts in a separate folder, as they will likely score near 100%. Focus instead on non-branded prompts that reflect the product’s value proposition. Prompts unrelated to the item’s key features will likely score 0%.
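The branded-vs-non-branded separation can be sketched in a few lines. This assumes you have already checked each LLM response for a brand mention (True/False); the prompts and results below are illustrative:

```python
# Sketch: score branded and non-branded prompts separately, since branded
# prompts inflate visibility scores. Data is made up for illustration.

def visibility_scores(results: dict[str, list[bool]], brand: str) -> dict:
    """results maps each prompt to a list of per-response mention flags."""
    branded, generic = [], []
    for prompt, mentions in results.items():
        bucket = branded if brand.lower() in prompt.lower() else generic
        bucket.extend(mentions)
    score = lambda xs: round(100 * sum(xs) / len(xs), 1) if xs else None
    return {"branded": score(branded), "non_branded": score(generic)}

results = {
    "What do you know about Acme boots?": [True, True, True],
    "best waterproof hiking boots": [True, False, False],
}
print(visibility_scores(results, "Acme"))
# → {'branded': 100.0, 'non_branded': 33.3}
```

As the article notes, the non-branded number is the one worth watching: branded prompts score near 100% almost by definition.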
Most cited sources
Increasingly, LLM platforms perform live searches when responding to prompts. They may query Google or Bing (yes, organic search boosts AI visibility) or browse other sources such as Reddit.
Citations from those live searches, such as articles or videos, influence the AI’s responses. But citations vary widely because LLMs fan prompts out into different (often unrelated) search queries, so trying to appear in every cited source is unrealistic.
Still, prompts often surface the same influential sources repeatedly. Those are worth pursuing for mentions of your brand or product. AI visibility trackers can collect the most-cited URLs for your brand, product, or industry.
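If a tracker (or your own API logging) exports the citation URLs, tallying the most-cited domains is straightforward. A sketch with made-up URLs:

```python
# Sketch: tally the most frequently cited domains across AI responses
# to find the recurring influential sources. URLs are illustrative.
from collections import Counter
from urllib.parse import urlparse

def top_cited_domains(citation_urls: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Return the n most common domains among the cited URLs."""
    domains = [urlparse(u).netloc for u in citation_urls]
    return Counter(domains).most_common(n)

citations = [
    "https://www.reddit.com/r/hiking/comments/abc",
    "https://example-reviews.com/best-boots",
    "https://www.reddit.com/r/CampingGear/comments/xyz",
]
print(top_cited_domains(citations))
# → [('www.reddit.com', 2), ('example-reviews.com', 1)]
```

Domains that recur across many unrelated prompts are the ones worth pitching for a brand or product mention.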
Brand mentions and brand search volume
Use Search Console or another traditional analytics tool to track:
- Queries that include your brand name or variations.
- Clicks from those queries.
- Impressions from those queries. The more AI answers mention a brand name, the more people will search for it.
In Search Console, create a filter in the Performance section to view data for branded queries.
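The same branded filter can be applied to exported performance data. A minimal sketch, assuming rows shaped like a Search Console export and an illustrative brand regex that catches a common misspelling:

```python
# Sketch: replicate a branded-query filter on exported performance rows.
# The rows and the "a[ck]me" brand pattern are made up for illustration.
import re

def branded_totals(rows: list[dict], brand_pattern: str) -> dict:
    """Sum clicks and impressions for queries matching the brand regex."""
    rx = re.compile(brand_pattern, re.IGNORECASE)
    hits = [r for r in rows if rx.search(r["query"])]
    return {
        "queries": len(hits),
        "clicks": sum(r["clicks"] for r in hits),
        "impressions": sum(r["impressions"] for r in hits),
    }

rows = [
    {"query": "acme trail boots", "clicks": 12, "impressions": 340},
    {"query": "akme boots review", "clicks": 3, "impressions": 90},
    {"query": "best hiking boots", "clicks": 5, "impressions": 800},
]
print(branded_totals(rows, r"a[ck]me"))
# → {'queries': 2, 'clicks': 15, 'impressions': 430}
```

Trending this total month over month is a rough proxy for AI-driven brand demand: AI answers rarely send clicks directly, but they do prompt branded searches.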