What we saw
In a research run on the category “analytics for mid-market e-commerce,” one of the brands being tested consistently landed in 4th–5th position in ChatGPT answers and barely appeared in Perplexity. This was unexpected: the brand is not small; it has a strong site with detailed documentation, an active blog, and several case studies with major clients. In traditional Google search, it ranks in the top 5 for its core category queries. By all the usual measures, this is a visible brand.
But in neutral AI100 scenarios, where the models were asked questions such as “what should I choose for a mid-sized online store with a small team,” the brand lost to competitors that were objectively less well known.
What turned out to be the cause
The analysis showed three overlapping problems.
The first and most important was a gap between the language of the site and the language of demand. The brand described itself as a “modular environment for intelligent commerce analytics.” Users, and the models reflecting their language, were asking about a “simple reporting service for an online store without a dedicated analyst.” Those two vocabularies barely intersected. Nowhere on the site was there a direct answer to the question “who is this for?” in the same words a user would use to describe the task.
The second problem was that the model reformulated the task into an adjacent category. Instead of “analytics for e-commerce,” the answer was built around “BI tools for small business.” As a result, the list included two solutions from a neighboring category that the brand did not consider competitors. This is the classic category drift described in a separate article in the corpus.
The third problem was update lag. Two months before the research run, the brand had launched a new pricing plan aimed at the mid-market segment. But the model still described the brand as a solution for large companies: information about the new plan had not yet propagated into external reviews or structured data.
What follows from this
The observation confirms several theses that the AI100 corpus describes as persistent.
Machine distinctness and human recognizability are different things. A brand that is well known to people can be functionally invisible to a model if its language does not match the language of the task.
Category drift happens before direct comparison between brands. The brand lost not to a competitor, but to someone else’s frame. The model first renamed the task and only then assembled a list within the new category.
Update lag is not an abstract delay. It is a concrete situation in which the brand has already changed a fact about itself, but the machine has not yet had time to see it.
What the brand could have done
Add a page to the site that directly answers the question “who is our product for?” in the language of the user’s task, not the internal marketing category. Make sure the new pricing plan is described not only on the pricing page but also in external reviews. Check that the structured markup (Product, Offer) reflects the current pricing plans and target audience; a sketch of what that markup might look like follows below.
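For orientation, here is a minimal sketch of schema.org Product/Offer markup in JSON-LD, the kind of structured data the last recommendation refers to. Every value in it (the product name, plan name, price, audience description) is a hypothetical placeholder, not the brand’s actual data.

```html
<!-- JSON-LD block, typically placed in the page <head>.
     All values below are hypothetical placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "ExampleAnalytics",
  "description": "Simple reporting service for a mid-sized online store without a dedicated analyst",
  "audience": {
    "@type": "BusinessAudience",
    "name": "Mid-market e-commerce teams"
  },
  "offers": {
    "@type": "Offer",
    "name": "Mid-Market Plan",
    "price": "99.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

The point is not the specific fields but that the current plan, its price, and the target audience live in a machine-readable place, so a crawler or answer engine can pick them up without having to parse marketing copy.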
A repeat research run in 6–8 weeks would show whether the picture had changed.