Why genAI search is as bad for shoppers as it is for marketers

GenAI search engines promise the “best” information, but deliver shallow, incomplete answers that frustrate buyers and shortchange sellers.

GenAI search products like AI Overview are as bad for buyers as they are for marketers. Where traditional search once provided pages of possible vendors, the new tools collapse results into a single answer declared “best” for reasons unknown. It looks authoritative, but hides much more than it reveals — leaving sellers invisible and buyers uninformed.

One of the many problems with genAI search is that it gives the finger to the invisible hand of the marketplace: the “best” choice is picked by algorithm, not competition. The algorithm selects what a mathematical formula finds most “helpful” to the user, excluding many options that are as good as or better than the AI’s “best.”

I discovered this searching for a particular type of first aid kit. I’m working to become a certified EMT — not changing careers, just wanting to be prepared for emergencies. That means having the right gear, in this case, an individual first aid kit (IFAK), which is for trauma, not first aid. The kits include a tourniquet for severe limb bleeding, hemostatic agents and gauze for wound packing, a chest seal for open chest wounds and other things I hope I never need.

Dig deeper: How AI decisioning will change your marketing

North American Rescue is the consensus gold standard in this space. But many other companies also make solid kits, and I wanted to know my options. So I asked my subscription version of Google Gemini: “What U.S. companies offer kits comparable to North American Rescue’s Ready Every Day (RED) Personal Kit?”

How the bots did

I ran the same query across multiple genAI tools. The results looked less like “search” and more like roulette:

  • Gemini (paid): First gave four companies plus three “helpful” suggestions about what an IFAK should contain; a second ask produced eight; a third returned 14; asking for a complete list in a table dropped it back to eight.
  • ChatGPT Pro: Started with five products; when pushed for more, returned five different ones, including a standard home first-aid kit; on a third try it listed eight, some of them larger team packs that weren’t comparable.
  • Perplexity (free): Began with five, excluding anything labeled “tactical” for some reason; added five more on the second try; a “complete list” request had only seven of those 10.
  • Claude (free): First answer had three kits plus a list of competitors; second answer repeated the same three in more detail; third answer jumped to 10 with less detail.
  • DeepSeek (free): Went 2 → 11 → 19 across three asks.
  • Qwen (free): Initially denied the NAR RED kit existed; after correction, returned six, then 25 kits.

That’s some human-quality mansplaining.

Across these tests, I collected 71 different “comparable” kits: 54 appeared only once and just 17 appeared more than once. Eight manufacturers had kits on three or more lists. Congratulations to TacMed Solutions, the only company with kits named by all five genAI search bots.
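The overlap, or lack of it, is easy to measure with a simple tally. A minimal sketch, using hypothetical, illustrative kit-maker names (not the actual lists from the test), of how counting mentions across bots surfaces the lone consensus pick and the sea of one-off answers:

```python
from collections import Counter

# Hypothetical example lists for illustration; the real test spanned
# six genAI tools and surfaced 71 distinct "comparable" kits.
bot_lists = {
    "Gemini":     ["TacMed", "MyMedic", "Dark Angel"],
    "ChatGPT":    ["TacMed", "Blue Force Gear", "Rescue Essentials"],
    "Perplexity": ["TacMed", "MyMedic", "Live The Creed"],
}

# How many bots named each maker
mentions = Counter(kit for kits in bot_lists.values() for kit in kits)

# Makers named by every bot vs. makers named exactly once
consensus = [kit for kit, n in mentions.items() if n == len(bot_lists)]
singletons = [kit for kit, n in mentions.items() if n == 1]

print(consensus)        # ['TacMed']
print(len(singletons))  # 4
```

Run against the real results, a tally like this is what shows 54 of 71 kits appearing exactly once, which is the roulette problem in a single number.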

Dig deeper: How AI reads your brand and why meaning matters most

That’s really, really bad. All of these services should warn users about the quality of the search results. Put it right next to the warning that sometimes the AI will lie to you.

Regular search isn’t much better

Unfortunately, plain old search engine results aren’t much better. On my query for kits comparable to the NAR RED kit, Google’s first page had 11 unpaid links, only five relevant. DuckDuckGo showed 13 unpaid links, six relevant. Bing had six unpaid links, three relevant — the best hit ratio, but buried in ads.

Marketers’ struggle to get genAI search to pick their brand over all others is a symptom of a much larger problem. Google’s monopoly on search killed innovation. Search now focuses more on being an ad platform (another Google monopoly) than on delivering good results. So far, there’s no indication that genAI is trying to improve on that.

Links to videos and Reddit forums are popular and sometimes useful. However, they are there because of an algorithm setting, not because the user asked for them. AIs prioritize providing answers that are helpful first, harmless second and accurate third. The answers to my queries failed because the AI defines helpful as the lowest common denominator.

A note to the folks who make genAI search engines: We don’t want the mathematically most frequent data presented under the guise of helpfulness. We want the answer to our question.

P.S. I bought the NAR kit because of its quality, and because they were running a great sale.

How to get better search results from LLMs

You can get better results from AIs. Here are ways to see what ChatGPT and Gemini are excluding. For other AIs, just ask them how to find out what is being left out and why.

ChatGPT: Save a reusable instruction so it’s transparent when lists are shortened.

  1. Type this: “Please save this as a reusable prompt called Data Transparency.”
  2. Then, paste: “When asked for lists, data, or examples, do not silently shorten or filter the output. If you provide only part of the data, explicitly state that the list is incomplete and explain why you limited it (e.g., too many total items, space constraints, duplication, or relevance). Always estimate the approximate scale of the full set (dozens, hundreds, thousands) before presenting a subset. Clarify your selection criteria (e.g., most cited, most recent, most relevant). Never hide the reasons for truncation or prioritization — always disclose them clearly to the user.”
  3. Before a request where you want this applied, type: “Use Data Transparency.”
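If you reach the model through an API rather than the chat UI, the same transparency rule can ride along with every request as a system message. A minimal sketch, assuming the common chat-messages payload convention (a shortened version of the instruction above; the actual network call is omitted):

```python
# Condensed version of the "Data Transparency" instruction saved above
DATA_TRANSPARENCY = (
    "When asked for lists, data, or examples, do not silently shorten or "
    "filter the output. If you provide only part of the data, explicitly "
    "state that the list is incomplete and explain why. Estimate the "
    "approximate scale of the full set before presenting a subset, and "
    "clarify your selection criteria."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the transparency rule as a system message."""
    return [
        {"role": "system", "content": DATA_TRANSPARENCY},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(
    "What U.S. companies offer kits comparable to the NAR RED Personal Kit?"
)
print(messages[0]["role"])  # system
```

Because the rule travels as a system message, it applies automatically instead of requiring you to remember to type “Use Data Transparency” before each request.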

Google Gemini: You can’t permanently save prompts, but you can press it to explain how it chose results by using this prompt:

“Regarding the results provided in your last response, please detail the following three criteria that defined the search scope, and explain how each may have caused companies or data points to be excluded:

  1. Temporal Scope: What was the beginning and ending date range for the data considered?
  2. Inclusion/Exclusion Criteria: What were the minimum requirements (e.g., size, revenue, activity level, or primary business focus) used to include an entity, and what common types of entities would this have specifically excluded?
  3. Source/Geographic Limitations: What specific databases, regions, or publicly available information sources were utilized, and what are the known biases or limitations of those sources?”

The post Why genAI search is as bad for shoppers as it is for marketers appeared first on MarTech.
