How to Track AI Rankings: Is It Even Possible?

By Matthew Arndt, Co-Founder & Chief Commercial Officer, ADvance Media

If you have spent any time around digital marketing in the last two years, someone has almost certainly tried to sell you on “AI ranking” reports. The pitch is familiar: just like you track your position in Google search results, you can now track where your practice shows up when patients ask AI tools for recommendations. Position 1 in ChatGPT. Top three in Google AI Overviews. It sounds logical.

There is one significant problem. It does not work that way, and any tool claiming to give you a reliable “ranking position” in AI is selling you a metric that does not exist. Understanding why requires stepping back from how we have always thought about search, and rethinking what visibility in an AI-driven world actually means.

SEO Rankings Have a Position. AI Results Do Not.

SEO rankings exist because a search engine returns an ordered list of results, and everyone who runs the same query sees roughly the same order. AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) operate in an entirely different environment. When a patient asks ChatGPT, Claude, or Google’s AI Overview a question like “What is the best treatment for crooked teeth?” or “Is LASIK still the best option for vision correction?”, the AI is not pulling from a ranked list of web pages in a fixed order. It is generating a response based on its training data, the content it can retrieve, and a probabilistic model of what a useful answer looks like. That process is, by design, variable.

The result is that “ranking” as a concept does not map onto how AI systems work. There is no position 1. There is no stable order. There is only whether your practice, your content, and your expertise are part of what the AI draws from when it constructs its answer, and that changes from query to query, user to user, and day to day.

The Research That Should Make Any “AI Ranking Tool” Vendor Nervous

In January 2026, Rand Fishkin and Patrick O’Donnell of Gumshoe.ai published what is arguably the most important piece of primary research on AI visibility to date. Working with 600 volunteers who ran 12 different prompts across ChatGPT, Claude, and Google’s AI tools a combined 2,961 times, their findings were striking.

AI tools returned the same list of brand recommendations less than 1 in 100 times when given identical prompts. The order of those recommendations matched even less frequently, less than 1 in 1,000 runs. And the number of items on any given list varied widely, sometimes producing two or three results, sometimes ten or more. Fishkin and O’Donnell’s conclusion was direct: these tools are probability engines, not ranking engines. They are designed to generate unique answers every time. Treating them as sources of consistent, trackable positions is, in Fishkin’s framing, “provably nonsensical.”

There is a second layer to this problem that goes beyond the variability in AI outputs themselves. Even when AI tools are prompted with the same underlying intent, real users almost never phrase their queries the same way. Someone researching rhinoplasty might ask “best plastic surgeon for a nose job near me,” while someone else asks “who does natural-looking rhinoplasty in [city]?” while a third person asks “is rhinoplasty worth it and how do I find a good surgeon?” Those are three different prompts, three different AI responses, and three different windows into whether your practice gets mentioned. Any tool that claims to capture that variability in a single ranked position number is compressing something genuinely complex into a number that carries false precision.

This does not mean AI visibility is untrackable or that building it does not matter. It means the question has to shift from “where do we rank?” to “are we showing up consistently enough, in enough places, to be part of the conversation when patients are making decisions?”

So What Can You Actually Track and Do?

Accepting that AI rankings are not real does not mean you are flying blind. It means you need a different measurement framework, one built around signals that are meaningful rather than numbers that feel precise but are not. Here are three things that are actually worth doing.

1. Use Brand Mentions as a Temperature Gauge, Not a Scorecard

You can get a directional sense of your AI visibility by periodically testing how AI tools respond to questions in your category. Ask ChatGPT or Claude who the top providers of LASIK are in your city. Ask Google’s AI Overview which orthodontists it recommends for adults in your market. Ask about rhinoplasty consultations. Note whether your practice appears, and how it is described when it does.

This is valuable, but treat it as a rough temperature gauge rather than a precise ranking. Because of the variability Fishkin’s research documented, a single test tells you very little. A pattern of tests over time starts to tell you something more useful: are you showing up at all, are you showing up frequently enough to matter, and when you do appear, is the way you are described accurate and positive? Monitoring brand mentions and sentiment in AI responses is a legitimate practice. Claiming any of it can be reduced to a “position number” is not.
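To make the “pattern of tests, not a single test” idea concrete, here is a minimal sketch of the math involved. The practice names, prompts, and response transcripts below are placeholders; in practice you would paste in (or collect via an API) the actual responses from repeated runs of the same question across ChatGPT, Claude, and AI Overviews, then look at the mention rate over time rather than any one run.

```python
def mention_rate(responses: list[str], practice_name: str) -> float:
    """Fraction of sampled AI responses that mention the practice at all.

    A single response proves little; a rate computed over many runs of
    the same prompt is the 'temperature gauge' reading.
    """
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if practice_name.lower() in r.lower())
    return hits / len(responses)


# Placeholder transcripts standing in for repeated runs of one prompt,
# e.g. "Who are the top LASIK providers in Austin?"
samples = [
    "Top LASIK providers in Austin include ClearView Eye and Lakeside Vision.",
    "You might consider Lakeside Vision or Hill Country Eye Center.",
    "Popular options include ClearView Eye and Westlake LASIK.",
]

print(f"{mention_rate(samples, 'Lakeside Vision'):.0%}")  # mentioned in 2 of 3 runs
```

The useful output is a trend line: if that percentage holds steady or climbs month over month across a stable set of prompts, you are becoming part of the conversation; a single run, high or low, tells you almost nothing.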

2. Build for AI Visibility Rather Than Trying to Measure It Like SEO

The most productive investment is not in tracking AI rankings; it is in building the kind of presence that makes AI systems more likely to include you when they generate responses. This comes down to two related efforts.

The first is technical: implementing structured data markup (schema) so that AI systems and search engines can clearly understand what your practice offers, where you are located, and what conditions or procedures you specialize in. Similarly, maintaining a well-structured llms.txt file on your website, a growing convention that helps large language models understand your content more accurately, is a practical step that relatively few practices have taken yet.
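As a rough illustration of what structured data markup looks like, here is a sketch that builds a schema.org JSON-LD block for a practice. The practice name, URL, and address are hypothetical placeholders; the schema.org `MedicalBusiness` type is real, and more specific types (such as `Dentist` or `Physician`) exist if they fit your practice better.

```python
import json

# Hypothetical practice details; swap in your own. The resulting JSON
# is embedded in a <script type="application/ld+json"> tag on the site.
practice = {
    "@context": "https://schema.org",
    "@type": "MedicalBusiness",
    "name": "Lakeside Vision",           # placeholder name
    "url": "https://example.com",        # placeholder URL
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Austin",
        "addressRegion": "TX",
    },
    "medicalSpecialty": "Ophthalmologic",
}

print(json.dumps(practice, indent=2))
```

The point of markup like this is not ranking; it is disambiguation. It tells crawlers and retrieval systems, unambiguously, what the practice is, where it is, and what it does, so that when an AI system assembles an answer, your details are machine-readable rather than inferred.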

The second is a content strategy built around Search Everywhere Optimization. AI systems do not pull information exclusively from your website. They draw from the full landscape of the internet: YouTube videos, Reddit discussions, review platforms, news coverage, social media, and third-party directories. A practice that has built genuine presence across multiple channels is more likely to be represented consistently in AI responses than one that has only optimized its website. The goal is not a single high-ranking page; it is a dense, corroborated presence across the sources AI systems trust.

3. Focus Your Energy on What You Can Actually Measure

This may be the most important point of all. The purpose of any marketing investment is not to achieve a position in a report. It is to generate consultations, starts, bookings, and revenue. Those are the numbers that matter, and they are the numbers you can reliably track.

How many leads is your practice generating across all channels? How many of those leads are converting to consultations? How many consultations convert to procedures or starts? If those conversion numbers are healthy and growing, your marketing is working, regardless of what any AI visibility report says. If they are flat or declining despite strong SEO metrics, that gap is exactly the signal that something has shifted in how patients are finding and evaluating you.
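Those three questions define a simple funnel, and the arithmetic is worth making explicit. Here is a minimal sketch with illustrative numbers (the counts are invented for the example, not benchmarks):

```python
def funnel_rates(leads: int, consults: int, starts: int) -> dict[str, float]:
    """Stage-to-stage conversion rates for a lead -> consult -> start funnel."""
    return {
        "lead_to_consult": consults / leads if leads else 0.0,
        "consult_to_start": starts / consults if consults else 0.0,
        "lead_to_start": starts / leads if leads else 0.0,
    }


# Illustrative monthly numbers only.
rates = funnel_rates(leads=200, consults=60, starts=24)
for stage, rate in rates.items():
    print(f"{stage}: {rate:.0%}")  # 30%, 40%, 12%
```

Tracked month over month, these three percentages are the reliable dashboard: unlike an AI “position,” they are stable, auditable, and tied directly to revenue.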

The practices that will be hardest to compete with over the next few years are not the ones obsessively checking an AI ranking dashboard. They are the ones building the kind of content, reputation, and multi-channel presence that earns consistent mentions in AI responses, and then confirming it is working by watching their consultation and booking numbers move.

The Bottom Line

AI ranking tools are selling a framework that feels familiar because it borrows the language of SEO, but it does not reflect how AI systems actually work. The research is clear: AI responses are variable by design, users search in wildly different ways, and there is no stable position to track.

If you are not sure where your practice stands on either the visibility side or the conversion side, that is exactly what the Growth Architecture Audit is designed to assess.

Your Next Step: The Growth Architecture Audit

The Growth Architecture Audit is a practical, focused review of your current marketing setup, including your procedure and FAQ content, your local visibility and review profile, your technical AEO and GEO readiness, and how your overall strategy stacks up against the way patients are researching and making decisions in 2026.

The audit is complimentary and is designed to be useful regardless of how you move forward. You can take the findings to your current agency, implement them with your in-house team, or work with Advance Media as your implementation partner. The goal is a clear, honest picture of where your opportunity is.

Request your complimentary Growth Architecture Audit at advancemedia.com/.
