Where the thinking came from

The Five Layers of Commercial Intent did not arrive fully formed. It developed through years of work at the point where search behaviour, commercial strategy, and real buying decisions intersect — and through repeated encounters with the same problem: organisations optimising for signals that did not represent their commercial weight. These are some of the moments that shaped the methodology.

Why search volume is the wrong starting point

Early in my time working in SEO, I inherited responsibility for a monthly report covering a large retail client's keyword performance. The report tracked over 30,000 keywords, ordered by search volume. Which meant the keyword at the very top of the report, month after month, was a category term the business stocked almost incidentally — low margin, limited range, not commercially central to anything they were trying to do.

It sat at the top because it had the largest search volume. Not because it mattered most. Those are not the same thing, and in that report they were treated as if they were.

The hardest part of understanding demand is not finding signals. It is deciding which ones deserve attention.

What struck me wasn't that the keyword appeared in the data. Any large retail category will generate search volume around products at its edges. What surprised me was how easily the genuinely useful signal — the demand that actually reflected commercial health — disappeared behind the volume of noise being presented as insight.

My background before SEO was in paid search, where the discipline is almost the reverse: strip everything away until only the signals that drive decisions remain. So not long after inheriting that report, I restructured it entirely. Instead of tracking 30,000 keywords, I focused on roughly 1,400 — selected specifically for category demand, transactional intent, product relevance, and revenue potential.
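That kind of restructure can be sketched in code. This is a hypothetical illustration only: the field names, thresholds, and data are invented for the sketch, not the original report's criteria — the point is that each keyword must pass every commercial filter, rather than ranking by volume.

```python
# Hypothetical sketch of the report restructure: reduce a large keyword
# set to the signals that carry commercial meaning. Field names and
# thresholds are illustrative, not the original criteria.

def filter_keywords(keywords):
    """Keep only keywords that pass every commercial filter."""
    return [
        kw for kw in keywords
        if kw["category_demand"]            # maps to a stocked category
        and kw["intent"] == "transactional"
        and kw["product_relevance"] >= 0.7  # relevance score, 0-1
        and kw["est_monthly_revenue"] > 0   # modelled revenue potential
    ]

keywords = [
    {"term": "garden furniture", "category_demand": True,
     "intent": "transactional", "product_relevance": 0.9,
     "est_monthly_revenue": 12000, "search_volume": 4400},
    {"term": "what is rattan", "category_demand": True,
     "intent": "informational", "product_relevance": 0.8,
     "est_monthly_revenue": 0, "search_volume": 27000},
]

tracked = filter_keywords(keywords)
# The higher-volume informational term drops out; the commercially
# weighted term stays, despite its lower volume.
```

Sorting the output by volume afterwards is still useful — the difference is that volume orders the shortlist instead of defining it.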

That smaller, more deliberate set of signals became the foundation for years of successful strategic work. The lesson it left was simple but persistent: volume tells you a market exists. It does not tell you what the market is actually there for. Most analysis stops at the first observation. The second is where strategy earns its value.

How good ideas become templates — and stop working

During a complex site migration for a large e-commerce client, we faced a familiar problem: restructuring a site's architecture creates short-term confusion for search engines. Pages that had established meaning lose context. Signals that pointed clearly in one direction become ambiguous.

To help manage this, we started adding supporting content to category and subcategory pages — not primarily for users, but to give search engines clearer signals about what each page represented and how it connected to the wider site structure. It worked. The migration completed with minimal visibility loss, and the approach quickly developed beyond its original purpose.

Content placed above product listings began setting the primary intent of the page. Content placed below addressed underserved layers of intent — the questions and contexts that the navigation and product grid alone couldn't answer. Category pages started performing far beyond what their product range alone might have supported. The approach had a name internally: enhanced content. Not because it meant more content, but because it enhanced the intent a page could serve.

The solution had become part of the process. The thinking that created it had quietly left the room.

What happened next is probably familiar to anyone who has worked in a fast-moving agency environment. The idea worked, so it spread. Enhanced content appeared across more clients, more campaigns, more page types — as a deliverable rather than a diagnosis. The question shifted from "does this page have an underserved intent that content could address?" to "which pages need their enhanced content this month?"

Some pages genuinely benefited. Others would have been better served by completely different improvements. But by then the solution had separated from the thinking that produced it. It had become a template.

Good ideas often start as thoughtful responses to specific problems. When they work, they get systematised. When they get systematised, the problem-solving that created them disappears — and the solution keeps running on its own momentum long after the conditions that made it effective have changed. Recognising when a solution has become a habit, and asking whether the original problem still exists, is one of the harder disciplines in strategic work.

Search behaviour shows you the market. Buying behaviour shows you what it is actually there for.

Several years into working in SEO, I produced a demand analysis for a business in a specialist retail sector. Standard approach: gather keyword data across the category, filter for relevance, group by theme, map the patterns. From that, a picture of the market emerged. Search behaviour suggested people were entering the category through a wide range of decorative and design-led interests — styles, aesthetics, specific product types — and the analysis mapped those interests into what looked like a coherent demand landscape.

When I presented the findings, the client listened, then said something that immediately changed how I understood the situation. He explained that while search behaviour reflected how people explored the category, the vast majority of their actual sales resolved in an entirely different place — a single product type that dominated conversions to a degree that the keyword data gave almost no indication of.

The analysis wasn't wrong. It was just answering a different question from the one that mattered.

The search data was showing me how people moved through the category in discovery mode. The conversion data was showing me how they actually resolved their decision. Those are not the same journey, and they do not point to the same strategy. Had we followed the demand analysis alone, we would have prioritised the parts of the market that looked interesting — the high-volume, design-led exploration — at the expense of the part that drove commercial outcomes.

That moment forced a lasting shift in how I approached demand analysis. The question stopped being "what are people searching for?" and became "what decision are they actually trying to make?" — and then "what is driving that decision beneath the surface?" Intent classification, as it is usually practised, answers the first question and calls it done. The second and third questions are where the commercially significant insight usually lives. Volume shows you the shape of the market. The decision behind the volume shows you what the market is actually worth.

The point where the task looks complete is not always the point where the real problem has been found

Some years into managing technical SEO teams, I oversaw an audit for a large e-commerce site. The initial brief was fairly typical — a day's work to review indexation and identify surface-level issues. After the first day, it was clear the surface had barely been scratched. Early signals suggested something more complex might be at work, and the scope expanded to include deeper log file analysis to understand how the site was actually being crawled and interpreted.

The deeper the investigation went, the more questions it surfaced. The original scope had long since been exceeded. The work continued not because of scope creep but because the problem had not yet revealed itself. It took close to forty hours before the underlying issue became clear — a structural problem affecting indexation in a way that would have been almost impossible to identify without the extended investigation.

Diagnostic work gets treated like production work far too often. The output is visible. The depth of the investigation that produced it usually isn't.

Had the work stopped when the task technically appeared complete, the root cause would have remained hidden. The site would have continued underperforming against a backdrop of surface-level fixes that addressed symptoms without touching the cause.

When the client subsequently went through a competitive review process, several technical issues were raised by other parties. We had already found them — and had already identified the deeper problem none of the others had reached. The audit strengthened the relationship and led to a significant expansion of the engagement.

What stayed with me from that experience was not the technical detail but the principle it illustrated. In diagnostic work, the appearance of completeness is often the most dangerous point — because it creates pressure to stop before the real finding has been made. The task that looks finished and the problem that is actually solved are frequently not the same thing.

Why intent classification is the beginning of the analysis, not the end of it

Most marketing teams would tell you that search volume alone does not drive their strategy. They would point to intent classification — commercial terms separated from informational ones, transactional queries prioritised, journey stages mapped to content. That work is real and it matters. But there is a step almost universally skipped, and it is the step where commercial weight is actually determined.

Intent classification tells you where someone is in a journey. It does not tell you who they are, what is driving their decision, or whether they represent commercial weight worth building around. Consider a single high-volume query in the home services category. Classified as commercial intent — fine. But inside that query sits a landlord facing a tenancy deadline, a homeowner without heating in January, someone doing preliminary research before winter sets in, and a first-time buyer trying to understand what they have inherited with their property.

Same query. Four different people. Four different levels of urgency. Four different likelihoods of acting. A strategy built on the aggregate serves the average — and the average does not exist.

This is the limitation of intent classification as it is normally practised. It identifies the category of behaviour but not the distribution of need within it. High commercial intent at the aggregate level can conceal enormous variation in urgency, price sensitivity, and likelihood to act across the groups that make up that aggregate.
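The distribution argument can be made concrete with a back-of-envelope decomposition. Everything here is invented for the sketch — the segment shares and conversion likelihoods are illustrative assumptions, not measured figures; in practice they would come from audience and conversion research.

```python
# Illustrative decomposition of one "commercial intent" query into the
# segments hidden inside it. Shares and conversion likelihoods are
# invented assumptions, standing in for real audience research.

segments = [
    {"who": "landlord, tenancy deadline",   "share": 0.15, "p_convert": 0.40},
    {"who": "homeowner, no heating in Jan", "share": 0.10, "p_convert": 0.60},
    {"who": "pre-winter researcher",        "share": 0.55, "p_convert": 0.05},
    {"who": "new homeowner, orienting",     "share": 0.20, "p_convert": 0.02},
]

volume = 10_000  # aggregate monthly searches for the single query

# Expected conversions per segment: volume x share x likelihood of acting.
for s in segments:
    s["expected"] = volume * s["share"] * s["p_convert"]

total = sum(s["expected"] for s in segments)
# With these assumed numbers, the two smallest segments (25% of the
# volume) carry roughly four fifths of the expected conversions, while
# the majority of the volume contributes almost nothing.
```

The aggregate metric reports one number for all four groups; the decomposition shows where the commercial weight actually sits, which is the question the brief should be answering.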

The businesses that get this right do not abandon their volume metrics — the board still sees volume growth, the commercial director sees conversion improvement. Both happen, because decomposing demand properly does not replace execution strategy. It fixes the brief the execution is built from. And the brief, in most organisations, is where the real problem lives.

When the report looks healthy and the strategy isn't

Retail understood a long time ago that footfall is a starting point, not a conclusion. A busy store with the wrong people converting poorly is not a commercial success — it looks like one. The numbers go up. The reports look healthy. Somewhere in the background, the business is optimising for visitors who were never going to buy anything meaningful. The industry responded by building entire technology categories to solve it: dwell time analysis, demographic mapping, journey tracking. Billions invested in understanding not just how many people came through the door, but who they were, why they came, and what would make them act.

Because the number alone was never enough.

Digital marketing has not learned the same lesson. Search volume is still treated as the primary signal for most briefs. High volume means opportunity. The brief is written around it, strategy follows, execution begins — and nobody stops to ask who those people actually are.

The industry has become skilled at performing an understanding of audiences. The performance and the understanding are not always the same thing.

Consider a market like home improvement services, where the dominant search query looks price-driven on the surface. Build around cost comparisons, promote competitive quotes, lead with value. But community signals and review data tell a different story: the people using that query are not primarily price-sensitive. They are trust-sensitive. They are using price research as a proxy — the only dimension they can investigate without exposing themselves to a sales process they already distrust. Same query. Completely different underlying need. A brief built on the volume signal alone misses it entirely.

The result of building on the wrong signal is a pattern I think of as vanity theatre. High-volume demand that does not convert gets over-resourced. High-weight demand that converts well gets missed. Visibility grows. Engagement metrics look healthy. The conversion picture tells a different story — but by the time that story is legible, the budget has already been spent on the wrong thing.

Volume tells you a market exists. Commercial weight tells you which parts of it are worth building around. Most strategies are built on the first observation. The second is where the brief should start.