
Attribution systems vs commercial reality

Every attribution model answers a different question. Most ecommerce teams don't realise that the question their model is answering has nothing to do with their actual business problem.

Series: Your metrics lie

The common explanation for declining marketing efficiency is that competition has increased and costs have risen. In practice, that isn't what's happening in most cases.

What's actually happening is that attribution systems are reporting a version of reality that diverges further from commercial truth with every additional channel, touchpoint, and platform change.

Understanding why requires looking at the mechanism, not the metrics.

What attribution actually measures

An attribution model assigns credit for a conversion to one or more touchpoints. Last-click gives all credit to the final interaction. Data-driven distributes it probabilistically. Multi-touch spreads it across the journey.
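To see how differently these rules carve up the same journey, here is a minimal sketch. The touchpoint names and the position-based weights are illustrative, and real data-driven models fit their weights statistically rather than by fixed rule:

```python
# Illustrative credit assignment for a single conversion path.
# Touchpoints and weights are hypothetical; production data-driven models
# estimate credit probabilistically from large volumes of observed paths.

journey = ["organic_search", "review_site", "retargeting_ad"]

def last_click(path):
    # All credit to the final interaction.
    return {path[-1]: 1.0}

def linear_multi_touch(path):
    # Credit spread evenly across the journey.
    return {tp: 1.0 / len(path) for tp in path}

def position_based(path):
    # A common heuristic: 40% first touch, 40% last, 20% split over the middle.
    credit = {tp: 0.0 for tp in path}
    credit[path[0]] += 0.4
    credit[path[-1]] += 0.4
    for tp in path[1:-1]:
        credit[tp] += 0.2 / max(len(path) - 2, 1)
    return credit

for model in (last_click, linear_multi_touch, position_based):
    print(model.__name__, model(journey))
```

Same journey, three different verdicts about which channel "worked". None of them says anything about causation.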

None of these models measure what the business actually needs to know: which spend created demand that would not have existed otherwise?

That question — incrementality — requires a fundamentally different measurement approach. Attribution answers "who gets credit?" Incrementality answers "what caused the purchase?"

These are not the same question. In most organisations, they're treated as if they are.

The three distortion mechanisms

Attribution data diverges from commercial reality through three specific mechanisms.

1. Signal loss from consent and platform fragmentation

iOS privacy changes, cookie deprecation, and cross-device behaviour have reduced the signal available to attribution platforms. Google's own data shows that observable conversion paths have shortened — not because journeys are shorter, but because large portions of the journey are now invisible.

The result is that attribution models assign credit based on what they can see, which biases toward channels that operate within closed ecosystems. Meta reports Meta-attributed conversions. Google reports Google-attributed conversions. Neither accounts for the other's influence. Both report success. The sum exceeds actual revenue.

In most multi-channel ecommerce operations, the total attributed revenue across all platforms exceeds actual revenue by 30% to 60%. This isn't fraud. It's the structural consequence of each platform attributing from its own perspective.
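A toy reconciliation makes the over-count concrete. The figures below are invented, but the arithmetic is the same one a finance team runs at month end:

```python
# Hypothetical month: each platform claims every conversion it touched,
# so journeys that crossed both ecosystems are counted twice.
actual_revenue = 1_000_000       # what the finance system records
meta_attributed = 700_000        # Meta-attributed conversion revenue
google_attributed = 750_000      # Google-attributed conversion revenue

total_attributed = meta_attributed + google_attributed
overcount = total_attributed / actual_revenue - 1
print(f"Attributed vs actual: +{overcount:.0%}")  # +45%, inside the 30-60% range
```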

2. Temporal compression

Attribution windows are typically 7 or 28 days. But purchase decisions for considered products — fashion, furniture, electronics, health — often span weeks or months.

A customer researches a product via organic search in January, reads a review in February, clicks a retargeting ad in March, and buys. The attribution model credits the retargeting ad. The organic content that created the initial awareness receives nothing.
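A sketch of the mechanism, assuming a 28-day lookback window; the dates are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical journey for a considered purchase.
touchpoints = [
    ("organic_search", date(2024, 1, 10)),
    ("review_site",    date(2024, 2, 14)),
    ("retargeting_ad", date(2024, 3, 20)),
]
conversion_date = date(2024, 3, 22)
window = timedelta(days=28)

# Anything outside the window never enters the model's view.
visible = [(tp, d) for tp, d in touchpoints if conversion_date - d <= window]
print(visible)  # only the retargeting ad survives; last-click credits it 100%
```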

This systematically undervalues awareness-stage channels and overvalues conversion-stage channels. It explains why cutting brand spend rarely produces an immediate revenue drop — the measurement system never credited brand with the revenue it generated, so removing it doesn't register until weeks later when the top of the funnel dries up.

3. Survivorship in optimisation loops

When teams optimise toward attributed performance, they create a feedback loop. Channels that report well get more budget. Channels that report poorly get less. Over time, the portfolio converges on the channels that are best at claiming credit, not necessarily the channels that create the most value.
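A stylised simulation shows how fast the loop converges. The channel properties below are invented to illustrate the dynamic, not measured from any real portfolio: one channel creates more incremental value per unit of spend, the other is better at claiming credit.

```python
# Hypothetical channels: 'brand' creates more incremental value but claims
# little credit; 'retargeting' claims credit efficiently.
channels = {
    "brand":       {"budget": 50.0, "credit_per_unit": 0.4, "incremental_per_unit": 1.2},
    "retargeting": {"budget": 50.0, "credit_per_unit": 1.5, "incremental_per_unit": 0.3},
}

for _ in range(10):
    # Each period, reallocate budget in proportion to attributed performance.
    attributed = {c: v["budget"] * v["credit_per_unit"] for c, v in channels.items()}
    total = sum(attributed.values())
    for c, v in channels.items():
        v["budget"] = 100.0 * attributed[c] / total

incremental = sum(v["budget"] * v["incremental_per_unit"] for v in channels.values())
print({c: round(v["budget"], 1) for c, v in channels.items()})
print(f"incremental value: {incremental:.1f}")
# Budget converges almost entirely on the channel best at claiming credit,
# and total incremental value falls relative to the starting allocation.
```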

This is why many mature ecommerce brands find themselves spending 70%+ of budget on bottom-funnel performance channels while wondering why customer acquisition costs keep rising. They optimised for measurement efficiency, not business efficiency.

What incrementality measurement looks like

The alternative is incrementality testing — structured experiments that isolate the causal effect of a channel or campaign.

The simplest form is geographic holdout testing. You suppress a channel in one region and compare outcomes against a control region. The difference in performance, after controlling for baseline variation, is the incremental contribution.
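Assuming you have pre- and post-period revenue for a matched test and control region, the core calculation is a simple difference-in-differences. The numbers here are hypothetical, and a real test layers significance checks and pre-registered decision rules on top:

```python
# Hypothetical weekly revenue, in thousands.
test_pre,    test_post    = 200.0, 170.0   # channel suppressed in post period
control_pre, control_post = 210.0, 205.0   # channel left running

# Control for baseline drift: what the test region would have done anyway,
# scaled by the control region's own pre-to-post movement.
expected_test_post = test_pre * (control_post / control_pre)
incremental = expected_test_post - test_post
print(f"Incremental weekly contribution: {incremental:.1f}k")  # ~25.2k
```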

This is operationally harder than reading a dashboard. It requires pre-registration, statistical rigour, and the willingness to suppress spend for the duration of the test. Most teams resist it because the results are frequently uncomfortable — channels that looked efficient under attribution often show much lower incremental impact.

But the teams that run these tests consistently make better capital allocation decisions. They know which spend is creating demand and which is harvesting demand that already existed.

The commercial consequence

The wrong attribution model doesn't give you wrong answers. It gives you accurate answers to the wrong question, and so optimises the wrong system.

If you measure channel efficiency, you'll build a channel-efficient organisation. If you measure contribution margin per incremental customer, you'll build a commercially efficient one.
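As a rough sketch of the second metric, with placeholder inputs:

```python
# Hypothetical inputs for one channel over one period.
spend = 80_000.0
incremental_customers = 400          # from holdout testing, not attribution
revenue_per_customer = 350.0
variable_cost_per_customer = 140.0   # COGS, fulfilment, payment fees

margin_per_incremental_customer = revenue_per_customer - variable_cost_per_customer
cac_incremental = spend / incremental_customers
print(margin_per_incremental_customer - cac_incremental)  # 10.0: barely profitable
```

A channel can look excellent on attributed CAC and still fail this test.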

These are different organisations. They make different decisions about where to invest, which products to promote, which markets to enter, and how to allocate headcount.

The measurement system isn't a reporting layer on top of the business. It is the control system that determines how the business allocates resources. When that system is miscalibrated, every decision downstream inherits the bias.

This is why most optimisation efforts plateau. They improve the system they have instead of changing the system they need.