Why Attribution Models Don’t Fix Broken Data

Changing the model doesn’t fix the system behind it

The default assumption

When the attribution numbers don’t make sense, the fix seems obvious:

Change the model.

  • last-click
  • first-click
  • data-driven
  • position-based

The expectation is that a better model will produce better answers.

It doesn’t.

What attribution models actually do

Attribution models don’t create data.

They interpret it.

They take:

  • recorded events
  • identified users
  • defined sessions

…and assign credit based on a set of rules.

If the inputs are incomplete or inconsistent, the output will reflect that.

Changing the model changes the distribution—not the truth

Switching models will change your numbers.

  • channels gain or lose credit
  • performance appears to shift
  • reports tell a different story

But nothing about the underlying data has improved.

You’re applying different logic to the same inputs.
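A small sketch makes the point concrete: three common models applied to the same recorded journey. The channel names and the 40/20/40 position split here are illustrative, not any specific vendor’s implementation.

```python
from collections import defaultdict

def last_click(path):
    """100% of the credit to the final touchpoint."""
    return {path[-1]: 1.0}

def first_click(path):
    """100% of the credit to the first touchpoint."""
    return {path[0]: 1.0}

def position_based(path):
    """40% to the first touch, 40% to the last, 20% spread across the middle."""
    if len(path) == 1:
        return {path[0]: 1.0}
    credit = defaultdict(float)
    credit[path[0]] += 0.4
    credit[path[-1]] += 0.4
    for channel in path[1:-1]:
        credit[channel] += 0.2 / (len(path) - 2)
    return dict(credit)

journey = ["paid_search", "email", "direct"]
for model in (last_click, first_click, position_based):
    split = model(journey)
    # The split changes with the model; the total credit never does.
    print(model.__name__, split, "total:", round(sum(split.values()), 6))
```

Each model distributes exactly one conversion’s worth of credit. Switching between them reshuffles the split across channels, but the inputs — and everything wrong with them — stay the same.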

Incomplete data doesn’t become complete

Attribution models operate within constraints.

If your data includes:

  • missing transactions
  • fragmented user identity
  • inconsistent event definitions
  • tracking gaps across devices or sessions

…then every model is working with a partial view.

Changing the model does not fill those gaps.

It only redistributes what is already there.
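A tracking gap shows this directly. In the illustrative sketch below, the final direct visit happened on a device that was never stitched to the user, so no model ever sees it:

```python
def last_click(path):
    """100% of the credit to the final *recorded* touchpoint."""
    return path[-1]

true_journey = ["paid_search", "email", "direct"]
# The last visit came from another device and was never stitched in:
recorded_journey = true_journey[:-1]

print(last_click(true_journey))      # direct
print(last_click(recorded_journey))  # email
# Email quietly absorbs direct's credit. First-click or position-based
# would redistribute the same incomplete journey differently, but none
# of them can restore a touchpoint that was never recorded.
```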

Why this creates false confidence

Model changes often feel like progress.

They produce:

  • cleaner-looking reports
  • more intuitive channel splits
  • numbers that align more closely with expectations

But this is alignment with assumptions—not reality.

The system remains unchanged.

Where attribution is actually determined

Attribution is not created in the model.

It is determined upstream—by how your system is defined.

It depends on:

  • how events are structured
  • how users are identified
  • how sessions are defined
  • how logic is applied across systems

Why models still matter

Attribution models are not useless.

They provide:

  • a consistent way to interpret data
  • a framework for comparison
  • a lens for decision-making

But they are only as reliable as the data they interpret.

A good model applied to inconsistent data produces inconsistent results.

What actually improves attribution

Attribution improves when the system improves.

That means:

  • reducing data loss
  • aligning identity across sessions and platforms
  • standardizing event definitions
  • applying consistent logic
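As one illustration of standardizing event definitions, a normalization step like this gives every downstream model identical inputs. The field names, units, and schema here are assumptions for the sketch, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PurchaseEvent:
    user_id: str        # one identity, aligned across platforms
    session_id: str     # one session definition, applied consistently
    channel: str        # one naming convention, not per-tool variants
    revenue_cents: int  # one unit, no floats-vs-strings drift

def normalize(raw: dict) -> PurchaseEvent:
    """Standardize a raw event before it reaches any attribution model."""
    return PurchaseEvent(
        user_id=str(raw["user_id"]),
        session_id=str(raw["session_id"]),
        channel=raw.get("channel", "unknown").lower(),
        revenue_cents=int(round(float(raw["revenue"]) * 100)),
    )
```

When every platform’s events pass through one definition like this, model differences become differences in interpretation — not artifacts of inconsistent inputs.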

When the system is stable:

  • models become more meaningful
  • differences become explainable
  • decisions become more reliable

What this leads to

If attribution doesn’t make sense, changing the model is not the solution.

It’s a change in interpretation—not an improvement in accuracy.

Reliable attribution comes from a system that produces consistent, complete, and aligned data.

The next step

Before evaluating attribution models, you need to understand how your data is actually being produced.

An Evaluate engagement identifies:

  • where data is incomplete or inconsistent
  • how attribution is being distorted
  • what is required to improve reliability

From there, model choice becomes meaningful—because the system behind it is stable.

Start with Evaluate.

Doug McCaffrey
Designs and maintains analytics systems that remain reliable over time.