What Most Agencies Get Wrong About Attribution

Attribution isn’t a model problem—it’s a system problem

The default response

When attribution doesn’t make sense, most agencies respond the same way:

  • switch models
  • compare platforms
  • build new reports

They ask:

“Which attribution model should we use?”

It feels like the right question.

It isn’t.

Attribution is not a reporting feature

Attribution isn’t something you choose.

It’s something that emerges from how your system is defined.

It reflects:

  • how data is collected
  • how sessions are defined
  • how users are identified
  • how events are structured

Long before any model is applied.

Where it breaks

Agencies operate inside tools like:

  • Google Analytics 4
  • ad platforms
  • reporting dashboards

Within them, they accept:

  • default definitions
  • black-box logic
  • fragmented identity

Then attempt to “fix” attribution in reporting.

Why this fails

Attribution is not a reporting feature.

It is the outcome of upstream decisions.

If those decisions are inconsistent:

  • models won’t align
  • platforms will disagree
  • reports will contradict

No model can resolve that.

The illusion of model choice

  • First-click
  • Last-click
  • Data-driven
  • Position-based

These feel like strategic choices.

But when the inputs are inconsistent:

you’re applying different interpretations to unstable data

The output changes.

The problem doesn’t.
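A minimal sketch makes this concrete. The journey and channel names below are illustrative assumptions, not data from any real account; each "model" is just a different interpretation of the same touchpoints.

```python
# The same journey, three models, three different answers.
# Channel names and journey order are illustrative assumptions.
journey = ["paid_search", "email", "organic", "paid_social"]

def first_click(touchpoints):
    # All credit to the first touch.
    return {touchpoints[0]: 1.0}

def last_click(touchpoints):
    # All credit to the last touch.
    return {touchpoints[-1]: 1.0}

def linear(touchpoints):
    # Equal credit to every touch.
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

print(first_click(journey))  # → {'paid_search': 1.0}
print(last_click(journey))   # → {'paid_social': 1.0}
print(linear(journey))       # → every channel gets 0.25
```

The outputs disagree by design. But if `journey` itself is unreliable, because identity or sessions were defined inconsistently upstream, every one of these outputs is an interpretation of bad input.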

Where attribution is actually determined

Attribution becomes meaningful only when the system is defined.

It depends on:

  • how identity is handled
  • how sessions are structured
  • how events are defined
  • how logic is applied across systems

Where agencies go wrong

They optimize for:

  • speed
  • outputs
  • platform alignment

Instead of:

  • consistency
  • structure
  • definitional clarity

The result:

  • dashboards that look correct
  • numbers that don’t agree

The actual failure point

Attribution breaks when:

  • identity is fragmented
  • sessions are inconsistent
  • events are loosely defined
  • channels are unstructured

At that point:

attribution isn’t inaccurate

it’s undefined
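A small sketch of the identity case, with invented device IDs and channels. When one person shows up under two identifiers and there is no shared key, the journey never exists as a single object for any model to score.

```python
from collections import defaultdict

# One person, two device identifiers, no shared identity key.
# All IDs, channels, and events are illustrative assumptions.
events = [
    {"user_id": "device_A", "channel": "paid_search"},
    {"user_id": "device_B", "channel": "email"},
    {"user_id": "device_B", "channel": "direct", "converted": True},
]

# Group events into "journeys" by whatever identifier the system has.
journeys = defaultdict(list)
for e in events:
    journeys[e["user_id"]].append(e["channel"])

print(dict(journeys))
# → {'device_A': ['paid_search'], 'device_B': ['email', 'direct']}
# The converting journey appears to start at email. paid_search is
# stranded on a separate "user". No model applied downstream can
# reattach it, because the journey itself was never defined.
```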

Why platform comparisons fail

Agencies try to reconcile:

  • Google Ads
  • Google Analytics
  • other tools

But each platform:

  • defines users differently
  • reconstructs journeys differently
  • applies its own logic

So:

attribution becomes interpretation layered on inconsistency
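One source of that inconsistency is definitional: something as basic as a session timeout changes what each platform counts. The sketch below uses invented hit timestamps and timeout values to show that two reasonable definitions produce different session counts from identical data, before any attribution logic runs.

```python
# The same clickstream under two session timeout definitions.
# Timestamps (minutes) and timeouts are illustrative assumptions.
hits = [0, 10, 50, 55, 120]  # minutes since first hit

def count_sessions(hits, timeout):
    # A new session starts whenever the gap between hits
    # exceeds the timeout.
    sessions = 1
    for prev, cur in zip(hits, hits[1:]):
        if cur - prev > timeout:
            sessions += 1
    return sessions

print(count_sessions(hits, timeout=30))  # → 3
print(count_sessions(hits, timeout=60))  # → 2
```

Neither answer is wrong. They are different definitions, and every attribution number downstream inherits the choice.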

What good attribution looks like

Not:

  • perfect agreement across platforms

But:

  • consistent logic within your system
  • transparent definitions
  • reproducible results

When the system is stable:

  • attribution becomes explainable
  • differences become understandable
  • decisions become more reliable

What this leads to

If attribution doesn’t make sense, the issue isn’t the model.

It’s the system behind it.

Optimizing attribution without addressing the system only makes the problem harder to see.

Final thought

Most agencies believe:

attribution is about choosing the right model

In reality:

attribution reflects how well your system is defined

If this feels familiar

If attribution:

  • changes depending on the report
  • requires constant explanation
  • leads to debate instead of decisions

You don’t have a model problem.

You have a system definition problem.

The next step

Before changing models or comparing platforms, you need to understand how your system is actually behaving.

An Evaluate engagement identifies:

  • where attribution is being distorted
  • how inconsistencies are introduced
  • what is required to improve reliability

Start with Evaluate.

Doug McCaffrey
Designs and maintains analytics systems that remain reliable over time.