The Attribution Readiness Scorecard

By Timo Dechau · Last updated March 25, 2026

You’ve just read through 14 pitfalls. Some probably felt familiar. Some might have prompted an uncomfortable “wait, are we doing that?” moment. Good. That discomfort is the beginning of building something better.

Before you jump into building attribution — multi-touch models, marketing mix analyses, all the sophisticated stuff — you need to honestly assess where you stand. Not where you wish you stood. Where you actually are, right now, with the data and infrastructure you have today.

These five questions emerged from working with dozens of teams on exactly this problem. They’re the questions that separate teams who build attribution that works from teams who build attribution that looks impressive in a slide deck but nobody trusts.

Sit with each one. If you can, discuss them with your team. The conversation is as valuable as the answers.


1. Do you have defined use cases beyond GA4’s interface?
This is question zero. If you can’t articulate what you need from BigQuery that GA4’s interface doesn’t already provide, you don’t have a reason to build a custom data model yet. And building without a reason is how you end up rebuilding the GA4 UI in SQL (Pitfall #12) and spending months on work that creates no new value.

A good answer sounds like: “We need to join GA4 behavioral data with our CRM to understand which marketing touchpoints drive accounts that actually retain past month three.” That’s specific. That’s something the GA4 interface genuinely cannot do.

A bad answer sounds like: “We want to see our metrics in Looker instead of GA4.” That’s a visualization preference, not a use case. The GA4 interface already shows you those metrics.
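To make the difference concrete, here is a minimal sketch of the "good answer" use case: joining GA4-style touchpoint rows to CRM account records and keeping only the touchpoints for accounts that retained past month three. All field names and data here are hypothetical; in practice this would be a SQL join in BigQuery.

```python
# Hypothetical GA4-derived touchpoint rows (account_id assumed already
# resolved via an identity model).
ga4_touchpoints = [
    {"account_id": "acct_1", "source": "google_ads"},
    {"account_id": "acct_1", "source": "newsletter"},
    {"account_id": "acct_2", "source": "linkedin"},
]

# Hypothetical CRM data: how long each account stayed.
crm_accounts = {
    "acct_1": {"months_retained": 7},
    "acct_2": {"months_retained": 1},  # churned early
}

# The join the GA4 interface cannot do: filter touchpoints to accounts
# that retained past month three.
retained_touchpoints = [
    t for t in ga4_touchpoints
    if crm_accounts.get(t["account_id"], {}).get("months_retained", 0) > 3
]
# Only acct_1's two touchpoints survive the filter
```

The point is the shape of the question, not the code: the answer requires a data source (CRM retention) that GA4 simply does not contain.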


2. Can you distinguish anonymous from account-level users?
Attribution is fundamentally about connecting a person’s journey from first touch to conversion. If you can’t tell anonymous visitors apart from identified accounts in your data model, you’re trying to trace journeys for entities you can’t even define.

A good answer: “Yes. We have a clear identity model — anonymous traffic stays on user_pseudo_id, and once someone logs in or signs up, we map their pseudo IDs to a persistent account identifier. We know which pool each event belongs to.”

A bad answer: “We use user_pseudo_id for everything.” That means you’re counting cookies, not people — and your attribution model is built on sand (Pitfall #8).
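The identity model in the good answer can be sketched in a few lines. This is illustrative logic, not GA4's actual export schema: event shapes and event names (`login`, `sign_up`) are assumptions, and a real implementation would do this mapping in SQL over the BigQuery export.

```python
def build_identity_map(events):
    """Map user_pseudo_id -> persistent account_id from identify events."""
    identity_map = {}
    for e in events:
        if e["event_name"] in ("login", "sign_up") and e.get("account_id"):
            identity_map[e["user_pseudo_id"]] = e["account_id"]
    return identity_map

def classify(events, identity_map):
    """Split events into the identified and anonymous pools."""
    identified, anonymous = [], []
    for e in events:
        (identified if e["user_pseudo_id"] in identity_map else anonymous).append(e)
    return identified, anonymous

events = [
    {"user_pseudo_id": "p1", "event_name": "page_view"},
    {"user_pseudo_id": "p1", "event_name": "login", "account_id": "acct_42"},
    {"user_pseudo_id": "p2", "event_name": "page_view"},
]
imap = build_identity_map(events)
identified, anonymous = classify(events, imap)
# p1's events join the identified pool retroactively; p2 stays anonymous
```

Note the retroactive mapping: once `p1` logs in, their earlier anonymous events become attributable to `acct_42`. That back-stitching is what separates an identity model from simply counting cookies.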


3. What’s your identification rate, and do you have a plan to increase it?
Your identification rate is the percentage of sessions or users you can tie to a known account. This number is your attribution ceiling. If only 15% of your sessions are ever identified, even a perfect attribution model can only explain 15% of the picture. The other 85% is a black box.

A good answer: “Our identification rate is 35%, and we’re running experiments with email URL parameters and post-login session stitching to push it toward 50% by Q3.”

A bad answer: “I don’t know.” Or: “We haven’t measured it.” If you haven’t measured it, you don’t know how much of your data you’re blind to — and any attribution model you build is working with an unknown fraction of reality. Run the identity coverage query from Pitfall #8 before you go any further.
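The arithmetic behind the coverage check is simple; what matters is actually running it. A hypothetical sketch over session rows (the real query from Pitfall #8 runs in BigQuery; field names here are assumptions):

```python
def identification_rate(sessions):
    """Share of sessions tied to a known account_id."""
    if not sessions:
        return 0.0
    identified = sum(1 for s in sessions if s.get("account_id"))
    return identified / len(sessions)

sessions = [
    {"session_id": 1, "account_id": "acct_42"},
    {"session_id": 2, "account_id": None},
    {"session_id": 3, "account_id": None},
    {"session_id": 4, "account_id": "acct_7"},
]
rate = identification_rate(sessions)
# rate == 0.5: half the journeys are visible, and that is the attribution ceiling
```

Whatever this number is, it bounds every downstream model. Tracking it over time, per channel, is what turns "increase identification" from a wish into a plan.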


4. How do you define a touchpoint, and how many does your average account have?
A touchpoint is the unit of your attribution model. If you haven’t defined what counts as one, you can’t build attribution. And if you have defined it, the average count per account tells you how complex your model needs to be.

A good answer: “A touchpoint is a session with a marketing attribution source. Our average identified account has 4.2 touchpoints before conversion. We also capture post-purchase survey responses as virtual touchpoints.”

A bad answer: “Every page view is a touchpoint.” That’s not a touchpoint definition — that’s raw event data. You’ll drown in noise. Or: “Our average account has 1.3 touchpoints.” If most accounts have only one or two touchpoints, you don’t need multi-touch attribution. Single-touch is simpler, cheaper, and tells you everything the data can support. Save the complexity for when the data warrants it.
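Under the assumed definition from the good answer ("a session with a marketing attribution source"), counting touchpoints per account is straightforward. Field names below are hypothetical:

```python
def touchpoints_per_account(sessions):
    """Count qualifying touchpoints per identified account."""
    counts = {}
    for s in sessions:
        # A touchpoint = an identified session with an attribution source.
        if s.get("account_id") and s.get("attribution_source"):
            counts[s["account_id"]] = counts.get(s["account_id"], 0) + 1
    return counts

def avg_touchpoints(counts):
    return sum(counts.values()) / len(counts) if counts else 0.0

sessions = [
    {"account_id": "a1", "attribution_source": "google_ads"},
    {"account_id": "a1", "attribution_source": "newsletter"},
    {"account_id": "a1", "attribution_source": None},  # direct visit: not a touchpoint
    {"account_id": "a2", "attribution_source": "linkedin"},
]
counts = touchpoints_per_account(sessions)
avg = avg_touchpoints(counts)  # (2 + 1) / 2 == 1.5
```

An average near 1 to 2, as in this toy data, is exactly the signal that single-touch attribution is all the data can support.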


5. What percentage of your conversions can you actually attribute end-to-end?
This is the punchline. After identity resolution, after touchpoint design, after connecting everything together — what fraction of your actual conversions have a complete, traceable journey from first touch to conversion?

A good answer: “62% of our conversions have at least one attributable touchpoint. 40% have three or more, which is where our multi-touch model adds value. We know the other 38% are a gap, and we’re working on closing it through better identification.”

A bad answer: “We assume it’s high because our attribution model produces numbers.” That’s not coverage — that’s faith. If you haven’t measured the gap between total conversions and attributable conversions, your model might be explaining 30% of the picture while presenting it as 100%. That’s worse than no model at all, because it creates false confidence.
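Measuring that gap is one function. An illustrative sketch (conversion shapes are assumptions): of all conversions, what fraction has at least one attributable touchpoint, and what fraction has the three or more where a multi-touch model adds value?

```python
def coverage(conversions, min_touchpoints=1):
    """Fraction of conversions with at least min_touchpoints touchpoints."""
    if not conversions:
        return 0.0
    covered = sum(1 for c in conversions if c["touchpoints"] >= min_touchpoints)
    return covered / len(conversions)

conversions = [
    {"id": 1, "touchpoints": 4},
    {"id": 2, "touchpoints": 0},  # no traceable journey
    {"id": 3, "touchpoints": 3},
    {"id": 4, "touchpoints": 1},
    {"id": 5, "touchpoints": 0},
]
attributable = coverage(conversions)     # 3/5 == 0.6
multi_touch = coverage(conversions, 3)   # 2/5 == 0.4
```

Reporting both numbers alongside your model's output is the honest move: it tells stakeholders the model explains 60% of conversions, not 100%.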


If you answered all five confidently, with real numbers, you’re ready to build attribution. Genuinely ready — not “we enabled the BigQuery export” ready, but “we understand our data, our gaps, and our limitations” ready. That’s a strong position.

If some answers made you uncomfortable, that’s the point. Each gap maps to a specific piece of work — an identity model to build, a coverage metric to measure, a use case to define. Those are your next steps. They’re not glamorous, but they’re the foundation that makes everything else work.

The ebook you just read covered the pitfalls. The scorecard shows you where you are. Part 2 of this training series covers what comes next: building the actual attribution model on top of this foundation.