#AnalyticsSummit Presents: Closing the Loop on Data Validation

I’m pleased to be one of the featured speakers at the upcoming ObservePoint Analytics Summit. It’s a free, virtual event, and I hope you’ll sign up for my session. To get you excited about it, here’s a sneak peek at what I’ll be talking about: Closing the Loop on Data Validation.
Everybody knows the secret to delivering quality data. You check it. You check it right before release. You check it every time a change is made to the campaign or the website, either through a dev release or a Tag Management release. You check it once it’s pushed to production. Then you put it on the list of things to check again periodically.
And increasingly, you engage with an auditing tool like ObservePoint to automate those checks.
Even after all of that, analysts who have done this for a while find that errors keep creeping into the data. If everyone knows the secret to quality data, how does this keep happening?
Part of the answer lies in the way we currently use our tools.
- Our auditing tool flags broken data linkages in our reporting
- The audit report acts like an alarm going off, notifying you to go in and fix it
- The end
But what if we never fix it? And if we do fix it, what happens to data acquired before the fix?
When subsequent reports are pulled, months or years afterward, there may be no permanent record indicating that data for a specific period was flawed. It’s even less likely that you’ll have a record of exactly a) how the data was flawed, b) when it was fixed, and c) what the implications are for the data in the rest of the report you’re pulling.
And there is a secondary issue to combat. As an analyst, you have two main responsibilities: keep the data clean, and increase users’ confidence in the data. If you’re an analytics team, you want to use data quality automation tools — but at the same time you have to be careful about how widely you broadcast the occasional “low data quality alert.”
Because your ultimate goal is for people to trust the data, there’s an understandable hesitation to reveal instances of flawed data. That means, as an analyst, you’re constantly walking a fine line.
Instead, what if we do this: every time we audit something, we create a score assessing our confidence in the validity of that data. We then pass that score into our analytics tool, permanently logging the data quality for that period, in real time and with precise granularity, so that whenever data from that period is pulled later, a validity score travels with it.
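To make that idea concrete, here’s a minimal sketch of what the scoring step could look like. The `AuditCheck` structure, the weights, and the `log_quality_score` helper are hypothetical names for illustration only; in a real pipeline the score would be pushed into the analytics platform itself (for example as a custom dimension) rather than written to a local file.

```python
from dataclasses import dataclass
from datetime import date
import csv

@dataclass
class AuditCheck:
    """One validation rule from an audit run (hypothetical structure)."""
    name: str
    passed: bool
    weight: float = 1.0  # let critical checks count more than minor ones

def confidence_score(checks: list[AuditCheck]) -> float:
    """Weighted share of checks that passed, from 0.0 to 1.0."""
    total = sum(c.weight for c in checks)
    passed = sum(c.weight for c in checks if c.passed)
    return passed / total if total else 0.0

def log_quality_score(period: date, checks: list[AuditCheck],
                      path: str = "data_quality_log.csv") -> float:
    """Append the period's score so it can be joined to reports later.
    In practice this record would live inside the analytics tool."""
    score = confidence_score(checks)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([period.isoformat(), round(score, 3)])
    return score

# Example audit run where one critical check failed
checks = [
    AuditCheck("page_view_tag_fires", True, weight=2.0),
    AuditCheck("campaign_id_present", False, weight=2.0),
    AuditCheck("currency_code_valid", True),
]
print(log_quality_score(date(2021, 11, 1), checks))  # 0.6
```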
If you’re able to have a permanent score for data quality attached to old reports, it will do away with the perception of “untrustworthy bridges” in your reporting.
And, if you’re constantly scoring and fixing data linkages, over time the reliability score of the data will improve. In the rare instances when someone does pull data that is compromised, they’ll immediately be warned — in the form of a reliability score. They’ll be able to see when the fix was made, and how the quality score has increased over time. And you know what? They’re going to trust that report.
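Here, again as a sketch, is what that warning could look like at pull time: join the logged scores back onto period-level report data and flag anything below a chosen threshold. The report values and the 0.8 cutoff are assumptions for illustration.

```python
import pandas as pd

# Hypothetical report data, plus the quality log written at audit time
report = pd.DataFrame({
    "period": ["2021-10-01", "2021-11-01"],
    "visits": [12450, 11980],
})
quality = pd.read_csv("data_quality_log.csv", names=["period", "score"])

# Every pulled row carries its reliability score; low scores get flagged
pulled = report.merge(quality, on="period", how="left")
pulled["low_quality_warning"] = pulled["score"].fillna(0) < 0.8
print(pulled)
```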
Sign up for my session on November 17 to learn more.
 
