Data Quality Score (DQS) is a composite metric that assesses overall data quality across five dimensions: accuracy (data is correct), completeness (no missing values), consistency (same data across systems), timeliness (data is current), and uniqueness (no duplicate records). Low data quality erodes trust in analytics, leads to poor decisions, and shifts engineering time from analysis to data debugging.
Data quality should be measured at the pipeline, table, and field level to enable targeted remediation rather than just tracking an aggregate score.
Data quality scores above 90% across all dimensions are generally considered excellent; below 80% typically signals systematic data collection or pipeline issues requiring urgent attention.
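The scoring described above can be sketched in a few lines. This is a minimal illustration, not a standard implementation: the equal-weight mean across dimensions, the helper names, and the sample rows are all assumptions; real scoring systems often weight dimensions differently and score accuracy, consistency, and timeliness against external references.

```python
# Minimal sketch of a table-level Data Quality Score.
# Each dimension is scored as a fraction in [0, 1]; the composite is
# their unweighted mean (an assumption), expressed as a percentage.

def completeness(rows, fields):
    """Fraction of cells that are populated (no missing values)."""
    total = len(rows) * len(fields)
    filled = sum(1 for row in rows for f in fields if row.get(f) is not None)
    return filled / total if total else 1.0

def uniqueness(rows, key_fields):
    """Fraction of distinct rows when keyed on key_fields."""
    if not rows:
        return 1.0
    keys = [tuple(row.get(f) for f in key_fields) for row in rows]
    return len(set(keys)) / len(keys)

def composite_dqs(dimension_scores):
    """Unweighted mean of dimension scores, as a percentage."""
    return 100 * sum(dimension_scores.values()) / len(dimension_scores)

def assess(score):
    """Map a composite score to the bands described above."""
    if score > 90:
        return "excellent"
    if score < 80:
        return "urgent: likely systematic collection or pipeline issues"
    return "acceptable"

# Hypothetical sample table with one missing value and one duplicate key.
rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},       # missing value lowers completeness
    {"id": 1, "email": "a@x.com"},  # duplicate id lowers uniqueness
]
scores = {
    "completeness": completeness(rows, ["id", "email"]),  # 5/6
    "uniqueness": uniqueness(rows, ["id"]),               # 2/3
}
dqs = composite_dqs(scores)  # 75.0 → falls in the "urgent" band
```

Scoring each dimension separately before aggregating is what makes field- and table-level remediation possible: the composite tells you something is wrong, while the per-dimension scores tell you what to fix.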
Each function reads DQS through a different lens and takes different actions when it changes.
Several related metrics are commonly analyzed alongside DQS.
See how each role uses DQS in context with the full set of metrics they own.
askotter connects your data sources and applies causal analysis to tell you exactly why your metrics are changing, not just that they changed.