The Challenge
BAT's trade marketing function was operating on a patchwork of Excel reports and manual SQL
extracts tracking distribution coverage, campaign execution, and outlet performance. Each
report was maintained by a different team member, used different definitions for the same metrics,
and was delivered on a different cadence.
The deeper problem was a systemic data discrepancy between the field data management system and
head office records. Outlet counts, campaign activation rates, and volume figures regularly
disagreed between the two sources — meaning leadership could not trust any single report and
analysts spent significant time reconciling numbers rather than generating insight.
Approach
The project had two distinct phases: fixing the data, then building the reporting layer on top of it.
- Root cause analysis: Traced the field-to-HQ discrepancy to a combination of delayed data syncs, inconsistent outlet classification logic, and unmapped records introduced during a system migration. Resolved each issue at the SQL extraction layer; a sketch of this kind of diagnostic check appears after this list.
- Canonical data model: Built a single source-of-truth data model in SQL that standardised outlet classification, campaign definitions, and volume attribution across both systems. All Power BI reports drew from this model; a minimal sketch of the classification view also follows below.
- Report suite design: Designed four interconnected Power BI reports — Distribution Coverage, Campaign Execution, Outlet Performance, and Period Review — with consistent navigation, shared DAX measures, and drill-through between reports.
- Automation: Configured scheduled refresh and automated distribution via Power BI Service, eliminating the manual export-and-email workflow that had been in place.
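To illustrate the kind of diagnostic used in the root-cause phase, the sketch below compares outlet records between the two systems to surface unmapped rows and classification mismatches. The table and column names (field_outlets, hq_outlets, outlet_code, classification) are hypothetical stand-ins, not BAT's actual schema.

```sql
-- Hypothetical schema: field_outlets is the field data management extract,
-- hq_outlets is the head-office master. outlet_code is assumed to be the
-- shared business key between the two systems.

-- Outlets present in the field system but never mapped at head office,
-- e.g. records introduced during the system migration.
SELECT f.outlet_code, f.outlet_name, f.classification
FROM   field_outlets f
LEFT JOIN hq_outlets h ON h.outlet_code = f.outlet_code
WHERE  h.outlet_code IS NULL;

-- Outlets whose classification disagrees between the two sources,
-- one symptom of the inconsistent classification logic.
SELECT f.outlet_code,
       f.classification AS field_class,
       h.classification AS hq_class
FROM   field_outlets f
JOIN   hq_outlets h ON h.outlet_code = f.outlet_code
WHERE  f.classification <> h.classification;
```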
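And a minimal sketch of what the canonical layer might look like, again under the same hypothetical schema. The real model also standardised campaign definitions and volume attribution; this shows only the outlet-classification piece, with head office assumed to be the system of record where both sources have a value.

```sql
-- Hypothetical canonical outlet view: one row per outlet, with a single
-- standardised classification that every downstream Power BI report uses.
CREATE VIEW canonical_outlets AS
SELECT
    COALESCE(h.outlet_code, f.outlet_code)            AS outlet_code,
    COALESCE(h.outlet_name, f.outlet_name)            AS outlet_name,
    CASE
        -- Prefer the head-office classification when it exists.
        WHEN h.classification IS NOT NULL THEN h.classification
        -- Otherwise map field-system labels onto the standard set
        -- (example mappings only).
        WHEN f.classification IN ('GROCERY', 'GEN_STORE') THEN 'GROCERY'
        WHEN f.classification IN ('KIOSK', 'STALL')       THEN 'KIOSK'
        ELSE 'UNCLASSIFIED'
    END                                               AS classification,
    -- Flag rows still missing from head office so they stay visible
    -- instead of silently dropping out of reports.
    CASE WHEN h.outlet_code IS NULL THEN 1 ELSE 0 END AS unmapped_flag
FROM field_outlets f
FULL OUTER JOIN hq_outlets h
    ON h.outlet_code = f.outlet_code;
```

Standardising once in a view like this, rather than repeating the CASE logic inside each report's queries, is what kept the four reports' numbers in agreement by construction.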
Key Results
- 4 integrated Power BI reports replacing fragmented Excel files
- 300K+ retail outlets covered by the reporting suite
- 0 manual reconciliation steps remaining after automation
Resolving the underlying data discrepancy was the highest-value part of the project. The reporting
suite itself was only possible — and only credible — because the data feeding it was finally
consistent. Analysts shifted from reconciliation work to analysis work within weeks of launch.
Lessons Learned
- Reporting projects are often data quality projects in disguise. The work that delivered the most value was not building the dashboards — it was fixing the underlying discrepancy that made any reporting trustworthy.
- Canonical data models pay for themselves quickly. Standardising definitions once at the SQL layer, rather than in every individual report, reduced long-term maintenance burden dramatically.
- Stakeholder adoption requires more than a good product. Running structured walkthroughs with each team before launch — showing them their own data in the new format — was what drove daily usage rather than passive awareness.