My hands were shaking as I joined the quarterly review Zoom call. Four months of work and roughly $200k spent. The most sophisticated operational risk dashboard our bank had ever built was about to go live.
We had stitched together data from six business segments across four regions. Every chart was pixel-perfect, every KPI told a story. It looked like something MBB would charge $2 million to deliver.
Not going to lie, I was ready for applause. Instead, I got the kind of silence that makes your stomach drop.
“These numbers don’t match what we reported to the regulators last quarter,” said Sarah from the Operational Risk team, her voice cutting through the Zoom call like a blade.
The mortgage portfolio was off by $25 million. For context, in banking that is not a rounding error. It’s the kind of mistake that ends careers, triggers regulatory investigations, and makes news headlines. And I had just presented it as fact to our entire executive team.
“We will investigate and get back to you.” $200k later, I was hitting them with the classic consultant non-answer.
What I Found Made It Worse
The investigation took a month. A month of sleepless nights and back-to-back meetings with tech teams and the business, trying to identify the root cause of the differences.
The culprit? A product processor on a legacy system, inherited through an acquisition, was coding mortgages as “MORT” instead of “MTG” like every other processor.
One product processor. Out of 43 nationwide. That single deviation cascaded through our entire data pipeline, inflating our mortgage risk exposure by exactly $25,098,113.
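To make that failure mode concrete, here is a minimal sketch of how a single unexpected product code can slip through a mapping step and quietly distort the aggregates. Everything in it is hypothetical: the column names, codes, and amounts are invented for illustration and are not the actual pipeline or figures.

```python
import pandas as pd

# Hypothetical product-code mapping; the real pipeline's schema is not shown here.
PRODUCT_MAP = {
    "MTG": "Mortgage",
    "CRD": "Credit Card",
    "AUT": "Auto Loan",
}

feed = pd.DataFrame({
    "processor_id": ["P01", "P01", "P43"],    # P43 stands in for the legacy acquisition system
    "product_code": ["MTG", "MTG", "MORT"],   # same product, different code
    "exposure": [1_200_000, 850_000, 2_500_000],
})

# The unmapped "MORT" rows land in a catch-all bucket instead of mortgages,
# so every downstream aggregate is silently wrong with no error raised.
feed["product"] = feed["product_code"].map(PRODUCT_MAP).fillna("Other")
print(feed.groupby("product")["exposure"].sum())
```

Nothing crashes, nothing warns. The numbers just stop matching what the source systems and the regulators see.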
Why This Matters More Than Ever
I have spent fifteen years in financial services, and I have watched this exact scenario play out dozens of times. Beautiful dashboards built on broken data. Teams celebrating technical achievements while business stakeholders lose trust in everything we deliver.
Now, with AI amplifying our decisions, the stakes are even higher. Bad data does not just create embarrassing presentations anymore; it drives automated decisions that scale your problems across the entire organization.
What I Learned From All This (The Hard Way)
Before any dashboard goes live, I now ask three non-negotiable questions:
Can we trace every data point back to its source? Not just the aggregated numbers; every single input.
Have we tested with real production data from every source system? Dummy data hides the problems that matter most.
Can we reconcile our numbers with external reports? If regulatory filings don’t match, nothing else matters.
If the answer to any of these is “no”, the project stops and we go back to the drawing board. Period. This approach has saved my team from three similar disasters since MORT. A bare-bones version of these checks is sketched below.
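Here is what those three checks might look like as a pre-go-live gate, assuming a pandas-based pipeline. The column names, the allow-list of product codes, and the reconciliation tolerance are all assumptions made up for this sketch, not a description of any real bank's controls.

```python
import pandas as pd

# Hypothetical pre-go-live checks; names and thresholds are illustrative only.
KNOWN_PRODUCT_CODES = {"MTG", "CRD", "AUT"}   # allow-list agreed with every source system
RECONCILIATION_TOLERANCE = 0.001              # 0.1% tolerance against the external filing

def validate_feed(feed: pd.DataFrame, regulatory_total: float) -> list[str]:
    """Return a list of blocking issues; an empty list means the dashboard can ship."""
    issues = []

    # 1. Traceability: every row must identify the source system it came from.
    if feed["processor_id"].isna().any():
        issues.append("Rows with no source processor id cannot be traced back.")

    # 2. Production coverage: every code appearing in the real feeds must be one we recognize.
    unknown = set(feed["product_code"].unique()) - KNOWN_PRODUCT_CODES
    if unknown:
        issues.append(f"Unrecognized product codes in production data: {sorted(unknown)}")

    # 3. Reconciliation: the dashboard total must tie out to the externally reported number.
    dashboard_total = feed["exposure"].sum()
    if abs(dashboard_total - regulatory_total) > RECONCILIATION_TOLERANCE * regulatory_total:
        issues.append(
            f"Dashboard exposure {dashboard_total:,.0f} does not reconcile "
            f"with reported {regulatory_total:,.0f}."
        )
    return issues
```

Run against real production extracts, not dummy data, a check like this would have flagged “MORT” before the executive team ever saw a chart.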
What Is Coming Next
This newsletter exists because I am tired of watching talented people make the same expensive mistakes I did. Every week, I will share hard-earned lessons from fifteen years of data disasters, and a few victories here and there, in financial services. Not just theory, but real stories with real consequences.
You will learn how to build data systems that executives actually trust, how to communicate technical complexity without losing your audience, and how to avoid the career-limiting mistakes that nobody talks about in data science bootcamps.
Next week, I will dive into why data analysts need to think more like business analysts. It is controversial, but after fifteen years of watching data projects succeed and fail, I am convinced that technical skills without business context are a recipe for expensive disasters. The goal isn’t to build the prettiest dashboard. It is to build the most trusted one.