Filtering Facts from Folklore
A paranormal newspaper taught me about reporting.
“Banecroft read out the headline: ‘“Wolverhampton Invaded by UFOs”. This is made-up nonsense, is it? It’s complete hokum, isn’t it? Who could possibly believe it?’
“Banecroft jabbed his finger at the first paragraph. ‘Mrs Stade, 42, from Blakenhall. That’s who. You see, we’re not saying it’s true. What we’re saying is look, this mad woman believes this and here’s why she does.’
“‘We aren’t reporting the story as fact; we’re reporting the existence of the story as fact.’”
Something in my head clicked.
Management reports often include oceans of data: velocity, happiness, code quality, backlog growth, uptime, incident volume.
And, because none of these data hold any meaning in isolation, we may supplement the report with more data: historical figures, trends, averages.
But how do we stop ourselves drowning in data oceans?
Relevant data (information) is “the answer to the question asked”, Goldratt reminds us.
Often, the question our team or organisation is trying to answer - the goal - is unclear. And, if the goal is unclear, how can we measure progress? Suddenly all data seems relevant, and a data ocean emerges.
So, filter the data ocean by agreeing the goal.
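As a toy illustration of that filter (all metric and question names below are invented for the example, not taken from any real reporting system), a report could start from the questions the agreed goal raises and drop any metric that answers none of them:

```python
# A toy sketch: keep only metrics that answer a question raised by the
# agreed goal. All names here are invented for illustration.

goal_questions = {
    "Are we shipping value faster?": {"cycle_time", "deployment_frequency"},
    "Are we keeping the service reliable?": {"uptime", "incident_volume"},
}

all_metrics = {"cycle_time", "deployment_frequency", "uptime",
               "incident_volume", "velocity", "happiness", "backlog_growth"}

# A metric is relevant only if some goal question needs it.
relevant = set().union(*goal_questions.values())

print("Report on:", sorted(all_metrics & relevant))
print("Leave out:", sorted(all_metrics - relevant))
```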
But how can we tell when data is invalid?
Management reports (and the news) often attempt to diagnose reality. “Sales are dropping because the market is saturated with product”, or “development is slowing because of increasing technical debt”. How can we validate such diagnoses? An invalid diagnosis can only lead to unhelpful decisions.
Doctors often diagnose patients without being able to directly observe the underlying cause. When confronted with a symptom, doctors take a guess at the underlying cause and then ask themselves, “if my suspicion is true, what other symptoms should I expect to see?”
When confronted with dry eyes, a doctor may suspect an under-active thyroid. An under-active thyroid can’t be seen directly. So, to build confidence, the doctor may check for the other expected symptoms: forgetfulness and tiredness.
So, validate diagnoses by checking for the presence of other expected symptoms.
If we believe sales are dropping because the market is saturated with product, do we also see competitors’ sales dropping?
If we believe development is slowing because of increasing technical debt, do we also see more test failures?
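To make the check concrete, here is a minimal sketch (in Python, with made-up metric names and observations) of recording each diagnosis alongside the other symptoms we would expect to see if it were true, and flagging it as unsupported when those symptoms are missing:

```python
# A minimal sketch: each diagnosis lists the extra symptoms we would
# expect to observe if it were true. Metric names and values are
# illustrative, not taken from any real report.

observations = {
    "our_sales_trend": "falling",
    "competitor_sales_trend": "rising",   # competitors are NOT dropping
    "test_failure_trend": "rising",
    "cycle_time_trend": "rising",
}

diagnoses = [
    {
        "claim": "Sales are dropping because the market is saturated",
        "expected_symptoms": {"competitor_sales_trend": "falling"},
    },
    {
        "claim": "Development is slowing because of technical debt",
        "expected_symptoms": {"test_failure_trend": "rising",
                              "cycle_time_trend": "rising"},
    },
]

for diagnosis in diagnoses:
    # Which expected symptoms are absent from what we actually observe?
    missing = [
        symptom
        for symptom, expected in diagnosis["expected_symptoms"].items()
        if observations.get(symptom) != expected
    ]
    if missing:
        print(f"unsupported: {diagnosis['claim']} (missing: {', '.join(missing)})")
    else:
        print(f"corroborated: {diagnosis['claim']}")
```

In this invented example the saturation diagnosis fails its check, while the technical-debt diagnosis is corroborated by the other symptoms we expected to see.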
The takeaway? Simplify reporting by focusing on progress towards the goal. Detect hokum by validating diagnoses.
This article was inspired by The Stranger Times, by Goldratt, and Noah.