1 in 73 million — the number that sent innocent people to prison
Numbers don't lie. But they can be made to say almost anything — if you choose the right question and hide the rest. Three real cases. Three ruined lives. One flaw nobody stopped to question.

The night everything changed
December 1996. Sally Clark, a British lawyer, finds her 11-week-old son Christopher unresponsive in his crib. The paramedics arrive. Nothing can be done. The medical report says natural causes — most likely Sudden Infant Death Syndrome.
Less than two years later, her second son Harry dies the same way. He is eight weeks old.
At that moment, sympathy stopped. And the calculations began.
The number that destroyed her life
In court, the prominent physician Sir Roy Meadow stood up and said one sentence that changed Sally Clark's fate forever:
The probability of two children dying from SIDS in the same affluent family is 1 in 73 million.
His method was simple. The odds of one such death in a family like Clark's were roughly 1 in 8,500. So he multiplied: 8,500 × 8,500 ≈ 73 million.
The jury looked at the number. 1 in 73 million. Like winning the lottery twice in a row. It could not be a coincidence.
November 1999 — unanimous guilty verdict. Life in prison.
Three errors buried inside one number
The number was wrong from the start. When statisticians reviewed it, they found three compounding mistakes.
Error one: the independence assumption
Meadow multiplied the two probabilities as if the deaths were completely unrelated events — like rolling a die twice. But infant deaths are not random in that way. There may be genetic or environmental factors that make the same family more vulnerable. The Royal Statistical Society later confirmed that multiplying the probabilities was only valid if the two events were statistically independent — something that had never been established.
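The difference is easy to see with a toy calculation. In the sketch below, the 1-in-8,500 figure is the one cited at trial; the conditional probability for a second death is an invented number, used only to show how much dependence changes the result.

```python
# The 1-in-8,500 figure is the one cited at trial; the conditional
# probability below is invented purely to illustrate dependence.
p_first = 1 / 8_500  # P(one SIDS death) in a family like the Clarks'

# Meadow's calculation treated the two deaths as independent:
p_independent = p_first * p_first
print(f"Assuming independence: 1 in {1 / p_independent:,.0f}")

# If shared genetic or environmental factors raise the risk for a
# second child, the correct factor is P(second death | first death).
# Suppose, hypothetically, that it is 1 in 100 rather than 1 in 8,500:
p_second_given_first = 1 / 100
p_dependent = p_first * p_second_given_first
print(f"With dependence:       1 in {1 / p_dependent:,.0f}")
```

Under that hypothetical dependence, the headline number shrinks by a factor of 85. Everything hinges on an independence assumption that was never tested.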
Error two: selective factors
Meadow chose factors that made the death less likely in the Clark family — good income, stability, no smoking. He ignored factors that worked in the other direction, such as the fact that both children were boys, and boys face a higher SIDS risk than girls.
Error three — and the most damaging: the prosecutor's fallacy
Here the problem stops being mathematical and becomes philosophical.
1 in 73 million is the probability of two accidental deaths. But the jury heard something different: that the probability of Sally's innocence was 1 in 73 million.
These are not the same thing. Not even close. Saying the chance of two natural deaths is 1 in 73 million tells you nothing about whether the mother is guilty, unless you also ask: what is the probability that a mother kills both her children? When one statistician calculated that figure in 2002, he found that repeated natural death was actually between 4.5 and 9 times more likely than repeated homicide.
In other words: the numbers were pointing toward innocence. Nobody ran that calculation.
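The comparison the jury never saw fits in a few lines. The figures below are stand-ins, not the ones from the 2002 reanalysis; the point is the shape of the calculation, which weighs two rare explanations against each other instead of quoting one of them in isolation.

```python
# Stand-in figures, chosen only to illustrate the comparison; the 2002
# reanalysis used its own estimates and found a ratio between 4.5 and 9.
p_double_sids = 1 / 300_000      # assumed: P(two natural infant deaths in one family)
p_double_murder = 1 / 2_000_000  # assumed: P(a mother murders two of her infants)

# Given that two infants in one family died, compare the two explanations:
likelihood_ratio = p_double_sids / p_double_murder
print(f"Natural causes are ~{likelihood_ratio:.1f}x more likely than double murder")
```

Whatever the exact inputs, the relevant question is the ratio between the two explanations, not the smallness of either one on its own.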
What happened next
In 2003, it emerged that the pathologist who examined the children had withheld lab reports indicating Harry had died from a bacterial infection — a natural death, clearly documented. The conviction was overturned. Sally Clark walked out of prison after four years.
She never recovered. The imprisonment, the loss of her children, returning to a world that had already decided who she was — it accumulated. She died in 2007 from acute alcohol intoxication, aged 42.
Meadow's medical license was temporarily revoked, then reinstated. He had used the same logic in other cases.
Case two: The nurse and the number 342 million
The Hague, Netherlands, 2001. Lucia de Berk is a pediatric nurse at Juliana Children's Hospital. After a baby dies during her shift, a colleague reports to hospital management that de Berk has been present at an unusually high number of deaths and resuscitations.
Management brings in a statistician. He reviews shift records and calculates that the probability of one nurse coincidentally being present at that many incidents is 1 in 342 million.
Newspapers call her a "murder nurse" and an "angel of death." She is charged with seven murders and three attempted murders. The court, relying heavily on that number, sentences her to life in prison — the maximum under Dutch law.
The problem was the same one that convicted Sally Clark: the statistician answered the wrong question.
What the number actually said
The calculation assumed that all nurses experience the same base rate of incidents per shift. But that is not how hospitals work. Nurses in high-risk wards naturally encounter more deaths. Nurses who have worked longer in intensive care units see more critical events. De Berk was an experienced nurse assigned to the most vulnerable patients. The comparison group the statistician used — all nurses across all wards — was not a fair baseline.
When statistician Richard Gill ran the numbers himself years later, he found the cluster of deaths on de Berk's watch might well be entirely due to coincidence. He calculated the real probability at closer to 1 in 25 — not 1 in 342 million.
There was also a subtler bias at work. The investigation started because of deaths that occurred while de Berk was present. Investigators then went back through records at other hospitals where she had worked — looking specifically for incidents linked to her. It turned out there had actually been more deaths at the ward before her arrival. But those deaths were not included in the calculation. The data was selected to confirm the suspicion, not to test it.
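A small calculation shows why the baseline matters. Every rate and count below is invented; the only point is that the same incident tally can look astronomical under a hospital-wide average rate yet unremarkable for a nurse assigned to the highest-risk ward.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), via the complement of the lower tail."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k))

N_SHIFTS = 1000  # assumed: shifts worked over the period under review
K = 9            # assumed: incident count that triggered suspicion

# Wrong baseline: one average incident rate for every nurse in the hospital.
p_avg = 0.002
print(f"Average-rate nurse:   P(>= {K} incidents) = {binom_tail(N_SHIFTS, p_avg, K):.5f}")

# Fairer baseline: a nurse working with the most vulnerable patients.
p_icu = 0.008
print(f"High-risk-ward nurse: P(>= {K} incidents) = {binom_tail(N_SHIFTS, p_icu, K):.3f}")
```

With these invented numbers, the same count is a once-in-thousands event for an average nurse but happens a large fraction of the time for an intensive-care nurse. Choosing the wrong comparison group manufactures the coincidence.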
The exoneration
In December 2009, after new statistical analysis and medical evidence had been presented, the court accepted that the deaths had all been entirely natural. On April 14, 2010 — nine years and three months after her arrest — Lucia de Berk was acquitted of all charges.
She later received a written apology from the Dutch minister of justice and undisclosed financial compensation for the 6.5 years she spent in prison.
Case three: A yellow car and a number invented in court
Los Angeles, 1964. An elderly woman is knocked down in an alley and her purse is stolen. Witnesses see a young blonde woman with a ponytail running from the scene, getting into a yellow car driven by a Black man with a beard and mustache.
Police arrest Janet and Malcolm Collins. They match the description. But the eyewitness testimony alone was uncertain — the witnesses were not fully consistent. The prosecution needed something stronger.
So they brought in a mathematician.
The math that wasn't
The prosecutor called an instructor of mathematics at a state college. The instructor explained the product rule to the jury: multiply the probability of each independent characteristic together to get the probability of all of them occurring at once.
The prosecutor then assigned probabilities to each feature of the suspects:

Yellow automobile: 1 in 10
Man with a mustache: 1 in 4
Girl with a ponytail: 1 in 10
Girl with blond hair: 1 in 3
Black man with a beard: 1 in 10
Interracial couple in a car: 1 in 1,000

Multiply them all: 1 in 12 million. Therefore, the prosecutor argued, there was a 1 in 12 million chance the Collinses were innocent.
The jury convicted.
Everything was wrong
The California Supreme Court reversed the conviction, and its reasoning is worth reading carefully.
First problem: the characteristics were not independent. Bearded men commonly have mustaches. The product rule only works when events have no relationship to each other. Multiplying dependent probabilities inflates the result — the actual odds of someone having both a beard and a mustache are much higher than multiplying the two separate probabilities would suggest.
Second problem: the probabilities themselves were invented. The prosecutor pulled numbers from thin air. No survey was done. No data was collected. 1 in 10 for a yellow car was not based on DMV records or any count of cars in Los Angeles. It was a guess, presented as math.
Third — and the Court's central point: even accepting the 1 in 12 million figure as accurate, you still could not conclude the Collinses were probably guilty. The prosecution's figures actually implied a likelihood of over 40 percent that at least one other couple in the Los Angeles area shared those same characteristics.
The math did not say they were probably guilty. It said there was a reasonable chance someone else could have done it. The jury never heard that part.
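The Court's arithmetic can be reproduced in a few lines. The figure of 12 million couples below is an assumption of roughly the order the Court used for illustration, not census data; the match probability is the prosecution's own number.

```python
# Assumptions: p is the prosecution's own match probability; N is an
# illustrative number of couples in the area, not a census figure.
p = 1 / 12_000_000
N = 12_000_000

p_none        = (1 - p) ** N                # no couple in the area matches
p_exactly_one = N * p * (1 - p) ** (N - 1)  # exactly one couple matches
p_at_least_one = 1 - p_none

# Given that at least one matching couple exists (the Collinses),
# the probability that at least one OTHER couple also matches:
p_another = (p_at_least_one - p_exactly_one) / p_at_least_one
print(f"P(another matching couple) ~ {p_another:.0%}")
```

Even granting the prosecution its own figure, the chance that some other couple in the area fit the same description comes out better than 40 percent.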
The California Supreme Court wrote: "Mathematics, while assisting the trier of fact in the search of truth, must not cast a spell over him."
The pattern underneath all three cases
Three different countries. Three different decades. The same mistake every time.
A number is calculated. It is presented to people who are not trained to question it. It sounds authoritative because it comes from an expert. And nobody in the room asks the one question that would unravel the whole thing:
What question is this number actually answering?
In every case, the number answered one question while the jury heard the answer to a completely different one. The probability of a coincidence was presented as the probability of innocence. The probability of a random match was presented as proof of guilt. The two things sound similar. They are not the same.
This is not a story about dishonest prosecutors or corrupt experts. In most of these cases, the people involved believed they were right. They had a number. The number was large. It felt like proof.
That feeling — the weight a number carries in a room — is exactly where the danger lives.
The question worth asking
The next time you read a statistic — in a news article, a courtroom, a policy document — the question is not "is the number accurate?"
It is: what question was this number designed to answer, and is that actually the question being asked right now?
Because a true number, answering the wrong question, can send an innocent person to prison for six years.
It already has. More than once.