
When a project fails, the first explanation is often technical.

The data wasn’t solid enough.
The results didn’t reproduce.
Something behaved differently once things scaled.

I used to think this way too.

But after working through enough projects that didn’t make it, I’ve become less convinced that “the science failed” is usually the real answer. Not because the science was perfect — it rarely is — but because by the time failure shows up in the data, the project has often already been drifting for a while.

When science becomes the easiest explanation

In biotech, it’s very natural to frame failure as scientific risk. That language feels safe. It fits how most teams are trained to think, and it gives everyone something concrete to point at.

What I’ve seen, though, is that many projects don’t collapse because a hypothesis was fundamentally wrong. They struggle because decisions were deferred, ownership was fuzzy, or certain uncomfortable questions were quietly parked until “we have clearer data”.

Waiting for clearer data feels reasonable — until you realise how much is already being decided in the meantime.

The things that never show up in the data

Looking back, the signs are usually not that subtle.

Meetings where the same issues come up again and again, but never quite land anywhere.
Roles that exist on slides, but not in day-to-day decision-making.
Timelines that shift a little, then a little more, until no one is quite sure what “on track” even means.

None of this shows up in an experiment readout.
But it shapes the project long before the science is judged.

Where failure actually becomes visible

What makes this hard is that it doesn’t feel like failure while it’s happening.

Experiments are still running. Data is still coming in. There’s always a sense that things will become clearer soon, and that bigger decisions can wait just a bit longer.

The cost of waiting is rarely obvious at first. It builds quietly — in lost momentum, in misaligned expectations, in decisions that get made by default rather than deliberately.

So when a project is finally labelled a technical failure, that label often just marks the point where everything else can no longer be ignored.

I’m not arguing that science doesn’t matter. Of course it does. Strong science is the foundation of everything in this space.

But when failure is explained only in technical terms, it’s worth pausing before accepting that explanation too quickly.

Sometimes the science isn’t where the project failed. It’s simply where the failure became impossible to hide.