Epidemiologists routinely turn to models to predict the progression of an infectious disease. Fighting public suspicion of those models is as old as modern epidemiology, which traces its origins to John Snow’s famous cholera maps of 1854. Those maps proved, for the first time, that London’s terrible affliction was spreading through the crystal-clear but contaminated water that came out of its pumps, not the city’s foul-smelling air.

Many people didn’t believe John Snow, because they lived in a world without a clear understanding of germ theory and with only the most rudimentary microscopes. In the 160 years since, scientists have earned far more standing and respect in society. Nowadays people do believe epidemiologists; they just get mad when the models aren’t perfect.

A few weeks ago, the U.K. had almost no measures in place. The government planned to let the virus run its course through the population, with the exception of the elderly, who were to be kept indoors. The idea was to let enough people get sick and recover from the mild version of the disease to create “herd immunity.”

Things changed when an epidemiological model from Imperial College London projected that, without interventions, more than half a million British citizens would die from COVID-19. The report also projected more than 2 million deaths in the United States, again barring interventions. The stark numbers prompted British Prime Minister Boris Johnson, who himself has tested positive for COVID-19, to change course, shutting down public life and ordering the population to stay at home.

A few days after the U.K. changed its policies, Neil Ferguson, the scientist who led the Imperial College team, testified before Parliament that he expected deaths in the U.K. to top out at about 20,000.

The drastically lower number caused shock waves: One former New York Times reporter described it as “a remarkable turn.”

The British tabloid the Daily Mail ran a story about how the scientist had a “patchy” record in modeling. The conservative site The Federalist even declared, “The Scientist Whose Doomsday Pandemic Model Predicted Armageddon Just Walked Back the Apocalyptic Predictions.” But there was no turn, no walking back, not even a revision in the model. The original paper, “Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand,” lays out a range of predictions, from tens of thousands to 500,000 dead, all depending on how people react.

That variety of potential outcomes coming from a single epidemiological model may seem extreme, even counterintuitive. But it’s an intrinsic part of how these models operate, because epidemics are especially sensitive to initial inputs and timing, and because epidemics grow exponentially.
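
One way to see that sensitivity is to run the same exponential-growth arithmetic under slightly different assumptions. The sketch below is a deliberately crude toy, not the Imperial College model; the starting case count, the doubling times and the head start are all hypothetical numbers chosen only for illustration.

```python
# A toy demonstration of exponential sensitivity, not a real
# epidemiological model. The same growth formula, fed slightly
# different assumptions, lands roughly thirty-fold apart in a month.

def cases_after(days, initial_cases, doubling_time):
    """Cumulative cases under pure exponential growth."""
    return initial_cases * 2 ** (days / doubling_time)

# Hypothetical inputs, chosen only for illustration.
for doubling_time in (3.0, 4.0):      # days per doubling
    for head_start in (0, 7):         # outbreak seeded a week earlier, or not
        cases = cases_after(30 + head_start, 100, doubling_time)
        print(f"doubling every {doubling_time:.0f} days, "
              f"{head_start}-day head start: {cases:,.0f} cases")
```

A one-day change in the doubling time, or a one-week change in when the outbreak began, swings the 30-day projection from about 18,000 cases to over 500,000, which is why small measurement errors early on turn into enormous spreads later.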

Why are the predictions so wide? That is the difficulty of modeling a pandemic. Using a mathematical model to predict the future is valuable for experts, even if there are vast gulfs between possible outcomes. But it’s not always easy to make sense of the results, or of how they change over time.

Imagine a simple mathematical model to predict coronavirus outcomes. It’s relatively easy to put together. The number of people who will die is a function of how many people could become infected, how the virus spreads and how many people the virus is capable of killing.
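
In its crudest form, that model is a single multiplication. Here is a minimal sketch; every number in it is a hypothetical placeholder, not a real estimate.

```python
# The simplest possible version of such a model: one multiplication.
# Every value below is a hypothetical placeholder, not an estimate.

population = 66_000_000           # roughly the U.K. population
attack_rate = 0.60                # fraction eventually infected (assumed)
infection_fatality_rate = 0.009   # fraction of the infected who die (assumed)

deaths = population * attack_rate * infection_fatality_rate
print(f"Projected deaths: {deaths:,.0f}")   # ~356,400 under these assumptions
```

Each of those inputs hides a model of its own: who counts as infected, how transmission is measured, how deaths are recorded.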

That’s when you discover it is not such an easy task after all. Every variable depends on a number of choices and knowledge gaps. And if every individual piece of a model is wobbly, the model as a whole is going to have just as much trouble standing on its own.
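
To see how quickly the wobble compounds, replace each point estimate in the sketch above with a range. The bounds below are hypothetical, picked only to show the effect: with just two uncertain inputs, the projection already spans more than an order of magnitude.

```python
# If every input is uncertain, the output is a range, not a number.
# The low/high bounds below are hypothetical, for illustration only.

from itertools import product

population = 66_000_000
attack_rates = (0.20, 0.80)       # low and high assumed infection fractions
fatality_rates = (0.002, 0.015)   # low and high assumed fatality fractions

outcomes = [population * a * f for a, f in product(attack_rates, fatality_rates)]
print(f"Projection: {min(outcomes):,.0f} to {max(outcomes):,.0f} deaths")
# Prints "Projection: 26,400 to 792,000 deaths": a thirty-fold spread
# from only two wobbly parameters.
```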

Consider something as basic as data entry. Different countries and regions collect data in different ways. There’s no single spreadsheet everyone is filling out that can easily allow us to compare cases and deaths around the world. Even within the United States, doctors say we’re underreporting the total number of deaths due to COVID-19.

The same inconsistencies apply to who gets tested. Some countries are giving tests to anyone who wants one; others are not. That affects how much we can know about how many people have actually contracted COVID-19, versus how many people have tested positive.

And the virus itself is an unpredictable contagion, hurting some groups more than others, meaning that local demographics and health care access will be big determinants of the virus’s impact on communities.

Modeling an exponential process necessarily produces a wide range of outcomes. In Italy, two similar regions, Lombardy and Veneto, took different approaches to the community spread of the epidemic. Both mandated social distancing, but only Veneto undertook massive contact tracing and testing early on. Despite starting from very similar points, Lombardy is now tragically overrun with the disease, having experienced roughly 7,000 deaths and counting, while Veneto has managed to mostly contain the epidemic to a few hundred fatalities. Similarly, South Korea and the United States had their first case diagnosed on the same day, but South Korea undertook massive tracing and testing, and the United States did not. Now South Korea has only 162 deaths, and an outbreak that seems to have leveled off, while the U.S. is approaching 4,000 deaths as the virus’s spread accelerates.

At the beginning of a pandemic, we have the disadvantage of higher uncertainty but the advantage of being early: The costs of our actions are lower because the disease is less widespread. As we prune the terrible, unthinkable branches from the tree of possible outcomes, we are not just choosing a path; we are shaping the underlying parameters, because those parameters are not fixed. If our hospitals are not overrun, we will have fewer deaths and thus a lower fatality rate.

That’s why we shouldn’t get bogged down in litigating a model’s numbers. Instead we should focus on the parameters we can change, and change them.
