
The Mathematics of Predicting the Course of the Coronavirus

In the past few days, New York City’s hospitals have become unrecognizable. Thousands of patients sick with the novel coronavirus have swarmed into emergency rooms and intensive care units. From 3,000 miles away in Seattle, as Lisa Brandenburg watched the scenes unfold—isolation wards cobbled together in lobbies, nurses caring for Covid-19 patients in makeshift trash bag gowns, refrigerated mobile morgues idling on the street outside—she couldn’t stop herself from thinking: “That could be us.”

It could be, if the models are wrong.

Until this past week, Seattle had been the center of the Covid-19 pandemic in the United States. It’s where US health officials confirmed the nation’s first case, back in January, and its first death a month later. As president of the University of Washington Medicine Hospitals and Clinics, Brandenburg oversees the region’s largest health network, which treats more than half a million patients every year. In early March, she and many public health authorities were shaken by an urgent report produced by computational biologists at the Fred Hutchinson Cancer Research Center. Their analysis of genetic data indicated the virus had been silently circulating in the Seattle area for weeks and had already infected at least 500 to 600 people. The city was a ticking time bomb.

The mayor of Seattle declared a civil emergency. Superintendents started closing schools. King and Snohomish counties banned gatherings of more than 250 people. The Space Needle went dark. Seattleites wondered if they should be doing more, and they petitioned the governor to issue a statewide shelter-at-home order. But Brandenburg was left with a much grimmer set of questions: How many people are going to get hospitalized? How many of them will require critical care? When will they start showing up? Will we have enough ventilators when they do?


There’s no way to know those answers for sure. But hospital administrators like Brandenburg have to hazard an educated guess. That’s the only way they can try to buy enough ventilators and hire enough ICU nurses and clear out enough hospital beds to be ready for a wave of hacking, gasping, suffocating Covid-19 patients.

That’s where Chris Murray and his computer simulations come in.

Murray is the director of the Institute for Health Metrics and Evaluation at the University of Washington. With about 500 statisticians, computer scientists, and epidemiologists on staff, IHME is a data-crunching powerhouse. Every year it releases the Global Burden of Disease study—an alarmingly comprehensive report that quantifies the incidence and impact of every conceivable illness and injury in each of the world’s 195 countries and territories.

In February, Murray and a few dozen IHME employees turned their attention full-time to forecasting how Covid-19 will hit the US. Specifically, they were trying to help hospitals—starting with the UW Medicine system—prepare for the coming crisis. Brandenburg says the collaboration could turn out to be, quite literally, life-saving. “It’s one thing to know you may be getting a surge of patients,” she says. “If you can make that more tangible—here’s what it’s actually going to look like—then we’re in a much better place in terms of being able to plan for the worst.”

But it’s a big if. During a pandemic, real data is hard to find. Chinese researchers have only published some of their findings on the spread of Covid-19 in Hubei. The ongoing catastrophe of testing for the virus in the United States means no researcher has even a reliable denominator, an overall number of infections that would be a reasonable starting point for untangling how rapidly the disease spreads. Since the 2009 outbreak of H1N1 influenza, researchers worldwide have increasingly relied on mathematical models, computer simulations informed by what little data they can find, and some reasoned inferences. Federal agencies like the Centers for Disease Control and Prevention and the National Institutes of Health have modeling teams, as do many universities.

As with simulations of Earth’s changing climate or what happens when a nuclear bomb detonates in a city, the goal here is to make an informed prediction—within a range of uncertainty—about the future. When data is sparse, which happens when a virus crosses over into humans for the first time, models can vary widely in terms of assumptions, uncertainties, and conclusions. But governors and task force leads still tout their models from behind podiums, increasingly famous modeling labs release regular reports into the content mills of the press and social media, and policymakers still use models to make decisions. In the case of Covid-19, responding to those models may yet be the difference between global death tolls in the thousands or the millions. Models are imperfect, but they’re better than flying blind—if you use them right.

The basic math of a computational model is the kind of thing that seems obvious after someone explains it. Epidemiologists break up a population into “compartments,” a sorting-hat approach to what kind of imaginary people they’re studying. A basic version is an SIR model, with three teams: susceptible to infection, infected, and recovered or removed (which is to say, either alive and immune, or dead). Some models also drop in an E—SEIR—for people who are “exposed” but not yet infected. Then the modelers make decisions about the rules of the game, based on what they think about how the disease spreads. Those are variables like how many people one infected person infects before being taken off the board by recovery or death, how long it takes one infected person to infect another (also known as the generation interval), which demographic groups recover or die, and at what rate. Assign a best-guess number to those and more, turn a few virtual cranks, and let it run.
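Here is a minimal sketch of that bookkeeping in Python. The population size, reproductive number, and infectious period are illustrative guesses for the sake of the example, not the parameters IHME or any other modeling group actually uses.

```python
# Minimal SIR ("susceptible, infected, recovered") simulation.
# All parameter values are illustrative guesses, not IHME's.

def run_sir(population=1_000_000, initial_infected=10,
            r0=2.5, infectious_days=10, days=180):
    beta = r0 / infectious_days      # new infections per infected person per day
    gamma = 1.0 / infectious_days    # daily rate of leaving the infected compartment
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for day in range(days):
        new_infections = beta * i * s / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

if __name__ == "__main__":
    # Print a snapshot every 30 simulated days.
    for day, s, i, r in run_sir()[::30]:
        print(f"day {day:3d}: susceptible={s:10.0f} infected={i:10.0f} recovered={r:10.0f}")
```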

“At the beginning, everybody is susceptible and you have a small number of infected people. They infect the susceptible people, and you see an exponential rise in the infected,” says Helen Jenkins, an infectious disease epidemiologist at the Boston University School of Public Health. So far, so terrible.

The assumptions about how big each of those fractions of the population is, and how fast people move from one compartment to another, start to matter immediately. “If we discover that only 5 percent of a population have recovered and are immune, that means we’ve still got 95 percent of the population susceptible. And as we move forward, we have much bigger risk of flare-ups,” Jenkins says. “If we discover that 50 percent of the population has been infected—that lots of them were asymptomatic and we didn’t know about them—then we’re in a better position.”

So the next question is: How well do people transmit the disease? That’s called the “reproductive number,” or R0, and it depends on how easily the germ jumps from person to person—whether they’re showing symptoms or not. It also matters how many people one of the infected comes into contact with, and how long they are actually contagious. (That’s why social distancing helps; it cuts the contact rate.) You might also want the “serial interval,” the amount of time it takes for an infected person to infect someone else, or the average time it takes for a susceptible person to become an infected one, or for an infected person to become a recovered one (or to die). And then there’s the “reporting delay,” the lag between when an infection actually happens and when it shows up in the official counts.
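One common back-of-the-envelope way to see why cutting the contact rate cuts R0 is to treat it as contacts per day, times the chance of transmission per contact, times the number of days someone stays infectious. The decomposition and the numbers below are an assumption for illustration, not figures from the article.

```python
# Back-of-the-envelope R0 arithmetic:
# contacts/day * transmission probability per contact * days infectious.
# All numbers are made up for illustration.

contacts_per_day = 10      # average daily contacts, pre-distancing
p_transmission = 0.05      # chance an infected-susceptible contact transmits
days_infectious = 5        # average contagious period

r0 = contacts_per_day * p_transmission * days_infectious
print(f"baseline R0 = {r0:.2f}")       # 2.50

# Social distancing cuts the contact rate; everything else stays the same.
distanced_contacts = 4
r0_distanced = distanced_contacts * p_transmission * days_infectious
print(f"distanced R0 = {r0_distanced:.2f}")   # 1.00
```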

And R0 really only matters at the beginning of an outbreak, when the pathogen is new and most of the population is House Susceptible. As the population fractions change, epidemiologists switch to another number: the Effective Reproductive Number, or Rt, which is still the average number of people each case goes on to infect, but one that can flex and change over time.
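In the simplest compartmental setup, Rt just rescales R0 by the fraction of the population still susceptible. The sketch below uses that textbook SIR relationship with invented numbers.

```python
# Effective reproductive number in a basic SIR world: Rt = R0 * S(t) / N.
# As susceptibles are used up, each case infects fewer new people.
# Values here are invented for illustration.

r0 = 2.5
population = 1_000_000

for susceptible_fraction in (1.0, 0.8, 0.6, 0.4):
    susceptible = susceptible_fraction * population
    rt = r0 * susceptible / population
    print(f"{susceptible_fraction:.0%} susceptible -> Rt = {rt:.2f}")
```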

You can see how fiddling with the numbers could generate some very complicated math very quickly.

