We would be well-served to remember that statistics that can mislead are prevalent in many areas of public policy today.
One of the most notable things revealed since the COVID-19 crisis launched a hostile takeover of public conversation is how questionable the data behind onerous and massively costly policy prescriptions has been.
Reported COVID-19 deaths include everyone who had the virus at the time of death. That means deaths actually caused by other conditions (e.g., pneumonia) are being misattributed to COVID-19, overstating its risks.
It reminds me of what my dad’s doctor said to him in his mid-eighties: “You have prostate cancer. But don’t worry about it, because it is slow growing, so something else will kill you first.”
The doctor was right, and he didn’t attribute my dad’s death to prostate cancer.
Similarly, the constant reporting of cumulative COVID-19 case counts for particular areas overstates the current risk to others, since those who have recovered and are no longer a potential source of contagion remain in those totals. That bias will grow the longer the crisis persists. Reporting how many people we believe currently have the disease would be far more accurate and useful.
We don’t even know with any reliability how many cases there are. Mild cases are very likely to be missed, which means we are undercounting the people who have (or have had) the disease, as well as overestimating the risk of contagion. Tests have also been sharply limited, and many of the first ones were faulty. So as far more people get tested, we must distinguish between an uptick in reported cases caused by testing more of the population and a genuine increase in the disease’s incidence, a distinction that is crucial to determining its likely future course.
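One way to see the distinction above: when testing expands, raw case counts can rise even if infection rates do not, so the share of tests coming back positive is often more informative than the counts themselves. A minimal Python sketch, with all numbers invented for illustration:

```python
# Illustrative only (all figures hypothetical): raw case counts can climb
# simply because testing expanded, even when the underlying infection
# rate is flat. Test positivity helps separate the two effects.

def positivity_rate(reported_cases: int, tests_performed: int) -> float:
    """Share of administered tests that came back positive."""
    return reported_cases / tests_performed

# Week 1: 1,000 tests, 100 positives. Week 2: 4,000 tests, 200 positives.
week1 = positivity_rate(100, 1_000)   # 0.10
week2 = positivity_rate(200, 4_000)   # 0.05

# Reported cases doubled (100 -> 200), yet positivity fell by half,
# suggesting the jump reflects wider testing, not faster spread.
print(round(week1, 2), round(week2, 2))
```

Real-world positivity comparisons are rougher than this, since who gets tested also changes over time, but the raw-count comparison is biased in exactly the direction the text describes.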
There are further examples of the COVID-19 measurement muddle, but these make entirely clear that we must be careful about relying on misleading measures in the current crisis. And there is no reason to presume such issues are limited to today’s hot topic; we would be well-served to suspect that they are prevalent in many areas of public policy. Consider the following brief sample of similar issues in other areas.
Americans have been told repeatedly that wealth inequality has been rising. But the measure of wealth used ignores “Social Security wealth,” the present value of Social Security benefits qualified for but not yet received. Leaving out a source of wealth that has been growing rapidly, that now equals 42 percent of marketable wealth, and that is far more evenly distributed across the population “completely reverses the trend since 1989.”
Recent work has estimated that the fraction of wealth held by the top 10 percent rose by over 10 percentage points between 1989 and 2016, but that including Social Security wealth makes their share instead fall by three percentage points. Fixing this one bias dramatically changes the premises supposedly justifying draconian policy changes.
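The mechanism behind that reversal is simple arithmetic: adding a wealth component that is spread evenly across the population dilutes the measured share of the top group. A short Python sketch, using a two-group economy with invented numbers (only the 42 percent figure comes from the text):

```python
# Hypothetical two-group economy (numbers invented for illustration):
# adding an evenly distributed wealth component, like Social Security
# wealth, mechanically lowers the measured top-group share.

def top_share(top_wealth: float, total_wealth: float) -> float:
    """Fraction of total wealth held by the top group."""
    return top_wealth / total_wealth

# Marketable wealth: suppose the top 10% hold 70 of 100 units.
marketable_top, marketable_total = 70.0, 100.0

# Add Social Security wealth equal to 42% of marketable wealth (per the
# text), spread evenly, so the top 10% receive only 10% of it.
ss_total = 0.42 * marketable_total
ss_top = 0.10 * ss_total

share_without = top_share(marketable_top, marketable_total)
share_with = top_share(marketable_top + ss_top,
                       marketable_total + ss_total)

# The measured top share falls from 70% to about 52%.
print(round(share_without, 2), round(share_with, 2))
```

The cited estimates rest on far richer data, of course; the sketch only shows why including an evenly spread asset must pull the measured concentration down.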
Similarly, Medicare for All proposals were based in large part on mis-measured administrative costs. Many of Medicare’s administrative costs are ignored because they appear in other agencies’ budgets: the costs of collecting taxes appear in the IRS budget, the costs of collecting premiums appear in Social Security’s budget, and many accounting, building, and marketing expenses appear in the Health and Human Services budget. For private insurance, in contrast, everything except claims payments is counted as an administrative cost, including state taxes on premiums, disease management and on-call consultation services (because they do not generate health claims), and even fraud prevention efforts (which have been estimated to reduce fraud by as much as 15 times what they cost).
Criticism of American health care for having higher measured infant mortality than many other countries has also relied on biased measures, which ignore important differences in what countries count as infant deaths as well as many factors unrelated to health care quality. For example, nonviable babies who die quickly after birth are recorded as live births in the US, which counts far more high-risk, very premature babies, while such babies are more likely to be classified as stillbirths elsewhere. Mothers’ age, obesity, drug use, and other lifestyle factors, as well as babies’ gestational age at birth, also worsen American results.
Even hundreds of billions of dollars in annual anti-poverty spending makes the poor look poorer than they are. Because official income and poverty data ignore in-kind transfers, spending on food stamps (SNAP), housing, health care, and many other programs is omitted. Yet the reductions in benefits as recipients earn more act as substantial income taxes, which reduce earnings, and those reduced earnings are counted in the data.
Similarly, because the standard data ignore taxes, the tens of billions of dollars in Earned Income Tax Credits paid annually to lower-income families aren’t counted. Yet most recipients are in the phaseout range of those credits, which imposes an effective 21 percent income tax rate, and the consequent reductions in earnings over that range are counted.
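The phaseout works just like a marginal tax: each extra dollar earned shrinks the credit, so the worker keeps less than a dollar. A minimal Python sketch, using the 21 percent phaseout rate from the text and a hypothetical $1,000 of extra earnings:

```python
# Sketch of how a benefit phaseout acts like a marginal income tax.
# The 21% rate is the EITC phaseout rate cited in the text; the
# earnings amount is hypothetical.

def net_gain(extra_earnings: float, phaseout_rate: float) -> float:
    """Extra take-home pay after the credit phases out against earnings."""
    return extra_earnings * (1.0 - phaseout_rate)

# A worker in the phaseout range earns an extra $1,000; the credit
# shrinks by 21 cents per dollar, leaving a net gain of about $790.
print(round(net_gain(1_000, 0.21), 2))
```

The official data then record the (discouraged, lower) earnings while recording none of the credit itself, which is the asymmetry the paragraph describes.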
The examples here do not begin to exhaust the illustrations of mis-measurement in public policy discussions. An entire section of my forthcoming AIER book, Pathways to Policy Failures, deals with such issues. But my purpose here is to provide enough evidence that mis-measurements not only seriously muddle our understanding of COVID-19 and the policies to deal with it, but are also a far more general and important policy problem than most people realize.
My hope is that once more people become aware of these issues, they will be more likely to think for themselves, dig into the available research, and respond to the “shut up and do what our experts tell you” guidance we so often hear today with the skepticism it deserves rather than the deference being demanded.