
Medicare for All in the Age of Coronavirus

A history of US health care debates.

In the best of times, trying to make sense of the US health care system is a difficult undertaking. Now that the coronavirus has upended American lives, the terms of political discussion about health care are likely to change again in ways that are hard to anticipate. To pass the time as we huddle in our homes during the COVID-19 pandemic, we might revisit the history of the US health care system, which can help us understand the terms of the debate and what is at stake.

Prior to this crisis, health care reform had already emerged as a leading issue in the 2020 election campaign, especially in the Democratic Party. Democratic candidates have proposed various reform agendas, of which Medicare for All (MFA) has received the most attention. For reasons that are not entirely clear, repackaging the concept of single-payer health care, an idea that has been around for at least 50 years, has made it more politically viable. The phrase Medicare for All was first adopted by Massachusetts Senator Edward Kennedy in 2007; it was popularized by presidential candidate Bernie Sanders in 2016 and endorsed by a growing number of Democrats in Congress during the 2018 election. MFA proposes expanding Medicare, the federal program established in 1965 to provide health coverage for people 65 and older, to cover all Americans. Candidates who support MFA remain vague about how this expansion would happen, what it would cost, and who would pay for it. At the same time, advocates argue that MFA would fix the well-documented dysfunction of the US health care system, including its uneven coverage, high costs, grave inefficiencies, and overall fragmentation of care.

The US has long been an outlier among nations in its approach to health care. It is the only country with a so-called advanced economy that does not provide universal coverage for essential health services. Fellow member nations of the Organization for Economic Co-operation and Development took many different paths to universal coverage from the 1950s to the 1970s. Some chose a mix of public and private options, while others adopted single-payer systems, but all were seeking change after the devastation of World War II. Many political leaders in other countries came to see universal coverage as a desirable goal that could be achieved only with government involvement.

In contrast, postwar US political leaders rejected the idea of universal coverage and decided against direct government involvement in the provision of health care. Instead, Congress provided generous indirect support, funding medical research through the National Institutes of Health and subsidizing hospital construction through the Hill-Burton Act, passed in 1946. Changes at the Food and Drug Administration gave pharmaceutical companies added incentives to develop new prescription drugs. In theory, the expansion of private, employer-based medical insurance would make it possible for Americans to afford the better (and more expensive) care these indirect investments made available. This private insurance system tied coverage to one's status as a valued worker; initially a perk of white-collar occupations, it was extended to many blue-collar workers through union bargaining in the 1940s and 1950s. In line with its Cold War superpower status, the US promoted this “free enterprise” approach to health care as far superior to the “socialized” medicine being adopted in other countries.