America’s Broken Health Care: Diagnosis and Prescription
The following is adapted from a talk delivered at Hillsdale College on March 5, 2023, during a Center for Constructive Alternatives conference on “Big Pharma.”
By John Abramson – March 5, 2023
I developed a serious cardiac arrhythmia, ventricular tachycardia, seven years ago. It worsened over the past summer and early fall, and over the past six weeks I’ve had several ambulance rides and hospitalizations. My experience over the course of this illness illustrates both the good and the bad sides of medicine today.
On the good side, I was fortunate to have the attention of two world-class doctors who spent six hours, one going inside my heart, the other coming through my chest wall to the outside of my heart, to map electrically the aberrant signals in my heart and to ablate them. Since then, I’ve not had a problem.
On the bad side, two days after the procedure, I was in the intensive care unit when a cardiologist came by on rounds. He advocates a wider use of cholesterol-lowering statin medications than I do, and he started to cite the literature about why I should be taking more cholesterol-lowering medicine than I already was. I asked him if he had read the studies underlying that literature, and of course he had. I then asked him if he understood that the endpoint of many of those studies wasn’t really appropriate to determine the benefit of statins, and he acknowledged there was some debate about that. Finally, I asked if he was aware that when peer-reviewed articles are published in medical journals—even the most reputable medical journals—the peer reviewers don’t have access to the actual data from the clinical trials being reviewed. And he answered, somewhat meekly, that yes, he was aware of that.
In other words, he was aware that his recommendation that I increase my use of statin drugs was based entirely on incompletely vetted, commercially sponsored, and largely commercially influenced medical journal articles. This gets to the heart of the problem: the commercial takeover of the medical knowledge that doctors believe in and implement.
But before continuing that thought, let me step back and explain why I begin from the assumption that U.S. health care is on the wrong track.
An easy way to gauge the health of a country, and to compare the health of a country with that of other countries, is to look at average life expectancy. And if you look at a chart comparing average life expectancy in the U.S. with the average life expectancy of eleven other wealthy countries from 1980 to 2021, you will find that in 1980, the U.S. was just about equal with those other countries. But as the years have progressed since then, life expectancy in the U.S. has fallen further and further behind. Until 2014, our life expectancy was going up, but we were losing ground to the populations of other advanced countries.
By 2019, prior to COVID, life expectancy in the U.S. had fallen relative to that in the other countries so much that 500,000 Americans were dying each year in excess of the death rates of the citizens of those other countries. To exclude poverty as a factor in these numbers, a study looked at the health of privileged Americans—specifically, white citizens living in counties that are in the top one percent and the top five percent in terms of income. This high-income population had better health outcomes than other U.S. citizens, but it still had worse outcomes than average citizens of the other developed countries in such areas as infant and maternal mortality, colon cancer, childhood acute lymphocytic leukemia, and acute myocardial infarction.
Now combine this with the fact that we in the U.S. pay enormously more than those other countries for health care. In the U.S., we spend on average $12,914 per person per year on health care, whereas the figure in the other comparable countries is $6,125. That comes to nearly $6,800 more per person—and if you multiply that by 334 million Americans, we are spending an excess of roughly $2.3 trillion a year on health care—and getting poorer results.
Which means that our health care system is broken and needs fixing.
Prior to leaving office in 1961, President Eisenhower famously warned the nation about what he called the “military-industrial complex.” I suggest that we now have a medical-industrial complex that is sucking America’s wealth away from the other things that will make us healthier and create better lives for the American people.
Ask yourself, what ought to be the primary goal of American health care? To my mind it is this: to maintain and improve individual and population health most effectively and efficiently. And if that is correct, there are two critical questions we all need to ask: (1) Why are we failing so miserably to achieve this goal? and (2) Why are doctors and other health care professionals willing to go along with this dysfunctional system?
One of the fundamental reasons for the disparity between the health of Americans and the health of people in other wealthy developed countries is that our medical-industrial complex has taken control over what doctors and the public accept as medical knowledge. This is something that has evolved over time.
Back in 1981, I was finishing my medical residency and starting a two-year fellowship, which is when I learned my research skills. At that time, my colleagues and I would spend hours dissecting the articles in medical journals, and commercial bias was never an issue. But 1981 turned out to be a pivot point. Derek Bok, the president of Harvard University, said in Harvard Magazine that year that the university’s reliance on industry funding for research was causing “an uneasy sense that programs to exploit [i.e., make money from] technological development are likely to confuse the university’s central commitment to the pursuit of knowledge.” He explained that because grants from the National Institutes of Health and the National Science Foundation were declining, scientific researchers were turning to commercial sources for funding.
Along the same lines, a 1982 article in the journal Science, “The Academic-Industrial Complex,” pointed out that universities that had been pursuing knowledge for its social and scientific value had been suddenly drawn into the marketplace and were pursuing knowledge for commercial value. We today have grown accustomed to an environment where it’s normal for professors at medical schools to have commercial relationships. But it wasn’t always that way, and it doesn’t have to be that way in the future.
A second factor in the evolution I’m describing was the passage by Congress in 1980 of the University and Small Business Patent Procedures Act—also known as the Bayh-Dole Act. When Japanese cars entered U.S. markets in the late 1970s, it was widely believed that the Japanese government was supporting the development of those imports in order to help Japanese car manufacturers compete against their U.S. counterparts. Many thought that the same loss of competitive edge was happening in science: research taking place at universities wasn’t being properly commercialized because the universities had no financial incentive. The Bayh-Dole Act aimed to remedy that by allowing universities and other nonprofit research institutions to commercialize discoveries made by their scientists while conducting federally funded research by retaining any profits—including profits from patents on pharmaceuticals. With that, universities became players in the marketplace and were absorbed into the medical-industrial complex.
The first and most obvious result of this had to do with who was sponsoring and controlling medical research. In 1991, 80 percent of pharmaceutical research was taking place in university medical centers, and it was conducted, analyzed, and published by independent academic researchers. But by 2004, only 26 percent of the pharmaceutical industry’s research was taking place in universities. The other 74 percent was being done by for-profit research companies. These companies might hire medical centers to provide research help, but overall control of the research had moved from the academic centers to the pharmaceutical industry. And this was a radical change.
A 2005 article in the New England Journal of Medicine noted that 80 percent of clinical trial agreements allowed drug companies to own the data produced by the research. In my mind, data from a clinical trial—excluding, of course, manufacturing techniques and genuinely proprietary information—is a public good, because doctors are going to use that data to make decisions about how to treat their patients. But the drug companies don’t see it that way.
In litigation involving Pfizer—although Pfizer is no different than other drug companies in this respect—internal Pfizer documents stated in stark language that “Pfizer-sponsored studies belong to Pfizer, not to any individual,” and that the “Purpose of data [from those studies] is to support, directly or indirectly, marketing of our product.” Not to ensure that the drugs will make people healthier or improve quality of life—or to ensure that they will do no harm—but to support the company’s marketing.
These internal documents go on to specify some of the ways the data is used for marketing. One is “through publications for field force use.” Translated into plain English, this means the drug companies purchase reprints of medical journal articles and have their drug representatives hand them out to doctors so the doctors will prescribe their drugs. And that would be a perfectly fine thing to do—if the journal articles underwent independent peer review. With independent analysis of the accuracy and completeness of the research and data provided in the medical journals, we could trust them. But as things currently stand, we can’t.
The New England Journal of Medicine survey I cited before contained two other findings worth noting. In 24 percent of clinical trial agreements, the sponsor (meaning the drug company) “may include its own statistical analysis in manuscripts [i.e., journal articles].” And even more outrageously, 50 percent of clinical trial agreements allow the sponsor to “write up the results for publication and the investigators may review the manuscript and suggest revisions.” In other words, 50 percent of the contracts that academic medical centers make with drug companies allow the drug companies to ghostwrite the articles. The researchers who are the named authors of the articles have the right to suggest revisions but not to make actual corrections or edits. This is not academic freedom. Nor is it an arrangement in which medical science is going to serve the interest of the American people.
Related to this, I once asked an editor of one of the world’s most respected medical journals why journals don’t simply require that the drug companies submit their extensive internal clinical study report and data, while redacting proprietary information. The editor responded without missing a beat: “That would be a death spiral for the journal.” What he was saying is that he understands the problem, but that the medical journals need to publish the major clinical trials to maintain their prestige and continue to sell reprints back to the drug companies. (As an aside, the sale of reprints is a big deal: in 2005, The Lancet, one of the world’s most prestigious journals, made 41 percent of its income from selling reprints.)
It is irresponsible for medical journals not to require transparency from the drug companies—but it makes perfect business sense when we understand their financial dependence on those companies.
In summary, the biomedical market is not like Adam Smith’s basic market in the 1700s—it’s not a market where people shop for their bread, meat, and beer, and where they can directly assess the quality of the information, the quality of the product, and whether the price is fair or not. Biomedical products are not directly experienced goods like bread, meat, and beer. They are what economists call “credence goods”—goods that can’t be evaluated directly by the purchaser. Rather, consumers must rely on the evaluation of experts. And with prescription drugs, the manufacturers have a monopoly on the information.
Turning to the issue of excess cost, there are three main factors. One is that the U.S. is the only wealthy developed country that has no formal mechanism of price negotiation. A second is that because most consumers are insured, they pay only a small part of the price—so high prices don’t provide market discipline. A third important factor is that, as a country, we are perhaps too mesmerized by the idea of biomedical innovation.
Regarding this third factor, historian Jill Lepore has written: “Innovation might make the world a better place, or it might not.” Innovation, she goes on to say, is not necessarily “concerned with goodness,” but often “with novelty, speed, and profit.” It is certain that in the biomedical area, too many innovations we are being sold today are not being properly evaluated in terms of their true value for the public.
We in the U.S. are spending 96 percent of our biomedical research money on medical drugs and devices, and only 4 percent on how to make the population healthier and how to deliver health care more efficiently and effectively. Put another way, the U.S. spends $116 billion on researching new drugs and devices—which account for only 13 percent of total health care costs—but only $5 billion on research concerning the remaining 87 percent of health care costs. Why? Because the drug companies’ job is to maximize the money they return to their investors, and the highest return on research investment is not going to come from studying and promoting healthy diets and lifestyles. The money is in selling drugs and devices. This leads to a tremendous imbalance in the information reaching doctors.
Even when new drugs get approved, only one out of four is actually materially better than previously available and far less expensive therapies. Germany’s Institute for Quality and Efficiency in Health Care, an independent agency under contract to Germany’s Ministry of Health, found that of 216 new drugs entering the German market from 2011 to 2017, only 54 were of “major” or “considerable” benefit. Thirty-seven were evaluated to be of “minor,” “less,” or “non-quantifiable” benefit. And there was “no proof of added benefit” for 125 of the drugs. Here in the U.S., meanwhile, because we don’t have a formal mechanism for evaluating new products, doctors don’t know which one out of every four new drugs is worth prescribing. They are visited by marketers and given medical journal articles supporting the use of all the drugs, and they are denied the knowledge they need to act as learned intermediaries.
In my recent book, I talk about Trulicity—a diabetes drug that reduces the risk of heart disease. It was heavily advertised on TV and heavily marketed to doctors. But what wasn’t publicized or shared with doctors is Trulicity’s NNT—which stands for “number needed to treat.” The NNT tells you how many patients have to be treated, and for how long, for one patient to benefit from a drug. In the case of Trulicity, it turns out that you have to treat 327 people for approximately three years in order to prevent one non-fatal heart event. And treating just those 327 people over that time period would cost the public $2.7 million. Wouldn’t knowing these numbers make a difference to a doctor deciding whether to prescribe the drug? Or to a patient deciding whether to request the drug? And this is leaving aside the possible negative side effects—and the “number needed to harm” for each of them—which clinical trials often fail to monitor and more often fail to report in journal articles.
So we have all these brand name drugs being developed, and all of them are marketed heavily regardless of their effectiveness. The drug companies that own the patents are monopolies. How high are these brand name drugs being priced? In comparative terms, they cost three-and-a-half times more in the U.S. than in other wealthy developed countries. But the most shocking numbers have to do with the rate of increase in prices. In 2008, the average annual price of a new drug in the U.S. was $2,115; by 2021, it had risen to $180,000; and in 2022, it was up to $257,000. Think about that.
Big Pharma consists of for-profit companies, and the job of for-profit companies is to maximize returns to their investors. Accusing drug companies of being greedy is like accusing zebras of having stripes. They are doing their job, and we’re not going to change them. So it is our job—not doctors alone, but the American people as a whole—to insist on guardrails to ensure that the pharmaceutical industry serves, rather than harms, public health.
What is needed is very clear. First, we need to ensure that the evidence base of medicine is accurate and complete, which requires independent, transparent peer review. Second, we need to implement health technology assessment, so that we and our doctors know which drugs and devices are the most effective. Third, we need to control the price of brand name drugs.
This is not rocket science—so why doesn’t it happen? Largely because the greatest bipartisan agreement among our political leaders is that it is fine for them to accept large contributions from drug companies. Huge amounts of money flow about equally to Democrats and Republicans. This is why any meaningful reform will require a coalition of Americans demanding action. My plea is that people on the conservative side who have an aversion to government, and people on the progressive side who have an aversion to free markets, come together with open minds to find a middle way to solve the problem of declining health and spiraling costs.
We need to transcend our ideologies and think of the good of our country and its people on this issue. Neither Republicans alone nor Democrats alone will be able to break up the medical-industrial complex that has a stranglehold on American health care. Instead of focusing on our disagreements, we need to focus on what we agree about—namely, that it would be better if Americans were healthier and didn’t spend more than twice as much money (much of it to little or no benefit) on health care as citizens of other wealthy countries.
Oliver Wendell Holmes said in 1869, “The state of medicine is an index of the civilization of an age and country—one of the best, perhaps, by which it can be judged.” Medical science is a wonderful gift, but we have to use that gift wisely so that it serves the American people by providing the best and most efficient care. We can’t allow it to be held hostage by the medical-industrial complex.
John Abramson received his B.A. from Harvard College, his B.M.S. from Dartmouth Medical School, and his M.D. from Brown Medical School. He served as a family physician for 22 years, was twice voted “best doctor” in his area by readers of the local newspapers, and was three times selected by his peers as one of a handful of best family practitioners in Massachusetts. He has been on the faculty at Harvard Medical School for 16 years, where he has taught primary care and currently teaches health care policy. He is the author of Overdosed America: The Broken Promise of American Medicine and, more recently, Sickening: How Big Pharma Broke American Health Care and How We Can Repair It.