Hospital Managers, Medical Decisions, and Patients’ Need to Know
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Wednesday, July 3, 2024


Medical decisions are being made not only by insurance companies but also by hospital managers and algorithms, and concern for patient care continues to grow.

Photo by Harry cao on Unsplash

The term “corporatization” in healthcare is still being debated, but most people agree it means that healthcare organizations are being taken over by large corporations that override or replace local autonomy. It can also mean that hospitals and health systems are changing their behavior to prioritize making money over caring for patients.

I’ve had a physician tell me, in strictest confidence, that the hospital replaces physicians who leave with any available MD, regardless of their expertise. “They see an MD as an MD, and that’s it.” We have to wonder what effect this has on patient care.

In an ideal practice setting, medicine and surgery are practiced as a two-way relationship between doctor and patient, with support from leadership, staff, and the care team. The clinician has all the tools they need to heal, and the goal is to do what is best for the patient at all times.

But there is ample proof that the health system is becoming increasingly corporate. In 2023, 65 hospitals or health systems announced deals to merge with or acquire other hospitals, deals worth more than $38 billion. The business of medicine is a big part of the economy, especially since the US spends almost $5 trillion a year on healthcare. And the system is underperforming.

Private equity investors have a big stake in the US healthcare system; they own almost 400 hospitals, and in some markets more than 30% of them. Little is left for smaller hospitals or, indeed, for the single practitioner who wishes to work independently. Bit by bit, they are being forced into a market that smacks of monopolistic practices.

In America’s profit-driven healthcare system, physicians believe they are harmed when managers, hospital executives, and insurers force them to break the ethical rules that are supposed to guide their profession. Many physicians find it hard to square their Hippocratic oath with the reality of making money off sick and vulnerable people. Some say this contributes to the very high rates of physician burnout and suicide.

Medscape’s 2024 physician burnout and depression report says that almost half of physicians feel burned out. That figure is down from the previous year, when 53% said they were burned out, but many are still considering leaving the field. As physicians quit, the gap in available care will widen. Nurses, too, are leaving the field because of overload, lack of support, and low wages.

A physician I spoke to told me that he resisted having his practice bought by a hospital chain and, as a result, will not be permitted to admit patients there or receive referrals; they are squeezing him out of existence. He now plans to leave medicine in about two years. The daily stress of dealing with insurance companies is exhausting for his staff.

The concerns regarding patient care are real, and the US government recognizes them. The Office for Civil Rights in the U.S. Department of Health and Human Services released a rule on Section 1557, the nondiscrimination provision of the Affordable Care Act. The rule could penalize doctors who use algorithm-based tools that cause discriminatory harm.

The Federation of State Medical Boards also issued guidelines saying that doctors are responsible for harm caused by algorithm-based tools. But what if physicians or staff have little say over how algorithms are used and who uses them? Can we hold them responsible for management’s actions? And if management is a private equity company, where does the buck stop? Harry Truman knew.

A new report from the World Health Organization (WHO) discusses five fundamental ways AI large language models (LLMs) could be used in medicine and public health: diagnosis, patient care, administrative chores, medical education, and research. However, the report also warns that AI carries serious risks of bias, unfairness, privacy breaches, and lack of transparency.

Experts and civil society groups share these worries. Relying on algorithms that are devoid of emotion and deal only with data goes too far, handing a mathematical formula too much power over medical staff and patient input. In fact, there is NO patient input, only data.

One patient I knew found a major error in their electronic health record (EHR) and tried to have it corrected. It took seven years, and the patient was told the hospital could do nothing about the Epic software errors. How is it possible that a program on which major health decisions are made has no fail-safe mechanism for correcting inaccurate diagnoses, treatments, or medications?

One thing about making professional decisions is that the situation is often much tougher and more complicated than people think. The assumption is that a medically complicated decision is simple: gather a few facts, reason about them like a medical professional (i.e., a doctor), and you can figure out exactly what the patient is sick with and how to treat it. That is how AI would act, and it would be done in minutes, if not seconds.

But medical staff need to consider more variables than the AI may have been trained on, and therein may lie a bed of thorns. Who is truly conversant with the limits of AI training and the bias inherent in its vast network? Certainly, hospital staff aren’t equipped to do much. What are the potential harmful effects?

The AI tools, and the machine learning (ML) methods that make them up, are not perfect, and it is not likely they ever will be. Adopting AI will bring benefits, but also the familiar problem of AI tools making mistakes. According to a study from the European Parliamentary Research Service, one of the biggest risks of introducing AI into healthcare is that it could harm patients through error.

Are hospital administrators or private equity managers up to the task of monitoring AI, rather than zeroing in on its bottom-line savings? Instead of becoming a major moneymaker for them, it could become a swamp of lawsuits, with major judgments against them pushing some into bankruptcy.

Caution seems to have been thrown to the wind in the heady giddiness exhibited by people who should know better. Yes, I realize I am being caustic, but people’s lives, livelihoods, and professions are on the line. We are not talking about trading stocks; we are working with lives.

Website: www.drfarrell.net

Author's page: http://amzn.to/2rVYB0J

Medium page: https://medium.com/@drpatfarrell

Twitter: @drpatfarrell

Attribution of this material is appreciated.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827