Distrust of Artificial Intelligence in Healthcare

By Russell R. Barksdale, Jr.

In 2024, over $100 billion was reportedly invested in artificial intelligence (AI) companies, an increase of more than 60% over 2023. Many financial analysts regard AI as the leading sector for venture capital, reflecting investor confidence. However, when it comes to healthcare, enthusiasm is tempered by skepticism. While AI holds significant promise in medical applications, concerns about trust and safety remain paramount for patients and healthcare professionals alike.

A recent national study highlights this growing unease: more than 65% of adults surveyed expressed low confidence in their healthcare system’s ability to implement AI responsibly. Additionally, over 57% doubted that their health system could ensure AI-driven decisions would not cause harm. This hesitancy suggests that while AI adoption in healthcare is accelerating, efforts to reassure patients of its benefits are lagging. The skepticism may also stem from well-publicized cases of insurance-based precertification denials driven by AI algorithms.

The application of AI in radiology, pathology, and clinical decision support systems has the potential to revolutionize diagnostics, mitigate risk factors, and optimize treatment plans. By analyzing vast amounts of medical research and outcome data, AI could assist physicians in making earlier and more accurate diagnoses. However, this potential must be balanced against the realities of the healthcare landscape, where providers must stay abreast of evolving medical knowledge while also navigating complex administrative burdens.

One of AI’s most controversial roles in healthcare is its use in managed care, particularly in the precertification process for clinical tests and medical procedures. AI-driven algorithms are increasingly employed by insurers to assess medical claims and determine “medical necessity,” too often leading to claim denials. An analysis of 2023 claims data reveals a stark disparity in prior authorization rates: Medicare Advantage enrollees faced nearly two prior authorization requests per person, while traditional Medicare beneficiaries encountered roughly one per 100 enrollees.

Further scrutiny suggests potential shortcomings in AI-driven precertification determinations. A recent claims analysis found that only about 1 in 10 precertification denials were appealed, yet of those appeals, more than 80% were approved. These findings lend weight to the “squeaky wheel” theory: patients who persist often prevail, which suggests that many initial algorithmic denials were simply wrong. For these reasons, human oversight remains crucial to prevent inappropriate denials or bias.

Despite its theoretical potential in clinical medicine, AI is currently being deployed primarily in administrative functions, such as billing automation and patient scheduling. The financial burden of AI implementation also raises concerns about digital disparities—healthcare systems with greater resources are better positioned to evaluate and refine AI applications, while underfunded institutions may struggle to integrate these technologies effectively.

Just over a decade ago, IBM’s Watson was celebrated as a groundbreaking innovation poised to revolutionize healthcare. The Jeopardy-winning supercomputer was introduced as a powerful tool for physicians and clinical researchers, capable of processing vast amounts of medical data to enhance disease diagnosis, treatment development, and patient care. However, Watson ultimately fell short of its ambitious promise. Now, with the dramatic reduction in data storage costs and exponential advances in processing speeds, machine learning is entering a new era in healthcare—one that may finally fulfill the potential once envisioned.

A national survey of U.S. hospitals found that approximately 65% have adopted AI-powered predictive models, with 79% relying on models developed by their electronic medical record (EMR) developers. Among hospitals using predictive analytics, 92% employed AI for inpatient health trajectory predictions, 79% for identifying high-risk outpatients, and 51% for scheduling optimization. However, independent evaluations of AI accuracy and reliability are essential to enhance patient trust and ensure safe outcomes.

AI is, by definition, “artificial,” and without continuous human oversight, its effectiveness and safety cannot be assured. To ensure AI fulfills its promise, rigorous testing, monitoring, and recalibration are necessary to reduce bias and enhance reliability. Just as new pharmaceuticals undergo extensive clinical trials before widespread adoption, AI-driven medical tools must be subjected to stringent validation processes. Only through robust oversight and transparency can AI reach its full potential and earn the trust of patients and providers alike.

Russell R. Barksdale, Jr., Ph.D., MPA/MHA, FACHE, is President & CEO of Waveny LifeCare Network.

New Canaan Sentinel Digital Edition
