

As artificial intelligence becomes more embedded in healthcare, growing concerns about transparency, accountability, and human judgment are reshaping how patients trust medical systems.

By Dr. Oni Blackstock
Founder and Executive Director
Health Justice
After OpenAI and Anthropic launched dedicated health care initiatives in January, a study published in February found that OpenAI's ChatGPT Health had a 50% error rate, incorrectly recommending that care be delayed in half of emergency test cases.
That error rate, which was not identified before the app was rolled out, is a symptom of a broader problem: health care systems and insurers are rapidly adopting AI, often skipping the essential testing needed to determine how well these systems work and how safe they are for patients. This push to expand AI in health care is intensifying an existing trust crisis.
The decline of trust in health care in the U.S. has been ongoing and was worsened by the institutional responses to the Covid-19 pandemic. A national survey of more than 443,000 U.S. adults found trust in physicians and hospitals fell more than 30 percentage points between 2020 and 2024, from 72% to 40%, with declines across multiple sociodemographic groups. For Black, Latine, and Indigenous communities, this collapse layers onto preexisting medical mistrust rooted in a long and ongoing history of medical racism in the U.S. health care system. Research shows that patients who distrust their health care providers are more likely to delay care, including preventive screenings, and discontinue their medications, and that those patterns are associated with higher rates of hospitalization and premature death.
READ ENTIRE ARTICLE AT STAT NEWS


