AI Is the Only Unchecked US ‘Sector of Consequence,’ Says Health Care Exec

Artificial intelligence has the potential to revitalize the health care workforce—but not without some level of risk, according to Dr. Chris DeRienzo, chief physician executive of the American Hospital Association.

DeRienzo gave the introductory remarks at Newsweek’s September 17 panel discussion, “Is AI the Cure for Doctor Burnout?,” at One World Trade Center in New York City.

The health care industry is experiencing a historic workforce crisis with clinicians burning out at “alarmingly high rates,” according to DeRienzo. The issue is a top priority for health systems everywhere, regardless of location or financial status.

AI has emerged as a possible solution. New tools have the potential to decrease physicians’ administrative burden by streamlining workflows and automating time-consuming tasks. Some are even designed to aid clinical decision-making.

Generative AI tools have the potential to lighten doctors’ loads and reduce burnout, but it’s hard for health systems to vet tools without a consensus definition of responsible AI, according to Dr. Brian Anderson, CEO… (Photo: Newsweek)
But these technologies are still in their infancy, with many having emerged only within the past two years. Generative AI (the kind that creates original content and holds significant promise in health care) exploded into the public eye when ChatGPT launched on November 30, 2022. That month, only two press releases were published on generative AI in health care, according to an analysis from Bain & Company. By November 2023, the number had skyrocketed to 45.

AI’s rapid rise has left governments and regulatory bodies scrambling to catch up. The European Parliament adopted the Artificial Intelligence Act in March, and the European Council followed suit in May. The law will impose safeguards on artificial intelligence in the European Union over the course of two years, while “boosting innovation and establishing Europe as a leader in the field,” according to a news release from the Parliament.

Regulations aren’t fully baked yet in the United States. In October 2023, President Joe Biden issued an executive order to catalyze a “coordinated, federal government-wide approach” to governing the responsible use and development of AI. Among the order’s actions was the establishment of an AI task force at the Department of Health and Human Services.

AI legislation has also been on Congress’ docket. On September 11, the House Committee on Science, Space, and Technology passed nine bills to develop AI guidance, training and innovation, including one that would authorize $2.58 billion over the next six years to establish a National AI Research Resource.

But the bills still have to pass through the full House of Representatives and the Senate to become law. That could take a while.

In the interim, health care leaders must make decisions about how to harness AI’s “currently unknowable power,” DeRienzo said at Newsweek’s event.

“You and I both know that technology is increasingly core to our experience of health care,” DeRienzo told the audience of physicians and health care executives. “But it’s also true that when we leave technology to its own devices, we run the risk of technology really being an orchestra without a conductor.”

Fortunately, “all over this country,” DeRienzo continued, “there’s an increasingly diverse group of conductors who are using this technology to make some pretty spectacular music.”

Dr. Chris DeRienzo, chief physician executive of the American Hospital Association, delivers opening remarks ahead of Newsweek’s panel discussion, “Is AI the Cure for Doctor Burnout?” (Photo: Marleen Moise)

One of the most significant groups is the Coalition for Health AI (CHAI)—a collection of more than 3,000 member organizations from the private and public sectors working to define best practices for AI in health care.

Dr. Brian Anderson is the CEO and co-founder of CHAI, which began in 2021 and was inspired by the response to the COVID-19 pandemic. Anderson noticed how stakeholders from all over—including competitors who wouldn’t normally collaborate—put their heads together to formulate a plan. He wanted to replicate that community while confronting another unknown: AI.

“We looked at AI, and one of the things we asked was, ‘Do we have a consensus agreement on what good, responsible AI looks like in health at a technical level?'” Anderson told Newsweek. “We really quickly came to the answer: ‘No, we don’t.'”

The organization has five working groups tackling that consensus agreement from different angles, Anderson said. It’s an arduous task that extends beyond defining best practices; CHAI must also determine how to objectively measure an AI model’s alignment with whatever guidelines ultimately emerge.

So far, CHAI has partnered with 32 prospective quality assurance labs to independently evaluate AI models. They aim to create “report cards” using that common definition and share them so people can determine if a tool is safe and effective—and, “importantly, safe and effective on a patient like them,” Anderson said.
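
The article doesn’t detail what these report cards contain, so as a purely illustrative sketch, the structure below shows the kind of information one might carry. Every field name and the quality threshold are hypothetical assumptions, not CHAI’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SubgroupResult:
    """Hypothetical: how a model performed on one demographic subgroup."""
    group: str        # e.g., "patients over 65"
    n_patients: int   # size of the evaluation sample
    auroc: float      # discrimination; 0.5 is no better than a coin flip

@dataclass
class ModelReportCard:
    """Illustrative sketch of a lab-issued 'report card,' not CHAI's format."""
    model_name: str
    intended_use: str            # the clinical task the vendor claims
    training_data_summary: str   # provenance of the training population
    overall_auroc: float
    subgroups: list[SubgroupResult] = field(default_factory=list)

    def flagged_groups(self, min_auroc: float = 0.7) -> list[str]:
        """Subgroups whose performance falls below a chosen bar."""
        return [s.group for s in self.subgroups if s.auroc < min_auroc]
```

A hospital weighing a purchase could then scan the flagged subgroups to ask Anderson’s question directly: is the tool safe and effective on patients like ours?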

Most accessible data sets in the United States come from urban academic medical centers, whose patients are usually highly educated and white, according to Anderson. Sometimes, an AI model is incredibly accurate at predicting a heart attack in the white populations it was trained on—but when applied to an African American patient, it performs worse than a coin flip, he said.
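
To make that failure mode concrete, here is a minimal sketch of the stratified evaluation that would surface it, assuming a pandas DataFrame with hypothetical columns race, had_heart_attack (the true outcome) and risk_score (the model’s output). It illustrates the general technique, not any specific CHAI procedure.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def auroc_by_group(df: pd.DataFrame, group_col: str = "race") -> pd.Series:
    """AUROC per demographic subgroup; 0.5 means coin-flip performance."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["had_heart_attack"], g["risk_score"])
    )

# An aggregate AUROC can look excellent while one subgroup's score sits
# at or below 0.5 -- exactly the "worse than a coin flip" case Anderson
# describes when a model trained on one population is applied to another.
```

Running such a check requires labeled outcomes for every subgroup, which is precisely why the narrow provenance of most accessible U.S. data sets matters.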

At the moment, it’s hard to get full transparency from health tech companies, Anderson said. If a health system wants to buy an AI tool that promises to reduce burnout, or that claims to have been trained on diverse data sets, it can’t obtain an independent evaluation of those claims. The framework simply doesn’t exist yet, and, as Anderson put it, “it’s hard to completely trust someone who has a clear conflict of interest.”

CHAI’s members are sharing discoveries with one another and learning as they go, according to Anderson. Along the way, they’re informing the U.S. government of their findings so that regulators have an informed framework to build laws upon.

The slow process stands in stark contrast to the rapid clip of generative AI innovation, which is “moving a mile a minute,” Anderson said. For him and his colleagues in the health care industry, there’s an urgency to the work.

“Every sector of consequence in the U.S. economy has the ability to have independent entities that evaluate things for safety and effectiveness,” Anderson said. “We don’t have that in health AI. And that’s a huge problem.”
