MarketWatch

As AI advances in healthcare, industry players wrestle with its risks

By Eleanor Laise

Accuracy of health information, privacy and equity are among executives' key concerns as health AI rapidly evolves

As artificial-intelligence tools show promise in transforming healthcare, technology companies, healthcare providers, drugmakers and other players are also focused on the technology's potentially damaging side effects.

AI-powered tools may help cut healthcare costs, ease health providers' administrative burdens, improve diagnoses and optimize clinical-trial designs, proponents say. But the tech and healthcare industries must also address the risks that these tools could perpetuate racial disparities, violate patients' privacy or mislead consumers who are trying to obtain health information from a ChatGPT-style interaction, industry executives said at a Washington, D.C., healthcare conference Wednesday.

"ChatGPT is pretty exciting, but it writes things definitively, and as human beings we're used to taking definitive statements very seriously," Cris Ross, chief information officer at the Mayo Clinic, said during an AI-focused panel at the conference, which was organized by Politico. "A lot of the responses from these tools still require human introspection, inspection, evaluation and so on," he said.

Earlier Wednesday, Alphabet Inc.'s (GOOGL) Google Cloud announced that the Mayo Clinic will use its generative-AI search tool, which unifies data across documents, databases and intranets, to help healthcare professionals find information more quickly.

As the technology rapidly evolves, health experts are racing to ensure that it's trustworthy and transparent. The Coalition for Health AI, a group of academic health systems, organizations and AI practitioners, in April released a set of recommendations aiming to align health AI standards and reporting. Roughly 40% of healthcare-industry working hours could be transformed by generative AI, which uses algorithms to create new content such as text and images, according to a recent report by consulting firm Accenture.

AI advancements have raised a host of questions about ethics and equity in healthcare. Large language models such as ChatGPT, for example, are fed data from books, articles and other sources that may lack diversity and representation in their authorship and subject matter and may only exacerbate existing societal biases, researchers from Indiana University and the University of Houston wrote in a recent report in the journal Health Affairs.

"At the stage of actually training and evaluating models, before they are ever brought into a decision-making context or connected to anything operational, there needs to be a really robust set of guidelines on fairness and bias checking," Hirsh Jain, head of public health at Palantir Technologies Inc. (PLTR), said at the conference.

With the right inputs, the tools could be used to improve representation in clinical trials, Moderna Inc. (MRNA) chief legal officer Shannon Thyme Klinger said at the conference. "If you have it employed ethically, if you're training your models to take into account all biases, we might actually be able to enroll even more diverse clinical trials," she said. "Our clinical trials should look like the communities where we live and work, and they don't today."

The accuracy of information coming from generative AI is also critical in healthcare, executives said. While generative AI is incredible, "there are also a lot of limitations," said Palantir's Jain. "Doing computation and generating something that is factually true is really hard when the generative component is there."

Thorny regulatory issues remain. "The almost ever-changing nature of AI requires us to think differently about how it should be regulated," Bakul Patel, head of digital health regulatory strategy at Google, said during the panel discussion. "Of course it has to be regulated, but it has to be regulated in a responsible way" to prevent problems down the road with bias, access and other issues, Patel said.

The hype around AI's potential to transform healthcare must be tempered with a recognition of its incremental benefits, some experts say. "I'm a huge fan right now of little AI," the Mayo Clinic's Ross said. "Not the doctor in the sky who's going to diagnose all disease and help with all treatment plans, but something that takes away administrative load and lets people operate at the top of their license. That's where we're going to be putting a lot of focus over the next couple of years."

One of Palantir's core tenets, Jain said, is that "all of the innovation, all the technology that we deliver is significantly more powerful when it's augmenting human intelligence than when it's replacing it."

-Eleanor Laise

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.


(END) Dow Jones Newswires

06-07-23 1629ET

Copyright (c) 2023 Dow Jones & Company, Inc.
