How AI is Changing Healthcare

Feature Article
MHE October 2024, Volume 34, Issue 10

Artificial intelligence is quickly becoming a valuable tool in the U.S. healthcare industry. Experts say a thoughtful approach can head off ethical problems and optimize efficiency.

Matthew DeCamp, M.D., Ph.D., is used to dealing with a flood of disparate questions through his health system’s messaging platform. Recently, though, he’s gotten some help in responding. A large language model (LLM) analyzes patient queries and generates draft responses for him to approve or edit.

Matthew DeCamp, M.D., Ph.D.

“Sometimes the responses are amazingly helpful,” says DeCamp, an internist and bioethicist at the UCHealth University of Colorado Hospital in Aurora.

The responses often demonstrate an understanding of the question, express empathy and even pick up on subtle cues, he says.

Other times, though, the artificial intelligence (AI) platform is less helpful. Once, a patient wrote to DeCamp seeking advice for “the worst runny nose of his life.” The AI draft response suggested the culprit could be a cerebrospinal fluid leak.

“It wasn’t technically wrong,” DeCamp points out. But absent other risk factors or symptoms, it probably was not necessary to warn the patient about such a dramatic possibility. It’s the kind of context-ignorant mistake a human likely would not make.

“A clinician would have that judgment of the whole picture in their head,” DeCamp says. “Whereas in this case, the AI did not.”

DeCamp’s story might be seen merely as an example of AI gone awry. In truth, though, it is illustrative of a more complex, layered reality: AI is already being widely used in U.S. healthcare, in ways both obvious and not so plainly seen. It works surprisingly well in many instances. At the same time, there is a maze of problems to work through, both technical and ethical, if the technology is to reach its full potential.

The beta period

The list of potential AI applications in healthcare is seemingly endless, but a few uses have surged to the forefront: operational efficiency, radiology/imaging and drug discovery.

Barinder Marhok, M.Pharm.

Barinder Marhok, M.Pharm., global head of life sciences at Quantiphi, a digital engineering firm headquartered in suburban Boston, says one ripe opportunity to improve health systems’ efficiency lies in the mundane problem of scheduling patients.

“Scheduling a patient requires 25 to 30 parameters to be understood before you can say, ‘OK, this patient can come at this time,’ ” he notes.

Optimal scheduling not only means knowing the availability of physicians, nurses and exam rooms, but also factors such as how long an exam might take and what level of provider the patient requires. AI can help schedulers more efficiently pick their way through the bramble of those variables to arrive at the best time, he says.
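To make that concrete, here is a minimal sketch of how a scheduler might combine hard and soft constraints, written in Python; the handful of parameters and the scoring rule are hypothetical stand-ins for the dozens a production system would weigh.

```python
# A minimal constraint-scoring sketch; parameters are hypothetical.
from dataclasses import dataclass

@dataclass
class Slot:
    start_hour: int        # 24-hour clock
    duration_min: int      # contiguous free time at this slot
    provider_level: str    # e.g., "physician", "np"
    room_available: bool

def score_slot(slot: Slot, needed_min: int, needed_level: str,
               preferred_hour: int) -> float:
    """Score a candidate slot; higher is better, -inf if a hard constraint fails."""
    # Hard constraints: enough time, the right provider level, a free room.
    if slot.duration_min < needed_min or not slot.room_available:
        return float("-inf")
    if slot.provider_level != needed_level:
        return float("-inf")
    # Soft constraint: prefer slots close to the patient's preferred time.
    return -abs(slot.start_hour - preferred_hour)

slots = [
    Slot(9, 20, "physician", True),    # too short for a 30-minute visit
    Slot(11, 45, "physician", True),   # satisfies every hard constraint
    Slot(14, 30, "np", True),          # wrong provider level
]
best = max(slots, key=lambda s: score_slot(
    s, needed_min=30, needed_level="physician", preferred_hour=10))
print(best)  # Slot(start_hour=11, ...)
```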

Quantiphi has started marketing a product called Baioniq that is designed to help health systems and other enterprises use LLMs more efficiently. The platform acts as a single interface through which users can access and select the best LLM for each particular use. That specificity matters because some models are better suited than others to certain generative AI tasks, such as answering patient medical questions. The platform also provides added layers of security and information management, which are essential for healthcare data given its strict privacy requirements and attractiveness as a target for cyberattacks.
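The routing idea itself is simple to sketch; the task names, model names and `call_model` transport below are hypothetical illustrations, not Baioniq’s actual API.

```python
# A minimal per-task LLM routing sketch; model names are invented.
TASK_ROUTES = {
    "patient_message_draft": "clinical-tuned-llm",
    "discharge_summary": "summarization-llm",
}
DEFAULT_MODEL = "general-llm"

def call_model(model: str, prompt: str) -> str:
    # Placeholder transport; a real platform would call a vendor API here,
    # behind its security and data-governance layers.
    return f"[{model}] draft for: {prompt[:40]}"

def route(task: str, prompt: str) -> str:
    """Dispatch a prompt to the model configured for this task type."""
    return call_model(TASK_ROUTES.get(task, DEFAULT_MODEL), prompt)

print(route("patient_message_draft", "Patient reports a severe runny nose"))
```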

There are also plenty of here-and-now clinical applications. Christina Silcox, M.S., Ph.D., the research director for digital health at the Duke-Margolis Institute for Health Policy at Duke University in North Carolina, says AI can also boost efficiency by helping to triage patients. For instance, she noted that patients who suffer a stroke have significantly better outcomes if they receive treatment within the first hour — known as the “golden hour” — after the stroke. But speedy treatment can be difficult, in part because it takes time for a radiologist to confirm the diagnosis.

Christina Silcox, M.S., Ph.D.

“The way that radiology worked for a long time was that people read these reports chronologically,” she says. Perhaps a persistent physician could get their patient bumped to the front of the line, she says, but often much of the “golden hour” was eaten up waiting for the radiology report. Now, AI technology can automatically identify likely cases of stroke and put those at the top of a radiologist’s queue. “And that makes it much more likely that you’ll get that treatment within that golden hour,” she says.
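Mechanically, this kind of triage amounts to a priority queue: flagged studies sort ahead of the chronological backlog. Here is a minimal sketch, assuming a hypothetical stroke-probability score from an upstream imaging model and an assumed flagging threshold.

```python
# A minimal worklist-triage sketch; the probability threshold is assumed.
import heapq

def build_worklist(studies):
    """Order studies so suspected strokes jump the chronological queue.
    Each study is an (arrival_order, stroke_probability) pair."""
    heap = []
    for order, prob in studies:
        urgent = prob >= 0.8  # assumed flagging threshold
        # heapq is a min-heap: urgent cases (0) sort before routine (1),
        # and arrival order breaks ties so routine reads stay chronological.
        heapq.heappush(heap, (0 if urgent else 1, order, prob))
    return [heapq.heappop(heap) for _ in range(len(heap))]

studies = [(1, 0.05), (2, 0.92), (3, 0.10), (4, 0.85)]
for _, order, prob in build_worklist(studies):
    print(f"study {order}: stroke probability {prob:.2f}")
# Studies 2 and 4 are read first; the others keep their original order.
```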

AI can also improve both the reading of images and the performance of the technicians taking the images, Silcox says. She noted that at some smaller healthcare facilities, the people performing ultrasounds are imaging generalists. Sometimes that means a second ultrasound is required in order to get the particular information necessary for a diagnosis. AI can assist generalists when taking ultrasounds to ensure the correct views and data are recorded the first time. The result is magic to managed care ears: “That lowers costs, increases efficiency and helps diagnose things faster,” she says.

A new way of thinking

Perhaps one of the biggest ways AI is changing healthcare is in the field of drug discovery, according to Jackie Hunter, Ph.D., a longtime pharmaceutical executive who now chairs the board of Britain’s Stevenage Bioscience Catalyst biomedical campus, which is about 30 miles north of London.

Drug discovery, she says, has always required good data. “You need the best information to make the best decisions,” she says. “And when you have so much information, trying to synthesize it is just beyond human comprehension.”

Jackie Hunter, Ph.D.

Hunter led neurology and gastrointestinal drug discovery and early clinical development at GSK, and later became chief executive at BenevolentAI, a firm that uses AI to support drug discovery.

BenevolentAI’s platform, and others like it, can identify potential molecules and targets that are likely to have a meaningful clinical effect, she says. Moreover, AI can then conduct simulations to help developers choose compounds to study in trials.

“Instead of having to make 3,000 compounds, using simulations you could actually decide that you just have to make 60 compounds that give you the most information with the least redundancy,” she says. “So that’s both a time and a cost saving.”
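One common way to operationalize “the most information with the least redundancy” is greedy diverse-subset selection. The sketch below uses random numeric fingerprints and plain Euclidean distance as stand-ins for the simulation-derived scores Hunter describes.

```python
# A minimal diverse-subset selection sketch; fingerprints are random stand-ins.
import random

random.seed(0)
# Pretend each candidate compound is summarized by a numeric fingerprint.
compounds = {f"cmpd_{i}": [random.random() for _ in range(8)] for i in range(300)}

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pick_diverse(pool, k):
    """Greedily pick k compounds, each maximizing its minimum distance
    to the compounds already chosen (information up, redundancy down)."""
    chosen = [next(iter(pool))]  # seed with an arbitrary compound
    while len(chosen) < k:
        best = max((c for c in pool if c not in chosen),
                   key=lambda c: min(distance(pool[c], pool[s]) for s in chosen))
        chosen.append(best)
    return chosen

subset = pick_diverse(compounds, k=60)
print(f"selected {len(subset)} of {len(compounds)} compounds for synthesis")
```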

One of BenevolentAI’s success stories is the rheumatoid arthritis therapy baricitinib, which Eli Lilly and Company markets under the brand name Olumiant. When the COVID-19 pandemic hit in early 2020, BenevolentAI partnered with Eli Lilly to see whether the therapy might help patients with COVID-19. The partnership ultimately resulted in an emergency use authorization from the FDA for baricitinib to be used in combination with Veklury (remdesivir) in certain patients with severe COVID-19.

Eli Lilly is not alone in using AI. According to a review article published in the journal Drug Discovery in June 2024, 67 AI-discovered molecules were being assessed in clinical trials in 2023, up from just seven in 2018. AI was used to identify the therapeutic target in 24 cases and to design the small molecule in 22 others. AI was also used to develop vaccines and antibodies, and to repurpose existing molecules, the investigators noted.

Hunter says AI can also be used to more precisely stratify and select patients for clinical trials so that investigators have a better sense of which patients will respond to a therapy, and why.

Still, Hunter says large pharmaceutical companies have generally taken a cautious approach toward AI. In some cases, she says, the issue is not so much an unwillingness to embrace AI as a desire to develop their own AI systems internally rather than partnering with outside collaborators, even though the latter might speed up the process. Some of the caution, though, comes down to a reluctance to upend systems that have largely been successful. She compares the situation to when drug companies first started using molecular biology. Initially, they tended to treat molecular biology as a stand-alone department.

“But now, of course, molecular biology is just embedded in everything we do and how we think,” she says.

Hunter says AI also opens up new ways of thinking about drug discovery. Since AI can better analyze things like the pathogenesis of diseases, she envisions a world where mechanisms become more important than disease categories.

“We’re moving from more of a disease phenotype to a mechanism phenotype,” she says, “where we’re realizing, for example, in cancer, the same mechanism can apply in several different types of cancer, but only in subtypes.”

Due diligence

As an unofficial ambassador for AI, Hunter has seen firsthand the ways in which perceptions and fears of AI can affect attitudes toward it. She recalls speaking at a conference several years ago and being confronted by an audience member.

“This professor of pathology in the audience stood up and told me I was talking absolute rot, and that there was no way that digital pathology could replace a pathologist,” she says.

Ironically, Hunter says she does not necessarily disagree with the professor. “I’m not sure that the healthcare system, unless it’s under a lot of strain, would be amenable to taking the human out of the loop completely, for a number of different reasons,” she says.

Among those reasons: Taking a human out of the loop would complicate accountability if the AI makes a mistake. However, she expects digital pathology to match or exceed human pathologists someday, even if its main purpose is to make them more efficient by helping them focus their energies on rare and complicated cases.

Silcox, at Duke, says health systems have generally been wary of tools with direct implications for patient health, such as AI-powered clinical decision support technology.

“Clinical decision support has been around for a long time, and often it has not been great,” she says. “And so I think that people are excited about the idea of it being better, but cautious about it.”

Marhok says AI developers should take heed and adopt a similar sense of caution.

“It may be the industry is doing itself a disservice by going in front of doctors and trying to say, ‘We could help you,’ when they have not done the proper due diligence,” he says. It is better, Marhok says, to take longer to ensure the technology meets the needs of the healthcare industry.

There are other ways in which the industry risks moving too quickly. AI brings with it a number of novel ethical and practical issues.

DeCamp, who has a doctorate in philosophy and teaches at the University of Colorado’s Center for Bioethics and Humanities, says if the algorithms used to analyze patients’ cases are trained on data that is unrepresentative of the wider population, their predictions could end up being biased or inaccurate. That issue is well known, he says. Less well known, though, is biased processing — flaws in the variables and assumptions used to interpret the data. One example, he says, was an algorithm that treated healthcare expenditures as a proxy for overall health on the premise that healthy patients spend less on care.

“In our country, healthcare costs and what is spent on a patient does not necessarily correlate with the actual needs that they have,” he notes. In other cases, a machine learning algorithm might be given the freedom to select which variables it uses to get to a desired outcome; those, too, can be based on flawed premises, and might be harder to detect.
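The failure mode DeCamp describes is easy to demonstrate: with spending used as a proxy label, two patients with identical needs can receive very different scores. The numbers in this sketch are invented purely for illustration.

```python
# A minimal proxy-label bias sketch; all figures are invented.
patients = [
    # (name, true_care_need, annual_spending): equal need, unequal access
    ("patient_a", 0.9, 12_000),
    ("patient_b", 0.9, 4_000),  # same need, but less was spent on their care
]

def risk_score_from_spending(spending: float, max_spending: float = 15_000) -> float:
    """A model trained on cost as a proxy for health ranks lower-spending
    patients as lower risk, regardless of their true needs."""
    return spending / max_spending

for name, need, spending in patients:
    print(f"{name}: true need {need}, proxy-based score "
          f"{risk_score_from_spending(spending):.2f}")
# Identical needs, but patient_b scores far lower; a program allocating
# resources by this score would systematically under-serve them.
```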

DeCamp says there have been technical efforts to use AI to synthetically make data more representative, but he says a more reliable solution would simply be to do a better job of boosting clinical trial participation, particularly among groups with historically low participation rates.

Hunter says she believes better communication and data governance could help solve the problem.

“Most people actually do want their data to be used for human good, but they want it to be used in a way that’s not for profit necessarily,” she says, “or that’s very open about what it’s being used for.”

Another missed opportunity, Hunter says, is negative trial data. Studies whose hypotheses fail are often left unpublished, even though such data could be equally valuable.

The ‘black box’

AI also brings concerns about explainability, or the “black box” issue. The algorithms used in AI systems can be difficult or impossible for nonexperts to understand, raising questions about the ethics of making medical decisions based on inscrutable outputs from an algorithm.

DeCamp says he does not think patients need to become software engineers to accept AI tools. No patient needs to understand exactly how magnetic resonance imaging (MRI) works before getting an MRI scan that might help clinicians make a diagnosis and decide about treatment. Still, DeCamp says, there is a need for transparency.

“The level of explanation needs to be tailored to the nature of the decision, the risk of the decision and the information that the patient wants to help make a decision,” he says.

Results from a June 2024 survey conducted by the polling firm Morning Consult showed that patients are more willing to accept AI in healthcare for lower-stakes tasks, such as note-taking, than for determining diagnoses or helping with medical procedures.

For many patients, DeCamp says, the issue is less about how something works than whether it works.

For health systems seeking to implement technologies, Silcox notes that it is important to have the proper safeguards and governance in place. She says the first task is simply getting a handle on which AI tools the health system is using. That can be difficult, she says, because in some cases health systems have been using algorithms for years without thinking of them as such.

The next task, Silcox says, is to clearly define a process for evaluation, keeping in mind that some technology may require ongoing monitoring. She notes that while medical hardware — say, a defibrillator — can be expected to continue working in a similar fashion for years, “you do not have that expectation with software, and particularly with machine learning-based software.”
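In practice, that ongoing monitoring often takes the form of a rolling performance check on the deployed model. In the sketch below, the baseline, tolerance and window size are assumptions rather than any standard.

```python
# A minimal model-monitoring sketch; thresholds and window are assumptions.
from collections import deque

class DriftMonitor:
    """Track a rolling accuracy window and flag when performance degrades."""
    def __init__(self, baseline=0.90, tolerance=0.05, window=500):
        self.baseline = baseline      # accuracy measured at deployment
        self.tolerance = tolerance    # acceptable drop before review
        self.results = deque(maxlen=window)

    def record(self, prediction, outcome):
        self.results.append(prediction == outcome)

    def within_tolerance(self) -> bool:
        if not self.results:
            return True
        accuracy = sum(self.results) / len(self.results)
        return accuracy >= self.baseline - self.tolerance

monitor = DriftMonitor()
for pred, actual in [(1, 1), (0, 1), (1, 1), (0, 0)]:
    monitor.record(pred, actual)
print("within tolerance:", monitor.within_tolerance())  # False -> review
```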

Patient need

Also, because of the nature of AI, she says it’s important to have multiple viewpoints represented when institutions consider AI tools. That includes not only healthcare practitioners, but also experts in AI and in regulatory compliance. In some cases, patient advocates are warranted, notes Silcox.

Although DeCamp says it is imperative for the healthcare industry to sort through the ethical issues associated with AI, he does not see them as barriers to the integration of AI in healthcare.

DeCamp says he’s encouraged that many medical schools have begun teaching about AI as part of their curricula, which he says will give future generations a better handle on the utility and limitations of the technology. AI can be leveraged to improve human lives, says DeCamp, with this important caveat: Patients and providers need to put pressure on AI developers to ensure human good is at the center of what they do.

“My hope, of course, is that the way we think about using these technologies is really driven by patient need and by the desire to make healthcare more accessible and more fair to all,” he says.
