The Intelligence May Be Artificial but Humans Need to Be Factored Into AI Trust and Acceptance | AHIP 2024


Experts on AI spoke about strategies and tactics for implementing artificial intelligence in ways that would assuage worries and build trust during a keynote session at the AHIP annual meeting.

Interest in artificial intelligence (AI) is cresting in healthcare as many hope that its super-charged computing power can do everything from accelerating drug development to predicting illness.

Miriam Vogel, J.D.

That soaring optimism was in evidence at a keynote session on artificial intelligence at the AHIP annual meeting last week in Las Vegas. “We think that what AI can, and will, bring into the healthcare space is the most exciting thing to happen to our world since I can't even imagine when. We're clearly in a revolution,” said panelist Miriam Vogel, J.D., president and CEO of EqualAI, and chair of the National AI Advisory Committee, which advises the White House on AI issues.

But Vogel and the other panelists also spoke about grounding AI in human, economic and regulatory realities that would help instill trust in its application in healthcare.

Neil Gomes, M.M.S., M.Ed., MBA

Neil Gomes, M.M.S., M.Ed., MBA, senior vice president and chief digital officer for AmeriHealth Caritas, warned against vendors simply turning on systems that “ingest” data and advised starting with simple use cases. For AI to win trust, Gomes said, there “needs to be a lot more literacy in the [AI] place.”

“People need to be educated not just to develop the right answer but to be able to ask the right questions,” Gomes said.

David C. Rhew, M.D., global chief medical officer and vice president of healthcare at Microsoft, noted that the transparency cited as a necessary ingredient in developing “responsible AI” means different things to different people, and that consumers and patients might be more concerned with what is going to be done with AI-generated data and whether the AI was tested in people like them. Rhew said it is sometimes difficult to know if there is bias in AI “until you actually start looking for the unintended consequences, and so there’s an element here of, we have to put in new processes if we’re going to implement this new technology.”

Vogel summarized some of the resources on best practices on her organization’s website, including one on good AI “hygiene” and processes that should be put into place “if you want to be a responsible AI actor.” She also outlined the work of the National AI Advisory Committee. The executive order on AI issued by President Joe Biden in October 2023 reflected most of the recommendations made by the committee earlier in the year, Vogel said.

After ChatGPT, Claude and other platforms planted AI in the public consciousness, the committee shifted its focus to AI’s implications for literacy, education and awareness and also started meeting monthly, Vogel said. She directed the AHIP audience’s attention to the National Institute of Standards and Technology, which she described as a “little hidden jewel” in the federal government’s Department of Commerce, as a source of best practices on artificial intelligence and cybersecurity.

There was little if any disagreement among Vogel, Gomes and Rhew, and the differences in what they said about AI were largely a matter of emphasis and perspective. One common thread was that AI should not be developed and implemented in isolation. Gomes spoke about governance systems or centers of excellence that include people from human resources, compliance and those who might be affected by AI.

“We are no longer in an era where IT can be some individual off to the side of the room or [in] the cubicle that you call for IT help. AI has to be front and center,” Vogel said.

She also said that “study after study shows that the more diverse, the more stakeholders that participate in AI development and throughout the entire AI lifecycle, the better the AI system is.”

David C. Rhew, M.D.

Rhew highlighted two initiatives that Microsoft is involved in. Earlier this month, the company, along with Google, announced that it would provide free or deeply discounted cybersecurity services to rural hospitals. He said the cybersecurity efforts can have a knock-on effect, providing a framework within which smaller providers can take advantage of AI.

Rhew also mentioned the Trustworthy & Responsible AI Network (TRAIN), which was announced at the Healthcare Information and Management Systems Society meeting in March. The network, which includes prominent hospital systems such as Mass General Brigham in Boston, Johns Hopkins Medicine in Baltimore and Cleveland Clinic, aims to “operationalize responsible AI principles to improve the quality, safety and trustworthiness of AI” in healthcare, the announcement said, and Microsoft is the “technology enabling partner.” Rhew said one of the motivations behind TRAIN was to make AI available to healthcare providers that would likely struggle to adopt it, such as under-resourced rural hospitals and federally qualified health centers. He spoke about “democratizing” responsible AI care.

Although advocates for technology are generally leery of regulation, Vogel spoke in favor of it: “It’s how we built trust in airplanes. It’s how we built trust in cars.”

Vogel added, “That’s what I think proper regulations do. [They] make sure that we have trust and confidence in new systems that are putting so much consequence and have so much impact on our lives.” She said government can help define and ensure AI literacy and establish consistent metrics.

Rhew didn’t endorse Vogel’s take on regulation, but he did speak in favor of public-private partnerships and of healthcare providers, payers and other sectors working together. “It’s a new frontier. We don’t have a playbook for exactly how to do this. But we can learn together.”

He also sketched out a future of more ongoing evaluation of AI by entities who use it. “We can't just simply apply the current process,” Rhew said. “And we're now seeing from a policy standpoint, that even the regulators are recognizing that we're in a new paradigm, because it used to be developer develops algorithm, the FDA looks at the results, they do a clearance and then it gets deployed. And pretty much most people forget about it, unless there's some kind of an unusual event that occurs. But for the most part, there's not an ongoing activity to see whether it’s working. We're now realizing that there's actually three partners to this, there are developers, the regulators and there are implementers. And the implementers also have a role.”

Gomes advocated for a mind-set of learning and getting informed about AI: “Participate — don’t be on the sidelines.” He also said that healthcare organizations are businesses and should think about applying AI to business processes and not focus only on the provision of healthcare.

© 2024 MJH Life Sciences

All rights reserved.