Panelists discuss potential guardrails for AI in healthcare, securing trust in technology

4 Dec 2025 3:12 PM | Deborah Hodges (Administrator)

Setting up guardrails for the use of artificial intelligence in healthcare will prove challenging, but doing so can help build trust in the technology, panelists said Wednesday.

Dr. Abel Kho, director of the Institute for AI in Medicine at Northwestern University Feinberg School of Medicine, said during a Health News Illinois event in Chicago that AI is going to be ubiquitous, and that it will be “really, really difficult” to create a broad regulatory framework. [Health News Illinois]

Instead, he said, an approach that focuses on specific applications of artificial intelligence, such as prior authorization, makes more sense.

“I think there is a precedent set for states to come in and have a clear, on-the-ground, reality-driven policy, especially around specific domains,” Kho said. “But I think broadly… It's changing too fast, it's too ubiquitous. I think it can be very, very difficult for us to come up with something that's going to be relevant today and in 10 years from now.”

Rep. Bob Morgan, D-Deerfield, said policymakers are “building the plane as it’s in the air” when it comes to regulating AI. The added challenge is that the General Assembly is a “slow, deliberate” group, and regulations approved when lawmakers return to Springfield early next year may be invalid just a few months later.

Other challenges Morgan flagged include a potential federal ban on states regulating AI, and discussions on the balance between guardrails and tapping into the potential of AI in healthcare.

“(This) provides an opportunity to break down those barriers and to democratize healthcare in a way that we've probably never seen,” Morgan said. “Technology is starting to get there with telemedicine and telehealth, but really, artificial intelligence has so many intentions.”

One issue lawmakers will consider next spring, he said, is how insurers outsource claims reviews to third-party payers that use AI to process claims.

Dr. George Cybulski, chief of neurosurgery and clinical AI leader at Humboldt Park Health, said that physicians are trying to solve problems at the patient level, and that collaboration is needed to craft rules that protect all parties.

Panelists also noted that AI has moved much more into the public consciousness in recent years, though the attention has not always been positive.

Dr. Jon Handler, a senior fellow of innovation at OSF HealthCare, said patients are often “shocked” to find out that AI is not being used nearly as much as they think when it comes to medical records and other health services.

He said a key to building trust is deploying AI for a specific purpose that helps patients and physicians, such as transcribing visits or flagging potential mistakes.

“I think in many ways, (patients) expect and are generally happy with the idea that the medical record will serve as a second double-check on the clinician, and to the extent that we make that real, I think that's a good thing that increases trust,” Handler said. “I think the minute they perceive the AI has forbidden them from getting certain care, prevented them from being able to talk to someone they need to talk to, prevented them from getting payment for services they felt they needed at that moment, then we will know that's where the trust in AI for that purpose will be lost.”

###