The Medical School Dean Who Says AI in Healthcare Moved Faster Than He Ever Expected

Watch more: Monday Conversation With Michigan Medicine’s Dr. Marschall Runge

Twelve months ago, Marschall Runge wasn’t a skeptic, exactly. But he wasn’t a true believer, either. He thought generative AI would be useful in healthcare. A helpful tool, an incremental advance. He didn’t think it would move this fast. He didn’t think it would change this much. He certainly didn’t expect to be sitting across from PYMNTS CEO Karen Webster, describing a world in which AI is already remaking how hospitals operate, how physicians think, and how patients first reach out for help. And confessing, with the candor of someone who has been genuinely surprised, that the transformation is only beginning.

“In a year,” he said, “I’ve completely changed my mind.”

That confession carries weight when it comes from someone like Runge. He ran Michigan Medicine, one of the country’s premier academic medical centers, as its dean and CEO. He is a cardiologist and physician-scientist who has spent a career at the intersection of clinical practice and medical innovation.

He is not easily dazzled.

He has seen waves of transformation come through healthcare before, and he has watched many promises dissolve on contact with the system’s stubborn realities. When a person like that says the speed and scope of what’s happening with AI genuinely caught him off guard, it’s worth paying attention.

The Monday conversation that followed was part progress report, part honest reckoning. A look at what AI is already delivering in healthcare, where it is falling short, and what stands between its current promise and its long-term potential.

‘Doctor GPT’ and the New Front Door to Care

One of the most consequential shifts Runge described is happening not inside hospitals, but in the privacy of patients’ homes, before they ever call a doctor.

Webster put it directly: patients are “talking to ‘Doctor GPT’ as if they’re talking to a real doctor.” They are typing their symptoms, their fears, their questions into AI systems that respond immediately, synthesize information across thousands of variables, and never make anyone feel like they’re wasting their time.

Runge doesn’t dismiss this. He does it himself. He uses AI as a knowledge resource. And he’s been genuinely impressed by what it can do.

“AI thinks broadly,” he said. It can hold a patient’s age, medications, and underlying conditions in mind simultaneously, drawing connections that a physician running behind schedule and juggling a full caseload might miss. He has seen AI surface diagnostic possibilities that trained clinicians hadn’t initially considered.

But the risks, he emphasized, are real.

Overreliance. Misplaced confidence. The seductive feeling of a confident answer where clinical uncertainty is the honest truth. AI does not carry a stethoscope. It cannot read a room. It cannot sense that a patient is frightened, or that something in their affect suggests the problem is not what they’re describing.

Stop Holding AI to a Standard Medicine Itself Can’t Meet

Here is where Runge’s thinking becomes most striking. He rejected, clearly and without hedging, the idea that AI must be error-free before it earns a place in clinical settings.

“We can’t require something that’s just unachievable,” he said. Medicine itself, practiced by the most careful and experienced physicians, produces errors. To demand perfection from AI, and to withhold deployment until it arrives, is not a safety standard. It is a reason to do nothing.

What he wants instead is structure. Certification. Guardrails.

“I think anything that we’re doing medically with AI ought to have to be certified and have guardrails,” he told Webster.

He knows the first adverse event attributed to AI will generate a firestorm, regardless of how AI’s safety record actually compares to the baseline. He wants a framework that anticipates that moment rather than being destroyed by it.

“We’re in this middle zone,” Runge said. Not ready for AI as an autonomous provider. But ready, genuinely, urgently ready, for it to do more.

20% More Capacity. Without Hiring a Single Surgeon.

If the philosophical arguments feel abstract, Runge offered a number to ground them: 20%.

That’s how much one hospital increased operating room utilization after deploying AI to observe surgical workflows and predict when patients would move from the OR to recovery. Runge framed it, half-jokingly, as the difference between “Dr. Speed” and “Dr. Slow”: the AI learned physician-specific, procedure-specific patterns and used them to orchestrate the system more precisely than human scheduling ever could.

Think about what 20% means in a hospital operating at or near capacity.

It means more patients helped, more procedures performed, more families told “we can see you next week” instead of “next month.” It means reducing the waitlists that drive patients to emergency departments out of desperation rather than necessity. It means expanding what the system can do without expanding the workforce, a critical advantage as physician shortages deepen across the country.

AI, in this telling, is not replacing surgeons. It is giving them more room to work.

The Scheduling Problem Nobody Talks About

Access to healthcare is usually framed as a supply problem: not enough doctors, not enough nurses, not enough hours in the day. Runge reframed it. Scheduling, he said, has been “chaos.” Appointment slots scattered across decentralized systems, invisible to each other, each managed in isolation. Centralization helped. AI is doing something more fundamental.

An AI system can scan an entire physician’s calendar instantaneously. Not just the obvious open slots, but the pockets of underutilized time hidden across a week. It can find the gap between a cancelled appointment and a lunch that runs short. It can match that gap to the patient who needs it.
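The gap-finding idea Runge describes can be sketched in a few lines. This is a toy illustration only, not Michigan Medicine's system or any real scheduling product; the function name `find_gaps`, the sample times, and the 30-minute visit length are all invented for the example. A real deployment would layer prediction and prioritization on top, but the core mechanic is a linear scan over booked slots:

```python
# Toy sketch: given one day's booked appointments, find every open
# gap long enough to fit a requested visit. All names and times are
# hypothetical; this only illustrates the scan-for-gaps idea.
from datetime import datetime, timedelta

def find_gaps(appointments, day_start, day_end, needed):
    """appointments: list of (start, end) datetime pairs."""
    gaps = []
    cursor = day_start
    for start, end in sorted(appointments):
        if start - cursor >= needed:   # open stretch before this booking
            gaps.append((cursor, start))
        cursor = max(cursor, end)      # advance past the booking
    if day_end - cursor >= needed:     # trailing open stretch
        gaps.append((cursor, day_end))
    return gaps

day = datetime(2026, 2, 2)
booked = [
    (day.replace(hour=9), day.replace(hour=10)),
    (day.replace(hour=10, minute=45), day.replace(hour=12)),
    (day.replace(hour=13), day.replace(hour=16)),
]
gaps = find_gaps(booked, day.replace(hour=9), day.replace(hour=17),
                 timedelta(minutes=30))
# Finds the 10:00-10:45, 12:00-13:00, and 16:00-17:00 openings,
# including the mid-morning pocket a human scheduler might overlook.
```

The point of the sketch is the asymmetry Runge highlights: a person scanning a week of calendars misses these pockets; a machine enumerates them instantly.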

“Access is the key,” Runge said, and he meant it as both a practical observation and a moral one.

The patients who can’t get a timely appointment don’t just wait. They worry. They delay care. Or they end up in emergency rooms, driving up costs and consuming resources that should be going elsewhere.

The Real Obstacle: It’s Not the Algorithm

Ask Runge what’s standing in the way of AI’s full impact on healthcare, and he doesn’t talk about computing power or data quality or even regulatory frameworks. He talks about money. Specifically, a payment system he called, with unusual bluntness, “all goofed up.”

The relative value units that determine how physicians get paid were designed decades ago. They persist not because they reflect clinical value, but because they are administratively convenient and deeply embedded. Transitioning to outcome-based models, paying for health rather than for procedures, would require dismantling measurement frameworks and compensation structures that entire institutions have been built around.

Reform, Runge acknowledged, will not be quick or easy.

But it is not optional. AI can optimize a broken system. It cannot fix one.

What AI Can’t Do — And Why That Still Matters

For all his conversion, Runge has not lost his clinical instincts. He was clear about the domains where AI’s reach ends. Physical examination. The reading of human emotion. The relational trust that accumulates over years between a physician and a patient who is frightened and looking for someone to believe in.

“I’m unconvinced that it’s going to really be able to read human emotion,” he said. And he said it not as a knock on AI, but as a reminder of what medicine, at its best, actually is.

Not an information-delivery system. A human encounter.

The physician of the future, in Runge’s vision, will need to know enough medicine to interrogate AI outputs rather than simply accept them, to understand why a recommendation is being made, to push back when something feels wrong, to bring the clinical judgment that no algorithm has yet replicated.

And they will need, more than ever, to do the thing AI cannot. Be present.

A year ago, Runge would have described AI as a promising tool. Today, he describes it as embedded infrastructure, already woven into the operational and cognitive layers of healthcare, accelerating, and not yet close to its ceiling.

In his mind, the question is no longer whether AI will transform medicine. It is whether the institutions, payment structures, and regulatory frameworks surrounding medicine can transform fast enough to let it.


The post The Medical School Dean Who Says AI in Healthcare Moved Faster Than He Ever Expected appeared first on PYMNTS.com.



