It did not arrive with a bang. It arrived with clinical necessity.
For much of the public conversation, artificial intelligence in medicine is framed as something looming just over the horizon—futuristic, disruptive, and not quite real yet.
That framing is already obsolete.
The AI revolution in medicine has begun. Not in theory. Not in Silicon Valley demonstrations. It began quietly, unevenly, and pragmatically in clinics, hospitals, laboratories, and operating rooms around the world.
What makes this moment different is not ambition.
It is scale.
If this challenges how you have been thinking about AI in healthcare—whether you are a physician, a healthcare administrator, a patient advocate, or simply someone trying to understand what is happening—I would welcome your thoughts in the comments.
Medicine Has Been Here Before—Just Not This Fast
Long before anyone used the term “artificial intelligence,” medicine understood the power of structured knowledge. One of the great historical advantages of the Mayo Clinic was not a particular machine or breakthrough drug. It was something more fundamental: an extraordinary institutional memory.
Mayo systematically captured clinical experience. Patient histories. Diagnostic pathways. Outcomes. They made that accumulated knowledge available across specialties. Physicians did not work as isolated experts but as contributors to a shared, evolving body of medical judgment. It was not AI, but it anticipated AI’s logic: disciplined data, governed by physicians, in service of better decisions.
What AI is doing now is allowing other institutions to build Mayo-like capabilities without a century of manual accumulation. AI systems can integrate imaging, lab results, clinical notes, genomics, and population data at a scale no human team could manage alone.
The tools have changed. The recipe, at its best, has not.
This matters because the principle underlying Mayo’s success was never technological. It was organizational. It was about who controlled the knowledge and to what ends it was applied. The physicians remained in charge. The data served them. They did not serve it.
The question facing medicine today is whether that principle will survive the transition to artificial intelligence—or whether accountability will quietly disappear into the algorithm.
Where AI Is Already in Clinical Use
The reality of AI in medicine is less dramatic than the headlines suggest and more consequential than most people realize. These systems are not replacing physicians. They are changing what physicians can see, how fast they can act, and how much administrative burden they must carry.
Diagnostics and Imaging
In radiology, dermatology, and pathology, AI systems are already assisting clinicians by flagging abnormalities, prioritizing urgent cases, and reducing missed findings. IDx-DR, authorized by the FDA in 2018, autonomously diagnoses diabetic retinopathy without requiring a specialist to interpret results, the first FDA authorization of an autonomous AI diagnostic system in any field of medicine. Paige AI received FDA approval in 2021 for detecting prostate cancer in biopsy slides, demonstrating 96% sensitivity in clinical trials.
In breast cancer screening, a 2020 Lancet study of over 25,000 mammograms found that AI-assisted image analysis matched the performance of two human radiologists while reducing workload by 44%, with 94.6% specificity. The system did not replace the radiologist; it served as a tireless second reader that never gets fatigued, distracted, or rushed.
These systems do not diagnose in isolation. They augment clinical judgment. The radiologist still reads the scan. The AI ensures nothing gets buried in the queue.
Predictive Analytics in Hospitals
At Johns Hopkins, the Targeted Real-Time Early Warning System (TREWS) has been in clinical use since 2018, predicting sepsis up to six hours before clinical onset. In a 2022 study published in Nature Medicine, TREWS reduced sepsis mortality by 18.2% across five hospitals, which translates to hundreds of lives saved annually at a single health system.
Mount Sinai’s AI system analyzes over 30,000 variables per patient to predict deterioration. Epic Systems, which provides electronic health records to more than 250 million patients, now embeds sepsis prediction algorithms directly into clinical workflows at hospitals including Cleveland Clinic, Kaiser Permanente, and Intermountain Health.
For clinicians, the value is not automation but early warning. A nudge that says: pay attention here. The physician still decides what to do.
The AI ensures the warning comes in time.
Drug Discovery and Clinical Trials
AI is accelerating drug discovery by identifying promising molecular targets, simulating drug-protein interactions, and optimizing trial design. Insilico Medicine used AI to identify a novel drug target and generate a candidate molecule for idiopathic pulmonary fibrosis in 18 months—a process that traditionally takes four to six years. The compound entered Phase II clinical trials in 2023.
During the COVID-19 pandemic, BenevolentAI’s algorithms identified baricitinib as a potential treatment within days of analyzing the virus. The drug received FDA emergency use authorization in November 2020 after clinical trials showed improved recovery in hospitalized patients.
The near-term impact is not miracle cures. It is faster failure—helping researchers abandon dead ends earlier and focus resources where biology shows promise.
This is not glamorous. It is enormously valuable.
Administrative Relief
Perhaps the least glamorous but most immediately felt use of AI is administrative. Nuance’s DAX Copilot, deployed at over 200 health systems including Stanford Health Care and the University of Michigan, drafts clinical notes from ambient listening during patient encounters. Early adopters report documentation time reduced by 50% or more.
For physicians facing historic levels of burnout (a 2022 American Medical Association survey found 63% reporting at least one symptom), this matters. The promise is modest but meaningful: less time on clerical work, more time thinking.
Less time fighting the electronic health record, more time facing the patient.
What Is Underway Right Now
Across the globe, AI systems are moving from pilot projects into regulated clinical workflows. AI-assisted surgical planning and robotic guidance are becoming standard in complex procedures—Intuitive Surgical’s da Vinci systems, enhanced with AI visualization, have been used in over 12 million procedures worldwide. Tempus and Foundation Medicine deploy genomic analysis tools that match cancer patients to targeted therapies based on their specific tumor profiles. Population-level risk stratification, such as Geisinger Health’s AI-driven chronic disease management program, helps health systems identify patients who need intervention before they present in crisis.
These deployments share a common feature. They work best when physicians remain clearly in charge—setting constraints, validating outputs, and retaining responsibility.
This is not accidental. It reflects hard-earned lessons about bias, opacity, and over-automation.
The systems that failed did so because they forgot who was supposed to be accountable when something went wrong.
The American Medical Association’s Position
The American Medical Association has been notably careful—and notably consistent—in its framing. The AMA prefers the term “augmented intelligence,” emphasizing that AI should enhance physician decision-making, not replace it.
AMA policy stresses physician oversight and accountability. It demands transparency and explainability of AI systems. It insists on protection against bias and inequitable outcomes. It requires clear liability frameworks. And it calls for physician involvement in design and deployment.
In other words, the AMA recognizes that the question is not whether AI will be used in medicine—it already is—but who governs it, and to what ends.
This position reflects something deeper than professional protectionism. It reflects an understanding that medicine involves judgments that cannot be reduced to optimization. When a physician decides how to deliver bad news, when to pursue aggressive treatment versus comfort care, how to balance statistical probability against an individual patient’s values—these are not computational problems. They are human ones.
AI can inform these decisions. It cannot make them. The moment it appears to make them, accountability has vanished.
What Comes Next
In the next five to ten years, several developments are likely. Continuous risk monitoring for chronic disease will use AI-driven pattern detection to identify problems before they become crises. Earlier, more precise diagnoses will emerge through multimodal data integration—combining imaging, labs, genomics, and clinical history in ways no human could synthesize alone. Clinical decision copilots will summarize evidence, guidelines, and patient-specific factors in real time. System-level learning will mean that every patient encounter improves care for the next.
The danger is not that AI will become too capable.
The danger is that medicine will outsource judgment rather than scale it responsibly.
The organizations that succeed will be those that maintain discipline precisely when the pressure to automate is greatest. They will ask not just what AI can do, but where humans must remain responsible—even when machines are faster.
The Real Revolution Is Cultural
AI does not introduce a new ethical problem to medicine. It intensifies old ones: judgment, accountability, humility, and restraint.
- At its best, AI allows medicine to behave more like its best institutions always have—learning collectively, remembering systematically, and acting thoughtfully.
- At its worst, it risks becoming an authority without responsibility.
The revolution, then, is not technological.
It is professional.
Medicine has already begun this transition. The outcome will depend not on how powerful the tools become—but on whether physicians remain the ones holding the knife, the chart, and the final word.
The hospitals that thrive will be those that treat AI as a tool in service of human judgment, not a replacement for it. The physicians who thrive will be those who master the technology without being mastered by it. The patients who benefit will be those whose caregivers remember that medicine is, in the end, one human being helping another.
That has not changed.
It must not change.
Key Takeaways
- AI in medicine is already widespread and clinically meaningful. This is not a future development. It is current reality—from IDx-DR’s autonomous diabetic retinopathy diagnosis to Johns Hopkins’ TREWS sepsis prediction system to Nuance’s ambient documentation tools.
- The most successful AI systems augment rather than replace physician judgment. They work best when humans remain clearly in charge—setting constraints, validating outputs, and retaining accountability.
- Historical models demonstrate this approach works. The Mayo Clinic built extraordinary capabilities through disciplined data governance long before AI existed. The principle remains: physicians must control the knowledge, not serve it.
- The AMA emphasizes governance, accountability, and oversight. Professional medical organizations recognize that the question is not whether AI will be used, but who governs it and to what ends.
- The Eight Critical Skills remain essential—and AI intensifies their importance. Communication, analysis, and continued education become more demanding, not less, when AI enters clinical practice.
- The future of AI in medicine depends more on culture than code. Organizations that maintain discipline around human accountability will succeed. Those that quietly outsource judgment will eventually face consequences.
What has your experience been with AI in healthcare—as a provider, administrator, or patient? Have you seen these accountability challenges in practice? I would welcome your perspective in the comments.
Copyright © 2026 by Charles Cranston Jett
All Rights Reserved
Charles Cranston Jett is the author of WANTED: Eight Critical Skills You Need To Succeed and writes about leadership, executive development, and the skills that drive career success at criticalskillsblog.com.
