Private insurers have long leaned on artificial intelligence to vet coverage requests. Now Medicare is preparing to follow suit, a shift that may reshape clinical judgment, patient access, and how public health insurance is administered.
The Private Sector as a Testbed
Over the past several years, many private insurers and their contractors have integrated AI and predictive algorithms into their prior authorization workflows. The intent is to speed decision-making and weed out waste or fraud, but critics say these systems often err on the side of denial. In some reported instances, automated denial rates were up to sixteen times higher than typical for certain services. Companies like EviCore, which reviews specialized procedures for insurers, have used AI models, with a tunable review threshold reportedly dubbed "the dial," to flag which cases get human review and which can be denied more automatically. This layering of machine and human judgment has already triggered lawsuits: Humana, for example, faces a class-action suit alleging it systematically used AI to deny rehabilitation care to seniors in Medicare Advantage plans.
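To make that triage mechanic concrete, here is a minimal, purely illustrative sketch of how a tunable review threshold could route authorization requests. Every name, score, and threshold below is a hypothetical assumption for illustration, not a description of EviCore's or any insurer's actual system.

```python
# Hypothetical sketch of a "dial"-style triage: route prior-authorization
# requests by a model's approval score. All names and thresholds are
# illustrative assumptions, not any real insurer's logic.

from dataclasses import dataclass

@dataclass
class AuthRequest:
    case_id: str
    approval_score: float  # assumed output of a predictive model, 0.0 to 1.0

def triage(request: AuthRequest, dial: float = 0.5) -> str:
    """Route a request to auto-approval, human review, or denial review.

    Lowering `dial` pushes more borderline cases toward denial review,
    which is the pattern critics say drives up automated denial rates.
    """
    if request.approval_score >= 0.9:
        return "auto-approve"
    elif request.approval_score >= dial:
        return "human review"
    else:
        return "flag for denial review"

# The same case can land in a different queue as the dial moves.
case = AuthRequest(case_id="A-100", approval_score=0.55)
print(triage(case, dial=0.5))  # human review
print(triage(case, dial=0.6))  # flag for denial review
```

The point of the sketch is that the clinical facts of the case never change; only the threshold does, which is why critics focus on who sets the dial and how.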
Meanwhile, the volume of these authorization decisions is massive. In 2023, Medicare Advantage plans made nearly 50 million prior authorization determinations. That scale provides fertile ground for automating parts of the process, but also raises concerns about how errors or biases might multiply.
Medicare’s AI Experiment in Prior Authorization
Traditional Medicare has historically relied less on prior authorization than private plans. But starting in January 2026, the Centers for Medicare & Medicaid Services will pilot a model called WISeR (Wasteful and Inappropriate Service Reduction) in six states: Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington. Under WISeR, AI systems will help determine whether some outpatient services are eligible for coverage, though final decisions will still rest with human clinicians.
The targeted services are those historically flagged for overuse or fraud: skin substitutes, nerve implants, knee arthroscopy, incontinence devices, certain spine and cervical fusion procedures, and steroid injections. The pilot is voluntary for providers, and emergency and inpatient services are excluded.
CMS frames the change as an effort to “expedite review” and protect taxpayers from unnecessary spending. But opponents worry this will erode one advantage of traditional Medicare: minimal bureaucratic barriers to care.
Implications for Patients and Providers
One major concern is delay. Prior authorization has long been a source of frustration: more than 90 percent of surveyed physicians say it delays care, and over 80 percent say patients abandon needed treatment because of the burden. When AI becomes embedded in that process, physicians fear denials will become harder to contest and harder to reverse.
Another risk is that AI systems may institutionalize bias or brittleness. AI is only as good as its training data, and models that learn from past insurer decisions may replicate historical patterns of undertreatment in certain populations. Trust is another barrier: research finds people are less comfortable when AI's role in a decision is overt, and clinicians may "negotiate" with machine recommendations rather than fully defer to them.
On the flip side, proponents argue AI can clear obvious, low-risk cases and free human reviewers to focus on edge cases and high-complexity ones. It may reduce administrative burden in provider offices and help contain runaway costs. Predictive analytics in other health domains have even shortened hospital stays by forecasting discharge needs earlier.
What’s at Stake for Medicare’s Identity
This pilot marks a potential turning point in the relationship between public and private healthcare. If Medicare begins to incorporate AI into coverage decisions at scale, it could narrow the distinction between traditional Medicare and Medicare Advantage, where utilization controls are already pervasive.
If the pilot delivers faster decisions, fewer appeals, and cost savings, the model may expand nationwide. But if it undermines access, provokes clinically harmful denials, or introduces opaque decision logic, backlash from lawmakers, medical associations, and patients is likely. The AMA, for instance, has warned that unregulated AI systems could override sound medical judgment.
Transparency will be critical. How will CMS monitor false negatives, track disparities in denial rates, and ensure clinicians retain final say? The pilot’s design and oversight will determine whether AI becomes a tool for efficiency or a new barrier to care.
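One way to picture that oversight is a simple audit over decision logs, checking whether denial rates diverge across patient subgroups. The sketch below is an assumption about what such monitoring could look like; the field names and the five-point gap threshold are hypothetical, not CMS methodology.

```python
# Hypothetical audit sketch: compute denial rates by subgroup from a
# decision log and flag large gaps. Field names and the gap threshold
# are illustrative assumptions, not CMS's actual monitoring design.

from collections import defaultdict

def denial_rates(decisions):
    """decisions: iterable of dicts with 'group' and 'denied' (bool) keys."""
    totals = defaultdict(int)
    denials = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        denials[d["group"]] += int(d["denied"])
    return {g: denials[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.05):
    """Flag subgroups whose denial rate exceeds the lowest rate by more than max_gap."""
    low = min(rates.values())
    return {g: r for g, r in rates.items() if r - low > max_gap}

log = [
    {"group": "rural", "denied": True},
    {"group": "rural", "denied": True},
    {"group": "rural", "denied": False},
    {"group": "urban", "denied": False},
    {"group": "urban", "denied": True},
    {"group": "urban", "denied": False},
]
rates = denial_rates(log)
print(rates)                    # e.g. {'rural': 0.67, 'urban': 0.33}
print(flag_disparities(rates))  # e.g. {'rural': 0.67}
```

Whether CMS publishes anything like these numbers, and at what granularity, is exactly the kind of transparency question the pilot will have to answer.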
In the coming years, the way Medicare incorporates AI may shape more than spending trends. It could reshape the power dynamics among clinicians, insurers, and patients, and force a deeper reckoning with how technology mediates medical judgment.