Effective AI in Healthcare RCM Requires a Human Touch

Artificial intelligence (AI) has become a fixture in healthcare revenue cycle management (RCM). Finance leaders are desperate for ways to relieve understaffed departments struggling under unprecedented volumes of third-party audit demands and rising denial rates, without sacrificing accuracy or precision.

At a time of acute RCM staffing shortages, AI provides a critical productivity boost. According to the 2023 Benchmark Report, compliance and revenue integrity departments that invested in data, AI, and technology platforms were able to reduce their team sizes by a third while performing 10 percent more audit activities than in 2022.

This is where AI shines. Arguably its greatest strength is uncovering outliers, the needles in the haystack, across millions of data points.

Unfulfilled Promises

While AI has enabled the automation of many RCM tasks, the promise of fully autonomous systems remains unfulfilled. This is partially due to software vendors’ propensity to focus on technology without first taking the time to fully understand the targeted workflows and the human touchpoints within them, a practice that leads to ineffective AI integration and poor end-user adoption.

For AI to function appropriately in a complex RCM environment, humans must be in the loop. Human intervention helps overcome deficits in accuracy and precision – the toughest challenges with autonomous AI – and enhances outcomes, helping avoid the repercussions of poorly designed solutions.

Financial impacts are the most obvious repercussion for healthcare organizations. A poorly trained AI tool conducting prospective claim audits might miss instances of undercoding, which means missed revenue opportunities. For one MDaudit customer, an incorrect rule within their “autonomous” coding system was improperly coding drug units administered, resulting in $25 million in lost revenue. The error would never have been caught and corrected without a human in the loop uncovering the flaw.

AI can also fall short in the opposite direction, producing overcoded results and false positives, an area under particular scrutiny given the government’s mission of fighting fraud, waste, and abuse in the healthcare system.

Even individual providers can be impacted by poorly designed AI, for example, if the tool has not been properly trained on the concept of “at-risk providers” in the revenue cycle sense. Physicians included in sweeps for at-risk providers with high denial rates could find themselves unfairly targeted for additional scrutiny and training – wasting time that should be spent seeing patients, slowing cash flow as claims are held for prospective review, and potentially harming their reputations by saddling them with a “problematic” label.
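
To make the risk concrete, here is a minimal sketch of how a naive “at-risk provider” sweep keyed only to denial rate can unfairly flag a physician with a handful of claims, while a volume-aware version does not. The provider names, claim counts, and thresholds are invented for illustration and do not reflect any vendor’s actual logic.

```python
from collections import defaultdict

# Hypothetical claim records: (provider_id, was_denied). All names and numbers are illustrative.
claims = (
    [("dr_smith", True)] * 2 + [("dr_smith", False)] * 1 +    # 3 claims, 2 denied (67%)
    [("dr_jones", True)] * 40 + [("dr_jones", False)] * 360   # 400 claims, 40 denied (10%)
)

def at_risk_sweep(claims, rate_threshold=0.25, min_claims=0):
    """Return providers whose denial rate exceeds the threshold.

    With min_claims=0 this is the naive sweep: a physician with only a few claims
    and one or two denials gets swept up alongside genuine outliers. Requiring a
    minimum claim volume keeps that low-volume noise out of the results.
    """
    totals, denials = defaultdict(int), defaultdict(int)
    for provider, denied in claims:
        totals[provider] += 1
        denials[provider] += int(denied)
    return {p for p in totals
            if totals[p] >= min_claims and denials[p] / totals[p] > rate_threshold}

print(at_risk_sweep(claims))                  # {'dr_smith'} -- unfairly flagged on just 3 claims
print(at_risk_sweep(claims, min_claims=100))  # set() -- no provider is a meaningful outlier
```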

Retaining Humans in the Loop

Again, keeping humans in the loop is the best strategy for preventing these types of negative outcomes. In fact, there are three specific areas of AI that will always require human involvement to achieve optimal outcomes.

Building a strong data foundation

A robust data foundation is crucial, because the underlying data model, including proper metadata, data quality, and governance, is key to enabling AI to function at peak efficiency. This requires developers to get into the trenches with billing compliance, coding, and revenue cycle teams to fully understand their workflows and the data they need to perform their duties.

Effective anomaly detection requires billing, denial, and other claims data, as well as an understanding of the complex interplay between providers, coders, billers, payors, and others. This ensures the technology can continuously assess risks in real time and deliver to users the information needed to focus their actions and activities in ways that drive measurable outcomes. If the data foundation is skipped in favor of accelerating deployment of AI models and other shiny tools, the result will be hallucinations and false positives that create noise and hinder adoption.
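
As a simple illustration of the kind of anomaly detection a solid data foundation enables, the sketch below flags claim lines whose billed units are statistical outliers for their procedure code, the sort of pattern behind the drug-unit error described above. The codes, unit counts, and z-score cutoff are hypothetical, and a production system would draw on far richer billing, denial, and payer context.

```python
import statistics

# Hypothetical billed lines grouped by procedure code: {code: [units billed per claim line]}.
# Codes and values are illustrative only, not real claims data.
billed_units = {
    "J9999": [2, 2, 3, 2, 2, 2, 200, 3, 2, 2],   # one line billed 200 units of a drug
    "99213": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
}

def unit_outliers(billed_units, z_cutoff=2.5):
    """Flag claim lines whose billed units sit far outside the norm for that code."""
    flags = []
    for code, units in billed_units.items():
        if len(units) < 3:
            continue  # not enough history to judge
        mean = statistics.mean(units)
        stdev = statistics.pstdev(units)
        if stdev == 0:
            continue  # every line identical; nothing to flag
        for line_index, units_billed in enumerate(units):
            z = (units_billed - mean) / stdev
            if abs(z) > z_cutoff:
                flags.append((code, line_index, units_billed, round(z, 1)))
    return flags

# Each flagged line is routed to a human auditor for review rather than auto-corrected.
print(unit_outliers(billed_units))   # [('J9999', 6, 200, 3.0)]
```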

Continuous training

AI-enabled RCM tools require ongoing education in the same way professionals do, to understand the latest regulations, trends, and priorities in an evolving healthcare RCM environment. Reinforcement learning allows AI to expand its knowledge base and increase its accuracy. User input is critical to refinement and updates, to ensure AI tools are meeting current and future needs.

AI should be trainable in real time. End users should be able to support continuous learning by immediately providing input and feedback on the results of information searches and/or analysis. Users should also be able to mark data as unsafe, when warranted, to prevent its amplification at scale; for example, a result that attributes financial loss or compliance risk to a specific entity or individual without properly explaining why should be flagged as unsafe.
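
A minimal sketch of such a feedback loop might look like the following. The FlaggedFinding and FeedbackStore structures are assumptions invented for illustration, not any product’s API; the point is that confirmations, rejections, and “unsafe” flags are captured at the moment of review, and only reviewed, safe findings flow into the next training cycle.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlaggedFinding:
    """A single AI-generated finding awaiting user feedback (hypothetical structure)."""
    finding_id: str
    description: str
    confirmed: Optional[bool] = None   # True/False once a user weighs in
    unsafe: bool = False               # user marked the underlying result as unsafe

@dataclass
class FeedbackStore:
    findings: dict = field(default_factory=dict)

    def record(self, finding: FlaggedFinding):
        self.findings[finding.finding_id] = finding

    def confirm(self, finding_id: str, is_valid: bool):
        """End user confirms or rejects a finding immediately after reviewing it."""
        self.findings[finding_id].confirmed = is_valid

    def mark_unsafe(self, finding_id: str):
        """User flags a result that must not be amplified, e.g. one that attributes
        financial loss to an individual without explaining why."""
        self.findings[finding_id].unsafe = True

    def training_batch(self):
        """Only reviewed, safe findings feed the next training cycle."""
        return [f for f in self.findings.values()
                if f.confirmed is not None and not f.unsafe]

store = FeedbackStore()
store.record(FlaggedFinding("f-001", "Possible undercoding of drug units"))
store.record(FlaggedFinding("f-002", "Loss attributed to a coder with no rationale"))
store.confirm("f-001", is_valid=True)
store.mark_unsafe("f-002")
print([f.finding_id for f in store.training_batch()])   # ['f-001']
```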

Appropriate governance

Human validation is required to ensure that AI’s output is safe. For example, for autonomous coding to work properly, a coding professional must ensure AI has properly “learned” how to apply updated code sets or deal with new regulatory requirements. Excluding humans from the governance loop leaves healthcare organizations wide open to revenue leakage, negative audit outcomes, reputational loss, and much more.
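
A governance gate of this kind can be as simple as the routing sketch below, in which AI coding suggestions tied to an outdated code set, or falling below a confidence threshold, are queued for a coder’s sign-off. The threshold, code-set labels, and field names are illustrative assumptions, not a description of any specific autonomous coding product.

```python
# Hypothetical human-validation gate for AI coding suggestions. The threshold and
# code-set labels are assumptions for illustration only.
REVIEW_THRESHOLD = 0.95  # suggestions below this model confidence always require review

def route_suggestion(claim_id, suggested_code, model_confidence, code_set_version,
                     current_code_set="FY2026"):
    """Decide whether an AI coding suggestion may proceed or must be human-reviewed."""
    if code_set_version != current_code_set:
        # The model has not demonstrably "learned" the updated code set; force review.
        return {"claim": claim_id, "code": suggested_code, "status": "needs_human_review",
                "reason": f"model trained on {code_set_version}, current is {current_code_set}"}
    if model_confidence < REVIEW_THRESHOLD:
        return {"claim": claim_id, "code": suggested_code, "status": "needs_human_review",
                "reason": f"confidence {model_confidence:.2f} below {REVIEW_THRESHOLD}"}
    # Even high-confidence output should be sampled for periodic audit, not trusted blindly.
    return {"claim": claim_id, "code": suggested_code, "status": "eligible_for_spot_audit"}

print(route_suggestion("C-1001", "J3490", 0.88, "FY2026"))   # review: low confidence
print(route_suggestion("C-1002", "E11.9", 0.99, "FY2025"))   # review: stale code set
```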

Without question, AI can transform healthcare RCM. But doing so requires that healthcare organizations augment their technology investments with human expertise and workforce training to optimize accuracy, productivity, and business value.

Ritesh Ramesh, CEO

Ritesh Ramesh is CEO of MDaudit, a leading health IT company that harnesses its proven track record and the power of analytics to allow the nation’s premier healthcare organizations to mitigate compliance risk and retain revenue.
