Analysis: Alleged Bias Found in Extrapolation Audits Part II

The most recent case involved a Zone Program Integrity Contractor (ZPIC), which turned a $4,000 overpayment into a $3 million overpayment using extrapolation.

EDITOR’S NOTE: This is the second in a series of reports on alleged bias the author has uncovered in extrapolation audits.

I recently wrote an article opining (or maybe just whining) about how the use of extrapolation in billing and coding audits is biased against the provider. It’s not that extrapolation itself is bad; it’s the guidelines the government uses, found in the Program Integrity Manual (PIM), that are bad. Chapter 8 in particular covers the statistical methods used in an extrapolation, and at only 20 pages, it is far too short and incomplete to address what can be a very complex subject.

In that last article, I addressed specific examples of how the PIM conflicts directly with many standards of statistical practice. I talked about the use of paid amounts as the variable of interest, in contrast with U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG) guidelines. I wrote about how the PIM ignores the importance of sample size, and how there are no substantive rules regarding the unit to be audited; that choice is left entirely to the auditor and does not have to make any statistical sense. I wrote about the importance of maintaining independence of the units to be audited, and how breaking this rule violates the assumptions of the central limit theorem, without which extrapolation doesn’t work. In essence, the crux of Chapter 8 tells auditors that not only do they not have to do the best job at statistical analysis, they don’t even have to do a good job – as long as they follow the guidelines.

In my most recent case, a practice was audited by a Zone Program Integrity Contractor (ZPIC), which turned a $4,000 overpayment into a $3 million overpayment using extrapolation. The key issue here was that, in contrast with the most basic tenets of statistical standards, the ZPIC did not properly create a probability sample, meaning that it was not a statistically valid random sample and should not have been used for extrapolation. In Chapter 8, section 8.4.4.1.1, it states:
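To see how a modest sample finding can balloon into a multimillion-dollar demand, here is a minimal sketch of mean-per-unit extrapolation, the simplest projection method. All figures below (sample size, universe size) are invented for illustration and are not taken from the case described above.

```python
# Hypothetical mean-per-unit extrapolation -- all numbers are invented
# for illustration and are not from the actual case.
sample_size = 75              # claims actually reviewed
sample_overpayment = 4_000.0  # total overpayment found in those claims
universe_size = 56_250        # claims in the sampling frame (assumed)

# Average overpayment per sampled claim, projected across the universe.
mean_overpayment = sample_overpayment / sample_size
extrapolated = mean_overpayment * universe_size

print(f"Mean overpayment per claim: ${mean_overpayment:,.2f}")
print(f"Extrapolated overpayment:   ${extrapolated:,.2f}")  # $3,000,000.00
```

A finding of roughly $53 per claim, multiplied across a frame of tens of thousands of claims, is all it takes – which is why the validity of the sample behind that per-claim average matters so much.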

“Simple random sampling involves using a random selection method to draw a fixed number of sampling units from the frame without replacement, i.e., not allowing the same sampling unit to be selected more than once. The random selection method must ensure that, given the desired sample size, each distinguishable set of sampling units has the same probability of selection as any other set – thus the method is a case of ‘equal probability sampling.’”

In essence, an auditor will base its entire defense of an extrapolation on the claim that it drew a random sample. Here, in addition to creating a random sample, the auditor wanted to create a set of “spares”: a secondary set of sampling units that could be used if one of the primary sampling units proved invalid. To create a random sample, statisticians use software designed for this purpose, such as RAT-STATS, SAS, or Minitab, which involves the use of a “seed value.” The seed value is the number that determines the starting point of a sequence of pseudorandom numbers. If the statistician doesn’t specify a seed value, the program will often default to the system clock for that value. In general, given the seed value and the specified application, I should be able to recreate the exact sample from the frame from which it was drawn.
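This reproducibility is easy to demonstrate. The sketch below uses Python’s `random` module as a stand-in for RAT-STATS or SAS, with an invented frame of claim IDs: the same seed applied to the same frame reproduces the identical sample.

```python
import random

# Hypothetical frame of 1,000 claim IDs (invented for illustration).
frame = list(range(1, 1001))

random.seed(12345)                    # fix the starting point of the sequence
sample_a = random.sample(frame, 75)   # draw 75 units without replacement

random.seed(12345)                    # same seed, same frame...
sample_b = random.sample(frame, 75)

assert sample_a == sample_b           # ...reproduces the identical sample
```

This is exactly why auditors must disclose the seed value: anyone with the frame, the software, and the seed can verify that the sample was drawn as claimed.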

For each random pull, the seed values should be different, and many statisticians believe the values should be very different. If I were to use the same seed value, it would yield the same sequence of random numbers. In this case, the auditor wanted a sample size of 75 with 10 spares, so it pulled an initial random sample of 85 units using some seed value. Then it pulled the 10 spares from that sample using the same seed value. While this may not seem like a big deal (and perhaps it did not significantly alter the representativeness of the sample), it meant the sample was no longer a probability sample, which is required under the PIM. Here, without question, the sample did not satisfy section 8.4.2, and therefore, by its own standards, the auditor should not have been permitted to proceed to extrapolation. But it did. The crux of its defense was that none of the 10 spares were used in the audit, and therefore they should be treated as though they did not exist. In many cases, the auditor relies on HCFA Ruling 86-1, which says that the provider has to show that the statistical sample and/or extrapolated overpayment is invalid – meaning the contractor has no responsibility to show that its method is, in fact, valid.
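The flaw in the seed reuse described above can be sketched the same way. Assuming the mechanics as described (and again using an invented frame), pulling the spares from the primary sample with the same seed makes the “spares” a fixed function of the primary draw; no new randomness enters the selection, so the draw is no longer an independent random selection.

```python
import random

frame = list(range(1, 1001))          # invented frame of claim IDs

random.seed(777)
primary = random.sample(frame, 85)    # the initial pull of 85 units

# Reusing the same seed to pull the spares from that sample means the
# "spares" are completely determined by the primary draw -- repeating
# the pull with that seed can never produce a different result.
random.seed(777)
spares = random.sample(primary, 10)

random.seed(777)
spares_again = random.sample(primary, 10)
assert spares == spares_again         # identical every time the seed repeats
```

With a fresh, independent seed for the second pull, each distinguishable set of spares would have had an equal chance of selection; with the reused seed, only one outcome was ever possible.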

Another big issue has to do with non-paid (or zero-paid) claims. PIM section 8.4.3.2.1 roughly states that the universe of claims from which a sample is selected shall consist of fully and partially adjudicated claims, meaning that zero-paid claims are not included in the sample. Yet this often seems to be in conflict with PIM Chapter 3, section 3.6.1, which says that the “MACs (Medicare Administrative Contractors) and ZPICs shall net out the dollar amount of services underpaid during the cost accounting period,” meaning that amounts owed to providers are balanced against amounts owed from providers.

I can’t think of a better way to identify underpayments than to look at non-paid claims. In section 8.4.3.2.2, the PIM states that “in principle, any type of sampling unit is permissible as long as the total aggregate of such units covers the population of potential mis-paid amounts.” So, if a claim has not been paid but should have been paid, then that would classify it as a “mis-paid amount.” In defense, the auditor will inevitably state that it has no obligation or responsibility under the PIM guidelines to review zero-paid claims. Well, just because you aren’t “obligated” to do so doesn’t make it right.

Perhaps my biggest pet peeve has to do with the requirements regarding statistical sampling and extrapolation in the first place. Let’s review section 8.4.1.2 of the PIM:

“Statistical sampling is used to calculate and project (i.e., extrapolate) the amount of overpayment(s) made on claims. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) mandates that before using extrapolation to determine overpayment amounts to be recovered by recoupment, offset or otherwise, there must be a determination of sustained or high level of payment error, or documentation that educational intervention has failed to correct the payment error. By law, the determination that a sustained or high level of payment error exists is not subject to administrative or judicial review.”

I want you to look at the first sentence. What does this mean? Well, can someone help me find a legal definition of “sustained”? Does that mean that the problem has gone on for a week? A month? A year? Longer? And how about a “high level of payment error”? Is that an error rate of 50 percent? Maybe 10 percent? Five percent? One percent?

The fact is, there is no universally accepted definition of these terms. I once participated in an audit for which the error rate was 1.6 percent, yet the auditor chose to pursue an extrapolation even in the absence of any pattern of improper coding. Their reasoning? Look at the last sentence. This means that the provider is not permitted to use as its defense a challenge to the concept (because that is all it is) of a “sustained or high level of error.” These concepts cannot be challenged at any point along the five-step appeals process, which, in my opinion, directly conflicts with HCFA Ruling 86-1; that ruling, while placing the burden of validation on the provider, at least gives the provider a chance to object.

Once again, these are but a few examples of the egregiousness of the statistical and extrapolation guidelines contained within Chapter 8 of the PIM. Again, Chapter 8 is only 20 pages long: far from sufficient to describe the nuances of statistical sampling, and even further from adequate to address the statistical complexity of extrapolation. Yet it seems that’s all we have. The appeal process – at least the first two levels – is all but useless in challenging statistical issues.

Either there isn’t anyone who understands the process, or, as I suspect, they are driven by some financial incentive to simply rubber-stamp the auditor’s findings. I am the first to admit that I am a bit naïve and often miss subtle cues, so maybe I’m missing the big picture here. What is the purpose of conducting an audit in the first place? For one, it is mandated by law; the Centers for Medicare & Medicaid Services (CMS) is required to ensure that only claims that are legitimate – that is, they meet the requirements for payment – are paid. And hey, I’m all in on that. I’m a taxpayer, too, you know, and I don’t agree with or approve of fraudulent or abusive billing practices. But that’s not what’s happening here.

It is my opinion (so you are welcome to disagree) that the purpose of these audits is to get to the facts about potential overpayment and ensure that the provider, as well as the taxpayer, is treated fairly. The latter part, to me, is of paramount importance. The guidelines should be written so that they comply with accepted standards of statistical practice, and based on my experience, they simply do not. As such, they do not provide for an environment wherein the provider is treated fairly. Rather, the guidelines appear to have been written for a different purpose: to allow HHS to recoup as much money as possible from healthcare providers with wanton disregard for due process and statistical validity.

The guidelines should be designed to ensure protection for both parties, but instead they only ensure protection for the auditors. This inherently describes a model that is biased, unfair, and ultimately antithetical to the government’s moral and ethical obligations to healthcare providers.

And that’s the world according to Frank.

 

Program Note:

Listen to Frank Cohen report on this subject on Monitor Monday, June 11, 10-10:30 a.m. ET.

 


Frank Cohen, MPA

Frank D. Cohen is Senior Director of Analytics and Business Intelligence at VMG Health, LLC, and is Chief Statistician for Advanced Healthcare Analytics. He has served as a testifying expert witness in more than 300 healthcare compliance litigation matters spanning nearly five decades in computational statistics and predictive analytics.

