Analysis: Alleged Bias Found in Extrapolation Audits Part II

The most recent case involved a Zone Program Integrity Contractor (ZPIC), which turned a $4,000 overpayment into a $3 million overpayment using extrapolation.

EDITOR’S NOTE: This is the second in a series of reports on alleged bias the author has uncovered in extrapolation audits.

I recently wrote an article opining (or maybe just whining) about how the use of extrapolation in billing and coding audits is biased against the provider. It’s not that extrapolation itself is bad; it’s the government’s guidelines, found in the Program Integrity Manual (PIM), that are bad. Chapter 8 in particular covers the statistical methods used in an extrapolation, and at only 20 pages, it is far too short and incomplete for what can be such a complex subject.

In that last article, I addressed some specific examples that identified how the PIM stands in stark contrast and direct conflict with many standards of statistical practice. I talked about the use of paid amounts as the variable of interest, in contrast with U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG) guidelines. I wrote about how the PIM ignores the importance of sample size and how there are no substantive rules regarding the unit to be audited; that is completely up to the auditor and does not have to make any statistical sense. I wrote about the importance of maintaining independence of the units to be audited, and how breaking this rule violates the central limit theorem, without which extrapolation doesn’t work. In essence, the crux of Chapter 8 tells auditors that not only do they not have to do the best job at statistical analysis, but they don’t even have to do a good job – as long as they follow the guidelines.
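Before going further, it helps to see how little machinery extrapolation actually involves. In its simplest form, it takes the mean overpayment found in the sample and scales it to the size of the universe. Here is a minimal Python sketch with purely hypothetical figures (the universe size is invented so the arithmetic lands on round numbers, and real extrapolations also involve confidence-interval adjustments that this sketch omits):

```python
# Extrapolation at its simplest: the mean overpayment observed in the
# sample, scaled to the number of claims in the universe (sampling frame).
# All figures below are hypothetical, chosen only to show the mechanics.
sample_size = 75                 # claims actually reviewed
sample_overpayment = 4_000.00    # total overpayment found in those claims
universe_size = 56_250           # claims in the frame (assumed value)

mean_overpayment = sample_overpayment / sample_size   # about $53.33 per claim
point_estimate = mean_overpayment * universe_size     # scaled to the frame

print(f"${round(point_estimate):,}")  # $3,000,000
```

This is why the validity of the sample matters so much: every statistical defect in a 75-claim review is multiplied across the entire frame.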

In my most recent case, a practice was audited by a Zone Program Integrity Contractor (ZPIC), which turned a $4,000 overpayment into a $3 million overpayment using extrapolation. The key issue was that, in contrast with the most basic tenets of statistical standards, the ZPIC did not properly create a probability sample, meaning that it was not a statistically valid random sample and should not have been used for extrapolation. Chapter 8, section 8.4.2, states:

“Simple random sampling involves using a random selection method to draw a fixed number of sampling units from the frame without replacement, i.e., not allowing the same sampling unit to be selected more than once. The random selection method must ensure that, given the desired sample size, each distinguishable set of sampling units has the same probability of selection as any other set – thus the method is a case of ‘equal probability sampling.’”

In essence, an auditor will base its entire defense of an extrapolation on the idea that it drew a random sample. Here, in addition to creating a random sample, the auditor wanted to create a set of “spares,” or a secondary set of sampling units that could be used if one of the primary sampling units proved invalid. To create a random sample, statisticians use a software application designed for the purpose, such as RAT-STATS, SAS, or Minitab, which involves the use of a “seed value.” The seed value is the number that serves as the starting point in a sequence of pseudorandom numbers. If the statistician doesn’t specify a seed value, the program often will default to the system clock for that value. In general, given the seed value and the specified application, I should be able to recreate the exact sample from the frame from which it was drawn.
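To illustrate that point, here is a minimal sketch using Python’s random module as a stand-in for RAT-STATS or SAS (the frame of claim IDs below is hypothetical); the pseudorandom generators differ across tools, but the reproducibility principle is the same:

```python
import random

# A hypothetical frame of 2,000 claim IDs (real frames come from the
# contractor's claims data; these are stand-in values).
frame = list(range(1, 2001))

# With an explicit seed, the same application reproduces the same sample:
# this is what lets a reviewer verify the auditor's pull.
sample_a = random.Random(12345).sample(frame, 75)
sample_b = random.Random(12345).sample(frame, 75)
assert sample_a == sample_b    # same seed, same frame -> same 75 units

# A different seed starts a different pseudorandom sequence entirely.
sample_c = random.Random(67890).sample(frame, 75)
assert sample_a != sample_c
```

Because the pull is fully determined by the seed and the frame, an auditor’s sample can, in principle, be independently audited itself.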

For each random pull, the seed values should be different, and many statisticians believe that the values should be very different. If I were to use the same seed value, it would yield the same sequence of random numbers. In this case, the auditor wanted a sample size of 75 with 10 spares, so it pulled an initial random sample of 85 units using some seed value. Then, it pulled the 10 spares from the sample using the same seed value. Now, while this may not seem like a big deal (and perhaps it did not significantly alter the representativeness of the sample), it meant the sample was no longer a probability sample, which is required under the PIM. Here, without question, the sample did not satisfy section 8.4.2, and therefore, by its own standards, the auditor should not have been permitted to proceed to extrapolation. But it did. And the crux of its defense was that none of the 10 spares were used in the audit, and therefore they should be treated as though they did not even exist. In many cases, the auditor has relied upon HCFA Ruling 86-1, which says that the provider has to show that the statistical sample and/or extrapolated overpayment is invalid – meaning that the contractor has no responsibility to show that its method is, in fact, valid.
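The exact procedure the auditor followed isn’t spelled out, but a sketch shows why reusing a seed is a problem. Assuming both pulls are drawn from the same frame with Python’s random module (a stand-in for the auditor’s tool; the frame and seed below are hypothetical), the “spares” are not an independent random draw at all. They simply replay the start of the same pseudorandom sequence:

```python
import random

frame = list(range(1, 2001))   # hypothetical frame of claim IDs
seed = 12345                   # hypothetical seed value

# The primary pull: 85 units (75 primaries plus 10 intended spares).
primary = random.Random(seed).sample(frame, 85)

# "Spares" drawn with the SAME seed do not add new randomness: the
# generator replays the same sequence, so the 10 spares are exactly
# the first 10 units of the primary pull.
spares = random.Random(seed).sample(frame, 10)

assert spares == primary[:10]  # deterministic duplicates, not a new draw
```

Once any selection is fully determined by a prior selection, sets of units no longer have equal probability of selection, which is precisely what section 8.4.2 requires.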

Another big issue has to do with non-paid (or zero-paid) claims. The PIM, roughly stated, provides that the universe of claims from which a sample is selected shall consist of fully and partially adjudicated claims, meaning that zero-paid claims are not included in the sample. Yet this often seems to conflict with PIM Chapter 3, section 3.6.1, which says that the “MACs (Medicare Administrative Contractors) and ZPICs shall net out the dollar amount of services underpaid during the cost accounting period, meaning that amounts owed to providers are balanced against amounts owed from providers.”

I can’t think of a better way to identify underpayments than to look at non-paid claims. Elsewhere, the PIM states that “in principle, any type of sampling unit is permissible as long as the total aggregate of such units covers the population of potential mis-paid amounts.” So, if a claim has not been paid but should have been, that would classify it as a “mis-paid amount.” In defense, the auditor will inevitably state that it has no obligation or responsibility under the PIM guidelines to review zero-paid claims. Well, just because you aren’t “obligated” to do so doesn’t make it right.

Perhaps my biggest pet peeve has to do with the requirements for statistical sampling and extrapolation in the first place. Let’s review the relevant section of the PIM:

“Statistical sampling is used to calculate and project (i.e., extrapolate) the amount of overpayment(s) made on claims. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) mandates that before using extrapolation to determine overpayment amounts to be recovered by recoupment, offset or otherwise, there must be a determination of sustained or high level of payment error, or documentation that educational intervention has failed to correct the payment error. By law, the determination that a sustained or high level of payment error exists is not subject to administrative or judicial review.”

I want you to look at the second sentence. What does it mean? Can someone help me find a legal definition of “sustained”? Does that mean the problem has gone on for a week? A month? A year? Longer? And what about a “high level of payment error”? Is that an error rate of 50 percent? Maybe 10 percent? Five percent? One percent?

The fact is, there is no universally accepted definition of these terms. I once participated in an audit for which the error rate was 1.6 percent, yet the auditor chose to pursue an extrapolation even in the absence of any pattern of improper coding. Its reasoning? Look at the last sentence. This means that the provider is not permitted to use as its defense a challenge to the concept (because that is all it is) of a “sustained or high level of error.” These concepts cannot be challenged at any point along the five-step appeals process, which, in my opinion, is in direct conflict with HCFA Ruling 86-1 – which, while placing the burden of validation on the provider, at least gives the provider a chance to object.

Once again, these are but a few examples of the egregiousness of the statistical and extrapolation guidelines contained within Chapter 8 of the PIM. In total, again, Chapter 8 is only 20 pages long, far from sufficient to describe the nuances of statistical sampling, and even further from adequate to address the statistical complexity of extrapolation. Yet it seems that’s all we have. The appeals process – at least the first two levels – is all but useless for challenging statistical issues.

Either no one at those levels understands the process, or, as I suspect, reviewers are motivated by some financial incentive to simply rubber-stamp the auditor’s findings. I am the first to admit that I am a bit naïve and often miss subtle cues, so maybe I’m missing the big picture here. What is the purpose of conducting an audit in the first place? For one, it is mandated by law; the Centers for Medicare & Medicaid Services (CMS) is required to ensure that only claims that are legitimate – that is, they meet the requirements for payment – are paid. And hey, I’m all in on that. I’m a taxpayer, too, you know, and I don’t agree with or approve of fraudulent or abusive billing practices. But that’s not what’s happening here.

It is my opinion (so you are welcome to disagree) that the purpose of these audits is to get to the facts about potential overpayment and ensure that the provider, as well as the taxpayer, is treated fairly. The latter part, to me, is of the utmost importance. The guidelines should be written so that they comply with accepted standards of statistical practice, and based on my experience, they simply do not. As such, they do not provide for an environment wherein the provider is treated fairly. Rather, the guidelines appear to have been written for a different purpose: to allow HHS to recoup as much money as possible from healthcare providers with wanton disregard for due process and statistical validity.

The guidelines should be designed to ensure protection for both parties, but instead they only ensure protection for the auditors. This inherently describes a model that is biased, unfair, and ultimately antithetical to the government’s moral and ethical obligations to healthcare providers.

And that’s the world according to Frank.


Program Note:

Listen to Frank Cohen report on this subject on Monitor Monday, June 11, 10-10:30 a.m. ET.



Frank Cohen

Frank Cohen is the director of analytics and business intelligence for DoctorsManagement, a Knoxville, Tenn.-based consulting firm. He specializes in data mining, applied statistics, practice analytics, decision support, and process improvement. He is a member of the RACmonitor editorial board and a popular contributor on Monitor Monday.
