When Extrapolations Were Being Vacated

A close look at three extrapolation case histories.

When the COVID-19 pandemic first hit in 2020, many folks assumed that it would not be an issue for more than a few months. In March of that year, governments began imposing strict lockdowns to try to control the spread of the virus. 

Howard Markel, MD, Ph.D., a University of Michigan expert on pandemics, said that “an outbreak anywhere can go everywhere, and right now, we all need to pitch in to try to prevent cases, both within ourselves and in our communities.” 

The strategy was called “flattening the curve,” a term already very familiar to public health officials, but new to most Americans. Approximately two years later, things had not changed very much. Businesses had gone broke, our educational system was in tatters, and life as we knew it was pretty much over for the foreseeable future. And our healthcare system was far from exempt from the noise and confusion that the pandemic had caused. 

Telehealth quickly went from a concept to the predominant method for physician/patient encounters. Elective surgeries and non-emergent procedures were postponed, and researchers are still trying to figure out just how many folks died because they stopped getting screenings and treatment for chronic diseases such as diabetes and COPD. 

If there was any silver lining for the healthcare system, it was that recoupment and recovery audits had been placed on hold. Physicians and hospitals were left to provide care without the overbearing burden of what many of us have come to know as unreasonable interference in the practice of medicine. 

Oversight? Good. Overzealous audits? Bad. But the hiatus didn’t last long, and beginning in August 2021, we started to see government contractors coming out of the woodwork to make up for lost time (and, for them, lost recoupments).

Amid all of this, the U.S. Department of Health and Human Services (HHS) was trying to clear its backlog of Medicare administrative appeals and administrative law judge (ALJ) hearings. What is supposed to be a 90-day turnaround had become a wait of three years or more for tens of thousands of appellants. 

On Nov. 1, 2018, the U.S. District Court for the District of Columbia issued a mandamus order requiring HHS to clear the backlog by the end of 2022. At that time, the Office of Medicare Hearings and Appeals (OMHA) had 426,594 appeals pending, with some providers waiting up to five years to get their hearings scheduled. Again, a far cry from the 90-day statutory requirement. 

For medical providers, it was a perfect storm that promised to make practicing medicine as difficult (and possibly, as unprofitable) as ever. We saw an uptick in audits, and at the same time, an unbelievable increase in ALJ hearings. Practices were overwhelmed, and judges were under enormous pressure to burn through these hearings to meet the 2022 requirement. As of the writing of this article, I have participated as a testifying statistical expert in more hearings so far this year than in all other years combined. And there is no end in sight. 

In many of these hearings, the judges have been quite clear that they don’t have time to play around – the facts and just the facts, please. In the last dozen or so hearings, the judges instructed counsel that they had a copy of my credentials, so it would not be necessary to read my qualifications into the record. They were asking legal counsel to just make a motion to have me accepted as a statistical expert and move on. 

The judges were also very clear that they either had access to or had read my report, and would not appreciate spending hours going over it paragraph by paragraph (my reports can be quite lengthy). Instead, I would prepare a testimonial summary that took no more than 15 minutes to present and covered only the top three statistical issues. In some cases, I would opine on matters that weren’t covered in my report (issues that I discovered later), and the judges were very appreciative of this process. Happy judge, happy appellant.

And it seems to have worked, because our win ratio this year has been considerably higher than in prior years: well over half of the extrapolations have been thrown out, meaning that the overpayment amounts were limited to the face value of the actual audited overpayments (after final adjudication, of course). This was a bold move by some of these judges, because rumor has it that the Centers for Medicare & Medicaid Services (CMS) has encouraged them to hold fast on the extrapolations, and the Medicare Administrative Contractors (MACs) have done the same when these decisions have been appealed by the contractor. Nonetheless, a win is a win, and a win for me is a win for my client. Case solved. Crisis averted!

What follows is a summary of three case studies that I thought were interesting; they exemplify different reasons that the extrapolations were vacated by the court.

Case Study 1: Long-Term Care Facility

This case was initiated as a Zone Program Integrity Contractor (ZPIC) audit focused on a set of evaluation and management (E&M) codes billed to and paid by Medicare over a four-year period. As you would expect, the ZPIC auditor alleged that some number of these should not have been paid, for any number of reasons, such as lack of sufficient documentation, medical necessity, etc. The ZPIC identified a sample of 147 claims but was only able to obtain documentation to support 80 of those. Sixty-seven charts were MIA. The audit resulted in a face-value overpayment calculation of $6,080. When extrapolated to the four years of claims within the audit period, the total overpayment demand was just over $4.2 million.
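
To see how a face-value finding in the low thousands can balloon into a multimillion-dollar demand, here is a minimal sketch of a simple mean-per-unit extrapolation. Every figure in it is hypothetical (it is not the ZPIC’s actual frame, sample, or methodology), and the demand is taken at the lower limit of a one-sided 90-percent confidence interval, the convention contractors generally follow.

```python
# Minimal mean-per-unit extrapolation sketch. All figures are hypothetical;
# this is not the contractor's actual frame, sample, or methodology.
import math
from statistics import mean, stdev

from scipy import stats

# Hypothetical per-claim overpayments found in a sample of 150 claims (dollars).
sample_overpayments = [0, 0, 42.10, 0, 88.75, 0, 130.00, 55.30, 0, 61.20] * 15
universe_size = 100_000  # hypothetical number of claims in the frame

n = len(sample_overpayments)
xbar = mean(sample_overpayments)                # mean overpayment per sampled claim
se = stdev(sample_overpayments) / math.sqrt(n)  # standard error (finite population correction omitted)

point_estimate = universe_size * xbar           # point estimate of the total overpayment

# Contractors typically demand the lower limit of a one-sided 90-percent
# confidence interval, which concedes some sampling error to the provider.
t_crit = stats.t.ppf(0.90, df=n - 1)
demand = universe_size * (xbar - t_crit * se)

print(f"Mean overpayment per sampled claim:    ${xbar:,.2f}")
print(f"Point estimate of total overpayment:   ${point_estimate:,.0f}")
print(f"Extrapolated demand (lower 90% bound): ${demand:,.0f}")
```

Even a modest average overpayment per sampled claim, multiplied across a large universe, produces a demand in the millions, which is why the validity of the sample matters so much.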

After I analyzed the data, I concluded that, for several reasons, the statistical process did not follow what I would consider standards of statistical practice (granted, no formal standard exists), nor did it comport with the basic guidelines of Chapter 8 of the Medicare Program Integrity Manual (MPIM), particularly Section 8.4.2, which in general requires that any sample meet the criteria for a probability sample. 

The biggest problem was the 67 missing charts. The government treated each one as a 100-percent error, meaning that it assessed the overpayment at the full paid amount. But consider why those charts were missing: they had been sequestered by another government agency as part of a separate and unrelated investigation that had nothing to do with this provider. As such, at the very least, these claims should have been treated as “missing data” and not “missing documentation,” since their absence was outside the control of the client. 

The average overpayment amount for the 80 claims that did have documentation should have been assigned to those missing claims, which would have significantly reduced the extrapolated amount. Of greater concern, though, was this: if the missing claims should have been excluded from the sample frame altogether, then the basic tenet of randomness no longer applied. In essence, each of the sampling units no longer had an equal and non-zero opportunity to be selected. 
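
To make the point concrete, here is a small sketch, using entirely hypothetical dollar amounts rather than the case’s actual claim data, of how the two treatments of the 67 missing charts change the sample’s mean overpayment, which in turn drives the extrapolation.

```python
# Effect of how the 67 missing charts are treated on the sample's mean
# overpayment. All dollar amounts below are hypothetical.
from statistics import mean

documented_overpayments = [76.0] * 80  # hypothetical: 80 documented claims
missing_paid_amounts = [250.0] * 67    # hypothetical full paid amounts of the 67 missing charts

# Contractor's approach: each missing chart is a 100-percent error,
# so its entire paid amount counts as an overpayment.
as_full_errors = documented_overpayments + missing_paid_amounts
print(f"Mean overpayment, missing charts as 100% errors: ${mean(as_full_errors):.2f}")

# Alternative argued here: treat them as missing data and assign the
# average overpayment observed among the documented claims.
as_missing_data = documented_overpayments + [mean(documented_overpayments)] * 67
print(f"Mean overpayment, missing charts imputed:        ${mean(as_missing_data):.2f}")
```

In this toy example, the 100-percent-error treatment roughly doubles the mean overpayment, and therefore the extrapolated total; and, as noted above, the stronger objection is that the sequestered claims arguably should not have been in the frame at all.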

In the end, the judge agreed with our findings, opinions, and conclusions, and here is the wording of the final decision:

“Ultimately, though, we find that the Appellant has demonstrated through several different arguments that the Medicare contractor that developed the statistical sample methodology relative to the Appellant’s claims failed to meet guidelines set forth in the Program Integrity Manual and failed to use a statistically valid sample. Consequently, there is no confidence that the overpayments determined can be appropriately extrapolated to the universe of claims.”

Case Study 2: Provider of Durable Medical Equipment (DME) Services

This audit was initiated by a Unified Program Integrity Contractor (UPIC) and included a sample of 290 claims from 85 beneficiaries over 12 months. The auditor alleged a face-value overpayment of just over $104,000. 

Based on the total number of claims in the universe, this was extrapolated to nearly $3 million. From my analysis and review of the data and documentation, my finding and subsequent opinion was that the contractor had introduced several fatal flaws into the statistical sampling and overpayment estimation process, as follows:

  1. The auditor failed to provide the requisite data, as required under Chapter 8 of the MPIM. For many cases in which I have been the expert, we have faced the same issue: the auditor does not provide everything that is required for me to be able to replicate their sample and results. Often, this issue is downplayed during the appeals process, but in this case, the judge took it seriously and seemed to agree that this was, in fact, a violation of the MPIM.
  2. The second issue is one that I have written about often: the variability inherent in the durable medical equipment (DME) area is simply too great to allow for extrapolation. For example, the audit may include a TENS unit, a back brace, a heating pad, and maybe a knee orthosis device. These are disparate items that have nothing to do with one another; the reasons for dispensing are different, the pricing is grossly different, and the medical necessity requirements are different. This was made apparent by the large number of high outliers, which in and of themselves bias any average overpayment amount to the right (a small numerical sketch of this effect follows the list). In fact, payments ranged from under a dollar to over $4,000. 
  3. Perhaps the most egregious of the fatal flaws was that the auditor included, in both the frame and the sample, claims that had been in a prior audit – and for many, repayment had already been made. The auditor, at the hearing, suggested that these just be removed from the sample results. But that is an unacceptable solution, as it renders the sample no longer random – and as such, no longer a probability sample, and therefore no longer appropriate for extrapolation under 8.4.2 of the MPIM.
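
Here is the numerical sketch promised in the second item above. The amounts are hypothetical, not the case’s actual claim data, but they show how a handful of high-dollar items pulls the mean overpayment far away from the typical claim.

```python
# Illustration of how a few high-dollar outliers pull the mean overpayment
# to the right. Amounts are hypothetical, not the case's actual claim data.
from statistics import mean, median

# Mostly small DME overpayments plus a handful of high-priced items.
overpayments = [12, 18, 25, 9, 30, 22, 15, 11, 27, 19] * 9 + [3800, 4100, 3950]

print(f"n = {len(overpayments)}")
print(f"Median overpayment: ${median(overpayments):,.2f}")
print(f"Mean overpayment:   ${mean(overpayments):,.2f}")
# The mean (which drives the extrapolation) is many times the median,
# because three outliers dominate 90 typical claims.
```

Because the extrapolation is built on the mean, a few outliers like these get projected across every claim in the universe, including the thousands of low-dollar claims that look nothing like them.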

In the introduction to the decision, the judge wrote, “I find the sampling methodology and overpayment calculation was invalid, and as such any actual overpayment determined to exist cannot be extrapolated to the universe of claims.” 

In the analysis section of the decision, the judge wrote, “after a complete review of the evidence, I find, the Appellant, through Mr. Cohen, presented credible arguments to support that, when taken as a whole, the sampling process and the resulting extrapolated overpayment lacks statistical integrity and fails to meet the requirements of the MPIM. Specifically, (the auditor) included duplicate claims contained in other statistical sample cases with the Appellant. Although (the auditor’s statistician) indicated that the duplicate claims should just be removed, he specifically testified during the hearing that you cannot just remove claims from a sample, as it would distort the integrity of the sample itself and make it impossible to replicate.”

Case Study 3: Physical Therapy Services

In this case, 100 claims from a universe of over 40,000 were pulled for the audit. From these 100 claims, the contractor determined that the face-value overpayment amount was around $8,500, or an average overpayment of $101 per claim. Based on the size of the universe, this translated to an extrapolated overpayment estimate of just over $4 million. 

This was an interesting case in that there were only two glaring issues: the sample size was too small, and the auditor violated the rules of independence, which would invalidate the use of extrapolation under what is known as the Central Limit Theorem (CLT). The CLT is foundational to virtually all inferential statistics, and it has three basic requirements (a formal statement follows the list below):

  1. The sample size is large enough such that the sample averages would be normally distributed;
  2. Each possible sampling unit is independent of any other possible sampling unit; and
  3. The sample size does not exceed 10 percent of the universe.
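
For reference, the classical version of the theorem, for independent, identically distributed observations with finite variance, can be stated as follows:

```latex
\text{If } X_1, \dots, X_n \text{ are i.i.d. with mean } \mu \text{ and variance } \sigma^2 < \infty, \text{ then}
\qquad
\frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \;\xrightarrow{\,d\,}\; N(0, 1)
\quad \text{as } n \to \infty .
```

The 10-percent condition in the third item is the familiar rule of thumb for treating draws made without replacement from a finite universe as approximately independent.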

Let’s look at the sample size first. There are lots of different methods, models, and formulas for determining a sample size. In this case, as in most cases of this type in which I have served as a statistical expert, the formula used to calculate the sample size (whether in RAT-STATS, SAS, SPSS, or another package) assumes that the data are normally distributed. That is rarely the case, because the data are always bounded on the left by zero. 
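
For context, a generic normal-theory sample-size formula of the kind such packages implement, for estimating a mean to within a margin of error E, looks like the following; this is a textbook form, not necessarily the exact formula this auditor used:

```latex
n_0 = \left( \frac{z_{1-\alpha/2}\,\sigma}{E} \right)^{2},
\qquad
n = \frac{n_0}{1 + \dfrac{n_0 - 1}{N}}
\quad \text{(finite-universe adjustment for a frame of size } N\text{)}.
```

Both the critical value and the margin-of-error logic assume that the sampling distribution of the mean is approximately normal, which is exactly what heavily right-skewed, zero-bounded payment data undermine at modest sample sizes.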

In essence, the least amount a provider can be paid is $0, but the greatest, while theoretically infinite, is some finite value that can be orders of magnitude more than the lowest paid amounts. This requires a different set of calculations, but most contractors don’t abide by that construct. To test this, I always perform a resampling test, which is kind of like a Monte Carlo simulation. 

In this case, I had the computer create 10,000 random samples of 100 claims each from the universe of claims. I took the average paid amount for each sample and plotted the results – and those sample averages were not normally distributed. As far as I’m concerned, there isn’t any need to go any further at that point. This violates the CLT, and therefore the sample cannot be used for extrapolation.
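
The resampling check described above can be sketched in a few lines. The universe below is synthetic (right-skewed and bounded at zero), since the actual claim data cannot be reproduced here, and the lognormal shape is simply an assumption chosen to mimic payment-like data.

```python
# Sketch of the resampling check described above: draw many samples of 100
# from the claim universe and test whether the sample means look normal.
# The universe below is synthetic; it stands in for the actual claim data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic paid amounts: heavily right-skewed, bounded at zero.
universe = rng.lognormal(mean=3.0, sigma=1.5, size=40_000)

sample_means = np.array([
    rng.choice(universe, size=100, replace=False).mean()
    for _ in range(10_000)
])

# D'Agostino-Pearson normality test on the distribution of sample means.
stat, p_value = stats.normaltest(sample_means)
print(f"Skewness of sample means: {stats.skew(sample_means):.3f}")
print(f"Normality test p-value:   {p_value:.4g}")
# A tiny p-value and visible right skew indicate that the sampling
# distribution of the mean has not converged to normal at n = 100
# for data this skewed.
```

A visibly skewed histogram of the 10,000 sample means, or a rejection by a formal normality test, is the kind of evidence that the sampling distribution has not normalized at a sample size of 100.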

But I did go further, and the auditor also violated the second rule: independence. 

In essence, the frame, and therefore the sample, contained multiple claims from the same beneficiary with different dates of service. It would take more space than is allocated here to explain all of this in detail, but in general, sicker patients normally have more claims than healthier patients, and those claims usually pay more than the claims for healthier patients. If you don’t test for this and subsequently control for it, you will end up with a sample that contains disproportionately more claims from sicker patients than from healthier patients, skewing the overall average paid amount per claim to the right. Since this metric is used to calculate the extrapolation, it also skews the extrapolated amount higher.
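
One way to quantify the dependence described here is to estimate how strongly paid amounts cluster within beneficiaries (the intraclass correlation) and the resulting design effect. The sketch below is my own illustration, built on an assumed, synthetic severity model (sicker beneficiaries generate more claims and higher payments); it is not part of any contractor’s or appellant’s workpapers.

```python
# Estimate the intra-beneficiary correlation (ICC) of paid amounts and the
# resulting design effect. The data are synthetic: sicker beneficiaries
# generate more claims and higher payments, as in the scenario above.
import numpy as np

rng = np.random.default_rng(7)

groups = []  # one array of paid amounts per beneficiary
for _ in range(500):
    severity = rng.gamma(shape=2.0, scale=1.0)                         # hypothetical sickness level
    n_claims = 1 + rng.poisson(2.0 * severity)                         # sicker -> more claims
    paid = rng.gamma(shape=2.0, scale=30.0 * severity, size=n_claims)  # sicker -> higher payments
    groups.append(paid)

# One-way ANOVA estimator of the intraclass correlation.
all_vals = np.concatenate(groups)
grand_mean = all_vals.mean()
k = len(groups)
n_bar = np.mean([len(g) for g in groups])  # average claims per beneficiary (approximates the ANOVA "n0")

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (len(all_vals) - k)
icc = (ms_between - ms_within) / (ms_between + (n_bar - 1) * ms_within)

# Design effect for a sample built from whole beneficiary clusters of average
# size n_bar: the variance of the mean is inflated by this factor relative to
# the same number of independent claims.
deff = 1 + (n_bar - 1) * icc
print(f"Average claims per beneficiary: {n_bar:.1f}")
print(f"Estimated ICC: {icc:.2f}")
print(f"Design effect: {deff:.2f} (a nominal n of 100 behaves like ~{100 / deff:.0f})")
```

Roughly speaking, when a sample behaves like clusters of correlated claims rather than independent draws, a nominal 100 claims can carry the information of far fewer independent observations, and confidence intervals computed as if the claims were independent will be too narrow.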

In the opening lines of the decision, the judge wrote, “the undersigned finds Medicare payment shall be allowed for some of the claims at issue, and that the use of extrapolation is not supported because the statistical methodology was invalid.” After noting that the statistical extrapolation was upheld at both the redetermination and the reconsideration level, the judge concluded, “I find Mr. Cohen’s argument compelling and conclude that the Appellant has met its burden and has successfully invalidated the extrapolation. (See Ruling No. 86-1). Consequently, based on the analysis above, I find that the sampling and overpayment extrapolation methodology employed in the instant matter was not statistically valid.”

So, that is three cases so far out of some 20 hearings this year. And for the purposes of transparency, the extrapolation has been upheld in three other hearings. So, right now, we are batting .500, but there are still 14 decisions yet to be rendered (and another 12 scheduled through the end of the year). The point is, just because the auditor says it’s okay to extrapolate doesn’t necessarily mean that it’s okay to extrapolate. And it is incumbent upon the provider to fight for their rights. Sometimes you win and sometimes you lose, but at least you don’t have to live with the regret of not doing anything.

The first sentence of Tolstoy’s novel Anna Karenina is “Happy families are all alike; every unhappy family is unhappy in its own way.” Let me butcher this with my own opening sentence, which goes something like “all extrapolation losses are alike; every extrapolation win is a win in its own way.”

EDITOR’S NOTE: As reported by RACmonitor, CMS, in April 2014, consolidated the work of the Zone Program Integrity Contractors (ZPICs) into the Unified Program Integrity Contractors (UPICs). In his article, Mr. Cohen refers to ZPICs because the audits in question were conducted by ZPICs, and it took several years for those cases to be heard at the ALJ level. Older audits were conducted by ZPICs, newer ones by UPICs.

Frank Cohen

Frank Cohen is Senior Director of Analytics and Business Intelligence for VMG Health, LLC. He is a computational statistician with a focus on building risk-based audit models using predictive analytics and machine learning algorithms. He has participated in numerous studies and authored several books, including his latest, titled “Don’t Do Something, Just Stand There: A Primer for Evidence-based Practice.”
