
Frank Cohen

After receiving nearly 400 responses to our RAC survey, what has become very clear is that the RACs represent a bunch of digital cowboys who are not responsible enough to be allowed to roam without proper oversight. At least that’s the conclusion I have reached, and while others may disagree, it is hard to argue with the facts.

First of all, let me make it very clear that this survey was not scientific in nature; in other words, it does not represent a random sample of all healthcare providers in the country. The majority of responses came from Florida, Ohio, New York, North Carolina, Oregon, South Carolina, California, Illinois, Georgia, and Indiana.

More than 80 percent of the responses were associated with a medical practice, and of those, 90 percent indicated that they were subject to a RAC audit. So while I suspect that the survey was subject to volunteer bias, the truth is I don’t care and I don’t think it is important, since the purpose was to get a handle on what providers are experiencing as a result of their RAC audits.

Some of the results were expected, such as the low rate (less than 5 percent) of audits that used extrapolation to determine damages. I expect, however, based on the success of other integrity contractor audits (i.e., those performed by MICs, ZPICs, etc.), that this is a very temporary situation. What was a bit surprising was the number of respondents (nearly 50 percent) who stated that a RAC used (or reported using) random sampling to select records for review.

Use of Extrapolation

Let’s understand the importance of this and how this behavior points toward the practice of extrapolation. According to Medicare guidelines, specifically Centers for Medicare & Medicaid Services (CMS) Pub. 100-08, Chapter 3, Section 10.1.2, statistical sampling is used to calculate and project (i.e., extrapolate) the amount of overpayments made on claims. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 mandates that before extrapolation may be used to determine overpayment amounts to be recovered by recoupment, offset, or otherwise, there must be a determination of a sustained or high level of payment error, or documentation that educational intervention has failed to correct the payment error.

The only way this determination can be made is through a probe audit, which is an initial analysis of a set of records drawn randomly from a universe of claims. Now, it’s perfectly acceptable for a RAC auditor to pull claims without the use of a random sample, and if I were a RAC auditor, my decision about which records to review would be biased toward those with the highest tendency toward error. For example, if a practice were billing for services provided by a PA or NP in a facility where such billing was not permitted, I would pull as many of those procedures as I could in order to maximize my recovery. I could not, however, use this “probe” analysis to make a determination of a sustained or high level of payment error. Even if I did, it would be totally unacceptable to use this sample to extrapolate to the entire universe, since the sample, not being random, represents a very specific type of claim or service.

What’s really discouraging, however, is the following sentence, which also comes from the guidelines: “By law, the determination that a sustained or high level of payment error exists is not subject to administrative or judicial review.”

What’s a Provider to Do?

My interpretation is this: even if the RAC uses improper techniques to make an erroneous determination supporting an extrapolation, there isn’t anything a provider can do about it. Now that gives license for malfeasance, and it’s just plain wrong. Plus, it further supports the need for specific oversight.

The second concern about the results has to do with the huge disparity between what the CERT study determined regarding the rate of payment error and what practices reported as the RAC findings. For 2009, CMS found that around 7.8 percent of the 100,000 or so records examined were paid in error. In contrast, respondents indicated that RACs on average found 74 percent of records examined to have been paid in error. And while the CERT study reported that around 5 percent of the errors were actually underpayments, I have yet to see a RAC audit in which there were any underpayments identified.

To say that not a single record was underpaid is statistically improbable. Without getting involved in a qualitative analysis of all the RAC findings, my initial impression is that the RACs are at the very least overzealous in their reviews, and the fact that they are paid solely on commission can only aggravate the concerns about fairness expressed by so many providers.

Cost of Appealing

Finally, I was particularly concerned that, in agreement with some other studies, our survey indicated that nearly one-third of all records submitted for appeal were overturned in favor of the practice. This means that if 21 records were found to have been paid in error and all of them were appealed, seven would be adjudicated in favor of the provider. Now, some folks think this is a good thing, indicating that the appeal process really must work if a third of overpayment findings won’t stick. But it also says two very important things about the audit process: first, it means that RACs make an error in their determinations a third of the time; and second, win, lose or draw, the appeal process invariably costs the provider time, money, and resources.

Some practices have stated that the cost of the appeal in some cases exceeds the value of the recovery. Since the RACs are paid a commission on their findings of overpayment, maybe one way to keep them in check would be to penalize them every time a provider wins an appeal. For example, when a practice wins an appeal, in addition to not having to pay the overpayment amount, CMS must pay the practice some amount that approximates the cost of the appeal (say $25 or $30). Some have estimated that an appeal costs the provider around $30 and costs the auditor around $60.

Let’s do the math: for every million claims found to have been paid in error, roughly 333,334 would be appealed successfully, yielding a total administrative cost (provider plus auditor) of approximately $30 million. Somehow, that just doesn’t sound like an efficient way to reduce the cost of healthcare.
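For readers who want to check the arithmetic, here is a back-of-the-envelope sketch using the figures assumed above: a one-third overturn rate, an estimated $30 cost per appeal for the provider, and an estimated $60 for the auditor. These inputs are the article's rough estimates, not official CMS data.

```python
import math

# Back-of-the-envelope estimate of appeal costs, using the
# figures assumed in this article (not official CMS data).
ERROR_CLAIMS = 1_000_000   # claims found to have been paid in error
OVERTURN_RATE = 1 / 3      # share of appeals decided for the provider
PROVIDER_COST = 30         # estimated provider cost per appeal, dollars
AUDITOR_COST = 60          # estimated auditor cost per appeal, dollars

# Round up, as the article does (a third of a million claims).
successful_appeals = math.ceil(ERROR_CLAIMS * OVERTURN_RATE)
cost_per_appeal = PROVIDER_COST + AUDITOR_COST
total_admin_cost = successful_appeals * cost_per_appeal

print(f"Successful appeals: {successful_appeals:,}")
print(f"Combined cost per appeal: ${cost_per_appeal}")
print(f"Total administrative cost: ${total_admin_cost:,}")
```

Changing any one input (say, a higher per-appeal cost estimate) scales the total proportionally, which is the point: under almost any plausible assumptions, the administrative overhead of erroneous determinations runs into the tens of millions of dollars.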

To sum things up, there is evidence that supports the need to deal with healthcare fraud and abuse. I believe that if a provider is committing fraud, they should go to jail. If they are abusing the system by taking advantage of loopholes, they should pay the price. But fraud and abuse make up a very small percentage of what is represented by CERT or RAC results. The overwhelming majority of the issues we see simply are due to the fact that the rules and regulations to code and bill for even the simplest of services are medically irrelevant and administratively complex.

Instead of continuing to pile rules on top of rules and then incentivizing contractors to catch people when those rules are broken, maybe it might be far more efficient and cost-effective to make the business of providing healthcare services and procedures simpler and more straightforward.

About the Author

Frank Cohen is the senior analyst for The Frank Cohen Group, LLC. He is a healthcare consultant who specializes in data mining, applied statistics, practice analytics, decision support, and process improvement.

Contact the Author


To comment on this article, please go to editor@racmonitor.com



