The Chapter 8 changes appear to make the audit process less fair than ever before.
“Life Isn’t Fair, but Government Must Be”— Former Texas Governor Ann Richards, 1991
Everyone has a plan until they get punched in the mouth, according to Mike Tyson, retired boxer and armchair philosopher. And it appears that the Centers for Medicare & Medicaid Services (CMS) is planning a one-two punch in the mouth for healthcare providers come 2019.
First, CMS proposed a paradigm shift in how office visits would be coded, reducing the number of office visit levels from 10 to four – and while it is true that physicians have been asking CMS to simplify coding and billing rules and regulations, no one asked them to dismantle a 26-year-old model overnight. There are plenty of problems with this new proposal, but one glaring omission is how audits will be initiated and conducted under the as-yet-undefined documentation criteria for the new system – which brings me to the second punch…
CMS just released a major revision to Chapter 8 of the Program Integrity Manual (PIM). Chapter 8 deals with sampling and extrapolation, and in terms of money grabbing, it is the most financially consequential chapter in the PIM. For those who have not had the pleasure of being subjected to an extrapolation audit, this is where CMS takes the actual overpayment found in a small audit – say, 30 claims – and extrapolates that face-value sum into an estimate that can be tens, hundreds, or even thousands of times more damaging than the initial amount.
For example, let’s say a Unified Program Integrity Contractor (UPIC) pulls 30 claims and finds that the total overpayment was $1,000. Divided by the 30 claims, this results in an average overpayment of $33.33 per claim. Now, let’s say that this sample of 30 was pulled from a universe of 10,000 claims. Multiply that average overpayment of $33.33 by the universe of 10,000 claims, and you get an extrapolated overpayment of more than $333,000! Saying that this is a “bit of a stretch” is an understatement, and it is also why it is so critically important that providers are able to defend themselves against such audits.
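The arithmetic behind this simple point estimate can be sketched in a few lines of Python, using the hypothetical figures from the example above:

```python
# Hypothetical figures from the example: a 30-claim sample
# drawn from a universe of 10,000 claims.
sample_size = 30
universe_size = 10_000
sample_overpayment = 1_000.00   # total overpayment found in the sample

# Average overpayment per sampled claim
mean_overpayment = sample_overpayment / sample_size   # ~$33.33

# Point estimate: project that average across the entire universe
extrapolated = mean_overpayment * universe_size

print(f"Average overpayment per claim: ${mean_overpayment:,.2f}")
print(f"Extrapolated overpayment:      ${extrapolated:,.2f}")
```

A $1,000 finding becomes a demand of roughly $333,333 – which is exactly why the statistical validity of the sample matters so much.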
In any other industry, a challenge to the statistical process of extrapolation would rely upon standards of statistical practice, but such is not the case with CMS. The current working copy of the guidelines (Chapter 8) is rife with inaccuracies, incorrect assumptions, elastic interpretations, and auditor license, to the degree that defending against an extrapolation audit is heavily biased against the provider. For example, the U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG), as well as other CMS statistical experts, have opined that using the paid amount to calculate sample size or create stratifications is completely wrong and unacceptable. But because the PIM doesn’t require that an auditor use the proper methods, it is acceptable for CMS audits.
Another example deals with the fact that, because of basic billing characteristics, paid and overpaid data is almost never normally distributed, meaning that the use of the average and standard deviation for calculating point estimates, confidence intervals, and precision is almost always inappropriate. But because the PIM doesn’t discuss the issue of distributions, the auditor will hide behind this ambiguity, refusing to abide by established standards within the statistical community. I have been involved in statistics for some 40 years, and I can’t think of another industry that has such an array of disjointed and indefensible statistical policies, procedures, and guidelines as CMS.
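To see why this matters, consider a small simulation with synthetic billing-style data (my illustration, not anything drawn from an actual audit): most claims are small and a few are very large, so the mean sits far above the median and normal-theory intervals break down.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic paid amounts: a lognormal distribution is a common stand-in
# for billing data, where most claims are small and a few are very large.
paid = rng.lognormal(mean=4.0, sigma=1.2, size=10_000)

# In right-skewed data the mean sits well above the median, and a
# mean +/- 2*SD interval can extend below zero -- a sign that
# normal-theory point estimates and confidence intervals fit poorly.
print(f"mean        = {paid.mean():8.2f}")
print(f"median      = {np.median(paid):8.2f}")
lower = paid.mean() - 2 * paid.std()
print(f"mean - 2*SD = {lower:8.2f}  (negative => normal model fits poorly)")
```

No paid amount can be negative, yet the normal-theory interval dips below zero – the distributional mismatch the PIM never addresses.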
As with the 2019 evaluation and management (E&M) changes, CMS had a chance to get this revision to Chapter 8 right – and once again, it missed the boat. With the E&M guidelines, the agency failed to do any substantive testing to assess the anticipated impact on providers and beneficiaries, both financially and behaviorally.
With respect to Chapter 8, the agency missed a great opportunity to bring the guidelines up to accepted standards of statistical practice, ensuring that the audit process would be fair to both the government (read: taxpayers) and the providers. But it didn’t, and shame on them for that. While some of the changes were benign, rote wording edits, and while some may even benefit the provider, the majority of the big changes were constructed to protect the auditor, ensuring that the process will be even less fair than it has been.
Section 22.214.171.124 (General Purpose) now adds that the sampling methodologies used “shall be well-accepted methodologies amongst statisticians, and complete explanation shall be provided for why the methodology used was the appropriate methodology in the situation.” This has been sorely lacking in the current guidelines. I can’t even keep track of how many times we have asked an auditor for a detailed explanation, including the logic, behind the use of a sampling methodology – and without exception, we have been told that they don’t have to provide that. Hopefully, this will change. But before we get excited about the possibility, read on: “failure by the contractor to follow one or more of the requirements contained herein may result in review by CMS of their performance, but should not be construed as necessarily affecting the validity of the statistical sampling and/or the projection of the overpayment.” And this is what we will see throughout this revision. Basically, if the provider misses a filing deadline by a single day, they are done; if the government’s auditors mess up almost everything, there are no consequences for them. In essence, without the intervention of an objective arbitrator, providers don’t have a chance (hence why so many audits are appealed to the administrative law judge, or ALJ, level).
In Section 126.96.36.199, we read that “[when] assessing the distribution of the paid amounts in the sample frame to determine the sample design, it is very likely that the distribution of the overpayments will not be normal. However, there are many sampling methodologies (for example, use of the Central Limit Theorem) that may be used to accommodate non-normal distributions. The statistician should state the assumptions being made about the distribution and explain the sampling methodology selected as a result of that distribution.”
Here, more likely than not, they are talking about using Monte Carlo simulations to show that the sample meets the CLT criteria – and if that’s the case, they are sadly missing the point. In such a simulation (often implemented as a bootstrap), the auditor takes, say, 10,000 random samples from the sampling frame, computes the average of each of those samples, and, if the result is a normal distribution, declares the methodology and sample size appropriate. The problem is that in the real world, we look at only one sample, not 10,000 – and if that one sample is non-normally distributed, a whole different set of rules applies (well, again, in the real world, anyway).
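The simulation pattern described above might look something like the following sketch (entirely synthetic data; this is my reconstruction of the argument, not any auditor’s actual code). The distribution of 10,000 simulated sample means is pulled toward normal by the CLT, but the one sample an audit actually rests on still inherits the frame’s skew:

```python
import numpy as np

def skewness(x):
    """Sample skewness: near 0 for symmetric data, large for long right tails."""
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

rng = np.random.default_rng(0)

# A skewed sampling frame: roughly 80 percent of claims carry no
# overpayment, while the rest follow a long-tailed distribution.
frame = np.where(rng.random(10_000) < 0.8,
                 0.0,
                 rng.lognormal(mean=4.0, sigma=1.0, size=10_000))

# The simulation argument: draw many samples of 30 and examine the means.
sample_means = np.array([
    rng.choice(frame, size=30, replace=False).mean()
    for _ in range(10_000)
])

print("skewness of the frame      :", round(skewness(frame), 2))
print("skewness of 10,000 means   :", round(skewness(sample_means), 2))

# But an actual audit rests on ONE sample of 30, which inherits the
# frame's skew rather than the near-normality of the simulated means.
one_sample = rng.choice(frame, size=30, replace=False)
print("skewness of a single sample:", round(skewness(one_sample), 2))
```

The averaged-over-10,000-samples distribution looks tame; the single sample the overpayment is actually computed from does not.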
One really big change is in 188.8.131.52 (Determining When a Statistical Sample May Be Used). We have always had the wording that statistical sampling can be used “when it has been determined that a sustained or high level of payment error exists.” In the past, however, “high level” was not defined. I’m not sure it is clearly defined in the revised guidelines either, but they do state that a trigger includes “high error rate determinations by the contractor or by other medical reviews (i.e., greater than or equal to 50 percent from a previous pre- or post-payment review)…” I am not sure whether this means that a provider can now contest the use of statistical sampling and extrapolation if no probe audit has been performed, or if there isn’t any evidence from a prior audit of this level of error, and I am open to comments on this. Help!
In that same section, there is a caveat that may render the above invalid: “if the contractor believes that statistical sampling and/or extrapolation should be used for purposes of estimation, and it does not meet any of the criteria listed above, it shall consult with its COR and BFL prior to creating a statistical sample and issuing a request for medical records from the provider/supplier.” If I am reading this correctly, even if the auditor can’t find a valid reason to sample and extrapolate, they can still go to the contracting officer’s representative (COR) or business function lead (BFL) for permission in the absence of meeting any of the criteria. Who are these people, anyway?
Section 184.108.40.206 (Consultation with a Statistical Expert) mentions an issue that seems to permeate many of the changes in other sections. First, it reiterates the importance of and need for detailed documentation, workbooks, and methodological statements for the sampling and extrapolation process. But then it seems to restrict release of these materials to other contractors, such as the Medicare Administrative Contractors (MACs), the Qualified Independent Contractors (QICs), and the ALJs; unless I have just missed it, there doesn’t seem to be any language requiring that these details be made available to the provider. Again, if I am wrong here, I will be the first to apologize to CMS, so please feel free to correct me.
Section 220.127.116.11 (Defining the Universe, the Sampling Unit, and the Sampling Frame) creates an excuse for the auditor to avoid including all raw data in the universe. Quite often, the universe and the sampling frame are the same. Here, the guidelines imply that the universe should already be filtered for certain criteria, such as unpaid claims. But this defies the very definition of a universe, and it deprives the provider of the ability to see just what was filtered out of the universe to create the sampling frame.

The other issue, one that has long existed and created a lot of controversy, is the exclusion of unpaid claims from the sampling frame. CMS’s logic is this: if the claim wasn’t paid, then CMS didn’t suffer any financial loss or damage. The problem is that, throughout both chapters 3 and 8, the contractors are instructed to look at underpayments as well as overpayments – and which claims do you think would be the most subject to an underpayment? Those that were not paid. Excluding them robs the provider of the opportunity to net the overpayments against the underpayments, which could have a significant impact on the extrapolated overpayment estimate. I argue this all the time, but will admit that it is rarely considered, since, once again, the PIM, right or wrong, is often treated as the final authority when appealing an extrapolation.
Here’s a big one for me. Section 18.104.22.168.2 basically excuses CMS auditors from abiding by one of the most axiomatic principles of inferential statistics: the independence assumptions that underpin the Central Limit Theorem, as discussed above. Here is a quote from that section: “certain sampling theorems require an assumption that sampled items are identically and independently distributed. In sampling from a finite universe without replacement, there is always a certain amount of dependence, because the probability of selection changes with each unit that is selected. However, correlations of characteristics in the target population do not imply dependence in sampling.”
They go on to say that “in this context, independence means the selection of one sampling unit does not influence, or gives no information about, the outcome of another selection.”
This is just soooo wrong!
If the claim is the sampling unit and more than one claim is selected from the same beneficiary with different dates of service, then the requirement for identically and independently distributed units has been grossly violated. The concept is actually pretty simple: sicker patients see the provider more often. Therefore, in a universe of claims data, sicker patients are overrepresented relative to healthy patients – and if this is not accounted for in the sample design, it undermines the independence of the sampled units, period. Other statisticians may disagree, but I don’t see any situation in which this is OK.
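A small simulation makes the stakes concrete (entirely synthetic data and my own illustration, not anything prescribed by the PIM). If sicker beneficiaries both file more claims and are more likely to have a problem claim, a standard error that treats every claim as independent understates the true uncertainty; a bootstrap that resamples whole beneficiaries shows the difference:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic universe: 1,000 beneficiaries. "Sicker" beneficiaries both
# file more claims and are more likely to have a problematic claim, so
# claims from the same beneficiary are correlated.
n_benes = 1_000
severity = rng.beta(2, 5, n_benes)             # per-beneficiary illness level
claims_per_bene = 1 + rng.poisson(10 * severity)
bene_id = np.repeat(np.arange(n_benes), claims_per_bene)
overpaid = (rng.random(bene_id.size) < severity[bene_id]).astype(float)

# Naive standard error: treats every claim as independent.
naive_se = overpaid.std(ddof=1) / np.sqrt(overpaid.size)

# Cluster bootstrap: resample whole beneficiaries, preserving the
# within-beneficiary dependence that claim-level math ignores.
idx_by_bene = [np.flatnonzero(bene_id == b) for b in range(n_benes)]
boot_means = []
for _ in range(2_000):
    picks = rng.integers(0, n_benes, n_benes)
    rows = np.concatenate([idx_by_bene[b] for b in picks])
    boot_means.append(overpaid[rows].mean())
cluster_se = np.std(boot_means, ddof=1)

print(f"naive SE (claims assumed independent): {naive_se:.4f}")
print(f"cluster-aware SE (beneficiary level) : {cluster_se:.4f}")
```

The cluster-aware standard error comes out larger – meaning an extrapolation built on the naive version claims more certainty than the data support.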
Notably, Section 22.214.171.124.4 (Overpayment/Underpayment Worksheets) is missing from the revised guidelines, and 126.96.36.199 has been changed from “Informational Copies to Primary GTL, Associate GTL, SME or CMS RO” to “Maintenance of Documentation.” The revised section says that “the contractor shall maintain all documentation pertinent to the calculation of an estimated overpayment, including but not limited to the statistician-approved sampling methodology, universe, sample frame, and formal worksheets. The documentation must be sufficient to allow for any future replication and/or validation by an administrative or judicial body.”
What I don’t get is why this doesn’t mention the provider. Does it mean that these workbooks are available only to administrative or judicial officials, and not to providers? If so, the exclusion would make it nearly impossible for a provider to effectively defend themselves against an extrapolation audit. This issue of excluding providers (or their experts) from access to documentation and data files seems to be a theme throughout the revision.
The last big change is found in Section 188.8.131.52 (Recovery from Provider or Supplier). It says, in part, that “the contractor shall obtain approval from CMS prior to issuing a findings letter to the provider/supplier when the estimated overpayment exceeds $500,000 or is an amount that is greater than 25 percent of the provider’s/supplier’s Medicare revenue received within the previous 12 months.” I have no idea what the impact of this will be, as there isn’t any prior information on how the BFL/COR will respond to an excessively high extrapolation finding. In my own experience of working on nearly 200 extrapolation cases, the overwhelming majority have involved demands in excess of $500,000.
Finally, I was disappointed to see that several issues I believe should have been included were not. For example, there was no discussion of using non-parametric measures in the case of a grossly skewed sample. And even though the Office of Management and Budget (OMB), the Government Accountability Office (GAO), and the HHS OIG have all promulgated guidelines regarding acceptable precision rates, there is nothing within the revised guidelines to address precision – and precision is a huge consideration for extrapolation. I have seen extrapolations proceed when the precision has been higher than 25 percent, which is completely unacceptable; but since there is no guidance within the PIM, the contract auditor is left to decide this, authoritatively, on their own. I am also disappointed to see that, in 184.108.40.206, we are still told that “by law, the determination that a sustained or high level of payment error exists is not subject to administrative or judicial review.”
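For readers unfamiliar with the term, one common way to express relative precision is the half-width of the confidence interval as a share of the point estimate. The sketch below uses synthetic per-claim overpayments and a two-sided 90 percent interval, ignoring the finite population correction for simplicity – it is an illustration of the concept, not CMS’s prescribed method (the PIM prescribes none, which is the point):

```python
import numpy as np

rng = np.random.default_rng(7)

# A hypothetical, heavily skewed sample of 30 per-claim overpayments.
overpayments = np.round(rng.lognormal(mean=3.0, sigma=1.0, size=30), 2)
universe_size = 10_000
n = overpayments.size

point_estimate = overpayments.mean() * universe_size
se_total = overpayments.std(ddof=1) / np.sqrt(n) * universe_size

# Two-sided 90% interval; t critical value for 29 degrees of freedom.
t_crit = 1.699
half_width = t_crit * se_total

# Relative precision: the interval half-width as a share of the estimate.
precision = half_width / point_estimate
print(f"point estimate    : ${point_estimate:,.0f}")
print(f"relative precision: {precision:.1%}")
```

With skewed data and a sample of only 30, relative precision like this routinely lands well above the 25 percent threshold I consider the outer limit of acceptability – yet nothing in the PIM stops such an extrapolation from proceeding.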
In the end, this just looks like more cover for the auditors. To me, it looks like CMS took all of the issues that were used to successfully defeat inappropriate extrapolations and provided excuses for poor statistical techniques and bad behavior, lowering even further, if that is possible, the accountability of CMS-contracted auditors.
Unfortunately, this continues to fuel the fervor for ALJ hearings, adding to the already untenable backlog that currently exists. And if the past is any predictor of the future, I believe that we will continue to see a high success rate at having extrapolations thrown out at the ALJ level.
In the immortal words of Ann Richards, former governor of Texas, “life isn’t fair, but government must be.”
And that’s the world according to Frank.