Why the Scoring Mechanisms of Medical Decision-Making Are Flawed

The scoring mechanisms of the MDM are suggested tools, not rules or laws.

In our last article, we explored how time, in conjunction with medical decision-making (MDM), must support the same level of service, and why that rule makes sense. However, we were left with a puzzling consideration regarding MDM, and here is why: the scoring mechanisms of MDM were created through the Marshfield Clinic Guidelines (MCG). So let's consider what relevance the MCG have to our current audit process.

We should start at the beginning. Marshfield Clinic was a large multi-specialty group in Wisconsin with many locations. During the creation of the 1995 Documentation Guidelines, the group's Medicare Administrative Contractor (MAC) approached them and asked if they would beta-test the guidelines. The clinic quickly identified that while the history and exam components included some sort of qualification system, the MDM had no such scoring algorithm. Therefore, the clinic created its own internal MDM guidance.

This evolved into the MDM scoring process found in almost every scoring or audit grid in the industry; it has become our industry's standard approach to reviewing MDM. Ironically, while most MACs include the MCG method of scoring MDM in their interactive or static audit worksheets, the Centers for Medicare & Medicaid Services (CMS) National Carrier Guidance (NCG) has never acknowledged or validated the MCG scoring approach.

Whether you are reviewing documentation for time-based services or basing it on the documentation components, how does this newfound knowledge of MDM scoring impact your audits? Remember, these are guidelines—not rules, not laws—and they are not even "official" guidelines. Do a quick Internet search for CMS Evaluation and Management (E&M) Documentation Guidelines and you will find a tool that is essentially the 1995 and 1997 Documentation Guidelines wrapped in CMS logos, with no mention of the MCG.

I’m not telling you to burn the guidelines and never use them again, but I am saying you need to understand that this scoring is just a suggested tool. Maybe I can better explain by actually reviewing some of the controversies that exist in MCG scoring, and how knowing the information in this article may sway you on future audits and reviews.

Our first scoring step in the MDM is reviewing the diagnosis. The MCG have an option for a self-limited or minor problem, while the other two options are a "new problem" or an "established problem to the provider today." The golden rule of auditing says, "I am not allowed to assume or interpret." So how could I reasonably use this option, when as a non-clinician I would be assuming or interpreting the documentation to assess that the problem is minor? This is subjective and completely open to interpretation. For example, an insect bite is a common example of this sort of problem, but with Lyme disease, Zika, West Nile virus, etc., an insect bite suddenly is no longer merely a minor problem. Therefore, it may be best to exclude this option from the scoring grid through an audit policy for your coders and auditors to follow. I have followed this scoring technique in audits I have personally performed and do not feel that I have ever given more or fewer "points" than were due.
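To make such a policy concrete, here is a minimal, hypothetical Python sketch of how an internal audit policy could enforce excluding the minor-problem option. The point values reflect the common industry grid derived from the MCG; the function and category names are my own illustration, not any official tool.

```python
# Common industry point values for the MCG-style diagnosis element.
# The category keys are illustrative names, not official terminology.
MCG_DIAGNOSIS_POINTS = {
    "self_limited_minor": 1,
    "established_stable": 1,
    "established_worsening": 2,
    "new_no_workup": 3,
    "new_with_workup": 4,
}

def score_diagnosis(problem_type: str, allow_minor_option: bool = False) -> int:
    """Return diagnosis points for one problem under an internal audit policy.

    When allow_minor_option is False (the policy suggested above), the
    subjective self-limited/minor category is rejected, forcing the coder
    or auditor to classify the problem as new or established instead.
    """
    if problem_type == "self_limited_minor" and not allow_minor_option:
        raise ValueError(
            "Internal policy: classify the problem as new or established "
            "rather than using the subjective minor-problem option."
        )
    return MCG_DIAGNOSIS_POINTS[problem_type]
```

The design choice here mirrors the article's suggestion: rather than silently remapping the minor option, the policy surfaces it so the reviewer must make an explicit, defensible classification.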

Also within the diagnosis we have another issue that is truly a Pandora's box: a new problem with additional workup. Most MACs that have published guidance state that additional workup is any work done beyond today's encounter. This would mean that if my provider performed a chest X-ray in the office today, that would not be considered additional workup, but if the patient now needs a CT scan, that would represent additional workup. The rationale is that at the end of the encounter, the provider still does not have a definitive answer to the patient's problem, so additional workup was needed. However, one MAC (Palmetto) has actually published guidance stating that work done during today's encounter would also be considered additional workup. This rather blatant contradiction between MACs demonstrates the continuing complexities within the MDM.

Therefore, there is a quandary about what really constitutes additional workup. Before we can answer that, we have to take a sharp look at two letters: U and P. The MCG specifically included the word "up" within their scoring statement for a new problem to the provider today. Here is the quandary this creates: if a provider orders a urinalysis (UA) in the office and then decides to send the urine off for culture and sensitivity, this is considered additional workup. But if that same provider sees a new patient and decides that surgical intervention is indicated, that is additional work—yet many say you cannot count it, because it is not additional workup. Before you jump on the side of those who say that is not additional workup, I want you to be the one to sit across the table from a physician who is just trying to get an honest day's pay for an honest day's work and tell him that sending urine to the lab gives him more "points" than deciding surgical intervention was needed. I believe the consideration here should be removing those two letters, "up," and merely stating "additional work."

If the patient's condition necessitates work by the provider beyond today's encounter to accurately treat (lab/X-ray), diagnose (biopsy/diagnostic testing), identify the problem (consult with another provider), or resolve the patient's problem (surgical intervention), that seems a more inclusive and accurate definition of "additional work," and one that clears up a lot of the gray area.
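The broader "additional work" definition proposed above could be captured in an internal policy check along these lines. The categories and function below are hypothetical illustrations of such a policy, not published MAC guidance.

```python
# Categories of orders that extend the provider's work beyond today's
# encounter, per the broader "additional work" definition proposed above.
# The category names are illustrative, not from any official grid.
ADDITIONAL_WORK_CATEGORIES = {
    "lab_or_imaging",         # treat: lab / X-ray ordered for a later date
    "diagnostic_procedure",   # diagnose: biopsy / further diagnostic testing
    "referral_or_consult",    # identify: consult with another provider
    "surgical_intervention",  # resolve: planned surgery
}

def is_additional_work(order_category: str, performed_today: bool) -> bool:
    """True when an order represents provider work beyond today's encounter."""
    return (not performed_today) and order_category in ADDITIONAL_WORK_CATEGORIES
```

Under this definition, a planned surgery counts the same way a send-out culture does: both are work the provider must carry beyond the visit, which resolves the "urine to the lab outscores surgery" inequity described above.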

The MCG seem to be pretty straightforward in identifying the difference between a new versus an established problem to the provider: if the problem is new to the examiner, we score it at a higher point value, and if it is established, we score it at a lower point value. This approach is no longer as straightforward in healthcare today, as we have large groups that include extenders, in which a patient may see the physician at the first encounter but a non-physician provider (NPP) the next time. This style of clinic organization has led to an unstated "abuse" of scoring a new problem rather than an established problem to the provider.

Let's find the actual rules—well, guidelines—to truly interpret the intent of this area of the encounter. Ironically, in this area the Documentation Guidelines (DG) offer a very direct approach. The DG do not define these as a new or established problem to the examiner today, but rather identify them as a presenting problem with an established diagnosis or without an established diagnosis. The DG never include the phrase "new to the provider" that the MCG do. The wording of the DG seems to make the guidance clearer. For example, if you are seeing my patient for me today because I am lying on a beach somewhere in the Caribbean, and we are the same specialty, under the same TIN, and you therefore have access to my full medical documentation, which includes an active treatment plan for the problem—then this would NOT be a new problem to you. It is an established problem, and should not be given the credit of a new problem for that provider. As a result of misunderstanding the MCG scoring of new versus established problems, NGS Medicare came out last year with a "clarification" on this point, and there was a lot of buzz that NGS was going rogue and changing its view of a new problem within the DG. No, they weren't. They were creating awareness of the misleading interpretation of the MCG and referring you back to the DG, which clearly define how to consider these conditions.

That leads us to the data & complexity portion of the MDM. If you do any auditing or coding on a regular basis, you know that this is the area of the MDM where you tend to "drop" in the scoring process. We do this, however, because of a lack of documentation of work done during the encounter. Our focus here is on the contradictions and concerns raised by the MCG. In my opinion, the MCG did provide a great scoring tool in this area, but confusion arises because some categories redundantly include the same element, sometimes even at a higher point value. For example, look at the MCG scoring grid on data & complexity, and look at the fifth option down, which provides 1 point. It states that we are allowed to count this when the history is obtained from someone other than the patient. But then look at the sixth option, worth 2 points, which also allows credit for obtaining history from someone other than the patient. How do I know which one to use? I have heard the same "opinions" and "interpretations" you have, given by consultants, educators, and conference speakers through the years. But again, those are not facts—not rules—and at the end of the day, they are interpretations of how to best handle a contradiction. Many say to allow the 1-point option if someone is merely supplementing the history, but to use the 2-point option if we obtain the history fully from someone else. That sounds like a good idea, but then again, I don't have a rule, a law, or a guideline to differentiate these definitively.
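If your organization adopts the popular supplemental-versus-full-history interpretation, it could be written into internal policy along these lines. The split itself is an assumption made for consistency's sake, as the article notes, not a rule from any guideline, and the function is a hypothetical illustration.

```python
def history_points(obtained_entirely_from_other_source: bool) -> int:
    """Score the history-from-another-source data element under internal policy.

    Policy assumption (not official guidance): history merely supplemented
    by another party uses the 1-point grid option; history obtained entirely
    from someone other than the patient uses the 2-point grid option.
    """
    if obtained_entirely_from_other_source:
        return 2  # sixth grid option: history obtained fully from another source
    return 1      # fifth grid option: history supplemented by another party
```

Codifying the choice this way does not resolve the underlying contradiction in the grid, but it does make every coder and auditor on the team score the element the same way.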

The last point we will discuss today is the last option on the data & complexity grid. To discuss this, I am going to provide a side-by-side comparison of the MCG guidance and the DG for consideration:

MCG: Independent visualization of image, tracing or specimen itself (not simply review of report)


The way the MCG word this element leads coders and auditors to give "points" in this area that are unwarranted. Many read this as saying that if my provider read the chest X-ray he performed in his clinic that day, then he gets this credit as well, but that couldn't be further from the intention. Remember that the provider is also receiving professional component reimbursement for the interpretation, so crediting both could be construed as "double-dipping."

Now, let’s review DG guidance on this same element:

DG: The direct visualization and independent interpretation of an image, tracing or specimen previously or subsequently interpreted by another physician should be documented.


That is not nearly as vague or ambiguous. It specifically indicates that if my provider is laying eyes on the image/tracing/specimen and creating his own interpretation—even though it was already read by the radiologist, cardiologist, or pathologist—then my provider is given additional credit for that work performed. So referring back to the actual guidance certainly helped to better define this element.

Are you curious as to what prompted this article? On our own internal auditing team at the National Alliance of Medical Auditing Specialists (NAMAS), we have had—to put it nicely—a difference of opinion when it comes to considering surgical intervention as additional workup.

So I set out to research what the guidance truly is and how we should accurately score these encounters, and that is when all of the dominoes fell into place for me. I realized the MCG are not "official" guidance, and there is room for interpretation of "up." Therefore, there really isn't a "rule" on this point. That means if you think surgery is additional work and should be counted, I could agree with you, but if you stand strong that the MCG say it is workup and surgery is NOT workup, I cannot say you are right or wrong, either. What I do now know, however, is that the scoring of MDM is a reference guide, if you will—one which (just like the rest of the DG) includes gray areas. Our team, along with your team, will continue to use the MCG approach to scoring the MDM, as it is the industry standard, but for consistency's sake you must have a policy to guide your team. You must lay out what is considered additional workup within your organization, and whether you are even using the minimal-problem scoring element. The key here is that the MCG are not guidelines that CMS formally recognizes, but the DG are. Use the DG to help guide your selections in MCG scoring of the MDM—and when you reach those areas with blatant potholes, fill them with internal policy and guidance for interpretation.

One more thought: there are many coders, auditors, and organizations who feel MDM should define the overall level of service. Really? I would be curious, after you have read this article, whether you still feel that assigning the level based on guidelines that are not CMS-approved is really the best approach, especially in light of what we do know from CMS: that medical necessity, not MDM, is the overarching criterion. In fact, Noridian even says medical necessity cannot be quantified by a points system alone—and that is exactly what MDM scoring is: a flawed scoring system. Just another thought to ponder in your compliance and auditing policy creation.


Shannon DeConda CPC, CPC-I, CEMC, CMSCS, CPMA®

Shannon DeConda is the founder and president of the National Alliance of Medical Auditing Specialists (NAMAS) as well as the president of coding and billing services and a partner at DoctorsManagement, LLC. Ms. DeConda has more than 16 years of experience as a multi-specialty auditor and coder. She has helped coders, medical chart auditors, and medical practices optimize business processes and maximize reimbursement by identifying lost revenue. Since founding NAMAS in 2007, Ms. DeConda has developed the NAMAS CPMA® Certification Training, written the NAMAS CPMA® Study Guide, and launched a wide variety of educational products and web-based educational tools to help coders, auditors, and medical providers improve their efficiencies. Shannon is a member of the RACmonitor editorial board and is a popular guest on Monitor Mondays.
