Defenses Against AI-Based Medicare Audits: Part II


Legal scholars, practitioners, and other observers have grappled with the legal dimensions of artificial intelligence (AI).

As early as 1992, Solum discussed the case for considering AI to be a legal person.[1] As late as 2014, writers were still arguing that AI could not deliver the “higher-order cognition” needed by the legal profession.[2] But that time has passed. Today, particularly for routine matters such as one might find in a Medicare audit, AI can perform as well as an average attorney. After all, much of what attorneys do is already based on boilerplate, standard documents, and standard procedures. There is no reason to think that AI cannot pursue precisely the same routes. The reality is that no one will know the difference.

Artificial Intelligence and Auditing

Computer-assisted auditing based on machine learning is being applied by hospitals to handle sets of data that are too large for human auditors. For example, diagnosis-related group (DRG) data can be used to pre-audit the accuracy of health insurance declaration forms.[3]
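As a purely illustrative sketch (not the method of the cited study), a pre-audit screen of this kind might flag claims whose billed amount deviates sharply from the average for the same DRG code. All field names and the threshold here are hypothetical:

```python
from statistics import mean, stdev

def flag_outlier_claims(claims, z_threshold=2.0):
    """Flag claims whose billed amount deviates sharply from the
    average for the same DRG code (a hypothetical pre-audit rule)."""
    by_drg = {}
    for claim in claims:
        by_drg.setdefault(claim["drg"], []).append(claim)
    flagged = []
    for group in by_drg.values():
        amounts = [c["amount"] for c in group]
        if len(amounts) < 3:
            continue  # too few peer claims to compare against
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma == 0:
            continue  # all peers billed identically; nothing stands out
        for c in group:
            if abs(c["amount"] - mu) / sigma > z_threshold:
                flagged.append(c["id"])
    return flagged
```

A rule this simple only surfaces candidates for human review; it says nothing about whether the outlier claim is actually wrong, which is precisely the gap the rest of this article is concerned with.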

Medicare is another battlefield. It is said that there is a significant amount of fraud in the system, but given its size and the gigantic ocean of data, it is difficult to create machine-learning models to detect fraud.[4] Another issue is, what happens after fraud is suspected? Does this then lead to a tip-off for the auditors?[5] After all, merely referring a healthcare provider for intensive auditing is not reviewable under current administrative law.[6]
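One reason model-building is hard is class imbalance: fraudulent claims are vanishingly rare relative to legitimate ones, which is why the study cited in note [4] resorts to random over- and undersampling. A toy sketch of the undersampling step, with a hypothetical “fraud” label field, might look like this:

```python
import random

def random_undersample(records, label_key="fraud", seed=0):
    """Balance a heavily skewed data set by randomly discarding
    majority-class (non-fraud) records until the classes are equal
    in size. A toy version of the RUS step; field names are
    hypothetical, not taken from the cited study."""
    rng = random.Random(seed)
    minority = [r for r in records if r[label_key]]
    majority = [r for r in records if not r[label_key]]
    # keep every rare (fraud) record, sample an equal number of the rest
    balanced = minority + rng.sample(majority, len(minority))
    rng.shuffle(balanced)
    return balanced
```

The trade-off is the one the cited authors measure: discarding majority records speeds up training but throws away data, which is why they also test oversampling and a combination of the two.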

Is AI Certified as Reliable?

It is widely recognized that AI auditing systems can be faulty. Some have proposed a medical algorithmic audit framework that can help the auditor understand any algorithmic errors that might arise. This helps target the weaknesses of an AI system and can also establish mechanisms to mitigate their impact on the patient and on the organization.[7]

Many have argued that systems dependent on AI and machine learning need to be certified as lawful, ethical, secure, and reliable.[8]

Some have called for the launching of an international regulatory agency to handle the legal and ethical issues of AI.[9] Others have proposed that the government set up an agency to certify the safety of these systems; under one such proposal, certified “AI programs would be subject to limited tort liability, while uncertified programs that are offered for commercial sale or use would be subject to strict joint and several liability.”[10]

At this time, there is no clear and universally accepted methodology to evaluate AI in the healthcare setting, or really in any setting. Some have suggested using “standards of clinical benefit including downstream results, such as overall survival” as a key measure.[11]

But at this time, there are no standards for this, and no authorities are empowered to perform this certification.

Artificial Intelligence Means Real Bias

It is recognized that AI systems can exhibit unintended bias in healthcare. For example, even though an AI might initially learn under the supervision of a human healthcare provider, it might inadvertently learn to discriminate or to make other poor decisions that are systematically adverse for some groups.

It also has been shown that clinicians are affected by automation bias; that is, they tend to over-rely on automation. In response, some have developed techniques for discovering bias in AI decision-making.[12]
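To illustrate the kind of check such bias-discovery techniques perform, consider a simplified sketch (far cruder than systems like FairLens, and with hypothetical field names) that compares audit-flag rates across demographic groups:

```python
def flag_rate_disparity(decisions, group_key="group", flag_key="flagged"):
    """Compare the audit-flag rate across groups. A large ratio
    between the highest and lowest rates is a crude signal of
    possible systematic bias (simplified illustration only)."""
    totals, flags = {}, {}
    for d in decisions:
        g = d[group_key]
        totals[g] = totals.get(g, 0) + 1
        flags[g] = flags.get(g, 0) + (1 if d[flag_key] else 0)
    rates = {g: flags[g] / totals[g] for g in totals}
    lowest = min(rates.values())
    ratio = max(rates.values()) / lowest if lowest > 0 else float("inf")
    return rates, ratio
```

A disparity found this way is only a starting point; it cannot say whether the difference reflects bias in the model, in its training data, or a genuine difference in the underlying claims, which is why the published frameworks go considerably further.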

Research also has shown that dependence on algorithms in auditing Medicare claims leads to crippling problems.

“[S]tatistical analysis tools utilized in fraud inspection processes prescribed by the U.S. Federal Government are deployed to justify abuse of power via auditor enforcement actions. These are used against healthcare providers in a fashion that stifles their financial viability and shifts power toward larger, albeit not necessarily more effective or efficient, actors.”[13]

It also was found that using these tools of data analytics “allow[s] government auditors to justify sanctions, promote the use of power, and abdicate responsibility for the consequences.”[14]

One would imagine that a typical Medicare auditor, generally with considerably less education and training, would be even more vulnerable to these problems.

We might wonder whether the amount claimed to have been saved by these abusive recoupments is worth the strain on the healthcare delivery system.[15]

It is still too early to see where all of this is going, but it is a reasonable prediction that the continued use of algorithms and AI will generate more problems and create more harm in the healthcare system. The reason for this is that the current system is built on an architecture of opposition between the payor and the provider. Until information systems are used as a means of harmonization, and as a means of anticipating and preventing fraud before it even occurs, this congenital weakness in our healthcare system will remain.

Summary of Legal Defenses Against AI Audits

This light review of some of the literature indicates that developing legal defenses against AI is as new as AI itself. As of this time, there is no clear defense, but we can see the emergence of several themes that eventually will be tested in the courts.

The Algorithm is Not Certified Reliable

One of the first problems with the use of algorithms, machine learning, and AI is that there is no certifying authority that guarantees the accuracy of the software. There are no standards in place for determining whether the results of these algorithms are valid. An additional problem is that it is extremely difficult for the courts, even using outside experts, to determine whether an algorithm is reliable. Bottom line: the algorithm does not pass a test for scientific validity.

Algorithms Are Not Persons

Another emerging argument is that an audit done by AI is not done by auditors. Software systems are not people. They do not have the experience of people, and there is no way to know, or to have the algorithm explain, how it came to a decision.

In addition, the present system of auditing in the United States, predicated on the use of subcontractors operating as agents for the government, is based on human auditors who have undergone various tests and certifications in order to get their jobs. There is no such certification for algorithms. The audit contractors are chartered to provide human auditors for this work. If humans are no longer doing the auditing, then the contractors’ work is invalid, unless the contracts are renegotiated and the company charters changed.

AI Audits Are Not Reproducible in Court

Although the outcome of an audit by AI might seem compelling, it is impossible to understand the factors that were considered and the reasoning behind any decision. For example, unlike a case with an actual human, it is impossible to have any type of hearing under administrative law in which the so-called auditor can testify or provide further explanation about a decision.

AI Audits Violate the Sixth Amendment

This brings into question the entire basis of our legal system, in which the accused should be able to confront the accuser. If the accuser is an undecipherable algorithm, then how can anyone, including the trier of fact, truly understand what has happened and whether the accused is guilty? How can the Sixth Amendment be complied with?[16]

Humans Cannot Verify the Work of AI

Even if the work of the AI algorithm appears at first to be logical and reasonable, it is impossible to dig into the details and understand exactly how the conclusions were derived. There is no record of what factors were considered, what criteria were used, or even how decisions were made. In essence, it is a question of trust. When the stakes are so high, and the future of a healthcare provider may be on the chopping block, can a trier of fact depend upon an algorithm that no one can understand?

EDITOR’S NOTE:

In Part III of this series, we will examine the AI arms race between auditors and healthcare providers.


[1] Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992), https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=3447&context=nclr

[2] Harry Surden, Machine Learning and Law, 89 Wash. L. Rev. 87 (2014) https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=1088&context=faculty-articles

[3] Huang, Shi-Ming, and Cheng-Han Tsai. “A Smart Audit Teaching Case Using CAATs for Medicare.” International Journal of Computer Auditing 3, no. 1 (2021): 4-26. https://www.researchgate.net/profile/Shi-Huang-5/publication/358152470_A_Smart_Audit_Teaching_Case_Using_CAATs_for_Medicare/links/62c8fe2900d0b4511042f1ec/A-Smart-Audit-Teaching-Case-Using-CAATs-for-Medicare.pdf

[4] Johnson, Justin M., and Taghi M. Khoshgoftaar. “Medicare fraud detection using neural networks.” Journal of Big Data 6, no. 1 (2019): 63. https://link.springer.com/article/10.1186/s40537-019-0225-0  Concluding that “ROS [Random Over Sampling] and ROS–RUS [Random Under Sampling] perform significantly better than baseline and algorithm-level methods with average AUC [Area Under the Curve] scores of 0.8505 and 0.8509, while ROS–RUS maximizes efficiency with a 4× speedup in training time.”

[5] See, for example, Feng, Yunyi, Simon Lin, En-Ju Lin, Lesley Farley, Yungui Huang, and Chang Liu. “Identifying Candidates for Medical Coding Audits: Demonstration of a Data Driven Approach to Improve Medicare Severity Diagnosis-Related Group Coding Compliance.” In Health Information Science: 8th International Conference, HIS 2019, Xi’an, China, October 18–20, 2019, Proceedings 8, pp. 47-57. Springer International Publishing, 2019.

[6] Medicare audits are governed by a type of “made up” law called “administrative law.” This law is created by the agency that is in charge of enforcing it, which also provides the pseudo-courts that adjudicate cases. It is not passed by Congress or by any state. The standards governing enforcement of administrative law are not as rigorous or as well tested as those for actual legislation.

[7] Liu, Xiaoxuan, Ben Glocker, Melissa M. McCradden, Marzyeh Ghassemi, Alastair K. Denniston, and Lauren Oakden-Rayner. “The medical algorithmic audit.” The Lancet Digital Health 4, no. 5 (2022): e384-e397. https://www.sciencedirect.com/science/article/pii/S2589750022000036

[8] Akula, Ramya, and Ivan Garibay. “Audit and assurance of AI algorithms: a framework to ensure ethical algorithmic practices in artificial intelligence.” arXiv preprint arXiv:2107.14046 (2021).  https://arxiv.org/pdf/2107.14046.pdf

[9] Olivia J. Erdelyi & Judy Goldsmith, Regulating Artificial Intelligence: Proposal for a Global Solution, AIES ’18 (Feb. 2–3, 2018), https://dl.acm.org/doi/pdf/10.1145/3278721.3278731

[10] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353, 393 (2016), http://euro.ecom.cmu.edu/program/law/08-732/AI/Scherer.pdf

[11] Choudhury, Avishek. “A framework for safeguarding artificial intelligence systems within healthcare.” British Journal of Healthcare Management 25, no. 8 (2019): 1–6, at p. 4.  https://d1wqtxts1xzle7.cloudfront.net/61063104/choudhury_A_2019-libre.pdf

[12] Panigutti, Cecilia, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi. “FairLens: Auditing black-box clinical decision support systems.” Information Processing & Management 58, no. 5 (2021): 102657. https://www.sciencedirect.com/science/article/pii/S030645732100145X  Referencing Barocas S., Hardt M., Narayanan A., Fairness in machine learning, Nips Tutorial, 1 (2017), p. 2, Pedreschi, D., Ruggieri, S., & Turini, F. (2008). Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 560–568)., Pierson E., Cutler D.M., Leskovec J., Mullainathan S., Obermeyer Z., An algorithmic approach to reducing unexplained pain disparities in underserved populations, Nature Medicine, 27 (1) (2021), pp. 136-140, Hillson S.D., Connelly D.P., Liu Y., The effects of computer-assisted electrocardiographic interpretation on physicians’ diagnostic decisions, Medical Decision Making, 15 (2) (1995), pp. 107-112, and Lindow T., Kron J., Thulesius H., Ljungström E., Pahlm O., Erroneous computer-based interpretations of atrial fibrillation and atrial flutter in a Swedish primary health care setting, Scandinavian Journal of Primary Health Care, 37 (4) (2019), pp. 426-433.

[13] Koreff, Jared, Martin Weisner, and Steve G. Sutton. “Data analytics (AB) use in healthcare fraud audits.” International Journal of Accounting Information Systems 42 (2021): 100523. https://digitalcommons.trinity.edu/cgi/viewcontent.cgi?article=1145&context=busadmin_faculty  (Note: This author highly recommends this paper to readers of RACmonitor.)

[14] Ibid.

[15] Just saying.

[16] Sixth Amendment: In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.


Edward M. Roche, PhD, JD

Edward Roche is the director of scientific intelligence for Barraclough NY, LLC. Mr. Roche is also a member of the California Bar. Prior to his career in health law, he served as the chief research officer of the Gartner Group, a leading ICT advisory firm. He was chief scientist of the Concours Group, both leading IT consulting and research organizations. Mr. Roche is a member of the RACmonitor editorial board as an investigative reporter and is a popular panelist on Monitor Mondays.
