Defenses Against AI-Based Medicare Audits: Part II


Legal scholars, practitioners, and other observers have grappled with the legal dimensions of artificial intelligence (AI).

As early as 1992, Solum discussed the case for considering AI to be a legal person.[1] As late as 2014, writers were still arguing that AI lacked the "higher-order cognition" required by the legal profession.[2] But that time has passed. Today, particularly for routine matters such as those found in a Medicare audit, AI can perform as well as the average attorney. After all, much of what attorneys do is already built on boilerplate, standard documents, and standard procedures. There is no reason to think that AI cannot follow precisely the same routes. The reality is that no one will know the difference.

Artificial Intelligence and Auditing

Computer-assisted auditing based on machine learning is being applied by hospitals to handle sets of data that are too large for human auditors. For example, diagnosis-related group (DRG) data can be used to pre-audit the accuracy of health insurance declaration forms.[3]
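To make the idea concrete, here is a minimal sketch of the kind of statistical screen a pre-audit tool might run over DRG-grouped claims. This is illustrative only, not the method of the cited study; the function name and record fields (`claim_id`, `drg`, `amount`) are invented for the example.

```python
from statistics import mean, stdev

def flag_outlier_claims(claims, threshold=2.0):
    """Flag claims whose billed amount deviates from the mean for their
    DRG code by more than `threshold` standard deviations.
    A toy stand-in for a machine-learning pre-audit screen."""
    by_drg = {}
    for claim in claims:
        by_drg.setdefault(claim["drg"], []).append(claim["amount"])
    flagged = []
    for claim in claims:
        amounts = by_drg[claim["drg"]]
        if len(amounts) < 3:
            continue  # too few claims under this DRG to estimate a baseline
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma > 0 and abs(claim["amount"] - mu) > threshold * sigma:
            flagged.append(claim["claim_id"])
    return flagged
```

A real system would learn far subtler patterns than a simple deviation test, but the workflow is the same: every claim is scored against its peer group, and only the statistical outliers are routed to a human auditor.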

Medicare is another battlefield. There is said to be a significant amount of fraud in the system, but given its size and the gigantic ocean of claims data, in which fraudulent claims are a rare minority, it is difficult to build machine-learning models that detect fraud reliably.[4] Another issue is what happens after fraud is suspected: does the model's output simply become a tip-off for the auditors?[5] After all, merely referring a healthcare provider for intensive auditing is not reviewable under current administrative law.[6]
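The study cited in note 4 addresses the rarity of fraudulent claims with random over-sampling (ROS) of the fraud class combined with random under-sampling (RUS) of the legitimate class before training. A minimal sketch of that resampling idea, in pure Python and at toy scale (the function name and the fixed 50/50 target are choices made for this example, not details from the paper):

```python
import random

def ros_rus(X, y, seed=0):
    """Balance a fraud-detection training set: randomly undersample the
    majority (legitimate, label 0) class and randomly oversample the
    minority (fraudulent, label 1) class until the two are at parity."""
    rng = random.Random(seed)
    minority = [(x, label) for x, label in zip(X, y) if label == 1]
    majority = [(x, label) for x, label in zip(X, y) if label == 0]
    kept = rng.sample(majority, len(majority) // 2)             # RUS: drop half the majority
    boosted = [rng.choice(minority) for _ in range(len(kept))]  # ROS: duplicate minority records
    balanced = kept + boosted
    rng.shuffle(balanced)
    return [x for x, _ in balanced], [label for _, label in balanced]
```

The under-sampling step also explains the cited speedup: the model trains on a much smaller dataset, which the authors report cut training time roughly fourfold without hurting accuracy.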

Is AI Certified as Reliable?

It is widely recognized that AI auditing systems can be faulty. Some have proposed a medical algorithmic audit framework to help the auditor understand any algorithmic errors that may arise. This helps target the weaknesses of an AI system, and can also set up mechanisms to mitigate their impact on the patient and on the organization.[7]

Many have argued that systems dependent on AI and machine learning need to be certified as lawful, ethical, secure, and reliable.[8]

Some have called for the launching of an international regulatory agency to handle the legal and ethical issues of AI.[9] Others have proposed the government set up an agency that will certify the safety of these systems. “AI programs would be subject to limited tort liability, while uncertified programs that are offered for commercial sale or use would be subject to strict joint and several liability.”[10]

At this time, there is no clear and universally accepted methodology to evaluate AI in the healthcare setting, or really in any setting. Some have suggested using “standards of clinical benefit including downstream results, such as overall survival” as a key measure.[11]

But at this time, there are no standards for this, and no authorities are empowered to perform this certification.

Artificial Intelligence Means Real Bias

It is recognized that AI systems can exhibit unintended bias in healthcare. For example, even though an AI might initially learn under the supervision of a human healthcare provider, it can inadvertently learn to discriminate, or to make other poor decisions that are systematically adverse to certain groups.

It also has been shown that clinicians are subject to automation bias: a tendency to over-rely on automated systems. In response, some have developed techniques for discovering bias in AI decision-making.[12]
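One simple first screen for such bias is to compare adverse-decision rates across patient groups. The sketch below is not the FairLens system cited above; it is a minimal, hypothetical illustration of the underlying idea, with the function name and data layout invented for this example.

```python
from collections import defaultdict

def adverse_rate_ratio(decisions, groups):
    """Compare adverse-decision rates across groups. Returns the ratio of
    the lowest group rate to the highest; values well below 1.0 suggest
    the system treats some group systematically worse."""
    tally = defaultdict(lambda: [0, 0])  # group -> [adverse count, total]
    for decision, group in zip(decisions, groups):
        tally[group][0] += decision      # decision: 1 = adverse, 0 = favorable
        tally[group][1] += 1
    rates = [adverse / total for adverse, total in tally.values()]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0
```

A ratio near 1.0 means the groups fare about the same; a markedly lower ratio is exactly the kind of signal that would prompt the deeper algorithmic audit the literature describes.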

Research also has shown that dependence on algorithms in auditing Medicare claims leads to crippling problems.

“[S]tatistical analysis tools utilized in fraud inspection processes prescribed by the U.S. Federal Government are deployed to justify abuse of power via auditor enforcement actions. These are used against healthcare providers in a fashion that stifles their financial viability and shifts power toward larger, albeit not necessarily more effective or efficient, actors.”[13]

It also was found that using these tools of data analytics “allow[s] government auditors to justify sanctions, promote the use of power, and abdicate responsibility for the consequences.”[14]

One would imagine that the typical Medicare auditor, often with considerably less education and training than the clinicians in these studies, would be even more vulnerable to these problems.

We might wonder if the amount that is claimed to have been saved by these abusive recoupments is worth the strain on the healthcare delivery system.[15]

It is still too early to see where all of this is going, but it is a reasonable prediction that the continued use of algorithms and AI will generate more problems and create more harm than good in the healthcare system. The reason is that the current system is built on an architecture of opposition between the payor and the provider. Until information systems are used as a means of harmonization, and as a means of anticipating and preventing fraud before it even occurs, this congenital weakness in our healthcare system will persist.

Summary of Legal Defenses Against AI Audits

This light review of some of the literature indicates that developing legal defenses against AI is as new as AI itself. As of this time, there is no clear defense, but we can see the emergence of several themes that eventually will be tested in the courts.

The Algorithm is Not Certified Reliable

One of the first problems with the use of algorithms, machine learning, and AI is that there is no certifying authority that guarantees the accuracy of the software. There are no standards in place for determining whether the results of these algorithms are valid. An additional problem is that it is extremely difficult for the courts to determine, even with the help of outside experts, whether an algorithm is reliable. Bottom line: the algorithm does not pass any test of scientific validity.

Algorithms Are Not Persons

Another emerging argument is that an audit done by AI is not done by an auditor. Software systems are not people. They do not have the experience of people, and there is no way to know, or to have the algorithm explain, how it came to a decision.

In addition, the present system of auditing in the United States, predicated on the use of subcontractors operating as agents for the government, is based on human auditors who have undergone various tests and certifications in order to get their jobs. There is no such certification for algorithms. The audit contractors are chartered to provide human auditors for this work. If humans are no longer doing the auditing, then the contractor's work is invalid, unless the contracts are renegotiated and the company charters changed.

AI Audits Are Not Reproducible in Court

Although the outcome of an audit by AI might seem compelling, it is impossible to understand the factors that were considered and the reasoning behind any decision. Unlike a case involving an actual human, for example, there can be no hearing under administrative law in which the so-called auditor testifies or offers further explanation of a decision.

AI Audits Violate the Sixth Amendment

This brings into question the entire basis of our legal system, in which the accused should be able to confront the accuser. If the accuser is an undecipherable algorithm, then how can anyone, including the trier of fact, truly understand what has happened and whether the accused is guilty? How can the Sixth Amendment be complied with?[16]

Humans Cannot Verify the Work of AI

Even if the work of the AI algorithm appears at first to be logical and reasonable, it is impossible to dig into the details and understand exactly how the conclusions were derived. There is no record of what factors were considered, what criteria were used, or even how decisions were made. In essence, it is a question of trust. When the stakes are so high, and the future of a healthcare provider may be on the chopping block, can a trier of fact depend upon an algorithm that no one can understand?

EDITOR’S NOTE:

In Part III of this series, we will examine the AI arms race between auditors and healthcare providers.


[1] Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N. Carolina L. R. 4 (1992) https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=3447&context=nclr

[2] Harry Surden, Machine Learning and Law, 89 Wash. L. Rev. 87 (2014) https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=1088&context=faculty-articles

[3] Huang, Shi-Ming, and Cheng-Han Tsai. “A Smart Audit Teaching Case Using CAATs for Medicare.” International Journal of Computer Auditing 3, no. 1 (2021): 4-26. https://www.researchgate.net/profile/Shi-Huang-5/publication/358152470_A_Smart_Audit_Teaching_Case_Using_CAATs_for_Medicare/links/62c8fe2900d0b4511042f1ec/A-Smart-Audit-Teaching-Case-Using-CAATs-for-Medicare.pdf

[4] Johnson, Justin M., and Taghi M. Khoshgoftaar. "Medicare fraud detection using neural networks." Journal of Big Data 6, no. 1 (2019): 63. https://link.springer.com/article/10.1186/s40537-019-0225-0. Concluding that "ROS [Random Over Sampling] and ROS–RUS [Random Under Sampling] perform significantly better than baseline and algorithm-level methods with average AUC [Area Under the Curve] scores of 0.8505 and 0.8509, while ROS–RUS maximizes efficiency with a 4× speedup in training time."

[5] See, for example, Feng, Yunyi, Simon Lin, En-Ju Lin, Lesley Farley, Yungui Huang, and Chang Liu. “Identifying Candidates for Medical Coding Audits: Demonstration of a Data Driven Approach to Improve Medicare Severity Diagnosis-Related Group Coding Compliance.” In Health Information Science: 8th International Conference, HIS 2019, Xi’an, China, October 18–20, 2019, Proceedings 8, pp. 47-57. Springer International Publishing, 2019.

[6] Medicare audits are governed by a type of "made up" law called "administrative law." This law is created by the agency charged with enforcing it, which also provides the pseudo-courts that adjudicate cases. It is not passed by Congress or by any state. The standards governing enforcement of administrative law are not as rigorous or as well tested as those for actual legislation.

[7] Liu, Xiaoxuan, Ben Glocker, Melissa M. McCradden, Marzyeh Ghassemi, Alastair K. Denniston, and Lauren Oakden-Rayner. “The medical algorithmic audit.” The Lancet Digital Health 4, no. 5 (2022): e384-e397. https://www.sciencedirect.com/science/article/pii/S2589750022000036

[8] Akula, Ramya, and Ivan Garibay. “Audit and assurance of AI algorithms: a framework to ensure ethical algorithmic practices in artificial intelligence.” arXiv preprint arXiv:2107.14046 (2021).  https://arxiv.org/pdf/2107.14046.pdf

[9] Olivia J. Erdelyi & Judy Goldsmith, Regulating Artificial Intelligence: Proposal for a Global Solution, AIES '18 (Feb. 2-3, 2018) https://dl.acm.org/doi/pdf/10.1145/3278721.3278731

[10] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harvard J. of L. & Tech. 2, 353, 393 (2016) http://euro.ecom.cmu.edu/program/law/08-732/AI/Scherer.pdf

[11] Choudhury, Avishek. "A framework for safeguarding artificial intelligence systems within healthcare." British Journal of Healthcare Management 25, no. 8 (2019): 1-6, at p. 4. https://d1wqtxts1xzle7.cloudfront.net/61063104/choudhury_A_2019-libre.pdf

[12] Panigutti, Cecilia, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi. “FairLens: Auditing black-box clinical decision support systems.” Information Processing & Management 58, no. 5 (2021): 102657. https://www.sciencedirect.com/science/article/pii/S030645732100145X  Referencing Barocas S., Hardt M., Narayanan A., Fairness in machine learning, Nips Tutorial, 1 (2017), p. 2, Pedreschi, D., Ruggieri, S., & Turini, F. (2008). Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 560–568)., Pierson E., Cutler D.M., Leskovec J., Mullainathan S., Obermeyer Z., An algorithmic approach to reducing unexplained pain disparities in underserved populations, Nature Medicine, 27 (1) (2021), pp. 136-140, Hillson S.D., Connelly D.P., Liu Y., The effects of computer-assisted electrocardiographic interpretation on physicians’ diagnostic decisions, Medical Decision Making, 15 (2) (1995), pp. 107-112, and Lindow T., Kron J., Thulesius H., Ljungström E., Pahlm O., Erroneous computer-based interpretations of atrial fibrillation and atrial flutter in a Swedish primary health care setting, Scandinavian Journal of Primary Health Care, 37 (4) (2019), pp. 426-433.

[13] Koreff, Jared, Martin Weisner, and Steve G. Sutton. “Data analytics (AB) use in healthcare fraud audits.” International Journal of Accounting Information Systems 42 (2021): 100523. https://digitalcommons.trinity.edu/cgi/viewcontent.cgi?article=1145&context=busadmin_faculty  (Note: This author highly recommends this paper to readers of RACmonitor.)

[14] Ibid.

[15] Just saying.

[16] Sixth Amendment: In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.

Edward M. Roche, PhD, JD

Edward Roche is the director of scientific intelligence for Barraclough NY, LLC. Mr. Roche is also a member of the California Bar. Prior to his career in health law, he served as the chief research officer of the Gartner Group, a leading ICT advisory firm. He was chief scientist of the Concours Group, both leading IT consulting and research organizations. Mr. Roche is a member of the RACmonitor editorial board as an investigative reporter and is a popular panelist on Monitor Mondays.
