The New Front Line of Revenue Integrity: When AI Crosses into Clinical Practice

The boundaries of clinical judgment have always been clearly defined. They belong to the provider, supported by documentation, clarified through compliant queries, and governed by regulatory standards that ensure the integrity of the medical record. What has remained constant is accountability.

That accountability is now being challenged.

On May 1, the Commonwealth of Pennsylvania State Board of Medicine filed a petition in the Commonwealth Court seeking to enjoin Character Technologies, Inc. – the company behind Character.AI – from engaging in the unauthorized practice of medicine.¹

This case, recently highlighted by Eric Fish, Partner at Hooper, Lundy & Bookman, brings early visibility to what may become a defining regulatory moment for artificial intelligence (AI) in healthcare.

The Association of Clinical Documentation Integrity Specialists/American Health Information Management Association (ACDIS/AHIMA) draft Guidelines for Achieving a Compliant Query Practice 2026 Update reaffirm that clinical judgment belongs to the provider. This case raises the question of what happens when something else begins to act like the provider.

At first glance, this reads like a case about a chatbot, but it is not. This is a case about where the line is drawn between technology and clinical practice, and what happens when that line is crossed.

The platform at the center of this action, Character.AI, allows users to create and interact with AI-generated personas, including those designed to simulate healthcare professionals.¹ In the complaint, a state investigator engaged a chatbot presenting itself as a psychiatrist. During that interaction, the AI discussed symptoms consistent with depression, offered to perform an assessment, suggested that medication could be appropriate, and critically, represented itself as a licensed physician, including in Pennsylvania, even providing a false license number.¹

That interaction is the foundation of this case, and it is enough.

The Pennsylvania Medical Practice Act does not require harm to occur for a violation to exist. It does not require a prescription to be written or a procedure to be performed. It requires only that an individual or entity practice or purport to practice medicine without a valid license.¹ That distinction is not subtle, but foundational: the act of representing clinical authority is, in itself, regulated.

This is where the connection to query compliance becomes unavoidable.

The chatbot did not simply provide general information. It interpreted symptoms, advanced a diagnostic direction, and suggested treatment. In effect, it moved beyond clarification and into clinical decision-making. It behaved as if it were the clinician.

That distinction matters. A compliant query presents clinically supported options and preserves independent provider judgment. What occurred in this case bypasses that structure entirely. It introduces clinical reasoning without defined authorship, validation, or accountability. This is not a deviation from compliant query practice, but a wholesale avoidance of it.

For healthcare organizations, this marks an inflection point. For years, AI has been framed as a tool: something that supports clinical workflows, enhances documentation, or improves efficiency. This case reframes that narrative. When an AI system begins to interpret clinical information, engage in decision-making dialogue, and communicate with the authority of a licensed provider, it is no longer functioning as a tool, but as a clinical actor. And clinical actors are subject to regulation.

It would be a mistake to view this as an isolated enforcement action against a consumer-facing platform. The implications extend directly into healthcare operations. We are already seeing widespread adoption of AI scribes that generate clinical documentation, decision-support tools that influence treatment pathways, and risk-adjustment models that shape how conditions are captured and represented. The question is no longer whether AI is present in the workflow, but whether the output of the AI functions as clinical judgment.

That distinction matters because the regulatory standard is shifting from intent to function. If the output behaves like clinical decision-making, it will be evaluated as clinical decision-making.

This is also where federal enforcement posture becomes relevant. The U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG) has made it clear through its recent compliance guidance and enforcement priorities that organizations are expected to maintain effective oversight of evolving technologies, particularly where those technologies influence billing, documentation, or clinical decision-making.² AI does not sit outside of existing compliance frameworks; it amplifies them. If AI-generated content contributes to a claim, supports a diagnosis, or influences medical necessity, it becomes subject to the same scrutiny as any other element of the record.

This introduces a new layer of risk into the medical record – one that centers on authorship and provenance. The medical record has always served as the legal record of care. It is the foundation for reimbursement, quality measurement, and regulatory review. Increasingly, it is also the data source for automated payer review, structured data extraction, and interoperability frameworks. When AI-generated content enters that record, the question becomes not just what was documented, but who or what generated it, and whether that content can withstand scrutiny.

If a statement in the record reads as a clinical conclusion, it will be treated as one. If that conclusion cannot be attributed to a licensed provider exercising clinical judgment, the integrity of the record is compromised from an enforcement perspective. That compromise creates exposure not only for documentation accuracy, but also for false claims liability if unsupported or non-validated AI-generated content contributes to reimbursement.

Another critical element of this case is where accountability lies. The action is not directed at an individual user who created the chatbot. It is directed at the platform itself. That signals a clear regulatory posture: organizations may be held responsible for how AI systems are deployed, how they behave, and what they are allowed to represent. AI outputs are not viewed as isolated artifacts, but extensions of the entity that enables them.

For healthcare systems, this aligns with existing expectations regarding documentation integrity, coding accuracy, and medical necessity. The difference is that AI introduces scale and variability at a pace that traditional governance structures were not designed to manage. Without deliberate oversight, the risk is not incremental, but exponential.

This is where governance maturity becomes the differentiator. Organizations that approach AI as a technology implementation will remain reactive. Organizations that approach AI as a clinical and compliance risk domain will build the controls necessary to withstand scrutiny. That includes formal governance structures, defined accountability for AI-generated content, validation processes for clinical language, and clear policies prohibiting AI from representing itself as a licensed provider or an independent clinical authority.

Perhaps the most important takeaway from this case is how low the exposure threshold is. The trigger is not treatment, but perception. If an AI system presents itself with clinical authority, uses protected titles, or implies licensure, it may meet the definition of “holding out” as a provider. That standard has direct implications for any AI-enabled interface that interacts with patients, supports documentation, or generates clinical language.

Healthcare organizations must respond accordingly. This requires more than awareness; it also demands governance. There must be clear boundaries governing what AI can and cannot do within clinical workflows. Clinical statements must be attributable to licensed providers. Documentation generated or supported by AI must be validated, not assumed. And critically, clinical documentation integrity (CDI), compliance, and physician advisory leadership must have a defined role in evaluating and overseeing AI deployment. This is where CDI becomes central, not peripheral, in AI governance.

This is not simply a technological decision, but also a clinical, regulatory, and legal one.

The broader trajectory is clear. As regulators, payers, and enforcement bodies continue to evaluate the role of AI in healthcare, the focus will not be on the technology’s sophistication. It will be on the impact of its output. Does it influence care? Does it shape decision-making? Does it present itself as a clinical authority?

If the answer is yes, it will be regulated accordingly.

We are entering a phase where the distinction between technology and clinician is no longer defined by design, but by behavior. That shift places the medical record at the center of the conversation once again, not just as a reflection of care, but as evidence of how care decisions are made.

The expectation remains unchanged. The record must be accurate, attributable, and defensible. What has changed is the environment in which that record is created.

If it looks like clinical judgment, communicates like clinical judgment, and influences care like clinical judgment, it will be regulated as clinical judgment.


References

  1. Commonwealth of Pennsylvania, Department of State, State Board of Medicine v. Character Technologies, Inc. Petition for Review in the Nature of a Complaint in Equity. Filed May 1, 2026.
  2. U.S. Department of Health and Human Services, Office of Inspector General. General Compliance Program Guidance and industry segment guidance updates addressing technology, oversight, and program integrity expectations. https://oig.hhs.gov/compliance
  3. Fish E. LinkedIn post regarding Pennsylvania State Board of Medicine action against Character.AI. May 2026.
  4. ACDIS/AHIMA. Guidelines for Achieving a Compliant Query Practice – 2026 Update (Draft). 2026. Accessed May 2026. https://acdis.org/resources/acdisahima-guidelines-achieving-compliant-query-practice-2026-update

Penny Jefferson, MSN, RN, CCDS, CCDS-O, CCS, CDIP, CRC, CHDA, CRCR, CPHQ, ACPA-C

With more than 33 years in healthcare, Penny began her career as a U.S. Army medic and has held roles spanning CNA through MSN. She brings 14 years of critical care nursing experience and 14 years in Clinical Documentation Integrity. She joined Mayo Clinic in 2019 as a concurrent CDI reviewer and advanced to Supervisor of CDI in Rochester, Minnesota. In December 2022, she transitioned to the University of California Davis Medical Center, where she serves as the Director of CDI. She is a published author, national thought leader, and currently leads the ACPA CommUnity Denials & Appeals Interest Group, fostering collaboration on denial prevention, appeals strategy, and payer engagement. She is also the newly appointed co-host of Talk Ten Tuesday.
