Happy Clinical Documentation Integrity (CDI) Week! We’ve made it another year in a tumultuous healthcare environment. As in many other industries, one of the biggest forces reshaping ours has been artificial intelligence (AI). Luckily, I don’t think we have to worry about job loss, even though that was a real concern when Computer-Assisted Coding (CAC) was first introduced.
The impact of CAC and other AI technologies has been somewhat underwhelming in our industry, even though they are being quickly adopted by most healthcare systems. Although AI-driven technologies can prioritize records for review, highlight inconsistent documentation, and reduce how long it takes to assign codes to an inpatient health record, few would argue that they have lived up to the hype. The promise was that these technologies would bring greater efficiency, allowing more reviews with fewer staff. To be fair, it appears that AI-driven models based on deep learning have improved sensitivity (recall) and specificity (the true-negative rate) compared to their rules-based predecessors, but they have yet to become the promised panacea.
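To make those metrics concrete, here is a minimal Python sketch showing how sensitivity, specificity, and precision differ when an AI tool flags records for CDI review. The counts are made up for illustration, not data from any real system or study.

```python
# Illustrative confusion-matrix metrics for a hypothetical AI tool
# that flags inpatient records for CDI review. All counts are invented.
tp = 180  # flagged records that truly needed a query (true positives)
fp = 60   # flagged records that did not need one (false positives)
fn = 20   # records that needed a query but were missed (false negatives)
tn = 740  # records correctly left unflagged (true negatives)

sensitivity = tp / (tp + fn)  # recall: share of true opportunities the tool caught
specificity = tn / (tn + fp)  # true-negative rate: share of clean records left alone
precision   = tp / (tp + fp)  # positive predictive value: share of flags worth a review

print(f"Sensitivity (recall): {sensitivity:.2f}")  # 0.90
print(f"Specificity:          {specificity:.2f}")  # 0.93
print(f"Precision (PPV):      {precision:.2f}")    # 0.75
```

Note that in this hypothetical, a quarter of all flags are still false positives even with high sensitivity and specificity, which is exactly the workload that lands on the human reviewer.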
If anything, it seems that technology has made it easier to assign increasingly complex tasks to CDI and coding professionals. And that is the problem. The degree of difficulty associated with CDI and coding reviews keeps rising as the scope of reviews expands, yet CDI and coding job titles and pay do not seem to reflect this increased responsibility. I asked AI to summarize key findings from studies that examine the impact of AI on CDI and coding professionals. The conclusion? AI can increase productivity by accelerating repetitive tasks, but it cannot replace professional judgment; human oversight is crucial.
With advancements in AI technology like predictive modeling, CDI and coding professionals not only have to deal with false positives, but they may also have to identify AI hallucinations. If you aren’t familiar with the term, it refers to output that is fabricated, unverifiable, or absurd, yet presented confidently. Predictive modeling allows AI to generate results that are often devoid of context, because the prediction may be based on false or inaccurate data.
AI tools “neither know the factual validity of their output, nor are they constrained by the rules of logical reasoning in the output they produce,” allowing them to generate a plausible string of text without understanding what it means. As such, it is reasonable to conclude that CDI and coding professionals of today function as data analysts who must have excellent critical thinking skills.
If so, it is likely that many CDI and coding professionals are underpaid when you compare their higher-level work to roles in other industries that require expert human oversight combined with the ability to analyze information.
AI tools, particularly large language models, need enormous training data sets, but quality control is often lacking because much of the data is publicly sourced. Most healthcare data is protected, limiting its availability, and there are also subjective variations in hospital documentation and coding practices. There is no guarantee that an AI tool will be consistent with the nuances practiced at a particular healthcare facility, which can lead to higher rates of false positives. Technology may also incorrectly assume causality where none exists.
An analysis by OpenAI acknowledges that their new models are more prone to hallucinations because they have less world knowledge. If less knowledge is problematic, AI vendors in the healthcare space should be transparent by acknowledging that predictions may be less accurate when identifying a new code (i.e., one recently added to the code set).
When new codes are introduced, historical data can become obsolete, but AI tools may not be able to make that distinction within their reasoning. Flawed logic may result in flawed outcomes.
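Since a model cannot reason reliably about codes it has never seen, one practical guardrail is to check AI-suggested codes against the current code set and each code’s effective date before they reach a coder’s queue. The sketch below is illustrative only: the function name, the tiny code table, and the training-cutoff parameter are my own assumptions, not any vendor’s actual feature.

```python
from datetime import date

# Hypothetical slice of the current code set: code -> effective date.
# (In the U.S., the main ICD-10-CM update takes effect each October 1.)
CURRENT_CODE_SET = {
    "E11.9": date(2015, 10, 1),  # long-established code
    "U09.9": date(2021, 10, 1),  # more recently added code
}

def flag_suspect_codes(suggested_codes, model_trained_through):
    """Return suggestions the model likely never saw in training:
    codes missing from the current set, or codes that took effect
    after the model's training cutoff. Both warrant extra human review."""
    suspect = []
    for code in suggested_codes:
        effective = CURRENT_CODE_SET.get(code)
        if effective is None or effective > model_trained_through:
            suspect.append(code)
    return suspect

# Example: a model trained on data through October 2020.
# "X99.99" is a made-up code standing in for a hallucinated suggestion.
print(flag_suspect_codes(["E11.9", "U09.9", "X99.99"], date(2020, 10, 1)))
# -> ['U09.9', 'X99.99']
```

A check like this does not eliminate hallucinated or stale codes; it simply routes them to a human instead of letting flawed logic flow straight into the bill.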
AI hallucinations are problematic in nuanced areas like healthcare documentation and inpatient medical coding, especially since inaccuracies can result in denials, which cause direct revenue leakage through lower reimbursement and indirect leakage through the administrative costs of the appeals process. This is why CDI and coding professionals need strong critical thinking skills.
Overreliance on AI technologies can lead to “metacognitive laziness,” a term for when humans offload some of their higher-level thinking to an AI tool; the concept is discussed in the article “Are We Offloading Critical Thinking to Chatbots?” The article explains how using chatbots changes the “nature of the effort people invest in critical thinking. It shifts from information gathering to information verification, from problem-solving to incorporating the AI’s output, and it shifts other types of higher-level thinking to merely stewarding the AI, steering the chatbot with their prompts and assessing whether the response is sufficient for their work.”
Hospitals need to encourage continued critical thinking among their CDI and coding staff. Too narrow a focus on productivity can erode critical thinking, and a job devoid of critical thinking can also lower job satisfaction, creating a downward spiral.
Hospital leadership needs to value quality over quantity.
Remember, low quality can lead to revenue leakage through increased denials. CDI and coding professionals need to be at the top of their game, because they must be able to identify incorrect output generated by an “all-knowing expert.” It is hard not to second-guess yourself when you disagree with AI-generated output, but we are the professionals, and we need to trust our critical thinking skills.
Hospital leadership also needs to recognize the value of CDI and coding professionals. I hope you feel appreciated in your place of employment.