AI has become a fixture in healthcare revenue cycle management (RCM) as finance leaders seek to provide a measure of relief for overburdened, understaffed departments facing unprecedented volumes of third-party audit demands and rising denial rates.
According to the newly released 2023 Benchmark Report, growing investments in data, AI, and technology platforms enabled compliance and revenue integrity departments to reduce team size by 33% while performing 10% more audit activity than in 2022. At a time of acute RCM staffing shortages, AI provides a critical productivity boost.
Healthcare organizations are now reporting four times more audit requests than in previous years – and audit demand letters are running more than 100 pages. This is where AI shines: its greatest strength is surfacing outliers – the needles in the haystack – across millions of data points. AI represents a significant competitive advantage for the RCM function, and healthcare finance leaders who dismiss it as hype will soon find their organizations left behind.
Where AI Can Fall Short
Truly autonomous AI in healthcare is a pipe dream. While AI has enabled the automation of many RCM tasks, the promise of fully autonomous systems remains unfulfilled. This is due in part to software vendors’ tendency to focus on technology without first taking the time to fully understand the targeted workflows and, importantly, the human touchpoints within them – a practice that leads to ineffective AI integration and poor end-user adoption.
Humans must always be in the loop to ensure that AI functions appropriately in a complex RCM environment. Accuracy and precision remain the toughest challenges for autonomous AI, and this is where involving humans in the loop enhances outcomes. While the stakes may not be as high in RCM as they are on the clinical side, the repercussions of poorly designed AI solutions are nonetheless significant.
Financial impacts are the most obvious for healthcare organizations. A poorly trained AI tool used to conduct prospective claims audits might miss instances of undercoding, which means missed revenue opportunities. One MDaudit customer discovered that a faulty rule within their so-called autonomous coding system was miscoding the drug units administered, resulting in $25 million in lost revenue. The error would never have been caught and corrected without a human in the loop uncovering the flaw.
AI can also fall short in the other direction, producing overcoding results and false positives – an area in which healthcare organizations must stay compliant, given the government’s mission of fighting fraud, waste, and abuse (FWA) in the healthcare system.
Poorly designed AI can also harm individual providers. Consider the implications if an AI tool is not properly trained on the concept of an “at-risk provider” in the revenue cycle sense. Physicians could find themselves unfairly targeted for additional scrutiny and training if they are swept up in searches for at-risk providers with high denial rates. That wastes time that should be spent seeing patients, slows cash flow by holding claims for prospective review, and can harm a physician’s reputation by slapping them with a “problematic” label.
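As a minimal, hypothetical sketch of how this mislabeling happens, the snippet below contrasts a flat denial-rate threshold with a comparison against specialty peers. All provider names, specialties, rates, and thresholds are illustrative assumptions, not any vendor’s actual logic.

```python
# Hypothetical sketch: why a flat denial-rate threshold can mislabel providers as "at-risk."
claims = [
    # (provider, specialty, was_denied)
    ("Dr. A", "oncology", True), ("Dr. A", "oncology", True),
    ("Dr. A", "oncology", False), ("Dr. A", "oncology", False),
    ("Dr. B", "oncology", True), ("Dr. B", "oncology", False),
    ("Dr. B", "oncology", False), ("Dr. B", "oncology", False),
    ("Dr. C", "primary_care", True), ("Dr. C", "primary_care", False),
    ("Dr. C", "primary_care", False), ("Dr. C", "primary_care", False),
]

def denial_rate(rows):
    return sum(1 for _, _, denied in rows if denied) / len(rows)

for provider, specialty in sorted({(p, s) for p, s, _ in claims}):
    own = [c for c in claims if c[0] == provider]
    peers = [c for c in claims if c[1] == specialty]
    rate, peer_rate = denial_rate(own), denial_rate(peers)
    naive_flag = rate > 0.30                # flat threshold, ignores case and payer mix
    adjusted_flag = rate > peer_rate * 1.5  # compares against specialty peers (illustrative)
    print(f"{provider}: {rate:.0%} vs. peer {peer_rate:.0%} -> "
          f"naive={naive_flag}, peer-adjusted={adjusted_flag}")
```

In this toy example, the oncologist looks “problematic” against the flat threshold but not against oncology peers, whose complex claims are denied more often to begin with.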
Keeping Humans in the Loop
Preventing these types of negative outcomes requires humans in the loop. In three areas in particular, AI will always require human involvement to achieve optimal outcomes.
1. Building a strong data foundation.
Building a robust data foundation is critical: an underlying data model with proper metadata, data quality, and governance is key to enabling AI to perform at peak efficiency. For this to happen, developers must take the time to get into the trenches with billing compliance, coding, and revenue cycle leaders and staff to fully understand their workflows and the data they need to perform their duties.
Effective anomaly detection requires not only billing, denials, and other claims data, but also an understanding of the complex interplay among providers, coders, billers, and payors. Only then can the technology continuously assess risk in real time and deliver the information users need to focus their actions in ways that drive measurable outcomes. Organizations that skip the data foundation and rush to deploy AI models on top of shiny tools will get hallucinations and false positives that create noise and hinder adoption.
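As one simplified illustration of the point, the sketch below scores incoming charges for a single CPT code against a baseline built from previously reviewed claims and routes statistical outliers to a human work queue rather than acting on them automatically. The CPT code, dollar amounts, and the simple z-score rule are assumptions for illustration only; a production system would rest on a governed data model with far richer context.

```python
from statistics import mean, stdev

# Baseline built from previously reviewed, clean claims for one CPT code (illustrative values).
reviewed_charges = [948.0, 1010.0, 975.0, 990.0, 1002.0, 965.0, 980.0, 1020.0]
mu, sigma = mean(reviewed_charges), stdev(reviewed_charges)

incoming = [
    {"claim_id": "C101", "cpt": "96413", "charge": 985.0},
    {"claim_id": "C102", "cpt": "96413", "charge": 39200.0},  # plausible drug-units error
]

review_queue = []
for claim in incoming:
    z = (claim["charge"] - mu) / sigma
    if abs(z) > 3.0:
        # Outliers go to a human auditor; the system does not auto-correct or auto-deny.
        review_queue.append({**claim, "reason": f"charge z-score {z:.1f} vs. CPT baseline"})

for item in review_queue:
    print(f"Route {item['claim_id']} to auditor review: {item['reason']}")
```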
2. Continuous training.
Healthcare RCM is a continuously evolving profession, and its professionals need ongoing education to stay current on the latest regulations, trends, and priorities. The same is true of AI-enabled RCM tools. Reinforcement learning allows AI to expand its knowledge base and increase its accuracy, and user input is critical to the refinements and updates that keep AI tools meeting current and future needs.
AI should be trainable in real time, allowing end users to immediately provide feedback on the results of information searches and analyses to support continuous learning. Users should also be able to mark output as unsafe when warranted – for example, output that attributes financial loss or compliance risk to specific entities or individuals without explaining why – so that errors are not amplified at scale.
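A bare-bones sketch of such a feedback loop appears below. The class and field names are hypothetical; the point is that confirmed findings feed the next training cycle while anything marked unsafe is held back rather than amplified.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str
    summary: str
    confirmed: int = 0      # users agreed with the AI's finding
    rejected: int = 0       # users marked it incorrect
    unsafe: bool = False    # users flagged it as unsafe to amplify

class FeedbackLoop:
    def __init__(self):
        self.findings: dict[str, Finding] = {}

    def record(self, finding: Finding):
        self.findings[finding.finding_id] = finding

    def feedback(self, finding_id: str, correct: bool, unsafe: bool = False):
        f = self.findings[finding_id]
        if correct:
            f.confirmed += 1
        else:
            f.rejected += 1
        if unsafe:
            f.unsafe = True  # e.g., names an individual as a risk with no rationale

    def training_examples(self):
        # Only confirmed, safe findings feed the next training cycle.
        return [f for f in self.findings.values()
                if not f.unsafe and f.confirmed > f.rejected]

loop = FeedbackLoop()
loop.record(Finding("F1", "Possible undercoding on evaluation and management claims"))
loop.feedback("F1", correct=True)
loop.record(Finding("F2", "Attributes compliance risk to a named provider, no rationale"))
loop.feedback("F2", correct=False, unsafe=True)
print([f.finding_id for f in loop.training_examples()])  # ['F1']
```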
3. Proper governance.
Humans must validate AI’s output to ensure it is safe. Even with autonomous coding, a coding professional must ensure AI has properly “learned” how to apply updated code sets or deal with new regulatory requirements. When humans are excluded from the governance loop, a healthcare organization leaves itself wide open to revenue leakage, negative audit outcomes, reputational loss, and much more.
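One way to keep that human check in place – sketched below with an assumed function name, sample rate, and accuracy floor – is to hold a sample of AI-coded claims for coder validation and pause autonomous release when agreement drops, for instance after a code-set update.

```python
import random

def governance_gate(ai_coded_claims, coder_agrees, sample_rate=0.10, accuracy_floor=0.95):
    """Hold a random sample of AI-coded claims for human validation before release.

    `coder_agrees` stands in for a coding professional's judgment on a sampled claim;
    the sample rate and accuracy floor are illustrative policy choices, not fixed rules.
    """
    sample = [c for c in ai_coded_claims if random.random() < sample_rate]
    if not sample:
        return "release", 1.0
    accuracy = sum(1 for c in sample if coder_agrees(c)) / len(sample)
    if accuracy < accuracy_floor:
        # e.g., the AI has not yet "learned" an updated code set or new requirement
        return "pause_autonomous_coding", accuracy
    return "release", accuracy
```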
There is no question that AI can transform healthcare, especially RCM. However, doing so requires healthcare organizations to augment their technology investments with human and workforce training to optimize accuracy, productivity, and business value.