Why Revenue Cycle is the Perfect Proving Ground for AI in Healthcare

AI’s potential to transform the practice of medicine is enormous and widely touted. However, the risks that come with the technology raise critical questions about when and how to use it. Because there are no universal guidelines for its use, debate continues over how far AI can ethically be deployed in healthcare settings.1

Clinical decision support tools have existed for decades, and expert systems and rules engines predate EHR systems.2 When digitized medical knowledge intersected with electronic patient records, systems built to improve patient outcomes moved to the forefront. The definition of “state of the art” has changed dramatically from the 1980s to today. AI is the latest chapter in medicine’s quest to use data and machines to improve patient care.

Despite phenomenal advancements, there are significant concerns about the quality of data and the reliability of systems that diagnose conditions and predict outcomes. These and other important issues have prevented AI from reaching its full potential and broad deployment in the clinical setting.

In this second article in a series on AI, we’ll argue that the risks that keep AI out of the clinic don’t exist in operational areas of the health system. For this reason, putting AI to work in the revenue cycle will prove the impact it can have on the business.

One Janus customer has gained 12,000 hours of new capacity annually, reduced claim statusing by 40%, and worked 12% more accounts.

A study published in early 2023, “Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector,”1 summarizes what is holding AI back from wide deployment. The study outlines four primary issues:

1. Data. Where will it come from?

AI needs a lot of data to generate refined and accurate algorithms, which are central to the success or failure of any application. Data also needs to be clean. For example, an algorithm to diagnose a condition or predict the outcome of a treatment requires massive amounts of high-quality patient profile data, health data, insurance data, and other relevant data from providers, systems, and other institutions. A steady supply of clean data is needed to continue improving the algorithm.

Having large amounts of data is only one side of the equation; the other side is the infrastructure to process that data and build the models (such as LLMs) needed to deploy AI tools. Building custom, special-purpose models is costly, and using public services like ChatGPT in healthcare settings introduces problems with privacy and confidentiality.

Healthcare has a compound problem with data accessibility and cleanliness. Because patient records are confidential, institutions are naturally reluctant to exchange health data, and internal corporate resistance can make sharing harder still.1 Furthermore, keeping data from multiple sources and systems clean and consistent is a constant struggle.

And so, AI for medicine faces a data supply and processing challenge. The data behind operational transactions and predictive analytics in the revenue cycle, on the other hand, is neutral and abundant. It can be mined safely within an individual health system and combined with other generic sources without compromising policy or governance.

2. Ethics. Whose fault is it if something goes wrong?

Part of the mystery of AI and machine learning (ML) is how the machine learns. When a model is asked to spot patterns or draw conclusions from data, the process it uses is opaque. Researchers worry that figuring out how an algorithm reaches a particular decision will be difficult. Moreover, you can ask an AI the same question twice and get two different answers, as the sketch below illustrates. That may be acceptable in some settings, but it is not for a clinical diagnosis.
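To make the two-answers point concrete, here is a minimal toy sketch in Python. The labels, weights, and function are invented for illustration only; no real diagnostic model works from a three-item table. The point is simply that generation which samples from a probability distribution, the way LLMs sample tokens at a nonzero temperature, can return different answers to an identical question.

```python
import random

# Toy stand-in for a generative model: rather than always returning the
# single most likely answer, it samples from a probability distribution.
# (Hypothetical labels and weights, for illustration only.)
ANSWER_DISTRIBUTION = {
    "Condition A": 0.6,
    "Condition B": 0.3,
    "Inconclusive": 0.1,
}

def sampled_answer(question: str) -> str:
    """Draw one answer at random from the model's output distribution.

    This toy ignores the question's content; it only demonstrates
    that repeated calls with the same input can disagree.
    """
    labels = list(ANSWER_DISTRIBUTION)
    weights = list(ANSWER_DISTRIBUTION.values())
    return random.choices(labels, weights=weights, k=1)[0]

question = "What does this patient's chart suggest?"
print(sampled_answer(question))  # e.g. "Condition A"
print(sampled_answer(question))  # may differ on the second ask
```

The same question can yield “Condition A” on one run and “Inconclusive” on the next. That variability may be tolerable when, say, ranking a work queue, but not when rendering a clinical diagnosis.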

So, who’s responsible if ML generates a diagnosis that turns out to be false? This scenario illustrates the “black box” problem: you can’t assign responsibility if you don’t know the root cause.

For this reason, some have suggested that the black-box problem is less of a concern for algorithms used in lower-stakes, non-medical applications that prioritize efficiency or operational improvement.

3. Social. How do you get workers to embrace it?

As talk of AI has increased, so has dread rooted in a perceived existential threat to livelihoods. The thought of losing a job can be terrifying, as can the fear that AI will render skills developed over years of experience obsolete. These genuine feelings can foster resistance to the technology and resentment toward those promoting it.

This issue isn’t new with AI; it’s a byproduct of progress. Social, economic, and technological progress has affected every facet of work, and the current pace of change can be unsettling.

What has endured through every age of progress? People. They drive change and benefit from it, and the same is true with AI. People are required to mine data, guide training, and evaluate the results. Technology is an opportunity, not a threat.

Revenue cycle workers can learn, grow, and train to become the facilitators and experts who make AI work for them. They possess something a machine never will: humanity.

4. Clinical Implementation. How do you measure effectiveness?

Let’s say an algorithm identifies a new care pathway for a well-known condition. How do you test the safety and efficacy of the new treatment? How do you conduct a peer review of an algorithm? The medical community needs standards and practices that subject AI output to the rigors of medical-grade testing and validation. Patient safety must be paramount.

Algorithms that address productivity and outcomes in revenue cycle operations are insulated from the risks associated with medical care. Deploying AI for operational efficiency has tremendous potential to improve the financial bottom line with far less risk of an adverse outcome.

“Healthcare providers, patients, and health insurers may all benefit from the efficiencies and improved treatment outcomes AI tools can provide. Still, there are risks [they] should consider when implementing these innovative tools in healthcare.”3 We must find a way to embrace the promise and address the problems. Doing nothing because AI is imperfect risks perpetuating a problematic status quo.4

AI will transform medicine, but it will take time. While clinics wrestle with the challenges, health systems can see AI in action and measure its impact on revenue without risking patient safety or data.

Revenue cycle management (RCM) can serve as a testing ground for AI and help hospital leaders learn what it takes to implement it successfully. RCM is ideal because successful implementations are tied more directly to cost or revenue, and its processes do not require convincing clinicians to change workflows.5

Revenue cycle automations are ready today to overcome labor shortages, boost productivity, and improve financial outcomes.

1 Khan, Bangul, et al. “Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector.” Biomedical Materials & Devices (New York, N.Y.), pp. 1–8, 8 Feb. 2023, doi:10.1007/s44174-023-00063-2. Full text

2 OHSU Wiki

3 “Understanding the Advantages and Risks of AI Usage in Healthcare,” Thomson Reuters, 2023

4 AMA J Ethics. 2020;22(11):E945–951. doi:10.1001/amajethics.2020.945. Full text

5 “Applying AI: Why Revenue Cycle Management?” Becker’s, December 2023
