* First, some background from Forbes…
AI's primary role in healthcare has focused on back-office automation and infrastructure improvements. This includes streamlining intake forms, generating meeting notes, and providing comprehensive summarization services. Companies like DeepScribe have successfully implemented AI for transcription services, while emerging platforms like Thoughtful AI are revolutionizing back-office automation through enhanced payment claims processing and claims management using AI agents. Agents can take actions and perform work autonomously, much as a human would, such as by opening applications, copying and pasting, and so on.
Language models combined with agents will create artificial general intelligence (AGI), an AI that can do almost anything a human can. Many experts now project that AGI could arrive as soon as 2028, a significant acceleration from previous estimates, due to rapid advancements in the technology. AGI trained on appropriate medical datasets could surpass clinicians' abilities in diagnosis, treatment planning, and even the administration of treatment.
Many providers may not realize AI tools are being used to review their claims, and these systems are not built with provider interests in mind. While AI has the potential to streamline processes, its current use in the revenue cycle is resulting in more claim denials, payment delays, and a greater need for appeals, particularly because payers often use AI to retroactively review medical necessity determinations. To navigate this AI-driven landscape, hospitals need to develop expertise to combat the biases and errors inherent in these systems.
One of the biggest issues with AI in claims processing is the lack of transparency. Payers rarely disclose that AI is being used or explain how it operates, and providers are often unaware of the algorithms driving these AI systems. This leaves hospitals with little information to contest AI-generated denials.
Without insight into the logic behind these denials, hospitals are at a disadvantage, especially given the added administrative burden of contesting them. For example, AI audits frequently occur after hospitals have completed due diligence, received authorization, and have been paid for a claim. AI systems may retroactively re-evaluate the claim and decide that medical necessity wasn’t met. This can lead to payment reversals, requiring hospitals to use even more resources to contest claims that were initially approved. In short, AI-driven post-payment audits delay payments and erode trust between hospitals and payers, putting hospitals under financial strain. […]
In an effort to combat payer AI denials, some hospitals have implemented their own AI tools to handle claims. While this might seem like a good solution, it can backfire. Payers’ AI systems are increasingly sophisticated and can sometimes detect when they are countered by another AI system rather than a skilled human. This can trigger more denials, as payer systems may overlook or reject automated responses, perceiving them as less credible.
* Crain’s…
Illinois Rep. Bob Morgan, D-Deerfield, is introducing a new piece of legislation today aimed at regulating how health insurance companies leverage artificial intelligence to make coverage decisions.
The bill, with the working title Artificial Intelligence Systems Use in Health Insurance Act, would give the Illinois Department of Insurance regulatory oversight of Illinois health insurance providers' use of AI to make or support adverse determinations that affect consumers, such as care claims denials. And it would effectively ban the sole use of machine-learning or generative AI to deny care or coverage. […]
The legislation would require any adverse decision be reviewed by a health care professional. It would also expand the type of information insurers would be required to share with the IDOI when using AI. […]
Health insurers, like other insurance companies, often use AI to expedite claims approval and denial processes as they deal with high claims volume. But concern over how that's affecting consumers and their health is growing, especially as more patients and physicians report insurance plans denying coverage for care. Investigations by ProPublica and others have revealed the use of AI can sometimes contribute to care denials.
* HB5918…
Creates the Artificial Intelligence Systems Use in Health Insurance Act. Provides that the Department of Insurance's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers. Provides that any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. Provides that an insurer authorized to do business in Illinois shall not issue an adverse consumer outcome with regard to the denial, reduction, or termination of insurance plans or benefits that result solely from the use or application of any AI system or predictive model. Provides that any decision-making process for the denial, reduction, or termination of insurance plans or benefits that results from the use of AI systems or predictive models shall be meaningfully reviewed, in accordance with review procedures determined by Department rules, by an individual with authority to override the AI systems and determinations. Authorizes the Department to adopt emergency rules to implement the Act and to adopt rules concerning standards for full and fair disclosure of an insurer's use of AI systems. Makes a conforming change in the Illinois Administrative Procedure Act.
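The bill's "meaningful review" requirement amounts to a human-in-the-loop gate: an AI system or predictive model may recommend a denial, but the adverse outcome cannot issue unless a person with authority to override the system signs off. A minimal sketch of that control flow (all class and function names here are hypothetical illustrations, not drawn from the bill or any real claims system):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    claim_id: str
    action: str        # e.g. "approve" or "deny"
    rationale: str     # model's stated reason, logged for disclosure to regulators

@dataclass
class HumanReview:
    reviewer_id: str
    can_override: bool  # reviewer must have authority to override the AI system
    decision: str       # the reviewer's final decision after meaningful review

def final_determination(rec: AIRecommendation,
                        review: Optional[HumanReview]) -> str:
    """An adverse outcome may not result solely from the AI system:
    a denial recommendation requires review by a qualifying human."""
    if rec.action != "deny":
        return rec.action
    if review is None or not review.can_override:
        raise ValueError(
            "denial requires review by an individual with override authority")
    return review.decision  # the human's decision controls, not the model's
```

The key design point is that the denial path has no branch that returns the model's output directly; every adverse determination passes through, and is attributed to, an accountable human reviewer.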
Are the unknown processes of AI significantly worse than the seemingly arbitrary human decisions that also lack transparency?
Insurers and health care providers can dig into their deep profits and continue to pay humans to review claims and make these decisions. When you are making decisions that affect the lives of humans, especially with medical debt and cost being such a burden on average workers, these decisions should only be made by humans.
Instead it sounds like both sides are just creating programs to justify themselves rather than doing what's right for the patient.
I am not so sure that AI can be discounted if used properly. My anecdotal use of it for various purposes is pretty favorable. In my work as a luthier, I even asked it some very detailed and pointed questions that only an advanced luthier would even know. I of course knew the exact answer, and to my surprise it was incredible. Other than some minor details it was spot on. I really think what we need to do is have AI specialists and those in the field teach and explain how it works and its implications. Humans have to have the final say, but AI for some things is outright amazing.
Using AI to analyze insurance claims can help to speed up the process if used properly. However, the lack of transparency around this tool, and an insurance company's AI system writing off a hospital's AI system for being "less credible," demonstrates that this is about profits, not efficiency. Regulation of this new technology is necessary to protect patients. I would feel deeply uncomfortable if my private medical information were used to train an AI system without my consent.