AI is speeding up insurance claims, but at what cost?


Full story

  • Artificial intelligence is increasingly used in health insurance to streamline claims and cut costs, but as the technology reshapes the industry, concerns about ethics, oversight and patient outcomes are intensifying.
  • Some states, like California, have passed legislation to restrict AI-driven decisions, ensuring human oversight remains central to healthcare.
  • While AI promises efficiency, risks like bias, lack of transparency, and potential errors raise significant questions about its role in medical decision-making.
Artificial intelligence is transforming industries worldwide. Now, it’s making waves in health insurance, streamlining claims and customizing coverage.

But as AI becomes more involved in healthcare decisions, questions about oversight, ethics and patient outcomes are growing louder.

Stories like Dr. Elisabeth Potter’s show what’s at stake. The Austin-based plastic surgeon went viral in January after posting a TikTok video saying she received a call from UnitedHealthcare during a breast reconstruction surgery.

“I got a call into the operating room that UnitedHealthcare wanted me to call them about one of the patients that was having surgery today who’s actually asleep having surgery,” Potter said in the video.

Incidents like this highlight growing tensions between healthcare providers and insurers — and why companies are turning to AI for help.

The financial promise of AI in insurance

According to Newsweek, consulting firm McKinsey & Company estimates AI could help health insurers save between $150 million and $300 million in administrative costs — and up to $970 million in medical costs — for every $10 billion in revenue.

University of Pennsylvania professor Hamsa Bastani explained how the process works.

“When a claim comes in, an algorithm can review details like medical codes, patient history, and patterns of past claims to see whether the claim is valid, consistent with policy coverage,” Bastani told Newsweek.

If a claim appears routine, an automated payout may follow. If not, it’s flagged for a human reviewer.
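Purely for illustration, the triage flow Bastani describes could be sketched as a toy rule-based filter. Every detail below (the field names, the "routine" procedure-code list, the dollar threshold) is invented for this example; real insurer systems use statistical models trained on large volumes of claims data, not hand-written rules like these.

```python
# Hypothetical claim-triage sketch: route routine-looking claims to
# automatic payout and everything else to a human reviewer.

ROUTINE_CODES = {"99213", "99214"}  # invented "common office visit" codes

def triage_claim(claim: dict) -> str:
    """Return 'auto_pay' for routine-looking claims, else 'human_review'."""
    code_is_common = claim.get("procedure_code") in ROUTINE_CODES
    amount_is_small = claim.get("amount", 0) <= 500   # arbitrary threshold
    history_is_clean = claim.get("prior_denials", 0) == 0
    if code_is_common and amount_is_small and history_is_clean:
        return "auto_pay"
    return "human_review"

# A small, common claim with no denial history sails through...
print(triage_claim({"procedure_code": "99213", "amount": 120}))      # auto_pay
# ...while an unusual, expensive one is flagged for a person.
print(triage_claim({"procedure_code": "27447", "amount": 41000}))    # human_review
```

The sketch also hints at the fairness problem discussed later in the article: whatever data or rules define "routine" will silently decide whose claims get extra scrutiny.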

AI regulation varies by state

Because health insurance is regulated at the state level, there is no national policy standard for AI. That’s why some states—including California—are passing their own legislation.

In 2024, Governor Gavin Newsom signed Senate Bill 1120, which prohibits insurance companies from using AI alone to deny claims. At least 10 other states are considering similar legislation, according to NBC News.

The Mercury News reports that 26% of insurance claims in California were denied last year. And in 2023, the American Medical Association found that insurer Cigna denied more than 300,000 claims using an AI-assisted review system.

California State Senator Josh Becker, who authored the bill, explained why human judgment still matters.

“An algorithm cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences,” Becker said.

“SB 1120 ensures that human oversight remains at the heart of healthcare decisions, safeguarding Californians’ access to the quality care they deserve.”

The risks behind the tech

AI in health insurance raises concerns about fairness alongside its promise of efficiency. These systems learn from historical data, which can encode bias based on race, gender, or income.

Experts also point to the “black box” problem—when algorithms make decisions without clear explanations. This can make it nearly impossible for patients to understand why their claim was denied.

Another expert told Newsweek that insurance claim evaluators need a confident understanding of the technology to ensure patients aren’t put at risk.

Even the most advanced AI can miss critical context, and when that happens, patients pay the price.

Dr. Potter’s case escalates

In Dr. Potter’s case, even the human process has its pitfalls. After she posted the original TikTok video, she says UnitedHealthcare followed up with a legal letter — and later denied her cancer patient’s hospital stay.

Potter has continued to share updates with her followers, adding fuel to an already heated conversation.

With new laws taking shape across the U.S., one thing is clear: lawmakers are trying to ensure that AI helps the healthcare system without hurting the people it serves.

More recently, Arizona introduced a bill that would prohibit AI from being the sole factor in decisions to deny, delay or modify healthcare services.

As the use of AI in health insurance grows, the debate over how — and whether — it should replace human decision-making is just getting started.

