Who Is Responsible If Healthcare AI Fails?

Who is responsible when AI mistakes in healthcare cause accidents, injuries or worse? Depending on the situation, it could be the AI developer, a healthcare professional or even the patient. Liability is an increasingly complex and serious concern as AI becomes more common in healthcare. Who is responsible for AI gone wrong, and how can accidents be prevented?

The Risk of AI Mistakes in Healthcare

There are many amazing benefits to AI in healthcare, from increased precision and accuracy to quicker recovery times. AI is helping doctors make diagnoses, conduct surgeries and provide the best possible care for their patients. Unfortunately, AI mistakes are always a possibility.

There is a wide range of AI-gone-wrong scenarios in healthcare. Doctors and patients can use AI purely as a software-based decision-making tool, or AI can act as the brain of physical devices like surgical robots. Both categories carry risks.

For example, what happens if an AI-powered surgical robot malfunctions during a procedure? This could cause a severe injury or potentially even kill the patient. Similarly, what if a diagnostic algorithm recommends the wrong medication for a patient and they suffer a negative side effect? Even if the medication doesn’t hurt the patient, a misdiagnosis could delay proper treatment.

At the root of AI mistakes like these is the nature of AI models themselves. Most AI models today use “black box” logic, meaning no one can see how the algorithm makes decisions. Black box AI lacks transparency, leading to risks like hidden bias, discrimination and inaccurate results. Unfortunately, these risk factors are difficult to detect until they have already caused issues.

AI Gone Wrong: Who’s to Blame?

What happens when an accident occurs in an AI-powered medical procedure? Some possibility of AI gone wrong will always exist. If someone gets hurt or worse, is the AI at fault? Not necessarily.

When the AI Developer Is at Fault

It’s important to remember AI is nothing more than a computer program. It’s a highly advanced computer program, but it’s still code, just like any other piece of software. Since AI is not sentient or independent like a human, it cannot be held liable for accidents. An AI can’t go to court or be sentenced to prison.

AI mistakes in healthcare would most likely be the responsibility of the AI developer or the medical professional monitoring the procedure. Which party is at fault for an accident could vary from case to case.

For example, the developer would likely be at fault if data bias caused an AI to give unfair, inaccurate or discriminatory decisions or treatment. The developer is responsible for ensuring the AI functions as promised and gives all patients the best treatment possible. If the AI malfunctions due to negligence, oversight or errors on the developer’s part, the doctor would not be liable.

When the Doctor or Physician Is at Fault

However, it’s still possible that the doctor or even the patient could be responsible for AI gone wrong. For example, the developer might do everything right, give the doctor thorough instructions and outline all the possible risks. When it comes time for the procedure, the doctor might be distracted, tired, forgetful or simply negligent.

Surveys show over 40% of physicians experience burnout on the job, which can lead to inattentiveness, slow reflexes and poor memory recall. If the physician does not address their own physical and psychological needs and their condition causes an accident, that is the physician’s fault.

Depending on the circumstances, the doctor’s employer could ultimately be blamed for AI mistakes in healthcare. For example, what if a manager at a hospital threatens to deny a doctor a promotion if they don’t agree to work overtime? This forces them to overwork themselves, leading to burnout. The doctor’s employer would likely be held responsible in a unique situation like this. 

When the Patient Is at Fault

What if both the AI developer and the doctor do everything right, though? When a patient independently uses an AI tool, an accident can be their fault. AI gone wrong isn’t always due to a technical error. It can also result from poor or improper use.

For instance, maybe a doctor thoroughly explains an AI tool to their patient, but the patient ignores safety instructions or inputs incorrect data. If this careless or improper use results in an accident, it is the patient’s fault. In that case, the patient was responsible for using the AI correctly or providing accurate data and neglected to do so.

Even when patients know their medical needs, they might not follow a doctor’s instructions for a variety of reasons. For example, 24% of Americans taking prescription drugs report having difficulty paying for their medications. A patient might skip medication or lie to an AI about taking one because they are embarrassed about being unable to pay for their prescription.

If the patient’s improper use stems from a lack of guidance from their doctor or the AI developer, the blame could lie elsewhere. It ultimately depends on where the root error occurred.

Regulations and Potential Solutions

Is there a way to prevent AI mistakes in healthcare? While no medical procedure is entirely risk-free, there are ways to minimize the likelihood of adverse outcomes.

Regulations on the use of AI in healthcare can protect patients from high-risk AI-powered tools and procedures. The FDA already has regulatory frameworks for AI medical devices, outlining testing and safety requirements and the review process. Leading medical oversight organizations may also step in to regulate the use of patient data with AI algorithms in the coming years.

In addition to strict, reasonable and thorough regulations, developers should take steps to prevent AI-gone-wrong scenarios. Explainable AI — also known as white box AI — may solve transparency and data bias concerns. Explainable AI models are emerging algorithms that allow developers and users to inspect the model’s logic.

When AI developers, doctors and patients can see how an AI reaches its conclusions, it is much easier to identify data bias. Doctors can also catch factual inaccuracies or missing information more quickly. By using explainable AI rather than black box AI, developers and healthcare providers can increase the trustworthiness and effectiveness of medical AI.
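
To make the idea concrete, here is a minimal sketch in Python using scikit-learn. The data is entirely synthetic and the feature names are hypothetical; it only illustrates how an inherently interpretable “white box” model can print its full decision logic for human review, not how a real clinical model is built.

# A minimal sketch (not a clinical tool): train a small, interpretable
# decision tree on synthetic data and print its decision rules, showing
# how a "white box" model exposes its logic for review.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=0)

# Hypothetical features for illustration only.
feature_names = ["age", "systolic_bp", "glucose"]
X = rng.normal(loc=[55, 130, 100], scale=[15, 20, 25], size=(500, 3))

# Synthetic label: "high risk" when blood pressure and glucose are both elevated.
y = ((X[:, 1] > 140) & (X[:, 2] > 120)).astype(int)

# A shallow tree keeps the learned rules small enough for a person to audit.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Unlike a black box model, the full decision logic can be printed and inspected.
print(export_text(model, feature_names=feature_names))

Because the learned rules can be read directly, a clinician or auditor can check whether the model relies on sensible clinical thresholds rather than spurious patterns in the training data.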

Safe and Effective Healthcare AI

Artificial intelligence can do amazing things in the medical field, potentially even saving lives. There will always be some uncertainty associated with AI, but developers and healthcare organizations can take action to minimize those risks. When AI mistakes in healthcare do occur, legal counsel will likely determine liability based on the root cause of the error.
