When AI lets you down - Who pays the bill?
We are in the eye of the hurricane, yet in recent months our awareness of this has steadily faded. While the world was initially fascinated or shocked by the first release of ChatGPT, it soon seemed to forget about the steady presence and continuous development of various AI technologies around us. This isn't just a technological revolution; it's a societal one, and it brings with it many questions about our values and moral integrity as a community. A fundamental question we're already grappling with (and haven't yet found a satisfactory answer to) is: Who is responsible when AI fails? Recently, Daniel Payne wrote an article on this topic in Politico that inspired me greatly.
Who sets the rules for this debate?
The U.S. Congress is still debating how to provide some clarity on the crucial question: "Who is in charge when AI fails in healthcare decisions?" As long as there is no resolution, the situation remains unclear. According to American Medical Association (AMA) President Jesse Ehrenfeld, the issue so far has been "a little bit [like] building the plane as you’re flying it."
The challenge is to shape a legal framework that ensures AI can enhance quality and clinical outcomes for patients while also regulating compensation in cases of malpractice.
Of course, physicians continue to treat patients every day and cannot wait for the political system to conclude its debate. They should probably adhere to the AMA's recommendation to treat AI as a smart assistant that provides advice but should not be fully relied upon: see AI as "augmented intelligence," not as a licensed medical tool that replaces your own thinking and experience.
Until the political system has established clear rules, physicians, AI vendors, and patients remain in an uncertain situation.
What is the role of the AI vendors?
AI vendors are definitely on the list of stakeholders accountable for AI-caused malpractice.
There are proposals to create a so-called "safe harbor" that could apply to AI tools and doctors if they join a surveillance program to track patient outcomes. This could significantly reduce the fears of:
- MDs about using AI tools.
- AI vendors about entering markets like the US or the EU.
However, the downside of such a "safe harbor" could be that it gives too much leeway to MDs and AI companies, thereby passing the risk entirely onto the patients.
Higher risk for unregulated tools
Providing an unregulated tool carries even greater risks for AI companies, because such tools are more vulnerable to lawsuits: they are not protected by the "preemption doctrine." This doctrine can prevent certain claims against regulated devices, provided the FDA has declared them safe and effective.
The difficult role of physicians: caught between different interests
Unfortunately, there is no "preemption doctrine" for doctors. The situation for physicians is actually very uncomfortable: in the past, courts have heard many cases against them for following treatment or drug advice from software that turned out to be erroneous.
How do AI companies respond to this situation? They compare their devices to the GPS in your car: you make the final decision and should never fully rely on its instructions.
That is easy to say, but doctors are in a very difficult position. Their risk of malpractice has more than one dimension:
- Malpractice because you followed erroneous advice from an AI tool.
- Malpractice because you did not follow AI advice that, in retrospect, would have improved the treatment and clinical outcome.
And that is not the end of it. What if patients strongly demand that you follow the advice of an AI tool while you disagree? Can you refuse, and how else might you deal with it?
My personal opinion
We are in an unprecedented situation. The rise of AI is changing everything, reshaping healthcare services not only for patients but for all stakeholders, including medical doctors, insurers, and lawmakers. We have never faced such rapid change in the history of mankind, and therefore we need to accept that it is "a little bit [like] building the plane as you're flying it." The process must be agile and iterative in nature. If we attempt detailed planning, the chances that we will fail are high. Why? Because, at least to me, it seems impossible to forecast the capabilities and influence of AI in the future, and by future I mean as soon as six months or a year from now. We are in the midst of a technological and societal revolution, and in it we must learn to live with uncertainty.
Source
https://www.politico.com/news/2024/03/24/who-pays-when-your-doctors-ai-goes-rogue-00148447