Michigan Law professor explores challenges of medicine, AI, and the need for a doctor ‘in the loop’

Michigan Law Professor Nicholson Price recently co-wrote a paper on the concept of ‘Humans in the Loop.’

By Bob Needham
Michigan Law


Medical applications of artificial intelligence usually require a clinician to be involved—but clinicians may be ill-equipped to fill oversight roles, Professor Nicholson Price argues in a new paper.

Clinicians are often unavailable, ineffective, or both, Price writes in the Emory Law Journal. Alternative structures will be necessary for AI to assist in providing quality patient care, he says.

Price recently answered five questions on the issue:

1. Your article focuses on the practice of using a person to ensure that AI is working properly. Why is this important?

This paper draws from a piece that I wrote with Rebecca Crootof at the University of Richmond School of Law and Margot Kaminski at the University of Colorado Law School on the concept of “Humans in the Loop.” What this means is that in a specific decision, there is a human who is involved in that decision. It’s not just left up to the AI.

For example, in a medical setting, if you had a patient who took an image of a lesion on their arm and an AI system analyzed that and came back with a diagnosis that said the lesion is precancerous, that decision would not have a human in the loop. If instead there’s an image of the lesion and an AI system flags it as potentially precancerous and a dermatologist confirms that it’s precancerous, we’ve added a human into the decisional loop.

This is important because it’s a real go-to fix for situations where an algorithm might not get everything quite right. Maybe the algorithm is not right for a particular patient. Maybe it’s biased, maybe it has errors, maybe it’s not kind. Whatever problem you have within an algorithmic system, a frequent solution is to put a human in the loop to make sure the algorithm is performing correctly. 

2. You note that one problem with doing this in medical settings is that clinicians are not particularly effective in this role. Why not?

Empirically speaking, clinicians—at least as far as we’ve been able to tell—often aren’t great at catching AI errors. Double-checking requires time, and time is something a lot of clinicians just don’t have. 

Another part of this is automation bias, which arises when you’ve got a system in which a machine is giving you answers over time. If it does a pretty good job, you’ll tend to defer to the machine. 

Then there’s a question of training. We are beginning to train young doctors to look at these systems and identify algorithmic errors. There’s a program at Michigan Medicine (DATA-MD) that helps clinicians understand how to use medical AI, what to look for, and what problems to be aware of. But there are lots of physicians who were trained a while ago, and this was just not part of their training.

3. The other issue you discuss is that clinicians are not necessarily even available. How does that play out?

It’s easy to look at the question of a clinician in a loop and think, how is this AI system going to interact with a trained expert doctor who has the time to interact with it? That’s certainly an important question to ask. Yet frequently, we’re just not going to have that person around. 

One of the potential benefits of AI is the ability to extend care to folks who otherwise don’t have access to it. But if you expect that that care is going to be supervised by a specialist, you’re just recreating the initial problem. If you have a tool that can do something you might have needed an ophthalmologist to do—but the tool only works if you have an ophthalmologist to double-check the answers—you just lost the value of the tool. 

There are ways to try to deal with some of these issues, but the idea of AI driving broad access to care is really challenging to square with the requirement that we have an expert double-checking these answers.

4. What should be done differently?

First, just be aware that this is a problem. Slapping a human into the loop is a deeply ingrained and easy solution, but it’s not a good solution in many situations. 

Second, to the extent that we do have humans to be in the loop—which I recognize they will be, for a long time, in lots of contexts—you need to enable them to succeed. That means identifying what you expect the humans to do. If you say, “I want you to explain the results of this to patients and be kind and empathetic,” that’s one important thing a human can do. If you say, “I want you to double-check these results and make sure they’re accurate,” that’s a different thing.

Third, we need monitoring to make sure the system—the combination of human and AI—is performing well over time. 

5. You also suggest standards for systems and governance?

Yes. One of the challenges that arises in this space is designing systems. The places that’ll be able to spend the most effort on getting that right are places like Michigan or Harvard or Memorial Sloan Kettering—places that have lots of resources. 

But what if we really want, in addition to those places getting better, to enable lots of care for lots of folks? There’s a real role for nongovernmental organizations, policy makers, and academic medical centers to work together to develop standards and best practices. 

There is some effort to develop those frameworks happening now. It’s really important that that work continue and be supported and be widely distributed to help enable all sorts of different places to have AI and humans working well together—or to just recognize that sometimes you’re not going to have a human in the loop at all.


