Smarter health: The ethics of AI in health care
What should the doctor do? It's a hypothetical: Helen is not a real patient, but her scenario is based on very real technology that's currently in use at Stanford Hospital.
DR. STEVEN LIN [Tape]: I can actually pretty accurately predict when people are actually going to die.
CHAKRABARTI: This is Dr. Steven Lin. He’s a primary care physician and head of the Stanford Health Care AI Applied Research Team. A few years ago, a team at the university’s Center for Biomedical Informatics Research developed a tool to make these mortality predictions. And Dr. Lin’s group helped implement it at Stanford Hospital.
DR. LIN [Tape]: Some of our partners built these really, really accurate models for thinking about if a patient is admitted to Stanford Hospital, what is their risk of passing away in the next month? Three months, six months, 12 months?
CHAKRABARTI: If you’re a computer scientist, the tool is a highly complex algorithm that scours a patient’s electronic health record and calculates that person’s chance of dying within a specified period of time. If you’re in hospital administration, it’s the advance care planning, or ACP, model. And for the rest of us, and for Helen, it’s a death predictor.
The ACP model was launched at Stanford Hospital in July 2020. It’s been used on every patient admitted to the hospital since then, more than 11,000 people. And in that time the model has flagged more than 20% of them. The tool is intended to help doctors decide if they need to initiate end of life care conversations with patients.
DR. LIN [Tape]: These are patients that are admitted for anything. We want to know what their risk of having adverse outcomes and passing away are so that we can prioritize making sure that they have advanced care plans in place so that their wishes are respected.
CHAKRABARTI: The ACP model is only as useful as it is accurate. Dr. Lin says the model’s accuracy depends on the prediction threshold it’s asked to meet. At Stanford, it’s asked to flag patients who have the highest likelihood of dying within the next year, or the top 25th percentile of predicted 12-month mortality risk, as Lin puts it. For those patients, one validation study found that 60% of the patients flagged by the model did, in fact, die within 12 months.
Stanford, though, has not yet done a randomized controlled trial of the tool. So how does the ACP model work? It’s a deep neural network that evaluates more than 13,000 pieces of information from a patient’s medical records within 24 hours of the patient being admitted to Stanford Hospital. It looks at everything from age, gender and race to disease classifications, billing codes, procedure and prescription codes, and doctors’ notes. And then it generates a mortality prediction.
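To make the flagging step concrete, here is a minimal sketch of how a top-25th-percentile threshold could be applied to a batch of predicted risk scores. This is an illustration only, not Stanford's actual implementation: the risk scores, the flag_high_risk function, and the cutoff logic are invented for this example. In practice the scores would come from the trained neural network described above, and the threshold would be set and validated clinically.

```python
# Hypothetical sketch: flag patients whose predicted 12-month mortality risk
# falls in the top 25th percentile of a batch. Not Stanford's code; data invented.
import numpy as np

def flag_high_risk(predicted_risk: np.ndarray, percentile: float = 75.0) -> np.ndarray:
    """Return a boolean mask marking patients at or above the given risk percentile."""
    cutoff = np.percentile(predicted_risk, percentile)
    return predicted_risk >= cutoff

# Example: 12-month risk scores a trained model might output for five admitted patients.
risks = np.array([0.02, 0.10, 0.35, 0.60, 0.85])
flags = flag_high_risk(risks)

for patient_id, (risk, flagged) in enumerate(zip(risks, flags), start=1):
    status = "flag for advance care planning conversation" if flagged else "no flag"
    print(f"Patient {patient_id}: predicted 12-month risk {risk:.0%} -> {status}")
```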
DR. LIN [Tape]: But what do we do with that data? And so one potential use case for this is to really improve our rates of advance care planning conversations.
CHAKRABARTI: Advance care planning is woefully inadequate in the United States. Palliative care is even harder to access. The National Palliative Care Registry estimates that less than half of the hospital patients who need palliative care actually receive it. Dr. Steven Lin says ideally, end of life care conversations would happen with all hospitalized patients.
But time and resources are limited, so not every patient gets to have that talk with their doctor. By identifying the patients at highest risk of dying within the next year, Lin says the model provides a way to prioritize which patients need that advance care planning conversation the most.
DR. LIN [Tape]: You know, most people say that they have wishes regarding their end of life care, but only one in three adults has an advance care plan. And if we can identify those at highest risk, then we can also prioritize those discussions with those individuals to make sure that their wishes are respected, if they weren’t able to make decisions for themselves, if they were to get really sick.
CHAKRABARTI: Which brings us back to our hypothetical patient, Helen. And back to her care team. Helen doesn’t know it, but when the model processed her medical records, it weighed some of her blood test results, her pre-existing health conditions, a diagnostic ultrasound of her urinary system, a history of bladder disease, the difficulty she had breathing following a previous surgery, and the number of days she has spent in the hospital this time.
All of that together puts the 40-year-old mom of three at very high risk of dying in the next year, according to the model. It also surprises her doctor.
As exciting as AI and machine learning are, there are many ethical and also health equity implications of artificial intelligence that we are now beginning to realize and really studying and trying to find ways to mitigate them.
Questions like: When should the model flag a patient? At Stanford, it’s only that highest risk category we mentioned. In the future, other institutions might choose a different threshold. What should the human caregiver do with that information? Who else should know? What if the physician disagrees with the prediction? What if the model is wrong?
Dr. Lin’s team helped develop some of the protocols in use at Stanford. When patients are flagged by the model, the care team is asked to have a conversation with the patient using the Serious Illness Conversation Guide, a template for advance care conversations developed by Ariadne Labs, the organization founded by the author and physician Atul Gawande.
The guide suggests that doctors ask for a patient’s permission to have the conversation, and that they talk about uncertainties and frame them as wishes, such as: Helen, I wish we were not in this situation, but I am worried that your time may be as short as one year. It also suggests asking patients whether their family knows about their priorities and wishes. The guide also says: Allow for silence. But there are many more questions than one conversation guide can answer. Dr. Steven Lin says they’re the questions that should concern all of American health care as artificial intelligence tools permeate deeper into the system.
How do patients react when they are flagged by the model as being high risk of X, Y and Z, or being diagnosed with X, Y, and Z? How do human clinicians handle that? What’s their trust in the AI? And then very, very importantly, what are the equity implications of data driven tools like artificial intelligence when we know that the data that we have is biased and discriminatory because our health care systems are biased and discriminatory?
Finally, as a once and future patient myself, my mind wanders back to that searingly human moment. In hypothetical Helen’s case, the moment when the doctor first sees that alert, when she looks at Helen lying in bed, still hooked up to medical monitors, wanting to go home.
Will the doctor tell Helen that her death prediction came from an algorithm at Stanford? That’s left up to the attending physician’s discretion. Will Helen want to know? Would you? When we come back, two bioethicists will give us their answer to that question and they’ll explore other big ethical fronts as AI advances further into American health care. This is On Point.
Part II
CHAKRABARTI: This is On Point. I’m Meghna Chakrabarti. And we’re back with episode two of our series Smarter health. And today, we’re exploring the deep ethical questions that instantaneously arise as artificial intelligence permeates deeper into the American health care system.
With us to explore those questions is Glenn Cohen. He’s faculty director at the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics at Harvard Law School. Professor Cohen, welcome to you.
GLENN COHEN: Thank you for having me.
CHAKRABARTI: Also with us is Yolonda Wilson. She is associate professor of health care ethics at Saint Louis University, with additional appointments in the Departments of Philosophy and African American Studies. And she’s with us from Saint Louis. Professor Wilson, welcome to you.
YOLONDA WILSON: Thank you for having me today.
CHAKRABARTI: Well, let me first start by asking both of you your answer to that question that we ended the last segment with. If indeed you were in a situation where an algorithmic model predicted your chance of mortality in the next year, would you want to know that it came from an algorithm?
WILSON: I absolutely would want to know.
CHAKRABARTI: And Professor Cohen, what about you?
COHEN: I also agree. In the law of informed consent, there’s an idea about materiality. We want to disclose things that are material, that matter to patients in terms of the decisions they make. And when I think about myself, about my planning and the like, being told, oh, well, it’s a hunch I have, or I’ve seen a million patients like this,
I might react very differently than to the information that we looked at 13,000 variables, right? We have this mountain of evidence behind it. So I think the more information given to a patient, the more secure they can be in that information, and the more they can organize their life accordingly.
And I think it would be important for me to know some of the baked-in equity issues in algorithms, for example. You know, I want to be clear about the kind of coding and how the algorithm arrived at that decision. So that’s an unqualified yes for me.
CHAKRABARTI: Interesting, because I’m not sure where I fall on the spectrum. I’m not sure I would want to know, because I think it might distract me from the next steps in care. But we can get back to that in a moment. I think the fact that the three of us feel differently about it is indicative of the complexities that immediately arise when we talk about AI and its impact on health care, health care decision making and ethics.
So here’s the first major ethical issue. Algorithms are obviously very data hungry, and they need vast amounts of patient data to train on. So where does that data come from? Who are those patients? Now, I should say that for the advance care planning model, Stanford University did tell us that the training data came from what’s called the Stanford Translational Research Integrated Database Environment.
That’s a trove of past patient records. So the ACP team used about 2 million pieces of data from patients who had been treated between 1995 and 2014 at two of Stanford’s biggest hospitals. The university also told us that all of the data used to develop the model was approved by an institutional review board.
So that’s where the training data came from for the death predictor. But now, as we’ve talked about, it’s being used on every patient, no matter who they are, who comes into Stanford Hospital. So, Professor Wilson, what ethical questions does that raise for you?
WILSON: Well, certainly there are health equity questions, right? I mean, Stanford has a very specific patient population. And to the extent that other people don’t fit neatly into the kind of abstract Stanford patient, then you may see some variances that the algorithm doesn’t account for. Now, we’re not even talking about algorithmic development yet.
But that’s not all that goes unaccounted for, right? I mean, people make decisions about which data points are worth investigating and worth thinking about. And so there are places for important information to be left out, right, to be deemed unimportant by whatever team is involved in the data collection. I mean, anytime you have instances of data collection, you have people making decisions about what’s valuable and what’s junk.
And so those are some issues that spring to mind for me. Also, to the extent that other health care institutions decide to use this model, are they going to generate their own data sets or are they just going to kind of build models based on this particular data set? And I think that’s going to look very different in Mississippi or in rural Georgia, where I’m from, than it looks in Palo Alto.
COHEN: I often say, as a middle-aged white guy living in Boston, I am like dead center in most data sets in terms of what they predict and how well they do prediction. But that’s very untrue for other people out there in the world. And the further we get from the algorithm’s training data, the more questions we might have about how good it is at predicting for other kinds of people.
CHAKRABARTI: Let me ask both of you this. It seems to me, on this issue regarding representation, or representativeness if I can put it that way, of the information that goes into creating a new tool in medicine, we already have a problem, right? I mean, there’s so much research that shows that in the process of drug development, there aren’t enough patients included in clinical trials to represent all Americans. As with everything in technology, and AI in particular, there is the risk of taking a problem that already exists and just scaling it wildly, isn’t there? And is that what we face? Professor Cohen here.
COHEN: So I’ll say yes and no. I’m a little bit more of a techno optimist as well. So I always ask the question, with all the ethical questions: AI as against what? What’s the one thing we know about physicians? They’re wonderful people, many friends of mine are physicians, but they bring in the same biases that all of us do. And there are ways in which the AI brings in a different set of biases, a different propensity to bias, and that can be helpful in some instances. Right?
So one thing to think about is that when the AI looks at the data set, it really doesn’t see Black and white necessarily, unless that’s coded in or it’s asked to look at it. It looks at a lot of different variables, and the variables it relies on may push the prediction more strongly in some directions than others, compared to human decision makers. So what we really want to know is performance. That’s my question.
Does this do better than physicians who are left to decide for themselves with whom to have these serious conversations? And is the pattern of who gets the serious conversation more equitable or less equitable than if we had physicians just doing without the assistance of an algorithm?
CHAKRABARTI: So, I mean, Professor Wilson, do you think that AI could, if used and developed properly, reduce the presence of that bias in health care?
WILSON: I mean, I think a lot hangs on what one means by if used and developed properly. And we know that health care professionals have biases just like everyone else. But so do people who collect data. So do people who develop algorithms. I think I’m much more of a pessimist than Professor Cohen, just in general, probably, but certainly around AI and some of the health equity and ethics questions that come to my mind.
CHAKRABARTI: Well, I will say, in the more than three dozen conversations and interviews that we had in the course of developing this series, this one thing came up again and again. And I just want to play a little bit of tape from someone else who reflected those same concerns. This is Dr. Vindell Washington. Right now, he’s CEO of Onduo. It’s a health care technology company that’s trying to develop AI tools for various conditions, including diabetes. Now, before that, he served as the national coordinator for Health Information Technology in the Obama administration. And here’s what he said about AI and health equity.
DR. VINDELL WASHINGTON [Tape]: One of the things we actually test for and look at as we’re delivering our service at Onduo is, for our communities of color, do they have different outcomes or results?
And you wouldn’t know if your algorithm was leading you down the wrong path if you didn’t ask that question in just the most brutally honest way that you could. And I think often what people tend to do in those circumstances is they tend to say, I have no evidence of X, Y or Z happening, prior to actually looking for the thing that they’re worried about happening.
CHAKRABARTI: That’s Dr. Vindell Washington. And by the way, we’re going to hear a lot more from him in episode four of this series. But Professor Wilson, I want to hear from you what you think about that. I mean, he’s basically saying that somehow we have to be sure that as these products get developed, that they’re even asking the right questions.
WILSON: Yeah, to me, that’s just basic. I mean, people who are developing these tools make decisions and don’t think about asking certain questions. So I would absolutely say that certain questions need to be asked at the outset. But it’s also a matter of who’s in the room to even ask those questions, because I’m sure that there are questions that might not occur to some populations to ask that would occur to others.
CHAKRABARTI: It’s not just the development of the technology that we need to be concerned about regarding health equity. It’s also how it’s used. And I’ll just, you know, present another potential nightmare scenario to both of you and see what you think here. Because I’m wondering, say the advance care planning model puts out a prediction that Patient X might die in the next year, and the patient happens to be a Black woman. Right.
You have to wonder whether, given the biases that already exist in the United States, in our health care system, a prediction like that might lead a health care team to say, maybe not consciously: Is it worth offering Patient X treatments A, B and C if they’re more likely to die anyway? Do you see what I’m saying, Professor Wilson?
WILSON: Yes, absolutely. I mean, we already see those kinds of lapses in care, particularly for Black patients and Latino patients, right? In terms of pain management, in terms of certain kinds of treatment options, we already see disparities there. And so in some ways, this kind of information could provide cover for those biases, right? Oh, I’m not biased. I’m just going where the data leads me.
CHAKRABARTI: You know, Professor Cohen, this makes me think again about Stanford. There, with their model, the algorithm puts the prediction in the patient’s electronic health record, and presumably it stays there. Are there a set of ethical concerns around that? Because I wonder how that might impact the future care that person receives when other physicians or, say, insurance companies see the prediction in the future.
COHEN: Yes. There’s a lot to unpack there. So first, the question is to whom this should be visible, right? Part of privacy is contextual rules about who gets to access information about you. It’s one thing for the physician treating you, advising you on your, you know, end of life decision making, to see it. It’s something very different for an insurance company, or even another physician, or a family member involved in your treatment.
And think about these conversations, right? Once this is presented to a patient, there’s a way in which the patient is going to face this, and face questions about it, for the rest of their life, however long that life is. So it’s really important that the information be protected, and that the patient know that the information is there and who can see it and who can’t. And one of the big questions is, do we need to ask you for permission to use your information? You know, think about your loved one who passed away, or that person in the process of dying.
Do we need their permission to use information about their death process in order to build a model like this one? Or can we say, you know what, you’re a patient record, we’re going to de-identify the data. We’re not going to be able to point the finger at you, but you’re going to have participated in the building of this thing that you might have strong feelings about.
CHAKRABARTI: Okay. It seems like this is an area in which we have two fields that obviously have some overlap, but in a sense distinctly different ethical considerations. Right? There’s medicine and health care, and then there’s technology and computer science. So in order to explore how those two fields ought to interact, we talked with Dr. Richard Sharp. He’s director of the Biomedical Ethics Research Program at the Mayo Clinic.
DR. RICHARD SHARP [Tape]: I think that AI tools really have the power to bring more people into the health care system.
CHAKRABARTI: So he’s more of an optimist here than maybe some of us at the table today. But Dr. Sharp told us that patients are already noticing what he calls a depersonalization of care.
DR. RICHARD SHARP [Tape]: And the focus of the work that we’re doing in bioethics is really going out to patients, making them aware of these trends that are beginning and asking them what they think about these developments.
We want to be proactive and we want to solicit those opinions so that we don’t develop health care systems that end up not aligning with the goals and interests of patients. And so I think that bioethics research can play a big role in terms of shaping the final implementation of different kinds of AI tools.
CHAKRABARTI: Well, now, we did actually speak with him in episode one, and listeners might remember him from that first episode of the series. We followed up with him about his thoughts on the ethical considerations around these tools.
DR. RICHARD SHARP [Tape]: As long as those tools are aides to patients and help to bring them to the health care system and make their experiences more efficient, I think there isn’t an ethical problem at all. But if these kinds of tools are seen as substitutes for medical knowledge as provided by an expert clinician, then I think that’s really quite problematic.
CHAKRABARTI: And here’s the point that Dr. Sharp makes that I think is most fascinating. Computer scientists and physicians, as I noted earlier, essentially have different viewpoints or mindsets regarding ethical considerations. So Dr. Sharp says it’s the computer scientists and technologists that need to adopt medicine’s ethical standards.
DR. RICHARD SHARP [Tape]: In bioethics, we talk a lot about the importance of respect for patient choice and preservation of confidentiality, all these sorts of moral principles that have for ages been sort of core to the ways in which we deliver health care.
Well, in the computer sciences and other areas, they haven’t necessarily embraced those principles. Those haven’t been core to their work. And so part of what we’re seeing is really the socialization of AI developers into the culture of medicine and the ethos of medicine as well.
CHAKRABARTI: That’s Richard Sharp, director of the Biomedical Ethics Research Program at the Mayo Clinic. Professor Yolonda Wilson, your response to that. What do you think?
WILSON: So, you know, I kind of beat my humanities drum a bit sometimes. My actual Ph.D. is in philosophy, so I’m a humanist by training. And so I would just, almost tongue in cheek, say, you know, I think the medical ethicists need to be clear that they’re getting that humanistic side of bioethics training and not just what’s happening on the clinical side.
But I certainly think that the ethical issues should guide AI development and data collection, and not be seen as an impediment. And I think sometimes, you know, in the undergraduate classroom I see this with my engineering and computer science majors, who kind of have a little bit of frustration and wonder why these kinds of questions matter. And I think we see very clearly with this technology why these questions matter.
CHAKRABARTI: Hmm. Professor Cohen, your thoughts?
COHEN: So two things to pull out there. One is this idea of ethics by design, that the best version of this is when ethicists are involved in the design process, rather than being given an algorithm that’s already designed and ready to be implemented and asked, okay, so should we do it, guys? Right.
So that’s the first point. The second is just to say that one of the things I hear in the comments he just made is this idea, and my friend Bob Truog has a beautiful essay about this, about the stethoscope: the way in which technology can get in the way of a more humanistic, more physician-touch experience. In the case of the stethoscope, literally, there was a period of time when physicians put their ear to the chest of a patient, and the stethoscope was introduced, in a sense on purpose, to create a little bit more distance.
And there’s a way in which you can imagine the advance care planning conversation, the end of life decision making, talking to a patient and suddenly turning to show them the numbers on the screen, and having the screen kind of intermediate this relationship. And there’s a way in which something might be profoundly lost about that humanistic moment.
CHAKRABARTI: Well, when we come back, we’ll take a look at another major ethical question. Rohit Malpani is a health consultant who helped lead the development of the World Health Organization’s AI guidelines. And he asks: Who bears the responsibility and liability for AI when it’s out in the wild?
Does it sit with the producer and the designer of the technology? Does it sit with the government that selects the technology, or does it sit with the provider of the technology? If you’re a company, once you put it out into a health care system, you kind of want to remove yourself from having any responsibility for it. So my concern is ultimately to have an assurance that a government or a designer has done their due diligence.
CHAKRABARTI: Back in a moment. This is On Point.
Part III
CHAKRABARTI: This is On Point. I’m Meghna Chakrabarti. And this is episode two of our special series, Smarter health. And today we’re talking about all the ethical considerations that come along with the advancement of AI in the American health care system. I’m joined today by Glenn Cohen and Yolonda Wilson.
I do want to hear from both of you about what questions arise when we talk about implementation, because that’s where we started this hour: How should Stanford most ethically use the information generated by an algorithmic model about the chance of someone dying?
COHEN: So first of all, accuracy is everything, right? If you don’t have high confidence in an AI’s accuracy, it’s really not worth deploying; you’re not ready to deploy it yet. But even if you do have good accuracy information, when you use simulated data, those are small numbers of patients. Actually deploying this in a care stream might produce very different results. You might have a group of physicians who overcorrect against what the algorithm predicts. And unless you look at how they actually behave, it’s something you’re going to miss from looking at it on paper.
CHAKRABARTI: What we do know for sure from Stanford, because they’ve told us, is that regarding the accuracy question, they’ve only done validation studies so far. They haven’t yet done the sort of gold standard in medicine, the double blind randomized controlled trial. So how do we know how accurate it really is, or the impact it’s actually having on decisions made for patients?
COHEN: As you say, that latter thing is what we really care about, right? If you put something like this in place, it’s because you think it’s going to help patients, and you think it’s going to help physicians to identify the patients to have these conversations with, and to have better conversations. It can turn out that in reality it doesn’t do that. And if that’s true, you want to find out as soon as you can, and you want to go back and see what can be done to improve it.
CHAKRABARTI: Professor Wilson, your thoughts on that?
WILSON: You know, as I said in the first segment, I’m from a small town in rural southern Georgia. And, you know, I think about the kind of cultural dynamics at play, and expectations of how providers are going to interact with patients, and what that looks like.
And whether people see physicians or nurse practitioners or other providers with any kind of regularity, and the impact that that’s going to have, and what gets lost when the doctor who delivered you and watched you grow up delivers the news, versus numbers showing up on a screen. And I think that we need to figure out how to pay attention to those kinds of nuances. Again, the humanistic aspect of implementation.
CHAKRABARTI: Professor Cohen, you said there were some other areas of implementation that we need to focus on.
COHEN: Yes. So one thing is just the question of informed consent, right? So here, the algorithm is being run in the background, and the physician is being given this information. It sounds like they’re leaving it to the discretion of the individual physician whether to share that it was run. But there’s even the question of running the algorithm at all, right?
Should a patient have the right to be asked ahead of time, is this something we would like to do in analyzing your care or not? And the patient won’t know that if you’re not disclosing that you’re potentially doing this, right? So do we think that a patient would have a legitimate gripe if they found out after the fact that this was actually run, that they were never told it was run, and that every physician they saw thereafter had this piece of information, which the patient never knew about, blinking there on the chart?
CHAKRABARTI: How much informed consent, though, do we already gather from patients for other kinds of tests? Because, you know, when a doctor says we’re going to draw this blood to do some blood tests, very rarely do they actually specifically say what the tests are.
That’s a good point. You know, I’ll say there are a million things that go into physician reasoning, whether it’s medical school lectures or the last 12 patients they saw. But here’s where this idea of materiality makes a difference. I do think patients feel very differently about artificial intelligence, and that’s just a fact in the world. And if you know that, if you know your patients feel differently about it, maybe that creates a stronger obligation to ask them about it.
So, you know, there’s another area of consideration that we haven’t brought up yet. We could do several hours on just this one thing alone. And that is, given the way American health care is paid for, how do payment and insurance come into this? Because I can imagine that AI and the kind of data it produces is an actuary’s dream come true, right? It kind of takes on this patina of authority, because it’s a calculation, and that may be kind of hard to overcome when it comes to deciding what care will be paid for.
These models can become a justification for doing or not doing things in ways that would be detrimental to patients, and I think that we have to be incredibly mindful of that. So it’s not necessarily that the AI, you know, becomes the substitute judgment, but that it becomes the justification for certain kinds of actions. And I think that one of the elements of implementation has to include that kind of responsiveness from insurance companies or from Medicaid or Medicare.
And I could go a step further and say, you know, we could resolve some of this by having a single payer health system. But that’s not the conversation that’s on the table today. But at the very least, the reality of how people in this country obtain health care has to be a factor in thinking about implementation.
CHAKRABARTI: Professor Cohen, what do you think about that?
COHEN: So one of the things we say to make ourselves feel better about implementation is that the physician remains the captain of the ship. The AI gives information and input into decision making, but the physician ultimately is the one who makes the call, even though that call is going to be skewed in certain directions.
That looks very different in a world with payers who are also looking at the data. And even though the AI says, let’s do X, which is the cheaper thing, the physician might ordinarily say, no, I want to do Y, which is the more expensive thing. But it may turn out the payer says, that’s fine, but we’re only going to pay for X.
That looks very different than a world where the physician is making freestanding judgments. So we have to think a little bit about the systems in which this is built, and how much discretion for physicians it’s important to keep. On the flip side, again, with my optimist hat coming on: part of the advantage of AI, where it’s going to have the most advantage, is when it’s actually telling physicians to do something they wouldn’t have otherwise done.
Right? I mean, you thought you would do this, your training says you do this, but I’m giving you this additional information. If we create too many obstacles to following the AI, be they payment or be they liability or things like that, we’ll also end up in a situation where much of the value of AI is not realized. So I think we have to be nuanced about this.
CHAKRABARTI: Okay. One of the big questions that we’re asking in this series is this: The United States spends more on health care than any other country in the world, but our health outcomes are not as good as the hundreds of billions of dollars we spend might otherwise lead us to believe. So can artificial intelligence change that, and if so, how?
Well, we asked that question of Dr. Steven Lin, who talked to us about the advance care planning model at Stanford. And I want to emphasize that he is a primary care physician. So when we asked him what would be the best use of AI in the American health care system, he pointed out that primary care represents 52% of all the care delivered in the United States. But right now, AI investments are disproportionately going towards developing technologies for the narrow band of hospital care.
DR. STEVEN LIN [Tape]: I’m very concerned that the largest care delivery platform in the U.S., that is primary care, is being left behind. We need to really build tools for primary care that actually impacts the vast majority of people in this country, not just narrowly focused on specialty specific inpatient use cases that are very important, too, but really don’t benefit the vast majority of society.
CHAKRABARTI: So, Professor Wilson, the reason why I wanted to play that is because, writ large, billions of dollars are going into developing AI tools for health care, and we want to offer a sort of large scale ethical framework around that investment. Wouldn’t part of that framework say some of the money should be going towards the kinds of care that can produce the greatest benefit? And according to Dr. Steven Lin, that’s not happening just yet. What do you think about that?
WILSON: Yeah, I mean, I think that’s going to be really important. And here, maybe Professor Cohen’s optimism is rubbing off on me. I think that primary care is important. I mean, I agree with the doctor. And I also think, kind of broadly speaking, community health and public health are going to be the spaces, assuming we can get this right.
And again, this is Dr. Cohen’s optimism rubbing off on me, assuming we can get this right and we get all the benefits of these uses of AI, that absolutely primary care is one place they should be directed. But also, again, broadly speaking, kind of public health and community health spaces, because I think those are going to be really important, particularly in rural areas.
CHAKRABARTI: Okay. But a few minutes ago you used a phrase that was doing a lot of work, and I think you’ll acknowledge that assuming we can get this right is doing a lot of work right now. Yes, Professor Wilson?
Yeah, I glossed over that.
Intentionally, because I see a health care system right now in the United States set up with all the wrong incentives, or incentives that don’t lead in the direction of investment in technology and primary care. So now your pessimism has rubbed off on me, because I think a lot of the emphasis in infrastructure and investment in AI technology is making very good physicians and very good medicine even better.
Whereas most of the value proposition is in democratizing expertise, taking the expertise of pretty good medicine and spreading it, not just in the U.S. but around the world. And to me, if you want to talk about alignment, that’s the kind of alignment that would do a lot of good in this space.
But it’s not one we’re necessarily going to see if AI is being developed largely by profit-seeking developers and, you know, capitalism and stuff like that, because showing the value of that is much harder and much more long term. So I think it would be a good opportunity for government to step in to try to plug this gap a little bit.
Okay, so a little of my pessimism has rubbed off on you, railing against capitalism.
Well, somewhere between optimism and pessimism, there’s realism, and that’s not a terrible place to land. What about the question of liability? When you add another tool as sophisticated as AI, a tool that in some cases is essentially a black box, where we don’t even know how it’s making its decisions, does that further complicate the question of medical liability?
So there’s liability at different levels. There’s liability for the people who develop the algorithms, potentially for hospital systems that purchase and implement them, and even potentially for physicians and nurses who follow or don’t follow them. I’ll say, at that last level, the way the tort system is set up, it encourages a certain conservatism: If you follow the standard of care, what you would have done in the absence of the AI for a particular patient, you’re very unlikely at the end of the day to be liable.
If you deviate, you’re putting yourself in a situation where you might face more liability if an error occurs and a patient is harmed. But it’s precisely those cases where the AI is adding value that you’re probably going to want to deviate. So I do think there’s a lot of uncertainty here. And in some ways the uncertainty over liability is doing more work to affect the way in which the system is working than the liability itself.
Mm hmm. Well, I want to acknowledge that the vast majority of people listening to this are going to be on the patient side of things, even though we do have a lot of health care providers who listen to On Point as well. But on that note, Stacy Hurt is a patient advocate that we’ve spoken to in the course of researching this series.
We’re also going to hear a lot more from her in episode four. Stacy is a stage four colon cancer survivor and the mother of a son with severe disabilities. She now consults for health care companies on patient perspectives on things like artificial intelligence and data collection.
STACY HURT [Tape]: It’s like your best friend borrowing your car. It’s okay that your best friend borrows your car. You just want to know about it. You don’t want to look out at your driveway and be like, Where’s my car? And then find out, Oh, my best friend took it. Okay, that’s fine. Same thing with data and capturing data. Like if you’re in a clinical trial or you’re in a study or whatever.
If you just tell me what’s happening at the beginning, I’m probably going to consent. But don’t try to pull the wool over my eyes or something behind my back because we have a huge trust problem in this country right now. And you don’t want to be a part of the problem. You want to be a part of the solution.
We’ve talked a lot about the myriad areas in which there needs to be a lot of smart thinking about the ethical considerations around AI and health care. But for all the patients, all the regular listeners, hearing this right now: Do you have a tool to add to the patient’s toolkit for thinking about how AI is going to have an impact on their health care?
You know, we always talk about, or many of us who think about these kinds of bioethics questions will say things like, you know, as a patient, you need to advocate for yourself. But we know that because of reasons of bias and access to health care, what it looks like to advocate for yourself can be interpreted differently, and one can be penalized for advocating for oneself. And so I think being your own best advocate is often a pat answer at the end of these kinds of conversations, and there is probably a bit more nuance involved in that than we have time to think about now. But I would say, to the extent that you can, try to advocate for yourself. Though again, that also puts the onus on patients in ways that I don’t think are fair.
Right.
So two quick things. First, I want to underscore this idea: The Stanford case study you offered us started off with the assumption that they couldn’t have end of life care decision making discussions with every patient, right? So they’re selecting who to do it with. That might be the problem in the system rather than the AI, and the resources for this are something worth thinking about. But in general, when you are a patient given information and told that an artificial intelligence was involved in the decision making, what are the questions a patient should ask? Who developed the algorithm in question? That’s number one. Second, what evidence do you have that the information from the algorithm is good?
And three: Was this algorithm trained on patients like me? And if not, are there assumptions it’s making such that, if we tweaked some of those assumptions, a different outcome would result? Now, that particular physician may not be the person who can answer those questions, but I think that’s what a patient should think about when they’re given this information. Those are the three questions I would start with.
I would also phrase it as: Doctor, do you agree with what the algorithm says? And if not, why not? Right? I mean, is there still room for a question like that in health care?
Absolutely. Or, as I often put it: If it was your mother, would you follow the algorithm’s advice? Or something like that. Doctors hate getting that question, but it personalizes it in a way that I think is helpful.
Well, Glenn Cohen is a professor, deputy dean and faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics at Harvard Law School. Professor Cohen, thank you for being with us.
Thank you for having me.
And Yolonda Wilson, associate professor of health care ethics at Saint Louis University, with additional appointments in the Departments of Philosophy and African-American Studies. Professor Wilson, it’s been a great pleasure to have you back on the show. Thank you so very much.
Thank you so much for having me back.
Coming up next on Smarter health …
CHAKRABARTI: Well, coming up on episode three of our special series, Smarter health, we’re going to talk about regulation. Does the FDA, an agency built to regulate devices and medicines, have the ability to adequately regulate artificial intelligence programs? Dr. Matthew Diamond is the head of digital health at the FDA. And he says:
We are here to ensure patients in the United States have timely access to safe and effective devices in general and those that use the latest technology.
But Dr. Elisabeth Rosenthal asks: How can the FDA regulate technology it barely understands?
It probably has zero expertise at evaluating artificial intelligence. It’s never been asked to do that. For devices and hardware, the FDA has even looser standards than for drugs, so they’re just not set up to do this kind of thing.
The race to new regulations. That’s in episode three of our special series, Smarter health. I’m Meghna Chakrabarti. This is On Point.