The COVID-19 pandemic highlighted disparities in health care across the United States over the past few years. Today, with the rise of AI, experts warn developers to remain cautious when implementing models to ensure that these inequalities are not exacerbated.
Dr. Jay Bhatt, a practicing geriatrician and executive director of the Center for Health Solutions and Health Equity Institute at Deloitte, sat down with MobiHealthNews to give his perspective on the benefits and possible detrimental effects of AI on healthcare.
MobiHealthNews: What are your thoughts on the use of AI by companies trying to tackle health inequalities?
Jay Bhatt: I think the inequities we are trying to correct are significant, and they are persistent. I often say that health inequities are America's chronic disease. We have tried to fix them with band-aids and other stopgaps, but we haven't really gotten far enough ahead. We need to think about the structural, systemic issues that affect healthcare delivery and lead to health inequities – racism and prejudice. And machine learning researchers are detecting some of the pre-existing biases in the healthcare system. They must also, as you alluded to, address weaknesses in the algorithms themselves. Questions arise at every stage, from ideation, to what the technology is trying to solve, to how it will be deployed in the real world.
I think of the question in a number of buckets. Number one is limited race and ethnicity data, which has a real impact, so we're dealing with that. The other is inequitable infrastructure: the lack of access to the right tools – think about broadband and the digital divide – but also gaps in digital literacy and engagement. Digital literacy gaps are highest among populations that already face particularly poor health outcomes, such as ethnic groups experiencing disparities, low-income people, and older adults. And then there are patient engagement challenges related to cultural barriers, language, and trust. So technology and analytics have the potential to be truly useful in addressing health equity.
But technology and analytics also have the potential to exacerbate inequality and discrimination if they are not designed with this in mind. We see bias built into AI for voice and facial recognition, and in the choice of data proxies for healthcare. Prediction algorithms can produce inaccurate predictions that affect outcomes.
MHN: How do you think AI can positively and negatively impact health equity?
Bhatt: One of the positive ways is that AI can help us identify where to prioritize action and where to invest resources, and then act to address health inequities. It can surface perspectives that we may not otherwise be able to see.
I think the other is the issue of algorithms that can have a positive impact on how hospitals allocate resources to patients, but could also have a negative one. We see race-based clinical algorithms, especially around kidney disease and kidney transplantation. That is one of a number of examples that have surfaced where there is bias in clinical algorithms.
We released a really interesting piece that shows some of the places where this is happening and what organizations can do about it. First, there is bias in the statistical sense: maybe the model being tested doesn't fit the research question you're trying to answer. The other is variance: you don't have a large enough sample size to get a reliable result. And the last is noise: something that happened during the data collection process, long before the model was developed and tested, affects the results.
I think we need to make data more diverse. The high-quality algorithms we try to train require the right data, and then some systematic, thorough upfront thinking and decisions when choosing which datasets and algorithms to use. And then we need to invest in talent that is diverse in both background and experience.
MHN: As AI advances, what are your fears if companies don't make the necessary changes to their offerings?
Bhatt: I think one would be organizations and individuals making decisions based on data that may be inaccurate, insufficiently questioned, and not thought through for potential bias.
The other is the fear of how this further fuels distrust and misinformation in a world that is already struggling with both. We often say that health equity can be influenced by how quickly you build trust, but also, more importantly, by how well you maintain it. When we don't think through and test the output, and it turns out it could lead to an unintended consequence, we still have to be responsible for it. So we want to minimize those problems.
The other is that we're still in the early stages of figuring out how generative AI works, right? Generative AI has really come to the fore now, and the question will be how different AI tools talk to each other, and then what our relationship to AI is. And what is the relationship between different AI tools? Some AI tools may be better in certain circumstances – one for science, versus resource allocation, versus providing interactive feedback.
But, you know, generative AI tools can raise tricky questions, even as they can be useful. For example, if you're seeking help, as people do through telehealth for mental health, and you receive messages that may have been written by AI, those messages may not incorporate empathy and understanding. That can lead to unintended consequences, aggravating the condition a person may have or affecting their willingness to subsequently engage with care.
I think trustworthy AI and ethical technology are paramount – among the key issues that health systems and life science companies are going to have to confront and have a strategy for. AI follows an exponential growth curve, doesn't it? It changes so fast. So it's going to be very important for organizations to understand their approach, learn quickly, and have the agility to adapt their strategic and operational approaches to AI, and then help deliver that knowledge and help clinicians and care teams use it effectively.