AI and the Future of Health Care
As the chief of general primary care at Stanford Medicine, Steven Lin, MD, oversees a team of 150 primary care physicians who provide care to more than 70,000 patients a year across 12 clinics. Those patients come from diverse backgrounds and have wide-ranging care needs, and the high patient volumes create challenges for meeting those needs efficiently and equitably. Between time constraints, workforce shortages, and resource limitations, Lin and his colleagues don’t always have as much time as they might like to devote to every case.
But Lin, who is also the founder and executive director of the Stanford Healthcare AI Applied Research Team (HEA3RT), is excited about the ways in which new technologies might improve patient care. He’s especially passionate about artificial intelligence (AI), a field that has exploded in recent years, including in health care circles. The diagnostic potential of AI — for example, analyzing chest x-ray images or identifying signs of diabetes in retinal scans — has generated enormous interest, as well as research and investment funding opportunities, Lin said.
Yet it isn’t miraculous procedures or new medical devices that boost Lin’s optimism the most, although he acknowledges their importance. Rather, it’s the potential for AI to use data to improve access to care — especially primary care — in underserved communities that are primarily served by Medicaid.
“It’s not necessarily about the fancy Star Trek–like diagnostics of the future,” he said. “It’s about making sure we can get the medicine of today to the people who need it. It’s about how we make sure what we know today can get out to the people who need it in a scalable way.”
Health AI: The Basics
Across California, the cradle of AI technology, health care professionals are asking what AI may offer their field. Whether it’s clinical diagnostics, community health management, or systems operations, health care leaders are finding that AI has the potential to bring transformative improvements to care by expanding access to underserved groups, reducing racial and ethnic disparities in quality of care, and advancing the cause of health equity.
The future of AI in health care has also caught the attention of national lawmakers. In November, members of Congress heard about AI’s risks and opportunities from experts in the field. And an executive order issued by President Biden in October explicitly addressed the need to “advance the responsible use of AI in healthcare.”
Broadly speaking, AI enables “computers to produce insights from text data that we use to become more intelligent,” said Vincent Liu, MD, MS, a critical care specialist and senior research scientist at the Kaiser Permanente (KP) Northern California Division of Research. In health care and beyond, AI can be based in algorithms, which use more narrowly defined input and output parameters to achieve results; in machine learning, in which a computer identifies patterns to achieve a complex goal without being explicitly programmed to do so; or in generative AI, in which a machine creates new data from scanning massive amounts of source material.
According to Michael Pazzani, PhD, principal scientist at the Information Sciences Institute at the University of Southern California, AI’s applications are as wide-ranging as the human imagination. Through such capabilities as decoding radiology images, helping physicians generate notes for patients, and predicting which populations are at the greatest risk of hospitalization, AI is enhancing how health care systems operate.
“You’ll be surprised with how creative people are,” Pazzani said. “Right now, there’s someone thinking of almost any problem and how to solve it.”
Sifting Through a Wealth of Health Data
Part of what enables such an abundance of ideas is that in recent decades health care systems have become inundated with patient data through the use of electronic health records (EHRs), telemedicine, and other digital innovations. The sheer volume of information that is available, experts say, goes beyond what human beings can possibly process.
“There’s so much data that we’re accumulating, and yet, we are drawing precious few insights from it,” said Liu. “I’d like to see not a single drop of patient data wasted.”
That’s where AI comes in.
Some applications are purely clinical. For instance, at USC, Pazzani is working to identify the risk of glaucoma — a group of eye diseases that can cause vision loss — in patients with diabetes. Right now, staff at USC hospitals manually sift, one by one, through thousands of images sent to them from community clinics. “Unfortunately, just due to the volume, sometimes they can take six months to review a folder,” said Pazzani, noting that the clinics in question serve primarily Black and Latino/x populations.
Since disease can progress rapidly during that time frame, those delays can substantially increase health disparities in the population. But AI may be able to shrink that window, said Pazzani, by rapidly sorting through images to identify high-risk cases that need to be reviewed by an expert. While the process is not yet fully operational, Pazzani is encouraged by its 95% accuracy rate. “That would obviously speed things up considerably,” he said, resulting in underserved communities facing shorter wait times for chronic disease diagnoses and referrals.
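To make the triage idea concrete, here is a minimal sketch, in Python, of the score-and-sort pattern Pazzani describes: a model assigns each image a risk score, and the highest-risk cases are routed to an expert first. The classifier, file names, and threshold below are hypothetical placeholders for illustration, not USC’s actual pipeline.

```python
# Minimal sketch (hypothetical names, not USC's actual pipeline):
# score each screening image with a classifier, then surface the
# highest-risk cases for expert review first.
from dataclasses import dataclass

@dataclass
class ScreeningImage:
    patient_id: str
    path: str
    risk_score: float = 0.0  # filled in by the classifier

def triage(images, classifier, review_threshold=0.5):
    """Score each image and return flagged cases, highest risk first."""
    for img in images:
        img.risk_score = classifier(img.path)  # assumed: returns a 0-1 probability
    flagged = [img for img in images if img.risk_score >= review_threshold]
    return sorted(flagged, key=lambda img: img.risk_score, reverse=True)

# Stand-in classifier for demonstration; a real one would be a trained model.
fake_classifier = lambda path: 0.9 if "urgent" in path else 0.1

queue = [
    ScreeningImage("A123", "clinic1/urgent_scan.png"),
    ScreeningImage("B456", "clinic2/routine_scan.png"),
]
for case in triage(queue, fake_classifier):
    print(case.patient_id, round(case.risk_score, 2))
```

The point of the pattern is simply ordering: every image still reaches a human reviewer, but the ones most likely to need urgent care no longer wait at the back of a six-month queue.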
Freeing Up Time, Resources
Beyond diagnostics, AI can also help reduce inequities by improving health systems operations, thereby freeing up time and resources that can be redirected to patients in need.
In clinics, for instance, AI can help physicians more efficiently review patient charts or take notes during appointments. That means doctors can devote more time and energy to face-to-face patient interactions. “The burden and distraction of documentation will be reduced, so we’ll be able to focus on the treatment decision rather than spending time documenting,” said Liu. “And after all, that’s the goal.”
Eric Topol, MD, an influential medical futurist and director of the Scripps Research Translational Institute in La Jolla, attributes the intrusive presence of screens in the exam room to “an erosion of the relationship” between doctor and patient. In an interview with TIME, Topol said that “the ability to have all of the data about a person assimilated and analyzed, to have scans and slides read … liberates doctors from keyboards so they can look patients in the eye.”
Health AI and Risk Modeling
AI can also be applied to make the most of medical records — to explore insurance coverage options, predict hospital admission rates, or enhance culturally concordant care. At USC, for instance, researchers are exploring how AI can help create culturally sensitive menu options for transplant patients. “By using population data, AI can help select menus that adhere to certain constraints, be they economic, dietary, or cultural,” said Pazzani.
At Kaiser Permanente in Oakland, Liu is working to predict patients’ risk of sepsis — a severe form of infection that contributes to 20% of worldwide deaths annually and is the No. 1 killer in US hospitals. Using machine learning to comb through complex medical and social data from before a patient is admitted to a hospital, Liu and his colleagues hope to prevent a patient who is predisposed to sepsis from entering a hospital in the first place.
“Sepsis tends to happen in sicker patients and in older patients,” Liu said, explaining that data from labs, medications, and vital signs can be stitched together to form a predictive pattern. “We’re using AI to understand what signals are locked away in those data.”
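As a rough illustration of how such signals can be stitched together, the sketch below assembles a few invented lab, medication, and vital-sign values into one feature row per patient and fits a simple logistic regression risk model. The features, data, and labels are toy examples chosen for illustration, not Kaiser Permanente’s actual sepsis model.

```python
# Toy illustration (invented data, not Kaiser Permanente's model):
# combine labs, medications, and vital signs into one feature row per
# patient and fit a simple risk model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, white blood cell count (10^9/L), heart rate (bpm), on antibiotics (0/1)
X = np.array([
    [72, 14.2, 110, 1],
    [45,  6.8,  72, 0],
    [80, 16.5, 118, 1],
    [38,  7.1,  68, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = later developed sepsis (toy labels)

model = LogisticRegression().fit(X, y)

# Estimated sepsis risk for a new patient, built from the same stitched-together signals
new_patient = np.array([[68, 13.9, 105, 1]])
print("Estimated risk:", round(model.predict_proba(new_patient)[0, 1], 2))
```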
In addition to making sense of individual large data sets, AI presents the opportunity to put multiple troves of information in conversation with one another. That’s something that, if harnessed by the health care safety net, could be instrumental in reducing health disparities, said Daniel Wolfe, MPH, MPhil, executive director of the new Joint Program in Computational Precision Health at UC Berkeley and UCSF.
Potential Benefits to Medicaid Programs
Wolfe points to Medi-Cal, California’s state Medicaid program, as a complex, data-rich system that would benefit from those capabilities. That’s especially true when it comes to its implementation of the California Advancing and Innovating Medi-Cal (CalAIM) initiative, which seeks to deploy a whole-person approach to improving care for people with complex health and social needs. With AI, he said, Medi-Cal managed care plans will be able to determine which members could benefit from additional supports — such as those offered through the Enhanced Care Management program — with greater speed and accuracy. The process will involve interpreting not only health records data, but also data drawn from social services organizations, like criminal justice, housing, or substance use programs.
“There is a great potential to draw together multiple data sets to have a much richer view of what causes ill health and how to address it,” said Wolfe. “AI is really going to be a major piece of how you set thresholds, how you make the data speak to each other, and then how you take action to address what is revealed.”
For its part, the state Department of Health Care Services, which administers state Medicaid in California, said it is “currently in the very early stages of evaluating any potential applications that AI has on the Medi-Cal program.”
But in an industry as fast-paced as health tech, innovators are already finding ways that AI can improve access to America’s federal health programs, Medicaid and Medicare. Engineers at Google, for instance, added a feature to help users of the search engine navigate Medicaid reenrollment. The tech giant has also developed a tool that helps identify providers around the country that accept Medicaid or that offer care at low or no cost.
The Centers for Medicare & Medicaid Services (CMS) has also signaled it intends to rely on AI tools to both minimize costs and maximize access. At the same time, research has shown that the algorithms used by private Medicare Advantage plans tend to deny payments to seniors who need care, and that has led to calls for CMS to strengthen its supervision of the Medicare Advantage health plans.
Using AI to Manage Population Health
For Lin, the most productive way to apply AI to reducing health disparities is in population health management, where it can be used to help understand and address the health care needs of entire patient communities. “We’re trying to develop technologies that can take a look at a whole cohort of patients,” Lin said. “We don’t want to depend retrospectively on claims data, because by then, it’s way too late.”
One innovation that his team is working on uses EHR data to predict which patients are likely to visit an emergency department in the near future and to connect those individuals with the primary or specialty care teams they need to stay out of the hospital. The process not only improves patient health, he said; it also reduces the total cost of care. “Once you’re in the hospital … it’s going to cost the entire system 10 times more than if you were actually treated in primary care,” he said.
The project both illuminates and works to close care disparities, Lin said, since most of the patients identified have low incomes and come from underserved communities that have limited access to primary care.
“These are people who are being left out,” he said, “and we need to reach out to them, get them into care, and get their chronic conditions under control.”
As with any powerfully disruptive innovation, AI must be wielded with caution so that it shrinks health disparities instead of exacerbating them. And if certain communities are left out or misrepresented in data, the information that AI generates will not merely reflect existing knowledge gaps; it will amplify them. Populations with experiences that have generated mistrust of the medical system, such as Black and LGBTQ+ communities or people experiencing homelessness, may be missing from the information that is fed to computers, experts say.
“We live in a society that’s not equitable, and we’re using a snapshot of that data,” said KP’s Liu. “So even if an algorithm is unbiased, it can still produce unfair outcomes.”
Building a Framework to Address Bias in AI
At UC Davis, health system leaders are working to close those data gaps preemptively by building a framework for how to address bias in AI. They’ve found that when it comes to collecting data that asks the right questions and yields the most impactful outcomes, working in partnership with underrepresented communities is critical, said Reshma Gupta, MD, MSHPM, chief of population health and accountable care at UC Davis Health. “If we’re not cognizant of building and taking into account community and collaborative infrastructure, we’re likely to miss the keys to success in developing models that can better represent our patients’ stories. It has to be baked in,” she said.
Community engagement is something that David “Buddy” Orange, MS, senior vice president of justice, equity, diversity, and inclusion at the California Primary Care Association, hopes will be normalized in research and systems that rely on AI to address population health concerns. To advocate for policy that will eliminate racial and ethnic bias in AI, the primary care group has formed a coalition with Sutter Health, the Northern California health care system, and the California Black Health Network, a group that advances the cause of Black health equity.
“We need to eliminate the harmful effects of some of the algorithms,” said Orange, highlighting how predictive models might prioritize the care of a White patient, as well as how algorithms have been used to deny the insurance claims of people with low incomes and people of color in the past — a practice that the American Medical Association elevated as a priority concern at its annual meeting last June.
“It’s about governing how we use AI,” Orange said.
Ensuring an Equitable Future
But developments in AI are rapid. Already, the generative AI system GPT-4 has been shown to make diagnoses that are comparable to those of board-certified physicians and that show little evidence of racial or ethnic bias.
At Stanford, Lin sees algorithmic race bias as just one piece of a much larger puzzle.
“There’s a lot of focus on algorithmic bias, which is great, but algorithmic bias is probably only 20% of the threat to equity,” said Lin. “We need to focus a little bit more broadly and look beyond the model itself.” The health care establishment must consider policies around the creation, regulation, and implementation of AI models to ensure they’re being developed by diverse teams and are brought to diverse communities, he said.
“We have these amazing ideas, these incredible technologies coming out,” Lin said. “But if there are barriers to their application … then these technologies aren’t going to be used to their full potential.”