Development of a Machine Learning–Based Prescriptive Tool to Address Racial Disparities in Access to Care After Penetrating Trauma (JAMA Surgery)

Key Points

Question 
Can artificial intelligence (AI) be used to diagnose and remedy racial disparities in care?

Findings 
In this cohort study, examining the AI logic uncovered significant disparities in access to postacute care after discharge, with race (Black compared with White patients) playing the second most important role. After fairness adjustment, the disparities disappeared, and a similar percentage of Black and White patients had a recommended discharge to postacute care.

Meaning 
Interpretable machine learning methodologies are powerful tools to diagnose and remedy system-related bias in care, such as disparities in access to postinjury rehabilitation care.

Importance 
The use of artificial intelligence (AI) in clinical medicine risks perpetuating existing bias in care, such as disparities in access to postinjury rehabilitation services.

Objective 
To leverage a novel, interpretable AI-based technology to uncover racial disparities in access to postinjury rehabilitation care and create an AI-based prescriptive tool to address these disparities.

Design, Setting, and Participants 
This cohort study used data from the 2010-2016 American College of Surgeons Trauma Quality Improvement Program database for Black and White patients with a penetrating mechanism of injury. An interpretable AI methodology called optimal classification trees (OCTs) was applied in an 80:20 derivation/validation split to predict discharge disposition (home vs postacute care [PAC]). The interpretable nature of OCTs allowed for examination of the AI logic to identify racial disparities. A prescriptive mixed-integer optimization model using age, injury, and gender data was allowed to “fairness-flip” the recommended discharge destination for a subset of patients while minimizing the ratio of imbalance between Black and White patients. Three OCTs were developed to predict discharge disposition: the first 2 trees used unadjusted data (one without and one with the race variable), and the third tree used fairness-adjusted data.
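As a rough illustration of the predictive step, the sketch below fits an interpretable classification tree on an 80:20 derivation/validation split. The study used optimal classification trees (OCTs); the scikit-learn CART tree here is only an accessible stand-in, and the file name and column names are hypothetical.

```python
# Minimal sketch of the predictive step, assuming hypothetical column names.
# The study used optimal classification trees (OCTs); scikit-learn's CART
# tree is substituted here purely as an accessible, interpretable stand-in.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("tqip_penetrating.csv")  # hypothetical extract of TQIP 2010-2016

# One-hot encode categorical predictors; outcome: 1 = postacute care, 0 = home.
X = pd.get_dummies(df[["age", "injury_severity", "gender", "race"]],
                   columns=["gender", "race"])
y = df["discharge_pac"]

# 80:20 derivation/validation split, mirroring the study design.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# A shallow tree keeps the decision logic human-readable, which is what
# allows the splits (including any on race) to be audited directly.
tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))
```

Printing the tree's split logic is the step that makes the audit possible: a split on a race indicator high in the tree is direct evidence that race is driving the predicted discharge destination.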

Main Outcomes and Measures 
Disparities and the discriminative performance (C statistic) were compared between the fairness-adjusted and unadjusted OCTs.
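For a binary discharge outcome, the C statistic is equivalent to the area under the ROC curve. A minimal sketch of such a comparison, using toy values rather than study data, might look like this:

```python
# Minimal sketch of a C statistic (AUROC) comparison; the outcomes and
# predicted probabilities below are toy values, not study data.
from sklearn.metrics import roc_auc_score

y_true       = [0, 0, 1, 1, 0, 1, 0, 1]                  # 1 = discharged to PAC
p_unadjusted = [0.2, 0.4, 0.6, 0.7, 0.3, 0.5, 0.4, 0.8]  # hypothetical scores
p_adjusted   = [0.1, 0.3, 0.7, 0.8, 0.2, 0.6, 0.3, 0.9]

print(f"C statistic, unadjusted: {roc_auc_score(y_true, p_unadjusted):.2f}")
print(f"C statistic, adjusted:   {roc_auc_score(y_true, p_adjusted):.2f}")
```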

Results 
A total of 52 468 patients were included; the median (IQR) age was 29 (22-40) years, 46 189 patients (88.0%) were male, 31 470 (60.0%) were Black, and 20 998 (40.0%) were White. A total of 3800 Black patients (12.1%) were discharged to PAC, compared with 4504 White patients (21.5%; P < .001). Examining the AI logic uncovered significant disparities in PAC discharge destination access, with race playing the second most important role. The prescriptive fairness adjustment recommended flipping the discharge destination of 4.5% of the patients, with the performance of the adjusted model increasing from a C statistic of 0.79 to 0.87. After fairness adjustment, disparities disappeared, and a similar percentage of Black and White patients (15.8% vs 15.8%; P = .87) had a recommended discharge to PAC.
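The fairness adjustment itself can be sketched as a small mixed-integer program. The study's model minimizes the ratio of imbalance between Black and White patients; the simplified stand-in below instead minimizes the number of flipped recommendations subject to near-equal PAC rates across groups, using toy data and the open-source PuLP solver.

```python
# Simplified sketch of the "fairness-flip" step as a mixed-integer program.
# The study minimizes the ratio of imbalance between groups; here, as a
# hypothetical stand-in, we minimize the number of flips subject to the two
# groups' PAC rates matching within a tolerance. Data below are toy values.
import pulp

pac   = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]   # baseline recommendation: 1 = PAC
group = ["B", "B", "B", "B", "B", "W", "W", "W", "W", "W"]
n, nB, nW = len(pac), group.count("B"), group.count("W")

prob = pulp.LpProblem("fairness_flip", pulp.LpMinimize)
flip = [pulp.LpVariable(f"flip_{i}", cat="Binary") for i in range(n)]

# Adjusted recommendation pac[i] XOR flip[i], linear because pac[i] is data.
adj = [pac[i] + (1 - 2 * pac[i]) * flip[i] for i in range(n)]

prob += pulp.lpSum(flip)  # objective: flip as few recommendations as possible

# Fairness constraint: group PAC rates equal within tol (multiplied through
# by nB * nW to keep the constraint linear in the binary variables).
sum_B = pulp.lpSum(adj[i] for i in range(n) if group[i] == "B")
sum_W = pulp.lpSum(adj[i] for i in range(n) if group[i] == "W")
tol = 0.05
prob += nW * sum_B - nB * sum_W <= tol * nB * nW
prob += nB * sum_W - nW * sum_B <= tol * nB * nW

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("flipped patients:", [i for i in range(n) if flip[i].value() == 1])
```

On these toy data, the baseline PAC rates are 40% for group B and 80% for group W, and the solver equalizes them by flipping two recommendations, loosely analogous to the 4.5% of patients flipped in the study.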

Conclusions and Relevance 
In this study, we developed an accurate, machine learning–based, fairness-adjusted model that can identify barriers to discharge to postacute care. Instead of accidentally encoding bias, interpretable AI methodologies are powerful tools to diagnose and remedy system-related bias in care, such as disparities in access to postinjury rehabilitation care.
