AI and the tradeoff between fairness and efficacy: 'You actually can get both'

A recent study counters the perception that mitigating bias in machine learning models requires sacrificing accuracy.
By Kat Jercich


A recent study in Nature Machine Intelligence by researchers at Carnegie Mellon University investigated how mitigating bias in machine learning models affects their accuracy.

Despite what researchers referred to as a "commonly held assumption" that reducing disparities requires either accepting a drop in accuracy or developing new, complex methods, they found that the trade-offs between fairness and effectiveness can be "negligible in practice."  

"You actually can get both. You don't have to sacrifice accuracy to build systems that are fair and equitable," said Rayid Ghani, a CMU computer science professor and an author on the study, in a statement.

At the same time, Ghani noted, "It does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won't work."  

WHY IT MATTERS  

Ghani, along with CMU colleagues Kit Rodolfa and Hemank Lamba, focused on the use of machine learning in public policy contexts – specifically with regard to benefit allocation in education, mental health, criminal justice and housing safety programs.  

The team found that models optimized for accuracy could predict the outcomes of interest, but their intervention recommendations showed disparities across groups.

But when they adjusted the outputs of the models with an eye toward improving their fairness, they discovered that disparities based on race, age or income — depending on the situation — could be successfully removed.  

In other words, by defining the fairness goal upfront in the machine learning process and making design choices to achieve that goal, they could address slanted outcomes without sacrificing accuracy.  

"In practice, straightforward approaches such as thoughtful label choice, model design or post-modelling mitigation can effectively reduce biases in many machine learning systems," read the study.  

The researchers noted that a wide variety of fairness metrics exists, depending on the context, and that a broader exploration of fairness-accuracy trade-offs is warranted – especially when stakeholders may want to balance multiple metrics.

"Likewise, it may be possible that there is a tension between improving fairness across different attributes (for example, sex and race) or at the intersection of attributes," read the study.   

"Future work should also extend these results to explore the impact not only on equity in decision-making, but also equity in longer-term outcomes and implications in a legal context," it continued.  

The researchers noted that fairness in machine learning goes beyond the model’s predictions; it also includes how those predictions are acted on by human decision makers.   

"The broader context in which the model operates must also be considered, in terms of the historical, cultural and structural sources of inequities that society as a whole must strive to overcome through the ongoing process of remaking itself to better reflect its highest ideals of justice and equity," they wrote.  

THE LARGER TREND  

Experts and advocates have sought to shine a light on the ways that bias in artificial intelligence and ML can play out in a healthcare setting. For instance, a study this past August found that underdeveloped models may worsen COVID-19 health disparities for people of color.

And as Chris Hemphill, VP of applied AI and growth at Actium Health, told Healthcare IT News this past month, even innocuous-seeming data can reproduce bias.  

"Anything you're using to evaluate need, or any clinical measure you're using, could reflect bias," Hemphill said.  

ON THE RECORD  

"We hope that this work will inspire researchers, policymakers and data science practitioners alike to explicitly consider fairness as a goal and take steps, such as those proposed here, in their work that can collectively contribute to bending the long arc of history towards a more just and equitable society," said the CMU researchers.

 

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.
