Researchers Reduce Bias in AI Models While Maintaining or Improving Accuracy
Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on.
For instance, a model that predicts the best treatment option for someone with a chronic disease might be trained using a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model’s overall performance.
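As a rough illustration of conventional balancing, every subgroup can be downsampled to the size of the smallest one; the function and field names below are illustrative assumptions, not taken from the paper.

```python
import random

def balance_by_subgroup(dataset, group_key):
    """Downsample every subgroup to the size of the smallest one."""
    groups = {}
    for example in dataset:
        groups.setdefault(example[group_key], []).append(example)
    min_size = min(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        # sample without replacement, discarding the rest of the subgroup
        balanced.extend(random.sample(members, min_size))
    return balanced

# hypothetical imbalanced dataset: 90 male patients, 10 female patients
data = (
    [{"sex": "male", "x": i} for i in range(90)]
    + [{"sex": "female", "x": i} for i in range(10)]
)
balanced = balance_by_subgroup(data, "sex")
print(len(balanced))  # → 20: ten examples per subgroup, 80 discarded
```

The example makes the drawback concrete: to equalize the two subgroups, 80 of the 100 training examples must be thrown away.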
MIT researchers developed a new technique that identifies and removes specific points in a training dataset that contribute most to a model’s failures on minority subgroups. By removing far fewer datapoints than other approaches, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.
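A minimal sketch of the idea, on synthetic data: score each training point by how much its removal changes the model’s error on the minority subgroup, then drop only the most harmful points. The brute-force leave-one-out loop and the toy 1-nearest-neighbour classifier here are stand-ins for the researchers’ far cheaper attribution estimates; all names and data are assumptions, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic 1-D training set
maj = rng.normal(0.0, 1.0, 60)    # majority group, true label = x > 0
minr = rng.normal(3.0, 0.3, 12)   # small minority group, all labeled 1
noise = rng.normal(3.0, 0.6, 5)   # mislabeled points sitting in the
                                  # minority region but tagged 0
X = np.concatenate([maj, minr, noise])
y = np.concatenate([(maj > 0).astype(int), np.ones(12, int), np.zeros(5, int)])

# held-out minority examples for measuring subgroup performance
X_eval = rng.normal(3.0, 0.3, 40)
y_eval = np.ones(40, int)

def minority_error(train_idx):
    """Error of a 1-nearest-neighbour classifier on the minority eval set."""
    Xt, yt = X[train_idx], y[train_idx]
    nearest = np.abs(X_eval[:, None] - Xt[None, :]).argmin(axis=1)
    return float(np.mean(yt[nearest] != y_eval))

all_idx = np.arange(len(X))
base = minority_error(all_idx)

# brute-force leave-one-out "harm" score: how much does dropping point i
# reduce the minority error? (the MIT technique replaces this expensive
# retraining loop with cheap data-attribution estimates)
harm = np.array([base - minority_error(np.delete(all_idx, i)) for i in all_idx])

# remove only the k most harmful points; all other data stays
k = 5
pruned = np.delete(all_idx, np.argsort(harm)[-k:])
print(f"minority error: {base:.2f} -> {minority_error(pruned):.2f}")
```

Unlike balancing, which discards entire swaths of the majority group, this targeted pruning removes only a handful of points, so the rest of the dataset, and the model’s overall accuracy, is left largely intact.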
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
This technique could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren’t misdiagnosed due to a biased AI model.
“Many other algorithms that try to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.
She wrote the paper with co-lead authors Saachi Jain PhD ’24 and fellow EECS graduate student Kristian Georgiev