New research shows how to avoid bias in AI brain models
Artificial intelligence (AI) machine learning is a rapidly emerging brain-modeling tool for mental health research, psychiatry, neuroscience, genomics, pharmaceuticals, life sciences, and biotechnology. In a new peer-reviewed study, scientists identify potential weak points in AI brain models and offer solutions to avoid bias.
The research team, led by Abigail Greene at Yale School of Medicine with co-authors affiliated with Yale University, Brigham and Women’s Hospital, Harvard Medical School, the University of Washington, and the Department of Psychiatry at Columbia University Irving Medical Center, highlights the need to understand why AI brain-modeling algorithms do not work equally well for everyone when seeking to characterize brain-phenotype relationships without bias.
“Individual differences in the functional organization of the brain follow a range of traits, symptoms, and behaviors,” the scientists wrote. “So far, work modeling linear brain-phenotype relationships has assumed that only one of these relationships generalizes to all individuals, but the models do not perform equally well in all participants.”
They used predictive AI models, trained and validated on independent data, to relate brain activity to phenotype. In genomics, height, eye color, and hair color are examples of phenotypes.
A phenotype is how DNA manifests physically: the observable traits of an organism that result from the combination of the alleles it carries for a specific gene and its environment. An allele is a variant of a gene formed by mutation.
To determine where AI models fail, the team trained models to rank neurocognitive test performance using brain activity data. The three datasets used for the study were the Human Connectome Project, the UCLA Consortium for Neuropsychiatric Phenomics, and data collected at Yale from February 2018 through March 2021.
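To make the general approach concrete, here is a minimal sketch (not the authors' actual pipeline) of the kind of analysis involved: a linear model is fit to predict a cognitive score from brain-connectivity features on one set of participants, then evaluated on a held-out set. All data, sizes, and the ridge-regression choice here are illustrative assumptions using synthetic numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "connectivity" features: 200 participants x 50 edges (illustrative sizes)
n, p = 200, 50
X = rng.standard_normal((n, p))
true_w = rng.standard_normal(p)
score = X @ true_w + 0.5 * rng.standard_normal(n)  # simulated cognitive score

# Train and evaluate on independent subsets, as the study's models were
train, test = slice(0, 150), slice(150, None)

# Ridge regression, closed form: w = (X'X + lam*I)^-1 X'y
lam = 1.0
Xt = X[train]
w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ score[train])

# Held-out performance: correlation between predicted and observed scores
pred = X[test] @ w
r = np.corrcoef(pred, score[test])[0, 1]
print(f"held-out prediction r = {r:.2f}")
```

A high held-out correlation on average can still mask the study's key point: the model may perform well for participants who fit a typical profile and systematically fail for those who do not.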
“Across a range of data processing and analysis approaches applied to three independent datasets, we found that model failure is systematic, reliable, phenotype-specific, and generalizable across datasets, and that individuals’ scores are misclassified when they ‘surprise’ the model, operating inconsistently with the consensus covariate profile of high and low scores,” the researchers reported.
This study suggests that AI brain models capture neurocognitive constructs entangled with sociodemographic and clinical factors, yielding a stereotypical profile rather than one that generalizes well to the broader population. The researchers recommend collecting comprehensive and inclusive demographic data.
“The fact that models pick up and use stereotyped profiles is not always, in itself, a problem for studies based on brain-phenotype relationship data,” the researchers wrote.
They urge characterizing these profiles in order to identify biases and generalize models to different population samples.
“Our results suggest that brain activity-based models often predict complex profiles rather than unitary cognitive processes, underscoring the need to take these profiles into account and the influence of sample representation on them,” the researchers write.
Copyright © 2022 Cami Rosso All rights reserved.