
A University of Washington study highlighted bias in AI used to screen resumes for job applications. Across three large language models, the systems favored names associated with white candidates 85% of the time and names associated with women only 11% of the time, and names associated with Black men fared worst of all. The bias stems from existing societal privileges showing up in training data, which the models then reproduce or amplify in their decisions. Researchers Aylin Caliskan and Kyra Wilson presented their findings at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.

The study used 554 resumes and 571 job descriptions drawn from real-world documents, swapping in first names associated with different genders and races. The models showed bias along lines of gender, race, and their intersection, and the bias appeared even for jobs typically held by women. Removing names from resumes is not a sufficient fix, because the technology can infer identity from other factors, so the researchers argue developers need to produce less biased training datasets to address the issue.
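To make the name-swapping setup concrete, the following is a minimal sketch of that kind of audit, assuming a generic relevance score. The resume text, job description, names, and the word-overlap scoring function are illustrative stand-ins, not the study's actual data or models.

```python
# Minimal sketch of a name-swap resume audit (illustrative data and scoring only).
job_description = "Seeking a software engineer with Python and cloud experience."
resume_body = "Software engineer, six years of Python and cloud experience."

# Hypothetical first names associated with different demographic groups.
names = {
    "white_male": "Greg",
    "white_female": "Emily",
    "black_male": "Darnell",
    "black_female": "Lakisha",
}

def score(resume_text: str, job_text: str) -> float:
    """Stand-in relevance score based on shared-word overlap.
    A real screening system would use a model-based score instead."""
    resume_words = set(resume_text.lower().split())
    job_words = set(job_text.lower().split())
    return len(resume_words & job_words) / len(job_words)

# Identical resumes that differ only in the first name should score identically;
# a gap across name variants is the kind of disparity the study measured.
for group, name in names.items():
    resume_text = f"{name}\n{resume_body}"
    print(group, round(score(resume_text, job_description), 3))
```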

The University of Washington researchers focused on open-source massive text embedding (MTE) models built on large language models from Salesforce, Contextual AI, and Mistral. These models convert documents into numerical representations that make comparisons easier. Previous studies have investigated bias in foundation LLMs, but few have examined MTEs in this context. Spokespeople for Salesforce and Contextual AI noted that the models used were not intended for real-world employment screening, and Salesforce said it conducts rigorous testing for toxicity and bias in its commercial AI offerings.
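For readers unfamiliar with how an embedding model ranks resumes against a posting, here is a brief sketch using cosine similarity. The model named below is a small public stand-in, not one of the Salesforce, Contextual AI, or Mistral models the study evaluated, and the resume and job texts are invented.

```python
# Sketch: rank resumes against a job description with an embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed public embedding model used only as a stand-in for the study's MTEs.
model = SentenceTransformer("all-MiniLM-L6-v2")

job_description = "Seeking a registered nurse with ICU experience."
resumes = [
    "Emily Walsh. Registered nurse, eight years of ICU experience.",
    "Darnell Jackson. Registered nurse, eight years of ICU experience.",
]

# Encode the job description and resumes into fixed-length vectors.
vectors = model.encode([job_description] + resumes, normalize_embeddings=True)
job_vec, resume_vecs = vectors[0], vectors[1:]

# With normalized vectors, the dot product equals cosine similarity;
# higher scores mean the resume is ranked as a closer match to the posting.
for text, vec in zip(resumes, resume_vecs):
    print(round(float(np.dot(job_vec, vec)), 4), text.split(".")[0])
```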

To address bias in AI hiring systems, California and New York City have implemented laws to safeguard against discrimination. California has made intersectionality a protected characteristic, and New York City requires companies using AI hiring systems to disclose how those systems perform. However, exemptions exist when humans are involved in the process. Wilson's further research will focus on how human decision makers interact with AI systems, since people may trust technology more than other humans, potentially amplifying bias in selections.

Overall, the study revealed significant bias in AI models used to screen resumes: the technology favored white male candidates and discriminated along lines of gender and race. The central challenge is addressing bias in training datasets so that AI-assisted hiring decisions are fair. California and New York City have enacted laws targeting discrimination in AI hiring systems, but further research and effort are needed to mitigate bias in these technologies, and Wilson's future work on how human decision makers interact with AI systems should help clarify where bias enters the selection process.
