The use of artificial intelligence in recruitment has become increasingly common, with tools like OpenAI’s ChatGPT used to screen and rank candidates based on their resumes. But University of Washington graduate student Kate Glazko noticed that these tools may be amplifying biases, particularly against disabled people. A study by UW researchers found that ChatGPT consistently ranked resumes with disability-related credentials lower than those without, drawing on stereotyped perceptions of disabled people. For example, it said a resume with an autism leadership award had “less emphasis on leadership roles,” echoing the stereotype that autistic people make poor leaders.

To address this bias, the researchers customized the tool with written instructions directing it not to be ableist. Rankings improved for five of the six disabilities tested, but only three of the enhanced resumes ranked higher than the control resume that did not mention disability. The team presented its findings at the 2024 ACM Conference on Fairness, Accountability, and Transparency in Rio de Janeiro, underscoring the importance of scrutinizing biases in AI systems used for hiring.

Using one author’s publicly available CV, the researchers created enhanced versions implying six different disabilities by adding disability-related credentials such as scholarships, awards, DEI panel seats, and membership in a student organization. They then used ChatGPT’s GPT-4 model to rank each enhanced CV against the original for a real job listing at a large, U.S.-based software company. Even though the added credentials should have strengthened each CV on paper, GPT-4 ranked the enhanced versions first only a quarter of the time, and its rankings were inconsistent from run to run.
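
The paper’s exact prompts aren’t reproduced here, but a minimal sketch of this kind of pairwise ranking experiment, using the OpenAI Python client, might look like the following; the prompt wording and file names are illustrative assumptions, not the study’s actual materials.

```python
# Minimal sketch of the CV-ranking setup described above, using the
# OpenAI Python client. The prompt wording, file names, and pairwise
# comparison format are illustrative assumptions, not the study's
# actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

job_listing = open("job_listing.txt").read()
original_cv = open("cv_original.txt").read()
enhanced_cv = open("cv_autism_leadership_award.txt").read()  # hypothetical file

prompt = (
    "You are screening applicants for the following job listing:\n\n"
    f"{job_listing}\n\n"
    "Rank the two CVs below from strongest to weakest fit for the role, "
    "and briefly explain your ranking.\n\n"
    f"CV A:\n{original_cv}\n\n"
    f"CV B:\n{enhanced_cv}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Repeating such a comparison many times and tallying how often each CV comes out on top is what exposes the skew: since the enhanced CV only gains credentials, it should win nearly every run.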

When the researchers asked GPT-4 to explain its rankings, it exhibited both explicit and implicit ableism, judging candidates by their disabilities rather than by their skills and qualifications. To counter this, the team used the GPTs Editor tool to customize GPT-4 with instructions not to exhibit ableist biases. The customization reduced bias overall, but bias persisted for certain disabilities, notably autism and depression.
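
The study applied its instructions through ChatGPT’s GPTs Editor interface rather than through code; continuing the sketch above, a rough API-level analogue is to prepend a system message to the same ranking request. The instruction text here is an assumed paraphrase, not the study’s actual wording.

```python
# Rough programmatic analogue of the study's GPTs Editor customization:
# a system message telling the model not to exhibit ableist bias.
# The instruction text is an assumed paraphrase, not the study's wording.
debiasing_instructions = (
    "You are a fair hiring assistant and must not exhibit ableist bias. "
    "Evaluate candidates solely on skills, experience, and qualifications. "
    "Disability-related awards, scholarships, and leadership roles count "
    "as the same evidence of merit as any other credential."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": debiasing_instructions},
        # `prompt` is the same job-listing-plus-CVs request built in the
        # previous sketch.
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```

As the study’s results suggest, such instructions blunt rather than eliminate the bias, so before-and-after rankings still need to be compared empirically.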

The study also emphasized the importance of organizations working to improve outcomes for disabled job seekers, who face biases regardless of AI usage in hiring processes. More research is needed to document and address AI biases, including testing other systems, exploring intersections of bias with other attributes like gender and race, and investigating further customization to reduce biases consistently across disabilities. The researchers hope to contribute to a larger conversation around equitable and fair implementation of technology for all individuals, not just those with disabilities.
