Weather     Live Markets

Summarize this content to 2000 words in 6 paragraphs

Textio co-founder and former CEO Kieran Snyder. (Photo courtesy of Kieran Snyder)

Understanding bias in workplace communication, whether it’s in job descriptions, performance feedback or elsewhere, was a founding objective of Textio, the Seattle-based augmented writing startup.

Co-founder Kieran Snyder stepped away as CEO of the 11-year-old company a year ago, but she’s still hard at work analyzing the impact of bias, especially as it relates to the current rise of large language models and generative AI.

Snyder launched a website last February called Nerd Processor in which the linguistics PhD shares her data stories, revisits prior research and discusses new research.

On a new episode of the “Shift AI” podcast, Snyder discussed her views of the evolving landscape of AI in workplace communications.

She revealed details of an experiment she ran in which she asked ChatGPT to write sample performance feedback for a digital marketer who had a tough first year on the job who went to Harvard University, and also a digital marketer who had a tough first year on the job who went to Howard University, the prominent historically Black college and university.

“I did hundreds of queries where the only difference was the alma mater, Harvard versus Howard. And it was fascinating,” Snyder said (12:00 mark below). “The development areas that the system imagines will be needed for people who go to Harvard are things like ‘you should step up to lead more.’ But the development areas it imagines for the Howard alums are things like, ‘you don’t have good attention to detail; you have missing technical skills.’”

While Snyder said those can be valid feedback comments, and it would be difficult to look at any one document from the experiment and put your finger on the bias, looking at the data in aggregate tells a different story. The types of feedback that the system associates with people who went to the historically black college and university are much more functional and fundamental in nature.

She told “Shift AI” host Boaz Ashkenazy that it was a perfect example of how building a data set with this kind of bias in mind from the start just produces samples that propagate the bias.

Listen to the full episode below, and subscribe to the Shift AI Podcast and hear more episodes at ShiftAIPodcast.com.

Share.
Exit mobile version