ChatGPT may be shifting ‘rightward’ in political bias, study finds
ChatGPT is seeing a rightward shift on the political spectrum in how it responds to user queries, a new study has found.
Chinese researchers have found that ChatGPT, OpenAI’s popular artificial intelligence (AI) chatbot, is seeing a rightward shift in political values.

The study, published in the journal Humanities and Social Science Communications, asked several models of ChatGPT 62 questions from the Political Compass Test, an online website that places users on the political spectrum based on their answers. The researchers then repeated the questions over 3,000 times with each model to track how its responses changed over time.

While ChatGPT still maintains “libertarian left” values, they found that models like GPT-3.5 and GPT-4 “show[ed] a significant rightward tilt” in how they answered questions over time. The results are “noteworthy given the widespread use of large language models (LLMs) and their potential influence on societal values,” the study authors said.

The Peking University study builds on others published in 2024 by the Massachusetts Institute of Technology (MIT) and the UK’s Centre for Policy Studies. Both reports pointed to a political left-leaning bias in the answers given by LLMs and so-called reward models, types of LLMs trained on human preference data. The authors note that these previous studies did not examine how the answers of AI chatbots changed over time when a similar set of questions was asked repeatedly.

AI models should be under ‘continuous scrutiny’

The researchers offer three theories for this rightward shift: a change in the datasets used to train the models, the number of interactions with users, or changes and updates to the chatbot itself.

Models like ChatGPT “continually learn and adapt based on user feedback,” so their rightward shift might “reflect broader societal shifts in political values,” the study continued. Polarising world events, like the Russia-Ukraine war, could also amplify what users ask the LLMs and the answers they get in return.

If left unchecked, the researchers warned, AI chatbots could begin to deliver “skewed information,” which could further polarise society or create “echo chambers” that reinforce a user’s existing beliefs. The way to counter these effects, the study authors said, is to subject AI models to “continuous scrutiny” through audits and transparency reports to ensure a chatbot’s answers are fair and balanced.