{"id":192453,"date":"2025-02-04T05:23:46","date_gmt":"2025-02-04T05:23:46","guid":{"rendered":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-the-problem-of-fairness-in-an-automated-world\/"},"modified":"2025-02-04T05:23:46","modified_gmt":"2025-02-04T05:23:46","slug":"rewrite-this-title-in-arabic-the-problem-of-fairness-in-an-automated-world","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-the-problem-of-fairness-in-an-automated-world\/","title":{"rendered":"The problem of fairness in an automated world"},"content":{"rendered":"<p>What does it mean for a machine\u2019s decision to be \u201cfair\u201d? So far, the public debate has focused mostly on the issue of bias and discrimination. That is understandable: most people would expect machines to be less biased than humans (indeed, this is often given as the rationale for using them in processes such as recruitment), so it is right to pay attention to evidence that they can be biased, too.<\/p>\n<p>But the word \u201cfair\u201d has a lot of interpretations, and \u201cunbiased\u201d is only one of them. I found myself on the receiving end of an automated decision recently which made me think about what it really means to feel that you have been treated justly, and how hard it might be to hold on to those principles in an increasingly automated world.<\/p>\n<p>I have a personal Gmail account which I use for correspondence about a book project I am working on. I woke up one morning in November to discover that I could no longer access it. A message from Google said my access had been \u201crestricted globally\u201d because \u201cit looks as though Gmail has been used to send unwanted content. 
Spamming is a violation of Google\u2019s policies.\u201d The note said the decision had been made by \u201cautomatic processing\u201d and that if I thought it was a mistake, I could submit an appeal.<\/p>\n<p>I had not sent any spam and couldn\u2019t imagine why Google\u2019s algorithm thought that I had. That made it hard to know what to write in the \u201cappeal\u201d text box, other than a panicked version of something like, \u201cI didn\u2019t do it (whatever it is)!\u201d and, \u201cPlease help, I really need access to my email and my files\u201d. (To my relief, I realised later that I hadn\u2019t lost access to my drive.)<\/p>\n<p>Two days later, I heard back: \u201cAfter reviewing your appeal, your account\u2019s access remains restricted for this service.\u201d I wasn\u2019t given any more information on what I had supposedly done or why the appeal had been rejected, but was told that \u201cif you disagree with this decision, you can submit another appeal.\u201d I tried again and was rejected again. I did this a few more times \u2014 curious, at this point, about how long this doom loop could continue. A glance at Reddit suggested other people had been through similar things. Eventually, I gave up. (Google declined to comment on the record.)<\/p>\n<p>Among regulators, one popular answer to the question of how to make automated decisions more \u201cfair\u201d is to insist that people can request a human to review them. But how effective is this remedy? For one thing, humans are prone to \u201cautomation complacency\u201d \u2014 a tendency to trust the machine too much. 
In the case of the UK\u2019s Post Office scandal, for example, where sub-postmasters were wrongly accused of theft because of a faulty computer system called Horizon, a judge in 2019 concluded that people at the Post Office displayed \u201ca simple institutional obstinacy or refusal to consider any possible alternatives to their view of Horizon\u201d.<\/p>\n<p>Ben Green, an expert on algorithmic fairness at the University of Michigan, says there can be practical problems in some organisations, too. \u201cOften times the human overseers are on a tight schedule \u2014 they have many cases to review,\u201d he told me. \u201cA lot of the cases I\u2019ve looked at are instances where the decision is based on some sort of statistical prediction,\u201d he said, but \u201cpeople are not very good at making those predictions, so why would they be good at evaluating them?\u201d<\/p>\n<p>Once my impotent rage about my email had simmered down, I found I had a certain amount of sympathy with Google. With so many customers, an automated system is the only practical way to detect breaches of its policies. And while it felt deeply unfair to have to plead my case without knowing what had triggered the system, nor any explanation of pitfalls to avoid in an appeal, I could also see that the more detail Google offered about the way the system worked, the easier it would be for bad actors to get around it.<\/p>\n<p>But this is the point. In increasingly automated systems, the goal of procedural justice \u2014 that people feel the process has been fair to them \u2014 often comes into conflict with other goals, such as the need for efficiency, privacy or security. There is no easy way to make those trade-offs disappear.<\/p>\n<p>As for my email account, when I decided to write about my experience for this column, I emailed Google\u2019s press office with the details to see if I could discuss the issue. By the end of the day, my access to my email account had been restored. 
I was pleased, of course, but I don\u2019t think many people would see that as particularly fair either.<\/p>\n<p>sarah.oconnor@ft.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What does it mean for a machine\u2019s decision to be \u201cfair\u201d? So far, the public debate has focused mostly on the issue of bias and discrimination.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-192453","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/192453","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=192453"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/192453\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=192453"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=192453"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=192453"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}