{"id":228768,"date":"2025-03-04T08:23:21","date_gmt":"2025-03-04T08:23:21","guid":{"rendered":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-students-must-learn-to-be-more-than-mindless-machine-minders\/"},"modified":"2025-03-04T08:23:21","modified_gmt":"2025-03-04T08:23:21","slug":"rewrite-this-title-in-arabic-students-must-learn-to-be-more-than-mindless-machine-minders","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-students-must-learn-to-be-more-than-mindless-machine-minders\/","title":{"rendered":"Students must learn to be more than mindless \u2018machine-minders\u2019"},"content":{"rendered":"<p>University students have taken to artificial intelligence in the same way that an anxious new driver with a crumpled road map might take to satnav \u2014 that is to say, hungrily and understandably. A survey of UK undergraduates by the Higher Education Policy Institute think-tank shows 92 per cent of them are using generative AI in some form this year compared with 66 per cent last year, while 88 per cent have used it in assessments, up from 53 per cent last year.<\/p>\n<p>What should universities do? My instinct would be to lean in. Tell your students you will be giving the same essay question to a tool such as ChatGPT. They will be marked on how much better their version is than the machine\u2019s: how much more original, creative, perceptive or accurate. Or give them the AI version and tell them to improve upon it, as well as to identify and correct its hallucinations.<\/p>\n<p>After all, your students\u2019 prospects in the world of work are going to depend on how much value they can add, over and above what a machine can spit out. 
What\u2019s more, studies of AI use at work suggest these editing and supervising tasks will become increasingly common. A Microsoft study published this year on knowledge workers\u2019 use of generative AI found the tool had changed \u201cthe nature of critical thinking\u201d from \u201cinformation gathering to information verification\u201d, from \u201cproblem-solving to AI response integration\u201d and from \u201ctask execution to task stewardship\u201d.<\/p>\n<p>But like many pleasingly neat solutions to complex problems, mine turns out to be a terrible idea. Maria Abreu, a professor of economic geography at Cambridge university, told me her department had experimented along these lines. But when they gave undergraduates an AI text and asked them to improve it, the results were disappointing. \u201cThe improvements were very cosmetic, they didn\u2019t change the structure of the arguments,\u201d she said. Masters students did better, perhaps because they had already honed the ability to think critically and structure arguments. \u201cThe worry is, if we don\u2019t train them to do their own thinking, are they going to then not develop that ability?\u201d After the pandemic prompted a shift to assessments in which students had access to the internet, Abreu\u2019s department is now going back to closed exam conditions.<\/p>\n<p>Michael Veale, an associate professor at University College London\u2019s law faculty, told me his department had returned to using more traditional exams, too. Veale, who is an expert on technology policy, sees AI as a \u201cthreat to the learning process\u201d because it offers an alluring short-cut to students who are pressed for time and anxious to get good marks. \u201cWe\u2019re worried. Our role is to warn them of these short-cuts \u2014 short-cuts that limit their potential. 
We want them to be using the best tools for the job in the workplace when the time comes, but there\u2019s a time for that, and that time isn\u2019t always at the beginning,\u201d he says.<\/p>\n<p>This concern doesn\u2019t just apply to essay-based subjects. A study of novice programmers published in the ACM Digital Library found that students with better grades used generative AI tools smartly to \u201caccelerate towards a solution\u201d. Others did poorly and probably gained misconceptions, but maintained \u201can unwarranted illusion of competence\u201d thanks to the AI.<\/p>\n<p>We might soon see the same patterns at work. The knowledge workers study by Microsoft (which is making a huge push to get AI into workplaces) found generative AI tools \u201creduce the perceived effort of critical thinking while also encouraging over-reliance on AI\u201d. Of course, this is nothing new. In 1983, Lisanne Bainbridge put her finger on the problem in a famous paper called \u201cIronies of Automation\u201d. She argued that humans asked to be \u201c\u2018machine-minding\u2019 operators\u201d would find their skills and knowledge atrophy through lack of regular use, making it harder for them to intervene when they needed to.<\/p>\n<p>In many cases, that has been fine. People embraced satnav and forgot how to navigate properly. The world didn\u2019t end. But it won\u2019t be fine for everyone to uncritically swallow often-faulty AI output across a vast range of work tasks.<\/p>\n<p>How to avoid this future? As with the programming students, it appears the answer is to know your stuff: the Microsoft study found that people with higher self-confidence \u2014 who knew they could perform the task without AI if they wanted to \u2014 applied more critical thought.<\/p>\n<p>The researchers concluded that \u201ca focus on maintaining foundational skills in information gathering and problem-solving would help workers avoid becoming overreliant on AI\u201d. 
In other words, to use the short-cut effectively rather than mindlessly, you need to know how to do it without the short-cut. Universities \u2014 and students \u2014 take note.<\/p>\n<p>sarah.oconnor@ft.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>University students have taken to artificial intelligence in the same way that an anxious new driver with a crumpled road map might take to satnav \u2014<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-228768","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/228768","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=228768"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/228768\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=228768"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=228768"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=228768"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}