
University students have taken to artificial intelligence in the same way that an anxious new driver with a crumpled road map might take to satnav — that is to say, hungrily and understandably. A survey of UK undergraduates by the Higher Education Policy Institute think-tank shows 92 per cent of them are using generative AI in some form this year compared with 66 per cent last year, while 88 per cent have used it in assessments, up from 53 per cent last year.

What should universities do? My instinct would be to lean in. Tell your students you will be giving the same essay question to a tool such as ChatGPT. They will be marked on how much better their version is than the machine's: how much more original, creative, perceptive or accurate. Or give them the AI version and tell them to improve upon it, as well as to identify and correct its hallucinations.

After all, your students' prospects in the world of work are going to depend on how much value they can add, over and above what a machine can spit out. What's more, studies of AI use at work suggest these editing and supervising tasks will become increasingly common. A Microsoft study published this year on knowledge workers' use of generative AI found the tool had changed "the nature of critical thinking" from "information gathering to information verification", from "problem-solving to AI response integration" and from "task execution to task stewardship".

But like many pleasingly neat solutions to complex problems, mine turns out to be a terrible idea. Maria Abreu, a professor of economic geography at Cambridge university, told me her department had experimented along these lines. But when they gave undergraduates an AI text and asked them to improve it, the results were disappointing.
"The improvements were very cosmetic, they didn't change the structure of the arguments," she said. Masters students did better, perhaps because they had already honed the ability to think critically and structure arguments. "The worry is, if we don't train them to do their own thinking, are they going to then not develop that ability?" After the pandemic prompted a shift to assessments in which students had access to the internet, Abreu's department is now going back to closed exam conditions.

Michael Veale, an associate professor at University College London's law faculty, told me his department had returned to using more traditional exams, too. Veale, who is an expert on technology policy, sees AI as a "threat to the learning process" because it offers an alluring short-cut to students who are pressed for time and anxious to get good marks. "We're worried. Our role is to warn them of these short-cuts — short-cuts that limit their potential. We want them to be using the best tools for the job in the workplace when the time comes, but there's a time for that, and that time isn't always at the beginning," he says.

This concern doesn't just apply to essay-based subjects. A study of novice programmers published in the ACM Digital Library found that students with better grades used generative AI tools smartly to "accelerate towards a solution". Others did poorly and probably gained misconceptions, but maintained "an unwarranted illusion of competence" thanks to the AI.

We might soon see the same patterns at work. The knowledge workers study by Microsoft (which is making a huge push to get AI into workplaces) found generative AI tools "reduce the perceived effort of critical thinking while also encouraging over-reliance on AI". Of course, this is nothing new. In 1983, Lisanne Bainbridge put her finger on the problem in a famous paper called "Ironies of Automation".
She argued that humans asked to be "'machine-minding' operators" would find that their skills and knowledge atrophied through lack of regular use, making it harder for them to intervene when they needed to.

In many cases, that has been fine. People embraced satnav and forgot how to navigate properly. The world didn't end. But it won't be fine for everyone to uncritically swallow often-faulty AI output across a vast range of work tasks.

How to avoid this future? As with the programming students, it appears the answer is to know your stuff: the Microsoft study found that people with higher self-confidence — who knew they could perform the task without AI if they wanted to — applied more critical thought.

The researchers concluded that "a focus on maintaining foundational skills in information gathering and problem-solving would help workers avoid becoming overreliant on AI". In other words, to use the short-cut effectively rather than mindlessly, you need to know how to do it without the short-cut. Universities — and students — take note.

© 2025 Globe Time Line. All rights reserved.