To seize AI's benefits, ministers must be prepared to fail in public

Good morning. DeepSeek, a small Chinese artificial intelligence lab, has stunned the world and sent US tech stocks tumbling after unveiling its new R1 reasoning model. Its lower cost relative to rivals has called into question the business model of some of the big early players in AI.

But those lower overheads are good news for governments, including Keir Starmer's, that hope AI will allow them to trim the costs of public administration. If there is "no moat" in AI, then governments stand to save money on staffing without losing the same money to the high cost of one company's large language model or another's.

Ministers in Rishi Sunak's previous government and in Starmer's rave about certain private companies and the products they make. But Sunak recognised, rightly, that government needed to build its own sovereign capacity in AI, hence what is likely to be his biggest positive legacy to the UK: the Incubator for Artificial Intelligence (i.AI). The AI unit in the Cabinet Office was launched in November 2023 in the hope of speeding up how information is compiled from different departments and allowing officials to make faster decisions. DeepSeek's success is a positive indicator for the UK and for all governments that want to do the same.

In addition to costs, the development of AI will also involve some big cultural changes in government. Some thoughts on one of them in today's note.

You beta you bet

Last week, the government unveiled a series of new tools that, it says, will make government run faster for less. Consult is designed to use AI to reduce the cost of running consultations with stakeholders, cutting the time spent on a civil service task that ordinarily takes hours and hours. Ministers launch about 700-800 consultations a year. Parlex and Lex (no, not that Lex) both support research into legislation and also model how MPs might react to a policy issue.

All of these and more are designed to do the same thing: reduce the amount of time the civil service spends on administrative tasks that do not require a human brain, and reduce the amount of policy work done on projects that will never be signed off by a special adviser or a minister. Although the technology is new, the spin is very, very old, in that these are all the fruits of things Sunak established as prime minister. Taking advantage of the good things your predecessor did is a long-standing political trick.

The tools are also pretty much all at either the alpha stage (ie "we aren't sure if this works") or the beta stage (ie "we don't know how much help it actually is now we've got it to work"). Part of seizing the benefits of AI, and of digital government more broadly, is transparency about what you're doing, why you're doing it, and what is and isn't working. And one of the unalloyed positives about digital government in the UK has been the capacity not only to publish data on the outcomes of government efforts but also to show its working.

A good thing about i.AI is that it has had a long history of failures: if you are not failing most of the time, you either a) aren't really innovating, or b) aren't being transparent about what you are doing, or, worst of all, c) are releasing a lot of projects for public use that don't work, to save face. There's an inevitable tension here that goes right to the heart of why governments are less nimble than private companies.
As a taxpayer, I want both national and local government to innovate, experiment and be prepared to get things wrong in order to deliver better public services. As someone who uses public services, I want those services to work first time. And governments of all stripes are incentivised not to fail publicly, even at the development stage. As a result, government programmes are often "doomed to succeed": something that is announced, whether as an old-fashioned pilot programme or a newfangled beta test, is incentivised to progress to a full rollout. That's because governments are beaten up for failing. The negative headlines arising from the government's difficulties in getting AI models that actually work at the Department for Work and Pensions are a case in point (as reported by Robert Booth over at the Guardian).

My concern here isn't that the government has had many unsuccessful pilots: that is a good thing, and part of government seizing the benefits of AI is going to be trying things at pilot stage that don't work. My worry is that the story emerged through a freedom of information request. Government needs to shake off its old, old addiction to failing in secrecy if it is to get the benefits of sharing knowledge about how AI works with households and businesses, and if we are to trust how governments use AI.

Now try this

This week, I mostly listened to Emma Rawicz's Chroma while writing my column.

Top stories today

CMA chair sets out stall | The UK's competition regulator will speed up its investigations to try to unlock greater investment into the country, the newly installed chair of the CMA has said. Read Doug Gurr's op-ed in the FT here.

Troubled target | Britain is on course to fall far short of its targets for developing new solar and wind power despite the government's efforts to lift barriers and boost the industry, according to new analysis.

Flights of fancy? | One of the UK's most expensive and controversial infrastructure projects could be back on the table. Aviation industry executives believe a third runway at Heathrow is so politically and technically challenging that Heathrow will want Rachel Reeves to go beyond simply signalling her approval for the idea of a third runway this week.

New extremes | The government is set to reject internal advice to widen the definition of extremism to include potentially violent environmentalists, the far left, conspiracy theorists and men prejudiced against women, the BBC reports. The broadcaster was told that Yvette Cooper did not agree with the central findings of a rapid "sprint" report she commissioned last year. Leaked sections of the report, published by Policy Exchange, recommended the government's counter-extremism strategy shift focus to "behaviours of concern" (including involvement in the "manosphere") rather than "ideologies".