{"id":205593,"date":"2025-02-13T16:14:00","date_gmt":"2025-02-13T16:14:00","guid":{"rendered":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-make-ai-safe-again\/"},"modified":"2025-02-13T16:14:00","modified_gmt":"2025-02-13T16:14:00","slug":"rewrite-this-title-in-arabic-make-ai-safe-again","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-make-ai-safe-again\/","title":{"rendered":"Make AI safe again"},"content":{"rendered":"<p>When the Chernobyl nuclear power plant exploded in 1986 it was a catastrophe for those who lived nearby in northern Ukraine. But the accident was also a disaster for a global industry pushing nuclear energy as the technology of the future. The net number of nuclear reactors has pretty much flatlined ever since, as nuclear power came to be seen as unsafe. What would happen today if the AI industry suffered an equivalent accident?<\/p>\n<p>That question was posed on the sidelines of this week\u2019s AI Action Summit in Paris by Stuart Russell, a professor of computer science at the University of California, Berkeley. His answer was that it was a fallacy to believe there has to be a trade-off between safety and innovation, so those most excited by the promise of AI technology should still proceed carefully. \u201cYou cannot have innovation without safety,\u201d he said.<\/p>\n<p>Russell\u2019s warning was echoed by some other AI experts in Paris. \u201cWe have to have minimum safety standards agreed globally. 
We need to have these in place before we have a major disaster,\u201d Wendy Hall, director of the Web Science Institute at the University of Southampton, told me.<\/p>\n<p>But such warnings were mostly on the margins, as the summit\u2019s governmental delegates milled around the cavernous Grand Palais. In a punchy speech, JD Vance emphasised the national security imperative of leading in AI. America\u2019s vice-president argued that the technology would make us \u201cmore productive, more prosperous, and more free\u201d. \u201cThe AI future will not be won by hand-wringing about safety,\u201d he said.<\/p>\n<p>Whereas the first international AI summit at Bletchley Park in Britain in 2023 focused almost entirely \u2014 most said excessively \u2014 on safety issues, the priority in Paris was action, as President Emmanuel Macron trumpeted big investments in the French tech industry. \u201cThe process that was started in Bletchley, which was I think really amazing, was guillotined here,\u201d Max Tegmark, president of the Future of Life Institute, which co-hosted a fringe event on safety, told me.<\/p>\n<p>What most concerns safety campaigners is the speed at which the technology is developing and the dynamics of the corporate \u2014 and geopolitical \u2014 race to achieve artificial general intelligence, when computers might match humans across all cognitive tasks. Several leading AI research companies, including OpenAI, Google DeepMind, Anthropic and China\u2019s DeepSeek, have an explicit mission to attain AGI.<\/p>\n<p>Later in the week, Dario Amodei, co-founder and chief executive of Anthropic, predicted that AGI would most likely be achieved in 2026 or 2027. \u201cThe exponential can catch us by surprise,\u201d he said. Alongside him, Demis Hassabis, co-founder and chief executive of Google DeepMind, was more cautious, forecasting a 50 per cent probability of achieving AGI within five years. \u201cI would not be shocked if it was shorter. 
I would be shocked if it was longer than 10 years,\u201d he said.<\/p>\n<p>Critics of the safety campaigners portray them as science fiction fantasists who believe that the creation of an artificial superintelligence will result in human extinction: hand-wringers standing like latter-day Luddites in the way of progress. But safety experts are concerned by the damage that can be wrought by the extremely powerful AI systems that exist today and by the danger of massive AI-enabled cyber- or bio-weapons attacks. Even leading researchers admit they do not fully understand how their models work, creating security and privacy concerns.<\/p>\n<p>A research paper on sleeper agents from Anthropic last year found that some foundation models could trick humans into believing they were operating safely. For example, models that were trained to write secure code in 2023 could insert exploitable code when the year was changed to 2024. Such backdoor behaviour was not detected by Anthropic\u2019s standard safety techniques. The possibility of an algorithmic Manchurian candidate lurking in China\u2019s DeepSeek model has already led to it being banned by several countries.<\/p>\n<p>Tegmark is optimistic, though, that both AI companies and governments will see the overwhelming self-interest in re-prioritising safety. Neither the US, China nor anyone else wants AI systems out of control. \u201cAI safety is a global public good,\u201d Xue Lan, dean of the Institute for AI International Governance at Tsinghua University in Beijing, told the safety event.<\/p>\n<p>In the race to exploit the full potential of AI, the best motto for the industry might be that of the US Navy Seals, not noted for much hand-wringing. 
\u201cSlow is smooth, and smooth is fast.\u201d<\/p>\n<p>john.thornhill@ft.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When the Chernobyl nuclear power plant exploded in 1986 it was a catastrophe for those who lived nearby in northern Ukraine. But the accident was also<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-205593","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/205593","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=205593"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/205593\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=205593"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=205593"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=205593"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}