{"id":109411,"date":"2024-06-07T09:00:44","date_gmt":"2024-06-07T09:00:44","guid":{"rendered":"https:\/\/globeecho.com\/ar\/tech\/rewrite-this-title-in-arabic-silicon-valley-in-uproar-over-californian-ai-safety-bill\/"},"modified":"2024-06-07T09:00:44","modified_gmt":"2024-06-07T09:00:44","slug":"rewrite-this-title-in-arabic-silicon-valley-in-uproar-over-californian-ai-safety-bill","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-silicon-valley-in-uproar-over-californian-ai-safety-bill\/","title":{"rendered":"Silicon Valley in uproar over Californian AI safety bill"},"content":{"rendered":"<p>Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to a strict safety framework, including creating a \u201ckill switch\u201d to turn off their powerful AI models, in a growing battle over regulatory control of the cutting-edge technology.<\/p>\n<p>The Californian legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, including the three largest AI start-ups, OpenAI, Anthropic and Cohere, as well as large language models run by Big Tech companies such as Meta. The bill, passed by the state\u2019s Senate last month and set for a vote by its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with \u201ca hazardous capability\u201d, such as creating biological or nuclear weapons or aiding cyber security attacks.<\/p>\n<p>
Developers would be required to report on their safety testing and introduce a so-called kill switch to shut down their models, according to the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act.<\/p>\n<p>But the law has become the focus of a backlash from many in Silicon Valley because of claims it will force AI start-ups to leave the state and prevent platforms such as Meta from operating open-source models. \u201cIf someone wanted to come up with regulations to stifle innovation, one could hardly do better,\u201d said Andrew Ng, a renowned computer scientist who led AI projects at Alphabet\u2019s Google and China\u2019s Baidu, and who sits on Amazon\u2019s board. \u201cIt creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.\u201d<\/p>\n<p>The rapid growth and huge potential of AI have prompted concerns about the safety of the technology, with billionaire Elon Musk, an early investor in ChatGPT-maker OpenAI, calling it an \u201cexistential threat\u201d to humanity last year. This week, a group of current and former OpenAI staffers published an open letter warning that \u201cfrontier AI companies\u201d do not have sufficient oversight from governments and pose \u201cserious risks\u201d to humanity.<\/p>\n<p>The Californian bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk\u2019s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.<\/p>\n<p>Democratic state senator Scott Wiener, who introduced the legislation, said: \u201cFundamentally I want AI to succeed and innovation to continue, but let\u2019s try and get out ahead of any safety risks.\u201d He added it was a \u201clight-touch bill\u2009.\u2009.\u2009.
that simply asks developers training huge models to perform basic safety evaluations to identify large risks and to take reasonable steps to mitigate those risks.\u201d<\/p>\n<p>But critics have accused Wiener of being overly restrictive and placing a costly compliance burden on developers, particularly at smaller AI companies. Opponents also claim the bill focuses on hypothetical risks that add an \u201cextreme\u201d liability risk on founders.<\/p>\n<p>Among the fiercest criticisms is that the bill will harm open-source AI models \u2014 in which developers make source code freely available to the public, allowing others to build on top of them \u2014 such as Meta\u2019s flagship LLM, Llama. The bill would make developers of open models potentially liable for bad actors who manipulate their models to cause harm.<\/p>\n<p>Arun Rao, lead product manager for generative AI at Meta, said in a post on X last week that the bill was \u201cunworkable\u201d and would \u201cend open source in [California].\u201d<\/p>\n<p>\u201cThe net tax impact by destroying the AI industry and driving companies out could be in the billions, as both companies and highly paid workers leave,\u201d he added.<\/p>\n<p>Wiener said of the criticism: \u201cThis is the tech sector, it doesn\u2019t like to have any regulation, so it\u2019s not surprising to me at all that there would be pushback.\u201d Some of the responses were \u201cnot fully accurate\u201d, he said, adding he was planning to make amendments to the bill that would clarify its scope. The proposed amendments state that open-source developers will not be liable for models \u201cthat undergo lots of fine-tuning\u201d, meaning that if an open-source model is then sufficiently customised by a third party, it is no longer the responsibility of the group that made the original model.
They also state the \u201ckill switch\u201d requirement will not apply to open-source models, he said.<\/p>\n<p>Another amendment states the bill will only apply to large models \u201cthat cost at least $100mn to train\u201d, and would therefore not affect most smaller start-ups.<\/p>\n<p>\u201cThere are these competitive pressures that are affecting these AI organisations that basically incentivise them to cut corners on safety,\u201d CAIS\u2019s Hendrycks said, adding that the bill was \u201crealistic and reasonable\u201d, with most people wanting \u201csome basic oversight\u201d.<\/p>\n<p>Yet a senior Silicon Valley venture capitalist said they were already fielding queries from founders asking if they would need to leave the state as a result of the potential legislation. \u201cMy advice to everyone that asks is we stay and fight,\u201d the person said. \u201cBut this will put a chill on open source and the start-up ecosystem. I do think some founders will elect to leave.\u201d<\/p>\n<p>Governments around the world have been taking steps to regulate AI over the past year as the technology has boomed in popularity. US President Joe Biden introduced an executive order in October that aimed to set new standards for AI safety and national security, protect citizens against AI privacy risks, and combat algorithmic discrimination. The UK government in April outlined plans to craft new legislation to regulate AI.<\/p>\n<p>Critics are perplexed by the pace at which the Californian AI bill emerged and passed through the Senate, shepherded by CAIS. The majority of funding for CAIS comes from Open Philanthropy, a San Francisco-based charity with its roots in the effective altruism movement. It gave grants worth about $9mn to CAIS between 2022 and 2023, in line with its \u201cfocus area of potential risks from advanced artificial intelligence\u201d.
The CAIS Action Fund, a division of the non-profit that was established last year, registered its first lobbyists in Washington DC in 2023 and has spent roughly $30,000 on lobbying this year.<\/p>\n<p>Wiener has received funding over a number of election cycles from wealthy venture capitalist Ron Conway, managing partner of SV Angel and an investor in AI start-ups.<\/p>\n<p>Rayid Ghani, professor of AI at Carnegie Mellon University\u2019s Heinz College, said there was \u201csome overreaction\u201d to the bill, adding that any legislation should focus on specific use cases of the technology rather than regulating the development of models.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to a strict safety framework, including creating a \u201ckill switch\u201d to turn off their powerful AI models, in a growing battle over regulatory control of
the<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-109411","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/109411","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=109411"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/109411\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=109411"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=109411"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=109411"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}