{"id":94837,"date":"2024-05-30T12:43:29","date_gmt":"2024-05-30T12:43:29","guid":{"rendered":"https:\/\/globeecho.com\/ar\/tech\/rewrite-this-title-in-arabic-internal-divisions-linger-at-openai-after-novembers-attempted-coup\/"},"modified":"2024-05-30T12:43:29","modified_gmt":"2024-05-30T12:43:29","slug":"rewrite-this-title-in-arabic-internal-divisions-linger-at-openai-after-novembers-attempted-coup","status":"publish","type":"post","link":"https:\/\/globetimeline.com\/ar\/tech\/rewrite-this-title-in-arabic-internal-divisions-linger-at-openai-after-novembers-attempted-coup\/","title":{"rendered":"Internal divisions linger at OpenAI after November\u2019s attempted coup"},"content":{"rendered":"<p>OpenAI is struggling to contain internal rows about its leadership and safety as the divisions that led to last year\u2019s attempted coup against chief executive Sam Altman spill back into the public domain.<\/p>\n<p>Six months after the aborted removal of Altman, a series of high-profile resignations point to continuing rifts inside OpenAI between those who want to develop AI rapidly and those who would prefer a more cautious approach, according to current and former employees.<\/p>\n<p>Helen Toner, one of the former OpenAI board members who tried to remove Altman in November, spoke out publicly for the first time this week, saying he had misled the board \u201con multiple occasions\u201d about its safety processes.<\/p>\n<p>\u201cFor years Sam had made it really difficult for the board to actually do [its] job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,\u201d she said on the TED AI Show podcast.<\/p>\n<p>The most prominent of several departures in the past few weeks has been that of OpenAI co-founder Ilya Sutskever. 
One person familiar with his resignation described him as being caught up in Altman\u2019s \u201cconflicting promises\u201d prior to last year\u2019s leadership upset.<\/p>\n<p>In November, OpenAI\u2019s directors \u2014 who at the time included Toner and Sutskever \u2014 pushed Altman out as chief executive in an abrupt move that shocked investors and staff. He returned days later under a new board, minus Toner and Sutskever.<\/p>\n<p>\u201cWe take our role incredibly seriously as the board of a non-profit,\u201d Toner has told the Financial Times. The decision to fire Altman \u201ctook an enormous amount of time and thought\u201d, she added.<\/p>\n<p>Sutskever said at the time of his departure that he was \u201cconfident\u201d OpenAI would build artificial general intelligence \u2014 AI that is as smart as humans \u2014 \u201cthat is both safe and beneficial\u201d under its current leadership, including Altman.<\/p>\n<p>However, the November affair does not appear to have resolved the underlying tensions inside OpenAI that contributed to Altman\u2019s ejection.<\/p>\n<p>Another recent exit, Jan Leike, who led OpenAI\u2019s efforts to steer and control super-powerful AI tools and worked closely with Sutskever, announced his resignation this month. He said his differences with the company leadership had \u201creached a breaking point\u201d as \u201csafety culture and processes have taken a back seat to shiny products\u201d. He has now joined OpenAI rival Anthropic.<\/p>\n<p>The turmoil at OpenAI \u2014 which has bubbled back to the surface despite the vast majority of employees calling for Altman\u2019s reinstatement as CEO in November \u2014 comes as the company prepares to launch a new generation of its AI software. 
It is also discussing raising capital to fund its expansion, people familiar with the talks have said.<\/p>\n<p>Altman\u2019s direction of OpenAI towards shipping product rather than publishing research led to its breakthrough chatbot ChatGPT and kick-started a wave of investment in AI across Silicon Valley. After securing more than $13bn in backing from Microsoft, OpenAI\u2019s revenue is on track to surpass $2bn this year.<\/p>\n<p>Yet this focus on commercialisation has come into conflict with those inside the company who would prefer to prioritise safety, fearing OpenAI might rush into creating a \u201csuperintelligence\u201d that it cannot properly control.<\/p>\n<p>Gretchen Krueger, an AI policy researcher who also quit the company this month, listed several concerns about how OpenAI was handling a technology that could have far-reaching ramifications for business and the public.<\/p>\n<p>\u201cWe [at OpenAI] need to do more to improve foundational things,\u201d she said in a post on X, \u201clike decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.\u201d<\/p>\n<p>Altman, responding to Leike\u2019s departure, said his former employee was \u201cright we have a lot more to do; we are committed to doing it\u201d. This week, OpenAI announced a new safety and security committee to oversee its AI systems. 
Altman will sit on the committee alongside other board members.<\/p>\n<p>\u201c[Even] with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives,\u201d Toner wrote alongside Tasha McCauley, who was also on OpenAI\u2019s board until November 2023, in an opinion article for The Economist magazine, published days before OpenAI announced its new committee.<\/p>\n<p>Responding to Toner\u2019s comments, Bret Taylor, OpenAI\u2019s chair, said the board had worked with an external law firm to review last November\u2019s events, concluding that \u201cthe prior board\u2019s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI\u2019s finances, or its statements to investors, customers, or business partners\u201d.<\/p>\n<p>\u201cOur focus remains on moving forward and pursuing OpenAI\u2019s mission to ensure AGI benefits all of humanity,\u201d he said.<\/p>\n<p>One person familiar with the company said that since November\u2019s tumult, OpenAI\u2019s biggest backer, Microsoft, had put more pressure on it to prioritise commercial products. That had amplified tensions with those who would prefer to focus on scientific research.<\/p>\n<p>Many inside the company still want to focus on its long-term goal of AGI, but internal divisions and an unclear strategy from OpenAI\u2019s leadership have demotivated staff, the person said.<\/p>\n<p>\u201cWe\u2019re proud to build and release models that lead the industry in both capabilities and safety,\u201d OpenAI said. \u201cWe work hard to maintain this balance and think it\u2019s critical to have a robust debate as the technology advances.\u201d<\/p>\n<p>Despite the scrutiny invited by its recent internal ructions, OpenAI continues to build more advanced systems. 
It announced this week it had recently started training the successor to GPT-4, the large AI model that powers ChatGPT.<\/p>\n<p>Anna Makanju, OpenAI\u2019s vice-president of global affairs, said policymakers had approached her team about the recent exits to find out if the company was \u201cserious\u201d about safety. She said safety was \u201csomething that is the responsibility of many teams across OpenAI\u201d.<\/p>\n<p>\u201cIt\u2019s quite likely that [AI] will be even more transformational in the future,\u201d she said. \u201cCertainly, there are going to be a lot of disagreements on what exactly is the right approach to prepare society [and] how to regulate it.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI is struggling to contain internal rows about its leadership and safety as the divisions that led to last year\u2019s attempted coup against chief executive Sam Altman spill back into the public domain. Six months after the aborted removal of Altman, a series of 
high-profile<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[63],"tags":[],"class_list":{"0":"post-94837","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-tech"},"_links":{"self":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/94837","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/comments?post=94837"}],"version-history":[{"count":0,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/posts\/94837\/revisions"}],"wp:attachment":[{"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/media?parent=94837"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/categories?post=94837"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globetimeline.com\/ar\/wp-json\/wp\/v2\/tags?post=94837"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}