
On his return to the White House in January, Donald Trump swiftly dismantled the regulatory framework his predecessor Joe Biden had put in place to address artificial intelligence risks. The US president's actions included reversing a 2023 executive order that required AI developers to submit safety test results to federal authorities when systems posed a "serious risk" to the nation's security, economy or public health and safety. Trump's order characterised these guardrails as "barriers to American AI innovation".

This back and forth on AI regulation reflects a tension between public safety and economic growth also seen in debates over regulation in areas such as workplace safety, financial sector stability and environmental protection. When regulations prioritise growth, should companies continue to align their governance with the public interest, and what are the pros and cons of doing so? At OpenAI, founded in 2015 by Sam Altman as a non-profit organisation, this has been a topic of significant debate among investors and co-founders, including Elon Musk, not least because ensuring that AI operates safely, ethically and for the benefit of humanity has been a concern since the technology's earliest days.

As a result, many companies have adopted novel corporate structures that aim to balance their economic interests with broader societal concerns. For example, in 2021, seven former OpenAI employees founded Anthropic and incorporated it as a benefit corporation, a structure through which a company legally commits to delivering societal benefit alongside profit. In its incorporation documents, Anthropic states its purpose is to responsibly develop and maintain advanced AI for the long-term benefit of humanity.

First introduced by Maryland in 2010, benefit corporation structures have been adopted by more than 40 US states, Washington DC and Puerto Rico, by countries including Italy, Colombia, Ecuador, France, Peru, Rwanda and Uruguay, and by the Canadian province of British Columbia. However, they have also been adopted by AI companies whose goals are not specifically aligned with environmental and societal impact. Musk's xAI, incorporated as a benefit corporation in Nevada, has a stated corporate purpose to create "a material positive impact on society and the environment, taken as a whole".

Critics argue that the benefit corporation model lacks teeth. While most include transparency provisions, the associated reporting requirements can fall short of providing meaningful accountability on whether the company is achieving its legal purpose. All this raises the possibility that the model opens the door to "governance washing". Following the wave of lawsuits against opioid maker Purdue Pharma, its owner, the Sackler family, proposed turning the company into a benefit corporation that would focus on making drugs to tackle the opioid crisis. Final disposition of the multitude of cases against the company is ongoing.

The case of OpenAI illustrates the issues surrounding governance in the AI sector. In 2019, the company started a for-profit entity to take on billions of dollars in investment from Microsoft and others. A number of early employees left, reportedly over safety concerns.
Musk sued OpenAI and Sam Altman in 2024, alleging they had compromised the start-up's mission of building AI systems for the benefit of humanity. In December 2024, OpenAI announced plans to restructure as a public benefit corporation, and in early 2025 the company's non-profit board was reportedly working to split OpenAI into two entities: a public benefit corporation and a charitable arm valued at approximately $30bn. Musk has opposed the move and this month made an unsolicited bid of more than $97bn for OpenAI.

The trajectory of OpenAI's funding supports the argument put forth by Musk and others that OpenAI prioritises profit over public benefit. In October 2024, the company secured a landmark investment round at a $157bn valuation. But it had not yet formalised its ownership structure and governance framework, giving investors significant influence over the company's mission and execution.

As the company finalises its structure, should it embrace the vision of the industry articulated in Trump's executive order and drop its focus on safety and humanity? Or should it maintain that focus, given that other regions of the world or future US presidents may take a different view of the responsibility of AI companies? And are voluntary mechanisms such as corporate structure and governance sufficient to create accountability while maintaining the agility needed for innovation? According to some legal experts, such structures are not necessary, as traditional corporate forms of incorporation allow companies to set sustainability goals if they are in the long-term interests of shareholders.

To increase accountability, some benefit corporations have created multi-stakeholder oversight councils with representatives from affected sectors such as technology and civil society. In May 2024, OpenAI did set up a safety and security committee, initially led by Altman (he later stepped down), although critics have pointed out that such voluntary structures can be subordinated to profit goals.

Other options include early compliance with the EU's Corporate Sustainability Reporting Directive, which will cover companies such as OpenAI in the coming years, or linking compensation and stock options to safety-related goals. Alternative accountability mechanisms may also emerge. Meanwhile, governance at AI companies such as OpenAI raises important questions about integrating ethical and safety considerations into a mostly untested technology.

Christopher Marquis is Sinyi Professor of Chinese Management at Cambridge Judge Business School.
