OpenAI chief executive Sam Altman said he would fast-track product releases and "deliver much better models" after advances by Chinese start-up DeepSeek undermined Silicon Valley's lead in a global artificial intelligence arms race.

DeepSeek's generative AI chatbot, a direct rival to ChatGPT, is able to perform some tasks at the same level as recently released models from OpenAI, Anthropic and Meta, despite claims it cost a fraction of the money and time to develop. The release of DeepSeek's R1 model last week and its rise to the top of Apple's App Store have triggered a tech stock sell-off. Asian tech shares fell on Tuesday in the wake of a Wall Street rout overnight. The Nasdaq fell 3 per cent and US chipmaker Nvidia, which produces the chips used to train large AI models, slumped 17 per cent, losing $600bn in market capitalisation.

On Monday evening, Altman wrote on X that DeepSeek's model was "impressive, particularly around what they're able to deliver for the price". He added: "We will obviously deliver much better models and also it's legit invigorating to have a new competitor!"

Altman, who last week announced that investors including SoftBank would spend up to $500bn to build a network of data centres to power OpenAI's models, added that computing resources were "more important now than ever before". Microsoft, Meta, Alphabet, Amazon and Oracle have earmarked $310bn in 2025 for capital expenditure, which includes AI infrastructure, according to data compiled by Visible Alpha. Such estimates have been based on the premise that huge amounts of computing power will be needed to advance AI capabilities.

But DeepSeek's ability to compete on a fraction of the budget of OpenAI — which was recently valued at $157bn — and of rivals Anthropic, Google and Meta has raised questions about the vast sums being poured into training systems. "The winners won't be the ones burning the most cash," said Aidan Gomez, founder of Toronto-based Cohere, which builds large language models for enterprises. Instead, he said, they would be those "finding efficient solutions".

The advances by DeepSeek have also exposed risks for venture capitalists who put almost $100bn into US AI start-ups last year. "There's now an open weight model floating around the internet which you can use to bootstrap any other sufficiently powerful base model into being an AI reasoner," said Jack Clark, co-founder of Anthropic, in a blog on Monday. "AI capabilities worldwide just took a one-way ratchet forward," he added. "Kudos to DeepSeek for being so bold as to bring such a change into the world!"

DeepSeek's success has complicated the argument that massive cash piles create an unassailable advantage, a premise that has helped leading Silicon Valley labs raise tens of billions of dollars over the past year. "If you're Anthropic or OpenAI, attempting to be at the forefront, and someone can serve what you can at a tenth of the cost, that's problematic," said Mike Volpi, who led Index Ventures' investment into Cohere.

The sudden release of DeepSeek's latest model surprised some at Meta. "The main frustration is, 'Why didn't we come up with this first?' when we have thousands of the brightest minds working on this," said one Meta employee.

Chief executive Mark Zuckerberg — who last week said he expected to allocate up to $65bn in capital spending to expand AI teams and build a new data centre — has advocated for open source, positioning Meta at its forefront in the US. "We want the US to set the global AI standard, not China," the company said in response to DeepSeek. Meta's chief AI scientist Yann LeCun said "running AI assistant services for billions" would still require large amounts of computing power.

Rival company insiders and investors have expressed scepticism about the low costs cited by DeepSeek in developing its models. In December, the company said its V3 model, which its app's chatbot runs on, cost just $5.6mn to train. However, it added that this figure covered only the final training run, not the complete development cycle, and excluded "the costs associated with prior research and . . . experiments on architectures, algorithms, or data".

DeepSeek has attributed its success — despite using less powerful chips than its US competitors — to methods that allow the AI model to selectively focus on specific parts of the input data, reducing the cost of running the model. For its latest R1 model, it used a reinforcement learning technique, a relatively new approach to AI in which models teach themselves how to improve without human supervision. The company also used open-source models, including Alibaba's Qwen and Meta's Llama, to fine-tune its R1 reasoning model.

The technical advances and investor interest in DeepSeek's progress could light a fire under AI companies. "In general, we expect the bias to be on improved capability, sprinting faster towards artificial general intelligence, more than reduced spending," said research firm Rosenblatt on Monday.

Researchers and investors, including Marc Andreessen, have drawn parallels between the US-China race on artificial general intelligence and America's cold war competition with the Soviet Union, both in space exploration and nuclear weapons development. Stuart Russell, professor of computer science at the University of California, Berkeley, said the race to AGI was "worse". "Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process, because we have no idea how to control systems more intelligent than ourselves," he said. "In other words, the AGI race is a race towards the edge of a cliff."

Additional reporting by Michael Acton and Rafe Uddin in San Francisco and Melissa Heikkilä in London
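A note for technically minded readers: the cost-saving idea described above (having the model activate only part of itself for any given input) can be pictured with a small toy example. The Python sketch below is purely illustrative and assumes nothing about DeepSeek's actual architecture or code; it shows a generic "mixture of experts" style router in which each token is processed by only a few expert weight matrices, so most parameters sit idle on any given input. All names, sizes and weights are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 64, 8, 2                        # hidden size, expert count, experts used per token (invented values)
router_w = rng.normal(size=(D, N_EXPERTS))            # router projection
experts = rng.normal(size=(N_EXPERTS, D, D)) * 0.02   # one weight matrix per expert

def moe_layer(tokens):
    # Route each token to its TOP_K highest-scoring experts and mix their outputs.
    scores = tokens @ router_w                         # (batch, N_EXPERTS) routing logits
    top = np.argsort(scores, axis=-1)[:, -TOP_K:]      # indices of the chosen experts per token
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        picked = scores[i, top[i]]
        gates = np.exp(picked) / np.exp(picked).sum()  # softmax over the chosen experts only
        for gate, e in zip(gates, top[i]):
            out[i] += gate * (tok @ experts[e])        # only TOP_K of N_EXPERTS matrices are touched
    return out

batch = rng.normal(size=(4, D))                        # four toy "tokens"
print(moe_layer(batch).shape)                          # (4, 64): each token used 2 of the 8 experts

Because each token touches only TOP_K of the N_EXPERTS weight matrices, the arithmetic cost per token scales with the number of experts consulted rather than with the model's full parameter count, which is the general kind of efficiency lever the article alludes to.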
