
I've always been puzzled by the way Sam Altman, the head of OpenAI, talks about what will happen if and when artificial intelligence catches up with or surpasses human intelligence. He admits it will be hard to control, come with all kinds of unpleasant side effects and — just possibly — cause civilisational collapse. And then he says his company is racing to build it as quickly as possible.

Does he take the safety issues seriously, or is he just trying to earn a free pass by kidding us into thinking he really is a responsible citizen looking out for our interests?

I guess now we know. The Trump administration has just canvassed the AI industry on what it would like from US policy under the new administration. The response: it's time for Washington to clear the way for the sector so it can move much faster, regulations be damned.

If you were looking for evidence of how Silicon Valley has shifted with the political winds, you'd be hard pressed to find anything else as stark as this.

Under the previous administration, the biggest AI companies preached caution — at least in public. They even agreed to subject their most powerful models to external testing before unleashing them on the rest of us. The Biden White House saw this as a first step that might eventually lead to full government vetting and licensing of advanced AI.

Dream on. Donald Trump ripped up Biden's executive order on AI during his very first week back in office. Then his administration called for comments to help it shape a new AI policy — a classic case of shoot first, ask questions later.

In their submissions to the White House, companies such as OpenAI, Meta and Google have been nearly unanimous: the US needs to help its AI companies move faster if it hopes to outpace China; US states shouldn't tie down the tech giants with piecemeal regulations (since the federal government has been asleep at the wheel on tech regulation for years, that would pretty much rule out any restrictions); and the White House should end uncertainties over copyright and declare that the companies are within their rights to train their models on any data that's in the public domain.

Safety? The companies have largely scrubbed that word from their vocabulary. This is probably wise: they would all have heard vice-president JD Vance's declaration, at a recent AI summit in Paris, that the "AI future will not be won by hand-wringing over safety".

I never doubted that the AI race was just that: a race. Many of the companies involved are veterans of other winner-take-all tech battles. They have always had simplistic and self-serving ways to judge whether what they are doing is in the public interest: if people click on something, then they must want more of it. That's the rapid feedback loop that gave us the algorithms that fed the social media boom. What's not to like?

But, after all the evidence of harm caused by social media, you'd think that companies would want to know how their AI is affecting the world before rushing to give us more of it. The evidence is only starting to trickle in, and — surprise, surprise — it isn't encouraging.

MIT's Media Lab recently studied people who use AI chatbots and found that heavier use correlated closely with "higher loneliness, dependence . . . problematic use and lower socialization". Do we have to learn the lesson all over again that the technology that captivates us may not be doing us good?
It seems we do.

If you were feeling particularly generous, I suppose you could try to make the case that the tech companies are only tailoring their words to give Trump what he wants to hear. Maybe they're still dedicated to safety and just keeping that quiet for now. But I think it would take unusual generosity of spirit to reach that conclusion.

Cristina, as a technology correspondent in San Francisco, you deal with these AI companies. Do you think they're serious any more about AI safety, or have they thrown all that overboard in the rush to be first to artificial general intelligence (AGI)? Is this pivot just a reflection of the new mood in Washington, and the Trump White House's demand for displays of American dominance? Or are we seeing the tech companies now in their true colours?

Recommended reading

You can file this under "another of those things you feared might result from Elon Musk's attack on government spending". The FT's Claire Jones writes that economists are starting to worry about the credibility of US economic data. You probably never thought you'd miss the Federal Economic Statistics Advisory Committee. But now that it's gone . . .

Are Trump supporters ready for the erosion of programmes such as Medicaid that may be in the offing as Republicans look for deeper cuts to government spending? Guy Chazan went to Bogalusa, Louisiana, to hear from voters. The refrain: "It never came up in the campaign . . . I don't think people saw it coming."

If I can bend the rules a little and include a video on the recommended reading list: in the many years I've written about tech (23, since you ask) I've never seen a company dominate an important new technology as thoroughly as chipmaker Nvidia has dominated in AI. But nothing lasts forever. This video examines some of its biggest challenges.

Cristina Criddle responds

The leading AI developers have deep roots in safety: Google, known for its "don't be evil" mantra; OpenAI, with its mission to ensure that AI benefits humanity; and Anthropic, founded by former OpenAI employees to focus on responsible AI.

These labs already conduct stringent internal tests and publish academic papers and system report cards laying out the perceived risks from each model, ranking them by how dangerous they are. There is no suggestion that these procedures will change, but it is up to lawmakers and the public to decide whether these companies marking their own homework is good enough. Plus, the rise of DeepSeek has raised the probability that the first company to reach AGI may not be operating in a country with democratic values and norms.

With China threatening dominance and the new Trump administration adamant about preventing "woke" AI, there has been a pivot from "safety" to a hotter term: "security". The UK government's own AI Safety Institute rebranded as the AI Security Institute in February. Governments and researchers are focused on how these systems might be used by adversaries in potential warfare, espionage or terrorism.

Despite Europe's efforts, the societal and human implications of this technology appear to have been deprioritised. AI start-ups often warn about the costs of compliance and how it might hamper innovation, especially in the countries with the strictest regulatory regimes.
When I asked Mike Krieger, chief product officer at Anthropic, about the best approach to safety under the current government, he said the company tries to be involved in as many conversations as possible.

"We are not in there to do politics, but we are in there to help shape policy in a way that we think will lead to good outcomes without stifling innovation; there is always that balance," he said.

As Instagram's co-founder and former chief technology officer, Krieger is all too familiar with how social media may affect democracy and the wellbeing of its users. While a lot of parallels have been drawn between the risks of social media and AI, we still do not have much meaningful regulation of, or solutions for, the former. So how much hope can we have for the latter?

Arguably, the threats posed by artificial intelligence are far more significant, and the pace of development is rapid. When ChatGPT launched, we saw widespread concern from leaders in the field, citing existential risks and calling for a moratorium on powerful AI systems.

Elon Musk supported a pause in development, and yet months later bootstrapped his own AI start-up, xAI, developing powerful models and swiftly raising $12bn. The move-fast attitude of Silicon Valley is stronger than ever, but has it matured enough to do so without breaking things?

Your feedback

And now a word from our Swampians . . .

In response to "Will Trump make ships great again?":

"I wonder what the chances of success may be for an administration that has no funds likely to be available for shipbuilding subsidies even if the president believed in them, and is more likely to threaten invasion of friendly countries (eg Canada, Panama and Denmark/Greenland) or hit them with prohibitively high tariffs (everyone) rather than seek 'an opportunity for more constructive engagement'. Nor are threatened port fees on Chinese-built ships calling at US ports likely to produce anything other than higher, inflationary shipping costs for US businesses and consumers." — David Gantz
