
South Korea is hosting a mini-summit on the risks and regulation of artificial intelligence, building on the inaugural meeting held in the U.K. last year to rein in threats posed by cutting-edge AI systems. The gathering in Seoul is part of global efforts to create guardrails for a rapidly advancing technology that promises to transform society but also raises concerns about potential risks. At the U.K. summit, delegates from over two dozen countries signed the Bletchley Declaration, agreeing to work together to contain the risks posed by advances in AI.

In March, the U.N. General Assembly approved its first resolution on AI, supporting international efforts to ensure the technology benefits all nations and respects human rights. Recent high-level talks between the U.S. and China in Geneva addressed the risks of AI and the prospect of shared standards to manage it. The U.S. raised concerns about China's misuse of AI, while Chinese representatives criticized the U.S. for restricting and pressuring AI development. Governments and companies alike are coming together to address AI safety globally.

The Seoul summit, co-hosted by South Korea and the U.K., will open with a virtual meeting of leaders and updates from AI companies on their safety commitments. On the second day, digital ministers will gather in person to share best practices and action plans for protecting society from the technology's potential harms. The meeting is intended as an interim summit until the next full-fledged in-person edition, which France will host. Participants will include representatives from various countries and companies such as OpenAI, Google, Microsoft, and Anthropic.

While last year's U.K. meeting was light on details and didn't propose a way to regulate AI development, developers of the most powerful AI systems are working to set shared safety standards. Meta and Amazon recently joined the Frontier Model Forum, founded by companies including Google and Microsoft last year. An expert panel's interim report on AI safety identified risks posed by general-purpose AI, including malicious use in fraud and scams, the spread of disinformation, bias in healthcare and job recruitment, and the potential automation of jobs across the labor market.

South Korea aims to use the Seoul summit to take a leading role in global AI governance and norm-setting, although some critics argue the country lacks the advanced AI infrastructure needed to play such a role. The summit will address a wide range of AI safety issues, from algorithmic bias to existential threats, and aims to produce concrete action plans to mitigate those risks. As the AI industry continues to evolve rapidly, international cooperation is crucial to ensuring that AI benefits society while respecting human rights and managing potential risks.
