The United Nations (U.N.) advisory body on artificial intelligence (AI) recently issued seven recommendations to address AI-related risks, but one expert argues they leave critical areas of concern unaddressed. Phil Siegel, co-founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), believes the recommendations should have been more specific and should have accounted for AI's distinct roles in different parts of the world. Because economic and regulatory structures vary by region, he said, AI will produce different outcomes in each, and the U.N. should have acknowledged and addressed those differences in its recommendations.
The U.N. Secretary-General’s High-level Advisory Body on AI published its suggested guidelines on September 19, aiming to address “global AI governance gaps” among its 193 member states. The recommendations included establishing an International Scientific Panel on AI, creating a policy dialogue on AI governance, setting up a global AI capacity development network, and forming an AI office in the U.N. Secretariat. Siegel noted that these measures appear to be an effort by the U.N. to secure a better position in the global AI governance landscape and align with recommendations from different member states, especially those in the European Union.
Various entities are pursuing global coordination on AI policy as countries try to maintain a competitive edge while keeping adversaries from taking the lead in AI development. Alongside their own AI initiatives, nations are convening safety summits to align policies, including an upcoming U.S.-led summit in California in November. Siegel said the U.N., with its existing global platform, could serve as a coordinating body for these efforts even as countries establish their own safety institutes to collaborate on safety guidelines. Concerns remain, however, about potential U.N. overreach in this role.
Siegel suggested that the U.N. could coordinate AI policy efforts by promoting best practices rather than imposing strict rules on member states. While its broad membership may make the U.N. the logical agency for coordination, he said, member states must be able to trust that it will not overstep its bounds in setting standards and benchmarks for AI governance. He also highlighted the progress the U.S. and Europe have made on safety regulation and the need to bring Asian nations into these discussions moving forward.
In sum, the U.N. advisory body's recommendations have drawn criticism for overlooking crucial aspects of AI governance, particularly the differences among global contexts. Proposals for international collaboration and coordinated AI policy have met both support and concern about potential U.N. overreach. While the U.N. may be a suitable platform for such coordination, member states must retain autonomy in implementing AI guidelines and regulations. Moving forward, continued dialogue and collaboration among nations will be crucial in shaping global AI governance.