The Biden administration is implementing new safeguards for artificial intelligence use across the US government to prevent abuse and discriminatory applications. Among them: travelers may decline facial recognition scans at airport security screenings without fear of being delayed. The mandates cover uses of AI in areas such as health care, employment, and housing decisions by government agencies, and are meant to ensure AI systems do not endanger the rights and safety of Americans. Agencies must publish an inventory of the AI systems they use, conduct risk assessments, and appoint a chief AI officer to oversee how the technology is deployed.
Vice President Kamala Harris emphasized the ethical responsibility to protect the public from potential AI harms while ensuring the technology's full benefits can be realized; the policies are intended to set a global model for AI adoption and advancement. The federal government is rapidly adopting AI tools for purposes ranging from monitoring global volcano activity to training immigration officers and supporting investigations. The Office of Management and Budget (OMB) Director said guardrails on AI use can make public services more effective and announced a national talent surge to hire AI professionals, emphasizing transparency and the opportunities AI presents for societal progress.
The Biden administration has moved quickly to address AI's potential benefits and risks, including by signing an executive order last fall that directed the Commerce Department to combat deepfakes and to ensure safety testing for AI models. The new policies for the federal government have been years in the making: Congress passed legislation in 2020 calling for guidelines for agencies, but OMB missed its 2021 deadline to issue them. The new OMB policy is the latest step in shaping the AI industry, where the government is expected to wield significant influence because of its purchasing power.
There are limits to what the US government can achieve through executive action alone, and policy experts have called for legislation to set basic rules for the AI industry. Congress, however, has moved more slowly, and legislation is not expected in the near future. The European Union, by contrast, recently approved a landmark artificial intelligence law, underscoring the pressure on the US to establish comprehensive rules for disruptive technologies. The US government's efforts to regulate how agencies procure and use AI are essential to addressing potential risks and ensuring ethical AI practices across sectors.