
The National Security Transparency Advisory Group in Canada is recommending that security agencies publish detailed descriptions of their current and intended uses of artificial intelligence systems and software applications. The group was created in 2019 to increase accountability and public awareness of national security policies, programs, and activities. The government considers the group an important means of implementing a six-point federal commitment to be more transparent about national security. Security agencies are already using AI for tasks such as translation of documents and detection of malware threats, with plans to expand its use for analyzing large volumes of text and images, recognizing patterns, and interpreting trends and behavior. The report emphasizes the importance of public knowledge about the objectives and undertakings of national security services and advocates for stronger mechanisms for openness and external oversight.

As the government collaborates with the private sector on national security objectives, the report highlights the importance of transparency and engagement to foster innovation and public trust. The report also addresses the challenge of explaining the inner workings of AI to the public due to the opacity of algorithms and machine learning models. Ottawa has issued guidance on federal use of artificial intelligence, including requirements for algorithmic impact assessments before creating systems that assist or replace human decision-makers. The Artificial Intelligence and Data Act is also currently before Parliament to ensure responsible design, development, and rollout of AI systems, although it does not cover government institutions such as security agencies. The advisory group recommends considering extending the law to cover these agencies to ensure oversight and accountability.

The report details how the Communications Security Establishment (CSE), Canada’s cyberspy agency, has been using AI for data analysis and information processing. The CSE describes using high-performance supercomputers to train new artificial intelligence and machine learning models, including a translation tool that can translate content from over 100 languages. The agency’s Cyber Centre has also used machine learning tools to detect phishing campaigns and suspicious activities on federal networks. The CSE acknowledges the importance of remaining ethical in its use of AI and is developing comprehensive approaches to govern and monitor its AI usage. Similarly, the Canadian Security Intelligence Service (CSIS) is formalizing plans and governance around AI while prioritizing transparency, despite limitations on publicly discussing operational matters.

The report references a previous incident where the Royal Canadian Mounted Police (RCMP) was found to have broken the law by using facial recognition software from Clearview AI without ensuring compliance with privacy legislation. The RCMP has since created the Technology Onboarding Program to assess compliance with privacy laws and is working on a national policy for the use of AI that includes transparency and safeguards. The transparency advisory group is urging the Mounties to be more transparent about the onboarding program and the responsible use of technologies. It also criticizes the government for the lack of public reporting on the progress of its transparency commitment and recommends a formal review with public reporting of initiatives, impacts to date, and future activities. Public Safety Canada has shared the report’s recommendations with relevant stakeholders but has not provided a specific timeline for implementation.

© 2024 Globe Timeline. All Rights Reserved.