
Lawmakers in the European Parliament recently approved the AI Act, which aims to regulate AI systems using a risk-based approach. The legislation passed with an overwhelming majority of 523 votes in favor, 46 against, and 49 abstentions. Having already been approved at the political and technical levels, the Act is expected to come into force in May. The regulation is being hailed as a historic step towards ensuring the safe and ethical development of AI, with Italian lawmaker Brando Benifei describing it as a significant achievement that reflects the priorities of the parliament.

The AI Act categorizes AI systems into four tiers based on the potential risk they pose to society, ranging from minimal to unacceptable risk. High-risk systems will be subject to stringent rules before they can be placed on the EU market. The rules on general-purpose AI are set to take effect one year after the Act comes into force, in May 2025, while obligations for high-risk systems will apply within three years. National authorities will oversee the implementation of these rules, with support from the AI Office within the European Commission. Member states are now tasked with setting up national oversight agencies and have 12 months to nominate these watchdogs.

Concerns have been raised about whether European companies can remain competitive in the global AI market. Only 3% of the world’s AI unicorns are currently based in the EU, and private investment in AI is significantly higher in the US and China. With the global AI market projected to reach $1.5 trillion by 2030, it will be important that European companies can access that market without facing excessive regulatory burdens. Policymakers will need to strike a balance between regulation and innovation to support the growth of AI within the EU.

The approval of the AI Act has been welcomed by the European Consumer Organisation (BEUC), as it will empower consumers to participate in collective redress claims if they have been harmed by AI systems. While the legislation has been seen as a positive step, there are calls for further action to protect consumers in the rapidly evolving AI landscape. The European Commission and national governments are urged to demonstrate their commitment to the AI Act by implementing it promptly and providing regulators with the necessary resources to enforce it effectively. The focus now shifts to ensuring compliance with the Act and addressing any challenges that may arise in its implementation.

One of the key goals of the AI Act is to promote the safe and human-centric development of AI within the EU. Lawmakers are already looking ahead to future legislation, including a directive on AI and working conditions. By working with partner countries and like-minded parties, the EU aims to give these rules global impact. Collaboration will be essential in promoting responsible AI practices and building a governance framework that reflects shared values and principles. The EU’s efforts to regulate AI will likely set a precedent for similar initiatives around the world, as governments grapple with the complex challenges posed by advanced technologies.

As the AI Act moves closer to implementation, the focus will be on ensuring that businesses and institutions comply with the new rules. Oversight of AI systems will be shared between national authorities and the AI Office within the European Commission. As member states move to nominate their national watchdogs within the 12-month deadline, stakeholders will need to work together to establish robust mechanisms for monitoring and enforcing the regulations. Successful implementation of the AI Act will be essential in demonstrating the EU’s commitment to promoting responsible AI practices and protecting consumers in the digital age.
