
Monday, Oct 14, 2024
Mugglehead Magazine
Alternative investment news based in Vancouver, B.C.

AI and Autonomy

European Union approves regulatory act for artificial intelligence

The act uses a risk-based approach, prioritizing high-risk areas such as government use of AI for biometric surveillance

Image from Queenfadesa via Wikimedia Commons.

European Union member states have approved the Artificial Intelligence Act (AI Act), which now heads into its final parliamentary lap before finalization and implementation.

The agreement marks the conclusion of negotiations, with the permanent representatives of all EU member states voting in favour last Friday.

EU member states agreed on the final wording last weekend. The next stop is the European Parliament. The AI Act will come into force shortly after final approval, but the EU has opted for a familiarization period: the ban on prohibited AI tools will not take effect for another six months.

The act embraces a risk-based approach, prioritizing high-risk areas such as government use of AI for biometric surveillance. It also casts a regulatory net over systems similar to ChatGPT, requiring transparency before releasing them onto the market. The landmark vote follows a December 2023 political agreement and concludes months of carefully crafting the text for legislative approval.

Meanwhile, experts anticipate full implementation of the AI Act in 2026, with specific provisions taking effect earlier to allow a gradual transition to the new regulatory framework.

Beyond establishing the regulatory foundation, the European Commission actively supports the EU’s AI ecosystem. This effort involves creating an AI Office responsible for monitoring compliance with the act, with a particular focus on high-impact foundation models that pose systemic risks.

The EU aims to establish the world’s first comprehensive artificial intelligence law with the AI Act. Its goal is to regulate the use of artificial intelligence to ensure better conditions for deployment, protect individuals, and promote trust in AI systems.

Read more: Could AI ‘trading bots’ revolutionize investing?

Read more: Microsoft’s AI resurgence: rumored $500M robotics investment

The AI Act will be enforced by competent market surveillance authorities

The act adopts a clear and easy-to-understand approach to AI regulation, based on four different levels of risk. National competent market surveillance authorities will enforce it, supported by a European AI Office within the EU Commission.
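The four-tier structure described above can be illustrated with a short sketch. The tier names (unacceptable, high, limited, minimal) are the act's own categories; the example use cases and the one-line obligation summaries are illustrative assumptions, not quotations from the legislation.

```python
# Illustrative sketch of the AI Act's four risk tiers.
# Tier names come from the act; example mappings and obligation
# summaries below are simplified assumptions for illustration.

EXAMPLE_USES = {
    "social scoring by public authorities": "unacceptable",
    "government biometric surveillance": "high",
    "customer-facing chatbots": "limited",
    "spam filters": "minimal",
}

def obligations(tier: str) -> str:
    """Rough summary of the regulatory burden per tier (illustrative)."""
    return {
        "unacceptable": "banned outright",
        "high": "conformity assessment and ongoing oversight",
        "limited": "transparency obligations",
        "minimal": "no new obligations",
    }[tier]

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier} -> {obligations(tier)}")
```

The point of the tiered design is that regulatory burden scales with risk: most AI systems fall into the minimal tier and face no new rules at all.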

France, Germany, and Italy had previously agreed among themselves to weaken the content of the AI Act. Among other things, the three countries sought to avoid attaching sanctions to the legislation and pushed to shift the regulatory focus away from foundation models and onto the AI tools built on top of them.

The final text of the AI Act deviates significantly from that vision. For instance, it mandates that providers of general-purpose models supply technical documentation to the EU, undergo additional risk assessments, and adhere to existing copyright law. Sanctions include monetary fines of up to €35 million or 7 per cent of global annual turnover, whichever is higher.

The AI Act will impose transparency obligations on general-purpose AI (GPAI), and systems posing higher risks will face additional regulatory requirements. During negotiations, however, it remained unclear exactly which providers these rules would target.

The commission determined that all models meeting the predetermined definition of a GPAI must adhere to a transparency policy. This entails preparing technical documentation, complying with European legislation on authors’ rights, and providing a summary of training data.

Additional regulations apply to models posing systemic risks: models whose failure could trigger a domino effect across the many downstream systems built on them, with consequences for the broader economy.

Classification in this category relies on a single quantitative criterion: the computing power used during model training. Models trained with more than 10²⁵ floating point operations (FLOPs) fall into this category. A FLOP is a single floating-point operation; the cumulative number of FLOPs used in training is a common proxy for the scale and computational complexity of an AI model.
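The threshold test above is simple arithmetic and can be sketched directly. The 10²⁵ FLOPs cutoff is from the act; the `6 × parameters × tokens` compute estimate is a widely used scaling heuristic, not part of the legislation, and the model sizes below are hypothetical examples.

```python
# Hedged sketch: does a training run cross the AI Act's
# systemic-risk compute threshold of 10^25 FLOPs?

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set by the AI Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute with the common 6*N*D heuristic
    (an assumption for illustration, not the act's own methodology)."""
    return 6.0 * n_params * n_tokens

def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the act's threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 175-billion-parameter model trained on 300 billion tokens lands
# around 3e23 FLOPs, below the threshold; a hypothetical 1-trillion-
# parameter model trained on 10 trillion tokens (~6e25) would cross it.
print(is_systemic_risk(175e9, 300e9))  # False
print(is_systemic_risk(1e12, 10e12))   # True
```

Under this heuristic, only the very largest frontier-scale training runs would trigger the additional systemic-risk obligations.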



Follow Joseph Morton on Twitter

joseph@mugglehead.com

