EU creates legal framework for Artificial Intelligence

Vectra AI analyses the EU AI Act

According to the EU, the combination of the first-ever legal framework for artificial intelligence and a new coordinated plan with the Member States is intended to guarantee the safety and fundamental rights of people and businesses in Europe, while at the same time promoting the uptake of AI, investment and innovation across the EU. The new AI regulation is meant to ensure that Europeans can trust what AI has to offer. Proportionate and flexible rules will therefore address the specific risks posed by AI systems and set the highest standard worldwide. The EU's coordinated plan outlines the policy changes and investments needed at Member State level to strengthen Europe's position in developing human-centric, sustainable, secure, inclusive and trustworthy AI.

Why is such a law necessary?

The EU AI Act is an ambitious attempt to create a legal framework for AI, and one that has never been more urgently needed. AI systems are rapidly being integrated into products and services across numerous markets, yet these systems are often opaque and difficult to interpret, with risks to users and to society at large that are poorly understood. Although parts of the existing legal framework and consumer protection law may apply, applications built on AI systems differ so fundamentally from traditional consumer products that they require fundamentally new legal mechanisms.

The overall objective of the draft law is to anticipate and mitigate the most critical risks arising from the use and failure of AI. This ranges from a complete ban on systems classified as posing an "unacceptable risk" to strict regulation of "high-risk" systems. A less-noticed consequence of the framework is that it could give markets clarity and certainty about which regulations exist and how they are applied. In this way, the legal framework could actually encourage more investment and market participation in the AI sector.

More than half a decade has now passed since the adoption of the EU General Data Protection Regulation (GDPR), without a comparable federal law being considered in the USA. Nevertheless, the GDPR has undoubtedly influenced the behavior of multinational companies, which have either had to split their data protection policies into EU and non-EU variants or simply apply a single GDPR-based policy worldwide. If the US decides to propose legislation regulating AI, it will at the very least be influenced by the EU law.

How can the Commission determine which types of AI are classified as “high risk”?

Unfortunately, the AI Act identifies a number of application areas in which the use of AI would be considered risky without spelling out the risk-based criteria that could be used to determine the status of future AI applications. The seemingly ad hoc decisions about which application areas count as "risky" therefore appear to be simultaneously too specific and too vague.

Current high-risk areas include certain types of biometric identification, critical infrastructure operations, employment decisions, and some law enforcement activities. However, it is not clear why only these areas were classified as high-risk, nor is it specified which applications of statistical models and machine-learning systems within these areas should be subject to strict regulatory supervision.

Finally, clarifications and legal precedents will likely be needed in the future to draw clear boundaries between the risk categories. For example, the law treats "practices that have a significant potential for manipulating people through subliminal techniques beyond their consciousness" as an unacceptable risk. However, it is not clear whether this also covers the already widespread use of AI to select content for users in order to maximize the time they spend on a website. Ambiguities like these undoubtedly need to be resolved, but doing so will be far easier with an existing legal framework than without one.

Why is it important that the bill requires people to be notified when they encounter deepfakes, biometric recognition systems or other AI applications?

While it's good that the bill requires people to be notified when they encounter deepfakes, biometric recognition systems and the like, there are some potential issues to consider. First, content generated by deepfakes and language models such as GPT-3 could become so ubiquitous and so interwoven with every aspect of our lives that labeling every use of these models leads to general alarm fatigue. In addition, hard-to-answer questions will arise about systems that sit at the fuzzy boundaries of these application categories. Nevertheless, it is undoubtedly a good start to ensure that consumers can better recognize when they are being classified by biometric data and when they are interacting with AI-generated content rather than with real people or real content.
