Paving the way for AI regulation

On April 21st 2021 the Commission presented a new Regulation focused on Artificial Intelligence. This proposal, now known as the AI Act, is accompanied and aided in its objectives by the New Machinery Regulation, aimed primarily at ensuring the safety and reliability of the hardware supporting AI software, and by an updated version of the Coordinated Plan (last revised in 2018), a common effort of Member States to foster innovation in the field by promoting and funding related projects and research. These three proposals focused on Artificial Intelligence will come to constitute, along with the Digital Services Act and the Digital Markets Act, a new comprehensive European approach to digital development.

The European Union, then, is again at the forefront of regulatory action, as the Artificial Intelligence Act represents the first effort in the world to give the field of Artificial Intelligence a solid legal footing. Indeed, doubts remain as to whether this “rush to regulate” is of actual benefit to the industry, or whether it risks producing approximate solutions that could ultimately hamper the functioning of the AI sector.

The AI Regulation finds its legal basis in Article 114 of the Treaty on the Functioning of the European Union, which provides for the adoption of measures to ensure the establishment and functioning of the internal market. This objective is pursued through the creation of a harmonised set of rules governing the development of new AI technologies and the compliance of those already used in various sectors of the economy.

The Commission chose to undertake this further regulatory step, drafting a regulation that will have direct applicability in all Member States, specifically to ensure an organic development of the AI market in the Union. It thereby seeks to avoid the fragmenting effect that could result from national legislative efforts setting up different requirements for providers in terms of marketing, use, liability and level of supervision by public authorities. This need to avoid fragmentation is motivated especially by the wide circulation of products and services of this kind across borders.

Still, the regulation leaves room for Member States’ action on elements that do not undermine the objectives of the initiative, in particular with regard to the internal organisation of the market surveillance system and the adoption of measures to foster innovation.

A further motive for the Commission to adopt a regulation on AI was the need to ensure that citizens’ rights to personal data protection (Article 16 TFEU) would be safeguarded from artificial intelligence software, especially systems designed for biometric identification.

The AI Act seeks to achieve these aims by establishing a framework directed specifically at reducing the risks associated with the use of Artificial Intelligence. This risk-based approach serves two purposes at once. On the one hand, it tackles the immediate issue of possible violations of fundamental rights and wider societal risks; on the other, by serving as a legal guarantee of safety, it aims at improving consumer trust in a relatively new type of service, with the ultimate goal of creating a wide market base for AI products.

The risk-regulating criteria laid down in the Act are quite straightforward. AI services are divided into four risk categories: unacceptable risk, high risk, limited risk and minimal risk, each subject to different requirements. Where a risk is deemed unacceptable, the services in question will be banned; if, instead, a service falls into the high-risk category, the developer will have to comply with strict requirements regarding its security and ability to minimise error, as well as curating database quality and accessibility for oversight purposes. Services deemed to present limited or minimal risk are excluded from any compulsory requirements.

First, all AI services that can be considered to violate fundamental rights are deemed of unacceptable risk and banned. Such services include, most notably, systems that allow for social scoring by governments and, in a wider sense, services that can be classified as aiming specifically at circumventing users’ free will or creating manipulated content (“deep-fakes”), with a special focus on the manipulation of children (Article 5(1)).

Most controversially included in this category are all systems of remote biometric identification (Article 5(1)), which are banned from generalised use in public spaces with the sole exception of cases considered of strict necessity, such as the abduction of a child or a terrorist threat (Article 5(2)).

The sound motivation behind the ban is that, despite the high level of accuracy of these systems, even a minimal identification error in instances such as criminal procedures would gravely damage the citizens involved, not to mention the obvious breaches of privacy suffered by the general public. The decision to leave some exceptions to the ban was therefore viewed as quite controversial by many: not only because the exceptions seem to indirectly allow the targeting of specific population groups (in the case of potential terrorist attacks, for example, surveillance would presumably target only people of Arab descent), but also because even small exceptions could then be used as a shortcut to exponentially expand state use of the technology.

Second, the high-risk category is meant to encompass the AI products most common in general use across different sectors, such as transportation (as in the case of self-driving cars), educational and vocational training (as with exam scoring systems), employment (as when companies use CV sorting software) and essential private and public services (for example, when banks use automatic systems to credit-score loan recipients, or when surgeries are performed through an AI-assisted machine) (Articles 6 and 7).

Also encompassed in this category are all instances in which AI is used in policing activities, as could be the case when analysing the reliability of evidence, in any use during the judicial process, or when assessing the authenticity of travel documents.

Due to the potential gravity of the damage that a misuse of AI in the sectors mentioned above could entail for citizens, such systems are subject to strict obligations. First of all, they ought to be designed with a high level of robustness, security and accuracy, and have adequate risk assessment systems built in, whilst still allowing for human oversight. Their databases must be of the highest possible quality, in order to reduce discriminatory outcomes. To aid regulators, all such AI systems must also be accompanied by detailed documentation regarding their functioning, in addition to the ability to log their activity to ensure traceability of results. To further increase consumer trust, all services must provide clear and adequate information to the user (Articles 8 to 13).

The bulk of AI systems, though, falls into the limited or minimal risk categories, which include chatbots and AI technologies used in videogames and spam filters. These categories, given the lack of risk presented to users, are not subject to any restrictions under the regulation.

In the case of these low-risk services, companies are left the choice of voluntarily following the blueprint provided by the AI Act, issuing their own codes of conduct to address the consumer trust issue (Article 69).

The problem of enforcement is, as usual, a key component of the policy’s success. Because the AI Act is the first of its kind, the Commission proposed the creation of an entirely new Board dedicated to Artificial Intelligence (Articles 56 to 60), with the specific task of aiding the implementation of the new norms across European territory, working in close contact with specially appointed national market surveillance authorities (Articles 61 to 68). Non-compliance is punished, through a procedure similar to that of the Digital Markets Act, with fines that can range up to 6% of total worldwide annual turnover (Article 71).

It is interesting to note that the regulatory effort promoted by the Commission, while grounded in the EU’s core competence of building a solid internal market, and here specifically a segment considered among the most promising candidates to become globally competitive, rests, as emerges from the provisions illustrated above, on a framework that leans towards the restrictive side.

This decision has in fact to be placed in the wider scenario of the worldwide AI market, one that sees China and the US as the leaders in the sector. One of the main goals of the Commission in drafting this Act is, without a doubt, closing the gap with these competitors. It seems original, then, to choose high standards of user protection as an additional competitive edge against one country with little or no regulation of the industry, as under the orthodox free-market model reigning in the US, and against another, China, where the state heavily funds sectors with the potential to be globally competitive. Indeed, many are left wondering whether a market born under the watch of strict regulation truly has the power to achieve international competitiveness.

It is also important to note, though, that AI produced in Europe could still retain a market advantage over foreign technology on the Continent, as all technology of this kind, created both domestically and abroad, will have to comply with European standards (Article 26). This constitutes a clear disadvantage for foreign firms, especially the Big Tech companies that have already invested profusely in the sector, as they would have to earn the CE marking and provide the requested documentation for review. From this point of view, then, the new AI Act could indeed have the potential to create a solid market, at least within the EU, as all those wishing to use an AI system would ideally be supplied by European companies already compliant with the regulation.

To give it further structuring power in building a consolidated European sector in artificial intelligence, the AI regulation will be aided and complemented by the new Coordinated Plan, a collaborative initiative between the Commission and 19 Member States for the creation of, and investment in, strong national AI markets. One of the main aims of this coordinated action is to draw investment to the sector. In total, the Commission plans to invest 1 billion Euro per year, drawing the funds from the Digital Europe and Horizon Europe programmes, as well as deploying technology relevant to agriculture, energy and the environment through the European Green Deal. Paradoxically, the coronavirus pandemic will also indirectly aid the funding of AI technologies, as the new Recovery and Resilience Facility allocates roughly 134 billion Euro to digital innovation, which includes research into and funding of new technologies in the field of healthcare.

The final step to ensure proper development in the field of AI is regulating the physical machines crafted specifically to support AI software. This is the main aim of the New Machinery Regulation, an ad hoc proposal that ensures safety criteria are met in products developed for private consumers’ use (such as a cleaning robot) and for professional use in both the industrial and medical sectors.