As is well known, on 21 April 2021 the European Commission published its proposal for a regulation (hereinafter, for convenience, the “Regulation”) introducing a system of harmonised rules for the development, placing on the market and use of artificial intelligence (“AI”) systems within the European Union. The proposal also envisaged a “grace period”, that is, a two-year window during which all operators in the field would have time to comply with the Regulation and to make decisions based on the rules contained therein.
In light of the proliferation of AI systems, it is legitimate to ask whether, during the aforementioned grace period, operators have actually complied with the rules of the Regulation.
To answer this question, it is first necessary to list below the artificial intelligence systems specifically prohibited by the Regulation:
- those that use subliminal techniques capable of distorting the behaviour of a subject;
- those that exploit the vulnerability of a specific group of people;
- those that allow public authorities to create a classification of persons based on a social score;
- those that use “real-time” remote biometric identification systems in publicly accessible spaces, except in the cases provided for by law.
In addition to the above, the European Commission has also identified three levels of risk, depending on the type of artificial intelligence system in use, as shown below (a schematic sketch follows the list):
- “unacceptable risk”, where the AI system is to be considered a threat to people’s safety, livelihoods and individual rights; such a system is prohibited under the Regulation;
- “high risk”, where AI systems are used in areas affecting fundamental human rights; such systems may be used and implemented only by adopting a number of precautions, which will be discussed below;
- “low risk”, where AI systems involve risks considered minor, such as in the field of video games, where only transparency obligations are imposed.
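Purely by way of illustration, the three-tier scheme described above can be rendered as a minimal sketch in code. The names used below (RiskLevel, CONSEQUENCES) are hypothetical constructs of the writer and do not appear in the Regulation:

```python
from enum import Enum

class RiskLevel(Enum):
    """Hypothetical rendering of the three risk levels described above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOW = "low"

# Hypothetical mapping from each level to the consequence the Regulation attaches to it.
CONSEQUENCES = {
    RiskLevel.UNACCEPTABLE: "prohibited outright under the Regulation",
    RiskLevel.HIGH: "permitted only with conformity assessment, risk analysis and data governance",
    RiskLevel.LOW: "permitted, subject to transparency obligations only",
}

for level in RiskLevel:
    print(f"{level.value}: {CONSEQUENCES[level]}")
```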
On the basis of these necessary premises, let us try to analyse a concrete case of artificial intelligence, namely the autonomous driving system of the well-known car manufacturer “Tesla” (to which the Firm has already dedicated an article), in order to understand whether or not it complies with the Regulation and, more importantly, what level of risk would result from its use.
It is reasonable to consider the risk that such an AI system poses to the public as high: proof of this lies in the more than 700 accidents attributed to it outside the European Union, with damage caused both to persons and to third-party vehicles.
It is also reasonable to state that an autonomous driving system such as the one in question would require a twofold regulation: on the one hand, the rules of the aforementioned Regulation on the use and implementation of the artificial intelligence system; on the other, new and specific traffic rules addressing the circulation of driverless vehicles, which communicate with each other through signals and sounds not comprehensible to humans (work on the latter appears to be already well advanced in France, Germany and the United States).
Now, it seems reasonable to place this so-called “self-driving car” AI system within the second category of risk, i.e. the “high” one. For this type of risk, the Regulation requires the manufacturer of the artificial intelligence system (“Tesla”) to carry out a preliminary conformity assessment of the system and to provide a detailed analysis of the related risks, through a series of tests that must demonstrate the total absence of errors in such systems. In addition, Article 10 of the Regulation lays down a further obligation for the company using the AI system, concerning the proper storage and governance of users’ data processed by the systems in question.
Thus, the Regulation provides a number of rather strict requirements and obligations which, in the writer’s opinion, are unlikely to be met by the artificial intelligence systems currently in circulation; indeed, there has been much criticism from those who would like to see the introduction of less stringent criteria so as to facilitate greater use of artificial intelligence systems.
Another concrete example of artificial intelligence, which we have already discussed in last month’s article, is ChatGPT, which in Italy was blocked by the Italian Data Protection Authority for non-compliance with the European rules on the protection of personal data.
ChatGPT itself shows how complex it is to frame and classify different AI tools, even when applying the criteria and principles set out in the Regulation. Indeed, at a first, superficial analysis, ChatGPT could fall within the lower-risk AI systems (third level), as it does not seem to affect fundamental rights.
However, one has to wonder whether this holds true if ChatGPT is asked to track and disclose a person’s data, to prepare an essay, or even to rework a copyrighted work.
The answer to this question can only be negative, since such use would risk violating not only each individual’s fundamental right to the protection of personal data, but also the rights of the authors of copyrighted works. All this compels us to classify ChatGPT in the category of “high-risk” AI systems.
In this respect, it should also be pointed out that the Regulation provides for strict controls on “high-risk” AI systems, as well as administrative fines of up to EUR 30 million or 6% of the turnover of the company concerned, whichever is higher. However, the body responsible for monitoring compliance with the Regulation remains unclear, or at least has not yet been identified.
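Purely by way of illustration, the fine ceiling just mentioned can be expressed as a simple calculation. This is a minimal sketch: the function name and the example turnover figure are the writer’s own and are not taken from the Regulation:

```python
def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling of the administrative fine under the proposal:
    EUR 30 million or 6% of total annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# For a hypothetical company with EUR 1 billion in turnover, the ceiling is
# EUR 60 million, since 6% of turnover exceeds the EUR 30 million floor.
print(maximum_fine_eur(1_000_000_000))  # 60000000.0
```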
In conclusion, and on the basis of the above considerations, the writer believes it advisable that the principles and criteria for the classification of the various artificial intelligence systems be better defined when the Regulation is definitively approved, given that they are currently too generic and often insufficient to correctly classify the more complex AI systems (such as “ChatGPT”).
It is also desirable that an independent and impartial authority be created and set up in each Member State to monitor and verify the correct application of the Regulation’s provisions, in order to better protect the fundamental rights of individuals.