Regulation and risks related to the use of Artificial Intelligence Systems

As is known, on 21 April 2021 the European Commission published a proposal for a regulation (hereinafter, for convenience, the “Regulation”) intended to introduce a system of harmonised rules for the development, placing on the market and use of artificial intelligence (“AI”) systems within the European Union. The proposal also provided for a “grace period”, that is a two-year window in which all operators in the field would have time to comply with the Regulation and to make decisions based on the rules contained therein.

In light of the proliferation of AI systems, it is legitimate to ask whether, during the aforementioned grace period, operators have actually complied with the rules of the Regulation.

To answer this question, it is first necessary to list the artificial intelligence systems expressly prohibited by the Regulation:

  • those that use subliminal techniques capable of distorting the behaviour of a subject;
  • those that exploit the vulnerability of a specific group of people;
  • those that allow public authorities to create a classification of persons based on a social score;
  • those using real-time biometric identification systems, except in cases provided for by law.

In addition to the above, the European Commission has also identified three levels of risk, depending on the type of artificial intelligence system in use, which are set out below:

  • “unacceptable risk”, where the AI system is to be considered a threat to the safety, livelihoods and rights of individuals; such a system is prohibited under the Regulation;
  • “high risk”, where AI systems are used in areas affecting fundamental human rights; such a system may be used and implemented only by adopting a number of precautions, which will be discussed below;
  • “low risk”, where AI systems involve risks considered minor, such as in the field of video games; here only transparency obligations are imposed.

On the basis of these premises, let us analyse a concrete case of artificial intelligence, namely the autonomous driving system of the well-known car manufacturer “Tesla” (on which the Firm has already published an article), in order to understand whether it complies with the Regulation and, more importantly, what kind of risk would result from its use.

It is reasonable to consider the risk that such an AI system could pose to the public as high; proof of this lies in the more than 700 accidents it has caused outside the European Union, with damage both to people and to third-party vehicles.

It is also reasonable to state that an autonomous driving system such as the one in question would require a twofold regulation: on the one hand, the rules of the aforementioned Regulation on the use and implementation of the artificial intelligence system; on the other, new and specific traffic rules made necessary by the circulation of driverless vehicles that communicate with each other through signals and sounds not comprehensible to humans (the latter regulation appears to be already well advanced in France, Germany and the United States).

Now, it seems reasonable to place this so-called “self-driving car” AI system within the second risk category, i.e. the one deemed “high”. According to the Regulation, this type of risk requires the manufacturer of the artificial intelligence system (“Tesla”) to carry out a preliminary conformity assessment of the system and to provide a detailed analysis of the related risks, by means of a series of tests that must prove the total absence of errors in such systems. In addition, Article 10 of the Regulation lays down a further obligation for the company using the AI system, concerning the proper storage and governance of the users’ data processed by the systems in question.

Thus, the Regulation provides a number of rather strict requirements and obligations that, in the writer’s opinion, are unlikely to be met by the artificial intelligence systems currently in circulation; indeed, there has been a great deal of criticism from those who would like to see less stringent criteria introduced, so as to facilitate greater use of artificial intelligence systems.

Another concrete example of artificial intelligence, which we already discussed in last month’s article, is ChatGPT, which in Italy was blocked by the Italian Data Protection Authority for non-compliance with the European rules on personal data.

ChatGPT shows precisely how complex it is to frame and classify different AI systems, even when applying the criteria and principles set out in the Regulation. Indeed, at a first, superficial analysis, ChatGPT could fall within the lower-risk AI systems (third level), as it does not seem to involve fundamental rights.

However, one has to wonder whether this also holds true if ChatGPT were asked to track and disclose a person’s data, to prepare an essay, or even to rework a copyrighted work.

The answer to this question can only be negative, since such use would risk violating not only the fundamental rights relating to the protection of each individual’s personal data but also those belonging to the authors of copyrighted works. All this compels us to classify ChatGPT in the category of “high-risk” AI systems.

In this respect, it should also be noted that the Regulation provides for strict controls on “high-risk” AI systems, as well as administrative penalties of up to EUR 30 million or 6% of the turnover of the company concerned. However, the body responsible for monitoring compliance with the Regulation remains unclear, or at least has not yet been identified.

In conclusion, on the basis of the considerations set out above, the writer believes that the principles and criteria for classifying the various artificial intelligence systems should be better defined when the Regulation is definitively approved, given that they are currently too generic and often insufficient to classify the more complex AI systems (such as ChatGPT) correctly.

It is also desirable that an independent and impartial authority be created in each Member State to monitor and verify the correct application of the Regulation’s provisions, in order to better protect the fundamental rights of individuals.


ChatGPT and Copyright: implications and risks

Lately, there has been much discussion about ChatGPT, short for Chat Generative Pre-trained Transformer, a chatbot developed by OpenAI, a research company engaged in the development and evolution of artificial intelligence.

ChatGPT has been presented to users as a “friendly AI” (“FAI”), that is, an intelligence capable of contributing to the benefit of humanity. Nevertheless, its use has triggered many concerns and criticisms, with major implications especially in relation to intellectual property.

In fact, thanks to its advanced machine learning technology, known as deep learning, ChatGPT is able to generate new text autonomously, imitating human language. Thus, it can be used not only to answer questions briefly, but also for automatic text writing.

Specifically, ChatGPT is able to create texts from scratch at the user’s request, but also to produce summaries or documents based on existing works, which are thus owned by others.

However, the current absence of specific regulation of its use risks seriously jeopardizing the copyright in the content “created” by ChatGPT: on the one hand, significantly increasing cases of copying and plagiarism and, on the other, making it more complex for copyright holders to defend their rights.

To fully understand the issue outlined above, we must first consider that, under copyright law, an idea cannot be protected as such, but only its form of expression (thus, in the case of ChatGPT, the text created).

It must also be considered that, once the authorship of a writing or work has been recognized, any improper use of it is forbidden, including copying it in its entirety, paraphrasing it and, in some cases, reworking it, where the differences appear to be of minor relevance. In essence, plagiarism exists in the case of partial reproduction of the protected work and, based on recent case law of the Supreme Court, also in the case of so-called “evolutionary plagiarism”, for example where the new work cannot simply be considered inspired by the original work because the differences, being merely formal, make it an abusive and unauthorized reworking of the latter.

It is also useful to point out that, according to consistent case law, in order for copyright infringement to be excluded, the essential elements of the original work must not be reproduced and therefore must not coincide with those of the later work into which it has been transposed.

Although there is no specific regulation on the point, it is reasonable to state that these principles also apply to texts generated by ChatGPT, because its use, or the use of any other form of artificial intelligence, cannot derogate from the rules of copyright law. Consequently, users must be very careful when asking ChatGPT to summarize or paraphrase another’s text because, if the result is disseminated without the author’s permission, the latter might demand payment of reproduction rights in addition to compensation for any damage caused. In this respect, one may wonder whether the artificial intelligence system itself should refuse such a request where it clearly infringes others’ rights.

A further hypothesis must then be considered, namely whether a text prepared from scratch by ChatGPT is worthy of protection under copyright law. In this case, the question to be asked is whether copyright can be recognized in favor of ChatGPT.

To answer this question, we must first consider that, under Italian law, artificial intelligence systems lack legal personality and therefore cannot hold any rights, including copyright. This also seems to be confirmed by the copyright law itself, which mentions neither ChatGPT nor any other form of artificial intelligence when listing the subjects to which it may apply. And it could not be otherwise, since that law was issued in 1941.

Consequently, for a ChatGPT work to be deemed worthy of protection under copyright law, there must necessarily be a creative contribution by a natural person, which, however, seems to be lacking at present.

In conclusion, the absence of ad hoc legislation regulating the use of ChatGPT exposes users to serious risks of infringing others’ works and thus jeopardizes authors’ rights. This is also because ChatGPT does not appear, at present, to have adopted suitable verification and control systems to prevent the infringement of others’ rights.

Given the increasing use of this new technology and the doubts just discussed, the writer hopes that the legislature will regulate this phenomenon as soon as possible, so as to clearly define its scope and any possible rights (rights which, it seems, cannot be recognized in favor of ChatGPT).

It is also to be hoped that ChatGPT will soon implement effective monitoring and reporting systems for the protection of intellectual property rights, which will likely require the help and assistance of the rights holders themselves (similarly to the Amazon platform), but which could concretely safeguard others’ creative works.