Regulation and risks related to the use of Artificial Intelligence Systems

As is well known, on 21 April 2021 the European Commission published a proposal for a regulation (hereinafter, for convenience, the “Regulation”) intended to introduce a system of harmonised rules for the development, placing on the market and use of artificial intelligence (“AI”) systems within the European Union. At the same time, a “grace period” was envisaged, that is, a two-year period in which all operators in the field would have time to comply with the Regulation, as well as to make decisions based on the rules contained therein.

In light of the proliferation of AI systems, it is legitimate to ask whether, during the aforementioned grace period, operators have actually complied with the rules of the Regulation.

To answer this question, it is first necessary to list below the artificial intelligence systems specifically prohibited by the Regulation:

  • those that use subliminal techniques capable of distorting the behaviour of a subject;
  • those that exploit the vulnerability of a specific group of people;
  • those that allow public authorities to create a classification of persons based on a social score;
  • those using real-time biometric identification systems, except in cases provided for by law.

In addition to the above, the European Commission has also identified three types of risk in relation to the type of artificial intelligence system in use, which are shown below (and summarised, for illustration, in the sketch following the list):

  • “unacceptable risk”, if the AI system is to be considered a threat to security, livelihoods and individual rights; such a system is prohibited under the Regulation;
  • “high risk”, if the AI system is used in areas where fundamental human rights are affected; such a system can be used and implemented only by adopting a number of precautions, which will be discussed below;
  • “low risk”, if the AI system involves risks that are considered minor, such as in the field of video games, where only transparency obligations are imposed.
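
For readers who prefer a schematic view, the three tiers and their regulatory treatment can be summarised as a simple data structure. What follows is a minimal Python sketch paraphrasing the classification above; the labels and wording are the author's illustration, not the Regulation's actual text:

    # Illustrative sketch: the three risk tiers described above, mapped to
    # their regulatory treatment. This paraphrases the article's summary of
    # the Regulation and is not the Regulation's actual text.
    RISK_TIERS = {
        "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
        "high": "permitted only with conformity assessment, risk analysis "
                "and data-governance obligations (e.g. systems affecting fundamental rights)",
        "low": "transparency obligations only (e.g. video games)",
    }

    def treatment_for(tier: str) -> str:
        """Return the regulatory treatment for a given risk tier."""
        return RISK_TIERS[tier]

    print(treatment_for("high"))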

On the basis of these premises, let us try to analyse a concrete case of artificial intelligence - namely, the autonomous driving system of the well-known car manufacturer “Tesla”, in relation to which the Firm has already published an article - in order to understand whether or not it complies with the Regulation and, more importantly, what kind of risk would result from its use.

It is reasonable to consider the risk that such an AI system poses to the public as high: proof of this lies in the more than 700 accidents attributed to it outside the European Union, with damage caused both to people and to third-party vehicles.

It is also reasonable to state that an autonomous driving system such as the one in question would require twofold regulation: on the one hand, the rules of the aforementioned Regulation on the use and implementation of the artificial intelligence system; on the other, new and specific traffic rules made necessary by the circulation of driverless vehicles that communicate with each other through signals and sounds not comprehensible to humans (the latter regulation appears to be already well advanced in France, Germany and the United States).

Now, it seems reasonable to place this so-called “self-driving car” AI system within the second type of risk, i.e. the one deemed “high”; according to the Regulation, this type of risk requires the manufacturer of such an artificial intelligence system (“Tesla”) to carry out a preliminary conformity assessment of the system and to provide a detailed analysis of the related risks, by means of a series of tests that must prove the total absence of errors in such systems. In addition, Article 10 of the Regulation lays down a further obligation for the company using the AI system, concerning the proper storage and governance of the users’ data processed by the systems in question.

Thus, the Regulation lays down a number of rather strict requirements and obligations that, in the writer’s opinion, are unlikely to be met by the artificial intelligence systems now in circulation; indeed, there has been a lot of criticism from those who would like to see the introduction of less stringent criteria so as to facilitate greater use of artificial intelligence systems.

Another concrete example of artificial intelligence, which we have already discussed in last month's article, is ChatGPT, which in Italy was blocked by the Italian Data Protection Authority for non-compliance with European rules on personal data.

ChatGPT shows precisely how complex it is to frame and classify different AI devices, even when applying the criteria and principles set out in the Regulation. In fact, at a first and superficial analysis, ChatGPT could fall within the lower-risk AI systems (third level), as it does not seem to involve fundamental rights.

However, one has to wonder whether this also applies if one were to ask the ChatGPT application to track and disclose a person's data or to prepare an essay or even to rework a copyrighted work.

The answer to this question can only be negative, since such use would risk violating not only the fundamental rights related to the protection of personal data of each individual but also those belonging to the authors of copyrighted works. All this forces us to classify ChatGPT in the category of “high-risk” AI systems.

In this respect, it should also be pointed out that the Regulation provides for strict controls on “high-risk” AI systems, as well as administrative penalties of up to EUR 30 million or 6% of the turnover of the company concerned, whichever is higher. However, the body responsible for monitoring compliance with the Regulation remains unclear, or at least has not yet been identified.
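
To make the penalty ceiling concrete, the following is a minimal sketch of the calculation, assuming (as in the 2021 proposal) that the applicable cap is the higher of the two amounts; the function name and the example turnover are purely illustrative:

    # Minimal sketch of the maximum administrative fine described above,
    # assuming the cap is the higher of EUR 30 million and 6% of the
    # company's worldwide annual turnover (as in the 2021 proposal).
    def max_penalty_eur(annual_turnover_eur: float) -> float:
        """Upper bound of the fine for the most serious infringements."""
        return max(30_000_000, 0.06 * annual_turnover_eur)

    # A company with EUR 1 billion in turnover faces a ceiling of EUR 60
    # million, since 6% of turnover exceeds the EUR 30 million floor.
    print(max_penalty_eur(1_000_000_000))  # 60000000.0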

In conclusion, and based on the above considerations, the writer believes it advisable that the principles and criteria for the classification of the various artificial intelligence systems be better defined when the Regulation is definitively approved, given that they are currently too generic and often insufficient to correctly classify more complex AI systems (such as ChatGPT).

It is also desirable that an independent and impartial authority be created and set up in each Member State to monitor and verify the correct application of the Regulation's provisions, in order to better protect the fundamental rights of individuals.


ChatGPT and Copyright: implications and risks

Lately, there has been a lot of discussion about ChatGPT, short for Chat Generative Pre-trained Transformer, a chatbot developed by OpenAI, a research company engaged in the development and evolution of artificial intelligence.

ChatGPT has been presented to users as a “friendly AI” (or “FAI”), that is, an intelligence capable of contributing to the benefit of humanity. Nevertheless, its use has triggered a lot of concerns and criticisms, with major implications especially in relation to intellectual property.

In fact, thanks to its advanced machine learning technology, based on deep learning, ChatGPT is able to generate new text autonomously, imitating human language. Thus, it can be used not only to answer questions briefly, but also for automatic text writing.

Specifically, ChatGPT is able to create texts from scratch upon the user's request, but also to produce summaries or documents from existing works, and thus works owned by others.
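
To make the scenario concrete, here is a minimal sketch, for illustration only, of how such a summary request can be made programmatically; it assumes the pre-1.0 interface of the official openai Python library, and the model name and prompt are illustrative assumptions:

    # Illustrative sketch: asking ChatGPT to summarize an existing text via
    # the OpenAI API (pre-1.0 interface of the openai Python library).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": "Summarize the following text in three sentences: ..."},
        ],
    )
    print(response.choices[0].message.content)

It is precisely this kind of request that raises the issues discussed below, whenever the text supplied for summarizing or paraphrasing belongs to someone else.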

However, the current absence of specific regulation on its use risks seriously jeopardizing copyright in the content “created” by ChatGPT: on the one hand, significantly increasing cases of copying and plagiarism and, on the other hand, making it more complex for the copyright holder to defend its rights.

To fully understand the issue outlined above, we must first consider that, under copyright law, an idea cannot be protected as such, but only its form of expression (thus, in the case of ChatGPT, the text created).

It must also be considered that, once the authorship of a writing or work has been recognized, any improper use of it is forbidden, including copying it in its entirety, paraphrasing it and, at times, reworking it, when the differences appear to be of minor relevance. In essence, plagiarism exists in the case of partial reproduction of the protected work and, based on recent case law of the Supreme Court, also in the case of “developmental plagiarism”, that is, when the new work cannot simply be considered inspired by the original work because the differences, being merely formal, make it an abusive and unauthorized reworking of the latter.

It is also useful to point out that, according to consistent case law, in order to exclude a finding of copyright infringement, the elements considered essential to the original work must not be reproduced, and must therefore not coincide with those of the later work.

Although there is no specific regulation on this point, it is reasonable to state that these principles also apply to the texts generated by ChatGPT, because its use, or the use of any other form of artificial intelligence, cannot derogate from the rules of copyright law. Consequently, users must be very careful when asking ChatGPT to summarize or paraphrase another's text because, if the results are disseminated without the author’s permission, the latter might demand payment of reproduction rights in addition to compensation for any damage caused. To this extent, one can wonder whether the artificial intelligence system itself should refuse such a request where it would infringe others’ rights.

Another hypothesis must then be considered, namely whether a text prepared from scratch by ChatGPT is worthy of protection under copyright law. In this case, the question to be asked is whether copyright can be recognized in favor of ChatGPT.

To answer this question, we must first consider that, under Italian law, artificial intelligence systems lack legal personality and therefore cannot hold any rights, including copyright. This also seems to be confirmed by the copyright law, which mentions neither ChatGPT nor any other form of artificial intelligence when listing the subjects to which it may apply. And it could not be otherwise, since it is a law issued in 1941.

Consequently, for a work generated by ChatGPT to be deemed worthy of protection under copyright law, there must necessarily be a creative contribution by a natural person, which, however, seems to be lacking at present.

In conclusion, the absence of ad hoc legislation regulating the use of ChatGPT creates serious risks of infringement of others' works and thus jeopardizes authors’ rights. This is also because ChatGPT does not appear, at present, to have adopted suitable verification and control systems to prevent the infringement of others’ rights.

Given the increasing use of this new technology and the doubts just discussed, the writer hopes that the legislature will regulate this phenomenon as soon as possible, so as to clearly define its scope and any possible rights (rights which, as noted, do not seem capable of being recognized in favor of ChatGPT).

It is also hoped that ChatGPT will soon implement effective monitoring and reporting systems for the protection of intellectual property rights; these will likely require the help and assistance of the rights holders themselves (as on the Amazon platform), but could concretely safeguard others’ creative works.


The “Magic Avatar” and the world of artificial intelligence: lights and shadows of a trend that “revolutionizes” privacy and copyright

On December 7, 2022, “Lensa” turned out to be the most popular iPhone app on the Apple App Store. The reason? Although “Lensa” has been on the market since 2018, last November it launched a new feature called “Magic Avatar”: taking advantage of artificial intelligence, this feature allows users – upon payment of a fee – to transform their selfies into virtual avatars.

At first glance, one does not catch the problem arising from an avatar that shows the (clearly enhanced) face of the subject of the selfie; however, upon closer analysis, there are several legal issues connected to the use of this “Lensa” feature.

Indeed, the feature works thanks to artificial intelligence and on the basis of a huge amount of data (so-called “datasets”), which is stored and used to improve the performance of the application itself. In most cases, these datasets are nothing more than images collected randomly on the web, over which there is obviously no real control as to the existence of any rights. This is the first problem: the collection and diffusion of illustrations without the consent of the artists who created them amounts to copyright infringement. Authors receive no recognition or remuneration for their works – which, instead, should be guaranteed to them pursuant to the Italian Copyright Law (Law no. 633/1941, as subsequently amended) – and they also find themselves competing with artificial systems able to “emulate” their style in a few minutes.

The problem does not concern the avatar generated by the “Lensa” application as such, but the huge number of images extrapolated from the web, which the system uses to train itself and from which it must “learn” in order to then produce the avatar. The consequences of such a trend should not be underestimated, since it is fair to wonder whether one day artificial intelligences might completely replace human activity. Such an undesirable scenario is not so unlikely if we consider that the treatment of visual works created by the use of artificial intelligence systems is currently being studied by the US Copyright Office.

In order to (at least partially) address this issue, the website “Have I Been Trained” has been created to help content creators find out whether the datasets used by artificial intelligences unlawfully reproduce their creations.

There is also a second and more worrying aspect, concerning the processing of personal data by “Lensa”. Upon payment of a very low fee to generate the avatar, people provide the application with personal data and information that may also be used for purposes completely different from the creation of “filtered images”, and that therefore have significant economic value. This is one of the main complaints made against the app: once installed, “Lensa” collects more data than is necessary for its operation, transferring it to servers located in the USA (where the company’s registered office is located). That is enough to state that the data processing does not comply with the GDPR.

Indeed, the Lensa app’s privacy policy states that users’ biometric data (defined under Art. 4(14) of the GDPR as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”) would be deleted from the servers once the app has used them to generate the Magic Avatar.

The point is that – as often happens – “Lensa”’s privacy policy is long and complex, using legal terms that are difficult for users to understand; for example, we read that “Lensa” does not use “facial data” for reasons other than the application of filters, unless the user gives consent to use the photos and videos for a different purpose. This might seem comforting but, on a deeper analysis of the terms and conditions, it turns out that “Lensa” reserves far broader powers – of distribution, use, reproduction and creation – over works derived from user content, subject to an additional “explicit consent” required by the applicable law (i.e., the various national laws).

But where does such “explicit consent” come from? Easy: by sharing the avatar publicly or tagging “Lensa” on social networks, even via hashtag, the user gives consent to use that content and thus authorizes the company to reproduce, distribute and modify it. This licence – which ends with the deletion of the account – is justified in Lensa’s privacy policy on the basis of so-called “legitimate interest” (i.e. “it is our legitimate interest to make analytics of our audience as it helps us understand our business metrics and improve our products”).

However, this statement raises some concerns, especially in the light of the decision issued by the Italian Data Protection Authority concerning the social media platform “Clubhouse”, according to which a company’s “legitimate interest” is not the proper legal basis for processing such data, neither for carrying out data analysis nor for the “training” of the systems.

In the end, artificial intelligence undoubtedly represents an epoch-making technological evolution, but its use may entail the risk of an unlawful curtailment of users’ rights; indeed, a European Regulation on artificial intelligence aimed at defining the scope and conditions of its use has been under consideration for some time.

In this respect, it is hoped that the “Lensa” application will take steps as soon as possible to protect illustrators’ rights by recognizing proper remuneration for them, and to ensure that user data is collected and processed correctly, in accordance with applicable privacy laws.


Artificial intelligence travels fast and on autopilot

Self-driving, profiling, social scoring, bias, chatbots and biometric identification are just some of the many terms that have entered our daily life. They all refer to artificial intelligence (“AI”), that is, a machine’s ability to display human-like skills such as reasoning, learning, planning and creativity[1]. Today more than ever, AI has an enormous impact on people and their safety. It is sufficient to mention the Australian case involving the driver of a Tesla “Model 3” who hit a 26-year-old nurse[2] while the vehicle was on autopilot.

With reference to this tragic accident, one naturally wonders who should be held liable for the poor nurse’s critical condition. Is it the driver, despite the fact that she was not technically driving the vehicle at the moment of the accident? Is it the manufacturer of the vehicle that hit the nurse? Or, again, the producer/developer of the software that provides the vehicle with the information on how to behave when it detects a human being in its path?

As of now, the driver – although released on bail – has been charged with causing a car accident. That does not change the fact that – if the charge is confirmed in the pending judgment – the driver will then have the right to claim damages from the producer/developer of the AI system.

The above-mentioned case deserves an in-depth analysis, especially regarding the European AI industry.

It is worth mentioning that, despite the gradual rise of AI use across the widest range of our daily activities[3], to date there is no law, regulation or directive governing civil liability for the use of AI systems.

At the EU level, the Commission seems to have been the first to deal seriously with the issue of civil liability, highlighting the gaps in this area and publishing, among other things, a proposal for a Regulation establishing harmonized rules on AI systems[4].

By analogy, it is possible to derive from the above proposal three different forms of civil liability: liability for defective products, developer’s liability and vicarious liability.

Liability for defective products applies in the case under examination, since the machine lacks legal personality[5].

Hence, as is evident, in the event that an AI system causes damage to a third party, liability will fall on its producer/developer and not on the device/system that incorporates it.

Returning to the case in question, it would therefore be up to the developer of the AI system (i.e. the US company Tesla) to compensate the injured nurse, if the latter is able to prove the connection between the damage/injuries caused and the fault of the AI system. For its part, the developer of the AI system could exclude liability only by proving the so-called “development risk” defence, i.e. by providing proof that the defect found was totally unpredictable based on the circumstances and manner in which the accident occurred.

Some commentators have observed on this point that the manufacturer should be able to control the AI system remotely and to predict, thanks to its algorithms, unplanned conduct at the time of its commercialization[6]. Moreover, as we already know, the algorithms incorporated in the AI systems installed in cars can collect data over time, self-learn, and study particular behaviors and/or movements of human beings, increasingly reducing the risk of accidents.

From this point of view, the manufacturer would therefore have an even more stringent burden to exclude any hypothesis of liability, that is, to demonstrate that it has adopted all the appropriate safety measures to avoid the damage.

In this regard, the European Parliament has also drafted the “Resolution containing recommendations to the Commission on a civil liability regime for artificial intelligence”, which introduces the category of so-called “high-risk AI”: artificial intelligence systems operating in particular social contexts such as education; technologies that collect sensitive data (as in the case of biometric recognition); technologies used in the selection of personnel (which would risk lapsing into social scoring or other discriminatory practices); or, again, technologies used in the field of security and justice (which carry the risk of bias: the machine’s prejudices about the subject being judged). It has been observed that for such “high-risk AI” systems the producer is strictly liable in the event of harm, unless it is able to demonstrate the existence of a force majeure event.

In conclusion, despite the efforts made by the Commission and then by the European Parliament with regard to the regulation of AI systems, there are still many questions to be answered regarding the liability profiles connected to them.

For example, it would be useful to understand how AI systems that are not considered “high risk”, such as the self-driving systems discussed in this article, should be framed and regulated. Or again, what threshold of liability should apply if, in the not-too-distant future, an AI device may be considered fully comparable, in terms of reasoning capabilities, to a human being (as recently claimed by a Google engineer about the company’s conversational AI system[7]).

What is certain is that, as often happens with any technological innovation, only the significant integration and adoption of artificial intelligence systems in our society will allow concrete hypotheses of liability to take shape, as applicable in day-to-day contexts.

In any case, we have high hopes that the aforementioned Regulation - whose date of entry into force is not yet known - will provide a set of rules that is as complete as possible and that, above all, reduces the risks and responsibilities borne by the users of AI systems while increasing, on the other hand, the burdens borne by their manufacturers to guarantee their safety.

[1] https://www.europarl.europa.eu/news/it/headlines/society/20200827STO85804/che-cos-e-l-intelligenza-artificiale-e-come-viene-usata
[2] https://www.drive.com.au/news/melbourne-hit-and-run-blamed-on-tesla-autopilot-could-set-legal-precedent-for-new-tech/
[3] Recital 2, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021
[4] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021
[5] Barbara Barbarino, Intelligenza artificiale e responsabilità civile. Tocca all’Ue, Formiche.net, 15/05/2022
[6] See supra, note 5
[7] https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine


Internet of Things and Artificial Intelligence: the end or beginning of standard essential patents?

The COVID-19 pandemic forced everyone into quarantine, which in turn imposed a reorganisation of personal and working life directly within our homes.

All of this has simply confirmed and increased our already worrying dependence on IT tools and new technologies, the use of which grew exponentially in 2020 in all sectors, even in those where this would have been difficult to imagine (consider, for example, court hearings held remotely via audio-video link, distance learning in schools, etc.).

Similarly, we are also witnessing an ever-increasing digital integration in the objects, devices, sensors and everyday goods that have now become part of our daily life.

With that being said, we should ask ourselves now what impact the current technological revolutions will have within the field of intellectual property and, in particular, within the patent sector.

In our view, the current changes will certainly bring about a rejuvenation in the field of inventions; indeed, to the extent relevant here, it should be noted that, thanks to the decisive role of artificial intelligence and the “internet of things”, we may legitimately expect an increase in the filing of so-called standard essential patents.

It is well known that standard essential patents (SEPs) are patents that protect technologies considered to be – indeed – essential for the implementation of standards recognised by the relevant standard-setting organisations.

These patents are already more present in our lives than we imagine: we rely on them when calling others, sending messages via our smartphones, sending files via e-mail, listening to our music playlists or simply watching our favourite TV series while sitting on the couch at home.

Today, the most well-known standards probably include “Bluetooth”, “WiFi” and “5G” but, as noted above, performing any of the above actions involves dozens of standards, each of which is in turn protected by the aforementioned patents.

In a recent communication sent last November to the European Parliament, the European Commission highlighted the crucial role of standard essential patents in the development of 5G technology and the Internet of Things, noting for example that, for mobile connectivity standards alone, more than 25,000 patent families have been declared to ETSI (the European Telecommunications Standards Institute).

However, in the same communication the Commission also highlighted the difficulties that some businesses encounter in trying to reach licensing agreements with the holders of standard essential patents, which has consequently led to a rise in disputes between rights-holders and users.

Indeed, it is known that a patent is defined as essential following a sort of self-declaration by its holder to the effect that the patent is necessary and essential for the application of a standard; by means of this declaration, the holder undertakes to be available to grant a license over such patent, to those who intend to use the relevant standard, under so-called “FRAND” conditions, namely conditions that are Fair, Reasonable And Non-Discriminatory.

What occurs in practice is that the holder of the standard essential patent, having ascertained the presence within the market of a product that uses a certain standard, will turn to its producer or distributor and ask the latter to sign a license agreement containing “FRAND” conditions.

At that point the user has no choice but to accept the license on the conditions proposed by the patent holder. Indeed, unlike with patents that are not standard essential, where the user may clearly search for alternative solutions that do not infringe the patent, this is not possible with standard essential patents, given that they concern standards used to comply with technical provisions that form the basis of millions of products and thereby allow for interoperability between such products.

Moreover, investing in the development of an alternative standard is very expensive (for example, consider the development of a potential alternative to the “Bluetooth” standard), but – even if we should assume the feasibility of developing an alternative standard – consumers would then have to be persuaded to “switch” to a new standard and substitute their devices with new ones.

The risk that this kind of situation may cause distortions within the market, and especially instances of abuse on the part of the holders of standard essential patents, is therefore very high; indeed, those holders may decide the fate of a product within a certain market, because all operators of that same market are forced to use the standard upon payment of a royalty.

In order to balance the interests at play, the well-known judgment of the Court of Justice in “Huawei v. ZTE” (C-170/13 of 16.07.2015), issued back in 2015, provided for a series of obligations upon holders of standard essential patents, including: a) the obligation to guarantee at all times so-called FRAND conditions in favour of potential licensees; b) the obligation to always warn the user of the protected standard in advance, indicating the patent that has been infringed and specifying how such infringement has occurred, and, only if the user fails to cooperate, to commence legal proceedings.

According to the Court of Justice, if these conditions are met, it cannot be held that the holder of the standard essential patent has abused its dominant position within the market, and therefore no sanction may lie under art. 102 TFEU.

However, reality is somewhat different, insofar as holders of standard essential patents still have excessive negotiating power vis-à-vis the users of the protected standard. Indeed, as already noted, whether or not a patent is essential depends on a self-declaration given by the holder of the patent itself, which also establishes a “de facto” presumption of “essentiality”; this further favours holders in legal proceedings, because the burden of proof then falls on the alleged infringer, who will have to prove non-infringement or the non-essential nature of the patent.

It should also be noted that, as of now, there are no provisions protecting the weaker party, that is, the user of the standard essential patent; for example, there are no reference criteria that clearly define which conditions are fair, reasonable and non-discriminatory. In other words, the user cannot verify whether the conditions proposed by the patent holder are actually “FRAND”, and so only two options remain: either accept the conditions, or push back and start proceedings against the patent holder.

Even though the matter of standard essential patents has formed the subject of several judgments and specific calls by the European Commission throughout the years, several questions have been left open and require immediate action by the legislator in order to strengthen legal certainty and reduce the rising number of disputes within this field.

In our opinion, it would be advisable, for example, to create and establish an independent body that could verify the essential nature of a patent in advance, before it is granted protection, as well as to lay down specific, effective and fair rules capable of regulating the grant of licenses for standard essential patents.

Furthermore, considering the ongoing technological revolution and the consequent increase in the use of such patents, we trust that these reforms will be introduced in a timely manner.