Artificial intelligence travels fast, and on autopilot

Self-driving, profiling, social scoring, bias, chatbots and biometric identification are just some of the many terms that have entered our daily life. They all refer to artificial intelligence (“AI”), that is, the ability of a machine to display human-like skills such as reasoning, learning, planning and creativity[1]. Today more than ever, AI has an enormous impact on people and their safety. It is enough to mention the Australian case involving the driver of a Tesla “Model 3” who hit a 26-year-old nurse[2] while the vehicle was on Autopilot.

With reference to this tragic accident, one naturally wonders who should be held liable for the nurse’s critical condition. Is it the driver, even though she was not technically driving the vehicle at the moment of the accident? Is it the manufacturer of the vehicle that hit the nurse? Or is it the producer/developer of the software that tells the vehicle how to behave when it detects a human being in its path?

As of now, the driver, although released on bail, has been charged with causing a car accident. This does not change the fact that, if the charge is confirmed in the pending judgment, the driver would then have the right to claim damages from the producer/developer of the AI system.

The above-mentioned case deserves an in-depth analysis, especially regarding the European AI industry.

It is worth mentioning that, despite the gradual rise of AI use across the widest areas of our daily life[3], to date there is no law, regulation or directive governing civil liability for the use of AI systems.

At EU level, the Commission appears to have been the first to seriously address the issue of civil liability, highlighting the gaps in this area and publishing, among other things, a proposal for a Regulation establishing harmonized rules on AI systems[4].

By analogy, three different forms of civil liability can be drawn from the above proposal: liability for a defective product, the developer’s liability and vicarious liability.

Liability for a defective product applies in the case under examination, which treats the machine as lacking legal personality[5].

Hence, as is evident, where an AI system causes damage to a third party, liability falls on its producer/developer and not on the device/system that incorporates it.

Returning to the case in question, it would therefore be up to the developer of the AI system (i.e. the US company Tesla) to compensate the injured nurse, provided the latter is able to prove the connection between the damage/injuries suffered and the fault of the AI system. For its part, the developer of the AI system could exclude its liability only by proving the so-called “development risk” defence, i.e. by showing that the defect found was entirely unforeseeable in light of the circumstances and manner in which the accident occurred.

Some commentators have observed on this point that the manufacturer should be able to control the AI system remotely and, thanks to its algorithms, predict unplanned conduct at the time the system is placed on the market[6]. Moreover, as is well known, the algorithms embedded in the AI systems installed in cars can collect data over time, self-learn and study particular behaviors and/or movements of human beings, progressively reducing the risk of accidents.

From this point of view, the manufacturer therefore bears an even more stringent burden in order to exclude any hypothesis of liability: it must demonstrate that it adopted all appropriate safety measures to avoid the damage.

In this regard, the European Parliament has also drafted the “Resolution with recommendations to the Commission on a civil liability regime for artificial intelligence”, which introduces the category of so-called “high-risk AI”: artificial intelligence systems operating in particularly sensitive social contexts such as education; technologies that collect sensitive data (as in the case of biometric recognition); systems used in the selection of personnel (which risk resulting in social scoring or other discriminatory practices); and technologies used in the field of security and justice (where there is a risk of bias, i.e. the machine’s prejudices towards the person being judged). It has been observed that for such “high-risk AI” systems the producer bears strict liability in the event of a harmful occurrence, unless it is able to demonstrate a force majeure event.

In conclusion, despite the efforts made by the Commission and then by the European Parliament with regard to the regulation of AI systems, many questions remain unanswered regarding the liability profiles connected with them.

For example, it would be useful to understand how AI systems that are not considered “high-risk”, such as the self-driving systems discussed in this article, should be framed and regulated. Or again, what standard of liability should apply if, in the not-too-distant future, an AI device comes to be considered fully comparable, in terms of reasoning capabilities, to a human being (as recently claimed by a Google engineer with regard to one of the company’s conversational AI systems[7]).

What is certain is that, as often happens with technological innovation, only the widespread integration and adoption of artificial intelligence systems in our society will outline concrete scenarios of liability as they apply in everyday contexts.

In any case, we have high hopes that the aforementioned Regulation, whose date of entry into force is not yet known, will provide a framework that is as complete as possible and that, above all, reduces the risks and responsibilities of the users of AI systems while increasing the burdens borne by their manufacturers to guarantee their safety.

[1] https://www.europarl.europa.eu/news/it/headlines/society/20200827STO85804/che-cos-e-l-intelligenza-artificiale-e-come-viene-usata
[2] https://www.drive.com.au/news/melbourne-hit-and-run-blamed-on-tesla-autopilot-could-set-legal-precedent-for-new-tech/
[3] Recital (2), Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021
[4] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021
[5] Barbara Barbarino, Intelligenza artificiale e responsabilità civile. Tocca all’Ue, Formiche.net, 15/05/2022
[6] See supra, footnote 5
[7] https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine


Internet of Things and Artificial Intelligence: the end or beginning of standard essential patents?

The COVID-19 pandemic forced everyone into quarantine, which in turn also imposed a reorganisation of personal and working life directly within our homes.

All of this has confirmed and deepened our already worrying dependence on IT tools and new technologies, the use of which grew exponentially in 2020 across all sectors, even those where this would have been difficult to imagine (consider, for example, court hearings held remotely via audio-video link, distance learning in schools, etc.).

Similarly, we are also witnessing ever-increasing digital integration in the objects, devices, sensors and everyday goods that have now become part of our daily life.

That being said, we should now ask ourselves what impact the current technological revolution will have on the field of intellectual property and, in particular, on the patent sector.

In our view, the current changes will certainly bring about a rejuvenation in the field of inventions; indeed, to the extent relevant for our purposes, it should be noted that, thanks to the decisive role of artificial intelligence and the “internet of things”, we may legitimately expect an increase in the filing of so-called standard essential patents.

It is well known that standard essential patents (SEPs) are patents protecting technologies considered to be, indeed, essential for the implementation of standards recognised by the relevant standard-setting organisations.

These patents are already more present in our lives than we imagine: we use them when calling others, sending messages via our smartphones, sending files via e-mail, listening to our music playlists or simply watching our favourite TV series while sitting on the couch at home.

Today, the best-known standards probably include “Bluetooth”, “WiFi” and “5G” but, as we said above, performing any of the above actions involves dozens of standards, each of which is in turn protected by the aforementioned patents.

In a recent communication sent to the European Parliament last November, the European Commission highlighted the crucial role of standard essential patents in the development of 5G technology and the Internet of Things, noting, for example, that for mobile connectivity standards alone more than 25,000 patent families have been declared to ETSI (the European Telecommunications Standards Institute).

However, in the same communication the Commission also highlighted the difficulties that some businesses encounter in trying to reach licensing agreements with the holders of standard essential patents, which has in turn led to a rise in disputes between rights-holders and users.

Indeed, it is well known that a patent is defined as essential following a sort of self-declaration by its holder to the effect that the patent is necessary and essential for the implementation of a standard; by means of this declaration, the holder undertakes to grant a license over that patent to those who intend to use the relevant standard under so-called “FRAND” terms, namely conditions that are Fair, Reasonable And Non-Discriminatory.

What occurs in practice is that the holder of a standard essential patent, having identified on the market a product that uses a certain standard, will approach its producer or distributor and ask the latter to sign a license agreement containing “FRAND” conditions.

At that point the user has no choice but to accept the license on the conditions proposed by the patent holder. Unlike what happens with patents that are not standard essential, where the user can clearly look for alternative solutions that do not infringe the patent, this is not possible with standard essential patents, since they concern standards used to comply with the technical specifications that underpin millions of products and allow for interoperability between them.

Moreover, investing in the development of an alternative standard is very expensive (consider, for example, developing a potential alternative to the “Bluetooth” standard) and, even assuming that an alternative standard were feasible, consumers would then have to be persuaded to “switch” to the new standard and replace their devices with new ones.

The risk that this kind of situation may distort the market, and in particular give rise to abuse on the part of the holders of standard essential patents, is therefore very high; indeed, such holders can decide the fate of a product within a given market, since they effectively oblige all operators in that market to use the standard upon payment of a royalty.

In order to balance the interests at play, the Court of Justice, in its well-known “Huawei v. ZTE” judgment (C-170/13 of 16.07.2015), had already established a series of obligations for holders of standard essential patents, including: a) the obligation to guarantee at all times so-called FRAND conditions in favour of potential licensees; b) the obligation to warn the user of the protected standard in advance, indicating the patent that has been infringed and specifying how the infringement has occurred, and to commence legal proceedings only if the user fails to cooperate.

According to the Court of Justice, if these conditions are met it cannot be held that the holder of the standard essential patent has abused its dominant position within the market, and therefore no sanction may be imposed under Article 102 TFEU.

However, reality is somewhat different insofar as holders of standard essential patents still enjoy excessive negotiating power vis-à-vis users of the protected standard. Indeed, as already noted, whether or not a patent is essential depends on a self-declaration made by the patent holder itself, which also establishes a “de facto” presumption of “essentiality”; this further favours holders in legal proceedings, because the burden of proof then falls on the alleged infringer, who will have to prove non-infringement or the non-essential nature of the patent.

It should also be noted that, as of now, there are no provisions protecting the weaker party, that is, the user of the standard essential patent; for example, there are no reference criteria that clearly define which conditions are fair, reasonable and non-discriminatory. In other words, the user cannot verify whether the conditions proposed by the patent holder are actually “FRAND”, leaving only two options: either accept the conditions or resist and start proceedings against the patent holder.

Even though the matter of standard essential patents has been the subject of several judgments and specific calls by the European Commission over the years, several questions remain open and require prompt action by the legislator in order to strengthen legal certainty and reduce the rising number of disputes in this field.

In our opinion, it would be advisable, for example, to establish an independent body that could verify in advance the essential nature of a patent before it is granted such protection, as well as to introduce specific, effective and fair rules capable of regulating the grant of licenses for standard essential patents.

Furthermore, considering the ongoing technological revolution and the consequent increase in the use of such patents, we trust that these reforms will be introduced in a timely manner.