Regulation and risks related to the use of Artificial Intelligence Systems

As is known, on 21 April 2021 the European Commission presented a proposal for a regulation (hereinafter, for convenience, the “Regulation”) intended to introduce a system of harmonised rules for the development, placing on the market and use of artificial intelligence (“AI”) systems within the European Union. At that time, a “grace period” was also introduced: a two-year window in which operators in the field would have time to comply with the Regulation and to make decisions based on the rules contained therein.

In light of the proliferation of AI systems, it is legitimate to ask whether, during the aforementioned grace period, operators have actually complied with the rules of the Regulation.

To answer this question, it is first necessary to list below the artificial intelligence systems specifically prohibited by the Regulation:

  • those that use subliminal techniques capable of distorting the behaviour of a subject;
  • those that exploit the vulnerability of a specific group of people;
  • those that allow public authorities to create a classification of persons based on a social score;
  • those using real-time biometric identification systems, except in cases provided for by law.

In addition to the above, the European Commission has also identified three types of risk in relation to the type of artificial intelligence system in use, which are shown below:

  • “unacceptable risk”, if the AI system is to be considered a threat to the safety, livelihoods and rights of individuals; such a system is prohibited under the Regulation;
  • “high risk”, if the AI system is used in areas where fundamental human rights are affected; such a system may be used and implemented only by adopting a number of precautions, which will be discussed below;
  • “low risk”, if the AI system involves risks considered minor, as in the field of video games, where only transparency obligations are imposed.

On the basis of these premises, let us analyse a concrete case of artificial intelligence - namely, the autonomous driving system of the well-known car manufacturer “Tesla”, in relation to which the Firm has already published an article - in order to understand whether or not it complies with the Regulation and, more importantly, what kind of risk would result from its use.

It is reasonable to consider the risk that such an AI system could pose to the public as high; proof of this lies in the more than 700 accidents it has caused outside the European Union, with damage to both people and third-party vehicles.

It is also reasonable to state that an autonomous driving system such as the one in question would require a twofold regulation: on the one hand, the rules of the aforementioned Regulation on the use/implementation of the artificial intelligence system; on the other, new and specific traffic rules made necessary by the circulation of driverless vehicles that communicate with each other through signals and sounds not comprehensible to humans (the latter regulation appears to be already well advanced in France, Germany and the United States).

Now, it seems reasonable to place this so-called “self-driving car” AI system within the second risk category, i.e. the one deemed “high”. According to the Regulation, this category requires the manufacturer of the artificial intelligence system (“Tesla”) to carry out a preliminary conformity assessment of the system and to provide a detailed analysis of the related risks, by means of a series of tests that must prove the total absence of errors in such systems. In addition, Article 10 of the Regulation lays down a further obligation for the company using the AI system, concerning the proper storage and governance of the users' data processed by the systems in question.

Thus, the Regulation provides a number of rather strict requirements and obligations which, in the writer's opinion, are unlikely to be met by the artificial intelligence systems now in circulation; indeed, there has been a lot of criticism from those who would like to see less stringent criteria introduced, so as to facilitate greater use of artificial intelligence systems.

Another concrete example of artificial intelligence, which we discussed in last month's article, is ChatGPT, which was blocked in Italy by the Italian Data Protection Authority for non-compliance with the European rules on personal data.

ChatGPT itself shows how complex it is to frame and classify different AI systems, even when applying the criteria and principles set out in the Regulation. Indeed, at a first and superficial analysis, ChatGPT could fall within the lower-risk AI systems (third level), as it does not appear to involve fundamental rights.

However, one has to wonder whether this also applies if one were to ask the ChatGPT application to track and disclose a person's data or to prepare an essay or even to rework a copyrighted work.

The answer to this question can only be negative, since such use would risk violating not only the fundamental rights related to the protection of personal data of each individual but also those belonging to the authors of copyrighted works. All this forces us to classify ChatGPT in the category of "high risk" AI systems.

In this respect, it should also be pointed out that the Regulation provides for strict controls on “high-risk” AI systems, as well as administrative penalties of up to EUR 30 million or 6% of the turnover of the company concerned. However, the body responsible for monitoring compliance with the Regulation remains unclear, or at least has not yet been identified.
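The penalty ceiling mentioned above can be sketched as a simple calculation. This sketch assumes the “whichever is higher” reading found in the proposal's penalty provisions; the function name is purely illustrative:

```python
def max_penalty(annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine, in EUR.

    Illustrative reading of the proposed Regulation's penalty clause:
    EUR 30 million or 6% of total annual turnover, whichever is higher.
    """
    return max(30_000_000.0, 0.06 * annual_turnover_eur)


# For a company with EUR 100 million turnover, the fixed cap applies;
# above EUR 500 million turnover, the 6% cap becomes the higher figure.
print(max_penalty(100_000_000))    # fixed cap: 30,000,000
print(max_penalty(1_000_000_000))  # 6% cap: 60,000,000
```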

In conclusion, and on the basis of the considerations set out above, the writer believes it is advisable that the principles and criteria for classifying the various artificial intelligence systems be better defined when the Regulation is definitively approved, given that they are currently too generic and often insufficient to correctly classify the more complex AI systems (such as “ChatGPT”).

It is also desirable that an independent and impartial authority be created in each Member State to monitor and verify the correct application of the Regulation's provisions, in order to better protect the fundamental rights of individuals.


Artificial intelligence travels fast and with autopilot

Self-driving, profiling, social scoring, bias, chatbots and biometric identification are just some of the many terms that have entered our daily life. They all refer to artificial intelligence (“AI”), that is, a machine's ability to display human-like skills such as reasoning, learning, planning and creativity[1]. Today more than ever, AI has an enormous impact on people and their safety. It is sufficient to mention the Australian case involving the driver of a Tesla “Model 3” who hit a 26-year-old nurse while the vehicle was on Autopilot[2].

With reference to this tragic accident, one naturally wonders who should be held liable for the nurse's critical condition. Is it the driver, despite the fact that she was not technically driving the vehicle at the moment of the accident? Is it the manufacturer of the vehicle that hit the nurse? Or, again, the producer/developer of the software that provides the vehicle with instructions on how to behave when it detects a human being in its path?

As of now, the driver – although released on bail – has been charged with causing a car accident. That does not change the fact that, if the charge is confirmed in the pending judgment, the driver will then have the right to claim damages from the producer/developer of the AI system.

The above-mentioned case deserves an in-depth analysis, especially regarding the European AI industry.

It is worth noting that, despite the gradual rise of AI use in ever wider areas of our daily life[3], to date there is no law, regulation or directive on civil liability for the use of AI systems.

At EU level, the Commission appears to have been the first to deal seriously with the issue of civil liability, highlighting the gaps on this subject and publishing, among other things, a proposal for a Regulation establishing harmonised rules on AI systems[4].

By analogy, three different forms of civil liability can be derived from the above proposal: liability for a defective product, developer's liability and vicarious liability.

Liability for a defective product applies in the case under examination, since the machine lacks legal personality[5].

Hence, in the event that an AI system causes damage to a third party, liability will fall on its producer/developer and not on the device/system that incorporates it.

Returning to the case in question, it would therefore be up to the developer of the AI system (i.e. the US company Tesla) to compensate the injured nurse, provided that she is able to prove the connection between the damage/injuries suffered and the fault of the AI system. For its part, the developer of the AI system could exclude its liability only by proving the so-called “development risk”, i.e. by showing that the defect found was entirely unforeseeable in the circumstances and manner in which the accident occurred.

Some commentators have observed on this point that the manufacturer should be able to control the AI system remotely and, thanks to the algorithms, predict unforeseen conduct at the time of its commercialisation[6]. Moreover, as we already know, the algorithms incorporated in the AI systems installed in cars can collect data over time, self-learn and study particular behaviours and/or movements of human beings, increasingly reducing the risk of accidents.

From this point of view, the manufacturer would therefore have an even more stringent burden to exclude any hypothesis of liability, that is, to demonstrate that it has adopted all the appropriate safety measures to avoid the damage.

In this regard, the European Parliament has also drafted the “Resolution with recommendations to the Commission on a civil liability regime for artificial intelligence”, which introduces the category of so-called “high-risk AI”: artificial intelligence systems operating in particularly sensitive social contexts such as education; technologies that collect sensitive data (as in the case of biometric recognition); technologies used in the selection of personnel (which risk resulting in social scoring or other discriminatory acts); or, again, technologies used in the field of security and justice (where there is a risk of bias, i.e. prejudices of the machine towards the subject being judged). It has been observed that such “high-risk AI” systems entail strict liability of the producer in the case of a harmful event, unless the latter is able to demonstrate a force majeure event.

In conclusion, despite the efforts made by the Commission and then by the European Parliament with regard to the regulation of AI systems, there are still a lot of questions to be answered regarding the profiles of liability connected to them.

For example, it would be useful to understand how AI systems that are not considered “high risk”, such as the self-driving systems discussed in this article, should be framed and regulated. Or again, what threshold of liability should apply if, in the not-too-distant future, an AI device comes to be considered fully comparable, in terms of reasoning capabilities, to a human being (as recently claimed by a Google engineer about the company's AI system[7]).

What is certain is that, as often happens with any technological innovation, only significant integration and adoption of artificial intelligence systems in our society will outline concrete hypotheses of liability applicable in day-to-day contexts.

In any case, we have high hopes that the aforementioned Regulation - whose date of entry into force is not yet known - will be able to provide a discipline that is as complete as possible and that above all reduces the risks and responsibilities of the users of AI systems and increases, on the other hand, the burdens borne by the manufacturers of the same to guarantee their safety.

[1] https://www.europarl.europa.eu/news/it/headlines/society/20200827STO85804/che-cos-e-l-intelligenza-artificiale-e-come-viene-usata
[2] https://www.drive.com.au/news/melbourne-hit-and-run-blamed-on-tesla-autopilot-could-set-legal-precedent-for-new-tech/
[3] Recital (2), Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021
[4] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021
[5] Barbara Barbarino, Intelligenza artificiale e responsabilità civile. Tocca all’Ue, Formiche.net, 15/05/2022
[6] See supra, fn 5
[7] https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine


European Green Certificate: freedom of movement in Europe during the pandemic

It is called the European “Green Certificate”, but in practice it can be read as a “Covid-19 pass”. It is intended to make the movement of citizens within the European Union easier, as well as to contribute to containing the spread of the Sars-CoV-2 virus.

Let us try to understand what the features of the green certificate are, who will issue it and what guarantees will be in place for the protection of personal data, including sensitive data.

1. Premise

On 17 March 2021, the European Commission proposed the introduction of a European "Green Certificate", which aims to allow the exercise of the right of free movement of citizens – as provided under Article 21 of the Treaty on the Functioning of the EU (TFEU) – during the Covid-19 pandemic. This certificate would be issued by each Member State, in digital and/or paper format and would have the same legal value throughout the EU.

However, to speak of just one "Green Certificate" is not correct. Indeed, as explained in the Proposal for a European Regulation published by the EU Commission[1], there are three different types of certificates that may be issued:

i. “Vaccination certificate”: an attestation certifying that a person has received an anti-Covid-19 vaccine authorised for marketing in the EU;

ii. “Test certificate”: a certification that an individual has been tested for Covid-19, via an antigen or molecular test (provided it is not a self-diagnostic test), with a negative result;

iii. "Certificate of recovery": a document proving that a person who had been diagnosed with Covid-19 has subsequently recovered from it.

Each certificate will be in the official language of the relevant Member State and in English, it will be free of charge and will be issued by duly authorized institutions/authorities (e.g., hospitals, diagnostic/testing centres or the health authority itself).

2. How does the certificate work?

The certificate has a "QR (Quick Response) code" containing essential information about its holder, a digital signature that prevents forgery and a seal that guarantees its authenticity.

When a European citizen enters a Member State of which he or she is not a national, the institutions and/or competent authorities of that State will scan the QR code on the certificate and verify the digital signature contained therein. This verification will take place using the signature keys held by the institutions/authorities of the State of destination, which will be stored in a secure database in each State.

In addition, a single "gateway" managed by the EU Commission will be made available at an EU level, via which the digital signatures of green certificates may be verified throughout the EU.
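The “scan, look up the issuer's key, verify the signature” sequence described above can be sketched in a few lines of code. This is a purely illustrative simplification: the real certificate relies on CBOR/COSE encoding with asymmetric (e.g. ECDSA) signatures, whereas this sketch uses a keyed hash (HMAC) as a stand-in for the signature, and all names and keys are hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical per-State key store; in reality, public signature keys are
# distributed between Member States via a central gateway.
TRUSTED_KEYS = {"issuer-IT": b"national-secret-key"}


def sign_certificate(payload: dict, issuer: str) -> str:
    """Issue a certificate: serialize the payload, sign it, encode as QR data."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(TRUSTED_KEYS[issuer], body, hashlib.sha256).hexdigest()
    envelope = {"iss": issuer, "payload": payload, "sig": tag}
    return base64.b64encode(json.dumps(envelope).encode()).decode()


def verify_certificate(qr_data: str) -> bool:
    """Verify a scanned certificate against the issuer's stored key."""
    decoded = json.loads(base64.b64decode(qr_data))
    key = TRUSTED_KEYS.get(decoded["iss"])
    if key is None:  # unknown issuing authority: reject
        return False
    body = json.dumps(decoded["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing-based forgery attempts
    return hmac.compare_digest(expected, decoded["sig"])
```

For example, a certificate produced by `sign_certificate({"name": "Jane Doe", "vaccinated": True}, "issuer-IT")` passes verification, while any alteration of the payload after issuance invalidates the signature.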

3. Processing of personal data contained in the certificate

Each certificate - whether related to vaccination, test or recovery - will contain a series of information relating to the person to whom it refers such as, for example: name, surname, date of birth, date of vaccination, result of the antigenic/molecular test, diseases which he/she has recovered from. This is information that falls under the definition of "personal data" pursuant to art. 4 of Regulation 679/2016 ("GDPR") insofar as it relates to an identified natural person and, for this reason, must be processed in accordance with the principles and guarantees provided by that Regulation.

In this regard, it is appropriate to summarise the most important contents of the opinion dated 31 March 2021 that the European Data Protection Board ("EDPB") and the European Data Protection Supervisor ("EDPS") provided to the EU Commission regarding the green certificate.

a) The processing of personal data contained in the certificates should be carried out only for the purpose of proving and verifying the vaccination, negativity or recovery status of the holder of the certificate and, consequently, to facilitate the exercise of the right of free movement within the EU during the pandemic.

b) In order to facilitate the exercise of privacy rights by the data subjects, it would be advisable for each country to draw up and publish a list of the persons authorized to process such data as data controllers or data processors and of those who will receive the personal data (in addition to the authorities/institutions of each Member State competent to issue certificates, already identified as "data controllers" by the proposal for a Regulation at issue).

c) The legal basis for the processing of personal data in the certificates should be the fulfilment of a legal obligation (art. 6, para. 1, lett. c of GDPR) and "reasons of substantial public interest" (art. 9, para. 2, lett. g of GDPR).

d) In accordance with the GDPR principle of "storage limitation" of personal data, retention of data should be limited to what is necessary for the purposes of the processing (i.e. facilitation of the exercise of the right to free movement within the EU during the Covid-19 pandemic) and, in any case, to the duration of the pandemic itself, which will have to be declared ended by the WHO (World Health Organization).

e) The creation of EU-wide databases will be absolutely forbidden.

4. Critical remarks and conclusions

In addition to the innovative implications of the proposed Regulation, there are certain aspects which deserve further study or, at least, clarification by the European legislator in order to ensure the correct application of the new European legislation.

a. Issuing and delivery of certificates

The proposal for a Regulation provides that certificates are "issued automatically or at the request of the interested parties" (see Recital 14 as well as articles 5 and 6). Therefore, as also pointed out by the EDPB and the EDPS, the question is whether a certificate:

i. will be created and then delivered to the individual only if expressly requested by the latter;
or if, on the contrary
ii. such certificate will be created automatically by the competent authorities (e.g., as a result of vaccination) but delivered to the individual only upon his/her express request.

b. Possession of a certificate does not prevent member states from imposing any entry restrictions

The proposed Regulation provides that a Member State may still decide to impose on the holder of a certificate certain restrictive measures (such as, for example, the obligation to undergo a quarantine regime and/or self-isolation measures) despite the presentation of the certificate itself, as long as the State indicates the reasons, scope and period of application of the restrictions, including the relevant epidemiological data to support them.

However, one may wonder if these restrictions and their enforcement conditions will be defined at European level, or whether their identification will be left to each State; in the latter case, this would involve accepting the risk of frustrating the attempt of legal harmonisation pursued by the proposed Regulation.

c. The duration of the certificate

The proposed Regulation provides that only the “Certificate of recovery” must also contain an indication of its validity period. Therefore, once again, one may wonder what the duration of the other two certificates (“vaccination” and “test”) will be, and how it will be possible to ensure the accuracy of what a certificate attests after a certain period of time from its date of issuance (consider, for example, the various cases of positivity found after the administration of a vaccine, or the so-called “false negative/positive” cases).

d. The obligations of the certificate holder

What are the obligations of the certificate holder? For example, if a certificate has been issued attesting a Covid-19 negative result and, after a few months, the holder finds out that in fact he or she is positive, would the holder be obliged to apply to the competent authorities/institutions of his/her country in order to have the certificate revoked? Furthermore: in the event that the holder tries to use a false certificate for entry in another Member State, what sanctions would he/she have to face?

e. The protection of personal data

The new proposal for a Regulation tasks the Commission with adopting, by means of implementing acts, specific provisions aimed at guaranteeing the security of the personal data contained in the certificates.

However, given the extremely sensitive nature of the data in question, will the EU Commission also ask for a prior opinion from the Data Protection Authorities of the Member States? Perhaps it would be useful to ensure, also in this context and with specific reference to the privacy documentation to be provided to the data subjects prior to the issuance of certificates, a quasi-unanimous European approach in order to prevent the possible dilution of the guarantees provided under Regulation 679/2016 ("GDPR").

In conclusion, this is a proposal which, if implemented with due care, could contribute greatly to the return to a somewhat normal life. However, it is feared that, without particularly detailed regulation on this matter, there would be a high risk of making cross-border movements even more complex, also considering that each State would most likely adopt further measures regarding this certificate.

[1] https://eur-lex.europa.eu/resource.html?uri=cellar:38de66f4-8807-11eb-ac4c-01aa75ed71a1.0024.02/DOC_1&format=PDF.


EU - UK agreement (so-called “Brexit”): the birth of the “comparable trademark”

It is well known by now that on 24 December 2020 the European Union and the United Kingdom reached an agreement for regulating their future commercial relations following “Brexit”.

This agreement seals the definitive separation between the British and European legal systems and, starting from the date of its enactment (i.e., 1 January 2021, namely the end of the transition period), the rules of European Union law will no longer apply to the United Kingdom, including those concerning intellectual and industrial property rights.

With a view to ensuring an orderly transition towards the new legal regime, the European Commission published a series of “Notices of Withdrawal” (related to the main sectors of European economy) which set out the main practical consequences that will affect the owners of intellectual and industrial property rights.

In particular, the Notice concerning trademarks specifies that, among other things, the owner of an EU trademark registered before 1 January 2021 will automatically become the owner of a “comparable trademark” in the United Kingdom, which will be deemed registered and enforceable in the United Kingdom, in accordance with the laws of that country.

This notion of a “comparable trademark” appears to be new within the field of IP rights, in so far as it was specifically introduced for the purpose of protecting those who – before the definitive withdrawal of the United Kingdom from the European Union – had obtained protection for their EU trademark which, at the time, produced effects also with respect to British territory.

The European Commission – evidently aware of the novelty of this legal concept – in the same Notice clarified that such “comparable trademark”:

      1. consists of the same sign that forms the object of EU registration;
      2. enjoys the date of filing or the date of priority of the EU trademark and, where appropriate, the seniority of a trademark of the United Kingdom claimed by its owner;
      3. allows the owner of an EU trademark that has acquired a reputation before 1 January 2021 to exercise equivalent rights in the United Kingdom;
      4. cannot be liable to revocation on the ground that the corresponding EU trademark had not been put into genuine use in the territory of the United Kingdom before the end of the transition period;
      5. may be declared invalid or revoked or cancelled if the corresponding EU trademark is the object of a decision to that effect as a result of an administrative or judicial procedure which was ongoing before 1 January 2021 (on a date following the “cloning”).

The British Government has confirmed that it will fall to the competent UK office to carry out, free of charge, the “cloning” of EU trademarks in the United Kingdom, where they will become “comparable trademarks”. Owners of EU trademarks are not required to file any request or commence any administrative procedure in the United Kingdom, nor do they need a postal address in the United Kingdom for the three years following the end of the transition period.

Despite the precise description of the main features of the new “comparable trademarks”, there are – inevitably – uncertainties surrounding the practical application of this legal concept.

In particular, it is puzzling to find that the “comparable trademark” continues to be influenced by European administrative and judicial events (see point 5 above), which conflicts with the alleged independence of the United Kingdom from European laws and regulations.

Such inconsistencies have evidently already been noted, in so far as the Notice specifies (in a footnote) that the parties have acknowledged that the United Kingdom “is not obliged to declare invalid or to revoke the corresponding right in the United Kingdom where the grounds for the invalidity or revocation of the European Union trade mark … do[es] not apply in the United Kingdom”. It would therefore seem that the United Kingdom is vested with the power not to conform to European decisions.

However, it is not clear which should prevail in this “contest”: the invalidating decision of the European proceedings, or the British power to deny the effects of such a European decision?

Furthermore, if the European proceedings – albeit commenced before the end of the transition period – should last for several years, how should the owner of the “cloned” trademark in the United Kingdom behave? Again, how are we to reconcile the existence of a “comparable trademark” – contemporaneously subject to European and British jurisdiction – with the known principle of territoriality applicable to the world of trademarks?

The situation appears somewhat uncertain and, in our opinion, it cannot be excluded that other issues concerning this new “comparable trademark” may arise in the future and form the object of open debate by those operating in the IP industry.

This is a key point that concerns not just acquired rights (which the British Government has undertaken to protect), but also the future political relations between the EU and the United Kingdom; indeed, it is interesting to note in this regard how such a seemingly innocent subject, namely trademark law, may in fact reveal the frailty of an agreement which is formally commercial but in reality turns out to be predominantly of a political nature.

In light of all the above, it would seem that the EU and the United Kingdom have chosen to follow the easiest path for protecting owners of EU trademarks in the midst of an orderly transition towards a new legal regime imposed by Brexit; on the other hand, this decision raises several legal questions, some of which have been anticipated above, which introduce a measure of uncertainty concerning the new “British hybrid” trademark.

Finally, we cannot fail to note how this new legal concept may represent an interesting precedent in the event that other Member States may decide in the future to leave the European Union. In that regard, we find ourselves before a new concept that certainly seems interesting from a legal standpoint, but potentially may also be “dangerous” from a political point of view and, as such, deserving of close attention in the coming years.