Artificial intelligence travels fast, and on autopilot

Self-driving, profiling, social scoring, bias, chatbots and biometric identification are just some of the many terms that have entered our daily life. They all refer to artificial intelligence (“AI”), that is, a machine’s ability to display human-like skills such as reasoning, learning, planning and creativity[1]. Today more than ever, AI has an enormous impact on people and their safety. It is sufficient to mention the Australian case involving the driver of a Tesla “Model 3” who hit a 26-year-old nurse while the vehicle was on Autopilot[2].

With reference to this tragic accident, one naturally wonders who should be held liable for the nurse’s critical injuries. Is it the driver, even though she was not technically driving the vehicle at the moment of the accident? Is it the manufacturer of the vehicle that hit the nurse? Or, again, the producer/developer of the software that instructs the vehicle on how to behave when it detects a human being in its path?

As of now, the driver, although released on bail, has been charged with causing a car accident. This does not change the fact that, if the charge is confirmed in the pending judgment, the driver will then have the right to claim damages from the producer/developer of the AI system.

The above-mentioned case deserves in-depth analysis, especially from the perspective of the European regulatory framework for AI.

It is worth mentioning that, despite the steady rise of AI use across the widest range of our daily activities[3], to date there is no law, regulation or directive specifically governing civil liability for the use of AI systems.

At EU level, the Commission appears to have been the first institution to deal seriously with the issue of civil liability, highlighting the gaps on this subject and publishing, among other things, a proposal for a Regulation establishing harmonized rules on AI systems[4].

By analogy, three different forms of civil liability can be drawn from the above proposal: liability for a defective product, developer’s liability and vicarious liability.

Liability for a defective product is the form relevant to the case under examination, which treats the machine as lacking legal personality[5].

Hence, as is evident, where an AI system causes damage to a third party, liability falls on its producer/developer and not on the device/system that incorporates it.

Returning to the case in question, it would therefore be up to the developer of the AI system (i.e. the US company Tesla) to compensate the injured nurse, provided that the latter can prove the causal link between the injuries suffered and the fault of the AI system. For its part, the developer of the AI system could exclude its liability only by proving the so-called “development risk”, i.e. by showing that the defect found was entirely unforeseeable in the light of the circumstances and manner in which the accident occurred.

Some commentators have observed on this point that the manufacturer should be able to monitor the AI system remotely and, thanks to its algorithms, foresee unprogrammed conduct as early as the time the system is placed on the market[6]. Moreover, as is well known, the algorithms embedded in the AI systems installed in cars can collect data over time, self-learn and study particular behaviors and/or movements of human beings, progressively reducing the risk of accidents.

From this point of view, the manufacturer would therefore bear an even more stringent burden in order to exclude any hypothesis of liability: it would have to demonstrate that it adopted all appropriate safety measures to prevent the damage.

In this regard, the European Parliament has also drafted the “Resolution with recommendations to the Commission on a civil liability regime for artificial intelligence”, which introduces the category of so-called “high-risk AI”. This category covers AI systems operating in particularly sensitive social contexts, such as education; technologies that collect sensitive data (as in the case of biometric recognition); systems used in personnel selection (which risk sliding into social scoring or other discriminatory practices); and technologies used in the field of security and justice (which carry the risk of bias, i.e. prejudices of the machine against the person being judged). It has been observed that for such “high-risk AI” systems the producer is strictly liable in the event of harm, unless it can demonstrate the existence of a force majeure event.

In conclusion, despite the efforts made by the Commission and then by the European Parliament to regulate AI systems, many questions remain to be answered regarding the liability profiles connected to them.

For example, it would be useful to understand how AI systems that are not considered “high-risk”, such as the self-driving systems discussed in this article, should be framed and regulated. Or again, what standard of liability should apply if, in the not-too-distant future, an AI device comes to be considered fully comparable to a human being in terms of reasoning capabilities (as recently claimed by a Google engineer about the company’s conversational AI system[7]).

What is certain is that, as often happens with any technological innovation, only the large-scale integration and adoption of artificial intelligence systems in our society will allow concrete liability scenarios to take shape in day-to-day practice.

In any case, we have high hopes that the aforementioned Regulation (whose date of entry into force is not yet known) will provide a framework that is as complete as possible and that, above all, reduces the risks and responsibilities borne by users of AI systems while increasing the burdens on their manufacturers to guarantee safety.

[1] https://www.europarl.europa.eu/news/it/headlines/society/20200827STO85804/che-cos-e-l-intelligenza-artificiale-e-come-viene-usata
[2] https://www.drive.com.au/news/melbourne-hit-and-run-blamed-on-tesla-autopilot-could-set-legal-precedent-for-new-tech/
[3] Recital (2), Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021.
[4] Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2021/0106, 21 April 2021.
[5] Barbara Barbarino, Intelligenza artificiale e responsabilità civile. Tocca all’Ue, Formiche.net, 15 May 2022.
[6] See supra note 5.
[7] https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine


Free transfer of personal data to the Republic of Korea: a historic adequacy decision is coming

The European Data Protection Board (“EDPB”) has issued its opinion on the draft adequacy decision concerning the transfer of personal data to the Republic of Korea, published by the European Commission on 16 June 2021.

Once in force, this decision will allow EU economic operators (first and foremost electronic communication service providers, cloud providers and multinational companies) to freely transfer personal data from Europe to the Republic of Korea without having to rely either on the appropriate safeguards (e.g., the “Standard Contractual Clauses”) or on the derogations (e.g., the data subjects’ consent) provided for by Chapter V of EU Regulation No. 679/2016 (“GDPR”).

Indeed, pursuant to articles 44 et seq. of the GDPR, transfers of personal data to countries outside the European Economic Area or to international organizations are permitted, among other grounds, where the adequacy of the third country or organization is expressly recognized by a decision of the Commission.

We will now examine in detail the contents of the opinion issued by the EDPB.

Firstly, the EDPB noted that the Republic of Korea’s legal framework on the protection of personal data is substantially aligned with the European one, especially with regard to the main statutory definitions (“personal data”, “processing” and “data subject”), the requirements for lawful data processing, the general principles and the security measures.

This has been possible not only thanks to an effective privacy law (the “Personal Information Protection Act” or “PIPA”, which came into force in 2011) but also thanks to a series of “notifications” (including “Notification No. 2021-1”) issued by the Korean data protection authority (the “Personal Information Protection Commission” or “PIPC”), which explain and clarify the provisions of PIPA.

Moreover, as noted by the EDPB, the Republic of Korea is a party to a number of international agreements guaranteeing the right to privacy (including the “International Covenant on Civil and Political Rights”, the “Convention on the Rights of Persons with Disabilities” and the “UN Convention on the Rights of the Child”), which confirms the attention that the Republic of Korea has paid to the protection of personal data for several years now.

The EDPB's analysis then focused on some key aspects of PIPA that slightly differ from the GDPR and therefore require more attention - such as, in particular, the absence of a general right to withdraw the consent provided by the data subjects, for example, for marketing activities.

According to the EDPB, although article 37 of PIPA grants data subjects the right to request the “suspension” of the processing of their personal data (a right that can also be exercised in the case of direct marketing, as expressly clarified by Recital 79 of the Commission’s draft adequacy decision), PIPA provides for the right to withdraw consent in only two specific cases:

  1. in relation to transfers of personal data carried out in the context of extraordinary corporate transactions (such as mergers, acquisitions, etc.);
  2. with regard to the processing of personal data for marketing activities by providers of electronic communication services.

The EDPB therefore considered it necessary to draw the Commission’s attention to the above-mentioned issues, so that the Commission can analyze in detail the consequences that, in the light of the Korean legal framework, the absence of such a right might have for data subjects, and clarify in the adequacy decision the actual scope of the right to request the “suspension” of processing.

Secondly, the EDPB observed that, pursuant to article 58 of PIPA, a substantial part of PIPA (including Chapters III, IV and V, which respectively regulate the general principles for data processing, the security measures and the rights of data subjects) does not apply to several categories of personal data processing, including processing necessary to meet urgent needs for the protection of public health and safety.

The EDPB also noted that the word “urgent” in PIPA expresses an extremely broad concept that needs to be limited and contextualized, also with the help of practical examples, so as not to compromise the confidentiality of data subjects’ personal data.

Moreover, in the light of the emergency caused by the Covid-19 pandemic, the EDPB drew the Commission’s attention to the need to ensure an adequate level of protection also for personal data transferred to the Republic of Korea for purposes related to the protection of public health.

This is because “sensitive” information relating to European citizens (for example, vaccination status) should receive, once transferred to the Republic of Korea, at least the same level of protection as that granted under the GDPR. In this regard, the EDPB invited the Commission to closely monitor the application of the exemptions provided for in article 58 of PIPA.

Finally, the EDPB considered it appropriate to focus on the possibility for Korean public authorities to access the personal data of European citizens for national security purposes. In this respect, Korean authorities are under no specific obligation to inform data subjects of such access, especially when the data subjects are not Korean citizens.

However, even in the absence of such an obligation, a balance between national security needs and the protection of data subjects’ fundamental rights can be found in the Korean law protecting the privacy of interpersonal communications (the “Communications Privacy Protection Act”; see also Recital 187 of the draft adequacy decision). Under that Act, access to the personal data of European citizens for national security purposes is permitted only where certain legal requirements are met (for example, in the case of communications between “foreign agencies, groups or citizens suspected of being involved in activities threatening national security”).

The EDPB noted that, as a further guarantee of the confidentiality of communications accessed by the Korean authorities, the South Korean Constitution enshrines essential data protection principles applicable to this specific matter.

In the light of the favorable opinion issued by the EDPB, it is certainly desirable, and likely, that the European Commission will adopt an adequacy decision in respect of the Republic of Korea.

In an increasingly data-driven global economy, based on the economic value of personal data as well as on its sharing, such an adequacy decision would open the door to the liberalization of trade with the East, also from a privacy perspective.

This regulatory intervention, the subject of this article, was overdue and long awaited. It follows in the wake of the “Free Trade Agreement” between the EU and South Korea, in force since 2011, which has exponentially increased bilateral trade between the two parties (in 2015 the value of trade amounted to around €90 billion).

Our hope is that, as the years go by, the European Commission’s adequacy assessments will come to cover more and more legal frameworks, so that the international transfer of personal data can become a real and concrete instrument for promoting the economy and innovation worldwide.