Guidelines for the installation of video surveillance systems
Updated to the Italian Data Protection Authority’s FAQs of 5 December 2020 and to the EDPB's Guidelines no. 3/2019 on processing of personal data through video devices
GENERAL RULES
- To comply with the principle of "data minimization", the data controller must choose the video surveillance systems to be installed and the positioning of the cameras on the basis of the specific purposes of the processing, and must collect and process only personal data that is relevant and not excessive for such purposes.
- No prior authorization by the Italian Data Protection Authority is needed for the installation of video cameras, but the data controller must carry out an independent assessment of the lawfulness and proportionality of the processing, taking into account the context and the purposes of the processing itself as well as the risks to the rights and freedoms of natural persons.
- A privacy notice must be provided to the data subjects: both in a short form (through a special sign clearly visible to those passing through the monitored area - the model of this sign is available on the website of the Italian Data Protection Authority - https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9496244) and in an extended form.
- An autonomous assessment of the retention periods of the images must be carried out by the data controller (in accordance with the principle of "accountability" set forth by the GDPR), considering the context and purposes of the processing as well as the risks to the rights and freedoms of natural persons. This is without prejudice to specific provisions of law that determine how long the images must be stored in particular circumstances.
- A DPIA must be drafted when new technology cameras or "integrated" and/or "intelligent" video surveillance systems are installed (which, for example, detect, record and automatically report anomalous behaviours or events to the competent authorities), in case of a systematic monitoring of a publicly accessible area on a large scale (e.g., highways, large shopping malls) and in the other cases provided for by articles 35 and 36 of the GDPR and by provision no. 467/2018 of the Italian Data Protection Authority.
SPECIFIC CONTEXTS
WORKPLACE
(art. 4 of Italian Law no. 300/1970)
Purposes of processing: organizational and production needs, work safety and protection of company assets.
If the employer can remotely monitor the employees' activities through the video cameras:
- an agreement with the company trade union representatives (RSA/RSU) or the prior authorization from the National Labour Inspectorate is required;
- it is mandatory to carry out the DPIA;
- it is necessary to draft internal policies to be provided to the employees, describing in a clear and transparent manner how the working tools (PCs, smartphones, etc.) may be used and the controls that the employer may carry out over the employees;
- it is necessary to comply with the privacy obligations under the GDPR and the Privacy Code.
***
PRIVATE PROPERTY / BUSINESS PREMISES
Purposes of processing: monitoring and protection of private property or business premises, prevention of theft and/or vandalism, etc.
Specific conditions to be met:
- limitation of the angle of the video cameras to the areas of exclusive pertinence, excluding common areas (courtyards, ground floors, etc.) or areas belonging to third parties;
- prohibition on filming public areas or areas of public transit.
If specific "home" cameras (so-called "smart cams") are installed within your home, it is necessary to:
- inform any employees (housekeepers, carers, etc.) of the presence of the video cameras;
- avoid monitoring environments where filming would harm a person’s dignity (such as restrooms, locker rooms, etc.);
- protect adequately with appropriate security measures the personal data collected or that can be acquired through the smart cams.
***
CONDOMINIUM
Purposes of processing: monitoring and protection of the common parts of the building and in general of the individual properties.
Specific conditions to be met:
- pursuant to art. 1136 of the Italian Civil Code, a prior resolution of the condominium meeting is necessary;
- the maximum period for the storage of the images is 7 days from collection (unless there are other proven needs justifying an extension of this deadline).
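Where recordings are managed by software, the 7-day storage limit lends itself to automated enforcement. The following is a minimal illustrative sketch (all function and variable names are our own, not drawn from any guideline) of how a system might flag recordings past the permitted period:

```python
from datetime import datetime, timedelta

# Maximum storage period described above (7 days from collection).
RETENTION_DAYS = 7

def is_past_retention(recorded_at: datetime, now: datetime,
                      retention_days: int = RETENTION_DAYS) -> bool:
    """Return True if a recording is older than the permitted storage period."""
    return now - recorded_at > timedelta(days=retention_days)

def select_expired(recordings: dict, now: datetime) -> list:
    """Return the identifiers of recordings that should be deleted."""
    return [rec_id for rec_id, ts in recordings.items()
            if is_past_retention(ts, now)]
```

Under this strict comparison, a recording made exactly 7 days earlier is still within the permitted period; a real system would also need to accommodate the "proven needs" exception mentioned above before deleting anything.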
***
PARTICULAR CATEGORIES OF PERSONAL DATA
(hospitals and clinics)
Purposes of processing: protection of the patients’ health, monitoring of particular hospital departments, etc.
If the video surveillance is used to collect particular categories of data (e.g., to monitor the patient's health), it is necessary to:
- check the existence of a legal basis for the processing under art. 9 of the GDPR (such as, for example, the provision of health care or treatment, ensuring high standards of quality and the safety of health care, etc.).
- pay special attention so that the collection of personal data is limited to only that data necessary for the purposes of the processing ("minimization");
- carry out the mandatory DPIA if the processing of personal data concerning patients, disabled persons, mentally ill persons, minors and the elderly is not occasional;
- constantly monitor the security measures (data storage systems and access to data) applied to the processing.
***
CIRCULATION OF VEHICLES
Purposes of processing: assessment and detection of the violations of the highway code.
Specific conditions to be met:
- limitation of the positioning and angle of the video cameras to the areas necessary for the detection of violations;
- deletion/obscuration of any images collected but not necessary for the purposes of the processing (e.g., images of pedestrians or other road users, passengers present in the vehicle, etc.);
- performance of the mandatory DPIA in case of processing of personal data on a large scale (e.g., highways) to monitor drivers' behaviour.
***
MUNICIPAL LANDFILLS
Purposes of processing: control and monitoring of hazardous substance landfills and "eco stations" (checking the type of waste dumped, the time of deposit, etc.).
Limitations:
- only a public body/entity (not a private person/entity) is allowed to conduct the monitoring;
- the monitoring is permitted only if alternative tools and controls are not possible or not effective for reaching the same purposes.
***
EDUCATIONAL INSTITUTES
Purposes of processing: protection of the building, of school properties therein, of staff and students, protection from vandalism, etc.
Specific conditions to be met:
- the video cameras that capture the interior of the institutes can only be activated during closing hours, and therefore not during school or extracurricular activities;
- if the video cameras capture images of areas outside the school, the angle of the video cameras must be properly limited.
***
URBAN SAFETY
Purposes of processing: protection of urban safety of the public places or of the areas open to the public.
Specific conditions to be met:
- storage of the images for a maximum period of 7 days after the collection, unless there are special preservation needs for a longer period (art. 6, para. 8 of Law Decree no. 11/2009).
***
VIDEO SURVEILLANCE FROM HIGH ALTITUDES
Data protection laws do not apply to the processing of personal data that does not allow for the identification of natural persons, either directly or indirectly, such as video surveillance carried out from high altitudes (for example, using drones or similar devices) or fake and/or switched-off cameras.
Smart printers and smart objects: friends or foes?
On 9 December 2020 the Italian Antitrust Authority (Autorità Garante della Concorrenza e del Mercato, also known by the acronym “AGCM”) imposed, among other things, a fine of 10 million euros on HP Inc. and HP Italy S.r.l. (hereinafter "HP") for two different commercial practices relating to HP-branded printers which were considered unfair. For the full text of the measure, see the following link: https://www.agcm.it/dotcmsdoc/allegati-news/PS11144_chiusura.
Firstly, the Authority sanctioned the companies in question for not having correctly informed customers of the installation in their printers of software that allowed printing only with HP toners and cartridges, while preventing the use of non-original refills.
The second conduct that the AGCM considered punishable consisted in the recording - via firmware present on HP printers and without the knowledge of consumers - of data relating to the specific cartridges being used (both original and non-original): this data was used both to create a database useful for formulating commercial strategies and to deny assistance to printers that had used non-original cartridges, thus hindering the exercise of the legal guarantee of conformity.
With reference to the latter conduct, it is interesting to note how this is a case of distorted use of the so-called "Internet of Things". In fact, this expression means "network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment." (https://www.gartner.com/en/information-technology/glossary/internet-of-things).
Although in this case the technology used by HP was limited to the collection of information relating to the use of printers, it is clear that the significant presence of objects capable of recording and transmitting data on our daily behaviour could have disturbing implications. The concern comes not just from the possibility that data collection may occur without our knowledge, but also and especially from the uses and purposes that motivate companies to exploit such data.
Of course, the positive implications that a constant flow of information from objects could provide cannot be ignored, for example, when considering the efficiency and improvement of production chains, and of safety systems for citizens (think of "intelligent traffic lights"). However, cases like the one examined by the AGCM lead us to think about the possibility that these technologies may excessively limit consumers' rights.
The present case therefore teaches a lesson: before purchasing a “smart” object, it is advisable to gather as much information as possible on the type of sensors and detectors that may be incorporated in the device and, above all, to ascertain how the data acquired by it will be used.
Furthermore, it is certainly appropriate to ask within what limits the use of these “smart” devices may support innovation and the improvement of society, as opposed to when – on the other hand – such use can compromise the rights of consumers, understood both as the right to be informed and the basic rights which arise following the purchase of a product (let us think about the limitations on the exercise of the above mentioned legal guarantee).
Caterina Bo joins the editorial board of MediaLaws
We are happy to announce that our Caterina Bo has joined the editorial board of MediaLaws for the year 2020/2021.
The editorial board monitors relevant news in the field of media law and contributes to the publication of comments and analyses for the blog associated with the specialised journal “Rivista di Diritto dei Media”, born from the initiative of Professors Oreste Pollicino (Università Commerciale Luigi Bocconi), Giulio Enea Vigevani (Università degli Studi di Milano-Bicocca), Marco Bassini (Università Commerciale Luigi Bocconi) and of the lawyer Carlo Melzi d'Eril.
#medialaw #insight #news
iPhone: neither water nor AGCM “resistant”
THE CASE
On November 30, 2020 the Italian Competition and Markets Authority (Autorità Garante della Concorrenza e del Mercato, also known by the acronym “AGCM”) imposed a 10 million Euro fine on the companies Apple Distribution International and Apple Italia s.r.l. (hereinafter "Apple") for spreading promotional messages extolling the water resistance of different models of iPhone, while failing to specify that this property held true only under certain circumstances which do not correspond to the normal conditions of use experienced by consumers (a so-called "misleading"[1] commercial practice).
In addition, these promotional messages were contradictory due to the presence of the following disclaimer: "The warranty does not cover damage caused by liquids”. In fact, in the after-sales phase Apple denied repairs of iPhones that were damaged as a result of exposure to water or other liquids, thus hindering the exercise of warranty rights protected by the Consumer Code (a so-called "aggressive"[2] commercial practice).
WHAT CONSEQUENCES FOR THE FINED COMPANY?
The present case allows us to focus attention on the consequences befalling the company that is subject to measures adopted by an authority (in this case, the AGCM).
Firstly, such a measure will have repercussions on the very complex relationship of trust that is established between a business and its consumers.
Indeed, it is well known that a trademark suggests to the consumer, in the very moment of purchase, that the product marked by that sign originates from a certain company.
A psychological and emotional relationship is formed between the consumer and the company which - by its very nature - is easily influenced by external circumstances.
It is precisely in this delicate context that the punitive measure of AGCM against Apple acquires relevance since it makes the basis of this relationship - namely trust and reliability - vulnerable.
It should not be forgotten that the basis of this relationship is inherently mnemonic, meaning that it relies on the (positive) memory that the consumer recalls and preserves with respect to a company, its products, and its services. The punitive measure of the authority precisely affects this memory because, among other things, it aims to warn the consumer of future commercial behaviours.
Indeed, it is easy to imagine how nowadays the news of this measure is spreading quickly through social networks, thus reaching a considerable portion of the company’s clients. On this point, one can understand the AGCM’s decision to force Apple to also publish the measure in the section of its website dedicated to the sale of iPhones, under the heading "Information for consumer protection".
This has greater impact on the company in comparison to an economic penalty, in so far as it damages its image and suggests that the consumer pay more attention when planning on purchasing products from the company affected by the punitive measure.
All of this translates into additional "invisible" costs for the company, i.e. costs that the company will have to bear in the following months in order to rebuild the relationship of trust/reliability with its customers (so-called "reconstructive advertising"). That is without forgetting the corrective activities (and related costs) that, following the warning received from the Authority, the fined company will have to put in place in relation to products already on, or soon to be introduced to, the market.
From a different point of view, the AGCM’s action also conveys a financial penalty.
In the case in question, the penalty imposed, while constituting the maximum amount provided for by current legislation, nevertheless represents a negligible fraction (well below 0.005%) of the total turnover achieved by the Apple group in 2019, amounting to approximately € 231.57 billion.
Therefore, it is legitimate to wonder if this penalty could have a real deterrent effect.
The answer is surely negative. However, attention should be given to the criterion of determination of the amount of the fine.
For example, if the alleged water resistance of the iPhone had been the only reason that brought the consumer to buy an Apple rather than a Samsung product, would a sanction of just 10 million Euros be appropriate?
Evidently not and, in fact, in that case the entire cost of an iPhone (about 1,000 Euros) would need to be taken into account, and the penalty would need to be defined as a percentage of the turnover generated by the (unfair) sales that occurred within the territory of the competent authority.
It follows that a predetermined criterion for the quantification of a penalty - such as that provided by the Consumer Code for unfair practices - is in itself insufficient to assess all the circumstances of the case and consequently to appropriately punish the commercially unsound conduct of a business.
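The proportions discussed above can be made concrete with some back-of-the-envelope arithmetic. In the sketch below, the fine, the group turnover and the unit price are taken from the text, while the number of units sold and the 10% rate are purely hypothetical assumptions:

```python
# Illustrative arithmetic only: the fine and group turnover come from the
# measure discussed above; the sales volume and rate are hypothetical.
fine = 10_000_000           # EUR, penalty imposed by the AGCM
group_turnover = 231.57e9   # EUR, approximate Apple group turnover for 2019

share_of_turnover = fine / group_turnover
# roughly 0.0043% of group turnover - hardly a deterrent

# Alternative criterion suggested in the text: a percentage of the turnover
# generated by the unfair sales in the relevant territory.
unit_price = 1_000          # EUR, approximate price of an iPhone (from the text)
units_sold = 1_000_000      # hypothetical units sold in the territory
penalty_rate = 0.10         # hypothetical 10% rate

territorial_turnover = unit_price * units_sold
turnover_based_penalty = penalty_rate * territorial_turnover
# 100 million EUR under these hypothetical figures - ten times the actual fine
```

Even under these invented figures, a turnover-based criterion produces a penalty an order of magnitude larger than the statutory maximum actually imposed, which is the point the argument above is making.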
FINAL CONSIDERATIONS
The impression is that companies, especially those with dizzying sales, are underestimating the importance of maintaining greater transparency towards the market, perhaps in the mistaken belief that certain practices will go unnoticed. On the contrary, as we have just seen, such practices have negative consequences on the relationship with their customers. Moreover, in fields where competition is fierce, the risk of the client choosing a competitor is always around the corner.
However, punitive discipline may not be strict enough to have the desired deterrent effects. Of course, in the end it is the consumer who is at a disadvantage by not receiving adequate protection.
It is therefore reasonable to ask whether there may be punitive methods other than financial ones that can more adequately protect the consumer’s interests.
For example, consideration may be given to the possibility of introducing different restrictive measures (with a more practical scope, proportionate to the market share held by the company) against businesses responsible for implementing unfair commercial practices.
[1] Unfair practices are defined as misleading when they represent elements and/or features of a product that do not correspond to the truth.
[2] Unfair practices are defined as aggressive when they consist of harassment, coercion or other forms of undue psychological conditioning of consumers.
Facial Recognition and Digital World: technology at the service of convenience?
How many of us unlock our smartphones, make an online payment, authorize the download of an app and/or access a web portal simply by bringing the mobile device closer to our face? How easily do we "tag" our friends in our pictures on the most well-known social networks? And again: how many and what advantages may be obtained from knowing the number of passers-by who stop, even if just for a moment, to look at a billboard?
Statistics show that facial recognition technology is at the service of a digital world that "runs" faster and faster and which forces us to keep up with the times. But at what price for the protection of our personal data?
1. Introduction
Social networks, e-commerce websites, online magazines, home banking and mobile apps: there are millions of digital services available online that we can use through the creation of personal accounts.
When creating profiles, the most widespread trend, especially among young people, is to rely on easy and intuitive passwords (such as a date of birth or first name) which are insecure from an IT point of view and often identical across all the services those people use[1].
In order to counter these bad habits - which only feed the already high number of data breaches - it has now become common to use so-called "facial recognition" technology (in Italian, "riconoscimento facciale"): an IT process that associates the features of a person's face with a digital image and stores that image in an electronic device for the purpose of reusing it not only as a means of identification but also for the authentication, verification and/or profiling of individuals.
But is it really always safe to rely on facial recognition? Does a biometric system always guarantee sufficient protection of our personal data?
2. The most frequent uses of facial recognition technology
It is well known that different biometric techniques lend themselves to being used mainly in the IT context (for example, for authenticating access to a device), and the main high-tech companies tend to invest ever greater amounts of money in this field.
However, facial recognition is also used outside the digital world: take for example the use of biometric systems for the control of physical access to reserved areas, for the opening of gates or for the use of dangerous devices and machinery.
But that's not all. Facial recognition techniques are also capable of serving public authorities and even research. The police in New Delhi have in fact tested facial recognition to identify almost 3,000 missing children; some researchers have used it to detect a rare genetic disease found in subjects from Africa, Asia and Latin America[2].
Faced with such a large number of uses of facial recognition, it is worrying that in our country a specific national legislation on this matter has not yet been enacted. Indeed, agreeing to the detection and collection of the features of our face by a data controller means sharing with the latter a wide range of personal data and exposing ourselves to the processing that the controller decides to make of such data.
Think of a simple "selfie" taken with our smartphone: in this case our device collects our personal image and stores it in memory. Or again, think of passing in front of billboards that detect our presence, the measurement of our body temperature using video and digital thermometers, or the video-recognition boarding systems installed in the largest airports of the world.
3. A quick vademecum for the processing of biometric data
The biometric characteristics of a face that allow for the unique identification of a natural person fall within the notion of "biometric data" provided by European Regulation no. 679/2016 ("GDPR")[3]. In fact, biometric data is defined by the GDPR as data "resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person"[4]. This means that an image or photograph does not always qualify as biometric data: it does so only if it is processed through specific technical means that allow for the unique identification or authentication of a natural person[5].
Biometric data also fall within the category of "special categories of personal data" pursuant to art. 9 of GDPR (referred to by art. 2-septies of Legislative Decree no. 196/2003 - "Privacy Code") and can be processed only when the data controller complies with certain legal obligations. Let's try to list some of these obligations here below:
A. Compliance with the fundamental principles of the processing. In an increasingly digital world, the principles of "privacy by design" (data protection by design) and "privacy by default" (data protection by default) provided for by art. 25 GDPR play a leading role[6]. In order to comply with these principles, starting from the design and definition phases of the processing tools the data controllers who use facial recognition for the processing of personal data must provide adequate security measures to ensure the protection of fundamental rights and freedoms of individuals as well as the compliance with the principles set out in Article 5 of GDPR.
Specifically, attention should be paid to the principle of "data minimization", which requires the data controller to configure a biometric recognition system so as to collect and process only a limited amount of information, excluding the acquisition of additional data that is not necessary for the purpose to be achieved in the specific case (for example, if the purpose of the processing is computer authentication, biometric data should not be processed in such a way as to infer any information of a sensitive nature belonging to the data subject, including, for example, clearly visible skin diseases).
B. Information notice. The data controller must provide the data subjects with a privacy notice in accordance with art. 13 of GDPR which, in a clear and transparent manner, indicates the purposes of the processing, the security measures that have been adopted, the possible centralization of the biometric data that has been collected, and the storage periods of the personal data. In this regard, it is appropriate to point out that, as clarified by the Italian data protection Authority[7], such privacy notice has to be delivered before the so-called "enrolment" phase, which takes place before the creation of a biometric sample[8].
C. Legal basis of the processing. The data controller must ask for the prior consent of the data subjects in order to process their biometric data, or alternatively the data controller should assess the possibility of relying on another legal basis under Article 9 of the GDPR (including, for example, the existence of reasons of public interest in the area of public health, such as the protection against serious cross-border threats to health).
D. DPIA. As provided for by art. 35 of the GDPR and Annex 1 to Provision no. 467/2018 of the Italian data protection Authority, the data controller must assess the impact of the processing of biometric data and specifically assess the risks that such processing may entail for the rights and freedoms of individuals and, at the same time, identify the security measures adopted and to be adopted to address these risks.
E. Appointment of the data processor. Where the data controller engages a third party for the processing of biometric data, the latter must be appointed as "data processor" pursuant to art. 28 of GDPR, following the verification of the third-party's possession of suitable guarantees for the protection of the rights of the data subjects whose biometric data is processed.
F. The implementation of alternative systems. The data controller must offer alternative solutions that do not involve the processing of biometric data, without imposing restrictions or additional costs on the data subject. Such alternative solutions are necessary especially for those who are unable to comply with the constraints imposed by a biometric system (think of a disabled person who cannot reach, with his or her face, the height of a thermoscanner) and in cases where such a device is unavailable due to technical problems (for example, a malfunction).
4. Conclusions
The applicable data protection regulations are not and should never be considered as an obstacle to the development of new technologies applied to the IT and digital industry. On the contrary, compliance with existing legislation should be an incentive for creating practical solutions in a way that respects the confidentiality of our information.
This should also be the case for facial recognition technology, in relation to which it is important to make users aware of the security of the processing of their personal data. Also because generating awareness means gaining trust from consumers, which is the first step for a correct marketing strategy.
Just as Apple has done with the recent update to "iOS 14", which allows owners of the latest mobile devices to know - through different color indicators (green and orange) that appear in the status bar of the device - whether an installed app is using the camera and thus detecting the user's image.
On the other hand, the protection of our personal data must never be sacrificed. To that end, in our opinion, it is essential that our country enact regulations governing this technology. The added value that facial recognition can bring to our economy has long been plain for all to see, but if we do not act at the regulatory level in the short term, we risk facing, in a few years, the uncontrolled development and use of these technical solutions, with the consequence of having to spend time and economic resources solving multiple problems rather than bringing about new advantages.
[1] This is confirmed by an interesting (and worrying, for all of us) study published during the “Safer Internet Day”, according to which more than half of Italian millennials (55%) use the same password to access different services and 19% use extremely simple passwords such as a numbered sequence.
[2] Also noteworthy is the new project "Towards the European Level Exchange of Facial Images" (TELEFI), funded by the European Commission: a study on the benefits that the use of facial recognition can provide to crime investigation in EU Member States and on the exchange of data collected within the "Prüm" system, through which DNA, fingerprints and vehicle registration data are exchanged between EU countries to combat cross-border crime, terrorism and illegal migration.
[3] Classic examples of biometric data, in addition to the characteristics of the face, are: the fingerprints, handwritten signature placement dynamics, the retinal vein pattern, the iris shape, the characteristics of the voice emission.
[4] See, for more details, the Opinion of the Working Party ex art. 29 (now replaced by the “European Data Protection Board”) no. 2/2012 - https://www.pdpjournals.com/docs/87997.pdf.
[5] See Recital no. 51 GDPR.
[6] See “Guidelines no. 4/2019 on Article 25 Data Protection by Design and by Default” - Version 2.0 Adopted on 20 October 2020.
[7] See on this matter “Guidelines on biometric recognition and graphometric signature” issued by the Italian data protection Authority on 12 November 2014.
[8] The term "enrolment" refers to the process through which a subject is accredited to the biometric system through the acquisition of one of their biometric characteristics. Indeed, to enable biometric recognition it is necessary to acquire the biometric characteristic by way of a procedure ensuring that enrolment is performed appropriately, that the link with the captured subject is retained, and that the quality of the resulting biometric sample is safeguarded. Generally, the facial biometric sample is used to extract, via algorithms that are sometimes based on so-called “neural networks”, a given set of features such as the location of the eyes, nose, nostrils, chin and ears, in order to build up a biometric template.
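The template-matching step described in this note can be illustrated with a toy sketch. Real systems extract feature vectors from face images using trained models (often neural networks); the 4-dimensional vectors and the 0.95 threshold below are invented purely for the example:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_template(probe, template, threshold=0.95):
    """Return True if a freshly captured vector matches the enrolled template."""
    return cosine_similarity(probe, template) >= threshold

# Enrolment stores a template; verification compares a new capture against it.
enrolled = [0.12, 0.80, 0.35, 0.44]      # template created at enrolment
same_person = [0.11, 0.82, 0.33, 0.45]   # new capture of the same face
someone_else = [0.90, 0.10, 0.70, 0.05]  # capture of a different face
```

The choice of threshold trades false accepts against false rejects; note also that, consistently with the minimization principle discussed above, only the template - not the original image - needs to be stored.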
Trade secrets: is civil or criminal protection more effective?
By virtue of article 623 of the Criminal Code and article 98 of Legislative Decree no. 30 of 10 February 2005, the legislator has laid the foundations for the legal protection of industrial secrets, specifically with a view to safeguarding all those activities and investments that the holder of the secret keeps confidential insofar as they assure him a competitive advantage within the market.
Firstly, we should start from the well-known definition of “trade secret” found in art. 98 of the Industrial Property Code (henceforth “IPC”) which provides that only information that is secret, economically valuable and subject to strict protection measures may be safeguarded as know-how.
However, it would not be correct to state that civil protection of know-how concerns just information that can be shown to possess the aforesaid three characteristics. Indeed, art. 99 of the IPC, without prejudice to the law on unfair competition, recognizes the existence of trade secrets which, despite not meeting the criteria set out in art. 98 of the IPC, are nevertheless deemed worthy of protection.
In essence, it is possible for an entrepreneur to bring a claim of unfair competition in relation to the unlawful misappropriation of information that is considered objectively confidential, despite the absence of all the appropriate protection measures. However, in this case the entrepreneur must overcome another obstacle, namely the burden of proving that the misappropriated information was understood to be objectively confidential by virtue of its inherent value.
Besides this type of civil protection, national law also offers protection under the criminal law, in particular via art. 623 of the Criminal Code. That provision does not expressly define what is meant by “know-how”, confining itself to referring to “trade secrets or information destined to remain secret, discoveries or scientific inventions”. This is a first difference between the civil and criminal provisions, concerning the object and the requirements of know-how.
In light of the extensive criminal case law on the subject of know-how, and of the views expressed by the majority of academic commentators, it is possible to state that what is protected by article 623 of the Criminal Code is the interest of the holder of the trade secret in avoiding the disclosure of information concerning the methods and procedures that define the industrial structure of a corporation.
Consequently, so-called “know-how” – as defined by the case law of the Supreme Court – must be understood as the body of knowledge and organisational arrangements that, taken together, are necessary for the construction, operation and maintenance of an industrial apparatus. This interpretation, recently affirmed by the Supreme Court in the well-known criminal judgment no. 16975/2020, therefore refers not just to a single technique, custom or item of corporate information, but rather to the entire knowledge of a company, the result of experience accrued as well as research and investments made over the years.
The aforementioned judgment of the Supreme Court reads as follows: “doctrine and case law agree that the protection offered by Article 623 of the Criminal Code goes beyond that provided by the civil law with respect to patentable inventions, and indeed the Supreme Court has repeatedly stated that, for the purposes of protection of industrial secrets under the criminal law, novelty (inherent or external) and originality are not essential requirements of industrial applications, since they are not expressly required by legislative provisions and also because the interest in the protection of confidentiality under the criminal law must not necessarily be inferred from these characteristics applicable to protected information.
This means that, even if the sequence of information – which constitutes a single whole for the implementation of a specific economic phase of the company's activity – is made up of single items of information which are in themselves known, if the entire sequence is not known and is actually treated as secret by the company, then it is in itself worthy of protection. In other words, it is not necessary that every single item of information making up the sequence is "unknown"; rather, it is necessary that the organic whole is the result of an elaboration by the company. Indeed, it is through this process that the final information acquires an additional economic value compared to the individual elements that make up the cognitive sequence. This is precisely what happens in the case of a company that adopts a complex strategy to launch a product on the market: its individual elements are undoubtedly known to operators in the sector, but the whole may have been designed in such a way as to represent something new and original, thus constituting for its creator a real treasure trove from a competitive point of view[1]”.
The aforementioned principles define with greater clarity the contours of the notion of “trade secret” relevant within the field of criminal law, and favour – we believe, rightly so – a wider interpretation that can assure a more meaningful protection of the knowledge and experiences of a company.
It is worth clarifying that the conduct sanctioned by art. 98 of the IPC may also qualify as a crime punished by art. 623 of the Criminal Code. Therefore, a person who believes his trade secret has been violated may commence a civil action for the recovery of the damages suffered pursuant to art. 98 of the IPC, as well as criminal proceedings with a view to obtaining a conviction of the perpetrator of the violation. However, in the civil action the claimant will have to prove that the violated trade secret was not known, was economically valuable and was protected by adequate security measures, whereas in the criminal proceedings the same person will have to prove the dissemination and/or utilization – for his own or another’s gain – of the secret on the part of the person who learned of its existence within the scope of his duties. This is a second difference between the two types of protection, concerning the different regimes applicable to the burden of proof.
Finally, from a procedural standpoint, it should be noted that civil proceedings may be brought equally against the natural person who reveals the trade secret and/or the company that benefits from such revelation, whereas criminal proceedings – in accordance with article 27 of the Constitution, under which criminal liability is personal – must be commenced against the person who reveals the secret and against the person who, holding a position of guarantee within the company, uses to his own advantage the information that makes up the trade secret. An additional difference, this time of a procedural nature, may therefore be found between the legal provisions under discussion.
In summary, the main differences between protection under the civil and criminal laws on know-how concern:
- the object and requirements of know-how;
- a different regime applicable to the burden of proof of know-how;
- procedural aspects (such as a different legal standing).
[1] Supreme Court, Division V (criminal), judgment of 11 February 2020, no. 16975.