We are proud to announce that Scailable has been acquired by Network Optix.
For the announcement on the Network Optix website, please see here.

GDPR / AVG considerations for vision-based edge AI.

This page provides details regarding the use of data collected by cameras in the vision-based edge AI applications provided by Scailable BV. It details which data we collect, process, and store. It also outlines which agreements users of our services enter into and how our vision-based edge AI solutions relate to the GDPR (or, in Dutch, the AVG).

Date: 13-04-2022.

Table of contents:

Definitions:

Terms of Use:

FAQ: Compliance with privacy laws:

Definitions:

Here we explain the key terms used in our vision-based edge AI solutions.

What do you mean by “vision-based”?

With “vision-based” we mean that the raw data that serves as input to our edge AI solution(s) is captured by a digital camera of some kind. Note that the raw data represents photons activating elements of a light-sensitive grid inside the camera, which are subsequently turned into a digital image by the chip inside the camera. A digital image is made up of a matrix of color codes (often [R]ed, [G]reen, and [B]lue for each light-sensitive “pixel” in the camera) that represents the original pattern of electrical activations captured by the camera. A video made by a camera is simply a sequence of such images over time (seconds, minutes, or other periods of use of the camera). So, to put it in less technical terms: our AI operates on images (data) captured by means of a camera. This is what we mean by “vision-based”.
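
To make this concrete, the sketch below shows, in Python, how a single camera frame can be represented as such a matrix (or tensor) of RGB values. The frame size and pixel values are purely illustrative and the snippet is not Scailable code; it only depicts the data structure described above.

    import numpy as np

    # A single camera frame is a height x width x 3 grid of color codes.
    # Here we build an illustrative 1080p frame filled with zeros (a black image).
    height, width = 1080, 1920
    frame = np.zeros((height, width, 3), dtype=np.uint8)

    # Each "pixel" holds three values: [R]ed, [G]reen, and [B]lue, each 0-255.
    frame[0, 0] = [255, 0, 0]  # set the top-left pixel to pure red

    # A video is simply a sequence of such frames over time, e.g. 25 frames
    # for one second of footage at 25 frames per second:
    video = np.stack([frame] * 25)
    print(video.shape)  # (25, 1080, 1920, 3)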

Note that such a matrix (or “tensor”) of RGB color codes enables a user of the images coming out of a camera to reconstruct the captured images on a screen. As such, this tensor could potentially, when stored, represent information by which an individual can be identified by humans or computers. That is, a (professional) user might use the matrix to reconstruct the images, look at the reconstructed image, and identify a person. Or, a computer might link the reconstructed images to a known database of individuals and thereby create personally identifiable information. Our terms of use – see below – prohibit such reconstruction.

What do you mean by “edge”?

With the term edge in the phrase “edge AI,” we mean that an AI model (see below) is executed on a so-called edge computing device. When a camera collects data, such data are generally passed on to networking equipment (for example, a gateway), processed by local computers (such as IPCs), and finally sent to a central cloud. To put this in less technical and somewhat loosely defined terms: by edge we mean any device that is close to the source (and is thus not a remote cloud-based server). Therefore, when data is processed “on the edge,” it is effectively processed locally, and so there is no transfer of data to more remote locations that are potentially outside the control of our user. See the FAQ below, where we explain the relevance of this.

What do you mean by “AI”?

AI, an acronym for Artificial Intelligence, is an overused and often poorly defined term. Our usage of the term AI includes what one would often call “machine learning” or “traditional vision.” More specifically, we define the term AI as follows: an AI model is a set of (mathematical/computational) operations that turns distinct input data (such as a digital image, i.e., the tensor with color codes created by a camera as described above) into distinct output data, for example, an integer (number) depicting how many people are in the image.

If engineers explicitly program this set of operations, it is often called a “machine vision” model. If it is trained on data, it is generally called a “machine learning model.” And, if the set of operations constitutes a specific type of model—loosely identified as a neural network—the set of operations is usually called AI.

One very common (but not always true) property of AI is that the input data cannot be recreated from the output data. This is often true even if we know the AI model that was used. Thus, once an image (the input data) has effectively been transformed into the output data (e.g., a number), it is impossible to restore the input image. For AI models with this property, it is possible, and even common, that the output data does not contain any information that is personally identifiable by humans or computers. In these cases, a human cannot look at the output data and identify a person or any identifiable properties of a person, nor can a computer link the output data to (e.g.,) a known database of individuals and as such create personally identifiable information.
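
As an illustration of this property, the sketch below shows a hypothetical AI model of the kind defined above: an image tensor goes in and a single integer comes out. The function name and its (empty) detection logic are illustrative placeholders, not one of our library models.

    import numpy as np

    def count_people(frame):
        # A hypothetical AI model: a set of operations that turns an input
        # image (a height x width x 3 tensor of RGB values) into a single
        # integer. The body is a placeholder for a real detector; what
        # matters here is the shape of the input and of the output.
        detections = []  # a real model would compute detections here
        return len(detections)

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # millions of values in
    people = count_people(frame)                       # exactly one value out
    print(people)  # 0

The output is a single integer: it contains no pixels, so the original image cannot be reconstructed from it, and neither a human nor a computer can identify an individual from it.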

Note: Our definition of AI is, in some sense, much broader than common usage: for example, we regard the “set of operations” that defines how two numbers (the input data) can be added to produce a single number (i.e., the sum, constituting the output data) as an AI model. However, we do exclude a lot of common uses of the term: we do not relate our use of the term AI to the generation (or “training”) of the weights of a set of operations, nor do we use the term to loosely refer to any type of modern data analysis.

What is the Scailable AI manager?

The Scailable AI manager, a software application installed on an edge device, makes it possible for this edge device to execute AI models locally. The AI manager ensures that it is easy to configure cameras, select and manage AI models, and configure the target destination of the output data. The Scailable platform allows us to manage models running on any device with the AI manager installed.

Terms of Use (“ToU”):

The use of Scailable’s vision-based edge AI solutions is subject to terms of use, and we expect any user of our technology to enter into an agreement with us to ensure compliance with privacy laws such as the GDPR.

Can I access the camera stream / raw images?

No. As described above, raw images or a raw camera stream can generally be used to retrieve (or reconstruct) personally identifiable information. Therefore, the AI manager is designed to make it virtually impossible to access the live camera stream or images through it. Our ToU prohibit such reconstruction or any use of the data to identify individuals.

Note: Gaining direct access to the raw images using non-Scailable tools (i.e., by accessing the camera’s images using other means) potentially has far-reaching legal consequences, such as a violation of the GDPR. The legal implications of accessing the camera streams/images outside the Scailable AI manager are your own responsibility.

Can I / should I store the camera stream / raw images in the cloud?

No. We make it as hard as possible to access the images. We do not send the images to the cloud ourselves, nor do we program our technology to do so. We process the input data (the images/video stream) with the AI model on the edge device and (potentially, depending on the configuration) send only the resulting output data to the cloud. Our ToU prohibit the user from accessing and transferring said images, or from using them outside the scope of the license granted to users.
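
To illustrate what this flow looks like in principle, the sketch below processes a frame locally and forwards only the resulting output data. It is a generic Python illustration under our own assumptions, not the Scailable implementation; the model body and the endpoint URL are placeholders.

    import json
    import urllib.request
    import numpy as np

    def run_model(frame):
        # Hypothetical AI model executed locally on the edge device: the
        # height x width x 3 frame goes in, a small piece of output data
        # comes out.
        detections = []  # a real model would detect people here
        return {"people_count": len(detections)}

    def send_output(output, endpoint):
        # Only the (non-identifiable) output data is sent onward; the raw
        # frame itself never leaves the device.
        request = urllib.request.Request(
            endpoint,
            data=json.dumps(output).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            urllib.request.urlopen(request, timeout=5)
        except OSError:
            pass  # the placeholder endpoint may not accept this request

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # raw image, kept local
    output = run_model(frame)
    del frame  # the raw image is discarded once the output data exists
    send_output(output, "https://example.com/output")  # placeholder endpoint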

Will any AI model provide output data that is not personally identifiable?

No, this is not the case. The Scailable platform enables users to upload any AI model (as defined above), so-called user-generated models. A user can, for example, upload a model whose output data is exactly the same as its input data.

This example illustrates that it is possible to construct AI models whose output data contains information that enables the user to reconstruct the captured image (or important properties thereof).

Users who utilize the Scailable platform to upload their own custom AI models should be aware that privacy laws like the GDPR contain strict rules governing the use of personal data, such as the need for (explicit) consent from each individual whose data are processed. Scailable cannot assume any responsibility in respect of user-generated models. We can, however, ensure compliance with privacy laws when you are using the models available to you in the model library that is supplied with the Scailable AI manager.

Do all of your library models produce output data that are not personally identifiable?

Yes. We ensure that models made available to you in the AI manager (such as ANPR, people detection, line-crossing, emotion recognition) produce output data that cannot be used to reconstruct the original input data.

For the models made available by us in our library, the output data does not contain any personally identifiable information. Thus, for this set of models, a human cannot look at the output data and identify a person or detect identifiable properties of a person. Nor can a computer link the output data to (e.g.,) a known database of individuals and thereby create personally identifiable information.

FAQ: Compliance with Privacy Laws

Here we discuss frequently asked questions regarding the use of our edge AI solutions in compliance with privacy laws.

Are the camera images sent to the cloud?

No. Please see the Terms of Use above.

Are the camera images stored in the cloud?

No, they are not sent to the cloud, let alone stored in the cloud.

Are the camera images stored on the edge?

No. Our technology is designed – by default – to purge/delete the images after the output data from the AI model has been generated.

Are the Scailable vision-based edge AI solutions GDPR (or AVG) compliant?

That is the “million-dollar question,” and, perhaps disappointingly to some, the answer is not a clear “yes” or “no.” Instead, the answer is that, as long as a user complies with our Terms of Use referred to above, the GDPR does not apply to the models in our library. Edge processing and the properties of our library AI models ensure that the resulting output of our models does not allow reconstruction of the input data. Since we do not process or store any personally identifiable information, our vision-based edge AI solutions fall outside the scope of the GDPR.

Are the Scailable vision-based edge AI solutions in compliance with privacy laws in every country?

The protection of personal data and privacy varies from country to country and should be assessed by each user prudently.

Note: Be aware that the very moment you link the output from the AI manager to other data sources or access images by going around the AI manager, the above conclusion does not hold. For example, you are likely violating the GDPR when you merge our output data with images taken through an auxiliary process and store these permanently in a central cloud.

Note: This page can be found by scanning the QR code below. Feel free to add the QR code to your privacy notice if you are using our solutions.