Disclosing data protection practices to data subjects: how do we achieve useful transparency?

“So, what are you doing exactly in academia?” That’s a common and recurring question I receive when I tell people that I’m currently a PhD researcher. The short version: I’m looking at how information about data processing activities serves as a safeguard to mitigate or prevent harm to persons in the platform economy, using open banking as a case study. Perhaps that deserves a longer explanation…

Data protection regulations, such as the GDPR, impose several duties and obligations on those who process personal data, including the disclosure of information about these activities. This information should allow individuals to decide whether they want their data to be processed, but also to act through their data rights, such as the right to be forgotten, when the data were not obtained directly from them. As such, it serves as a safeguard against actual or potential harms that people might suffer in data processing activities. Without it, individuals are left in the dark about what is happening with their information, regardless of whether their consent was collected. How is this obligation complied with in practice?

We have all seen a privacy notice at some point in our lives: that boring, complex, and long document that supposedly explains how our data will be used. For many years, privacy notices were accessed through teeny-tiny links at the bottom of a webpage or via a mandatory checkbox when signing up for a digital product or service. The mobile app revolution has not changed much in this respect; one could even argue that things got worse due to the space limitations of mobile devices. It is no surprise that a great deal of academic research has reached the conclusion that privacy notices are useless in their current form. In response, new techniques, drawing heavily on the field of design, have been created, but it is still unclear how and where to use them.

To make things a bit more complicated, we are experiencing a new form of data-driven economic development: the platform economy. Platforms have made relations, and the consequent assignment of roles, more complex from a legal perspective. Take, for example, a purchase on Amazon: do we clearly know who is actually making the sale, and how? Other areas of the law, in particular consumer protection regulations, struggle with the same issue when assigning rights and obligations.

So, the obvious question that follows is this: how do we solve this? In other words, how do we make information about certain data practices useful for individuals? The answer to this question would result in a methodology that other stakeholders, mainly data controllers, can rely upon when drafting, but also when reviewing, disclosed information about data processing practices. The principle of proportionality could serve as a guiding star for such a methodology, since the GDPR, like many other data protection regulations, imposes stricter duties for certain data processing activities; in this respect, it would make sense to have stronger information safeguards where a data protection impact assessment was conducted. For testing purposes, it is necessary to pick a case study in which to apply this. Given my background, I decided to go with the matter of platformization in the financial services industry: open banking.

My research comes at a time when the platform economy business model is consolidating across different industries, but also in the political arena, with the European Commission’s upcoming Digital Services Act and Digital Markets Act proposals, and even its Digital Finance Strategy. This new business model relies heavily on data-driven technologies, which makes reflecting on the role of data protection and its safeguards for protecting people crucial.
