There is a growing awareness worldwide that the personal data of users of online information services are not adequately protected. Companies harvest and utilise personal data on a massive scale, including browser behaviour, social media messages, and content on personal websites. Concerns about the collection, use and leakage of personal data have become more pronounced since the rise of “big data”, i.e. very large data sets gathered from mobile devices, bio-sensors, cameras, GPS trackers and social media. These data are analysed using increasingly sophisticated machine learning techniques to deliver new insights and predictions about an individual’s behaviour, and to feed increasingly personalised AI-driven interactive digital experiences.

The rate of technological innovation, now accelerated by big data and machine learning techniques, invariably outpaces public policy debate and the development of new regulation for the protection of personal data. This comes as the scale and social impact of data analysis are rapidly increasing. Individuals and groups are poorly placed to assess the impact of the use of their personal information, especially when that use also delivers attractive personalised digital services. At the same time, tech companies, especially SMEs, lack the knowledge and expertise needed to address the complex legal and ethical implications of collecting and analysing personal data. ESRs will develop new ways of empowering users of digital services to understand the risks they take with their rights and interests when they go online, and will offer new ways to enable developers to incorporate privacy, data protection and broader ethical considerations into the development of digital services, even as they face growing commercial and competitive pressure to exploit personal data.