Consent Request

Olha, would you be so kind and introduce yourself and your project?

My name is Olha Drozd. I am a project-related research associate at the Institute of Management Information Systems, working on SPECIAL (Scalable Policy-aware Linked Data Architecture For Privacy, Transparency and Compliance), a Research and Innovation Action funded under the H2020-ICT-2016-1 Big Data PPP call. At the moment, together with my colleagues, I am working on the development of the user interface (UI) for the consent request that will be integrated into the privacy dashboard.

Would you please explain the privacy dashboard?

With the help of the privacy dashboard, users would be able to access information about what data is or was processed about them, what the purpose of the processing is or was, and which data processors are or were involved. Users would also be able to request correction and erasure of their data, review the consent they gave for the data processing, and withdraw that consent.

We have two ideas for how this dashboard could be implemented:

  1. Every company could have its own privacy dashboard installed on its own infrastructure.
  2. The privacy dashboard could act as a trusted intermediary between companies and users. In that case, several different companies would be represented in a single dashboard.

As I mentioned at the beginning, I am concentrating on the development of different versions of the UI for the consent request that could be integrated into the dashboard. Our plan is to test multiple UIs in user studies to identify the most suitable UIs for different contexts. At the moment we are planning to develop two UIs for the consent request.

Olha, would you please tell us more about the consent request?

Before a person starts using an online service, he/she should be informed about:

  • What data is processed by the service?
  • How is the data processed?
  • What is the purpose for the processing?
  • Is the data shared and with whom?
  • How is the data stored?

All this information is presented in a consent request, because the user not only has to be informed but also has to give his/her consent to the processing of his/her data. We are aiming to create a dynamic consent request, so that users have flexibility and more control over giving consent compared to the all-or-nothing approach companies use today. For example, if a person wants to use a wearable health-tracking device (e.g. a Fitbit watch) but does not want an overview of all-day heart-rate statistics, just the activity heart rate, then he/she could allow the collection and processing of the data only for the purpose of displaying the activity heart rate. It should also be possible to show the user only the information relevant to the specific situation. To ensure that the user is not overburdened with consent requests, we are planning to group similar requests into categories and ask for consent once per category. Additionally, it should be possible to adjust or revoke the consent at any time.
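The granular, revocable consent described above can be sketched as a small data model. This is a hypothetical illustration, not the SPECIAL implementation; the category and purpose names are assumptions taken from the Fitbit example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical sketch (not the SPECIAL implementation): each consent grant
# ties one data category to one processing purpose and can be revoked later.
@dataclass
class ConsentGrant:
    data_category: str   # e.g. "heart_rate" (assumed name)
    purpose: str         # e.g. "display_activity_heart_rate" (assumed name)
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None

class ConsentStore:
    """Holds a user's grants; processing is allowed only under an active grant."""

    def __init__(self) -> None:
        self.grants: List[ConsentGrant] = []

    def grant(self, data_category: str, purpose: str) -> None:
        self.grants.append(ConsentGrant(data_category, purpose))

    def revoke(self, data_category: str, purpose: str) -> None:
        # Mark the grant as withdrawn instead of deleting it, so the record
        # of when consent was given and revoked is preserved.
        for g in self.grants:
            if g.active and g.data_category == data_category and g.purpose == purpose:
                g.revoked_at = datetime.now(timezone.utc)

    def allowed(self, data_category: str, purpose: str) -> bool:
        return any(g.active and g.data_category == data_category
                   and g.purpose == purpose for g in self.grants)
```

With a model like this, allowing heart-rate data only for the activity display leaves processing for all-day statistics forbidden, and revoking the grant withdraws the consent at any time, as described above.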

At the moment, the main issue for the development of the consent request is the amount of information that has to be presented to and digested by the user. The General Data Protection Regulation (GDPR) requires that users be presented with every detail: not just the company or the department that processes the information, but the ability to drill down through the info. In the graph below you can see an overview of the data that should be shown to users in our small exemplary use case, in which a person uses a wearable health-tracking device [1]. You can see how much information users have to digest even in this small use case. For some people this detailed information could be interesting and useful, but the general public typically wants to use the device or service immediately, not spend an hour reading and selecting which categories of data may be processed for which purposes. In our user studies we want to test what happens if we give users all this information.

Olha, you have mentioned that you were planning to develop two UIs for the consent request. Would you explain the differences between those two?

One is more technical and innovative (in graph form), and the other is more traditional (with tabs, like in a browser). We assume that the more traditional UI might work well for older adults and for people who are less flexible in adapting to change, new styles and new UIs, while the more innovative one could be more popular with young people.

[1] Bonatti P., Kirrane S., Polleres A., Wenning R. (2017) Transparent Personal Data Processing: The Road Ahead. In: Tonetta S., Schoitsch E., Bitsch F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2017. Lecture Notes in Computer Science, vol 10489. Springer, Cham

Council of Europe Study on Algorithms and Human Rights published

After two years of negotiations in the Council of Europe Committee of experts on Internet Intermediaries (MSI-NET), the final documents of the expert group have been published. While the negotiations among the experts and governmental representatives in the group were not without difficulty, the final texts are relatively strong for what are still negotiated texts. Of particular interest for experts working on the regulation of algorithms and automation is the Study on Algorithms and Human Rights, which was drafted by Dr. Ben Wagner, one of the members of the Lab and the Rapporteur of the Study.

The study takes a broad approach to the human rights implications of algorithms, looking not just at privacy but also at freedom of assembly and expression and the right to a fair trial in the context of the European Convention on Human Rights. While the suggested regulatory responses focus on transparency and accountability, they also acknowledge that additional standard-setting measures and ethical frameworks will be required to ensure that human rights are safeguarded in automated technical systems. Here, existing projects at the Lab such as P7000 or SPECIAL can provide an important contribution to the debate and help ensure that not just privacy but all human rights are safeguarded online.

The final version of the study is available to download here.

“Why RFID Chips are Like a Dog Collar” Interview with Sushant Agarwal, Privacy and Sustainable Computing Lab


Sushant, would you please introduce yourself and tell us about your scientific work and background?


Sushant: My name is Sushant Agarwal. I did my Bachelor's and Master's in Aerospace Engineering at the Indian Institute of Technology Bombay in India. During this time, I did an internship at the University of Cambridge, where I worked on a project related to RFID. There I had to carry several RFID-enabled cards: key cards to unlock the university doors, the college main entrance and my dorm room, and also an ID card for the library. I used to wonder why they didn't just create one RFID chip that would work for everything. Later, I started my thesis, which dealt with machine learning. This was the time I started thinking about privacy and how centralisation is not always a good approach. After my studies, I got the opportunity here to work on a project that combined both privacy and RFID.

Would you tell us a little more about this project?

The EU project SERAMIS (Sensor-Enabled Real-World Awareness for Management Information Systems) dealt with the use of RFID in fashion retail. My work focused on the privacy aspects. If you look at clothes you buy from big fashion retailers, along with the price tags there can be RFID chips as well, which are slowly replacing the security tags or the fancy colour bombs used before.

Would you also tell us about the tool you created at the Lab called “PriWUcy”?

This was part of the SERAMIS project as well. We had to develop a tool for Privacy Impact Assessments. When we started developing this tool, the data protection regulatory landscape changed with the arrival of the General Data Protection Regulation (GDPR). Because of this regulatory change, a lot of things in our Privacy Impact Assessment tool had to be adjusted. That was when we thought about a sustainable solution and came up with the idea of modelling the legislation in a machine-readable way, so that the tool can easily be updated when the interpretation of the GDPR changes.
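The idea of modelling legislation in a machine-readable way can be illustrated with a deliberately simplified sketch. The article numbers and questions below are assumptions for illustration, not PriWUcy's actual rule set: the point is that assessment questions become data, so a change in the interpretation of the GDPR means editing the rules, not rewriting the program.

```python
# Simplified, hypothetical sketch: GDPR-derived checklist items are kept as
# data, so a Privacy Impact Assessment tool updates when the rule set is
# edited. The articles and wording here are illustrative assumptions.
GDPR_RULES = {
    "art_6": {
        "question": "Is there a legal basis for the processing?",
        "applies_if": lambda ctx: ctx["processes_personal_data"],
    },
    "art_35": {
        "question": "Has a Data Protection Impact Assessment been carried out?",
        "applies_if": lambda ctx: ctx["high_risk_processing"],
    },
}

def applicable_questions(ctx: dict) -> list:
    """Return the checklist questions relevant to one processing context."""
    return [r["question"] for r in GDPR_RULES.values() if r["applies_if"](ctx)]
```

The actual SPECIAL/PriWUcy work models policies far more formally (as linked data), but the maintenance benefit is the same: updating the encoded rules updates every assessment built on them.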


Sushant, what is privacy for you?

For me personally, privacy is all about control. I want to have ultimate control over my data. At the very least, I should be able to say who gets my data and what kind of data they have access to. So it shouldn't be that you log in to Facebook in one of your tabs and Facebook then tracks you across all the other websites you browse. That is something I really hate. I try to use online services where I can have the maximum amount of control possible.


Would you give us an example for how you make use of your knowledge on privacy in your daily life?


Yes, for me the concept of smart homes is something very interesting. To try it out on a small scale, I started with some smart bulbs that I bought from China to experiment with. These bulbs work over Wi-Fi: a switch in my apartment communicated first with a server in China, and then that server controlled my light. One could say the server in China was a middleman in the process of switching on my lights. I didn't really like this design, so I looked at some open source alternatives where I had better control and could avoid the middleman.
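The difference between the two designs can be sketched roughly as follows. The bulb's address and the `/on` and `/off` endpoints are assumptions for illustration; real open-source firmwares expose comparable local HTTP or MQTT APIs.

```python
from urllib import request

# Assumed local address of the bulb on the home LAN (illustrative only).
BULB_ADDR = "http://192.168.1.50"

def command_url(state: str) -> str:
    """Build the request URL for a hypothetical local /on or /off endpoint."""
    return f"{BULB_ADDR}/{state}"

def switch(state: str) -> int:
    """Cloud-free path: send the command straight to the bulb on the LAN.

    The vendor-cloud design instead relays the same command through a remote
    server, which then contacts the bulb - the 'middleman' described above.
    """
    with request.urlopen(command_url(state), timeout=2) as resp:
        return resp.status
```

Keeping the command on the local network means the light still works if the vendor's server goes away, and no third party learns when you are home and switching lights.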


A GlobArt Workshop at WU’s Privacy & Sustainable Computing Lab November 10, 2017

The Privacy & Sustainable Computing Lab, together with GlobArt and Capital 300, hosted a round table discussion about artificial intelligence (AI), ubiquitous computing and the question of ethics on 9 November 2017 in Vienna. We were happy to have Jeffrey Sachs as our distinguished guest at this intense four-hour workshop on the future of AI. Other distinguished speakers were Bernhard Nessler from Johannes Kepler University Linz, who introduced the limits of AI, and Christopher Coenen, who unveiled the philosophical and historical roots of our desire to create artificial life.

The session and its speakers were structured around three main questions:

  • What can general AI really do from a technical perspective?
  • What are the historical and philosophical roots of our desire for artificial life?
  • What sorts of ethical frameworks should AI adhere to?

The speakers argued that there is a need to differentiate between AI (artificial intelligence) and AGI (artificial general intelligence): AI (like IBM Watson) needs quality training and quality data, as well as a lot of hardware and energy. In contrast, AGI would be able to work with unstructured data and could have a better energy consumption rate. The other advantage of AGI is that it could react to unforeseen situations and could be applied more easily to various areas. One point stressed during the debate was that a lot of the terminology used in the scientific field of AI and AGI is borrowed from neuroscience and from human intelligence proper. Since machines, as the experts confirmed, do not live up to this promise, using human-related terminology could mislead the public and encourage overly confident promises by industry.

It was discussed whether the term “processing” might be more suitable than “thinking”, at least given the current state of the art.

Another source of this confusion could be science fiction (Isaac Asimov, Neal Stephenson …) or movies like “Her” or “Ex Machina”; here we should rather differentiate between the terms AGI and artificial life.

What are the socio-cultural, historical and philosophical roots of our desire to create a general artificial intelligence and to suffuse our environments with IT systems?

“The World, the Flesh & the Devil”, a book published in 1929 by J. Desmond Bernal, was named as an inspiration for the concept of the “mechanical man”. The book in turn provided an excellent introduction to the debate about transhumanism, which often goes hand in hand with the discussion about AI. Some prominent figures in technology, such as Ray Kurzweil or Elon Musk, frequently communicate transhumanist ideas or philosophies.

What ethical guidance can we use as investors, researchers and developers, or embed in technical standards, to ensure that AI does not get out of control? Concerning this question, there was general agreement on the need for basic standards or even regulation of upcoming AI technology. As one example of such standards, the IEEE is working on its Ethically Aligned Design guidelines under the motto “Advancing Technology for Humanity”. Particular hope is put into P7000 (Model Process for Addressing Ethical Concerns During System Design), which sets out to describe value-based engineering. Value-based engineering is an approach that aims to maximize value potential and minimize value harms for human beings in IT-rich environments; its ultimate goal is human well-being.

In conclusion, the event provided an excellent basis for further discussions about AI and its ethics for experts and students alike.

Speakers at the Roundtable:

  • Christopher Coenen from the Institute for System Analysis and Technology Impact Assessments in Karlsruhe
  • Peter Hampson from the University of Oxford
  • Johannes Hoff from the University of London
  • Peter Lasinger from Capital 300
  • Konstantin Oppel from Xephor Solutions
  • Michael Platzer from Mostly AI
  • Bill Price, Resident Economist
  • Jeffrey Sachs from Columbia University
  • Robert Trappl from the Austrian Research Institute for AI
  • Georg Franck, Professor Emeritus for Spatial Information Systems
  • Bernhard Nessler from Johannes Kepler University
  • Sarah Spiekermann – Founder of the Privacy & Sustainable Computing Lab and Professor at WU Vienna.



Welcome to the new Privacy and Sustainable Computing Lab blog!

We look forward to having further blog posts listed here in the next few weeks, giving visitors to this website a better insight on what we’re doing. If you have questions about the Lab please don’t hesitate to contact: