
Health Data and Health Data Spaces: Actors, Narratives, and Surveillance

The impact of algorithmic health monitoring on patient responsibility.

7-8 June 2023.

In person:

Lausanne, Switzerland

Alexandre Bretel
is a PhD student at the Ethics & AI Chair of the University of Grenoble Alpes, attached to the Multidisciplinary Institute of Artificial Intelligence (MIAI) and the Institute of Philosophy of Grenoble (IPhiG), under the supervision of Thierry Ménissier. His co-supervisor is Jean-Gabriel Ganascia of the Computer Science Laboratory of Paris 6 (LIP6) at Sorbonne University. His thesis examines the contribution of the philosophy of technology to the ethics of artificial intelligence, and more precisely the notion of responsibility in technological civilization and the society of innovation. He teaches courses on the ethics of artificial intelligence in health within the AI4OneHealth Master’s programme. He has also examined issues of algorithmic surveillance, comparing the surveillance systems of public and private organisations around the world. In parallel, he contributes to conferences and articles on current issues in the ethics of generative artificial intelligence, such as ChatGPT or Midjourney. Email: alexandre.bretel@univ-grenoble-alpes.fr

Summary of the talk

Artificial intelligence increases the capacity to monitor populations, especially in health. Whether in public or private organisations, sensitive data are collected on a large scale, either with the consent of users or in the name of the general interest. The notion of patient responsibility may be changing, with a tendency to make patients more responsible for their treatment while subjecting them to surveillance that may infringe on their privacy. Comparing different population surveillance systems would allow for better adaptation of health data governance [Lechterman 2021]. For example, China’s social credit system (SCS) tends to make individuals accountable for their actions, which justifies its punishment and reward mechanisms [Creemers 2018]. The Aadhaar system in India harvests biometric data and is becoming indispensable in the processing of health data, which may infringe on the privacy of patients. In the European Union, the General Data Protection Regulation (GDPR) treats health data as sensitive information: such data must be anonymised as far as possible, unless processing is necessary for the public interest. The justification of “data altruism” also raises questions about its compatibility with the protection of privacy.

Surveillance can also lead to the ‘automation’ of human behaviour in order to facilitate prediction and extract profit from users of digital services [Zuboff 2019]. This automation of behaviour can call into question the responsibility of patients: how can patients be held responsible for behaviour that has been suggested and shaped by recommendation algorithms? The notion of autonomy is fundamental to justifying responsibility, at least in an accountability approach. One can add the dimension of conditionality, which underlies some conceptions of responsibility [König 2017]. Moreover, patient accountability implies the ability to understand and approve the processing of one’s health data in an informed way. Paradoxically, empowering patients can therefore decrease their autonomy, by making them responsible for situations they may doubt their ability to manage. Another legitimate constraint to take into consideration is the need for innovation. However, the collection of sensitive data is justified, among other things, only where patients consent and the purpose of the health project is explained.

Moreover, novel findings may emerge from the cross-referencing of data and from correlations that cannot be anticipated beforehand. There is therefore a tension between protecting the patient’s privacy, autonomy and capacity for responsible decision-making, and the need to sustain research and development, particularly in Europe. This notion of unpredictability is also fundamental in artificial intelligence. Together with the notion of complexity, it calls for a redefinition of responsibility, one distributed among the various actors in the health system: patients, physicians, companies, research laboratories and states. Health governance must therefore adapt the notion of responsibility to the technical, political, legal and ethical constraints of the development of artificial intelligence. The problem is thus to study how the notion of patient responsibility evolves with the use of algorithmic health monitoring. The data for this study will be drawn from the literature on surveillance, health and accountability studies. Methodologically, it is through the comparison of different patient monitoring systems that an overall view of health data governance can be obtained. In terms of results, the aim will be to characterise the responsibility of patients in the processing of their health data.

Going further, it will be necessary to determine what degree of responsibility should be granted to patients in the processing of their health data: for example, how to ensure that patients can remain accountable within an innovation regime that must accommodate uncertainty about the purposes of health data, which would also imply a change to current EU regulation. Moreover, the collection, processing, storage and transfer of health data presuppose a level of privacy protection, autonomy and patient responsibility that is sufficiently adaptable to technological change. It is therefore necessary to design a sufficiently resilient and adaptable framework of responsibility. What is to be feared is a transformation of patient responsibility towards a personalisation of treatment based above all on reputation, as established through digital surveillance [Xin 2018]. Treatment would then be granted according to the patient’s compliance with the principles established by the surveillance system, threatening the protection of individual rights.


Bibliography:

  • Creemers, R., 2018. China’s Social Credit System: An Evolving Practice of Control. SSRN Scholarly Paper ID 3175792. Rochester, NY: Social Science Research Network.
  • König, P.-D., 2017. The place of conditionality and individual responsibility in a data-driven economy. Big Data & Society.
  • Lechterman, T. M., 2021. The concept of accountability in AI ethics and governance. In J. Bullock, Y. C. Chen, J. Himmelreich, V. Hudson, A. Korinek, M. Young, and B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press.
  • Xin, D., 2018. Toward a Reputation State: The Social Credit System Project of China. Draft, 10 June 2018.
  • Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.