
The Responsible AI Forum 2023

An interdisciplinary approach to adapt the notion of accountability to AI systems: a study of the impact of the Montreal Declaration's notion of responsibility enriched by the philosophy of technology.

Munich, 13-15 September 2023

Abstract

Research on the governance and ethics of artificial intelligence takes a particular interest in the notion of responsibility. This notion can be subdivided into several related concepts, such as answerability, accountability, reliability and attributability. The legal and juridical contours of these concepts need to be identified so that they can be effectively implemented within the framework of AI governance.
Moreover, the philosophy of technology has also addressed the notion of responsibility, notably through philosophers such as Hannah Arendt, Hans Jonas and Günther Anders. The interest of this corpus lies in the fact that these authors knew one another, which allowed a mutual development of their work and a methodological consistency. It would therefore be relevant to draw on their notion of responsibility and apply it to artificial intelligence systems. More specifically, the Montreal Declaration on Responsible AI states that the development and use of AI must not contribute to the disempowerment of human beings when a decision is made.
The issues addressed by the Montreal Declaration include the management of smart cities, the professional world, and connected objects, as well as the education, justice, predictive policing, and health sectors. A series of proposals lists solutions to avoid disempowerment, such as assigning responsibility to humans only, or reserving to humans alone any decision that affects a person's quality of life or reputation, or that involves a risk of death. Environmental liability issues can also be addressed in this framework. In formulating the ethical challenges for 2025, the participants in the Declaration's co-construction process asked in particular who is to be held responsible for the changes brought about by AI, and how profitability can be aligned with responsibility. Unfortunately, major scandals related to the deployment of AI could undermine public confidence in this technology; this risk should be anticipated in order to mitigate it.

References:
– David Shoemaker, "Attributability, Answerability, and Accountability: Toward a Wider Theory of Moral Responsibility", Ethics, Volume 121, Number 3, April 2011.
– Günther Anders, L'Obsolescence de l'homme, Encyclopédie des Nuisances, Paris, 2002 [1957].
– Hannah Arendt, Responsabilité et jugement, Payot, Paris, 2005.
– Hans Jonas, Le Principe responsabilité : une éthique pour la civilisation technologique, Flammarion, Paris, 2013 [1979].
– Richard Mulgan, "'Accountability': An Ever-Expanding Concept?", Public Administration, Volume 78, Issue 3, 17 December 2002.
– Reuben Binns, "Algorithmic Accountability and Public Reason", Philosophy & Technology, Volume 31, Pages 543-556, May 2017.
– Theodore M. Lechterman, "The Concept of Accountability in AI Ethics and Governance", in Justin Bullock, Y. C. Chen, Johannes Himmelreich, V. Hudson, M. Korinek, M. Young & B. Zhang (eds.), The Oxford Handbook of AI Governance, Oxford University Press, Oxford, December 2021.
– University of Montreal, "The Montreal Declaration for a Responsible Development of Artificial Intelligence", November 2017.
– University of Montreal, "Assessment of the citizens' deliberations", June 2018.