
International conference
organized by the Ethics & AI Chair, MIAI,
Université Grenoble Alpes

“Artificial Intelligence and Transformations of Work”

November 20-22, 2023

In person:

Université Grenoble Alpes,
339 avenue Centrale, 38400 Saint-Martin-d’Hères,
Maison de la Création et de l’Innovation.


The Mechanical Turk revisited in the digital age: Computational Worker, by Seddik Benlaksira

A French version of the conference argument is also available.

Conference argument





History of the Machine Replacement Debate

First, how does the emergence of AI change the pre-existing debate about the effects of technology on human work? Previous technologies provoked fierce criticism: consider the opposition to mechanized looms among the 19th-century Luddites, or the fear of automation in 1960s industrial America (Autor, 2015). AI could simply be the latest technology to raise these concerns about working conditions. Some might argue that AI merely perpetuates the pre-existing exploitation of workers (Casilli, 2019), adding more efficient forms of control via algorithmic surveillance and management (Boltanski & Chiapello, 1999). Others show how AI systems actually end up diluted into workers’ daily lives and reduced to their mere functionality (Condé & Ferguson, 2023).

Yet AI systems are capable of rendering decisions with or without human input (Shrestha, 2019). In addition, while previous technologies were means by which humans communicated with other humans, AI systems are themselves involved in communication with humans (Guzman, 2020). Finally, we see AI systems that require human cooperation to function, creating interconnections in which the AI learns from and adapts to human behavior and labor (Cristianini et al., 2023). In many ways, AI seems to function differently from previous technologies, perhaps definitively altering working conditions.

Fairness and Justice at Work

AI at work also raises questions related to fairness and justice (van de Poel et al., 2022, 15). In the collective imagination, AI systems are often assumed to be objective because they have no human emotions or desires (Guzman, 2020, 48). This belief overlooks the subjectivity built into AI systems, where even numbers “represent complex social reality” (Boucher, 2020, 23). Data, while not inherently discriminatory, can fit into discriminatory patterns, undermining the possibility of fairness in machine learning. Moreover, data analysts themselves fail to agree on common fairness metrics, especially since such metrics sometimes contradict one another (Kleinberg et al., 2016). The difficulties of achieving fairness in AI ethics (Jobin et al., 2019) lead us to go beyond fairness and develop a notion of AI justice. Rather than focusing on fair treatment by AI systems, justice would also question the modalities of using these systems.

The explicability of a system can help ensure that the justice requirements of its uses are met (Doshi-Velez & Kim, 2019). Even if we can avoid bias, we may not be able to explain the decision-making of the so-called black box of AI. Deep learning, for example, uses thousands or even millions of interacting links to make predictions (Liao, 2020, 7). We can determine whether the result is correct, but even reading all the documentation will not make sense of the extensive and complex interactions that produced the solution (Wulff & Finnestrand, 2023, 3). Humans will need to be able to account for the results and justify the AI’s decision-making, ensuring fairness, transparency and human autonomy (Jobin et al., 2019). We must also ask whether AI re-problematizes responsibility at work. Work creates a “psychic friction” between us, objects, and others, which is why it constitutes a “direct experience of our responsibility to our material environment” (Crawford, 2016). Working with, or under the management of, AI may mean that employees lose this concrete side of their work, detaching them from the effects of their actions and displacing responsibility.

 

Conditions of Work

AI prompts a rethinking of the organization of work and the framework in which it is performed. Managerial tools are now equipped with algorithmic methods to manage work: recruitment (Lacroux & Martin-Lacroux, 2021), planning (Lapègue & Bellenguez, 2013), assigning the tasks to be carried out (Gaborieau, 2012), and customer forecasting (Pachidi, 2021). The introduction of these decision-making systems within organizations can lead to a redistribution of tasks and roles, and ultimately to new skill requirements that need to be taken into account (Gekara, 2018). Used to produce employee performance indicators, these systems are also tools for controlling work (Kellogg et al., 2020). Yet they may fail to take employees’ working activities into account in any meaningful way, as in the case of the employee-customer relationship in supermarkets (Evans, 2018), and they pose a threat to the notion of work well done, or quality work (Clot, 2015).

AI’s transformations of organization and management make clear how humans participate in the creation of AI, as engineers, programmers, data scientists and trainers (Tubaro et al., 2020), and as users whose activity drives the machine learning process (Cristianini et al., 2023, 93). The attention economy depends on these user contributions and on the exploitation of their data and activities (Citton, 2014): to what extent does this impact the world of work? If there is not replacement but rather a transformation of human activity, we need to think about new forms of cooperation (Sennett, 2012), between humans and between humans and AI. Is it possible, as Villani argues, that AI can “de-automate human work” and allow us to focus on more creative tasks (2018, 105)?

Perhaps what AI gives us is the opportunity to reassess working conditions, understood as the environment in which a task is performed, the organization of work, and our relationship to work. Work can be a source of meaning, especially through the pleasure and joy of practicing a craft (Frayne, 2015), the value of our work in the eyes of others (Michaelson, 2021), the contribution we make to society (Moriarty, 2009), and the practice and development of autonomy (Schwartz, 1982). However, work is also a site of suffering directly linked to its practice. The centrality of work in the organization of society (Habermas, 1985) leads some to remain in precarious jobs to avoid unemployment (Cholbi, 2018, 13-15). Even when their work is a burden, then, workers must look for meaning within it. The difficulty of finding meaning under different working conditions is exacerbated by AI’s changes to the organization and activities of work. We question the effects of inserting AI into this ambiguous relationship to work and employment (Méda & Vendramin, 2013).

Objectives and Call for Papers

With this conference, we want to shed light on issues related to the transformation of human work and to contribute to these debates from an ethical perspective. To do so, we must revisit the place of work in the human condition and analyze the redefinitions underway, asking: is AI redefining work and its value?

We invite proposals in French or English. The following disciplines are invited to contribute: philosophy, sociology, social history, economics, information and communication sciences, social psychology, ergonomics, computer science, and robotics. Please send a 500-word abstract with a title, bibliographical references and a session preference to ai-work-transformations@univ-grenoble-alpes.fr by July 7, 2023. Contributions not specifically related to these themes are also welcome.

Sessions

The conference will be structured around four axes:

1) Replacement: myth or reality?

2) Case Studies: analyzing the role of current systems

3) Fairness and justice at work: what values for an ethics of work?

4) Conditions of Work

Scientific Committee:

Odile Bellenguez, Professor of Computer Science, Nantes Digital Science Laboratory and IMT Atlantique Bretagne-Pays de la Loire Ecole Mines-Telecom.

Thomas Berns, Professor of Political Philosophy, Université Libre de Bruxelles.

Yann Ferguson, PhD in sociology, Scientific Director of LaborIA (MTPEI-Inria), Expert at the Global Partnership for Artificial Intelligence, Icam Toulouse.

Stefania Mazzone, Associate Professor of History of Political Thought, University of Catania.

Manuel Zacklad, Professor of Information and Communication Sciences, CNAM, Paris.

Scientific coordinators of the conference:

Chloé Bonifas, Institut de Philosophie de Grenoble (UGA) & chaire éthique & IA MIAI

Louis Devillaine, PACTE (CNRS & UGA) & chaire éthique & IA MIAI

Thierry Ménissier, Institut de Philosophie de Grenoble (UGA)

Dakota Root, Institut de Philosophie de Grenoble (UGA) & chaire éthique & IA MIAI  

The Chair

The Ethics & AI Chair (https://www.ethics-ai.fr/) is part of the Multidisciplinary Institute in Artificial Intelligence (MIAI) and is affiliated with the Grenoble Institute of Philosophy (IPhiG). It aims to develop a philosophical understanding of artificial intelligence through sustained dialogue with computer science and robotics; cognitive, social and clinical psychology; the sociology of organizations; information and communication studies; and legal and management sciences. At the intersection of political philosophy, public ethics and the philosophy of technology, the Chair explores the social, moral and political dimensions at stake in the deployment of AI technologies, in a way that is both critical and attentive to their technical realities.

Artificial intelligence (AI) refers to systems that display goal-oriented problem-solving, situational adaptation, experiential learning, and decision-making with at least a minimal level of autonomy (European Commission, 2018; Liao, 2020). These characteristics of so-called intelligent behavior were previously associated with the cognitive functions of living beings but are now generated through machine learning. Today, AI designates a relatively heterogeneous set of objects that are the subject of just as many discourses: a scientific discipline, a super-intelligence, a socio-technical or socio-economic system (Benbouzid et al., 2022).

AI can process, analyze, and classify vast quantities of data more quickly than humans, and it excels at finding patterns and making accurate predictions (Autor, 2015; Podolny, 2015). Increasingly, AI also performs tasks previously assumed to require human creativity: producing articles on sports games and weather events, interpreting medical tests, and generating books and images (Miroschnichenko, 2018; Topalovic et al., 2019; Bensinger, 2023).

AI’s application across so many domains leads us to ask: what should be done with AI at work? Its efficiency, reliability, and speed have led to diverging perspectives. Techno-pessimists claim that humans will be replaced, leading to job losses and increasing inequality, while techno-optimists see the cooperation of AI and humans as emancipatory, allowing humans to focus on meaningful, creative tasks (Vicsek, 2021). On either view, AI has implications for the future value and conditions of work. Through this conference, we aim to provide a social philosophy of AI and work, grounded in an ethical framework. Focusing on three axes of research, we will consider the history of the technological replacement debate, the conditions of work with AI, and the transformations of fairness and justice, in order to highlight AI’s specificities. We thereby aim to address developing questions within an emerging literature on AI’s impact on work (Deranty & Corbin, 2022).

History of the Machine Replacement Debate

First, how does the appearance of AI change the pre-existing debate about effects of technology on human work? Previous technologies have evoked violent criticism, for example, opposition to mechanized looms among 19th century Luddites and fear of automation in 1960’s industrial America (Autor, 2015). AI could just be the most recent technology to evoke these concerns about working conditions. Some might argue that AI only maintains the preexisting exploitation of workers (Casilli, 2019), with the addition of more efficient forms of control via algorithmic surveillance and management (Boltanski & Chiapello, 1999). Others show how AI systems actually end up being diluted into workers’ daily lives and are reduced to their mere functionality (Condé & Ferguson, 2023).

Yet AI systems are capable of rendering decisions with or without human input (Shrestha, 2019). In addition, while previous technologies were a means of humans communicating with other humans, AI systems are involved in communication with humans (Guzmann, 2020). Finally, we see AI systems which involve human cooperation to function, creating interconnections where the AI learns from and adapts to human behavior and labor (Cristianini et al., 2023). In many ways, AI seems to function differently than previous technologies, perhaps definitively altering working conditions.

Fairness and Justice at Work

AI at work also raises questions related to fairness and justice (van der Poel et al., 2022, 15). In the collective imagination, AI systems are often assumed to be objective because they have no human emotions or desires (Guzman, 2020, 48). This belief misses the subjectivity built into AI systems where even numbers “represent complex social reality” (Boucher, 2020, 23). Data, while not inherently discriminatory, can fit into discriminatory patterns, undermining the possibility of fairness in machine learning. Moreover, data analysts themselves fail to agree on consensual fairness metrics, especially since they sometimes contradict each other (Kleinberg et al., 2016).  The difficulties of achieving fairness in AI ethics (Jobin et al., 2019) lead us to go beyond fairness to develop a notion of AI justice. Rather than focusing on fair treatment by AI systems, justice would also question the modalities of using these systems. 

The explainability of a system can ensure that the justice requirements of its uses are met (Doshi-Velez & Kim, 2019). Even if we can avoid bias, we may not be able to explain the decision-making of the so-called black box of AI. For example, deep learning uses thousands or even millions of interacting links to make predictions (Liao, 2020, 7). We can determine whether the result is correct, but even reading all the documentation will not help make sense of the extensive and complex interactions that went into the solution (Wulff & Finnestrand, 2023, 3). Humans will need to be able to account for the results and justify the AI's decision-making, ensuring fairness, transparency and human autonomy (Jobin et al., 2019). We must also ask whether AI re-problematizes responsibility at work. When we work, it creates a "psychic friction" between us, objects, and others, which is why work constitutes a "direct experience of our responsibility to our material environment" (Crawford, 2016). Working with or under the management of AI may mean that employees lose this concrete side of their work, detaching them from the effects of their actions and resulting in a displacement of responsibility.


Conditions of Work

AI leads to a rethinking of work’s organization and the framework in which it is performed. Managerial tools are now equipped with algorithmic methods to manage work: recruitment (Lacroux & Martin-Lacroux, 2021), planning (Lapègue & Bellenguez, 2013), establishing tasks to be carried out (Gaborieau, 2012), and customer forecasting (Pachidi, 2021). The introduction of these decision-making systems within organizations can lead to a redistribution of tasks and roles and ultimately to various demands for skills that need to be taken into account (Gekara, 2018). Used as tools for employee performance indicators, these systems are also tools for controlling work (Kellogg et al., 2020). They may nevertheless fail to take the working activities of employees into account in any meaningful way, as in the case of the employee-customer relationship in supermarkets (Evans, 2018), and pose a threat to the notion of work well done or quality work (Clot, 2015).

When we observe AI's transformations of organization and management, it becomes clear how humans participate in the creation of AI as engineers, programmers, data scientists and trainers (Tubaro et al., 2020), as well as users whose activity drives the machine learning process (Cristianini et al., 2023, 93). The attention economy depends on these user contributions and the exploitation of their data and activities (Citton, 2014): to what extent does this impact the world of work? If AI does not replace human activity but transforms it, we need to think about new forms of cooperation (Sennett, 2012) between humans and between AI and humans. Is it possible, as Villani argues, that AI can "de-automate human work" to allow us to focus on more creative tasks (2018, 105)?

Perhaps what AI offers us is an opportunity to reassess working conditions, understood as the environment in which a task is performed, the organization of work, and our relationship to work. Work can be a source of meaning, especially through the pleasure and joy of practicing a craft (Frayne, 2015), the value of our work in the eyes of others (Michaelson, 2021), making a contribution to society (Moriarty, 2009), and the practice and development of autonomy (Schwartz, 1982). However, work can also be a site of suffering directly linked to its practice. The centrality of work in the organization of society (Habermas, 1985) leads some to remain in precarious jobs to avoid unemployment (Cholbi, 2018, 13-15). This means that even if their work is a burden, workers must look for meaning within it. This difficulty of finding meaning across different working conditions is exacerbated by AI's changes to the organization and activities of work. We question the effects of inserting AI into this ambiguous relationship to work and employment (Méda & Vendramin, 2013).

Objectives and Call for Papers

With this conference, we want to shed light on issues related to the transformation of human work, contributing to these debates from an ethical perspective. To do so, it is necessary to revisit the place of work in the human condition and to analyze the redefinitions underway, asking: is AI redefining work and its value?

We invite proposals in French or English. The following disciplines are invited to participate in the reflection: philosophy, sociology, social history, economics, information and communication sciences, social psychology, ergonomics, computer science, and robotics. Please send a 500-word abstract with a title, bibliographical references and a session preference to ai-work-transformations@univ-grenoble-alpes.fr by July 7th, 2023. Contributions not specifically related to these themes are also welcome.

Sessions

The conference will be structured around four axes:

1) Replacement: Myth or Reality

2) Case Studies: Analyzing the Role of Current Systems

3) Fairness and Justice at Work: What Values for an Ethics of Work?

4) Conditions of Work

Scientific Committee:

Odile Bellenguez, Professor of Computer Science, Nantes Digital Science Laboratory and IMT Atlantique Bretagne-Pays de la Loire Ecole Mines-Telecom.

Thomas Berns, Professor of Political Philosophy, Université Libre de Bruxelles.

Yann Ferguson, PhD in sociology, Scientific Director of LaborIA (MTPEI-Inria), Expert at the Global Partnership for Artificial Intelligence, Icam Toulouse.

Stefania Mazzone, Associate Professor of History of Political Thought, University of Catania.

Manuel Zacklad, Professor of Information and Communication Sciences, CNAM, Paris.

Scientific coordinators of the conference:

Chloé Bonifas, Institut de Philosophie de Grenoble (UGA) & chaire éthique & IA MIAI

Louis Devillaine, PACTE (CNRS & UGA) & chaire éthique & IA MIAI

Thierry Ménissier, Institut de Philosophie de Grenoble (UGA)

Dakota Root, Institut de Philosophie de Grenoble (UGA) & chaire éthique & IA MIAI  

The Chair

The ethics & AI Chair (https://www.ethics-ai.fr/) is part of the Multidisciplinary Institute in Artificial Intelligence (MIAI) and is affiliated with the Grenoble Institute of Philosophy (IPhiG). It aims to develop a philosophical understanding of artificial intelligence through a sustained dialogue with computer science and robotics, cognitive, social and clinical psychology, sociology of organizations, information and communication studies, as well as legal studies and management sciences. At the intersection of political philosophy, public ethics and philosophy of technology, the Chair seeks to explore the social, moral and political dimensions at stake in the deployment of AI technologies, in a way that is both critical and attentive to their technical realities.