Frontiers of Artificial Intelligence – Philosophical Explorations

Inbetweenness: the existence of artificial intelligence systems

1 December 2023

Commission on Philosophy of Science, Polish Academy of Arts and Sciences Chair of History and Philosophy of Sciences, Pontifical University of John Paul II

About the conference

The conference explores philosophical problems at the frontiers of Artificial Intelligence (AI). The assumption behind the conference is that AI technology itself cannot explain what AI does and can accomplish, why AI systems behave as they do, and why they cannot (so far) move beyond a certain class of problems. The claim is that, to progress, AI needs to understand the philosophical background of the problems it attempts to model. Thus, AI needs philosophy; “What can philosophy do for AI?” could serve as an alternative title for this conference.

We are proud to invite you to the eighth edition of our “Philosophy in Informatics” conference. This edition of the event will take place on 1–2 December in virtual form. The main organisers are the Commission on Philosophy of Science of the Polish Academy of Arts and Sciences in Kraków and the Chair of History and Philosophy of Science of the Pontifical University of John Paul II in Kraków.


Gunkel’s work on robots has led him to argue that AIS may “deconstruct the existing logical order that differentiates person from thing” (2023, 162). In this presentation, we take Gunkel’s point as a point of departure for thinking about the existence of artificial intelligence systems, and we ask whether AIS are ontologically different from other objects.
We will use key features of AIS to distinguish them from other objects, illustrating our points with real-life examples. AIS process and identify data: for example, Google Maps uses neural networks to distinguish features of the environment (Lookingbill and Russell 2019; Bolling and Bohl 2022). AIS add new content to the world, with the AIS itself as creator, as exemplified by ChatGPT’s text generation (Miroshnichenko 2018). AIS learn from past data to adjust and improve, as illustrated by the Transformer deep learning architecture for natural language processing (Uszkoreit 2017). Finally, AIS can actively adapt their environment, as the Google Nest thermostat system does when it changes the temperature in a room.
We argue that these distinguishing features suggest that AIS are ontologically distinct from other objects. To support our argument, we introduce the term inbetweenness, which emphasizes the relationality that characterizes machine learning. Finally, we will address what the existence of AIS means for the subjective experience of humans.

Works Cited

– Lookingbill, Andrew, and Ethan Russell. 2019. “Google Maps 101: How We Map the World.” Google. July 22, 2019. https://blog.google/products/maps/google-maps-101-how-we-map-world/.
– Bolling, Liam, and Kristi Bohl. 2022. “How AI and Imagery Build a Self-Updating Map.” Google. April 7, 2022. https://blog.google/products/maps/how-ai-and-imagery-build-self-updating-map/.
– Miroshnichenko, Andrey. 2018. “AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is ‘Yes’).” Information 9 (7): 183. https://doi.org/10.3390/info9070183.
– Uszkoreit, Jakob. 2017. “Transformer: A Novel Neural Network Architecture for Language Understanding.” Google Research. August 31, 2017. https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html.