Artificial agency and responsibility: the rise of LLM-powered avatars
Venue: Faculty of Philosophy, University of Bucharest, 22 & 23 May 2025
Description: Recent developments in digital and robotic avatars that integrate Large Language Models with interactive representations of real human beings blur the boundaries between human and artificial agency. The use of highly autonomous or semi-teleoperated LLM-powered avatars in virtual and physical environments (e.g. professional, educational, or healthcare settings) raises pressing questions about legal and moral responsibility for outcomes generated with and by avatars. The purpose of this two-day workshop is to explore the puzzle of artificial agency and responsibility in the context of LLM-powered avatars, drawing on various streams of research in domains such as AI ethics, social ontology, and the philosophy of law.
Organizers: This is the second in a series of five yearly workshops hosted by the Research Center in Applied Ethics of the Faculty of Philosophy, University of Bucharest, within the framework of the ERC Starting Grant project “avataResponsibility” (Avatar agency. Moral responsibility at the intersection of individual, collective, and artificial social entities in emergent avatar communities). Details of the previous edition are available here. The workshop is part of the larger “Responsibility Matters Workshop Series” (RMWS), which covers topics related to responsibility across various fields.
Keynote and Guest Speakers
Pekka Mäkelä
Pekka Mäkelä is the vice director of the Helsinki Institute for Social Sciences and Humanities (HSSH). Mäkelä is also a PI, jointly with Raul Hakli, of the RADAR research group. His research interests lie in the normative dimensions of collective action (e.g. collective responsibility and trust), social ontology, the philosophy of the social sciences, and philosophical problems of social robotics and human-robot interaction. He is currently the PI of two research projects: “Trust and Value-Sensitive Design” and the Erasmus+ project “Implementing Ethics by Design in AI: A Training Framework for Healthcare”.
Raul Hakli
Raul Hakli is a University Researcher at the Department of Practical Philosophy, University of Helsinki, Finland. His research interests include the philosophy of social robotics and artificial intelligence, social ontology, and institutional epistemology. He has edited collections on social ontology and robophilosophy and published articles in journals such as The Monist, Synthese, Journal of Philosophical Logic, Cognitive Systems Research, Economics and Philosophy, Annals of Mathematics and Artificial Intelligence, and International Journal of Social Robotics. Together with Pekka Mäkelä, he leads the RADAR research group, which studies philosophical and societal questions raised by modern technologies such as AI and robotics. He is the editor-in-chief of the Springer series Studies in the Philosophy of Sociality.
Dina Babushkina
Dina Babushkina is an assistant professor in philosophy at the University of Twente, which she joined after a postdoctoral appointment at the University of Helsinki (https://www.helsinki.fi/en/researchgroups/robophilosophy-ai-ethics-and-datafication). She has a PhD in Social Sciences (ethical theory and moral psychology) from the University of Helsinki and a PhD in Philosophy (History of Philosophy) from Saint Petersburg State University. Her expertise is in moral psychology, normative ethics, value theory, and philosophical anthropology (existentialism). She is also known for her work on F.H. Bradley’s ethical idealism. Her current research interests fall into two groups: the philosophy of psychology (the concept of the self; desire and emotions; moral motivation) and the philosophy of technology. With respect to AI, she is interested in its effects on the human condition and on agency (moral and cognitive), as well as in how these disruptive effects should inform moral norms for AI. She is a co-founder and the coordinator of the Ethics and Epistemology of AI Initiative.
Matteo Pascucci
Matteo Pascucci is a research fellow at the Department of Philosophy of Central European University and a researcher at the Institute of Philosophy of the Slovak Academy of Sciences. His main research areas are modal logic, temporal logic, deontic logic and normative reasoning, ethics for artificial intelligence, and the formal analysis of indeterminism. He is currently the Principal Investigator of the international project MODREQUAM (“Modal Reasoning, Quarc and Metaphysics”), funded by the FWF and the DFG and hosted at CEU. His academic education includes a PhD in Computer Science (University of Verona, 2016), an MA in Philosophy (University of Trento, 2012), and a BA in Philosophy (University of Siena, 2010).
Diana Mocanu
Diana Mocanu is a postdoctoral researcher in LEGACY. She is in charge of the subproject examining whether the anthropomorphic features that seem increasingly to set AI systems apart from mere things, enabling them to display behaviors that might merit some form of legal recognition, also trouble the traditional, Liberal notion of agency implied in Western legal systems. In so doing, this research will draw insights from, and parallels with, previous work on group agency and collective intentionality, comparing alleged AI agency with the agency of those old, slow AIs that corporations are.
Daniel Dodds Berger
Daniel Dodds Berger is a postdoctoral researcher at the Faculty of Law, University of Helsinki, in charge of the subproject concerning the historical background of Liberal Agency and its challengers. He earned a law degree from the Universidad de Chile and then continued his studies at Goethe-Universität Frankfurt, where he completed a master’s degree in philosophy and a Ph.D. in 2024. His research focuses on the development of legal agency in 18th- and 19th-century German philosophy and legal science. He also engages with contemporary theories of rights, seeking to bridge historical perspectives with modern legal challenges.
Izabela Skoczeń
Izabela Skoczeń is an assistant professor at the Jagiellonian University in Kraków and a member of the Jagiellonian Centre for Law, Language and Philosophy in Kraków. Her research interests include experimental jurisprudence, cognitive science, the philosophy of language, and the philosophy of law. In 2019 she published the book ‘Implicatures within Legal Language’ in the Springer Law and Philosophy Library, and she has published experimental articles in journals such as Cognition, Synthese, Ratio Juris, the Leiden Journal of International Law, the German Law Journal, the International Journal for the Semiotics of Law, and Intercultural Pragmatics.
Elena Popa
Elena Popa is a Ramón y Cajal fellow at the University of Seville. She works on causality and causal reasoning and on values in science, with special emphasis on cultural and social issues in medicine, particularly psychiatry and public health. Her published work includes articles in journals such as Synthese, Studies in History and Philosophy of Science: Part C, and Topoi, as well as book chapters with publishers such as Oxford University Press and Cambridge University Press. She has been the principal investigator of several research projects in the philosophy of medicine.
Steven S. Gouveia
Steven S. Gouveia is a Contracted Researcher at the Mind, Language, and Action Group of the Institute of Philosophy, University of Porto, and an Honorary Professor at the Andrés Bello Faculty of Medicine. He recently served as a Visiting Researcher at the Robotics Lab of the University of Palermo. He has authored 14 academic books, the latest being “The Odyssey of the Mind: Dialogues on the Brain and Consciousness”, featuring contributions from eight international thinkers, including Nobel Prize-winning physicist Sir Roger Penrose. At the University of Porto, he leads a six-year project focused on AI Ethics in Medicine (www.trustaimedicine.weebly.com). A TEDx speaker with over 40,000 views, he hosted the documentary “The Age of Artificial Intelligence: A Documentary”. He has been a keynote speaker at numerous international conferences. For more information, visit www.stevensgouveia.weebly.com.