A new article by Mihaela Constantinescu (Executive Director, CCEA), co-authored with Muel Kaptein, was published on 13 November. The research addresses one of the most critical challenges in business ethics and Artificial Intelligence (AI) governance.
The article, titled “Responsibility Gaps, LLMs & Organisations: Many Agents, Many Levels, and Many Interactions,” proposes a novel framework for allocating moral responsibility when Large Language Models (LLMs) are deployed across organizations.
Abstract:
“In this article, we propose a business ethics-inspired approach to address the distribution dimension of responsibility gaps introduced by general-purpose AI models, particularly large language models (LLMs). We argue that the pervasive deployment of LLMs exacerbates the long-standing problem of “many hands” in business ethics, which concerns the challenge of allocating moral responsibility for collective outcomes. In response to this issue, we introduce the “many-agents-many-levels-many-interactions” approach, labelled M3, which addresses responsibility gaps in LLM deployment by considering the complex web of interactions among diverse types of agents operating across multiple levels of action. The M3 approach demonstrates that responsibility distribution is not merely a function of agents’ roles or causal proximity, but primarily of the range and depth of their interactions. Contrary to reductionist views that suggest such complexity inevitably diffuses responsibility to the point of its disappearance, we argue that these interactions provide normative grounds for safeguarding the attribution of responsibility to agents. Central to the M3 approach is identifying agents who serve as nodes of interaction and therefore emerge as key loci of responsibility due to their capacity to influence others across different levels. We position LLM-developing organisations as an example of such agents. As nodes of interactions, LLM-developing organisations exert substantial influence over other agents and should be attributed broader responsibility for harmful outcomes of LLMs. The M3 approach thus offers a normative and practical tool for bridging potential gaps in the distribution of responsibility for LLM deployment.”
The article is available Open Access here:
https://link.springer.com/article/10.1007/s11948-025-00560-1