International Conference on Artificial Intelligence and Cybersecurity

Claudio Sarra

Biography

Claudio Sarra is Associate Professor of Philosophy of Law at the University of Padua, with national qualification as Full Professor (sector GIUR 17/A). He holds a PhD in Philosophy of Law and has conducted postdoctoral research in the same field. His academic work focuses on legal epistemology, legal metaphors, data protection law, AI and law, and the ethics of technology. Prof. Sarra has extensive international teaching experience as well as professional experience as a lawyer, and serves as Deputy Director of two second-level master's programs, in Bioethics and Biolaw and in Metaverse and Legal Informatics. He is editor-in-chief of the Journal of Ethics and Legal Technologies and sits on multiple editorial and scientific boards, as well as on the Ethical Committee of the HIT ("Human-Inspired Technology") Centre. He has authored several monographs and over 40 peer-reviewed contributions in international journals and edited volumes. His current research addresses incommensurability in law and science, contestability in automated decision-making, and the symbolic dimensions of digital regulation.

Research Interests

Fairness in Algorithmic Systems: Between Legal Semantics and Mathematical Constraint

Abstract

The digital revolution has fostered unprecedented interdisciplinary collaboration, particularly in the ethical, legal, and technical governance of Artificial Intelligence (AI). Among the central challenges emerging from this convergence is the issue of "fairness" in algorithmic systems. The imperative to design "fair" AI, whether in terms of pre-processing data, in-processing logic, or post-processing outcomes, requires both technical formalization and normative grounding. However, recent literature in computer science has identified a structural obstacle: the so-called "impossibility of fairness" theorem. Foundational contributions by Kleinberg et al. (2016) and Chouldechova (2017) demonstrate that multiple commonly accepted fairness criteria, such as demographic parity, equalized odds, and predictive parity, are mathematically incompatible in non-trivial cases. This constraint is further aggravated in application domains like healthcare, criminal justice, and large language models (LLMs), where differing base rates and contextual variability exacerbate trade-offs between competing fairness metrics. This technical impossibility foregrounds a broader conceptual tension: any attempt to algorithmically encode fairness necessarily privileges one interpretation at the expense of others. To illuminate the roots of this tension, the talk will combine an overview of key results in algorithmic fairness with a historical-philological reconstruction of the term "fairness," tracing its metaphorical evolution and juridical assimilation. The multiplicity of its contemporary uses, particularly within European legal frameworks where "fairness" appears across diverse domains (e.g., data protection, competition law, fair trial), suggests a semantic fluidity that resists operationalization. The analysis will conclude with a philosophical reflection on how the indeterminacy of fairness, both in law and machine learning, invites an interdisciplinary rethinking of normative design and evaluation.
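
As a minimal numerical sketch of the constraint the abstract refers to (not material from the talk itself), the Python snippet below uses the identity underlying Chouldechova's (2017) result, which relates a group's false-positive rate (FPR) to its base rate, positive predictive value (PPV), and true-positive rate (TPR): if two groups have different base rates, enforcing equal PPV (predictive parity) and equal TPR forces their FPRs apart, so equalized odds must fail. The group labels and figures are illustrative assumptions.

```python
# Minimal sketch of the "impossibility of fairness" result discussed above.
# Identity (Chouldechova 2017): FPR = p/(1-p) * (1-PPV)/PPV * TPR,
# where p is the group's base rate (prevalence).
# Group names and numbers below are illustrative, not from the talk.

def implied_fpr(prevalence: float, ppv: float, tpr: float) -> float:
    """False-positive rate forced by a given prevalence, PPV, and TPR."""
    return (prevalence / (1.0 - prevalence)) * ((1.0 - ppv) / ppv) * tpr

# Enforce predictive parity (equal PPV) and equal TPR across two groups
# whose base rates differ.
PPV, TPR = 0.7, 0.8
for group, prevalence in [("group A", 0.3), ("group B", 0.5)]:
    print(f"{group}: base rate = {prevalence:.2f}, "
          f"implied FPR = {implied_fpr(prevalence, PPV, TPR):.3f}")

# group A: implied FPR ~= 0.147; group B: implied FPR ~= 0.343.
# The FPRs necessarily differ, so equalized odds cannot also hold:
# the criteria are jointly unsatisfiable whenever base rates differ.
```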