Maria Sinaci
Biography
Maria Sinaci is an Associate Professor at the Faculty of Economic Sciences of Aurel Vlaicu University in Arad, Romania. In 2008, she obtained her PhD in Philosophy from the West University of Timișoara, the field in which her subsequent research is situated. Her academic activity focuses mainly on ethics, often with interdisciplinary approaches. She has written two books and is the editor or co-editor of eight collective volumes. She has published over 60 articles as author or co-author and has presented over 70 papers at conferences. She has also organized and served on scientific committees for international conferences, such as the International Conference on Bioethics in the New Era of Science (BNAS) in 2017, and has coordinated several projects in the field of applied ethics.
Research Interest
Her current research addresses topics related to business ethics, bioethics, the philosophy of education, and the ethical and social implications of the use of artificial intelligence.
Abstract
Between algorithmic autonomy and human responsibility: emerging ethical dilemmas in artificial intelligence governance
The accelerated progress of artificial intelligence (AI) and its deployment in increasingly sensitive domains, such as healthcare, criminal justice, autonomous transportation, and the creative industries, has generated unprecedented opportunities for innovation, efficiency, and social welfare. At the same time, this rapid diffusion has exposed society to significant ethical risks, including bias and discrimination, opacity of decision-making, and the potential erosion of accountability. The central issue emerging from these transformations concerns the tension between algorithmic autonomy, expressed through the ability of systems to perform complex tasks and make decisions with minimal human intervention, and human responsibility, which remains indispensable for legitimacy, trust, and the protection of fundamental rights.
This paper examines these dilemmas through an interdisciplinary lens that combines perspectives from ethics, law, and technology studies. It argues that algorithmic autonomy should not be conceptualized as the transfer of moral or legal responsibility to machines but rather as a functional feature that requires embedding within a broader framework of human-centered governance. Without such integration, societies risk facing "responsibility gaps" and the externalization of moral agency to technical systems.
Building on this premise, the study proposes a model of distributed responsibility in which human actors (developers, policymakers, institutions, and end-users) retain decisive roles in oversight, accountability, and remediation. The model emphasizes the necessity of transparency, auditability, and fairness as guiding principles, operationalized through mechanisms such as algorithmic auditing, contestability, and the monitoring of social impacts.
By aligning technological innovation with ethical and legal safeguards, the paper outlines the foundations of an ethical AI governance model oriented toward equity, human dignity, and democratic legitimacy.
Keywords: algorithmic autonomy, human responsibility, AI governance, distributed responsibility, ethics of artificial intelligence.