Antonio Ballarin
Biography
The pervasive adoption of artificial intelligence (AI) in everyday devices, such as smartphones, TVs, food processors, and cars, raises increasingly relevant ethical questions. This diffusion not only improves the efficiency and functionality of the devices, but also poses significant challenges in terms of liability, privacy, and security. In particular:
• The collection and processing of personal data by smart devices can compromise user privacy. Sensitive information can be used without consent or exposed to breaches.
• In the event of malfunctions or accidents caused by automated systems, the question of who is responsible (the manufacturer, the software developer, or the user) is complex and requires clear regulation.
• Algorithms can perpetuate existing biases if not carefully designed. This can lead to unfair decisions in areas such as credit and hiring.
Research Interest
A Method to Test the Ethics of Some AI Classifiers - The Example of the School Dropout Problem
Abstract
If an AI artifact is an emulation of human behavior in the performance of some activity, and if the human being carrying out that activity is required to respect a behavioral framework defined by laws, rules, regulations, procedures, best practices, etc., then the AI that emulates that behavior is also required to respect the same framework. The idea of ethics tests is developed on this principle, and on the same basis a pragmatic methodology can be built to test the artifact's compliance with the behavioral framework within which it will necessarily operate. The approach proposed in this work offers an extremely pragmatic solution to the search for an "ethical behavior" of AI artifacts, bypassing the difficult applicability of the complex and abstract legislation currently in force on this topic. To illustrate the applicability of this methodology to a concrete problem, this work takes school dropout as an example and describes the construction of two classifiers, one based on a neural network and one on a decision tree, both able to predict the phenomenon. The application of the methodology clearly shows that the explainability offered by a symbolic system, such as a decision tree, cannot be transferred as an element of explainability to the behavior of a neural classifier.
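The contrast drawn in the abstract between a symbolic and a sub-symbolic classifier can be sketched in code. The following is a minimal illustration, not the paper's actual implementation: it uses scikit-learn and a synthetic dataset as a stand-in for real student records, and simply shows that a decision tree exposes inspectable rules while a neural network of comparable accuracy does not.

```python
# Illustrative sketch only: synthetic data stands in for student records;
# this is not the dataset or code used in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for student records; label 1 = "at risk of dropout".
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sub-symbolic classifier: a small feed-forward neural network.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(X_tr, y_tr)

# Symbolic classifier: a shallow decision tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_tr, y_tr)

print("NN accuracy:  ", round(nn.score(X_te, y_te), 3))
print("Tree accuracy:", round(tree.score(X_te, y_te), 3))

# The tree's decision rules are directly printable; no analogous,
# human-readable rule set exists for the trained network's weights.
print(export_text(tree))
```

The printed rule list is what makes the tree's behavior testable against a behavioral framework clause by clause; the point of the abstract is that this transparency does not carry over to the neural model, even when both predict the same phenomenon.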
Keywords: ethics tests, explainability, classification systems, bias, categorical polarization