The advent of neural networks in machine learning has brought about a paradigm shift in many fields, from the applied sciences to everyday life. We now interact with neural networks daily: on our smartphones, while browsing the Internet, or through connected devices.
Nevertheless, their internal functioning remains rather obscure, even to the scientific communities that develop them, and it is generally accepted that no satisfactory mathematical formalism currently exists to describe their learning process. Yet would a purely mathematical understanding be sufficient? Given the enormous impact of these technologies outside the laboratory, a new imperative arises, grounded in social, legal, and economic considerations: we need to produce explanations of neural networks' results for their users.
In this presentation, I will propose a preliminary analysis of the answers that Explainable AI research offers to this question, while discussing the conditions under which this very young field has been constituted.
| Attachment | Size |
|---|---|
| Nicolas Berkouk1juillet2022.pdf | 3.22 MB |