The progress made by deep neural networks over the last decade on classification tasks across many domains has raised concerns about the "black box" nature of these models. Explaining the decisions of deep neural networks in a human-understandable way remains an open problem, and with the advent of architectures such as transformers, the growing complexity and number of parameters make such explanations even more important. The work presented in this thesis is divided into two parts. The first part concerns the development of a multimodal network for risk detection for frail people in the home environment. The data consist of egocentric videos and signals acquired from various physiological and motion sensors. As data acquisition takes place in a real-life scenario, using this complex data in multimodal networks poses several problems: i) poor synchronization between modalities, ii) scarcity of data, and iii) understanding the representations shared between modalities. To build a truly multimodal network, we first focus on the unimodal components, designing and evaluating our models on freely available unimodal datasets. The models are then merged into a multimodal architecture that makes decisions on the real multimodal data. One of the configurations we propose is a multimodal transformer. Two forms of information fusion are studied: i) intermediate fusion in feature space and ii) late fusion in decision space.
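As an illustration of the two fusion strategies mentioned above, the minimal PyTorch-style sketch below contrasts fusion in feature space (unimodal token embeddings processed jointly by a transformer encoder) with fusion in decision space (per-modality scores averaged). The module names, feature dimensions, and number of classes are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of intermediate vs. late fusion (illustrative only;
# dimensions, module names, and class count are assumptions).
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Fusion in feature space: unimodal features are projected to a common
    token space, concatenated, and processed jointly by a transformer encoder."""
    def __init__(self, d_model=256, n_classes=2):
        super().__init__()
        self.video_proj = nn.Linear(512, d_model)   # egocentric-video features
        self.sensor_proj = nn.Linear(64, d_model)   # physiological/motion features
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, video_feats, sensor_feats):
        tokens = torch.cat([self.video_proj(video_feats),
                            self.sensor_proj(sensor_feats)], dim=1)
        fused = self.fusion_encoder(tokens).mean(dim=1)  # pooled joint representation
        return self.head(fused)

class LateFusion(nn.Module):
    """Fusion in decision space: each modality is classified independently
    and the per-modality probabilities are averaged."""
    def __init__(self, video_model, sensor_model):
        super().__init__()
        self.video_model, self.sensor_model = video_model, sensor_model

    def forward(self, video_input, sensor_input):
        return 0.5 * (self.video_model(video_input).softmax(-1)
                      + self.sensor_model(sensor_input).softmax(-1))
```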
In the second part of the thesis, we develop explanation methods for transformers, more specifically vision transformers. We evaluate our method in terms of the plausibility of the explanations it produces with respect to human gaze fixation density maps. This part of the work is carried out on a still-image dataset. Since our aim is to develop solutions for the analysis of temporal information, such as video, and following the philosophy of importance through explanation, we propose a model that highlights the temporal importance of frames in a video. This model is applied to the visual data of the risk detection system and compared with a large-scale dataset of human actions. Next, we take advantage of our proposed explainability method and use it to improve the generalization of the proposed multimodal transformer. Indeed, the use of explainability techniques in multimodal transformers increases the accuracy of these classifiers on complex real-world data and opens up interesting perspectives for studies of the sparsity and robustness of these approaches.
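As a sketch of the kind of plausibility evaluation described above, the snippet below compares a model explanation map with a human gaze fixation density map using two standard saliency metrics (Pearson correlation coefficient and histogram similarity). The function names are hypothetical, and these particular metrics are an assumption for illustration, not necessarily the exact ones used in the thesis.

```python
# Illustrative plausibility check: how well does an explanation map agree
# with a human gaze fixation density map? (names and metrics are assumptions)
import numpy as np

def normalize(m):
    """Shift a map to be non-negative and scale it to sum to 1."""
    m = m - m.min()
    return m / (m.sum() + 1e-12)

def pearson_cc(explanation, gaze_density):
    """Pearson correlation between the two maps (higher = more plausible)."""
    x, y = explanation.ravel(), gaze_density.ravel()
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    return float(np.mean(x * y))

def similarity(explanation, gaze_density):
    """Histogram intersection (SIM) of the normalized maps, in [0, 1]."""
    return float(np.minimum(normalize(explanation), normalize(gaze_density)).sum())
```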