Optimizing masking-based XAI for enhanced interpretability of deep learning models
Background
Explainability is a necessary component for the deployment of deep learning models in domains with critical decision-making, such as healthcare, finance, and climate science. The black-box nature of these models makes them less trustworthy, and the aim of eXplainable AI (XAI) is to open the black box. Masking-based methods use repeated perturbations of the input to measure the change in the model's output and thereby assess the relevance of each input pixel. The relevance is estimated either by Monte Carlo sampling of random masks [1] or by optimizing the masks via back-propagation [2]....
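To make the Monte Carlo variant concrete, the following is a minimal sketch of RISE-style relevance estimation: random binary masks are applied to the input, the masked inputs are scored by the model, and each pixel's relevance is the score-weighted average of the masks that kept it. All names (`rise_relevance`, `p_keep`) and the toy model are illustrative assumptions, not the implementation from [1].

```python
import numpy as np

def rise_relevance(model, image, n_masks=500, p_keep=0.5, seed=0):
    """Monte Carlo estimate of per-pixel relevance (illustrative sketch).

    model   : callable mapping an (H, W) array to a scalar class score
    image   : (H, W) input array
    n_masks : number of random masks to sample
    p_keep  : probability that a pixel is kept (unmasked)
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        mask = rng.random((h, w)) < p_keep   # random binary mask
        score = model(image * mask)          # model output on masked input
        saliency += score * mask             # accumulate score-weighted masks
    return saliency / (n_masks * p_keep)     # normalize by expected coverage

# Toy usage: a "model" whose score is the value at one pixel; the
# estimated relevance is highest at exactly that pixel.
img = np.ones((8, 8))
sal = rise_relevance(lambda x: x[3, 4], img)
```

In practice the masks are sampled at low resolution and upsampled to get smooth saliency maps, and the model is a full classifier rather than a single-pixel probe; this sketch only shows the core sampling loop.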