Local Interpretable Model-Agnostic Explanations for Neural Ranking Models
DOI: https://doi.org/10.3384/ecp208017

Abstract
Neural Ranking Models have shown state-of-the-art performance in Learning-To-Rank (LTR) tasks. However, they are considered black-box models. Understanding the logic behind the predictions of such black-box models is paramount for their adoption in real-world, high-stakes decision-making domains. Local explanation techniques can help us understand the importance of input features with respect to a black-box model's predicted output. This study investigates new adaptations of Local Interpretable Model-Agnostic Explanations (LIME) for explaining neural ranking models. To evaluate our proposed explanations, we explain Neural GAM models: since these are intrinsically interpretable neural ranking models, their ground-truth importance scores can be extracted directly. Using measures such as Rank Biased Overlap (RBO) and Overlap AUC, we show that our explanations of Neural GAM models are more faithful than those produced by explanation techniques developed for LTR applications, such as LIRME and EXS, and by non-LTR explanation techniques for regression models, such as LIME and KernelSHAP. Our analysis is performed on the Yahoo! Learning-To-Rank Challenge dataset.
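To make the pipeline the abstract describes concrete, below is a minimal, self-contained sketch of the general recipe: fit a LIME-style local surrogate around one query-document vector, rank features by the surrogate's coefficients, and compare that ranking to a ground-truth ranking with RBO. This is not the authors' specific adaptation; `black_box_score`, the Gaussian perturbation scale, the exponential kernel width, the ridge surrogate, and the ground-truth importances are all illustrative assumptions standing in for a trained neural ranking model such as Neural GAM.

```python
# Hedged sketch: LIME-style local surrogate for a pointwise ranking
# scorer, evaluated against a ground-truth feature ranking via RBO.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box_score(X):
    """Hypothetical pointwise ranking scorer: (n, d) features -> n scores."""
    w = np.sin(np.arange(X.shape[1]))            # fixed stand-in weights
    return X @ w + 0.1 * (X ** 2) @ np.abs(w)    # mildly nonlinear score

def lime_explain(x, n_samples=5000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around the instance x."""
    d = x.shape[0]
    Z = x + rng.normal(scale=0.1, size=(n_samples, d))  # local perturbations
    y = black_box_score(Z)                              # query the black box
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_                              # local importances

def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated Rank Biased Overlap (Webber et al., 2010):
    RBO = (1 - p) * sum_d p^(d-1) * A_d, with A_d the overlap at depth d."""
    k = min(len(ranking_a), len(ranking_b))
    score, seen_a, seen_b = 0.0, set(), set()
    for depth in range(1, k + 1):
        seen_a.add(ranking_a[depth - 1])
        seen_b.add(ranking_b[depth - 1])
        score += (p ** (depth - 1)) * len(seen_a & seen_b) / depth
    return (1 - p) * score

x = rng.normal(size=20)                          # one query-document vector
explained_rank = np.argsort(-np.abs(lime_explain(x)))
# Stand-in ground truth: magnitudes of the linear weights used above.
ground_truth_rank = np.argsort(-np.abs(np.sin(np.arange(20))))
print("RBO:", rbo(explained_rank.tolist(), ground_truth_rank.tolist()))
```

The persistence parameter p in RBO controls how top-weighted the comparison is: values near 1 spread weight deeper into the two rankings, while smaller values concentrate agreement credit on the top-ranked features.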
Published: 2024-06-14
License
Copyright (c) 2024 Amir Hossein Akhavan Rahnama, Laura Galera Alfaro, Zhendong Wang, Maria Movin
This work is licensed under a Creative Commons Attribution 4.0 International License.