SAM: The Sensitivity of Attribution Methods to Hyperparameters

Title: SAM: The Sensitivity of Attribution Methods to Hyperparameters
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Bansal, Naman; Agarwal, Chirag; Nguyen, Anh
Conference Name: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Date Published: June 2020
Keywords: attribution, composability, Heating systems, Human Behavior, Limiting, Metrics, Noise measurement, pubcrawl, Robustness, Sensitivity, smoothing methods
Abstract: Attribution methods can provide powerful insights into the reasons for a classifier's decision. We argue that a key desideratum of an explanation method is its robustness to input hyperparameters, which are often randomly set or empirically tuned. High sensitivity to arbitrary hyperparameter choices not only impedes reproducibility but also calls into question the correctness of an explanation and erodes the trust of end-users. In this paper, we provide a thorough empirical study of the sensitivity of existing attribution methods. We found an alarming trend: many methods are highly sensitive to changes in their common hyperparameters, e.g., even changing a random seed can yield a different explanation! Interestingly, such sensitivity is not reflected in the average explanation accuracy scores over the dataset as commonly reported in the literature. In addition, explanations generated for robust classifiers (i.e., classifiers trained to be invariant to pixel-wise perturbations) are surprisingly more robust than those generated for regular classifiers.
Citation Key: bansal_sam_2020