Monday, February 21, 2022

Interpretable Deep Learning under Fire

Providing explanations for deep neural network (DNN) models is crucial for their use in security-sensitive domains. A plethora of interpretation models have been proposed to help users understand the inner workings of DNNs: how does a DNN arrive at a specific decision for a given input? This improved interpretability is believed to offer a sense of security by involving humans in the decision-making process. Yet, due to its data-driven nature, interpretability itself is potentially susceptible to malicious manipulation, about which little is known thus far.

https://www.usenix.org/system/files/sec20spring_zhang_prepub.pdf
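To make the threat concrete, here is a minimal PyTorch sketch of one way such manipulation can play out. This is a hedged illustration, not the paper's exact algorithm: it uses plain gradient saliency as the interpreter (the paper considers several interpretation methods) and a generic PGD-style loop. The idea is to craft a perturbation that flips the model's prediction while keeping the saliency-map explanation close to the benign one, so a human reviewing the explanation sees nothing amiss. All names here (model, x, target_class, the trade-off weight lam, the step sizes) are illustrative assumptions.

    # Illustrative sketch only -- not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def gradient_saliency(model, x):
        """Vanilla gradient saliency: |d top-logit / d x|, one simple interpreter."""
        x = x.clone().requires_grad_(True)
        logits = model(x)
        top = logits.max(dim=1).values.sum()
        grad, = torch.autograd.grad(top, x)
        return grad.abs().amax(dim=1)  # collapse channels -> (N, H, W) map

    def interpretation_preserving_attack(model, x, target_class, eps=8/255,
                                         alpha=1/255, steps=40, lam=10.0):
        """PGD-style loop with a dual objective: push the prediction toward
        target_class while keeping the saliency map near the benign map."""
        benign_map = gradient_saliency(model, x).detach()
        x_adv = x.clone()
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            logits = model(x_adv)
            # Recompute saliency differentiably (create_graph enables
            # backprop through this gradient, i.e., a second-order term).
            top = logits.max(dim=1).values.sum()
            grad_x, = torch.autograd.grad(top, x_adv, create_graph=True)
            adv_map = grad_x.abs().amax(dim=1)
            # Term 1: misclassify toward the target.
            # Term 2: keep the explanation looking benign.
            loss = F.cross_entropy(logits, target_class) \
                   + lam * F.mse_loss(adv_map, benign_map)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv - alpha * x_adv.grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-inf projection
                x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()

The weight lam controls the trade-off between fooling the classifier and preserving the explanation; the unsettling point the paper makes is that, because both the classifier and the interpreter are differentiable, both objectives can be optimized at once.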


