Deep Neural Networks (DNNs) have recently received significant attention in the side-channel community due to their state-of-the-art performance in security testing of embedded systems. However,
research on the subject has mostly focused on techniques to improve
attack efficiency in terms of the number of traces required to extract secret
parameters. What has not been investigated in detail is the constructive
use of DNNs as a tool to evaluate and improve the effectiveness
of countermeasures against side-channel attacks. In this work, we try to
close this gap by applying attribution methods, which aim to interpret
DNN decisions, in order to identify leaking operations in cryptographic
implementations. In particular, we investigate three different approaches
that have been proposed for feature visualization in image classification
tasks and compare them regarding their suitability for revealing Points of
Interest (POIs) in side-channel traces. We show by experiments with
three separate data sets that Layer-wise Relevance Propagation (LRP),
as proposed by Bach et al., provides the best results in most cases. Finally, we
demonstrate that attribution can also serve as a powerful side-channel
distinguisher in DNN-based attack setups.