Hardware Attacks on Deep Learning

New Attacks on Deep Learning Using Micro-architectural Vulnerabilities

Research Overview

Research on attacks against deep learning has mainly focused on test-time evasion attacks or training-time poisoning attacks. However, the emergence of Machine Learning as a Service (MLaaS) business models enlarges the attack surface tremendously by enabling low-level attacks against the hardware that runs the service. Deep learning requires extensive computation and specialized hardware, which makes hardware-based attacks an emerging threat.

First, we measure the vulnerability of a deep learning model to bit-flips facilitated by the prominent RowHammer attack, which can trigger random or targeted bit-flips in physical memory. The results of this project are presented in our research paper: "Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks."

Second, we focus on cache side-channel attacks for reverse-engineering deep learning models and stealing potentially proprietary network architectures, by leveraging the information leaked by cache access timings. The results of this project are presented in our research paper: "Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks."


Rowhammer Attacks

Our work is the first to expose the graceless degradation of DNNs under single bit-flips, contrary to the belief that it is hard for an adversary to inflict a significant accuracy drop (> 10%) with known attacks such as data poisoning (which requires blending in ~3% of crafted instances) or hardware fault attacks (where the average accuracy drop over random bit-flips is negligible).

We found that a single bit-flip in a model parameter can inflict an accuracy drop of up to 90%, and that 50% of the model parameters have at least one bit whose flip causes an accuracy drop of over 10%.
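
To see why a single flipped bit can be so damaging, here is a minimal sketch (not the paper's experimental code) that flips the most significant exponent bit of a float32 parameter. The weight values, the target index, and the chosen bit position are illustrative assumptions.

```python
import numpy as np

# Toy float32 "model parameters" -- the values are illustrative, not from the paper.
weights = np.array([0.042, -0.013, 0.250], dtype=np.float32)

target = 0    # which parameter to corrupt
bit = 30      # most significant exponent bit of an IEEE-754 float32

# Reinterpret the same memory as raw 32-bit integers and flip one bit,
# simulating a RowHammer-induced fault in the stored parameter.
view = weights.view(np.uint32)
view[target] ^= np.uint32(1 << bit)

print(weights)
# weights[0] jumps from 0.042 to roughly 1.4e37: one flipped exponent bit turns
# a small weight into an enormous one that can dominate the layer's output and
# wreck the model's accuracy.
```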

We systematically showed that, in the cloud, a RowHammer-enabled attacker who knows the model internals and controls the location of the bit-flips can inflict accuracy drops of up to 99% with a 100% success rate (and without crashes). Further, an attacker who has neither the model knowledge nor control over the bit-flip location can still inflict an accuracy drop of over 10% within, at most, a few minutes in our simulated environment.

We presented our paper titled "Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks" at the 28th USENIX Security Symposium on August 14, 2019.


Cache Side-channel Attacks

Our work presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels.

We define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine where the victim's deep learning (DL) system is running and passively monitors accesses to the target functions in the shared framework.
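
As a rough sketch of how such passive monitoring works (this is not the paper's code): a Flush+Reload attacker repeatedly flushes a cache line belonging to a monitored framework function, reloads it, and treats a fast reload as evidence that the victim just executed that function. The function names, latencies, and threshold below are assumptions for illustration.

```python
# Sketch of the Flush+Reload decision step. We assume the attacker has already
# collected (timestamp, function, reload_latency) samples for the framework
# functions it probes; the threshold and sample values below are made up.
CACHE_HIT_THRESHOLD = 120  # cycles; a real attacker calibrates this per machine

samples = [
    (1001, "conv2d", 80),    # fast reload -> the victim just executed conv2d
    (1002, "relu",   310),   # slow reload -> the flushed line was not touched
    (1003, "relu",   75),
    (1004, "matmul", 90),
]

def observed_calls(samples, threshold=CACHE_HIT_THRESHOLD):
    """Keep only probes whose reload was fast enough to indicate a cache hit."""
    return [(t, fn) for (t, fn, latency) in samples if latency < threshold]

for timestamp, fn in observed_calls(samples):
    print(f"t={timestamp}: victim invoked {fn}")
```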

We introduce DeepRecon, an attack that reconstructs the architecture of the victim network using internal information extracted via Flush+Reload. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, she can reconstruct the victim's entire network architecture.
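
The reconstruction step can be pictured as counting and ordering those observed invocations. The sketch below is a simplified illustration with a hypothetical call trace and placeholder function names; it is not DeepRecon's actual implementation.

```python
from collections import Counter

# Hypothetical sequence of framework functions observed during one forward
# pass of the victim model (placeholder names, not a framework's real symbols).
trace = ["conv2d", "relu", "conv2d", "relu", "pool",
         "conv2d", "relu", "pool", "matmul", "relu", "matmul", "softmax"]

counts = Counter(trace)
attributes = {
    "conv_layers": counts["conv2d"],
    "fc_layers":   counts["matmul"],
    "pool_layers": counts["pool"],
    "activations": counts["relu"],
    "layer_order": trace,   # the ordering itself constrains the topology
}
print(attributes)
# From counts and ordering like these, an attacker can narrow the victim's
# architecture down to a small family of candidate networks.
```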

We propose and evaluate new DNN-level defense techniques that obfuscate the attacker's observations. Our empirical security analysis represents a step toward understanding DNNs' vulnerability to cache side-channel attacks.
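
As a toy illustration of what obfuscating those observations can look like (the decoy operations and their placement are our own illustrative assumption, not the exact defenses evaluated in the paper), the victim can interleave decoy invocations so the observed trace no longer maps cleanly onto the real architecture:

```python
import random

def obfuscate(trace, decoys=("conv2d", "relu", "matmul"), rate=0.5, seed=0):
    """Interleave decoy invocations into the real call sequence.

    Illustrative only: a real defense would execute genuine but useless
    framework operations so that the attacker's cache probes also observe them.
    """
    rng = random.Random(seed)
    noisy = []
    for fn in trace:
        noisy.append(fn)
        if rng.random() < rate:
            noisy.append(rng.choice(decoys))
    return noisy

real_trace = ["conv2d", "relu", "pool", "matmul", "softmax"]
print(obfuscate(real_trace))
```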

We released our paper titled "Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks" on arXiv in October 2018.