Practical Hardware Attacks on Deep Learning

New Attacks on Deep Learning Using Micro-architectural Vulnerabilities


Is Deep Learning Robust to Hardware Attacks?

No. Our research focuses on exposing and characterizing the vulnerabilities of deep learning models, especially deep neural networks (DNNs), to hardware attacks. First, we show that DNNs are vulnerable to bitwise corruptions facilitated by the well-known RowHammer attack, which can trigger random or targeted bit-flips in physical memory [2]. Second, we show that a motivated adversary can reverse-engineer DNNs and steal potentially proprietary network architectures by leveraging the small amount of information leaked by a cache side-channel attack, Flush+Reload [1, 3]. As it becomes widespread practice for researchers and engineers to outsource the training and deployment of their deep learning models to cloud-based services (e.g., MLaaS) or to use specialized hardware for acceleration, we believe our research takes the first step toward understanding the impact of these new, imminent threats to DNNs deployed and running in the real world.


News

08.2020: We published a write-up about our USENIX'19 paper; check it out!
03.2020: We released the source code used in our ICLR'20 paper!
12.2019: Our "How to 0wn NAS in Your Spare Time" paper was accepted to ICLR'20!
08.2019: Our "Terminal Brain Damage" paper was presented at USENIX'19!
05.2019: Our paper "Terminal Brain Damage" was accepted to USENIX'19!
10.2018: We released a preprint of our research on reconstructing DNN architectures using a cache side-channel attack on arXiv.


Projects

How to 0wn NAS in Your Spare Time

Can an attacker accurately steal your unique DNN architecture within an hour?

Our research found that the leakage a Flush+Reload cache side-channel attack extracts while a DNN model processes a single sample contains substantial information about the architecture's details. We design a novel algorithm that accurately reconstructs the victim DNN's architecture from this leakage...

Read More
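To give a flavor of the reconstruction step, here is a minimal Python sketch. It assumes the attacker has already mapped Flush+Reload cache-line hits in the deep learning framework's shared library back to the compute kernels they belong to; the trace format, kernel names, and the `reconstruct` helper below are illustrative assumptions, not our released code.

```python
# Illustrative sketch: turning a (hypothetical) kernel-invocation trace,
# recovered via Flush+Reload, into a coarse layer-by-layer summary.

# Ordered kernel invocations, as if observed while the victim DNN
# processes a single input sample.
trace = [
    "conv2d", "batchnorm", "relu",
    "conv2d", "batchnorm", "relu",
    "maxpool",
    "linear", "softmax",
]

# Kernels that begin a new layer; other kernels attach to the current one.
LAYER_STARTS = {"conv2d", "linear", "maxpool"}

def reconstruct(trace):
    """Group consecutive kernel invocations into candidate layers."""
    layers, current = [], []
    for kernel in trace:
        if kernel in LAYER_STARTS and current:
            layers.append(current)   # close the previous layer
            current = []
        current.append(kernel)
    if current:
        layers.append(current)
    return layers

for i, layer in enumerate(reconstruct(trace)):
    print(f"layer {i}: " + " + ".join(layer))
```

The attack described in the paper is considerably more involved, searching for candidate architectures consistent with the observed trace; the sketch only illustrates the idea of grouping leaked kernel invocations into layers.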

A Single Bit-flip Can Cause Terminal Brain Damage to DNNs

A single, specific bit-flip in a DNN's bitwise representation can cause an accuracy drop of over 90%

Our research found that a specific bit-flip in a DNN's bitwise representation can cause an accuracy drop of over 90%, and that, on average, 40-50% of a DNN's parameters can each lead to an accuracy drop of more than 10% when individually subjected to such a single bitwise corruption...

Read More
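Why can a single bit matter so much? DNN parameters are typically stored as IEEE-754 float32 values, and flipping the most significant exponent bit turns a small weight into an astronomically large one that then dominates every activation it touches. Below is a minimal, self-contained Python illustration of this effect; the example weight and bit position are chosen by us for illustration, not taken from the paper's experiments.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip a single bit in the IEEE-754 float32 encoding of `value`."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))  # float -> raw bits
    bits ^= 1 << bit                                         # corrupt one bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

w = 0.5                 # a typical weight magnitude
print(flip_bit(w, 30))  # exponent MSB flipped: 0.5 -> ~1.7e+38
```

A RowHammer-induced flip of this kind in a vulnerable parameter is exactly the sort of corruption that can wipe out a model's accuracy in a single shot.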


Publications

[3] Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, and Tudor Dumitraş. How to 0wn NAS in Your Spare Time. In the 8th International Conference on Learning Representations (ICLR). Virtual. Apr. 26 - May 2, 2020. [PDF]
[2] Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, and Tudor Dumitraş. Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks. In the 28th USENIX Security Symposium (USENIX). Santa Clara, CA. Aug. 14-16, 2019. [PDF]
[1] Sanghyun Hong*, Michael Davinroy*, Yiğitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, and Tudor Dumitraş. Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks. Preprint on arXiv. Oct. 8, 2018. [PDF] (* indicates equal contribution)


People

Sanghyun Hong (PhD Student, University of Maryland, College Park)
Pietro Frigo (PhD Student, Vrije Universiteit Amsterdam)
Yiğitcan Kaya (PhD Student, University of Maryland, College Park)
Michael Davinroy (PhD Student, Northeastern University)

Tudor Dumitraş (Associate Professor, University of Maryland, College Park)
Cristiano Giuffrida (Associate Professor, Vrije Universiteit Amsterdam)
Dana Dachman-Soled (Associate Professor, University of Maryland, College Park)

