Overview
While traditional computer security relies on well-defined attack models and proofs of security, a science of security for machine learning systems has proven more elusive. This is due to a number of obstacles, including (1) the highly varied angles of attack against ML systems, (2) the lack of a clearly defined attack surface (because the source of the data analyzed by ML systems is not easily traced), and (3) the lack of clear, formal definitions of security that are appropriate for ML systems. At the same time, the security of ML systems is of great importance due to the recent trend of using ML systems as a line of defense against malicious behavior (e.g., network intrusion, malware, and ransomware), as well as the prevalence of ML systems as components of sensitive and valuable software systems (e.g., sentiment analyzers for predicting stock prices). This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work in this area, as well as to clarify the foundations of secure ML and chart out important directions for future work and cross-community collaborations.
Room
Shoreline room in Hyatt Regency Long Beach Hotel
Invited Speakers
Alan Qi, Allan Dafoe, Ian Goodfellow, Donald Brinkman, Kamalika Chaudhuri, Clark Barrett, and Ludwig Schmidt (see the schedule below for talk titles).
Contributed Talks
The Call for Papers is available.
Schedule
9:00 - Opening Remarks
Session 1: Secure Machine Learning in Practice
Session Chair: Chang Liu
9:15 - Invited Talk #1: AI Applications in Security at Ant Financial by Alan Qi
9:45 - Contributed Talk #1: A Word Graph Approach for Dictionary Detection and Extraction in DGA Domain Names by Mayana Pereira, Shaun Coleman, Martine De Cock, Bin Yu and Anderson Nascimento [Slides]
10:00 - Contributed Talk #2: Practical Machine Learning for Cloud Intrusion Detection by Ram Shankar Siva Kumar, Andrew Wicker and Matt Swann [Slides]
10:15 - Poster Spotlights #1
- Verifying Properties of Binarized Deep Neural Networks by Nina Narodytska
- Cascade Adversarial Machine Learning Regularized with a Unified Embedding by Taesik Na
- ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models by Huan Zhang
- DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning by Min Du
- Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks by Yanjun Qi
- A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations by Dimitris Tsipras
- A Neural-Symbolic Approach to Design of CAPTCHA by Qiuyuan Huang
10:30 - Coffee Break
Session 2: Machine Learning, Cybersecurity, and Society
Session Chair: Jacob Steinhardt
11:00 - Invited Talk #2: International Security and the AI Revolution by Allan Dafoe [Slides]
11:30 - Contributed Talk #3 (Best Attack Paper): BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain by Tianyu Gu, Brendan Dolan-Gavitt and Siddharth Garg
11:45 - Poster Spotlights #2
- Interpretation of Neural Networks is Fragile by Amirata Ghorbani
- Thermometer Encoding: One Hot way to resist Adversarial Examples by Aurko Roy
- Adversarial Patch by Justin Gilmer
- Distributionally Robust Deep Learning as a Generalization of Adversarial Training by Matthew Staib
- Certifiable Distributional Robustness with Principled Adversarial Training by Aman Sinha
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation by Matthias Hein
- Query-limited Black-box Attacks to Classifiers by Fnu Suya
- JPEG-resistant Adversarial Images by Richard Shin
12:00 - Lunch
Session 3: Security Vulnerabilities of Machine Learning Systems
Session Chair: Nicolas Papernot
1:30 - Invited Talk #3: Defending Against Adversarial Examples by Ian Goodfellow [Slides]
2:00 - Contributed Talk #4 (Best Defense Paper): Provable defenses against adversarial examples via the convex outer adversarial polytope by J. Zico Kolter and Eric Wong
2:15 - Invited Talk #4: Games People Play (With Bots) by Donald Brinkman
2:45 - Contributed Demo: Synthesizing Robust Adversarial Examples by Anish Athalye, Logan Engstrom, Andrew Ilyas and Kevin Kwok [Slides]
3:00 - Poster Session/Break
Session 4: Formal Definitions and Formal Verification
Session Chair: Bo Li
3:45 - Invited Talk #5: Privacy-preserving Mechanisms for Correlated Data by Kamalika Chaudhuri [Slides]
4:10 - Invited Talk #6: Towards Verification of Deep Neural Networks by Clark Barrett [Slides]
4:40 - Invited Talk #7: Adversarially Robust Optimization and Generalization by Ludwig Schmidt [Slides]
List of Contributed Posters
- DeepXplore: Automated Whitebox Testing of Deep Learning Systems by Kexin Pei, Yinzhi Cao, Junfeng Yang and Suman Jana
- Verifying Properties of Binarized Deep Neural Networks by Nina Narodytska, Shiva Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv and Toby Walsh
- Cascade Adversarial Machine Learning Regularized with a Unified Embedding by Taesik Na, Jong Hwan Ko and Saibal Mukhopadhyay
- Interpretation of Neural Networks is Fragile by Amirata Ghorbani, Abubakar Abid and James Zou
- PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples by Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon and Nate Kushman
- Thermometer Encoding: One Hot way to resist Adversarial Examples by Jacob Buckman, Aurko Roy, Colin Raffel and Ian Goodfellow
- Adversarial Patch by Tom B Brown, Dandelion Mané, Aurko Roy, Martin Abadi and Justin Gilmer
- Distributionally Robust Deep Learning as a Generalization of Adversarial Training by Matthew Staib and Stefanie Jegelka
- Certifiable Distributional Robustness with Principled Adversarial Training by Aman Sinha, Hongseok Namkoong and John Duchi
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation by Matthias Hein and Maksym Andriushchenko
- Query-limited Black-box Attacks to Classifiers by Fnu Suya, Yuan Tian, David Evans and Paolo Papotti
- ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models by Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi and Cho-Jui Hsieh
- DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning by Min Du, Feifei Li, Guineng Zheng and Vivek Srikumar
- Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks by Weilin Xu, David Evans and Yanjun Qi
- JPEG-resistant Adversarial Images by Richard Shin and Dawn Song
- A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations by Logan Engstrom, Ludwig Schmidt, Dimitris Tsipras and Aleksander Madry
- A Neural-Symbolic Approach to Design of CAPTCHA by Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng and Dapeng Wu