Overview

While traditional computer security relies on well-defined attack models and proofs of security, a science of security for machine learning systems has proven more elusive. This is due to a number of obstacles, including (1) the highly varied angles of attack against ML systems, (2) the lack of a clearly defined attack surface (because the source of the data analyzed by ML systems is not easily traced), and (3) the lack of clear formal definitions of security that are appropriate for ML systems. At the same time, the security of ML systems is of great import due to the recent trend of using ML systems as a line of defense against malicious behavior (e.g., network intrusion, malware, and ransomware), as well as the prevalence of ML systems as parts of sensitive and valuable software systems (e.g., sentiment analyzers for predicting stock prices). This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work in this area, as well as to clarify the foundations of secure ML and chart out important directions for future work and cross-community collaborations.

Room

Shoreline room in Hyatt Regency Long Beach Hotel

Invited Speakers

Contributing your talks

The Call for Papers is available.

Schedule

9:00 - Opening Remarks

Session 1: Secure Machine Learning in Practice

Session Chair: Chang Liu

9:15 - Invited Talk #1: AI Applications in Security at Ant Financial by Alan Qi

9:45 - Contributed Talk #1: A Word Graph Approach for Dictionary Detection and Extraction in DGA Domain Names by Mayana Pereira, Shaun Coleman, Martine De Cock, Bin Yu and Anderson Nascimento [Slides]

10:00 - Contributed Talk #2: Practical Machine Learning for Cloud Intrusion Detection by Ram Shankar Siva Kumar, Andrew Wicker and Matt Swann [Slides]

10:15 - Poster Spotlights #1

10:30 - Coffee Break

Session 2: Machine Learning, Cybersecurity, and Society

Session Chair: Jacob Steinhardt

11:00 - Invited Talk #2: International Security and the AI Revolution by Allan Dafoe [Slides]

11:30 - Contributed Talk #3 (Best Attack Paper): BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain by Tianyu Gu, Brendan Dolan-Gavitt and Siddharth Garg

11:45 - Poster Spotlights #2

12:00 - Lunch

Session 3: Security Vulnerabilities of Machine Learning Systems

Session Chair: Nicolas Papernot

1:30 - Invited Talk #3: Defending Against Adversarial Examples by Ian Goodfellow [Slides]

2:00 - Contributed Talk #4 (Best Defense Paper): Provable defenses against adversarial examples via the convex outer adversarial polytope by J. Zico Kolter and Eric Wong

2:15 - Invited Talk #4: Games People Play (With Bots) by Donald Brinkman

2:45 - Contributed Demo: Synthesizing Robust Adversarial Examples by Anish Athalye, Logan Engstrom, Andrew Ilyas and Kevin Kwok [Slides]

3:00 - Poster Session/Break

Session 4: Formal Definitions and Formal Verification

Session Chair: Bo Li

3:45 - Invited Talk #5: Privacy-preserving Mechanisms for Correlated Data by Kamalika Chaudhuri [Slides]

4:10 - Invited Talk #6: Towards Verification of Deep Neural Networks by Clark Barrett [Slides]

4:40 - Invited Talk #7: Adversarially Robust Optimization and Generalization by Ludwig Schmidt [Slides]

List of contributed posters

Organizers