Call for Papers

Overview

While traditional computer security relies on well-defined attack models and proofs of security, a science of security for machine learning systems has proven more elusive. This is due to a number of obstacles, including (1) the highly varied angles of attack against ML systems, (2) the lack of a clearly defined attack surface (because the source of the data analyzed by ML systems is not easily traced), and (3) the lack of clear formal definitions of security that are appropriate for ML systems. At the same time, the security of ML systems is of great import due to the recent trend of using ML systems as a line of defense against malicious behavior (e.g., network intrusion, malware, and ransomware), as well as the prevalence of ML systems as parts of sensitive and valuable software systems (e.g., sentiment analyzers for predicting stock prices). This workshop will bring together experts from the computer security and machine learning communities to highlight recent work in this area, clarify the foundations of secure ML, and chart out important directions for future work and cross-community collaborations.

Topics

We invite submissions on any aspect of machine learning that relates to computer security.

Submissions should clearly explain their relationship to security, for instance by describing an attack model and how the submitted work addresses such attacks. While not mandatory, we encourage authors to take special care to facilitate reproducibility of their results (e.g., by open-sourcing their code).

Submissions

The submission site is open!

There are two tracks for submissions: technical papers, and white papers or strategic analysis papers.

We have two rounds of rolling deadlines (Anywhere on Earth).

Contact Chang Liu with any questions.

Criteria

Decisions will be based primarily on the relevance of submissions to the workshop topics. Technical submissions will be further evaluated on novelty and scientific quality; white papers and strategic analysis papers will be further evaluated on clarity of presentation and the significance of the questions they address.

Best Paper Award

We will present a best paper award ($1000)!

Program Committee

Sadia Afroz (UC Berkeley)

Nicholas Carlini (UC Berkeley)

Xinyun Chen (UC Berkeley)

Earlence Fernandes (University of Washington)

Bo Li (UC Berkeley)

Chang Liu (UC Berkeley, co-chair)

Zhuang Liu (UC Berkeley)

Nicolas Papernot (Pennsylvania State University)

Richard Shin (UC Berkeley)

Jacob Steinhardt (Stanford University, co-chair)

Florian Tramer (Stanford University)

Xinyu Xing (Pennsylvania State University)

Weilin Xu (University of Virginia)

Steering Committee

Dan Boneh (Stanford)

Percy Liang (Stanford)

Dawn Song (UC Berkeley)
