Call for Papers
While traditional computer security relies on well-defined attack models and proofs of security, a science of security for machine learning systems has proven more elusive. This is due to a number of obstacles, including (1) the highly varied angles of attack against ML systems, (2) the lack of a clearly defined attack surface (because the source of the data analyzed by ML systems is not easily traced), and (3) the lack of clear formal definitions of security that are appropriate for ML systems. At the same time, security of ML systems is of great import due to the recent trend of using ML systems as a line of defense against malicious behavior (e.g., network intrusion, malware, and ransomware), as well as the prevalence of ML systems as parts of sensitive and valuable software systems (e.g., sentiment analyzers for predicting stock prices). This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work in this area, as well as to clarify the foundations of secure ML and chart out important directions for future work and cross-community collaboration.
We invite submissions on any aspect of machine learning that relates to computer security. This includes, but is not limited to:
- Case studies of machine learning used in cyber security, such as detection of spam, sybils, or malicious URLs
- Training time attacks (e.g., data poisoning)
- Adversarial examples at test time
- Model stealing (e.g., for reconnaissance of a system before mounting an attack)
- Theoretical foundations of adversarially robust learning
- Formal verification of machine learning systems
- Identifying bugs in machine learning systems, especially if they present security vulnerabilities
- Strategic analysis of present or future security / misuse risks and how to prioritize them
- White papers proposing or building on formal threat models and definitions of security
Submissions should have a clear explanation of their relationship to security, for instance by describing an attack model and the ways in which the submitted work addresses such attacks. While not mandatory, submissions are encouraged to take special care in facilitating reproducibility of research results (e.g., by open-sourcing their code).
The submission site is open!
There are two tracks for submissions:
- Research Track: We accept submissions presenting novel results. Submissions should be formatted using a template from any major conference (NIPS, ICML, etc.). A maximum of 4 pages is allowed for the main body of this type of submission; in addition, an unlimited number of pages is allowed for references and clearly marked appendices.
- Encore Track: We also accept papers that have already been published. There is no page limit for this type of submission. On the submission site, please use the Keyword field to indicate the venue where the paper was originally published.
We have two rounds of rolling deadlines (Anywhere On Earth):
- Round 1: deadline for submission October 22nd, 2017; notification in November 2017
- Round 2: deadline for submission November 6th, 2017 (extended from November 3rd); notification November 21st, 2017 (extended from November 17th)
Contact Chang Liu for any questions.
Our decisions will be based mainly on the relevance of submissions to the topics of the workshop. For technical submissions, we will further evaluate novelty and scientific quality. White papers and strategic analyses will be further evaluated on the clarity of their presentation and the significance of the questions they address.
Best Paper Award
We will present a best paper award ($1000)!
Organizers
Sadia Afroz (UC Berkeley)
Nicholas Carlini (UC Berkeley)
Xinyun Chen (UC Berkeley)
Earlence Fernandes (University of Washington)
Bo Li (UC Berkeley)
Chang Liu (UC Berkeley, co-chair)
Zhuang Liu (UC Berkeley)
Nicolas Papernot (Pennsylvania State University)
Richard Shin (UC Berkeley)
Jacob Steinhardt (Stanford University, co-chair)
Florian Tramer (Stanford University)
Xinyu Xing (Pennsylvania State University)
Weilin Xu (University of Virginia)
Advisors
Dan Boneh (Stanford)
Percy Liang (Stanford)
Dawn Song (UC Berkeley)