
FoMLAS 2022
July 31 - August 1, 2022, Haifa, Israel

5th Workshop on Formal Methods for ML-Enabled Autonomous Systems

Affiliated with FLoC 2022

Important Dates

  • Paper submission: May 11

  • Author notification: June 17

  • Workshop: July 31 - August 1

FoMLAS 2022 is over. We look forward to seeing you in 2023!

For more information, please visit the FoMLAS 2023 website.

Scope and Topics of Interest

In recent years, deep learning has emerged as a highly effective way to create real-world software, and it is revolutionizing the way complex systems are designed across the board. In particular, this new approach is being applied to autonomous systems (e.g., autonomous cars and aircraft), achieving exciting results that are beyond the reach of manually created software. However, these significant changes have created new challenges regarding explainability, predictability, and correctness: Can I explain why my drone turned right at that angle? Can I predict what it will do next? Can I know for sure that my autonomous car will never accelerate towards a pedestrian? These questions have far-reaching consequences for the safety, accountability, and public adoption of ML-enabled autonomous systems. One promising avenue for tackling these difficulties is the development of formal methods capable of analyzing and verifying these new kinds of systems.

The goal of this workshop is to facilitate discussion of how formal methods can be used to increase the predictability, explainability, and accountability of ML-enabled autonomous systems. The workshop welcomes results ranging from concept formulation (connecting these concepts with existing research topics in verification, logic, and game theory), through algorithms, methods, and tools for analyzing ML-enabled systems, to concrete case studies and examples.

This year, the workshop will host the 3rd International Verification of Neural Networks Competition (VNN-COMP'22).

For additional info, see https://sites.google.com/view/vnn2022.


The topics covered by the workshop include, but are not limited to, the following:
 

  • Formal specifications for systems with ML components

  • SAT-based and SMT-based methods for analyzing systems with deep neural network components

  • Mixed-integer Linear Programming and optimization-based methods for the verification of systems with deep neural network components

  • Testing approaches for ML components

  • Statistical approaches to the verification of systems with ML components

  • Approaches for enhancing the explainability of ML-based systems

  • Techniques for analyzing hybrid systems with ML components

  • Verification of quantized and low-precision neural networks

  • Abstract interpretation techniques for neural networks

Paper Submission and Proceedings


Three categories of submissions are invited:
 

  • Original papers: contain original research and sufficient detail to assess the merits and relevance of the submission. For papers reporting experimental results, authors are strongly encouraged to make their data available.
     

  • Presentation-only papers: describe work recently published or submitted. We see this as a way to provide additional access to important developments that the workshop attendees may be unaware of.
     

  • Extended abstracts: given the informal style of the workshop, we strongly encourage the submission of preliminary reports of work in progress. These reports may range in length from very short to full papers, and will be judged based on the expected level of interest for the community.


New this year: the workshop will have official proceedings, published by Springer. Authors of papers in the "original papers" category may choose to have their papers included in these proceedings. Authors of "extended abstract" papers may also choose this option, provided that these papers are at least 6 pages long. All papers not published in the official proceedings will be made available on the workshop's website, as part of informal proceedings, on the day of the workshop.

Papers in all categories will be peer-reviewed. We recommend that papers be submitted as a single-column, standard-conforming PDF, using the LNCS style, with a suggested page limit of 10 pages, not counting references, although other formats will be accepted. The review process will be single-blind.


To submit a paper, use EasyChair.

Invited Talk

Speaker: Prof. Suman Jana, Columbia University

Title: Efficient Neural Network Verification using Branch and Bound

Abstract:
In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier independent subproblems by splitting input or hidden neurons; and (2) for each split subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds for the outputs of the target neural network. One of the key challenges for existing BaB verifiers is computing tight relaxations of nonlinear activation functions (e.g., ReLU). Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach and jointly optimize the corresponding Lagrangian multipliers for each ReLU with gradient ascent. Such an approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimations than existing bound propagation methods but also can fully leverage GPUs with massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
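The "fast but incomplete bound propagation" step described in the abstract can be illustrated with a minimal sketch of plain interval bound propagation (IBP) through a small ReLU network. This is much looser than the optimized relaxations of α-CROWN and β-CROWN; the network shape and weights below are invented purely for illustration.

```python
# A minimal sketch of interval bound propagation (IBP) for a tiny
# ReLU network. Given a box of possible inputs, it computes sound
# (but loose) lower/upper bounds on every output. Weights are
# illustrative only, not from any real verifier.
import numpy as np

def interval_linear(lower, upper, W, b):
    """Propagate an input box [lower, upper] through x -> Wx + b."""
    W_pos = np.maximum(W, 0.0)   # positive part of the weight matrix
    W_neg = np.minimum(W, 0.0)   # negative part of the weight matrix
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

def ibp_bounds(layers, lower, upper):
    """Propagate bounds through alternating linear/ReLU layers."""
    for i, (W, b) in enumerate(layers):
        lower, upper = interval_linear(lower, upper, W, b)
        if i < len(layers) - 1:          # no ReLU after the last layer
            lower, upper = interval_relu(lower, upper)
    return lower, upper

# Tiny 2-2-1 network with hand-picked weights (illustrative only).
layers = [
    (np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.0])),
    (np.array([[1.0, 1.0]]), np.array([0.0])),
]
lo, hi = ibp_bounds(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
# For the input box [-1,1]^2, IBP certifies the output lies in [0, 3].
```

In a BaB verifier, bounds like these are computed for each split subproblem: if the bound already proves the property on a subproblem, that branch is closed; otherwise the subproblem is split further and the (now tighter) bounds are recomputed.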


Bio:
Suman Jana is an associate professor in the Department of Computer Science and the Data Science Institute at Columbia University. His primary research interest is at the intersection of computer security and machine learning. His research has received six best paper awards, a CACM research highlight, a Google faculty fellowship, a JPMorgan Chase Faculty Research Award, an NSF CAREER award, and an ARO young investigator award.
