12th Int. Workshop on
Applied Verification for Continuous and Hybrid Systems
Wednesday, June 04, 2025, online
The workshop on applied verification for continuous and hybrid systems (ARCH) brings together researchers and practitioners to establish a curated set of benchmarks and test them in a friendly competition.
Call for Submissions
Verification of continuous and hybrid systems is increasingly important due to new cyber-physical systems that are safety- or operation-critical. This workshop addresses verification techniques for continuous and hybrid systems, with a special focus on the transfer from theory to practice. Topics include, but are not limited to:
- Proposals for new benchmark problems (not necessarily yet solvable)
- Tool presentations
- Tool executions and evaluations based on ARCH benchmarks
- Experience reports including open issues for industrial success
- Reports on results of our friendly competition (separate call)
Researchers are welcome to submit examples, tools, and benchmarks that have already appeared in brief form but whose details were omitted. The online benchmark repository allows researchers to include modeling details, parameters, simulation results, etc. Submissions are encouraged, but not required, to include executable data (models, configuration files, code, etc.). It is not required to show that the benchmark has a solution; it suffices that the problem is described in enough detail that somebody else can try to solve it.
Prize
The tool with the most promising results in the ARCH competition receives a prize of 500 Euros. The winner is determined by an audience vote.
General Submission Guidelines
Submissions consist of papers (ideally 3-8 pages) and optional files (e.g., models or traces) submitted through the ARCH'25 EasyChair website. ARCH'25 will provide proceedings in the EasyChair EPiC series, indexed by DBLP. Detailed submission guidelines can be found here: submission instructions. Submissions receive at least three anonymous reviews, including one from industry and one from academia.
Benchmark papers: A zip archive with additional data (description details, model files, sample traces, code, known results, etc.) should be submitted together with the extended abstract. Benchmarks can be academic or industrial, ranging from small examples to extensive case studies.
Evaluation Criteria for Benchmarks
While the review criteria for tool presentations, benchmark results, and experience reports are more general, benchmark proposals should address the following criteria:
- Relevance: How typical is the benchmark for its application domain or academic topic? How important (scientifically or practically) are the phenomena it exhibits? Does the benchmark correspond to an existing real-world system?
- Clarity: How easy is it to create a working model from the description? How clear is the specification of the properties to be verified?
- Verification advantages: Can verification show properties of the benchmark that are difficult to obtain using other approaches (e.g., stochastic simulation)?
Important Dates
| Submission Deadline | April 02, 2025 |
| Notification of Acceptance | April 30, 2025 |
| Final Version | May 28, 2025 |
| Workshop | June 04, 2025 |
Organizers
| Program Chairs: | Goran Frehse, ENSTA-ParisTech, France; Matthias Althoff, Technical University of Munich, Germany |
| Evaluation Chair: | Taylor T. Johnson, Vanderbilt University, USA |
Program Committee (tentative)
| Academia | Industry |
| --- | --- |
| Stanley Bak (Air Force Research Lab) | James Kapinski (Amazon) |
| Xin Chen (University of Dayton) | Aditya Zutshi (Galois Inc.) |
| Christian Schilling (Aalborg University) | Jens Oehlerking (Bosch) |
| Joseph K. Scott (Georgia Institute of Technology) | Alessandro Pinto (United Technologies) |