FMOℤ 2023


Program

Thursday, 01 June 2023
9.30 – 10.00 Welcome
10.00 – 10.35

Ian Hayes

Reasoning about distribution properties in a concurrent refinement algebra

10.35 – 11.05 Morning Tea
11.05 – 11.40

Rajeev Gore

CEGARBox(Ktn): Improved decision procedures for multimodal tense description logic ALCI

11.40 – 12.15

Zhe Hou

A Parallel and Distributed Quantum SAT Solver Based on Entanglement and Quantum Teleportation

12.15 – 14.15 Lunch
14.15 – 14.50

Naipeng Dong

Verifying Concurrency in Blockchain: From Application Layer to Consensus Layer

14.50 – 15.25

Daniel Britten

Provably Correct Smart Contracts using DeepSEA

15.30 – 16.00 Afternoon Tea
16.00 – 16.35

Horacio Quiles

Deductive Verification of Smart Contracts with Dafny

16.35 – 17.10

Amos Robinson

Pipit: early work verifying reactive control systems in F*

Friday, 02 June 2023
9.00 – 9.35

Annabelle McIver

Leakage Logic for programs

9.35 – 10.10

Natasha Fernandes

Formal analysis of utility in privacy pipelines using quantitative information flow and Kronecker products

10.10 – 10.40 Morning Tea
10.40 – 11.15

Rob Sison

Formalising the Prevention of Microarchitectural Timing Channels by Operating Systems

11.15 – 11.50

Scott Buckley

Will it bend? Plugging a real microkernel (seL4) into our generic time protection model

12.00 – 14.00 Lunch
14.00 – 14.35

Athanasios Evangelou-Oost

Trace models of concurrent valuation algebras

14.35 – 15.10

Weiyou Wang

Enabling Preserving Bisimulation Equivalence

15.10 – 15.40 Afternoon Tea
15.40 – 16.15

Ralf Huuck

Static Analysis: A Billion Dollars Later

16.15 – 16.50

Matthew Roberts

The Teaching of Formal Methods in Australia and New Zealand

17.00 Close of workshop

Abstracts

Ian Hayes

Reasoning about distribution properties in a concurrent refinement algebra

Distribution properties are important for algebraic reasoning in arithmetic and logic. They are equally important for algebraic reasoning about concurrent programs. While parallel composition of a command d distributes over a non-deterministic choice between commands, parallel composition of d only distributes over a sequential composition of commands provided d satisfies suitable restrictions. We are also concerned with distribution of a parallel composition of d into an iteration. Again this only holds for suitable restrictions on d, and the restrictions for distribution into a finite iteration are weaker than those required for distribution into a (possibly) infinite iteration. Our algebra also includes a weak conjunction (specification) operator. Parallel composition and weak conjunction share many properties. Our approach is to introduce an abstract synchronisation operator that can be instantiated to either parallel composition or weak conjunction, and prove the distribution properties, once, for the abstract synchronisation operator. To support the rely/guarantee approach of Jones, his rely and guarantee conditions are encoded as rely and guarantee commands, which may be combined with precondition and postcondition commands using weak conjunction to form a rely/guarantee specification of a concurrent operation. The rely and guarantee commands satisfy the restrictions for distribution over sequential composition and into iterations, both finite and infinite. The distribution properties can also be applied to evolution guarantees and generalised invariants, which are also encoded as commands.
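As a sketch of the two properties contrasted above (notation ours, not necessarily the talk's): writing ∥ for parallel composition, ⊓ for non-deterministic choice and ; for sequential composition,

```latex
% Distribution over non-deterministic choice holds unconditionally:
d \parallel (c_1 \sqcap c_2) \;=\; (d \parallel c_1) \sqcap (d \parallel c_2)

% Distribution over sequential composition holds only when d satisfies
% suitable restrictions (e.g. when d is a rely or guarantee command):
d \parallel (c_1 \mathbin{;} c_2) \;=\; (d \parallel c_1) \mathbin{;} (d \parallel c_2)
```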


Rajeev Gore (with Cormac Kikkert and Robert McArthur)

CEGARBox(Ktn): Improved decision procedures for multimodal tense description logic ALCI

We extend the monomodal CEGAR-tableaux of Gore and Kikkert to normal multimodal (description) logic Kn (ALC) with global assumptions (TBoxes). We show how to handle the five basic extensions of seriality (KD), reflexivity (KT), transitivity (K4), euclideanness (K5), and symmetry (KB), individually, but in three different ways. We extend the CEGAR-tableaux for Kn to multimodal tense (description) logic Ktn (ALCI) with global assumptions (TBoxes) by “compiling in” the residuation conditions between the “future” and “past” modalities. We describe how we refactored our Haskell implementation called CEGARBox into CEGARBox++ using C++. We describe and analyse the effect of multiple optimisations which speed up proof-search and then provide experimental evidence that CEGARBox++ is the current best satisfiability-checker for all of these logics, sometimes by multiple orders of magnitude.


Zhe Hou

A Parallel and Distributed Quantum SAT Solver Based on Entanglement and Quantum Teleportation

Boolean satisfiability (SAT) solving is a fundamental problem in computer science. Finding efficient algorithms for SAT solving has broad implications in many areas of computer science and beyond. Quantum SAT solvers have been proposed in the literature based on Grover’s algorithm. Although existing quantum SAT solvers can consider all possible inputs at once, they evaluate each clause in the formula one by one sequentially, making the time complexity O(m) — linear to the number of clauses m — per Grover iteration. In this work, we develop a parallel quantum SAT solver, which reduces the time complexity in each iteration from linear time O(m) to constant time O(1) by utilising extra entangled qubits. To further improve the scalability of our solution in case of extremely large problems, we develop a distributed version of the proposed parallel SAT solver based on quantum teleportation such that the total qubits required are shared and distributed among a set of quantum computers (nodes), and the quantum SAT solving is accomplished collaboratively by all the nodes. We have proved the correctness of our approaches and demonstrated them in simulations.


Naipeng Dong

Verifying Concurrency in Blockchain: From Application Layer to Consensus Layer

Blockchain technology, also known as the distributed ledger, has undergone several stages of significant evolution, leading to a complex architecture with multiple layers. Due to its decentralized nature, concurrency issues are pervasive throughout the entire blockchain architecture. Our work explores how concurrency issues manifest at the application layer, taking smart contracts as an example, and at the consensus layer, taking a Byzantine Fault Tolerant algorithm as an example. We discuss their relevance to blockchain security, and then apply formal verification methods to address these challenges and improve the overall robustness of the entire blockchain system.


Daniel Britten

Provably Correct Smart Contracts using DeepSEA

It is possible to download a piece of software over the internet and then verify its correctness locally using an appropriate trusted proof system. However, on a blockchain like Ethereum, smart contracts cannot be altered once deployed. This guarantee of immutability makes it possible for end users to interact collectively with a ‘networked’ piece of software, with the same opportunity to verify its correctness. Formal verification of smart contracts on a blockchain therefore offers an unprecedented opportunity for end users to collectively interact with a deployed instance of software that they can verify while not relying on a central authority. The DeepSEA system, along with the Coq Proof Assistant, could serve as a proof system making this possible. This talk will discuss research undertaken related to extending the DeepSEA system.


Horacio Mijail Anton Quiles

Deductive Verification of Smart Contracts with Dafny

We present a methodology to develop verified smart contracts. We write smart contracts, their specifications and implementations in the verification-friendly language Dafny. In our methodology the ability to write specifications and implementations, and to reason about correctness, is a primary concern. We propose a simple, concise yet powerful solution to reasoning about contracts that have external calls. This includes arbitrary re-entrancy, which is a major source of bugs and attacks in smart contracts. Although we do not yet have a compiler from Dafny to EVM bytecode, the results we obtain on the Dafny code can reasonably be assumed to hold on Solidity code: the translation of the Dafny code to Solidity is straightforward. As a result, our approach can readily be used to develop and deploy safer contracts.


Amos Robinson

Pipit: early work verifying reactive control systems in F*

Safety-critical control systems, such as the anti-lock braking systems in most cars today, need to be correct and execute in real-time. We could write these systems directly in an imperative language such as C or Rust and verify them, but it can be difficult to state high-level specifications about such low-level implementations. Another approach, favoured by large parts of the aerospace industry, is to implement the controllers in a high-level language such as Lustre or SCADE, and verify that the implementations satisfy the high-level specification using a model-checker. These model-checkers can prove many interesting properties automatically, but do not provide many options when the automated proof techniques fail. In this talk I will describe my preliminary work on Pipit, an embedded domain-specific language for implementing and verifying such controllers in F*. Pipit aims to provide a high-level language similar to Lustre, while reusing F*’s proof automation and manual proofs for verifying controllers, and using LowStar’s C-code generation for real-time execution. Pipit translates its expression language to a transition system for k-inductive proofs, which is mostly verified; verifying the translation to imperative code is future work.

Amos Robinson did his PhD in streaming programming languages and stream fusion at UNSW in Sydney, Australia. For the past three years he was working at a self-driving car company where he verified these sorts of control systems. He has just started a postdoc at the Australian National University, where he will be working with Alex Potanin.


Annabelle McIver

Leakage Logic for programs

An information-flow analysis views a program as a system that processes “high security” secret information, e.g. passwords; but sometimes programs’ treatment of secrets can be correlated to observed behaviour, or other data of “low security”. Those programs then “leak” information when an adversary can infer something about the “high security” secret by using their knowledge of the program code, and observing real-time behaviours such as control flow, timings, or low-security correlations. In such programs verification usually entails quantifying the amount of information that the program leaks. Interestingly, some inputs are more vulnerable than others, and an important goal of such formal analysis is to identify which inputs are most vulnerable.

In this talk I shall explain how to specify leakage bounds on programs that can be verified directly using a logic of information leakages. The leakage logic is based on the g-leakage framework for analysing quantitative security properties; importantly, verification steps are similar to the backwards reasoning in standard functional verification.
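For context, the g-leakage framework quantifies what an adversary with gain function g can learn. In the standard QIF formulation (our summary, not necessarily the talk's notation), the prior g-vulnerability of a secret with distribution π, and the additive leakage of a channel C, are:

```latex
V_g(\pi) \;=\; \max_{w \in \mathcal{W}} \sum_{x \in \mathcal{X}} \pi(x)\, g(w, x)

% Additive g-leakage: posterior vulnerability after observing C, minus prior:
\mathcal{L}^{+}_{g}(\pi, C) \;=\; V_g[\pi \triangleright C] \;-\; V_g(\pi),
\qquad
V_g[\pi \triangleright C] \;=\; \sum_{y} \max_{w \in \mathcal{W}} \sum_{x \in \mathcal{X}} \pi(x)\, C(x, y)\, g(w, x)
```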

A number of examples will be presented to illustrate the mathematical properties of leakage logic.


Natasha Fernandes

Formal analysis of utility in privacy pipelines using quantitative information flow and Kronecker products

Systems which process sensitive information scheduled for public release typically require privacy protections to protect the sensitive attributes in the data, while also allowing useful information to be extracted from the release. This balance is usually achieved empirically, following the assumption that increasing the level of privacy in the system naturally corresponds to a decrease in the resulting utility. In this talk I will present a formal analysis showing that the above assumption does not always hold: that in fact one can achieve more privacy and more utility at the same time. For privacy we use the well-known definition of differential privacy, and for utility we use a standard Bayesian consumer model. Our formal analysis of the above phenomenon uses existing concepts from quantitative information flow, and introduces Kronecker products, which provide an algebra for composability under the “local” model for differential privacy. This approach offers the prospect of formal-analysis tools for utility that complement extant formal analyses of privacy.
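As background (this is the standard definition of the Kronecker product, not a construction specific to the talk): the Kronecker product of two channel matrices A and B yields a channel on the product of their input and output spaces, modelling independent “local” mechanisms applied to separate attributes:

```latex
(A \otimes B)_{(i_1, i_2),\,(j_1, j_2)} \;=\; A_{i_1 j_1}\, B_{i_2 j_2}
```

If the rows of A and B each sum to 1 (i.e. both are channels), then so do the rows of A ⊗ B, so the product is again a channel.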


Rob Sison

Formalising the Prevention of Microarchitectural Timing Channels by Operating Systems

Microarchitectural timing channels are a well-known mechanism for information leakage. Time protection has recently been demonstrated as an operating-system mechanism able to prevent them. However, established theories of information-flow security are insufficient for verifying time protection, which must distinguish between (legal) overt and (illegal) covert flows. We provide a machine-checked formalisation of time protection via a dynamic, observer-relative, intransitive nonleakage property over a careful model of the state elements that cause timing channels. We instantiate and prove our property over a generic model of OS interaction with its users, demonstrating for the first time the feasibility of proving time protection for OS implementations. This work was first presented in March at FM 2023.


Scott Buckley

Will it bend? Plugging a real microkernel (seL4) into our generic time protection model

In our earlier work, we built a generic kernel model that enforces time protection, distilling formal requirements and showing that a microkernel that meets those requirements will correctly enforce time protection. Recently our task has been to prove that seL4’s time protection mechanisms satisfy these requirements. seL4’s existing specification and proofs are over a million lines of Isabelle, so any change to the abstract specification creates significant work repairing its proofs. I will discuss how we accomplished the task of bending seL4 into the right shape to be able to prove time protection properties about it, while minimising breakages to its existing model and proofs.


Athanasios Evangelou-Oost

Trace models of concurrent valuation algebras

We introduce concurrent valuation algebras (CVAs), algebraic structures that build upon valuation algebras and offer significant potential for the specification and modelling of concurrent and distributed systems. Our study begins with an introduction to a variant of valuation algebras based on a presheaf formulation devised by Abramsky and Carù, in conjunction with the Grothendieck construction. We proceed to define a CVA, which comprises two valuation algebra structures on a space, whose combine operators adhere to a weak exchange law. This structure is inspired by the structures of a Concurrent Kleene Algebra and of a 2-monoidal/duoidal category. We discuss morphisms between valuation algebras and CVAs, as well as additional structures and results relating to extension operations and valuation algebras induced by tuple systems. Moreover, we indicate how the classical reasoning methodologies of Hoare and rely-guarantee logics for concurrent programs can be expressed within the algebraic structure of a CVA. Finally, we outline several trace models for CVAs with practical relevance, including action, state, and relative state trace models.

Our findings pave the way for future research on the applications of CVAs to the specification and modelling of concurrent and distributed systems, and the import of the highly effective and practical local computation framework of valuation algebras to this realm.
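The weak exchange law mentioned above is, in the Concurrent Kleene Algebra setting that inspires it, the interchange inequality between the two combine operators (our rendering; the paper's precise formulation and ordering may differ):

```latex
(a \parallel b) \mathbin{;} (c \parallel d) \;\le\; (a \mathbin{;} c) \parallel (b \mathbin{;} d)
```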


Weiyou Wang

On the way towards justness

Most fairness assumptions used for verifying liveness properties are criticised for being too strong or unrealistic. On the other hand, justness, arguably the minimal fairness assumption required for the verification of liveness properties, is not preserved by classical semantic equivalences, such as strong bisimilarity.

In this talk, we will discuss (1) labelled transition systems upgraded with a successor relation to capture the notion of justness, (2) enabling preserving bisimulation equivalence (ep-bisimilarity), a finer alternative to strong bisimilarity that preserves justness, and (3) our recent work enriching the well-known De Simone format and showing that ep-bisimilarity is a congruence for the operators and recursion of all process algebras in this format.


Ralf Huuck

Static Analysis: A Billion Dollars Later

Static analysis has come a long way from compiler construction and optimisation to forming the core of application security tooling for most software companies on the planet. Similarly, the underlying technologies have been changing from early pattern matching and data flow techniques to the inclusion of formal methods, constraint solving and AI. However, not all that is feasible in academia gets adopted by industry. The reasons are manifold and sometimes surprising. In this talk we give an overview of different static analysis techniques, their uptake in industry and some of the barriers to entry. Moreover, we highlight a few drivers in industry that can potentially inspire new research directions.

Ralf is a recovering academic and entrepreneur with a background in formal methods and software security. He received a PhD in Computer Science from the University of Kiel, Germany, and was a long-time research leader at NICTA/Data61. He led the static analysis security spinout Goanna Software, which was later acquired by Synopsys and forms part of their Coverity SAST product. Recently Ralf has been leading Logilica, a boutique consulting and software productivity company, counting many major players as its customers.


Matthew Roberts

The Teaching of Formal Methods in Australia and New Zealand

We present the results of research examining which tertiary institutions in Australia and New Zealand offer formal methods courses. We categorise and compare the offerings according to ACM curriculum categories. The aim of this research is to inform institutions planning to offer or redevelop formal methods courses, and to ignite a discussion amongst the formal methods community about the best way to teach formal methods.