ECLaPS 2024
ECLaPS 2024 took place on Saturday, December 7, 2024 at the Université du Québec à Montréal (UQAM). There were ten presentations and nineteen attendees from LACIM, McGill University, the Université de Montréal, UQAM, the University of Massachusetts Lowell, and the University of Toronto. We thank UQAM’s computer science department for sponsoring the event and Jean Privat (UQAM) for taking pictures of the afternoon sessions.
Schedule
All regular talks were twenty minutes long, followed by five minutes for questions. Several breaks were scheduled to encourage discussion among attendees.
| Talk | Speaker |
| --- | --- |
| Welcome | |
| Opening Remarks | |
| **Session 0: Keynote Talk** | |
| Call-By-Unboxed-Value: A Logical Foundation for Representation Irrelevance | Paul Downen (University of Massachusetts Lowell) |
| Break | |
| **Session 1: Semantic and Logical Foundations** | |
| Normalization by evaluation for contextual modal type theory | Antoine Gaulin (McGill University) |
| Reconciling Impredicative Axiom and Universe | Stefan Monnier (Université de Montréal) |
| Lunch | |
| **Session 2: Mechanization** | |
| Explicit Contexts for Substructural Languages | Daniel Zackon (McGill University) |
| Structural Proto-Quipper: Mechanization of a Linear Quantum Programming Language in a Structural Setting | Max Gross (McGill University) |
| Autoformalizing Euclidean Geometry | Xujie Si (University of Toronto) |
| Break | |
| **Session 3: Concurrency Theory Segues Into Metaprogramming** | |
| Message-Observing Sessions | Ryan Kavanagh (Université du Québec à Montréal) |
| Reflective Metaprogramming for Session-Typed Processes | Chuta Sano (McGill University) |
| Break | |
| **Session 4: Metaprogramming** | |
| Polymorphic Metaprogramming with Resource Guarantees — An Adjoint Analysis of Metaprogramming | Junyoung "Clare" Jang (McGill University) |
| Safe and easy compile-time generative programming | Ningning Xie (University of Toronto) |
| Closing Remarks | |
Abstracts
Call-By-Unboxed-Value: A Logical Foundation for Representation Irrelevance
Call-By-Push-Value has famously subsumed both call-by-name and call-by-value by decomposing programs along the axis of “values” versus “computations.” This talk will introduce Call-By-Unboxed-Value, which further decomposes programs along an orthogonal axis separating “atomic” versus “complex.” As the name suggests, these two dimensions make it possible to express the representations of values as boxed or unboxed, so that functions pass unboxed values as inputs and outputs. More importantly, Call-By-Unboxed-Value allows for an unrestricted mixture of polymorphism and unboxed types, giving a foundation for studying compilation techniques for polymorphism based on representation irrelevance.
This talk will begin with an introductory review of two important foundations: (1) Call-By-Push-Value from the study of program semantics and (2) focusing and polarity from proof theory. While the two are quite similar in many aspects, there is an unusual way in which they differ. We will see how homing in on this small difference gives a proofs-as-programs language for describing representations of data structures and parameter passing as used by systems-level programs. Importantly, the ideas from proof theory will give us high-level criteria for reasoning about when the representation of data is relevant to a computation—letting us express true run-time polymorphism—without needing to know about lower-level concerns of code generation and run-time systems. We will also see how the logical foundations let us explore new features to improve expressiveness and abstraction in low-level programs, including generalized codata types, the compile-time fusion of several types into a single representation, and the encoding of efficient Haskell-style type classes.
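As background for the talk's starting point, the value/computation split of Call-By-Push-Value can be caricatured in plain OCaml by representing computations as thunks. This is only a rough sketch of the idea, not CBPV proper, and all names below are illustrative rather than taken from the talk.

```ocaml
(* A computation producing an 'a, represented as a suspended thunk. *)
type 'a comp = unit -> 'a

(* force: run a suspended computation. *)
let force (c : 'a comp) : 'a = c ()

(* produce: lift a value into a trivial computation. *)
let produce (v : 'a) : 'a comp = fun () -> v

(* bind: sequence a computation, feeding its result to the next. *)
let bind (c : 'a comp) (f : 'a -> 'b comp) : 'b comp =
  fun () -> force (f (force c))

(* Example: a computation argument can be sequenced twice,
   call-by-name style, rerunning the thunk each time it is forced. *)
let twice (c : int comp) : int comp =
  bind c (fun x -> bind c (fun y -> produce (x + y)))
```

For instance, `force (twice (produce 3))` evaluates to `6`; the point of the encoding is that `comp` values are inert data until explicitly forced.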
Normalization by evaluation for contextual modal type theory
Normalization by evaluation (NbE) is an approach to proving normalization from which we naturally obtain an evaluation procedure for the underlying type theory, making it a prime choice for certified implementation. We design an NbE procedure for contextual modal type theory (CMTT), a type theory aimed at formalizing dependently-typed meta-programming over LF objects, where first-class contexts facilitate reasoning about binders. Our main contributions consist of extending the known NbE methods to first-class contexts and to CMTT’s contextual box modality. In this talk, we discuss our modifications to the standard presentation of CMTT, then present our evaluation procedure, and briefly sketch the proof of normalization.
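To make the NbE recipe concrete, here is the textbook version for the untyped lambda calculus in OCaml: evaluate syntax into a semantic domain where binders become real functions, then read ("quote") values back into normal forms. The talk's procedure for CMTT is far richer (dependent types, contextual modality, first-class contexts); this sketch only conveys the general shape.

```ocaml
type term =
  | Var of int            (* de Bruijn index *)
  | Lam of term
  | App of term * term

(* Semantic domain: lambda bodies become OCaml closures; stuck
   applications of variables are kept as neutral spines. *)
type value =
  | VLam of (value -> value)
  | VNeutral of neutral
and neutral =
  | NVar of int           (* de Bruijn level *)
  | NApp of neutral * value

let rec eval (env : value list) (t : term) : value =
  match t with
  | Var i -> List.nth env i
  | Lam b -> VLam (fun v -> eval (v :: env) b)
  | App (f, a) ->
    (match eval env f with
     | VLam g -> g (eval env a)
     | VNeutral n -> VNeutral (NApp (n, eval env a)))

(* Read a value back into a normal-form term, converting de Bruijn
   levels back into indices relative to the current depth. *)
let rec quote (depth : int) (v : value) : term =
  match v with
  | VLam f -> Lam (quote (depth + 1) (f (VNeutral (NVar depth))))
  | VNeutral n -> quote_neutral depth n
and quote_neutral depth = function
  | NVar lvl -> Var (depth - lvl - 1)
  | NApp (n, v) -> App (quote_neutral depth n, quote depth v)

let normalize (t : term) : term = quote 0 (eval [] t)
```

For example, `normalize (App (Lam (Var 0), Lam (Var 0)))` returns `Lam (Var 0)`: the redex is computed away during evaluation, and quotation reconstructs the normal form.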
Reconciling Impredicative Axiom and Universe
Impredicativity and type theory share a long history, since Russell introduced the notion of types specifically to rule out the logical inconsistency that can be derived from arbitrary impredicative quantification. In modern type theory, impredicativity is most commonly (re)introduced in one of two ways: the traditional way, found in systems like the Calculus of Constructions, Lean, Coq, and System F, is to make the bottom universe impredicative, typically called Prop; the other way, used in HoTT, is to provide a propositional resizing axiom that makes it possible to move proof-irrelevant types to a lower universe.
Coq’s Prop and HoTT’s propositional resizing axiom both restrict the use of impredicative quantification to the definition of proof irrelevant propositions, which suggests they may be closely related, yet the actual mechanism by which they allow it is very different, making it unclear how they compare to each other in terms of expressiveness and interactions with other axioms.
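For background, the two mechanisms can be stated roughly as follows, in standard but simplified notation (these formulas are illustrative and not taken from the talk):

```latex
% Coq-style impredicativity: Prop is closed under products over
% domains from arbitrarily large universes.
\frac{\Gamma \vdash A : \mathsf{Type}_i \qquad
      \Gamma,\, x{:}A \vdash B : \mathsf{Prop}}
     {\Gamma \vdash \Pi x{:}A.\, B : \mathsf{Prop}}

% HoTT-style propositional resizing: every mere proposition in a
% higher universe is equivalent to one in a lower universe.
\text{for } P : \mathcal{U}_{i+1} \text{ with } \mathsf{isProp}(P),
\quad \exists\, P' : \mathcal{U}_i \ \text{ such that } \ P \simeq P'
```

The first is a typing rule built into the system; the second is an axiom postulated on top of a predicative theory, which is what makes the comparison nontrivial.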
We try to provide an answer to this question by axiomatizing a Prop universe. More specifically, we describe a predicative type theory augmented with a set of axioms inspired by HoTT’s propositional resizing axiom and then show a quasi-equivalence between this system and a similar type theory augmented with a Prop universe. We start with a small PTS and then grow the system to UTT-style inductive types.
This final result suggests that the kind of impredicativity provided by Coq’s Prop universe is very close to that offered by HoTT’s propositional resizing. In practical terms, the syntactic nature of the proof means that such an approach can also be used to translate code between such systems.
Explicit Contexts for Substructural Languages
A central challenge in mechanizing the meta-theory of substructural languages is modelling contexts. In this talk, we will describe CARVe, a general syntactic infrastructure for managing substructural contexts explicitly, where elements are annotated with tags from a resource algebra denoting their availability. We model resource consumption by changing these tags, and so assumptions persist as contexts are manipulated. This allows us to define relations between substructural contexts via simultaneous substitutions without the need to split them. Moreover, we establish a series of algebraic properties about our context operations that are typically required to carry out proofs in practice. The infrastructure is implemented in Beluga, and has been used to mechanize a broad range of substructural systems.
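The core idea of tagging assumptions rather than splitting contexts can be illustrated with a toy OCaml model. CARVe itself is a Beluga infrastructure with a general resource algebra; the two-tag algebra and names below are purely illustrative.

```ocaml
(* Substructural contexts keep every assumption and mark its
   availability with a tag: "using" a linear assumption flips the
   tag instead of removing the entry, so contexts never change shape. *)
type tag = Avail | Used

type ctx = (string * tag) list

(* Consume a linear assumption: succeeds at most once, after which
   the variable stays in the context but is marked Used. *)
let consume (x : string) (ctx : ctx) : ctx option =
  let rec go = function
    | [] -> None
    | (y, Avail) :: rest when y = x -> Some ((y, Used) :: rest)
    | entry :: rest -> Option.map (fun r -> entry :: r) (go rest)
  in
  go ctx

(* A context is fully consumed when no assumption is still available. *)
let exhausted (ctx : ctx) : bool =
  List.for_all (fun (_, t) -> t = Used) ctx
```

Because consumption preserves the list of entries, two runs over "split" halves of a context still operate on structurally identical contexts, which is what lets relations such as simultaneous substitutions be stated without context splitting.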
Structural Proto-Quipper: Mechanization of a Linear Quantum Programming Language in a Structural Setting
This project aims to develop a general technique for mechanizing quantum programming languages that rely on linear logic for resource management, such as the handling of qubits. The main challenge is integrating linear logic, which enforces constraints like one-time use of resources, into systems built on structural logic frameworks. To address this, the project adopts the strategy of “enforcing linearity without linearity.” As a proof of concept, we mechanize Proto-Quipper, a quantum programming language used to generate circuits, utilizing Beluga, a tool for formal reasoning about systems. The approach is grounded in Crary’s method for representing Girard’s linear logic in Twelf, and introduces two linearity predicates that ensure classical and quantum variables are used linearly within typing judgments. This technique leverages Beluga’s higher-order abstract syntax to streamline proofs and avoids the need for external extensions, unlike prior work by Mahmoud et al., who mechanized Proto-Quipper using a linear extension of Hybrid in Coq. While the mechanization of Proto-Quipper was successful, demonstrating the soundness of the approach, proofs for key properties such as subject reduction and progress are still under development. These proofs exist within Beluga’s computational layer, where they are represented as recursive functions. Future efforts will focus on completing these proofs and expanding the method to more complex quantum programming languages, such as Proto-Quipper-Dyn, which introduces dynamic lifting.
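The slogan "enforcing linearity without linearity" can be illustrated in miniature: represent terms in an ordinary structural way and impose linearity after the fact with a predicate on terms. Crary's Twelf technique and the Beluga mechanization work with higher-order abstract syntax and typing judgments; this first-order OCaml sketch only conveys the idea of a linearity predicate.

```ocaml
type term =
  | Var of string
  | Lam of string * term
  | App of term * term

(* Number of free occurrences of x in t. *)
let rec occurrences (x : string) (t : term) : int =
  match t with
  | Var y -> if x = y then 1 else 0
  | Lam (y, b) -> if x = y then 0 else occurrences x b
  | App (f, a) -> occurrences x f + occurrences x a

(* Linearity predicate over plain structural terms: every lambda
   must bind a variable used exactly once in its body. *)
let rec linear (t : term) : bool =
  match t with
  | Var _ -> true
  | Lam (x, b) -> occurrences x b = 1 && linear b
  | App (f, a) -> linear f && linear a
```

Under this predicate, `Lam ("x", Var "x")` is linear, while `Lam ("x", App (Var "x", Var "x"))` (duplication) and `Lam ("x", Var "y")` (discarding) are rejected, mirroring how the mechanization's linearity predicates constrain variable use inside ordinary structural typing judgments.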
Autoformalizing Euclidean Geometry
Theorems and proofs in mathematics have been communicated informally in natural language for the past two thousand years; ambiguities, misunderstandings, and mistakes happen from time to time. In this talk, I will present an auto-formalization framework for Euclidean geometry, which translates informal mathematical theorems into formal, machine-verifiable ones. This framework bridges the gap between diagrammatic reasoning and formal logic by leveraging SMT solvers, domain knowledge, and state-of-the-art Large Language Models (LLMs). I will share some interesting findings in auto-formalizing Euclidean geometry, including an old mistake made by Euclid two thousand years ago.
Message-Observing Sessions
We present Most, a process language with message-observing session types. Message-observing session types extend binary session types with type-level computation to specify communication protocols that vary based on messages observed on other channels. Hence, Most allows us to express global invariants about processes, rather than just local invariants, in a bottom-up, compositional way. We give Most a semantic foundation using traces with binding, a semantic approach for compositionally reasoning about traces in the presence of name generation. We use this semantics to prove type soundness and compositionality for Most processes. We see this as a significant step towards capturing message dependencies and providing more precise guarantees about processes.
Reflective Metaprogramming for Session-Typed Processes
We introduce FuSes, a functional programming language that supports reflective metaprogramming over session-typed process calculus code. In particular, a functional layer sits on top of a session-typed process layer, and the functional layer uses the contextual box modality extended with linear channel contexts to capture open process code. FuSes not only supports code generation but also supports pattern matching over open process code.
Furthermore, we introduce a set of run primitives that enable processes to be executed and observed from within the functional layer. Using these constructs, we implement well-known optimizations over concurrent code such as batch optimizations and geo-load balancing in a type-safe manner. Our technical contributions include a type system for FuSes, an operational semantics, and a proof of its type safety.
Polymorphic Metaprogramming with Resource Guarantees — An Adjoint Analysis of Metaprogramming
In this talk, I describe Elevator, a unifying polymorphic foundation for metaprogramming with resource guarantees based on adjoint modalities. Elevator distinguishes between multiple memory regions using modes, where each mode has a specific set of structural properties. This allows us not only to capture linear (i.e. garbage-free) memory regions and (ordinary) intuitionistic (i.e. garbage-collected or persistent) memory regions, but also to capture accessibility between the memory regions using a preorder between modes. This preorder gives us the power to describe monadic and comonadic programming. As a consequence, it extends the existing logical view of metaprogramming in two directions: first, it ensures that code generation can be done efficiently by controlling resource dependency; second, it allows us to provide resource usage guarantees about the generated code (e.g. that the code is garbage-free). I present the static and dynamic semantics of Elevator. In particular, we prove the substructurality of variable references and type safety of the language. We also establish mode safety, which guarantees that the evaluation of a term does not access a value in an inaccessible memory region.
Safe and easy compile-time generative programming
In this talk we overview MacoCaml, a new design and implementation of compile-time computation for OCaml. MacoCaml features a unifying and novel combination of phase separation and quotation-based staging. We review MacoCaml’s recent developments, including a comprehensive formalism of a feature-rich macro calculus with key meta-theoretic properties, and an extension to module functors that leads to explicit phase distinction. We describe how the meta-theoretical results offer practical benefits for programmers, and conclude the talk with a few directions for future exploration.
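Quotation-based staging, the technique at the heart of this line of work, can be sketched in miniature. MacoCaml's actual quotations are a compiler extension, so this plain-OCaml sketch stands in for them with an explicit code datatype; the classic staged power function specializes the exponent at generation time, leaving only multiplications in the generated code. All names here are illustrative.

```ocaml
(* A tiny "code" type standing in for quoted expressions. *)
type code =
  | Var of string
  | Int of int
  | Mul of code * code

(* Stage: build code for x^n with the recursion on n fully
   unrolled, so the exponent never appears in the residual code. *)
let rec power (n : int) (x : code) : code =
  if n = 0 then Int 1
  else if n = 1 then x
  else Mul (x, power (n - 1) x)

(* A small interpreter standing in for running the generated code. *)
let rec run (env : string -> int) (c : code) : int =
  match c with
  | Var x -> env x
  | Int k -> k
  | Mul (a, b) -> run env a * run env b
```

Here `power 3 (Var "x")` produces `Mul (Var "x", Mul (Var "x", Var "x"))`: a straight-line program specialized to the exponent. Phase separation, as in MacoCaml, is about guaranteeing statically that generators like `power` run entirely at compile time while the residual code runs later.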