The Foundations of Programming group holds regular research meetings. The meetings last around an hour, and are intended to be a friendly forum for discussion and exchange of ideas. Most meetings take the form of a talk by a group member or an invited speaker, but we also have various kinds of discussion sessions. The meetings are open to everyone, not just members of the group. Please contact Graham Hutton if you are interested in visiting the group and giving a talk.
Details of previous years' meetings are also available: 1995; 1996; 1997; 1998; 1999; 2000; 2001.
Details of recent and upcoming meetings are given below.
Alberto's talk is postponed until further notice.
Instead we'll have a meeting where some people will tell us
about interesting conferences they've been to recently.
We describe an extension of Hoare's logic for reasoning
about programs that alter data structures. We consider
a low-level storage model based on a heap
with associated lookup, update, allocation and deallocation operations,
and unrestricted address arithmetic. The assertion language
is based on a possible worlds model of the logic of bunched implications,
and includes spatial conjunction and implication connectives alongside
those of classical logic.
Heap operations are axiomatized using what we call the ``small axioms'',
each of which mentions only those cells accessed by a particular
command. Through these and a number of examples we show that
the formalism supports local reasoning: A specification
and proof can concentrate on only those cells in memory
that a program accesses. A central role is played by an
inference rule for automatically inferring frame axioms,
which describe invariant properties of heap fragments
not accessed by a program.
This talk describes joint work with John Reynolds and Hongseok Yang.
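The frame rule referred to above admits a concise statement (given here in the standard notation of the separation-logic literature, with $*$ the spatial conjunction):

```latex
\frac{\{P\}\;C\;\{Q\}}
     {\{P * R\}\;C\;\{Q * R\}}
\qquad \text{(no variable modified by $C$ occurs free in $R$)}
```

Here $R$ plays the role of a frame axiom: it describes a heap fragment untouched by $C$, and is carried across the command unchanged.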
The axiom of choice does not have an unambiguous status in
constructive mathematics. On the one hand it is said to be
an immediate consequence of the constructive interpretation
of the quantifiers. Any proof of
$\forall x\in a\,\exists y\in b\,\phi(x,y)$ must yield a function
$f:a\rightarrow b$ such that $\forall x\in a\,\phi(x,f(x))$.
This is certainly the case in Martin-L\"of's intuitionistic theory of
types. On the other hand, from the very earliest days, the axiom of
choice has been criticised as an excessively non-constructive principle
even for classical set theory. Moreover, it has been observed that the
full axiom of choice cannot be added to systems of constructive set theory
without yielding constructively unacceptable cases of excluded middle.
On the other hand, it has been shown by Peter Aczel that constructive
set theory has a canonical interpretation in Martin-L\"of's type theory
and that this interpretation also validates several choice principles, e.g.,
the axiom of countable choice and the axiom of dependent choices.
The main thrust of my talk will be to draw a line between choice principles
which are constructively acceptable and those which are not.
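For concreteness, the choice principle that is provable in Martin-L\"of's type theory can be written, in the usual $\Pi$/$\Sigma$ notation, as:

```latex
(\Pi x : A)(\Sigma y : B)\,\phi(x,y)
\;\rightarrow\;
(\Sigma f : A \rightarrow B)(\Pi x : A)\,\phi(x, f(x))
```

The proof simply projects out the evidence: given $p$ of the antecedent type, take $f(x) = \mathsf{fst}(p(x))$, witnessed by $\lambda x.\,\mathsf{snd}(p(x))$.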
The refinement calculus (Back & von Wright, Morgan) is an elegant and
practical framework for `calculating' imperative programs from
specifications. The central notion is that of a predicate
transformer, or unary operator on the powerset of a set of states.
These have a rich algebraic structure, combining the structure of
predicates (namely inclusion, sups, and infs) with the structure of
unary operators on any domain (namely identity and composition).
The idea is that an executable program is written by reducing or
refining its specification (a monotone predicate transformer) to one
which is `feasible' (commuting with all intersections). The code
is extracted from the development history, and transcribed into a
real programming language.
The `official' foundation of the refinement calculus is impredicative
classical higher-order logic, and it has been implemented using the
theorem-proving system HOL.
As I will show in the talk, one can implement the same algebraic
structure using predicative type theory \`a la Martin-L\"of. The central
data structures and operations were first described by Petersson &
Synek in 1989. I shall discuss the question of whether this could
have some significance for specification and programming of system
components using a dependent type theory.
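As a small illustration (a sketch of my own in Haskell, not the Petersson & Synek structures discussed in the talk), the algebra of predicate transformers over a fixed state type can be set up directly:

```haskell
-- Predicates over a state type s, and predicate transformers:
-- unary operators on predicates, read as mapping a postcondition
-- to its weakest precondition.
type Pred s  = s -> Bool
type Trans s = Pred s -> Pred s

-- The operator structure: identity and composition.
skip :: Trans s
skip = id

seqT :: Trans s -> Trans s -> Trans s
seqT = (.)          -- wp (S; T) q = wp S (wp T q)

-- Demonic choice: both alternatives must establish the postcondition.
choice :: Trans s -> Trans s -> Trans s
choice t u q s = t q s && u q s

-- A state update (e.g. an assignment) as a transformer:
-- wp (x := e) q  is  q with the update applied.
assign :: (s -> s) -> Trans s
assign f q = q . f

-- Refinement, checked over a finite state space and a finite
-- family of postconditions: u refines t if u establishes every
-- postcondition wherever t does.
refines :: [s] -> Trans s -> Trans s -> [Pred s] -> Bool
refines states t u posts =
  and [ u q s | q <- posts, s <- states, t q s ]
```

For example, `refines [0..5] (choice (assign (+1)) skip) (assign (+1)) [(> 0)]` holds, since demonic choice is refined by either of its branches.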
We present a new and simple account of $\alpha$-conversion suitable
for formal reasoning. Our main tool is to define $\alpha$-conversion
as a structural congruence parametrized by a partial bijection on
free variables. We show a number of basic properties of
substitution, e.g. that substitution is monadic, which entails all the
usual substitution laws. Finally, we relate $\alpha$-equivalence
classes to de Bruijn terms.
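One standard way to make "substitution is monadic" concrete (a sketch in the nested-datatype style of Bird & Paterson, not necessarily the talk's formulation) is to draw variables from an arbitrary type, so that substitution is exactly monadic bind and the monad laws give the usual substitution laws:

```haskell
import Control.Monad (ap)

-- Lambda terms over variables v; Lam binds one extra variable
-- via Maybe (a de Bruijn-style representation).
data Term v = Var v
            | App (Term v) (Term v)
            | Lam (Term (Maybe v))
            deriving (Eq, Show)

instance Functor Term where
  fmap f (Var x)   = Var (f x)
  fmap f (App t u) = App (fmap f t) (fmap f u)
  fmap f (Lam t)   = Lam (fmap (fmap f) t)

instance Applicative Term where
  pure  = Var
  (<*>) = ap

-- Bind is capture-avoiding substitution: replace each variable
-- by a term, weakening the substitution under a binder.
instance Monad Term where
  Var x   >>= s = s x
  App t u >>= s = App (t >>= s) (u >>= s)
  Lam t   >>= s = Lam (t >>= lift s)
    where lift _  Nothing  = Var Nothing
          lift s' (Just x) = fmap Just (s' x)

-- Substituting a term for the bound variable: one step of beta.
beta :: Term v -> Term (Maybe v) -> Term v
beta a b = b >>= maybe a Var
```

The right-unit monad law `t >>= Var == t`, for instance, is the familiar fact that substituting variables for themselves is the identity.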
I'll talk about logics for modelling implicit and explicit knowledge,
argue that existing approaches are not very good at modelling the
explicit knowledge of resource-bounded agents, and propose alternative
ways of ascribing knowledge and modelling the reasoning of such
agents. This is joint work with Brian Logan.
I briefly present the Duration Calculus (DC), which is an extension of real
time Interval Temporal Logic by state expressions and duration terms
for states. I review earlier results of mine on Craig interpolation
and interval-related interpolation about ITL. Then I discuss the
possibility of generalising this result and present a new interpolation result
on a subset of DC, where "interval-related" is replaced by
"projection-related", projection being a modality in DC which I also
introduce and motivate. I give a counter-example showing
that neither Craig nor the more general interval-related interpolation hold
about DC without restrictions, such as the ones appearing in the
interpolation theorems I present. Time permitting, I show how
Craig interpolants can be constructed in a subset of DC widely regarded as
the "propositional" subset, in a fashion similar to one known for classical
propositional logic.
Simple games can be useful in explaining fundamental programming concepts --
invariants and fixed points -- to students. The talk will introduce work I
have been doing with Diethard Michaelis with the goal of using game theory as
a basis for developing students' calculational skills.
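As a flavour of the connection (an illustrative example of my own, not taken from the talk): in the game of Nim, the positions lost for the player to move are characterised by an invariant, the xor of the pile sizes being zero, while winning positions arise as a fixed point of "some move reaches a lost position":

```haskell
import Data.Bits (xor)

-- A Nim position is a list of pile sizes; a move shrinks one pile.
moves :: [Int] -> [[Int]]
moves ps = [ take i ps ++ [k] ++ drop (i + 1) ps
           | i <- [0 .. length ps - 1], k <- [0 .. ps !! i - 1] ]

-- Winning positions as a fixed point: a position is winning if
-- some move reaches a losing one; a position with no moves is lost.
winning, losing :: [Int] -> Bool
winning ps = any losing (moves ps)
losing     = not . winning

-- The invariant characterising lost positions.
nimSum :: [Int] -> Int
nimSum = foldr xor 0
```

The invariant states that `losing ps == (nimSum ps == 0)`, which can be checked exhaustively on small positions.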
Since its earliest conception, (proof theory in) mathematical
logic has been formulated as a formalization of deductive reasoning:
given a collection of hypotheses, a conclusion is derived. The advent
of computational logic, however, has emphasized the significance of
reductive reasoning: given a putative conclusion, sufficient hypotheses
are determined. We provide, for intuitionistic logic and classical
logic, a model-theoretic semantics of reductions of comparable value to
those available for proofs. Within this general theory, which combines BHK
and Kripke semantics, we formulate a model-theoretic semantics for
proof-search, giving a specific games model interpretation of the key
control mechanism known as backtracking.
It has long been a dream of computer scientists to be able to "do
mathematics" on a computer. Some of the earliest users of digital
computers had high hopes of being able to automate and formalise much of
mathematical practice. These initial high hopes were not realised, and
even now abstract mathematics uses computers more for their word
processing capabilities than for automated or mechanised support.
This talk will present an overview of the state of formal and mechanised
mathematics and include discussions of current issues such as the
formalisation of continuous mathematics, mathematical knowledge
management and the successes and failures of the "computer mathematics"
field.
The Burrows-Wheeler Transform converts a piece of text to another
that can be compressed more effectively. Naturally, one would want
to perform the inverse conversion in the decompression phase. However,
it is not obvious, at least on a first encounter, why the transform
is invertible, let alone how to carry out the inverse transform.
In this talk we will specify the transform in the functional style
and exploit simple equational reasoning to derive the inverse of the
Burrows-Wheeler Transform. As a bonus, we will also sketch the
outlines of deriving the inverse of two more general versions of the
transform, one proposed by Schindler and another by Chapin and Tate.
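For reference, the specification-level definitions can be transcribed into Haskell as below (my own transcription of the textbook definition, assuming the rotations of the input are pairwise distinct; the talk derives the inverse calculationally rather than positing it):

```haskell
import Data.List (sort)

-- The transform: sort all rotations of the input, return the last
-- column together with the index of the input among the rotations.
bwt :: Ord a => [a] -> ([a], Int)
bwt xs = (map last rots, length (takeWhile (/= xs) rots))
  where rots = sort (rotations xs)

rotations :: [a] -> [[a]]
rotations xs = take (length xs) (iterate rot xs)
  where rot (y : ys) = ys ++ [y]
        rot []       = []

-- The inverse: rebuild the sorted rotation matrix column by column,
-- prepending the transformed text and re-sorting, n times in all;
-- the original text is row k of the result.
unbwt :: Ord a => ([a], Int) -> [a]
unbwt (ys, k) = iterate step (map (const []) ys) !! length ys !! k
  where step = sort . zipWith (:) ys
```

For example, `bwt "banana"` yields `("nnbaaa", 3)`, and `unbwt` recovers `"banana"` from that pair.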
I examine the proof of the ``whisky problem'', a challenge problem from
the domain of temporal logic. The proof requires a generalisation of the
original conjecture and I examine two proof critics which can be used to
create this generalisation. Using these critics I believe I have produced
the first automatic proofs of this challenge problem.
I use this to motivate a comparison of the two critics and propose that
there is a place for specialist critics as well as powerful general
critics. In particular I advocate the development of critics that do not
use meta-variables.
School of Computer Science and IT
University of Nottingham
Jubilee Campus, Wollaton Road
Nottingham NG8 1BB
United Kingdom