I am currently looking at the relationships between nested and non-nested data-types, particularly the way in which nested data-types can often be thought of as non-nested types with structural invariants imposed. For example, nests can be thought of as perfectly-balanced binary trees via an appropriate embedding. I will demonstrate how data-type embeddings can be constructed, and how to prove that invariants are satisfied, within the context of inductive categorical types.
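As a concrete illustration of the kind of embedding in question, here is a sketch in Haskell. Nest is the standard "nest" type from the literature; the function names and the balance predicate are my own, not necessarily those used in this work.

```haskell
-- A "nest": a non-uniform (nested) data-type whose element type
-- doubles at each level of recursion.
data Nest a = NilN | ConsN a (Nest (a, a))

-- Ordinary binary trees, the non-nested counterpart.
data Tree a = Leaf | Node (Tree a) a (Tree a) deriving (Show, Eq)

-- Split a tree of pairs into a pair of trees, preserving shape.
unzipT :: Tree (a, b) -> (Tree a, Tree b)
unzipT Leaf = (Leaf, Leaf)
unzipT (Node l (x, y) r) =
  let (lx, ly) = unzipT l
      (rx, ry) = unzipT r
  in (Node lx x rx, Node ly y ry)

-- Embed a nest as a binary tree: level i of the nest supplies the
-- 2^i nodes at depth i, so the result is perfectly balanced by
-- construction. (Note the polymorphic recursion: the recursive
-- call is at type Nest (a, a), so the signature is required.)
toTree :: Nest a -> Tree a
toTree NilN         = Leaf
toTree (ConsN x xs) = let (l, r) = unzipT (toTree xs) in Node l x r

-- The structural invariant the embedding guarantees:
-- every leaf sits at the same depth.
perfect :: Tree a -> Bool
perfect = snd . go
  where
    go Leaf = (0 :: Int, True)
    go (Node l _ r) =
      let (dl, pl) = go l
          (dr, pr) = go r
      in (1 + max dl dr, pl && pr && dl == dr)
```

Proving that every tree in the image of toTree satisfies perfect is exactly the kind of invariant argument at issue.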
This is work in progress and I would greatly appreciate any comments.
Fixed point logics extended with generalized quantifiers have been
studied widely in the last few years. One direction has been to
examine whether such logics can capture complexity classes. In a
significant paper, Dawar and Hella showed, among other results, that
these logics cannot capture PTIME on the class of cliques.
For each structure (and each integer k) one can define the number of
k-automorphism types realized in that structure. Using this, for any
class of structures the growth rate of the number of automorphism
types can be defined as a function of the size of the structures in
the class. For instance, for the class of cliques such growth
functions are "constant" functions.
In this talk, I will describe a powerful extension of the result
mentioned above, showing that PFP (partial fixed point logic)
extended with finitely many generalized quantifiers cannot capture
PSPACE on any (recursively enumerable) class of structures whose
growth functions are slower than "polynomials". Our proof depends on
the simple idea of evaluating a fixed-point formula with generalized
quantifiers efficiently on structures with few automorphism types.
Using the same techniques, we can obtain other interesting results,
such as: on the class of complete binary trees, IFP extended with
finitely many generalized quantifiers cannot capture PTIME.
Multithreaded architectures have been developed as a way to hide latencies
in memory access, communication, and long pipelines. One problem with
multithreaded processors is that caches are shared between threads, so
threads may unintentionally evict each other's data. This talk describes a
novel cache architecture where the cache can be divided into partitions.
Each thread is assigned a set of partitions, each of which is used to
cache a view of a data structure. The partition assignment is completely
automated in the compiler. With our compiler and architecture, all forms of
interference are eliminated and predictable execution of multithreaded
programs is achieved in moderate-sized caches.
This talk will describe some of the work I have been doing jointly
with Roy Crole on mechanized operational semantics. We give a fully
automated description of a small (functional) programming language PL
in the theorem prover Isabelle98. The language syntax and semantics
are encoded, and we formally verify a range of semantic properties.
This is achieved via uniform (co)inductive methods. We encode notions
of bisimulation and contextual equivalence. The main original
contribution is to give a fully automated proof that PL bisimulation
coincides with PL contextual equivalence.
Designing and programming algorithms for parallel machines is difficult.
This can't be helped, since compared to sequential computing there are
extra factors to consider, such as load balancing, communication and
synchronisation. The difficulty is exacerbated by using unsuitable
sequential languages which provide no abstraction over implementation
details.
Approaches such as Abstract Parallel Machines improve on this by eliding
irrelevant detail. This allows algorithms to be described and refined
much more easily. Unfortunately, the result is a specification of the
algorithm, which still has to be implemented.
This talk introduces a system to bridge the gap between high level APM
derivations and low-level implementation. The specification is
transformed through a series of combinator languages. Each language
introduces implementation details and restrictions, until finally a
representation is reached which can be trivially translated to C+MPI.
Coloured Petri Nets [1] are a high-level form of Petri nets [2], in
which transition inscriptions in some programming language operate on
tokens attributed with values of the inscription language. A brief
review of functional Petri nets (CPNs with functional inscription
languages) will focus on their successes, and on the high startup
costs that have hampered their wider use so far.
The startup costs can be reduced by embedding functional Petri nets into
their inscription language. As an example, I will introduce the variant
of Haskell-Coloured Petri Nets (HCPNs) and show that they have a simple
mapping to Haskell. HCPNs can thus be used for system modelling in
preparation for system implementation in Haskell, following a process of
stepwise refinement in which all intermediate specifications are
executable Haskell programs.
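To give a flavour of how a coloured net can live inside its inscription language, here is a toy net rendered as Haskell. This is only a generic illustration of the idea of mapping transitions to functions on markings; it is not the actual HCPN translation scheme, and all names are invented.

```haskell
-- A toy coloured net: two places holding Int tokens, and one
-- transition that moves a token from p1 to p2, incrementing it.
type Marking = ([Int], [Int])   -- tokens on places p1 and p2

-- The transition is enabled when p1 holds a token; firing is a
-- partial function from markings to markings.
fire :: Marking -> Maybe Marking
fire (t : p1, p2) = Just (p1, (t + 1) : p2)
fire _            = Nothing

-- Run to quiescence: keep firing until no transition is enabled.
run :: Marking -> Marking
run m = maybe m run (fire m)
```

Because the net is an ordinary Haskell program, every intermediate model in a stepwise refinement remains directly executable.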
(This will be an extended version of a talk given at IFL'99. A draft
paper [3] is available from my homepage and comments are very welcome.)
[1]
Coloured Petri Nets at the University of Aarhus
[2]
Homepage of the International Petri Net Community
[3]
Haskell-Coloured Petri Nets (draft)
Non-uniform data-types (also known as nested data-types) are recursive
data-types in which the structure changes with recursion depth; they
have been used successfully to impose structural invariants on types, for
example in Chris Okasaki's work on efficient functional data-structures.
Structured recursion operators such as fold have shown their worth in
helping programmers define functions over uniform recursive data-types,
and the hope is that they will provide similar benefits for non-uniform
data-types.
However, the standard construction of folds for uniform types does not
transfer well to non-uniform types because of restrictions connected to
parametric polymorphism.
My research has focussed on trying to construct general and useful fold
operators for non-uniform types, as well as establishing calculational
properties analogous to those enjoyed by folds over uniform types.
In this talk I will give a general overview of the problems and solutions
found.
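To make the difficulty concrete, here is a sketch of one well-known approach, a rank-2 fold in the style of Bird and Paterson's generalised folds. It illustrates why ordinary rank-1 parametric polymorphism is not enough; it is not necessarily the operator developed in this research.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A non-uniform type: the element type changes at each level.
data Nest a = NilN | ConsN a (Nest (a, a))

-- A fold over Nest must be usable at types Nest (a, a),
-- Nest ((a, a), (a, a)), and so on, so its arguments must
-- themselves be polymorphic: hence the rank-2 type.
nfold :: (forall b. f b)
      -> (forall b. b -> f (b, b) -> f b)
      -> Nest a -> f a
nfold e _ NilN         = e
nfold e f (ConsN x xs) = f x (nfold e f xs)

-- One instance: flattening a nest to the list of its elements.
newtype Flat a = Flat { unFlat :: [a] }

flatten :: Nest a -> [a]
flatten = unFlat . nfold (Flat [])
  (\x (Flat ps) -> Flat (x : concatMap (\(p, q) -> [p, q]) ps))
```

The calculational question is then which fusion and uniqueness laws such operators satisfy, by analogy with the laws for folds over uniform types.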
A good formal specification style allows specifications which are
both abstract and to the point. Abstractness is not just there for the
theorists, but mainly to provide a maximal degree of implementation
freedom. However, implementation freedom should not be provided by
giving an enumeration of allowable implementations - this leads to
specifications which aren't concise or to the point. Ideally, a lot of
the implementation freedom should derive from what is *not* mentioned in
the specification.
In this talk I will illustrate this point using the Z specification
notation. The traditional refinement relation for the "states and
operations" style in Z is very much based on the assumption that everything
which will appear in a final refinement will need to be included in the
initial specification already. Our work on partial specification has led
to generalised refinement relations for Z which allow things to be added
in refinement that were not previously mentioned. In particular, these
include inputs, outputs, and extra external or internal operations.
Based on the work of a number of researchers, Peter Potts and
Abbas Edalat have developed a framework for real-number computation which
is both expressive and elegant. In his thesis, Peter Potts shows how to
compute all standard mathematical functions in this framework. His
algorithms are derived from classical formulas for continued fractions. In
his presentation, a calculation always corresponds to a (potentially
infinite) expression tree, which consists of tensors and matrices at its
nodes, into which the input stream(s) are fed at (potentially infinitely
many) leaves, and which creates an output stream from its root.
Reinhold Heckmann showed that in the simple situation where the tree
consists of a single matrix, this matrix must contain larger and larger
entries as the computation proceeds. He gives necessary and sufficient
conditions for when the entries can be bounded by a constant.
In this talk, I report on work done by Michal Konecny at Birmingham, who
has extended Heckmann's work in two directions: He considers general
finite automata rather than expression trees, and he allows arbitrary
digit systems to be used. His main result is that a differentiable
function that is computed by a finite automaton must be equal to a
linear fractional transformation. (By Heckmann's results, this linear
fractional transformation must furthermore be of a very special kind.)
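For readers unfamiliar with the matrix representation, here is a minimal sketch in Haskell of linear fractional transformations over the rationals. It only illustrates why 2x2 matrices are a convenient encoding (composition is matrix multiplication); it is not Potts's or Konecny's algorithm.

```haskell
import Data.Ratio

-- A linear fractional (Möbius) transformation as an integer matrix:
-- ((a, b), (c, d)) represents x |-> (a*x + b) / (c*x + d).
type LFT = ((Integer, Integer), (Integer, Integer))

apply :: LFT -> Rational -> Rational
apply ((a, b), (c, d)) x =
  (fromInteger a * x + fromInteger b) /
  (fromInteger c * x + fromInteger d)

-- Composing two transformations is multiplying their matrices,
-- which is what makes matrices and tensors a natural representation
-- for (potentially infinite) expression trees of such maps.
compose :: LFT -> LFT -> LFT
compose ((a, b), (c, d)) ((a', b'), (c', d')) =
  ((a * a' + b * c', a * b' + b * d'),
   (c * a' + d * c', c * b' + d * d'))
```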
The approximation lemma was recently introduced as a simplification of the
well-known take lemma, and is used to prove properties of programs
that produce lists of values. We show how the approximation lemma, unlike the
take lemma, can naturally be generalised from lists to a large class of
datatypes, and present a generic approximation lemma that is parametric
in the datatype to which it applies. As a useful by-product, we find that
generalising the approximation lemma in this way also simplifies its proof.
This talk is based upon joint work with Jeremy Gibbons.
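For reference, the list-level approximation function can be sketched in Haskell as follows; the generic version presented in the talk is parametric in the datatype, and this sketch shows only the familiar special case.

```haskell
-- The approximation function on lists: approx 0 is undefined, and
-- approx (n+1) copies the head and approximates the tail. The
-- approximation lemma states that xs = ys if and only if
-- approx n xs = approx n ys for every n.
approx :: Int -> [a] -> [a]
approx n _ | n <= 0 = undefined
approx _ []         = []
approx n (x : xs)   = x : approx (n - 1) xs
```

Unlike take, approx (n+1) preserves the list constructors exactly and pushes the undefinedness to depth n, which is what makes it generalise smoothly beyond lists.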
This talk describes a programme of research directed to unifying the
functional, logic, and object paradigms in an algebraic setting.
Our chief resource in this endeavour is "hidden algebra",
a framework that allows the specification of abstract machines
with hidden local state.
The initial goal of our research was both straightforward and
ambitious: to give a semantics for software engineering, and in
particular for the object paradigm, supporting correctness proofs
that are as simple and mechanical as possible. This emphasises proofs
rather than models, and thus suggests an equational approach, because
equational logic is fully expressive, yet simple, and has a great
deal of mechanical support.
Hidden algebra takes as basic the notion of behavioural abstraction,
or more precisely, behavioural satisfaction: our specifications
characterise how objects (and systems) behave, not how they are
implemented; they provide a notion of behavioural type, which we
prefer to call a hidden theory.
Building on the algebraic specification tradition, hidden algebra
uses some sorts for data values (of attributes), as in the algebraic
approach to data types, and some for states, as in the algebraic
approach to abstract machines; the latter give us objects and
classes.
These two uses of sort are dual: induction establishes properties of
data types, while coinduction establishes properties of objects.
Similarly, initiality is important for data types, whereas finality
is important for objects.
Our correctness proofs show that one such theory behaviourally
satisfies another. An important contribution of this programme of
research is the development of powerful hidden coinduction techniques
for proving behavioural correctness of object systems.
Program transformation systems and theorem proving tools need a suitable
user interface: a structure editor is ideal for the purpose. We describe a
design for such an editor, where the edit actions are Haskell functions on a
fairly simple data structure - the user can easily extend the editor by
writing new functions using the combinators provided. Two guiding design
principles are that the editor should be modeless, and that entry of new
tree fragments should be no more difficult than in an ordinary text editor.
We show how our design can be combined with a graphical front end, written
using the FranTk library. The result is a compact Haskell program that is
easily adapted to serve as a user interface for various tools.
(Joint work with Meurig Sage and Bernard Sufrin)
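To illustrate the general idea of edit actions as Haskell functions on a simple data structure, here is a minimal sketch using a tree with a focused subtree (a simple zipper). All names and the data structure itself are invented for illustration; they are not the editor's actual interface.

```haskell
-- A node-labelled tree of the kind a structure editor manipulates.
data Tree = Node String [Tree] deriving (Show, Eq)

-- A cursor: the focused subtree, plus the path of
-- (label, left siblings, right siblings) above it.
data Cursor = Cursor { focus :: Tree
                     , path  :: [(String, [Tree], [Tree])] }

type EditAction = Cursor -> Cursor

-- Move down to the first child, if any.
down :: EditAction
down (Cursor (Node l (t : ts)) p) = Cursor t ((l, [], ts) : p)
down c                            = c

-- Move back up, reassembling the parent node.
up :: EditAction
up (Cursor t ((l, ls, rs) : p)) =
  Cursor (Node l (reverse ls ++ t : rs)) p
up c = c

-- Replace the focused subtree: the kind of action a user could add
-- to extend the editor, since actions are just functions.
replace :: Tree -> EditAction
replace t c = c { focus = t }
```

Because edit actions are ordinary functions, extending the editor amounts to composing new actions from combinators like these.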