This is the first meeting after the Christmas period. Each member of
the group will present a brief (2-3 minute) overview of their recent
and current work, addressed to a general audience. We'll also draw
up a list of external speakers to invite over the coming months.
Reflection is the ability of a computational process to reason about
itself. It can be seen as an extreme case of meta-programming as it
allows programs in an object language to access their meta language.
One aim of this talk is to introduce some basic concepts of
reflection, and to motivate the need for reflective capabilities in
functional languages, especially in the context of my current project.
Last December, these musings on reflection in functional languages led
to a slightly unusual Christmas card being sent to the members of the
Foundations of Programming Group (involving a program that does not quite
print itself). A major part of the talk will consist of an explanation
of that program -- how it works, and how such (almost self-reproducing)
programs can be developed. Particular emphasis will be put on problems
of reflection in purely functional languages, some of which can be
demonstrated using the Christmas card program as an example.
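The Christmas card program itself is not reproduced in this announcement, but the flavour of a self-reproducing program can be conveyed by a classic Haskell quine (the card's program, by contrast, does not quite print itself):

```haskell
-- A minimal Haskell quine: the program prints its own source text.
-- The string s appears once as code and once, via show, as data.
main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "
```

Running this prints exactly the one-line source above: the part before the quote is the string s itself, and the quoted part is produced by show s.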
Parser combinators provide a general way of building recursive descent
parsers in functional languages. Parsers, together with certain
combinators, form a monad. I will show how this monad, and the
corresponding combinators, can be found in a very natural way from the
composition of two adjunctions that are hidden in the parser data-type.
The result is an instance of a more general result for factorizing
monads.
I will then examine how the combinators can be manipulated within the
Kleisli category of the parser monad - does it make calculating with
parsers easier?
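For concreteness, the parser data-type and its monad can be sketched in the standard list-of-successes style (this is the usual presentation, not necessarily the exact datatype used in the talk):

```haskell
-- A parser consumes a prefix of the input and returns a list of
-- (result, remaining input) pairs; the empty list signals failure.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f (Parser p) = Parser (\s -> [ (f a, r) | (a, r) <- p s ])

instance Applicative Parser where
  pure a = Parser (\s -> [(a, s)])
  pf <*> pa = Parser (\s -> [ (f a, s'') | (f, s') <- runParser pf s
                                         , (a, s'') <- runParser pa s' ])

instance Monad Parser where
  p >>= f = Parser (\s -> concat [ runParser (f a) s' | (a, s') <- runParser p s ])

-- The most basic combinator: consume a single character.
item :: Parser Char
item = Parser (\s -> case s of { []     -> []
                               ; (c:cs) -> [(c, cs)] })
```

For example, `runParser (item >>= \a -> item >>= \b -> return [a,b]) "xyz"` yields `[("xy","z")]`.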
Moggi's computational lambda calculus lambdaC is a well-established model
of computation. We define a two-level version lambdaNC of the computational
lambda calculus and demonstrate that it is an inevitable description for
sound specialization. We implement the calculus in terms of a standard
two-level lambda calculus via a continuation-passing style transformation.
This transformation is sound and complete with respect to lambdaNC and it
establishes a reduction correspondence to a standard two-level call-by-value
lambda calculus. By imposing suitable restrictions on the target language
it can be made to form a reflection in the two-level lambda calculus of
lambdaNC. As a practical ramification of this work we show that several
published specialization algorithms are unsound in the presence of effects
and develop a sound specializer similar to continuation-based specializers.
This talk is based upon joint work with Julia Lawall, Oberlin College.
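To give a flavour of the continuation-passing style transformation mentioned above, here is a naive Plotkin-style call-by-value CPS transform on untyped terms. This is a one-level illustration only, not the two-level transformation of the talk, and it assumes that the names "k", "f" and "v" do not occur in the source term:

```haskell
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive call-by-value CPS transformation: every term becomes a
-- function expecting a continuation "k".
cps :: Term -> Term
cps (Var x)     = Lam "k" (App (Var "k") (Var x))
cps (Lam x e)   = Lam "k" (App (Var "k") (Lam x (cps e)))
cps (App e1 e2) =
  Lam "k" (App (cps e1)
               (Lam "f" (App (cps e2)
                             (Lam "v" (App (App (Var "f") (Var "v"))
                                           (Var "k"))))))
```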
Have you come across an interesting technical problem that you
don't know how to solve but can be explained in a few minutes?
Then come along to today's meeting and give a 3-4 minute overview
of the problem to the group. With the increasing size of the
group, there is unlikely to be time for everyone to present
a problem. So don't feel that you need to rack your brains to come
up with something - if you have a problem bring it along, otherwise
just bring your brain along to help others
solve their (technical) problems!
Helen: JIC vs JIT in evolving environments;
This talk will overview a mathematical model of sets of hypermedia links
and of the navigation of these links. Each set of links is modelled as
a binary relation and different representations of binary relations then
model all the possible implementations of sets of links. The navigation
of links is modelled by questions asked of binary relations in any of
these representations. For some representations, the questions cannot
always be answered, so the usefulness of a link set implementation is
governed by its underlying representation of the binary relation.
The purpose of this model is to create a reference framework for the
different implementations of links in hypermedia systems. This reference
framework provides a common basis for the analysis and comparison of
link implementations, performed independently of the hypermedia system
that implements them and independently of the data being linked.
Many recursive datatypes can be expressed very neatly in terms of a
generic fixed point operator that captures recursion, and a
non-recursive functor that captures the remaining structure of the
type. The benefit of this approach is that it allows us to define
and to work with very general operators that capture common patterns
of computation, such as mapping or folding, across a wide range of
data structures.
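This idea can be sketched in Haskell in the standard way, with lists as the running example (the names here are illustrative, not necessarily those used in the talk):

```haskell
-- A generic fixed point operator capturing the recursion ...
newtype Fix f = In (f (Fix f))

-- ... which supports a generic fold (catamorphism) for any functor f.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata phi (In x) = phi (fmap (cata phi) x)

-- A non-recursive functor capturing the shape of lists.
data ListF a r = Nil | Cons a r
instance Functor (ListF a) where
  fmap _ Nil        = Nil
  fmap g (Cons a r) = Cons a (g r)

-- Summing a list is then just a fold with a simple algebra.
sumL :: Fix (ListF Int) -> Int
sumL = cata alg
  where alg Nil        = 0
        alg (Cons n s) = n + s
```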
However, given the conventional mechanisms for defining algebraic
datatypes, there are many examples that do not fit comfortably into
this framework. In this talk, I will describe (very) recent work
that generalizes the use of fixed point operators to deal with
arbitrary algebraic datatypes. In fact, the new approach goes
further by allowing us to describe datatypes that cannot be defined
as conventional algebraic datatypes.
The talk also provides some nice examples to demonstrate the use of
the new higher-order features that are beginning to find their way
into the type systems of languages like Haskell.
In this blackboard presentation I will show how various sorting methods
can be expressed using the recursion operators fold and unfold. As we
will see, viewing sorting methods in this structured manner makes certain
dualities and generalisations plainly evident.
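As a taste of the blackboard material, one standard instance of the duality (chosen here for illustration; the talk may cover different methods) is that insertion sort arises as a fold and selection sort as an unfold:

```haskell
import Data.List (delete, unfoldr)

-- Insertion sort as a fold: consume the input, building the result.
insertSort :: Ord a => [a] -> [a]
insertSort = foldr insert []
  where insert x []                 = [x]
        insert x (y:ys) | x <= y    = x : y : ys
                        | otherwise = y : insert x ys

-- Selection sort as an unfold: produce the result, consuming the input
-- by repeatedly extracting the minimum.
selectSort :: Ord a => [a] -> [a]
selectSort = unfoldr select
  where select [] = Nothing
        select xs = Just (m, delete m xs) where m = minimum xs
```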
We present a generic framework for specifying and implementing offline
partial evaluators. The framework provides the infrastructure for
specializing higher-order programs with computational effects specified
through a monad. It performs sound specialization for all monadic instances
and is evaluation-order independent.
It subsumes most previously published partial evaluators for higher-order
functional programming languages in the sense that they are instances of the
generic framework with respect to a particular monad.
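The framework itself is not shown in this abstract, but the underlying idea of specifying effects through a monad can be sketched with a small monadic evaluator. The Exp type and safeDiv below are illustrative assumptions, not part of the framework:

```haskell
data Exp = Lit Int | Add Exp Exp | Div Exp Exp

-- An evaluator whose effects are specified through a monad: the
-- treatment of division is supplied per monadic instance.
eval :: Monad m => (Int -> Int -> m Int) -> Exp -> m Int
eval divOp (Lit n)   = return n
eval divOp (Add a b) = (+) <$> eval divOp a <*> eval divOp b
eval divOp (Div a b) = do x <- eval divOp a
                          y <- eval divOp b
                          divOp x y

-- One instance: failure (division by zero) as an effect in Maybe.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)
```

The same eval works unchanged for any other monadic instance, which is the sense in which such a specification is evaluation-order independent of the effect's implementation.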
A common intermediate language for both Haskell and ML could
unify much research in optimization and code generation, and
also improve the optimizations available to each language in
isolation. Getting all the details right has, however, proved
to be very subtle. I'll discuss our intermediate language and
sketch the essence of its algebraic, denotational and operational
semantics.
This talk is based upon joint work with John Launchbury, Simon Peyton Jones,
and Andrew Tolmach.
In previous work with Andy Pitts, we developed a type theory T+FIX
which amounts to a categorical internal language for computational
(strong/Moggi) monads T, and initial algebras for (the underlying
functor of) T. Abadi and Cardelli have shown that their Object Calculi
admit sound translations of the typed lambda calculus and the standard
fixpoint combinators.
In this talk we shall show that Ob+\mu, the recursively typed object
calculus, admits a sound translation of T+FIX, and explain the origins
of some of the objects which code recursion in terms of their T+FIX
counterparts. Depending on time, we will discuss equational theories,
operational semantics, and PER models.
If the net is the computer, what is the formalism? As a step towards an
answer, we introduce a new abstraction, the ambient. An ambient is a named
boundary enclosing a group of local processes and a group of smaller, nested
ambients. We present a nominal process calculus of ambients, in which
computation is based on primitives for ambient mobility and local
communication within an ambient. The calculus is nominal in the sense that,
like the pi-calculus, it includes a set of names and dynamic name
generation. We can derive a variety of synchronisation and communication
mechanisms and encode traditional computational models such as Turing
machines, the lambda-calculus and the pi-calculus. More to the point,
we show that ambients model a variety of net-centric concepts: named machines
on a network, packets moving from machine to machine, mobile devices, mobile
agents, encrypted packets and firewalls.
This talk is based upon joint work with Luca Cardelli.
Dynamic binding, which traditionally has always been associated with
Lisp, is still semantically obscure to many. Even though most
programming languages favour lexical scope, not only does dynamic
binding remain an interesting and expressive programming technique in
specialised circumstances, but it is also a key notion in formal
semantics. This article presents a syntactic theory that enables the
programmer to perform equational reasoning on programs using dynamic
binding. The theory is proved to be sound and complete with respect
to derivations allowed on programs in ``dynamic-environment passing
style''. From this theory, we derive a sequential evaluation function
in a context-rewriting system. Then, we further refine the evaluation
function in two popular implementation strategies: deep binding and
shallow binding with value cells. Afterwards, following the saying
that deep binding is suitable for parallel evaluation, we present the
parallel evaluation function of a future-based functional language
extended with constructs for dynamic binding. Finally, we exhibit the
power and usefulness of dynamic binding in two different ways. First,
we prove that dynamic binding adds expressiveness to a purely
functional language. Second, we show that dynamic binding is an
essential notion in semantics that can be used to define exceptions.
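A minimal sketch of dynamic-environment passing style, using an illustrative DEnv type mapping names to integers (the article's actual development is, of course, far more general):

```haskell
-- In dynamic-environment passing style, every computation receives
-- the current dynamic environment as an extra argument.
type DEnv = [(String, Int)]

-- Dynamically bind x to v for the extent of the body's evaluation.
dlet :: String -> Int -> (DEnv -> a) -> DEnv -> a
dlet x v body env = body ((x, v) : env)

-- Look up the most recently established dynamic binding of x.
dref :: String -> DEnv -> Int
dref x env = case lookup x env of
               Just v  -> v
               Nothing -> error ("unbound dynamic variable " ++ x)
```

For example, `dlet "depth" 2 (\env -> dlet "depth" 3 (dref "depth") env) []` yields 3: the innermost dynamic binding wins, in contrast to lexical scope.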
Folds are appreciated by functional programmers; the
benefits of encapsulating common patterns of computation as
higher-order operators are well-known and well
understood. Their dual, unfolds, are nearly as
well-known, but not nearly as well appreciated. We believe
they deserve better. To illustrate, we present (indeed, we
calculate) a number of algorithms for computing the
breadth-first traversal of a tree. We specify
breadth-first traversal in terms of level-order
traversal, which we characterize first as a fold. The
presentation as a fold is simple, but it is inefficient, and
removing the inefficiency makes it no longer a fold. We
calculate a characterization as an unfold from the
characterization as a fold; this unfold is equally clear,
but more efficient. We also calculate a characterization of
breadth-first traversal directly as an unfold; this turns
out to be the `standard' queue-based algorithm.
This talk is based upon joint work with Geraint Jones.
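The `standard' queue-based algorithm that the calculation arrives at can be sketched as an unfold whose state is the queue of trees still to be visited (the rose-tree type here is an assumption for illustration):

```haskell
import Data.List (unfoldr)

data Tree a = Node a [Tree a]

-- Queue-based breadth-first traversal as an unfold: at each step,
-- emit the label at the front of the queue and enqueue its children.
bf :: Tree a -> [a]
bf t = unfoldr step [t]
  where step []              = Nothing
        step (Node a ts : q) = Just (a, q ++ ts)
```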
Powertrees (or perfectly balanced binary trees) feature in circuit
design and many parallel algorithms. They may be defined as a nested
(or non-regular) datatype, so that the balance constraint is enforced by
the type system. However, many common manipulations of these trees cannot
be expressed in this form. The solution is to define another nested type
describing homogeneous (or depth-preserving) functions over these trees.
This type is another non-monadic example of John Hughes's notion of
`arrow'. All programming is then done in terms of homogeneous functions,
without referring to powertrees. Program-forming operators include
counterparts of maps, folds and unfolds, with associated fusion laws.
With these we are able to define many powertree algorithms, and verify
their properties.
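The nested definition of powertrees can be sketched as follows; the names Pow and mapPow are illustrative, and the homogeneous-function arrows central to the talk are not reproduced here:

```haskell
-- A nested (non-regular) datatype: the element type doubles at each
-- recursive level, so every value is a perfectly balanced tree.
data Pow a = Zero a | Succ (Pow (a, a))
  deriving (Eq, Show)

-- Functions over nested types use polymorphic recursion, so the
-- type signature is mandatory.
mapPow :: (a -> b) -> Pow a -> Pow b
mapPow f (Zero a) = Zero (f a)
mapPow f (Succ p) = Succ (mapPow (\(x, y) -> (f x, f y)) p)
```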
A generic compact printer and a corresponding parser are constructed. These
programs transform values of any regular datatype to and from a bit stream.
The algorithms are constructed along with a proof that printing followed
by parsing is the identity. Since the binary representation is very compact,
the printer can be used for compressing data - possibly supplemented with
some standard algorithm for compressing bit streams. The compact printer and
the parser are described in the polytypic Haskell extension PolyP.
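PolyP derives such printers and parsers generically for any regular datatype; as a hand-written illustration of the underlying idea for lists only, a single bit can record each constructor choice:

```haskell
data Bit = O | I deriving (Eq, Show)

-- Compact printing for lists: one bit per constructor choice,
-- followed by the encoding of the element in the cons case.
printList :: (a -> [Bit]) -> [a] -> [Bit]
printList _ []     = [O]
printList p (x:xs) = I : p x ++ printList p xs

-- The corresponding parser, returning the value and the unconsumed bits;
-- printing followed by parsing is the identity.
parseList :: ([Bit] -> (a, [Bit])) -> [Bit] -> ([a], [Bit])
parseList _ (O:bs) = ([], bs)
parseList q (I:bs) = let (x, bs')   = q bs
                         (xs, bs'') = parseList q bs'
                     in (x : xs, bs'')
parseList _ []     = error "truncated bit stream"
```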
This is joint work with Valeria de Paiva and Eike Ritter from Birmingham.
Our aim was to define Kripke (possible worlds) models for two constructive
modal logics and find some kind of correspondence with their categorical
models. I will first talk about the logics and why they are interesting.
One of the logics originates from Eugenio Moggi's computational
lambda calculus where the operator corresponding to modality is used
to obtain "computations of type A" from "values of type A". Then I will
introduce categorical models for these logics, with which Valeria and Eike
were working. Finally, I will define possible worlds models for the two
logics and show how they relate to the categorical models.
To construct a useful semantics for the animation language Fran, it is
necessary to bridge two worlds; the continuous time functions of the
conceptual model underlying Fran, and the discrete time functions
used in the implementation.
In Fran, animations are described by functions of time that give the
values of points, vectors, colours, images, and other kinds of values,
at all times during the animation. From these basic building blocks,
which we call behaviours, we can build more complex behaviours by
combining them compositionally, and ultimately produce the image
behaviour that is the overall animation. Both space and time are
conceptually continuous--this is the only sensible way to think
about motion (ask Zeno :)--so it seems that there must be some
continuous time mathematical description of each animation, and
this, we hope, is what the implementation of Fran approximates by
using discrete methods.
In fact, to define a continuous time model of Fran is not as
straightforward as it may seem. We will show that the original
semantics given to Fran not only fails to give unique meanings to
some programs, but also results in serious anomalies. I will
explain an alternative approach that avoids these problems, and
then explore some other aspects of this research.
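A minimal sketch of the conceptual model, assuming a simple Behavior newtype over Double-valued time (Fran's actual types and implementation are considerably richer):

```haskell
type Time = Double

-- A behaviour is, conceptually, a function of continuous time.
newtype Behavior a = Behavior { at :: Time -> a }

-- The identity behaviour: the current time itself.
time :: Behavior Time
time = Behavior id

-- Lifting ordinary functions pointwise over behaviours lets us
-- build complex behaviours compositionally from basic ones.
lift1 :: (a -> b) -> Behavior a -> Behavior b
lift1 f b = Behavior (f . at b)

lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f b1 b2 = Behavior (\t -> f (at b1 t) (at b2 t))
```

For instance, `lift1 sin time` is a behaviour oscillating smoothly in continuous time; the implementation can only ever sample such behaviours at discrete times.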
Type-directed partial evaluation is a new approach to program
specialization for functional programming languages. Its merits with
respect to the traditional offline partial evaluation approach have not yet
been fully explored. We present a comparison of the two methods in both a
qualitative and quantitative way.
The powerful abstraction mechanisms of functional programming
languages provide the means to develop domain-specific programming
languages within the language itself. Typically, this is realised by
designing a set of combinators (higher-order reusable programs) for an
application area, and by constructing individual applications by
combining and coordinating individual combinators. This paper is
concerned with a successful example of such an embedded programming
language, namely Fudgets, a library of combinators for building
graphical user interfaces in the lazy functional language Haskell.
The Fudget library has been used to build a number of substantial
applications, including a web browser and a proof editor interface to
a proof checker for constructive type theory. This paper develops a
semantic theory for the non-deterministic stream processors that are
at the heart of the Fudget concept. The interaction of two features
of stream processors makes the development of such a semantic theory
problematic.
We demonstrate that this combination of features in a higher-order
functional language can be tamed to provide a tractable semantics
theory and induction principles suitable for reasoning about
contextual equivalence of Fudgets.
This talk is based upon joint work with David Sands and Magnus Carlsson.
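A deterministic simplification of the stream-processor idea can be sketched as follows; Fudgets' actual stream processors are non-deterministic, which is precisely what makes their semantic theory hard:

```haskell
-- A (deterministic) stream processor consumes an input stream and
-- produces an output stream, one event at a time.
data SP i o = Put o (SP i o)     -- emit an output, then continue
            | Get (i -> SP i o)  -- wait for an input, then continue
            | Done

runSP :: SP i o -> [i] -> [o]
runSP (Put o sp) is     = o : runSP sp is
runSP (Get f)    (i:is) = runSP (f i) is
runSP (Get _)    []     = []
runSP Done       _      = []

-- A stream processor that doubles every input it receives.
doubler :: SP Int Int
doubler = Get (\n -> Put (2 * n) doubler)
```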
Nested datatypes generalise regular datatypes in much the same way that
context-free languages generalise regular ones. Although the categorical
semantics of nested types turns out to be similar to the regular case,
the fold functions are more limited because they can only describe natural
transformations. Practical considerations therefore dictate the introduction
of a generalised fold function in which this limitation can be overcome. In
this talk we show how to construct generalised folds systematically for each
nested datatype. As a consequence, generalised folds satisfy fusion properties
similar to those developed for regular datatypes. Such properties form the
core of an effective calculational theory of inductive
datatypes.
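One simple shape such a generalised fold can take is sketched below for a datatype of "nests"; the concrete types and names here are illustrative, not necessarily those constructed in the talk:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A nested datatype: the element type doubles at each level.
data Nest a = NilN | ConsN a (Nest (a, a))

-- A generalised fold: the result type constructor n is parametrised
-- by the element type, so both cases must work at every nesting level.
gfold :: (forall b. n b)
      -> (forall b. b -> n (b, b) -> n b)
      -> Nest a -> n a
gfold e _ NilN         = e
gfold e f (ConsN x xs) = f x (gfold e f xs)

-- Counting the elements, remembering that each deeper level
-- contributes twice as many.
newtype Size b = Size { getSize :: Int }

size :: Nest a -> Int
size = getSize . gfold (Size 0) (\_ (Size n) -> Size (1 + 2 * n))
```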
Term Rewriting Systems (TRSs) are widely used throughout computer
science as they form an abstract model of computation while retaining
a relatively simple and concrete syntax. Unfortunately, this
concreteness has led to a tendency to concentrate on the technical
details of specific problems to the detriment of a proper interaction
with other disciplines and a deeper understanding of the subject.
Abstract Reduction Systems (ARSs) provide a relational semantics for
TRSs but do not possess enough structure to adequately model key
concepts arising in more complex problems. Category theory provides a
semantics for TRSs at an intermediate level of abstraction between the
syntax and the relational model. I will talk about recent research
using monads as models of TRSs and the application of this semantics
to modularity results in rewriting.
The Department of Geoinformation, together with five others, takes part in
the education of students at the Technical University of Vienna who want to
obtain an M.Sc. degree in surveying. The department teaches courses in
geoinformation, adjustment, software engineering, and the organization and
legal aspects of cadastral surveying.
We use functional programming, in particular the Gofer dialect of Haskell, for
building abstract executable specifications of geographic processes, data
exchange standards, hierarchical spatial reasoning, cadastral systems, and
spatio-temporal databases.
In general, we build classes for all operations, which permits parametric
overloading. By separating the specification (classes) from the implementation
(instances), we achieve the level of abstraction required for design of
sophisticated specifications in spatial information theory, which are not
available in other languages known to us. Sometimes, the limits of the
Gofer inference mechanism for type parameters are met.
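The classes-as-specifications, instances-as-implementations split can be illustrated with a small hypothetical example (written here in Haskell; Gofer's class system is essentially the same):

```haskell
-- Specification: any type of spatial point supports a distance
-- operation. The class name and method are illustrative.
class Metric p where
  dist :: p -> p -> Double

-- One implementation: points in the Euclidean plane.
data Point = Point Double Double

instance Metric Point where
  dist (Point x1 y1) (Point x2 y2) = sqrt ((x1 - x2)^2 + (y1 - y2)^2)
```

Code written against the Metric class works for every instance, which is the parametric overloading the paragraph above refers to.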
Graham: Expressive power of fold;
Mark: Existential types and fold;
Louise: Equivalent transition systems;
Mark: Show functions.