The Foundations of Programming group holds regular research meetings. The meetings last around an hour, and are intended to be a friendly forum for discussion and exchange of ideas. Most meetings take the form of a talk by a group member or an invited speaker, but we also have various kinds of discussion sessions. The meetings are open to everyone, not just members of the group. Please contact Neil Ghani if you are interested in visiting the group and giving a talk.
Details of previous years' meetings are also available: 1995; 1996; 1997; 1998; 1999; 2000; 2001; 2002; 2003; 2004.
Details of recent and upcoming meetings are given below.
Mark Ryan, University of Birmingham
We present a model of access control which provides fine-grained, data-dependent control, can express permissions about permissions, can express delegation, and can describe systems which avoid the root-bottleneck problem. We describe a language for expressing the goals of agents; these goals are typically to read or write the values of some resources. We provide a decision procedure which determines whether a given coalition of agents has the means (possibly indirect) to achieve its goal. The question remains decidable even when potential intruders act in parallel with legitimate users, exploiting whatever temporary opportunities the legitimate users' actions present. Our technique can also be used to synthesise finite access control systems from an appropriately formulated logical theory describing a high-level policy. As well as the verification tool, we have developed a tool which can translate the access control policy into XACML, a language for specifying access control systems which can be used to generate Java code.
We describe a method for combining formal program development with a
disciplined and documented way of introducing realistic compromises, for
example necessitated by resource bounds. Idealistic specifications are
identified with the limits of sequences of more "realistic"
specifications, and such sequences can then be refined in their
entirety.
Compromises amount to focusing the attention on a particular element
of the sequence instead of the sequence as a whole.
This method addresses the problem that
initial formal specifications can be abstract or complete but rarely
both. Various potential application areas are sketched, some
illustrated with examples. Research issues for this kind of
formal development of sequences are identified -- continuity turns out
to be a central issue.
It is widely believed that low-level languages with jumps must be difficult
to reason about because they are inherently non-modular. We argue that
this is a myth. Proceeding from just one very simple but fundamental
observation (that, unlike statements of a high-level
language, pieces of low-level code are multiple-entry and
multiple-exit), we obtain a beautifully compositional natural
semantics and a matching Hoare logic for a basic low-level
language. By their simplicity and intuitiveness, they are comparable
to the standard natural semantics and Hoare logic of While. The Hoare
logic is sound and complete with respect to the semantics and allows for
compilation of proofs of the Hoare logic of While.
The closest to our work is the new work by Tan and Appel, but their
logic employs continuations and interprets Hoare triples in a
non-standard fashion via approximations of truth. (Joint work with
Ando Saabas.)
It turns out to be surprisingly straightforward to extend proof nets
for classical logic to treat the operators of the simple modal logic
S5. In this talk I explain just how you can do that, and --
furthermore -- how you can use the resulting proof nets to define a
new, simple, cut-free sequent calculus for S5.
This talk motivates the results from the point of view of the
philosopher interested in inference, but the connections to category
theory and computation are not too far from the surface.
I'm going to introduce my master's thesis: a language extension of a
parametrically typed Prolog for writing polytypic predicates over the
large class of regular types. A polytypic predicate is defined as a
template for predicates: a typed version of a predicate over a
generic datatype. From the template, a specific instance
of the predicate is generated by partial evaluation with respect to a
specific abstraction predicate. The generated predicate is a standard
Prolog predicate over the type similar to what the programmer would have
written by hand. Polytypic instances are thus expanded at compile-time
with no overhead at runtime and can be subject to further program
transformation and optimization. The extension allows for writing both
purely generic polytypic predicates, such as maps or folds, and also
specific polytypic predicates over a limited set of data types.
Automated Reasoning tools are notoriously uninformative when
they fail to find a proof. There have been a number of approaches to
this problem in recent years, most focusing on the generation of
counter-examples. In this talk I will discuss preliminary work on
making use of the structure of the failed proof attempt itself in order
to provide diagnoses and patches for non-theorems.
We present a brief summary of the work done during the three-year,
EU-funded SOCS project, with respect to the task of modelling
interaction amongst CL-based agents.
We describe the SOCS social model: an agent interaction
specification and verification framework equipped with a
declarative and operational semantics, expressed in terms of
abduction.
The operational counterpart of the proposed framework has
been implemented and integrated in SOCS-SI, a tool that can be
used for on-the-fly verification of agent compliance with respect
to specified protocols. We show the functioning of the tool using
a simple case study.
2-3 finger trees are a functional representation of
persistent sequences supporting access to the ends in amortized
constant time, and concatenation and splitting in time logarithmic in
the size of the smaller piece. Representations achieving these bounds
have appeared previously, but 2-3 finger trees are much simpler, as
are the operations on them. Further, by defining the split operation
in a general form, we obtain a general purpose data structure that can
serve as a sequence, priority queue, search tree, priority search
queue and more.
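As a rough illustration of the structure described above, the following is a minimal Python sketch (not the Haskell of the original work); only left-end insertion and flattening are shown, and all names are illustrative:

```python
# A minimal sketch of 2-3 finger trees. A tree is Empty, a Single
# element, or Deep: a prefix digit, a middle tree of 2-3 nodes, and a
# suffix digit. Digits hold 1 to 4 elements.

class Empty:
    pass

class Single:
    def __init__(self, a):
        self.a = a

class Deep:
    def __init__(self, pr, middle, sf):
        self.pr, self.middle, self.sf = pr, middle, sf

class Node3:
    def __init__(self, a, b, c):
        self.items = [a, b, c]

def cons(a, t):
    """Insert a at the left end in amortised constant time."""
    if isinstance(t, Empty):
        return Single(a)
    if isinstance(t, Single):
        return Deep([a], Empty(), [t.a])
    if len(t.pr) == 4:
        # Overfull digit: push three elements one level down as a Node3.
        b, c, d, e = t.pr
        return Deep([a, b], cons(Node3(c, d, e), t.middle), t.sf)
    return Deep([a] + t.pr, t.middle, t.sf)

def to_list(t):
    if isinstance(t, Empty):
        return []
    if isinstance(t, Single):
        return [t.a]
    mid = [x for n in to_list(t.middle) for x in n.items]
    return t.pr + mid + t.sf

tree = Empty()
for x in reversed(range(10)):
    tree = cons(x, tree)
print(to_list(tree))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The amortised constant-time bound at the ends comes from the digits: most insertions just extend a digit, and the recursive push into the middle tree happens only when a digit is full.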
It is useful to be able to reason about agents at the intentional level:
the level of beliefs, desires and intentions. Traditional logics of belief
are concerned with the intensional content of such states, i.e. what such
states are about. However, there are reasons for holding that it is more
important to consider what information such states contain to the agent
which holds them. This is why we explain behaviour in terms of beliefs and
desires. I present a logic which deals with belief states in a way which is
fine-grained enough to represent resource bounds. I contrast this with the
traditional approach and show how the latter can be accommodated within my
account but not vice versa. Moreover, this logic has the desirable
computational properties of the traditional account - it is a decidable
modal logic, it can be used with existing model checking technology - plus
some added bonuses.
Partial and total program correctness are important
verification tasks. Partial correctness is often analysed by Hoare logic
or dynamic logic and based on a weakest liberal
precondition semantics, which is essentially relational. Total
correctness seems beyond the purely relational framework; the usual
weakest precondition semantics can be obtained by enriching relations
with termination constraints.
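For a finite transition relation, the relational weakest liberal precondition and its termination-enriched counterpart can be sketched directly (a Python illustration of the general idea, with invented names, not the algebraic framework of the talk):

```python
# Illustrative finite-state sketch: a program action is a set of
# transition pairs (s, t); predicates are sets of states.

def wlp(R, Q, states):
    # Weakest liberal precondition: the states all of whose
    # R-successors lie in Q. Partial correctness ignores
    # nontermination, so stuck states pass vacuously.
    return {s for s in states
            if all(t in Q for (u, t) in R if u == s)}

def wp(R, Q, states, terminating):
    # Total correctness: as wlp, but restricted to states from which
    # termination is guaranteed (here given as an explicit set).
    return wlp(R, Q, states) & terminating

states = {1, 2, 3}
R = {(1, 2), (2, 3)}               # 1 -> 2 -> 3
print(wlp(R, {3}, states))          # {2, 3}: 3 has no successors
print(wp(R, {3}, states, {1, 2}))   # {2}
```

The point of the algebraic approach sketched below is to capture exactly this kind of box-style backward operator abstractly, without committing to a concrete relational model.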
I will discuss simple correctness semantics based on idempotent
semirings. They use essentially the regular operations for modelling
program actions and forward and backward modal operators à la Jonsson
and Tarski for modelling state transitions. Different algebraic
correctness semantics are obtained from that basis using natural
algebraic constructions: (i) a modal operator algebra that considerably
simplifies soundness and correctness proofs of Hoare logic and allows
particularly concise specifications and proofs; (ii) a command algebra
for total correctness that reconstructs a previous approach by Nelson;
(iii) refinement algebras in the style of von Wright. (ii) and (iii) are
ongoing work and there are still many questions open.
Benefits of this regular approach to program correctness are as
follows: a uniform treatment of algebraic, set-based and modal
approaches to program correctness; a correspondence result for the
weakest liberal precondition and the weakest precondition operator; an
efficient automata-based decision procedure for partial correctness
analysis based on dynamic logic; point-free calculational reasoning
about programs and therefore suitability for mechanisation and automation.
In this talk, I plan to survey some of the properties, design
principles, and advantages (and disadvantages) of functional
programming with unrestricted type dependency, as exemplified by
the language EPIGRAM, introduced by myself and Conor in 2004.
Ondrej Rypacek,
University of Nottingham
Thomas Ågotnes,
University of Bergen/University of Liverpool
An interesting property of systems containing several autonomous
components (or "actors", "players" or "agents") is strategic ability, i.e.,
the ability of an agent or a group of agents to select a strategy such that
if the agent/group follow(s) the strategy a certain outcome will be the case
-- no matter what the other agents in the system do. While strategic ability
has been extensively studied in game theory, the topic of this talk is an
account of strategic ability in a formal logic. One of the most popular of
such logics is Alternating-time Temporal Logic (ATL). ATL does, however,
assume that agents at all times have complete information about the state of
the world. No commonly accepted semantics of strategic ability under
incomplete information has yet appeared. We propose such a semantics. It is
"non-standard" in the sense that formulae are interpreted in sets of states,
rather than in single states as in modal logic or in ATL. We also propose a
new epistemic operator which can be used to express strategic ability under
incomplete information, and show that the resulting logic is strictly more
expressive than existing proposals while retaining the same model checking
complexity. I will also discuss the interpretation of traditional operators,
such as negation, in a semantics like this. The talk is based on joint work
with Wojtek Jamroga.
Mark Jago,
University of Nottingham
School of Computer Science and IT
University of Nottingham
Jubilee Campus, Wollaton Road
Nottingham NG8 1BB
United Kingdom