## BCTCS 2005 Abstracts

Is Constructive Logic Relevant for Computer Science? (slides)
Thorsten Altenkirch, University of Nottingham

Modern Mathematics is based on classical logic and Zermelo-Fraenkel set theory. In this talk I'll discuss why a constructive approach, such as Martin-Löf's Type Theory, may be more appropriate for Computer Science.

Games for Algorithmic Problem Solving (invited tutorial) (slides)
Prof Roland Backhouse, University of Nottingham

Combinatorial games provide a very fruitful source of examples of algorithmic problem solving because they are all about winning --- identifying an algorithm that will guarantee that you win and your opponent loses! The theory of combinatorial games is also a fruitful source of illustrations of advanced algebraic structures (fields, vector spaces etc.), with the advantage that the problems are concrete and easily understood, but their solution is often very difficult.

This tutorial focuses on using combinatorial games to illustrate important concepts in fixed-point calculus. After a short introduction and overview, we use so-called "loopy games" (games that are not guaranteed to terminate) to illustrate least and greatest fixed points; we also relate properties of winning and losing to calculational rules, like the so-called "rolling rule" of the fixed-point calculus.
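As a toy illustration of the least-fixed-point view of winning and losing (a sketch invented for this summary, not taken from the tutorial: the game and all names are made up), the losing positions of a simple subtraction game can be computed by iterating a monotone operator until it stabilises:

```python
# Least-fixed-point sketch: compute the losing (P-) positions of the
# subtraction game with move set {1, 2, 3}, starting from the empty set
# and iterating until the set of losing positions stops growing.

MOVES = {1, 2, 3}

def losing_positions(limit, moves=MOVES):
    losing = set()
    changed = True
    while changed:                       # iterate the monotone operator
        changed = False
        for n in range(limit + 1):
            options = [n - m for m in moves if n - m >= 0]
            # n is losing iff every option is winning, i.e. every option
            # has some move into the current set of losing positions
            if n not in losing and all(
                any(o - m in losing for m in moves if o - m >= 0)
                for o in options
            ):
                losing.add(n)
                changed = True
    return losing

# For moves {1, 2, 3} the losing positions are the multiples of 4:
assert losing_positions(20) == {0, 4, 8, 12, 16, 20}
```

The empty set is below the answer and each iteration only adds positions, so the loop computes the least fixed point of the "all options are winning" operator.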

Foundations of a Generic C++ Library for Generalised Satisfiability Problems (slides)
Tony H Bao, University of Wales Swansea

The Satisfiability Problem (SAT) is an actively researched area in computational theory today. While many SAT solvers have been contributed over the years, most of them are written or rewritten from scratch because of the immense implementation detail required for efficiency, and generality has very often been sacrificed for the need for speed. However, with the help of Generative Programming (GP), generality and efficiency can coexist and play nicely together. In this talk I'll give an overview of some of the work I have done on OKLibrary - a generative C++ library aimed at providing a generic algorithmic framework for solving generalised satisfiability problems, with a combined focus on both high generality and high efficiency.
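For orientation, the kind of algorithm such solvers generalise is the classic DPLL procedure. The sketch below is a generic textbook version (it is not the OKLibrary's design, and the clause representation is invented for this example):

```python
# Textbook DPLL sketch: clauses are sets of non-zero integer literals,
# where -n denotes the negation of variable n.

def dpll(clauses, assign=None):
    """Return a satisfying assignment as {var: bool}, or None."""
    assign = dict(assign or {})
    clauses = [frozenset(c) for c in clauses]
    while True:                                    # unit propagation
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assign[abs(unit)] = unit > 0
        simplified = []
        for c in clauses:
            if unit in c:
                continue                           # clause satisfied
            c = c - {-unit}
            if not c:
                return None                        # empty clause: conflict
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return assign                              # everything satisfied
    lit = next(iter(clauses[0]))                   # branch on a literal
    for choice in (lit, -lit):
        result = dpll(clauses + [frozenset([choice])], assign)
        if result is not None:
            return result
    return None

assert dpll([[1], [-1, 2]]) == {1: True, 2: True}
assert dpll([[1, 2], [-1, 2], [-2, 1], [-1, -2]]) is None
```

Making the literal type, the clause container and the branching heuristic into generic parameters of such a skeleton is the kind of flexibility generative programming buys.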

Communications protocols consist of a set of rules to govern the communication between two or more agents. The study of such protocols illustrates a common feature of modern computing culture: the meeting of the mathematical approaches to modelling and specification favoured by computer science, and the pragmatic development characteristic of traditional engineering. Many communication protocols were first developed without a rigorous specification, and formal analysis has typically been first ventured only after problems have emerged in practice. A key problem in applying modelling and specification techniques in this context is identifying suitable abstract properties that can be usefully studied in isolation from the actual physical components of the implementation. We explore this issue and its implications by comparing and contrasting the role that formal specification has so far played in the practical development and analysis of communications protocols of two different kinds: authentication and network protocols.

Authentication protocols typically consist of agents who attempt to reliably establish the identity of the other agents through an exchange of messages. Analysis of authentication protocols establishes properties such as secrecy and the agreement of identities. Tools such as CASPER and the model-checker FDR have been successfully applied to check for these properties and have discovered flaws such as the message replay attack in the Needham-Schroeder authentication protocol. The properties being analysed are clear, easily identified and well understood.

Network protocols, such as those documented by the Internet Engineering Task Force (IETF), are quite varied, with each exhibiting a different set of traits. This means that network protocols cannot be characterised as neatly as authentication protocols. For instance, in some circumstances it can be difficult to formulate the correctness properties of a network protocol. This raises the question of what abstract support can be given to the engineering of such protocols through informal or semi-formal modelling and specification.

We review some recent literature describing efforts at network protocol specification and as a specific example we discuss the Internet Group Management Protocol (IGMP), part of the Internet Protocol (IP) specification and necessary for hosts on a network to receive multicast data. Multicast allows the transmission of data to multiple hosts simultaneously while only transmitting one copy of the data onto the physical network. This is in contrast to unicast, where a copy of the data is sent across the network to each recipient individually. IGMP allows a host to report its membership of multicast groups to a local multicast router.

Some of the problematic issues that arise in studying communications protocols appear to be related more generally to well-recognised challenges in applying formal methods in software development. They include the problem of integrating state-based and event-based formalisms within a coherent framework, and integrating manually generated code with code that has been derived from a formal specification. The overall objective of our study is to determine what further insight into these difficult problems of integration can be gained from a critical appraisal of previous work on formalising communications protocols.

This is joint work with Jane Sinclair and Meurig Beynon.

Undecidability of the Membership Problem for a Diagonal Matrix in a Matrix Semigroup (slides)
Paul Bell, University of Liverpool

We present a proof of the undecidability of diagonal matrix reachability in a matrix semigroup. Since our proof shows that the problem is undecidable in dimension 5, it sets a new bound for the problem (which was previously open in all dimensions). We will show a number of encodings that reduce the Post correspondence problem to the reachability of a diagonal matrix in a 5x5 matrix semigroup.
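The 5x5 construction is specific to this work, but the basic device behind such reductions — encoding words as matrices so that concatenation becomes matrix multiplication — can be illustrated with the standard injective embedding of the free monoid {a,b}* into 2x2 integer matrices (an illustration only, not the authors' encoding):

```python
# Classic embedding of {a,b}* into 2x2 integer matrices: 'a' and 'b' map
# to matrices generating a free monoid, so distinct words give distinct
# matrices and concatenation corresponds to multiplication.

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))

def mul(m, n):
    """Multiply two 2x2 matrices given as tuples of tuples."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def encode(word):
    m = ((1, 0), (0, 1))  # identity matrix
    for ch in word:
        m = mul(m, A if ch == 'a' else B)
    return m

# Concatenation is multiplication, and the map is injective:
assert encode('ab') == mul(encode('a'), encode('b'))
assert encode('ab') != encode('ba')
```

Injectivity is what lets word-matching problems like PCP be rephrased as reachability questions in matrix semigroups.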

This is joint work with Igor Potapov.

Termination Analysis of lambda-calculus and a subset of core ML (slides)
William Blum, University of Oxford

Lee, Jones and Ben-Amram introduced "size-change termination", a decidable property strictly stronger than termination. They proposed a method called the "size-change principle" to analyze it.

I propose an extension of the size-change principle to a subset of ML featuring ground type values, higher-order type values and recursively defined functions. Compared to other works, this is the first time that the size-change principle has been applied to a higher-order functional language.

The language handles if-then-else and let rec constructs natively. The resulting algorithm produces the expected result for higher-order values but can also analyze the size of ground type values. This extends the scope of the termination analyzer to some recursively defined functions operating on numbers.
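For readers unfamiliar with the size-change principle, here is a minimal sketch of the first-order criterion of Lee, Jones and Ben-Amram for a single self-recursive function (the edge representation is invented for this example; the talk's higher-order extension goes well beyond it):

```python
# A size-change graph for a call is a set of edges (i, r, j), meaning
# parameter i of the caller is '>' (strictly greater) or '>=' the j-th
# argument of the call. Termination holds iff every idempotent graph in
# the composition closure has a strictly decreasing self-edge.

def compose(g, h):
    best = {}
    for (i, r1, k) in g:
        for (k2, r2, j) in h:
            if k != k2:
                continue
            r = '>' if '>' in (r1, r2) else '>='   # strict if either is
            if best.get((i, j)) != '>':
                best[(i, j)] = r
    return frozenset((i, r, j) for (i, j), r in best.items())

def terminates(graphs):
    closure = set(map(frozenset, graphs))
    while True:                      # close under composition
        new = {compose(g, h) for g in closure for h in closure} - closure
        if not new:
            break
        closure |= new
    # Size-change criterion: every idempotent graph needs an edge i > i.
    return all(
        any(r == '>' and i == j for (i, r, j) in g)
        for g in closure
        if compose(g, g) == g
    )

# Ackermann-style calls: ack(m-1, _) and ack(m, n-1) -- terminates.
ACK = [frozenset({(0, '>', 0)}), frozenset({(0, '>=', 0), (1, '>', 1)})]
assert terminates(ACK) is True

# f(x, y) = f(y, x): arguments only swap, never shrink -- no guarantee.
SWAP = [frozenset({(0, '>=', 1), (1, '>=', 0)})]
assert terminates(SWAP) is False
```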

Functional programming systems tend to consist of a separate editor and compiler/run-time environment, and static type checking is performed after editing. However, with a document-centred system, there is no separation between the editor and the run-time environment. This is a talk about issues arising from the combination of functional programming with a document-centred interface. The main focus is on the advantages which type-checking brings to the user, and the best ways to provide clipboard operations, and other direct manipulation facilities, in a type-aware environment.

Checking Dependently Typed Programs (slides)
James Chapman, University of Nottingham

The goal of this work is to present an independent type checker for Epigram programs. Epigram elaborates programs written in its high-level syntax into the low-level formal language of type theory. Constructions in this formal language for even modest programs are large and complex therefore we cannot hope to just look at them to convince ourselves that programs are correct. An idea put forward by Pollack is to write a small type checker that we can understand directly to provide ourselves with the necessary conviction that our programs are correct. I will report on progress made to this end.

A Theory of Tracing Pure Functional Programs (slides)
Olaf Chitil, University of Kent

Tracing enables us to see how different parts of a program cause a computation to perform its observable input/output actions. Thus tracing supports understanding how a program works and locating the source of runtime errors. For declarative languages tracing methods that are different from, and arguably more powerful than, conventional methods for imperative languages have been developed. These methods take advantage of the purity of declarative languages, that is, explicit data flow and absence of side effects. Functional languages are particularly suitable for these methods, because they have simple and well-studied semantics, and lazy functional languages are the most widely used completely pure programming languages.

There now exist several tracing systems for Haskell, demonstrating the practical viability of different tracing methods. However, combining different methods and generalising them further proves hard to do and many anomalies and defects in existing systems have come to light. To overcome these problems we need simple models of the tracing systems. We need a theoretical foundation for tracing. It is the aim of an EPSRC project starting right now to provide this foundation.

In the talk I will describe several tracing methods. I will continue with a number of problems and shortcomings that have come to light. Finally I will outline the semantic theory that we intend to establish. The theory will describe both strict and non-strict languages and prove the correctness of various fault location algorithms.

Software verification, unlike hardware verification, often gives rise to infinite state systems, and hence the standard techniques that have been so successfully employed in the former rarely carry over to the latter. This talk is about unearthing various classes of infinite structures with decidable model-checking properties. In particular, we investigate higher-order grammars as generators of infinite trees. Higher-order grammars generate infinite term trees, and these term trees can be thought of as an accurate model of the syntax trees of higher-order recursive programs. In 2001, Knapik et al. managed to show that subject to a particular syntactic restriction on the grammars known as "safety", these trees were ripe for model checking -- i.e. they possessed a decidable MSO theory. We show that (at least up to level 2), this restriction can be removed. In this talk I will introduce the general problem of software verification, monadic second order logic, higher-order grammars, the "safety" restriction and culminate in the statement of our result (to appear in TLCA '05).

Game Semantics and CSP Based Approach for Software Model Checking (slides)
Aleksandar Dimovski, University of Warwick

We present an approach for software model checking based on game semantics and CSP. Open program fragments are compositionally modelled as CSP processes which represent their game semantics. This translation is performed by a prototype compiler. Observational equivalence and verification of properties are checked by traces refinement using the FDR tool. Effectiveness of our approach is evaluated on several examples.

Algebraic Decompositions of Finite Automata and Formal Models of Understanding (slides)
Attila Egri-Nagy, University of Hertfordshire

Wreath product decomposition of finite state automata has many different potential applications: it could yield automated object-oriented programming in software development; hierarchical decompositions may serve as formal methods for understanding in artificial intelligence; the least number of needed levels for decomposition provides a widely applicable integer-valued complexity measure; and it relates to conservation laws and symmetries in physics. The algebraic theory of such decompositions was first developed in the 1960s, but there had never been a computational implementation until recent work carried out by the author under the supervision of Prof. C. L. Nehaniv at the University of Hertfordshire. This talk briefly delineates the algebraic decomposition theory and reviews the variations of the algorithms presented in different proofs, then discusses the results of the first computational implementation.

The main objective of this proposal is to build a coordination-based solution for supporting modelling of dynamic human-centric collaborative systems. We will focus on how modelling of predictable human interactions, in terms of norms and norm violation, and externalizing their emerging properties can contribute to managing human components in a flexible way that can respond easily to evolving social and organizational settings as well as new requirements. These properties shall be conceived by studying how to correlate institutional power for modelling norms, category theory for specifying components and their architectural connectors, and finally temporal logic for injecting time and state notions. Appropriate problem analysis and design approaches will be investigated to provide tools and techniques for systematic architecture-based development of collaborative systems that are of a human-centric nature. Socio-technical systems are the ideal examples of such systems.

Network Discovery and Landmarks in Graphs (slides)
Thomas Erlebach, University of Leicester

Consider the problem of discovering the edges and non-edges of an unknown graph. Initially, only the vertex set of the graph is known. A query at a vertex reveals all edges that lie on shortest paths between that vertex and the rest of the graph. The goal is to minimize the number of queries made until all edges and non-edges have been discovered. We discuss upper and lower bounds on the competitive ratio achievable by on-line algorithms for this problem. We also give results for the off-line version of the problem, which turns out to be equivalent to the problem of placing landmarks in graphs.
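A single query of the kind described can be simulated directly (a sketch invented for this summary — the discovery strategy studied in the talk is more subtle): a query at v runs BFS, reveals every edge joining consecutive BFS layers (such an edge lies on a shortest path from v), and reveals as non-edges all pairs whose distances from v differ by at least 2.

```python
from collections import deque

def bfs_dist(adj, src):
    """Distances from src in an (assumed connected) undirected graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def query(adj, v):
    """Edges and non-edges revealed by one query at vertex v."""
    d = bfs_dist(adj, v)
    edges, non_edges = set(), set()
    nodes = sorted(adj)
    for i, u in enumerate(nodes):
        for w in nodes[i + 1:]:
            if abs(d[u] - d[w]) >= 2:
                non_edges.add((u, w))     # too far apart to be adjacent
            elif abs(d[u] - d[w]) == 1 and w in adj[u]:
                edges.add((u, w))         # lies on a shortest path from v
    return edges, non_edges

# On the path 1-2-3, one query at an end vertex settles every pair:
adj = {1: [2], 2: [1, 3], 3: [2]}
e, ne = query(adj, 1)
assert e == {(1, 2), (2, 3)} and ne == {(1, 3)}
```

Note that pairs in the same BFS layer remain unresolved, which is exactly why the choice and number of queries becomes an interesting optimisation problem.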

Coherence problems are widespread within category theory and, although individual problems have been addressed in detail, there seems to be no systematic way of tackling them. We show how coherence can be studied uniformly by using a 2-dimensional version of Knuth Bendix completion which constructs not only a complete rewrite system, but also an equational theory extending Levy's permutation equivalence. We exemplify our theory with a number of canonical examples.

The Soft Machines: Computing with the Code of Life (invited talk) (slides)
Prof Alan Gibbons, King's College London, and Dr Martyn Amos, University of Exeter

Cellular computing is a new, rapidly expanding field of research at the intersection of biology and computer science. It is becoming clear that, in the 21st century, biology will be characterized more often as an information science. The flood of data generated first by the various genome projects, and now by high-throughput gene expression studies, has led to increasingly fruitful collaborations between biologists, mathematicians and computer scientists. However, until recently, these collaborations have been largely one-way, with biologists taking mathematical expertise and applying it in their own domain. With the onset of molecular and cellular computing, though, the flow of information has been growing in the reverse direction. Computer scientists are now taking biological systems and modifying them to obtain computing devices. Cells are being re-engineered to act as microbial processors, smart sensors, drug delivery systems and many other classes of device. This talk traces the brief history of cellular computing and suggests where its future may lie. No biological background will be assumed.

This talk is sponsored by the London Mathematical Society.

We formalise the syntax and static semantics of the recently proposed specification language CSP-CASL, in the style of Natural Semantics. We then show how to build a parser and a static analyser according to these definitions within the framework of Hets, the Heterogeneous Tool Set for the algebraic specification language CASL. On the technical side this uses the combinator parser library Parsec, in the functional programming language Haskell. All this takes place within the wider context of developing tool support (theorem prover, model checker, tools for computing different representations) for CSP-CASL.

Samoa: Formal Tools for Securing Web Services (invited talk) (slides)
Dr Andrew Gordon, Microsoft Research, Cambridge

An XML web service is, to a first approximation, a wide-area RPC service in which requests and responses are encoded in XML as SOAP envelopes, and transported over HTTP. Applications exist on the internet (for programmatic access to search engines and retail), on intranets (for enterprise systems integration), and are emerging between intranets (for the e-science Grid and for e-business). Specifications (such as WS-Security, now standardized at OASIS) and toolkits (such as Microsoft's WSE product) exist for securing web services by applying cryptographic transforms to SOAP envelopes.

The underlying principles, and indeed the difficulties, of using cryptography to secure RPC protocols have been known for many years, and there has been a sustained and successful effort to devise formal methods for specifying and verifying the security goals of such protocols. One line of work, embodied in the spi calculus of Abadi and Gordon and the applied pi calculus of Abadi and Fournet, has been to represent protocols as symbolic processes, and to apply techniques from the theory of the pi calculus, including equational reasoning, type-checking, and resolution theorem-proving, to attempt to verify security properties such as confidentiality and authenticity, or to uncover bugs.

The goal of the Samoa Project is to exploit these recent theoretical advances in the analysis of security protocols in the practical setting of XML web services. This talk will present some of the outcomes of our research, including: automatic analysis of hand-written pi-calculus descriptions of WS-Security protocols; automatic generation of pi-calculus descriptions from WSE config and policy files; and formal analysis of the WS-Trust and WS-SecureConversation specifications. Along the way, we have discovered a range of security issues.

Based on joint work with K. Bhargavan, R. Corin, C. Fournet, and R. Pucella at Microsoft Research, Cambridge.

A Compiler for a Functional Quantum Programming Language (slides)
Jonathan Grattage, University of Nottingham

We introduce a compiler for the functional quantum programming language QML, developed in Haskell. The compiler takes QML expressions as input and outputs a representation of quantum circuits (via the category FQC of finite quantum computations) which can be simulated by the simulator presented here, or by using a standard simulator for quantum gates. We briefly discuss the structure of the compiler and how the semantic rules are compiled.

Level of Repair Analysis and Minimum Cost Homomorphisms of Graphs (slides)
Gregory Gutin, Royal Holloway, University of London

Level of Repair Analysis (LORA) is a prescribed procedure for defence logistics support planning. For a complex engineering system containing perhaps thousands of assemblies, sub-assemblies, components, etc. organized into several levels of indenture and with a number of possible repair decisions, LORA seeks to determine an optimal provision of repair and maintenance facilities to minimize overall life-cycle costs. For a LORA problem with two levels of indenture and three possible repair decisions, which is of interest to the UK and US militaries and which we call LORA-BR, Barros (1998) and Barros and Riley (2001) developed certain branch-and-bound heuristics. The surprising result of this talk is that LORA-BR is, in fact, polynomial-time solvable. To obtain this result, we formulate the general LORA problem as an optimization homomorphism problem on bipartite graphs, and reduce a generalization of LORA-BR, LORA-M, to the maximum weight independent set problem on a bipartite graph.

This is joint work with A. Rafiey, A. Yeo and M. Tso.

Bisimilarity is the canonical behavioural equivalence for process algebras; weak bisimilarity generalises this by admitting silent behaviour. Weak bisimulation approximants approach weak bisimilarity by considering games whose length is bounded by an ordinal number; so, for example, the nth approximant relates exactly those processes whose behaviour cannot be distinguished in a game lasting n turns. On BPP, a simple process algebra, it is believed that one never need play a game that lasts for more than omega times two steps (i.e. that the approximant hierarchy collapses at level omega times two). However, currently the only proven bound is the Church-Kleene ordinal (the least non-recursive ordinal number). I wish to show how the bound can be brought down to omega to the omega, and suggest an approach for reaching the omega-times-two conjecture.
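On a finite-state system the approximant idea can be made concrete (a finite illustration invented for this summary; the talk concerns infinite-state BPP, where the hierarchy runs through transfinite ordinals): start from the relation containing all pairs of states and repeatedly refine it, so that the nth relation in the chain is exactly the nth (strong) approximant.

```python
def approximants(states, trans):
    """trans: set of triples (p, a, q). Returns the chain [R_0, R_1, ...]
    up to and including the first stable approximant."""
    labels = {a for (_, a, _) in trans}

    def succ(p, a):
        return {q for (s, l, q) in trans if s == p and l == a}

    def refine(R):
        # (p, q) survives iff each move of one side is matched by the
        # other into a pair of the current relation.
        return {
            (p, q) for (p, q) in R
            if all(
                all(any((p2, q2) in R for q2 in succ(q, a))
                    for p2 in succ(p, a)) and
                all(any((p2, q2) in R for p2 in succ(p, a))
                    for q2 in succ(q, a))
                for a in labels
            )
        }

    chain = [{(p, q) for p in states for q in states}]
    while True:
        nxt = refine(chain[-1])
        if nxt == chain[-1]:
            return chain
        chain.append(nxt)

# p loops on 'a' forever; q can do exactly one 'a'. The pair (p, q)
# survives the first approximant but is distinguished at the second.
chain = approximants({'p', 'q', '0'}, {('p', 'a', 'p'), ('q', 'a', '0')})
assert ('p', 'q') in chain[1] and ('p', 'q') not in chain[2]
```

On finite systems the chain always stabilises at a finite level; the interest of BPP is precisely that infinitely many refinement rounds, indexed by ordinals, may be needed.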

Experiments and Optimal Results for Outerplanar Drawings of Graphs (slides)
Hongmei He, Loughborough University

We will review our experiments with heuristics and genetic algorithms for low crossing outerplanar drawings and on classes of graphs with optimal outerplanar drawings.

Number Systems and Data Structures (invited talk) (slides)
Dr Ralf Hinze, University of Bonn

Data structures are a lot like number systems: adding an element to a container corresponds to incrementing a number, deleting an element from a container corresponds to decrementing a number. This talk takes a closer look at the intimate relationship between data structures and number systems. We show, in particular, how number systems with certain properties can be utilised in the design and analysis of data structures. To this end, we discuss various number systems - some well-known, some less so - and show how operations on container types can be modelled after their number-theoretic counterparts. As a worked-out example, we introduce a variant of finger trees that supports deque operations in amortised constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece. Finally, we exhibit the number systems that underlie well-known data structures such as red-black trees and 1-2 brother trees.
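The correspondence can be made concrete with the classic binary random-access list (an Okasaki-style sketch; the representation here is invented for this example and is not one of the talk's structures): elements live in complete trees whose sizes are exactly the powers of two in the binary representation of the length, so 'cons' behaves like binary increment, with tree linking playing the role of the carry.

```python
def link(t1, t2):
    return (t1, t2)               # join two trees of equal size

def cons(x, digits):
    """digits: list of (size, tree) pairs, sizes strictly increasing."""
    carry = (1, x)                # a one-element tree: the incoming "1"
    i = 0
    while i < len(digits) and digits[i][0] == carry[0]:
        # two equal digits collide: link the trees and carry onwards
        carry = (2 * carry[0], link(carry[1], digits[i][1]))
        i += 1
    return [carry] + digits[i:]

xs = []
for item in 'abcde':
    xs = cons(item, xs)

# Length 5 = 101 in binary, so the structure holds trees of sizes 1 and 4:
assert [size for size, _ in xs] == [1, 4]
```

Cost analyses transfer the same way: a cons costs as much as a binary increment, so the amortised-constant-time argument for a binary counter applies verbatim to the data structure.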

This talk is sponsored by the Engineering and Physical Sciences Research Council.

In this talk we will concentrate on the following four computational classes of monoid: rational, automatic, asynchronously automatic and hyperbolic. We study algebraic and computational properties of the relations between these classes of monoids, and we will present their complete inclusion structure. In doing so we answer, in particular, open questions concerning the relationship between automatic and hyperbolic monoids: hyperbolic monoids are not necessarily automatic.

Starting with an evaluator for a language, an abstract machine for the same language can be mechanically derived using successive program transformations. This has relevance to studying both the space and time properties of programs because these can be estimated by counting transitions of the abstract machine and measuring the size of the additional data structures needed, such as environments and stacks. In this talk I will use this process to derive a function that accurately counts the number of steps required to evaluate expressions in a simple language.
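The end product of such a derivation, for a tiny arithmetic language, looks like the sketch below (the cost model here — one transition per literal and per operator application — is invented for illustration, not the one derived in the talk):

```python
def eval_count(expr):
    """expr: an int literal or a tuple ('add' | 'mul', e1, e2).
    Returns (value, number of evaluation steps)."""
    if isinstance(expr, int):
        return expr, 1                      # one step: load a literal
    op, e1, e2 = expr
    v1, s1 = eval_count(e1)
    v2, s2 = eval_count(e2)
    value = v1 + v2 if op == 'add' else v1 * v2
    return value, s1 + s2 + 1               # one step: apply the operator

# 1 + (2 * 3): three literals, two operator applications -- five steps.
assert eval_count(('add', 1, ('mul', 2, 3))) == (7, 5)
```

The point of deriving the counter from an abstract machine, rather than writing one ad hoc as above, is that the counts are then guaranteed to reflect actual machine transitions.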

We are concerned with the problem of selecting the location for facilities, subject to some constraints, in a given network. The nodes selected are called sources and users at other nodes access the facilities at the sources through the network. Thus a measure of the robustness of the network is the number of disjoint paths there are between each user and the sources.

More formally, a set of (k,l)-sources for a digraph D = (V,A) is a subset S of V such that for any v in V there are k arc-disjoint paths that each join a vertex of S to v and l arc-disjoint paths that each join v to S. The Source Location Problem is to find a minimum size set of (k,l)-sources. We provide a polynomial algorithm to solve this problem and discuss progress on other variants of the Source Location Problem.
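By Menger's theorem, counting arc-disjoint paths reduces to unit-capacity maximum flow, so the defining property of a candidate source set can be checked with a standard augmenting-path computation. The sketch below is the textbook reduction, not the paper's algorithm (all names are invented):

```python
from collections import deque

def arc_disjoint_paths(arcs, sources, target):
    """Maximum number of arc-disjoint paths from the set `sources`
    to `target` in the digraph given by the list of arcs (u, v)."""
    SUPER = object()                  # super-source joined to all sources
    cap = {}
    for (u, v) in arcs:
        cap[(u, v)] = cap.get((u, v), 0) + 1
    for s in sources:
        cap[(SUPER, s)] = float('inf')
    flow = 0
    while True:
        parent = {SUPER: None}        # BFS for an augmenting path
        queue = deque([SUPER])
        while queue and target not in parent:
            u = queue.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if target not in parent:
            return flow
        v = target                    # push one unit along the path found
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
            v = u
        flow += 1

# Two arc-disjoint paths from s to t through the diamond s->a->t, s->b->t:
arcs = [('s', 'a'), ('s', 'b'), ('a', 't'), ('b', 't')]
assert arc_disjoint_paths(arcs, ['s'], 't') == 2
```

A set S is a set of (k, l)-sources exactly when this count is at least k from S to every v (and at least l in the reverse digraph); the hard part, addressed by the talk, is finding a minimum such S.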

This is joint work with Jan van den Heuvel.

Rationally updating and revising beliefs is traditionally thought to be possible only for computationally ideal agents who can compute all consequences of their beliefs, regardless of time and space limitations. Focusing on rule-based agents, we show how resource bounded agents can revise and update their beliefs in time linear in the size of the agent's state and program. We argue that such operations are rational, as they satisfy all but one of the AGM postulates for revision. We also discuss the relationship between belief revision and update.

Combinatorial Tools for Propositional Satisfiability Decision (slides)
Oliver Kullmann, University of Wales Swansea

Boolean formulas in CNF (conjunctive normal form) are the dominating input format for SAT (satisfiability) solvers. I will discuss a natural (and well-known) generalisation of CNFs as "combinatorial data structures", which leads to a very natural approach to (hyper)graph colouring problems; a theoretical application, generalising a theorem of Seymour, is outlined.

Coalgebras X-->TX for a functor T on a category X are a well-established model for transition systems. The basic idea of the talk is that specification languages for these transition systems are obtained from the dual functor L of T on the Stone dual of the category X. But the dual L itself yields only 'abstract propositions' (meaning equivalence classes of formulas up to interderivability), neither a practical definition of the formulas nor a calculus defining derivability. We introduce the notion of a functor L being presentable by operations and equations and show that each presentation of L gives rise to a sound, complete, and expressive modal logic for T-coalgebras (modal operators are induced by the operations, modal axioms by the equations).

Pareto Optimality in House Allocation Problems (slides)
David Manlove, University of Glasgow

In many practical situations we have two sets of entities S, T, and we seek to assign the members of S to elements of T. For example, S might be a set of people and T might be a set of houses - this corresponds to large-scale applications such as the allocation of students to campus housing in American universities, or the allocation of families to government-subsidised housing in China.

Formally, an instance of the House Allocation problem (HA) involves a set A of agents, and a set H of houses. Each agent in A has an acceptable set of houses and ranks this set in strict order of preference. A matching M of agents to acceptable houses is Pareto optimal if there is no other matching M' such that (i) some agent is better off in M' than in M, and (ii) no agent is worse off in M' than in M. In this talk I will present an efficient algorithm for the problem of finding a maximum Pareto optimal matching in an instance of HA, together with related results concerning Pareto optimal matchings.
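A classic way to obtain *some* Pareto optimal matching (not the maximum one the talk's algorithm finds) is serial dictatorship: agents pick, in a fixed order, their best house still available. The instance below is invented for illustration:

```python
def serial_dictatorship(prefs):
    """prefs: {agent: [houses in strict order of preference]}.
    Each agent, in turn, takes their most preferred free house."""
    taken, matching = set(), {}
    for agent, ranking in prefs.items():
        for house in ranking:
            if house not in taken:
                matching[agent] = house
                taken.add(house)
                break
    return matching

prefs = {'a1': ['h1', 'h2'], 'a2': ['h1'], 'a3': ['h2', 'h3']}
assert serial_dictatorship(prefs) == {'a1': 'h1', 'a3': 'h2'}
```

The greedy result here has size 2 and is Pareto optimal, yet the same instance admits a Pareto optimal matching of size 3 (a1 to h2, a2 to h1, a3 to h3) — which is exactly why finding a *maximum* Pareto optimal matching needs more than greedy choice.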

The problem model described here has applications in many other contexts, such as the allocation of students to projects, and also the Scottish Executive's Teacher Induction Scheme. This is joint work with David Abraham, Katarína Cechlárová and Kurt Mehlhorn.

Types characterize the data and programs which in some way ‘make sense’. Dependent types—types expressed in terms of data—can capture notions of ‘sense’ which are relative to that data. There are many such notions which inhabit our heads but not so typically our programs. Dependently typed programming seeks to reduce this gap between ideas and their execution. For example, we can take i×j to be the type of matrices with i rows and j columns, then give multiplication the type

∀i,j,k.  i×j → j×k → i×k
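The dependent type above rules out mismatched dimensions before the program runs; in a simply typed or untyped setting the best available is the runtime analogue sketched here (plain Python, names invented for illustration):

```python
def matmul(m, n):
    """Multiply an i x j matrix by a j x k matrix, represented as lists
    of rows, checking the shared dimension j at runtime."""
    i, j = len(m), len(m[0])
    j2, k = len(n), len(n[0])
    assert j == j2, "inner dimensions must agree"
    return [[sum(m[r][x] * n[x][c] for x in range(j)) for c in range(k)]
            for r in range(i)]

assert matmul([[1, 2], [3, 4]], [[5], [6]]) == [[17], [39]]
```

In a dependently typed language the assertion disappears: a program applying multiplication to matrices of mismatched dimensions simply fails to type-check.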

Epigram is a dependently typed functional language, equipped with an interactive programming environment. Epigram programs elaborate to constructions in Type Theory by a process strongly resembling inductive theorem proving. This is no idle coincidence. Epigram uses induction principles—expressed by types and interpreted by programs—to characterize derivable notions of pattern matching and recursion in a flexible and extensible way. Epigram's definition and elaboration rules can be found in ‘The view from the left’, by myself and James McKinna. The language and system continue to be developed at Durham, St Andrews, Royal Holloway and here in Nottingham.

I shall illustrate Epigram by example, working ‘live’ at the machine, journeying from induction principles to functional programs and back again.

We consider the problem where n mobile users want to get data from m base stations. Every base station is capable of sending data to an arbitrary user, and this creates an interference region around the base station in the form of a disk with radius equal to the distance from the user to the base station. We assume that sending data to a user takes one time unit (one round) and that a user can receive the data only when it is not within the interference produced by another base station. The goal is to assign users to base stations and find the round in which each user is served by its assigned base station, while minimizing the total number of rounds. We study both the one-dimensional and the two-dimensional case. We present a 2-approximation algorithm for the 1D case and leave its exact complexity open. For the 2D case we show the problem is NP-hard and give lower bounds on some natural greedy algorithms.

Total Pasta: Static Analysis For Unfailing Pointer Programs (slides)
Neil Mitchell, University of York

Most errors in computer programs are only found once they are run, so critical errors can be missed due to inadequate testing. If static analysis is performed, then the possibility exists for detecting such errors, and correcting them. This helps to improve the quality of the resulting code, increasing reliability.

A static analysis has been developed and implemented for the experimental pointer based language Pasta, which is designed to represent pointer manipulating code, such as data structures. The analysis checks for totality, proving that a particular procedure cannot crash and will always terminate. Where a procedure does not satisfy this, the preconditions for the procedure are generated.

Geometric Interpretation of Crossover (slides)
Alberto Moraglio, University of Essex

Boiling down the non-algorithmic differences, the various evolutionary algorithms differ only in the solution representation and the corresponding genetic operators (mutation and crossover) customized for the specific representation. A way to treat different representations uniformly is therefore a prerequisite for unification. The focal questions of my research are: what is mutation? what is crossover? what is common to all mutation operators and all crossover operators beyond the specific representation? In the talk I will give my answers to these questions and discuss a number of surprising implications: (i) unveiling the geometric nature of evolutionary search; (ii) simplifying and clarifying the connections between mutation, crossover, neighbourhood structure and fitness landscape; (iii) giving rigorous, general, representation-independent definitions of genetic operators, forming a solid ground for the generalization of pre-existing representation-specific theories (many incomplete, fragmented, partially inconsistent bits of theory); and (iv) suggesting an easy and automatic way to do principled crossover design for any solution representation.
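One concrete instance of the geometric view (an illustration under my own reading of it, not the talk's formal definitions): call a crossover geometric when every offspring lies in the metric segment between its parents. For binary strings under Hamming distance, uniform crossover has exactly this property, since each gene is copied from one parent or the other:

```python
import random

def uniform_crossover(p1, p2, rng):
    """Each gene of the child comes from one of the two parents."""
    return [rng.choice([a, b]) for a, b in zip(p1, p2)]

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def in_segment(child, p1, p2):
    # child lies between p1 and p2 iff d(p1,c) + d(c,p2) == d(p1,p2)
    return hamming(p1, child) + hamming(child, p2) == hamming(p1, p2)

rng = random.Random(0)
p1, p2 = [0, 0, 1, 1], [1, 0, 1, 0]
assert all(in_segment(uniform_crossover(p1, p2, rng), p1, p2)
           for _ in range(100))
```

The representation-independence claim is then that the same segment condition, read in the metric of any other representation, picks out the "right" crossover for that representation.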

Generic Programming in a Dependently Typed Language (slides)
Peter Morris, University of Nottingham

Using dependent types it is possible to write equality functions whose result type is more informative than Bool: the result is either a proof that the two arguments are in fact the same, or a proof that they are not. Another dependently typed technique is to use reflection to define the type of regular types (with constructors mu, +, *, One, Zero, etc.) and then the type of elements of a given regular type (in for mu, inl/inr for +, pairing for *, etc.). By combining reflection with this more informative equality testing, we can write a datatype-generic equality test that is obviously correct from its type.
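As an untyped sketch of the reflection idea (in Python, so the result is a mere Boolean; the proofs only exist in the dependently typed setting), one can encode codes for regular types and define a single equality function by recursion on the code. The encoding below is illustrative only:

```python
# Codes for regular types: ('One',), ('Sum', a, b), ('Prod', a, b),
# and ('Mu', f) where f maps the code for the recursive slot to a code.
ONE = ('One',)
def SUM(a, b): return ('Sum', a, b)
def PROD(a, b): return ('Prod', a, b)
def MU(f): return ('Mu', f)

def geq(code, x, y):
    """Generic equality, by recursion on the type code."""
    tag = code[0]
    if tag == 'One':
        return True                          # both are the unit value ()
    if tag == 'Sum':
        xc, xv = x
        yc, yv = y
        if xc != yc:                         # inl vs inr: different
            return False
        return geq(code[1] if xc == 'inl' else code[2], xv, yv)
    if tag == 'Prod':
        return geq(code[1], x[0], y[0]) and geq(code[2], x[1], y[1])
    if tag == 'Mu':
        # unfold one layer: 'in' wraps an element of f applied to Mu f
        return geq(code[1](code), x[1], y[1])
    raise ValueError(tag)

# Natural numbers as Mu X. One + X
NAT = MU(lambda rec: SUM(ONE, rec))
def zero(): return ('in', ('inl', ()))
def suc(n): return ('in', ('inr', n))
```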

Tool support for Modular SOS (slides)
Peter Mosses, University of Wales Swansea

Modular SOS (MSOS) is a variant of conventional SOS. Using MSOS, the transition rules for each construct of a programming language can be given incrementally, and do not need reformulation when further constructs are added to the described language. MSOS thus provides an exceptionally high degree of modularity in semantic descriptions, removing a shortcoming of the original SOS framework.

The crucial feature of MSOS is that labels on transitions are now morphisms of a category, and are exploited to ensure the appropriate flow of information (such as environments and stores) during computations.

The talk first recalls the foundations of MSOS, and illustrates how MSOS descriptions are written in practice. It then introduces a Prolog-based system that can be used for validating MSOS descriptions. An earlier version of the system has been used to support undergraduate courses on semantics at Aarhus and Monash.

The research proposes a model of extensible knowledge and its dynamic change using classical computer science. The model has been systematically developed into working educational software for module organisation.

Exploring Lightweight Implementations of Generics (slides)
Bruno Oliveira, University of Oxford

In the paper "A lightweight implementation of generics and dynamics", Ralf Hinze presents a simple approach to generic programming that does not require major extensions to the Haskell 98 type system. Type representations are used to "typecase" over the structure of types. While the approach is quite powerful, supporting generic functions over nearly all Haskell datatypes, it still has some limitations. One such limitation is the impossibility of overriding the behaviour of a polytypic definition at specific instances.

In this talk I will show how to address this problem using ad-hoc polymorphism. This can be readily implemented in Haskell using a two-parameter type class, but that encoding has some drawbacks, which could be removed with a small language extension.

A Framework Based on Coordination and Software Architecture for Mobile Systems (slides)
Cristovao Oliveira, University of Leicester

Due to technological advances in recent years, of which the World Wide Web and mobile phones are only two striking examples, distributed and mobile systems have become more common. Disciplines like Software Engineering have studied the decomposition and design of systems, but there is still very little support for the development of this new kind of application. The goal of this work is to develop a methodological approach and support tools for the modelling of distributed and mobile systems along three architectural views: Computation, Coordination and Distribution. This approach has a sound semantic basis in the CommUnity architectural approach developed by Jose Luiz Fiadeiro, Antonia Lopes and Michel Wermelinger. The views to be developed will also provide support for the distributed execution of the modelled system. Finally, the CommUnity Workbench, developed by the candidate in his MSc thesis, will be extended to cover the overall framework, thus providing a tool for modelling mobile systems according to the developed architectural views, following the developed methodology, and animating them using the developed technologies.

Automatic Presentations and Classes of Semigroups (slides)
Graham Oliver, University of Leicester

Structures presentable by finite automata are of growing interest, particularly as the closure properties of automata give a generic algorithm for deciding first order properties. The work presented here is a contribution to the foundational problem: given a class of structures X, classify the members of X which have an automatic presentation. The classes considered here are various interesting subclasses of the class of finitely generated semigroups. In particular, a classification for the class of finitely generated groups allows a direct comparison with the theory of automatic groups.

Computational Completeness of Rule-Based Languages (slides)
Detlef Plump, University of York

We study the computational completeness of rule-based programming languages on arbitrary domains. We only assume a universe of rules and a domain of objects such that rules induce binary relations on the domain. Programs are built from three constructs: nondeterministic application of a finite set of rules, either (a) in one step or (b) as long as possible, and (c) sequential composition of programs. We present a completeness condition guaranteeing that every computable binary relation (and hence every computable partial function) on the domain is computed by some program. Instantiating the abstract framework with string, term or graph rewriting yields computationally complete languages. In each of these cases, omitting either the construct 'as long as possible' or the sequential composition results in an incomplete language. For string and graph rewriting, the one-step application of rules cannot be omitted either, and hence the string and graph rewriting languages are minimal.
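The three constructs are easy to sketch for the string-rewriting instantiation (an illustration only, not the talk's abstract framework):

```python
import random

def step(rules, s, rng=random):
    """One nondeterministic application of some rule (lhs -> rhs):
    pick a random rule and a random occurrence of its left-hand side."""
    candidates = [(lhs, rhs, i)
                  for lhs, rhs in rules
                  for i in range(len(s))
                  if s.startswith(lhs, i)]
    if not candidates:
        return None                      # no rule applicable
    lhs, rhs, i = rng.choice(candidates)
    return s[:i] + rhs + s[i + len(lhs):]

def as_long_as_possible(rules, s, rng=random):
    """Iterate `step` until no rule applies (may diverge in general)."""
    while True:
        t = step(rules, s, rng)
        if t is None:
            return s
        s = t
```

Sequential composition is just function composition of such programs. For instance, the single rule "ba" -> "ab" applied as long as possible sorts any string over {a, b}, whatever nondeterministic choices are made.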

We describe a new tool called CSP-Prover which is an interactive theorem prover dedicated to refinement proofs within the process algebra CSP. It aims specifically at proofs on infinite state systems, which may also involve infinite non-determinism. Semantically, CSP-Prover offers both classical approaches to denotational semantics: the theory of complete metric spaces as well as the theory of complete partial orders. Technically, CSP-Prover is based on the generic theorem prover Isabelle, using the logic HOL-Complex. Within this logic, the syntax as well as the semantics of CSP is encoded, i.e., CSP-Prover provides a deep encoding of CSP. Our applications include refinement proofs (1) in the context of an industrial case study on formalising an electronic payment system and (2) for the classical example of the dining mathematicians.

The linear complexity of a sequence (i.e. the length of the shortest recurrence relation, or Linear Feedback Shift Register, which generates the sequence) is a fundamental parameter for virtually all applications of linearly recurrent sequences, including cryptography. Computing the linear complexity of a linearly recurrent sequence in general takes quadratic time, using the Berlekamp-Massey algorithm. For binary sequences whose period N is a power of two, Games and Chan developed a more efficient, linear-time algorithm. For such sequences, the k-error linear complexity (i.e. the minimum complexity that can be obtained by changing up to k terms in a period) can also be computed in linear time and space using the Stamp-Martin algorithm. The whole k-error linear complexity spectrum can be computed in O(N log^2 N) time by the Lauder-Paterson algorithm. In our talk we will first show that one can use the Games-Chan, Stamp-Martin and Lauder-Paterson algorithms to compute the analogous notions of linear complexity and k-error linear complexity for finite sequences. Secondly, using ideas from the Stamp-Martin and Lauder-Paterson algorithms, we develop a new algorithm which computes the minimum number of errors needed to bring the linear complexity of a sequence below a given value. This algorithm has linear (bit) time and space complexity and can also be used for encoding and decoding certain binary repeated-root codes. We thus improve on a previous O(N log^2 N) decoding algorithm of Lauder and Paterson.
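The Games-Chan algorithm itself is short; a Python transcription, for binary sequences whose period is a power of two, might look like this (a sketch for illustration, halving the sequence at each stage):

```python
def games_chan(s):
    """Linear complexity of a binary sequence whose period N = len(s)
    is a power of two (Games-Chan algorithm, O(N) time)."""
    assert len(s) & (len(s) - 1) == 0, "period must be a power of two"
    c = 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        b = [x ^ y for x, y in zip(left, right)]
        if any(b):
            c += half        # complexity exceeds half the current period
            s = b
        else:
            s = left         # the two halves agree: recurse on one half
    return c + s[0]
```

For example, the period-4 impulse sequence 1,0,0,0 has linear complexity 4, while the all-ones sequence has complexity 1.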

Event Structure Semantics for Higher Order Process Calculi (slides)
Lucy Saunders-Evans, University of Cambridge

There are many different process calculi for modelling concurrent processes. It is desirable to give a denotational semantics for them that will allow them to be related. Event structure spans appear to provide a very natural semantics for a variety of higher order processes, including Affine HOPLA [Winskel, Nygaard]. However, they are currently limited by the fact that many forms of parallel composition of processes are given an interleaving semantics. I will explore some ideas for overcoming this problem and present a small process language, with partially synchronous parallel composition as a construct, together with its semantics.

A Typed Semantics for Languages with Higher-Order Store and Subtyping (slides)
Jan Schwinghammer, University of Sussex

In languages like ML, references to functions may be created. The heap store of such languages is often referred to as higher-order. While it is fairly straightforward to obtain untyped domain models for languages involving higher-order store, modelling types is much harder. This is due to a mixed-variant, mutually recursive definition of types and the store.
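The phenomenon is easy to demonstrate in an untyped setting. The snippet below (Python standing in for an untyped heap) uses Landin's knot, tying a recursive call through the store rather than through a name; it is precisely this circularity, with stored code reading the store that contains it, that makes the typed models hard:

```python
# A "higher-order" store: locations mapped to values, including closures.
store = {}

# Landin's knot: the stored closure calls back into the store at run time,
# so recursion is obtained through the heap, not by a recursive definition.
store['fact'] = lambda n: 1 if n == 0 else n * store['fact'](n - 1)

assert store['fact'](5) == 120

# Because the lookup happens at call time, overwriting the location
# changes the behaviour of code that was stored earlier.
store['fact'] = lambda n: 0
assert store['fact'](5) == 0
```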

Paul Levy has presented an elegant typed model of higher-order store, based on a possible worlds semantics. I will briefly outline its construction before extending the type system with a simple notion of subtyping. This has the consequence that derivations of typing judgements are no longer unique. By adapting a neat proof method due to John Reynolds I prove coherence, using a Kripke logical relation and retractions between the typed and untyped models. The resulting semantics is sufficiently expressive to interpret Abadi and Cardelli's (imperative) object calculus.

Graph Programs for Graph Algorithms (slides)
Sandra Steinert, University of York

Graph programs as introduced by Habel and Plump provide a simple and yet computationally complete language for computing functions and relations on graphs. We have extended this language so that numerical computations on labels can be conveniently expressed. The resulting language has a simple syntax and semantics and therefore facilitates reasoning about programs. In this talk, we will present the language GP and its semantics. We will demonstrate the use of GP by giving a program for Dijkstra's shortest-path algorithm and showing how to prove correctness and complexity properties.
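For readers unfamiliar with the algorithm being expressed in GP, a conventional implementation of Dijkstra's shortest-path algorithm (in Python, with a binary heap) is sketched below for comparison; the talk's point is that the same computation can be written as a small set of graph rewrite rules:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest-path distances. `graph` maps each node to
    a list of (neighbour, non-negative edge weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                     # stale queue entry: skip it
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w          # relax the edge u -> v
                heapq.heappush(heap, (d + w, v))
    return dist
```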

Routing via Single-Source and Multiple-Source Queries in Static Sensor Networks (slides)
Chang Su, University of Liverpool

Sensor networks are a new and promising field in wireless computing. They consist of a large number of sensor nodes with small transmission range, limited memory, limited power and limited computational capacity. These features make communication in sensor networks different from that in traditional networks. This talk is about the routing problem in static sensor networks. We show how to perform routing in both the single-source and the multiple-source case using limited memory and energy. After preprocessing, routing can be completed with an optimal number of transmissions in the single-source case; for the multiple-source case, our algorithm is also asymptotically optimal.

The Gap between Crossing Numbers and Outerplanar Crossing Numbers (slides)
Ondrej Sykora, University of Loughborough

An outerplanar drawing of an n-vertex graph G=(V(G),E(G)) is a drawing in which the vertices are placed on the corners of a convex n-gon in the plane and each edge is drawn using one straight line segment. We derive a general lower bound on the number of crossings in any outerplanar drawing of G, using isoperimetric properties of G. The lower bound implies that outerplanar drawings of many planar graphs have at least $\Omega(n\log n)$ crossings. Moreover, for any drawing of $G$ with $c$ crossings in the plane, we construct an outerplanar drawing with at most $O((c+\sum_{v\in V}d^2_v)\log n)$ crossings, where $d_v$ is the degree of $v$. This upper bound is asymptotically tight. For planar graphs, an outerplanar drawing with the required properties can be constructed in $O(n\log n)$ time.
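Counting the crossings of one fixed outerplanar drawing is straightforward: two chords of the n-gon cross exactly when one chord separates the endpoints of the other on the circle. The Python sketch below (vertex i placed at corner i; choosing a good vertex order, which is the hard part, is not attempted) makes this concrete:

```python
from itertools import combinations

def convex_crossings(n, edges):
    """Crossings when vertex i sits at corner i of a convex n-gon and
    each edge is a straight chord. Chords {a,b} and {c,d} on four
    distinct corners cross iff exactly one of c, d lies on the open
    arc from a to b."""
    def inside(a, b, x):          # x strictly on the arc a -> b?
        return 0 < (x - a) % n < (b - a) % n
    count = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) == 4 and inside(a, b, c) != inside(a, b, d):
            count += 1
    return count
```

For example, K4 drawn on the corners of a square has exactly one crossing (its two diagonals).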

Groups With A Co-Context Free Word Problem (slides)
Rick Thomas, University of Leicester

The purpose of this talk is to survey some interesting connections between formal language theory on the one hand and group theory on the other.

Given a group G generated by a finite set X, the "word problem" of G consists of all words over X that represent the identity element of G. One natural question is to ask about the connections between the complexity of the word problem (as a formal language) and the algebraic structure of G.
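A standard example from this area (stated here as an illustration, not as a result of the talk): the word problem of Z x Z, with generators a, b and formal inverses A, B, is not context-free, but its complement is, since a word fails to represent the identity precisely when the a/A counts or the b/B counts disagree, and each of those conditions can be checked by a one-counter automaton. Membership itself is trivial to test (the function name below is our own):

```python
def is_identity_Z2(word):
    """Word problem of Z x Z: a word over a, b, A, B represents the
    identity iff the a/A counts and the b/B counts both balance
    (a and b commute, A and B denote the inverses of a and b)."""
    return (word.count('a') == word.count('A')
            and word.count('b') == word.count('B'))
```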

More specifically, we will survey some recent results on the class of groups whose word problem is the complement of a context-free language; this is joint work with Derek Holt, Claas Roever and Sarah Rees. The talk is intended to be expository; we will not assume any prior knowledge of group theory and we will review what we need from formal language theory.

Finally, a Simple Semantics (slides)
Joel Wright, University of Nottingham

In this talk we develop a simple functional language in which to explore the semantics of synchronous exceptions and asynchronous interrupts. In particular, we seek to define a finally operator in terms of primitives of the language, and prove that this operator has the desired behaviour. We are not interested in whether the language is practical for programming. This is part of a larger program of work concerned with the semantics of the exception handling mechanisms provided in Concurrent Haskell, and reasoning about operators and programs written using them.

New Solution for Multiple Mobile Agent Rendezvous in a Ring (slides)
Xiaohui Zhang, University of Liverpool

We study the rendezvous search problem for k >= 2 mobile agents in an n-node ring. Each agent has only log k bits of memory and one token. Our idea is to let the agents traverse the ring at different speeds; moreover, the speeds are not fixed: under certain conditions an agent may change its speed. In this way we can guarantee that all the agents eventually meet in the non-periodic case.
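A toy illustration of why distinct speeds help (this is only the underlying intuition, not the talk's memory-bounded algorithm): two agents walking clockwise at constant speeds 1 and 2 on an n-node ring always occupy the same node within n steps, namely at time (p1 - p2) mod n.

```python
def rendezvous_time(n, p1, p2):
    """Two agents walk clockwise on an n-node ring from nodes p1, p2
    at constant speeds 1 and 2 nodes per step; return the first time
    at which they occupy the same node."""
    t = 0
    while p1 != p2:
        p1 = (p1 + 1) % n
        p2 = (p2 + 2) % n
        t += 1
    return t
```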

Nondeterministic Quantum Programming (slides)
Paolo Zuliani, Princeton University, USA

In standard computation, nondeterminism provides a way of specifying and reasoning about programs. In particular, nondeterminism is used to specify a program's behaviour without having to specify details of its implementation. Such details may be made more precise (refined) at a later stage, while program correctness is preserved throughout the reasoning.

In quantum computation, nondeterminism is either taken to mean "classical" probabilism or is not considered at all, since quantum computation is the physical theory of computation and thus does not deal with non-implementable features. However, the best-known examples of quantum computation (e.g. Shor's factoring algorithm and the Deutsch-Jozsa algorithm) have natural nondeterministic specifications: the former returns an arbitrary prime factor of a given integer, while the latter decides whether a given Boolean function is non-balanced or non-constant (or both). In this talk we argue that nondeterminism arises in other examples of quantum computation as well. In particular, we consider nondeterminism embedded in a programming language for quantum computation, qGCL, and use it to model and reason about quantum nonlocality and quantum mixed-state systems.
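As a concrete example of such a specification, the one-bit case of the Deutsch-Jozsa algorithm can be simulated classically in a few lines. The state-vector simulation below is a sketch in plain Python (no quantum library; two qubits, amplitudes kept as a list of four reals): one oracle call decides whether f : {0,1} -> {0,1} is constant or balanced.

```python
from math import sqrt

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]          # Hadamard gate

def apply_h(state, qubit):
    """Apply a Hadamard to one qubit of a 2-qubit state vector,
    indexed as |x,y> -> 2*x + y (qubit 0 is x, qubit 1 is y)."""
    new = [0.0] * 4
    for i, amp in enumerate(state):
        bits = [(i >> 1) & 1, i & 1]
        for b in (0, 1):
            j_bits = bits[:]
            j_bits[qubit] = b
            new[j_bits[0] * 2 + j_bits[1]] += H[b][bits[qubit]] * amp
    return new

def oracle(state, f):
    """The standard oracle |x,y> -> |x, y xor f(x)>."""
    new = [0.0] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        new[x * 2 + (y ^ f(x))] += amp
    return new

def deutsch(f):
    """Deutsch's algorithm: decide 'constant' vs 'balanced' with a
    single query to the oracle for f."""
    state = [0.0, 1.0, 0.0, 0.0]           # start in |0,1>
    state = apply_h(state, 0)
    state = apply_h(state, 1)
    state = oracle(state, f)
    state = apply_h(state, 0)
    p1 = state[2] ** 2 + state[3] ** 2     # probability of measuring x = 1
    return 'balanced' if p1 > 0.5 else 'constant'
```

The nondeterministic flavour of the specification is visible here: the algorithm promises only to distinguish the two cases, not to reveal anything further about f.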

Quantum nonlocality has puzzled physicists since the 1930s: events may have simultaneous effects, no matter how far apart they occur. Nonlocality is an intrinsic feature of Nature: it has actually been proved (both theoretically and experimentally) that any theory wishing to mimic certain physical experiments must necessarily be non-local (i.e. must allow those simultaneous effects). In this work we analyse an "ideal" experiment exhibiting nonlocality. We code it in qGCL and show that local theories cannot fully predict the outcomes of the experiment.

Mixed-state systems are a generalisation of standard quantum systems in which the state is best described by a probability distribution over "pure" quantum states. Mixed-state systems find application in the description of "real" quantum systems where, due to unavoidable causes (e.g. imperfections in our apparatus or interactions with the environment), the exact state of the system cannot be specified. We show that qGCL can model such systems by proving that the treatment of mixed states in qGCL is consistent with the corresponding quantum formalism.