This page contains links to all electronic material used in the guest lectures given as part of G54PDC, University of Nottingham, March 2012.
The slides are all in PDF, and there are three versions available for each lecture. The basic version is intended for on-screen viewing only, whereas the 4-up and 9-up versions are intended mainly for printing, placing 4 and 9 slides respectively on each page.
Functional Reactive Programming (FRP) is an emerging paradigm for programming reactive, concurrent systems in a purely declarative way. The FRP paradigm, or ideas based on FRP, has been used to program robots, vision, video games, musical synthesizers, sensor networks, financial event-based processing, and more. The central idea of FRP is to write programs in terms of signals, or time-varying values, as opposed to values at isolated points in time in a fundamentally temporally agnostic setting. This allows for a purely declarative formulation and avoids, by construction, many problems related to imperative idioms for concurrency and synchronisation. In this, FRP is closely related to the synchronous data-flow paradigm, exemplified by languages like Esterel and Lucid Synchrone. However, unlike in synchronous languages, where time is discrete, FRP typically allows for a hybrid (discrete and continuous) notion of time, and moreover a much more flexible programming model, including support for higher-order data-flow and dynamic system structure.
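To make the central idea concrete, here is a minimal, self-contained sketch (plain Haskell, deliberately not real FRP-library code) in which a signal is modelled directly as a function from time to value, and programs are written as functions over such signals; the names `Signal`, `velocity`, and `position` are illustrative assumptions:

```haskell
-- A signal is a time-varying value: conceptually a function of
-- continuous time, here modelled directly as such a function.
type Time = Double
type Signal a = Time -> a

-- A constant velocity signal (2 units per second).
velocity :: Signal Double
velocity = const 2.0

-- The corresponding position signal: the (analytic) integral of
-- the constant velocity, rather than a value updated imperatively
-- at isolated points in time.
position :: Signal Double
position t = 2.0 * t

main :: IO ()
main = mapM_ (print . position) [0.0, 0.5, 1.0]  -- prints 0.0, 1.0, 2.0
```

A real FRP system replaces the naive function-of-time representation with an efficient, incrementally sampled implementation, but the programming model remains the same: whole signals are transformed declaratively.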
This lecture, after giving a brief overview of FRP and synchronous languages in general, aims to give a concrete idea of what FRP is and how to program in FRP, as a counterpoint to more traditional ways of structuring concurrent applications. To that end, the lecture will focus on a specific FRP system called Yampa, which is a realisation of FRP embedded in the lazy, purely functional language Haskell. While the present Yampa implementation is not parallel, it should hopefully become clear that FRP programs are expressed at a high enough level that parallel execution is mainly a matter of how a specific FRP system is implemented, as opposed to how an application is written.
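To illustrate why execution strategy is an implementation concern, the following is a simplified, self-contained sketch of how a Yampa-style signal function might be represented internally (an assumption loosely modelled on Yampa's approach, not its actual code; the names `arrSF`, `compSF`, `integralSF`, and `run` are hypothetical):

```haskell
-- A signal function maps an input signal to an output signal.
-- Internally (simplified sketch): given a time step and an input
-- sample, produce an output sample and a continuation.
newtype SF a b = SF { step :: Double -> a -> (SF a b, b) }

-- Lift a pure function to a stateless signal function.
arrSF :: (a -> b) -> SF a b
arrSF f = sf where sf = SF (\_ a -> (sf, f a))

-- Serial composition. Nothing in the program text dictates whether
-- the two stages run sequentially or pipelined in parallel; that
-- choice belongs to the implementation.
compSF :: SF a b -> SF b c -> SF a c
compSF (SF f) (SF g) = SF $ \dt a ->
  let (sf', b) = f dt a
      (sg', c) = g dt b
  in  (compSF sf' sg', c)

-- A stateful signal function: integration by Euler's rule.
integralSF :: Double -> SF Double Double
integralSF acc = SF $ \dt x ->
  let acc' = acc + dt * x
  in  (integralSF acc', acc')

-- Drive a signal function over a list of (time step, input) samples.
run :: SF a b -> [(Double, a)] -> [b]
run _ [] = []
run (SF f) ((dt, a) : rest) = let (sf', b) = f dt a in b : run sf' rest

main :: IO ()
main = print (run (integralSF 0) [(0.5, 1.0), (0.5, 1.0), (0.5, 1.0)])
```

Because applications are written purely in terms of combinators like the composition above, the same program could in principle be executed by a sequential or a parallel runtime without change.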
The idea of using locks to, for example, grant exclusive access to shared data for periods of time is central to traditional approaches to coordinating concurrent processes, such as semaphores and monitors. However, among other drawbacks, lock-based coordination is often unnecessarily pessimistic, limiting the scope for parallel execution, and moreover leads to poor compositionality, making it hard to develop high-level, reusable software components. Software Transactional Memory (STM) is a promising new approach to coordination that addresses such shortcomings. The inspiration for STM comes from the notion of a database transaction: the idea is to optimistically execute a section of code and then decide whether that execution should succeed or fail in its entirety, depending on whether it was interfered with by other transactions. Having long been considered highly experimental, STM is now poised to go mainstream with the arrival of hardware support for STM in mainstream, multi-core processors, such as Intel's upcoming Haswell architecture. From a programming language perspective, it turns out that STM is a particularly good fit for pure languages, as they enforce a highly disciplined use of effects. To illustrate this, this talk will introduce STM in the context of the purely functional language Haskell.
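As a small taste of what this looks like, here is a sketch of the classic money-transfer example using Haskell's STM interface (`atomically`, `TVar`, and friends are the real API from the `stm` package; the `transfer` function itself is just an illustrative assumption):

```haskell
import Control.Concurrent.STM

-- Move money between two shared accounts. The whole sequence of
-- reads and writes runs as one atomic transaction: it either
-- commits in its entirety or is retried, with no locks in the
-- program text.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- prints (70,30)
```

Note that the type system confines transactional effects to the `STM` monad, so arbitrary I/O cannot leak into a transaction that might be re-executed; this is the disciplined use of effects that makes STM such a good fit for a pure language.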