Graham Kendall
Individual Projects (3rd Year Undergraduates)
(Academic Year 2001/02)
Introduction
This page details the third year individual projects that are or have been
supervised by Graham Kendall, a lecturer in the Department
of Computer Science at The University of Nottingham.
This page contains details of projects supervised in the academic year 2001/02.
The main project page can be found here.
Downloadable Dissertations
Some of the projects are available for download so that you can view them.
This facility is for current third year students being supervised by me,
who can use previous dissertations for their own research and also look at
them to see how to lay out a dissertation, etc. (it also saves the inevitable
requests I receive asking to see a "good" project from last year, with the risk
that I never see it again!).
As such, these files are password protected and, to get the username/password
you need to EMAIL me. People outside
of the university will not be given access.
In addition, I cannot give access to those students
outside of my tutor group. You should approach your own tutor to get copies
of previous dissertations they have supervised.
Note : I cannot make the files available in formats other than those
already supplied.
3rd Year Projects Supervised in 2001/02
- Sports Fixture Scheduling by David Simmons
(PDF: 413KB)
Abstract : In many sports, fixture lists are increasingly a subject of
debate. Fixture lists are vital for any sporting club because a large
percentage of their revenue is generated from gate receipts and media coverage.
The creation of a suitable schedule is a very difficult problem with a myriad
of conflicting requirements and preferences. I will attempt to develop a new
method for creating fixture lists which is both quicker and produces better
schedules. This will be done by combining the techniques used in different
sports, and then refining the results further, possibly by merging them with AI
techniques (such as Adaptive Learning or Genetic Programming).
- Adaptive Poker Playing by Ian Dennis
(PDF: 802KB) ( Word:
851KB)
Abstract : Games of deception, with unreliable and unknown information,
represent a greater challenge to game-playing software developers than more
traditional games of complete information. Games such as chess and checkers
have been conquered with brute-force searches but games such as Poker seem
more difficult to master with such techniques. Several groups have attempted
to create computerised poker players and with some success, but none have
been able to compete at the highest level. This dissertation aims to develop
a set of statistical poker players and investigate techniques for modelling
their playing styles. The assumption behind it is that opponent modelling
is the key to playing poker successfully.
A player space was defined which encompasses a range of poker
styles. A computerised player was created that is capable of playing anywhere
in the player space, that is, to any style. Games were then run with computerised
players, each playing to a different style. The relative success of the different
players suggested that the model worked well. The results reflected what would
have been expected in a real world game between human players of the styles
specified.
Three opponent modelling techniques were then examined. The challenge was:
given a game being played by computer players of unknown styles, use their
actions to work out which styles they are playing to.
The first opponent modelling technique is called Guess-Sum modelling. Essentially,
it maintains a static model which is added to each time a player acts. This
modeller is able to accurately gauge the playing style of a static player
but is unable to refine the model if the player changes styles.
The second technique is called Evolutionary Modelling and was the most successful.
It maintains a dynamic model which is refined after each action of the player.
It is able to find a close solution for static players more quickly than the
Guess-Sum modeller and, crucially, it is able to adapt to players with changing
playing styles. The Evolutionary modeller is able to track dynamic players
around the player-space.
The third technique is called Point-Certainty modelling and is an attempt
to adaptively model the whole player space. It keeps track of where it thinks
the player isn't playing as well as where it thinks the player is playing.
This is designed to capture a complete picture of the player. Although this
technique showed some promise, it didn't prove as reliable as the other
two techniques.
The work demonstrates the potential of opponent modelling in games of incomplete
information. It shows that player types can be identified and tracked even
though all the information about their decisions is not available. It demonstrates
the strengths and weaknesses of the three techniques that were examined and
achieves some very promising results.
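The Evolutionary Modelling idea described above can be illustrated in a few lines. The sketch below is my own illustration, not the dissertation's code: player style is reduced to a single hypothetical "looseness" parameter, and a small population of candidate estimates is ranked by how well each explains an observed action, with the best kept and mutated to refill the population.

```python
import random

def act_probability(style, action):
    """Probability that a player of the given style (0 = tight, 1 = loose)
    takes the observed action ('fold' or 'bet'). Hypothetical model."""
    p_bet = style
    return p_bet if action == "bet" else 1.0 - p_bet

def evolve_estimate(candidates, action, keep=5, sigma=0.05):
    """One refinement step: rank candidate style estimates by how well
    they explain the observed action, keep the best, mutate to refill."""
    ranked = sorted(candidates, key=lambda s: act_probability(s, action),
                    reverse=True)
    survivors = ranked[:keep]
    children = [min(1.0, max(0.0, s + random.gauss(0, sigma)))
                for s in survivors]
    return survivors + children

random.seed(0)
candidates = [random.random() for _ in range(10)]
# Observe a run of bets from a loose player; the estimate should drift upward.
for _ in range(20):
    candidates = evolve_estimate(candidates, "bet")
estimate = sum(candidates) / len(candidates)
```

Because the model is refined after every action, a style change by the opponent simply redirects the drift, which is why this kind of modeller can track a dynamic player around the player-space.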
- Producing an Evolving Poker Player Using Neural Networks
and Evolutionary Strategies by Nadeem Ladha (Word
: 1527KB, appendix
: 136KB, acknowledgements
: 19KB) (PDF : 377KB, appendix
: 36KB, acknowledgements
: 4KB)
Abstract : Artificial Intelligence has had a long-standing relationship
with game playing. The question, ever since the realisation of computing power,
has been: can computer logic beat the human mind? The Hollywood extreme of
machine dominance is the ultimate fear, but may be a stride too far. A
good test of whether a computer can out-think us would be a war of the mind,
and games provide this challenge.
Neural networks have been used in previous work to evolve agents playing games
of perfect information such as chess and checkers. I decided to research
using the same technique to allow an agent to learn how to play poker well,
a game of imperfect information, with no a priori knowledge of poker or its
tactics, besides the basic rules that govern a legal game.
This project demonstrated that it is possible to model a poker player using
a combination of neural networks, evolutionary strategies and co-evolution,
and that this player will improve over time. The neural network that was implemented
took a diverse range of inputs, which represented the playing environment
and described the opponents. These inputs were passed through a layer of hidden
nodes, to an output using various mathematical functions. The output value
was converted into a decision, telling the player to fold, call or raise.
Learning took place using a (50+50) evolutionary strategy. The weights on
the synapses of the network were changed using a gaussian random mutation
function. This changed how the player played when given a situation. The population
evolved using co-evolution. It competed against variations of itself, the
best players were kept and mutated, and the worst were discarded.
The project set out to discover whether, through these techniques, the program
would evolve so that it could play a good game of poker. The results indicated
a definite learning process. The evolving players seemed to have passed through
four main stages of development, which dealt with maximising winnings and
minimising losses.
The project was successful in its preliminary findings, and could be developed
further to eventually produce a high-class player.
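The (50+50) evolution strategy described above can be sketched minimally. This is not the project's code: the fitness function here is a stand-in (distance to a fixed target vector) for what would really be tournament winnings under co-evolution, and the "network" is just a flat weight vector subject to Gaussian random mutation.

```python
import random

def mutate(weights, sigma=0.1):
    """Gaussian random mutation of network weights."""
    return [w + random.gauss(0, sigma) for w in weights]

def fitness(weights):
    """Stand-in fitness: in the project this would be poker winnings
    against the rest of the population; here we reward weights close
    to a fixed target vector."""
    target = [0.5, -0.2, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def es_step(population, mu=50):
    """One (mu + mu) evolution-strategy generation: each parent produces
    one mutated child, then the best mu of parents + children survive."""
    children = [mutate(p) for p in population]
    combined = population + children
    combined.sort(key=fitness, reverse=True)
    return combined[:mu]

random.seed(1)
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
for _ in range(30):
    pop = es_step(pop)
best = pop[0]
```

Replacing the stand-in fitness with head-to-head games between population members turns this into the co-evolutionary scheme the abstract describes: the best players are kept and mutated, the worst discarded.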
- Investigating A-Life by Khai Hoang (PDF
: 929KB) (Word : 2556KB)
Abstract : A-life is the study of non-natural living forms. This
study investigates some of the various a-life systems currently available
and the possibility of creating an open-ended evolution system. An a life
environment will exist in which organisms are born, live, reproduce and die.
The organisms will be represented by two separate genetic structures based
on the Tierra and Virtual Creatures a-life systems. These structures will
have the ability to evolve to unlimited complexity. It is hoped that by combining
the techniques from these two systems, that steps can be taken towards creating
an unbounded evolutionary system displaying a number of emergent behaviours
similar to real life.
- Intelligent Trading of the FTSE-100
Market by Shahid Ladha (PDF
: 811KB)
Abstract : This study investigates the effectiveness of modelling
the behaviour of equities (stocks) using neural networks. Neural networks
are powerful computational systems that can approximate any non-linear continuous
function on a compact domain to any desired level of accuracy. Investment
houses have been employing linear modelling systems including mechanical (rule-based)
techniques for over a decade. Due to the poor performance of using linear
modelling techniques within a typically complex and non-linear domain, firms
have recently started to explore intelligent systems. These systems have the
capability of actually learning and interpreting market conditions rather
than simply reacting to predefined human rules.
This work comprises a theoretical study examining the ability of specific
Artificial Intelligence techniques to facilitate high returns from Equity
Investment. A neural network trading system was built (using MATLAB) and applied
to two different problems. During training, 27 inputs comprising technical
and fundamental (market) indicators together with a target output are presented
to the network. The back-propagation learning algorithm uses the error signal
between the network's output and the target output to update the connection
weights between neurons. The neural network trading system used for stock
selection proves to be capable of significantly outperforming the FTSE-100
Index indicator between 1995 and 1998. The empirical findings of the study
establish that neural networks are able to solve the problem of (1) Stock
Selection successfully, although they are not entirely suitable when applied to
(2) Raw Time-series Prediction.
This project won the prize for the most outstanding project of the year
(across all of computer science, not just those I supervised).
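The back-propagation update the abstract mentions, using the error signal between the network's output and the target to adjust connection weights, can be sketched for a single sigmoid neuron. The three "market indicator" inputs, the target value and the learning rate below are illustrative assumptions, not values from the dissertation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(weights, inputs, target, lr=0.5):
    """One back-propagation update for a single sigmoid neuron:
    the error signal between output and target adjusts each weight
    in proportion to its input."""
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    delta = (target - out) * out * (1.0 - out)   # error signal
    new_weights = [w + lr * delta * x for w, x in zip(weights, inputs)]
    return new_weights, out

# Toy example: three normalised market indicators, target "buy" signal of 1.0.
weights = [0.1, -0.3, 0.2]
inputs = [0.9, 0.4, 0.7]
for _ in range(200):
    weights, out = train_step(weights, inputs, 1.0)
```

A full trading network would do this for every training pattern and every layer, but the weight update at each connection has exactly this error-times-input form.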
- Artificial Intelligent Evolving Blackjack Player with
Adaptive Learning by Stuart Foremean (Word
: 1080 KB) (PDF : 434 KB)
Abstract : Introducing evolutionary algorithms into specialised
computer programs enables elementary systems to evolve into expert systems,
which are able to play games at a level equivalent to that of a human player and,
in some cases, exceed human capabilities. Adaptive learning utilises knowledge
acquisition from experience and observation of tasks within a particular environment
to evolve and adapt the internal structure of a program achieving a higher
level of learning.
Previous work describing the proven mathematical analysis and strategy of
blackjack has shown that a player can effectively compete against the casino
over a given period of time using a basic strategy. Further formulation of
a winning strategy, using card counting, has given rise to triumphs
over the house in casinos. This study uses blackjack to investigate how agents
can evolve and adapt to learn strategies for games of imperfect information.
In this thesis a general learning architecture is described using reinforcement
learning to manipulate internal learning parameters, allowing a blackjack
player to develop its playing strategy over time. The results suggest that
the blackjack player is able to adapt from previous experience and learn how
to play blackjack at an expert level. The evolved player competes well against
well-known casino blackjack strategies. Limitations of evolutionary learning
in previous work have been overcome, enabling a system to learn blackjack
through exploitation and exploration of actions as opposed to supervised learning
using mathematical formulation. The underlying structure for this work outlines
areas of potential exploration for future advancements in this field.
- An Evolving Awari Player by Richard Burton
(Word : 722 KB) (PDF
: 121 KB)
Abstract : Awari is an ancient African board game for two players.
It has many different names, "Wari", "Awale", "Oware"
and "Ayo" being just a few. It involves a board with 12 pits and
48 stones and the game is won when one player has captured more than 25 of
the stones. Recent attempts at producing the strongest Awari playing entity
have relied on a high level of human expertise in the game and use large end
game databases. It was the aim of this project to take the perfect information
game of Awari and develop an evolutionary agent capable of playing to as high
a standard as possible in the given time constraints. This is implemented
using a neural network and a min-max algorithm without having to rely on expert
human advice about the game such as end-game databases and opening moves.
Instead the agent evolves over a number of games and is then tested against
a number of random players to show that it does in fact evolve into a better
Awari player.
- Studying Cooperation in the N-player, N-Choice Iterated
Prisoner's Dilemma by Darren Tong (PDF
: 418KB)
Abstract : This report details the research, development and implementation
of a simulation that examines the levels of cooperation achieved in the N-player,
N-choice Iterated Prisoner's Dilemma. It explores some fundamental game theory
concepts, such as zero and non-zero sum games, in relation to the Prisoner's
Dilemma problem. The report discusses the intriguing nature of the Iterated
Prisoner's Dilemma and some of the issues associated with it. Details of the
design and implementation of the program used to simulate the Prisoner's Dilemma
tournaments are also given. Each of the simulations undertaken is examined
and evaluated, with discussion of the implications of results, which reinforce
earlier papers by Darwen and Yao whilst proposing that larger player groups
and increased numbers of choices in the Iterated Prisoner's Dilemma can be
mutually reinforcing.
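For readers unfamiliar with the N-player game, a minimal payoff sketch shows why it is a dilemma: defecting always pays more than cooperating for any individual, yet universal cooperation beats universal defection. The benefit and cost values below are illustrative assumptions, not parameters from the dissertation.

```python
def payoff(cooperates, n_cooperators, n_players, b=5.0, c=3.0):
    """N-player Prisoner's Dilemma payoff (illustrative parameters):
    each cooperator pays a cost c, and every player receives b times
    the fraction of cooperators in the group."""
    share = b * n_cooperators / n_players
    return share - c if cooperates else share

n = 10
all_coop = payoff(True, n, n)            # everyone cooperates: 5.0 - 3.0
all_defect = payoff(False, 0, n)         # everyone defects: 0.0
lone_defector = payoff(False, n - 1, n)  # defect while 9 others cooperate
```

Since `lone_defector` beats `all_coop`, each player is individually tempted to defect, even though `all_coop` beats `all_defect` for everyone; the N-choice variant simply grades the cooperate/defect decision into more levels.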
- Evolving Artificial Neural Networks for Corridor Following
Behaviour by Thomas Pocock (Word
: 1277KB) (PDF : 277KB)
Abstract : Neural Networks are widely used for navigation and control
systems. However, the learning algorithms for them are difficult to program
and often need to know the required output to use for training purposes. In
addition, the network structure has to be designed by hand, or using slow
algorithms, which often find sub-optimal solutions. Using evolutionary strategies
means knowledge of the solution is not needed and the entire process can be
evolved. Previously, high overheads made evolving neural networks very time-consuming,
but improvements in computing power are making this problem less relevant.
This paper looks at the current state of research into evolvable neural networks
and at its application to the problem of collision avoidance in a virtual
world made up of irregular corridor structures. Evidence was found to suggest
that the effects of mutation were only apparent in smaller populations or
with harder problems. Likewise, when evolving network structures, specialisation
in the structure only happened with very difficult problems. Networks were
found to generalise, but only when trained on a harder problem and taken to
an easier one.
- Awari by Jennifer Lancaster
(PDF : 211KB)
Abstract : Awari, an ancient African game is a member of the oldest
family of games in the world, Mancala. This project applies current Artificial
Intelligence research to this ancient game in order to create a computer player
capable of improving its performance over time. Based on the work of David
Fogel, this project uses a MiniMax search with a Neural Network as an evaluation
function. The weights within the Neural Network are evolved over time using
the Evolutionary Strategy based on that put forward by Fogel. Testing was
done to determine the number of hidden nodes to be used within the Neural
Network, as well as to test whether the player's gameplay had improved over
time. Results show a small improvement using both sigmoid and identity function
within the network, and significant improvement using just a sigmoid activation
function. Further extensions to this project are also discussed.
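The minimax-plus-neural-network architecture used in both Awari projects above can be sketched generically. Everything below is illustrative: the toy two-pit "game" and the single-layer tanh evaluator stand in for the real Awari rules and the evolved network.

```python
import math

def nn_evaluate(state, weights):
    """Hypothetical evaluation network: a single layer over the board
    state (pit counts), squashed with tanh to a score in (-1, 1)."""
    return math.tanh(sum(w * s for w, s in zip(weights, state)))

def minimax(state, depth, maximising, weights, moves_fn, apply_fn):
    """Plain minimax: recurse to a fixed depth, then score leaf
    positions with the evaluation network."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return nn_evaluate(state, weights)
    scores = [minimax(apply_fn(state, m), depth - 1, not maximising,
                      weights, moves_fn, apply_fn) for m in moves]
    return max(scores) if maximising else min(scores)

# Toy game for illustration: a "board" of two pits; a move adds a stone
# to one pit (up to a cap of 3), and this evaluator prefers pit 0.
def moves_fn(state):
    return [i for i, v in enumerate(state) if v < 3]

def apply_fn(state, move):
    new = list(state)
    new[move] += 1
    return new

score = minimax([0, 0], 2, True, [1.0, -1.0], moves_fn, apply_fn)
```

Evolving the `weights` vector, rather than hand-tuning it, is what removes the need for expert human advice: the search machinery stays fixed while the evaluation function improves over generations.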
- Neural Networked Vehicle by Neil Windelinckx
(Word : 23,596 KB)
(PDF 1964 KB)
Abstract : The human brain works by using a vast neural network.
There has been extensive work on replicating this process, but current technology
and techniques are unable to replicate it at anything approaching the brain's
scale. Instead, much work has focused on replicating smaller functions of the
brain.
Dean A. Pomerleau has worked in one such area. His work has been based
on the ALVINN system. ALVINN (Autonomous Land Vehicle In a Neural Network)
is a system that uses neural networks to guide a robot vehicle (a modified
Chevy Van). ALVINN learns by watching a human driver for five minutes. ALVINN
is required to do this for every new environment it has to drive in. ALVINN
can then take full control of the vehicle.
Using a two dimensional virtual system, this project builds upon, and
extends the ALVINN project to produce a system that is fully autonomous. Although
this project uses a simplified two-dimensional world, it is capable of being
scaled up for use in a real world situation.
This project compares three different unsupervised learning techniques:
an unsupervised version of the back-propagation algorithm, genetic algorithms
and evolutionary networks. This report shows that evolutionary networks have
the fastest learning rate and produce the best globally optimal neural networks
for unsupervised robot guidance. This learning strategy is also capable of
combining networks to produce larger networks capable of driving many varied
tracks, showing that combining networks produces a neural network with
greater driving capabilities than a single network structure.
- Artificial Evolution - The Division of Labour in Social
Insects by Louisa Bradley (Word
: 5465 KB) (PDF : 368 KB)
Abstract : Social insects are individually simple agents, but collectively
they produce highly intelligent behaviour. Division of labour occurs within
many species of social insects and is the focus of this project. Within this
study a simplified model of division of labour within a colony of bees is
produced and an evolution strategy is used to develop and improve this model.
- An Investigation into Developing a Tournament Poker Player
using AI Techniques by Phil Tanner (Word
: 3662KB) (PDF : 217KB)
Abstract : Game playing has long played a big part in AI research and has
received a lot of interest over the years, but it is only recently, with
the success of chess- and checkers-playing programs, that research interest
has turned to more complex games like Poker. The work by the University of
Alberta Poker Research Group has provided them with the opportunity to produce
a World-Class Texas Hold'em Poker Player. A paper was recently produced on
how a program can be evolved using evolutionary techniques to solve games
of imperfect information. In particular the paper demonstrated research on
a simplified version of Texas Hold'em poker. My project takes this area of
research and extends it to the Tournament variation of Texas Hold'em poker.
I performed an investigation into using an artificial neural network and a
co-evolutionary strategy to produce a better tournament player. I developed
a player that makes a decision during a game using an artificial neural network
and a program capable of training a tournament player using a co-evolutionary
strategy. This player was trained over 4000 generations and then was subjected
to a number of experiments. The first experiments compared the number of tournament
wins by an early generation player and the player after 4000 generations of
training. These progressed to the comparison of the number of players left
in the tournament when an early generation player and the generation 4000
player exited. Overall there was no conclusive proof of learning between the
generations; however, the results of the last experiment were positive, showing
that a small amount of learning may have taken place.
- Tens and Twos by Dave Law (PDF
: 301KB)
Abstract : Tens and Twos is a popular card game that has existed
for more than a hundred years in one form or another. Although the exact origins
of the game are unknown, it has proved to be a popular game due to the combination
of relatively simple rules and complex tactical decisions. Games have always
proved popular in the development of Artificial Intelligence techniques and
for exploring the capabilities of computational intelligence. The rules are
fixed, the scope of the problem is constrained and the interactions of the
players are well defined.
This dissertation and associated project detail the development of several
computer players using a variety of Artificial Intelligence techniques and
the development of software to enable human players to play over the Internet
or local area network.
- Using Artificial Intelligence for Hide and Seek by Ajay
Kanani (PDF
: 205KB) (Word : 6035KB)
Abstract : Artificial Intelligence is the branch of computer science
concerned with writing computer programs that give machines the ability
to solve problems creatively. This is just one of a number of ways to look
into and understand the concept. Hide and Seek basically means a group of
people, animals, etc. trying to hide from another group who aim to find
and/or catch them.
This project explores the predator-prey problem, using the concept
of hide and seek, in a two-dimensional pursuit domain. The model used for
testing it is the game of "Robot Chase", where computer controlled
predators have to capture a human controlled player in a grid world, while
avoiding fatal moves onto toxic squares.
The design of the game has been based partly on the original version
and partly on 'clones' found on the World Wide Web. The game has been implemented
in the Java programming language using a documented framework for graphical
games in Java. Depth-first search was chosen as the underlying search strategy
for both the predator and the maze generation.
Various techniques were then employed to improve the Cat's (predator's)
search strategy. Imposing a lock system made the Cat more independently minded.
Calculations were reduced by storing the route for each instance of the Cat
to the Mouse in a "Vector". Later the use of the "Vector"
class was replaced with "JumpQueue", a purpose designed class similar
to a bounded circular queue as it proved to be much more efficient.
The results obtained from a large number of tests are extremely
successful. The Cat can comfortably capture the Mouse
in most scenarios using the modified depth-first search strategy. Not only
does the Cat successfully manage to capture the Mouse, but it also does this
in a very short space of time, implying that the Cat is not only
successful but also efficient.
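The Cat's underlying depth-first search can be sketched on a small grid. This is a generic DFS route-finder, not the project's Java code: '#' marks toxic (blocked) squares, and the route returned is simply the first one DFS completes, not necessarily the shortest.

```python
def dfs_route(grid, start, goal):
    """Depth-first search for a route from start to goal on a grid of
    strings; '#' squares are toxic and cannot be entered. Returns the
    path as a list of (row, col) cells, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    stack = [(start, [start])]
    visited = {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

grid = [".#.",
        "...",
        ".#."]
route = dfs_route(grid, (0, 0), (2, 2))
```

Storing the computed route, as the project does with its "JumpQueue", means the predator need not repeat the search on every move, only when the Mouse leaves the stored path.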
EMAIL : gxk@cs.nott.ac.uk
Home page
Last Updated 15th June 2001