Benchmark Data Sets in Exam Timetabling

R. Qu, E. K. Burke, B. McCollum, L. T. G. Merlot, and S. Y. Lee. A Survey of Search Methodologies and Automated System Development for Examination Timetabling. Journal of Scheduling, 12(1): 55-89, 2009. Online publication Oct 2008. doi: 10.1007/s10951-008-0077-5. .pdf

University of Toronto Benchmark Data

Carter, Laporte and Lee in 1996 introduced a set of 13 real-world exam timetabling problems from 3 Canadian high schools, 5 Canadian universities, 1 American university, 1 UK university and 1 Middle Eastern university. Over the years, however, two versions of 5 of the problem instances (presented in the table below) came into circulation. Within these 5 problem instances, 3 also contain duplicated exams for some students.
Problem Description

A conflict matrix C is defined where each element C_{ij} = 1 if exam i conflicts with exam j (i.e. they have students in common), and C_{ij} = 0 otherwise. The conflict density is the ratio of the number of elements of value "1" to the total number of elements in the conflict matrix. Two variants of the objective were defined.

Evaluation Function

For a student sitting two exams scheduled s timeslots apart (1 <= s <= 5), the cost is w_s = 2^(5-s), i.e. w_1 = 16, w_2 = 8, w_3 = 4, w_4 = 2 and w_5 = 1. The aim is to space out conflicting exams within a limited number of timeslots. There is no restriction on room capacity in the original data. An executable program which calculates the penalty (using the above evaluation function) and its source code are available for download.

Variants in the Literature

In the "x/y/z" entries in the table, x gives the characteristics of the original data, y those of the derived data, and z those of the corrected data. *: total number of enrolment conflicts in the data files. A single value in the table indicates that the value is the same across versions. We are collecting solutions from the researchers who published the best results in the literature for this dataset. Not all publications provided the actual solutions; we will upload the solutions as soon as they become available.
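As a rough illustration (a minimal sketch, not the downloadable program; the enrolment and timetable data structures are assumptions), the conflict density and the Toronto proximity penalty can be computed as follows, using the standard weights 16, 8, 4, 2, 1 for exams one to five slots apart and dividing the total by the number of students:

```python
from itertools import combinations

def conflict_density(enrolments, n_exams):
    """Fraction of 1-entries in the n_exams x n_exams conflict matrix,
    where C[i][j] = 1 iff exams i and j share at least one student."""
    conflicts = set()
    for exams in enrolments.values():      # exams taken by one student
        for i, j in combinations(exams, 2):
            conflicts.add((i, j))
            conflicts.add((j, i))
    return len(conflicts) / (n_exams * n_exams)

def toronto_penalty(slot_of, enrolments):
    """Proximity penalty: for each pair of exams a student sits that
    are s timeslots apart (1 <= s <= 5), add w_s = 2**(5 - s); the
    total is divided by the number of students."""
    weights = {1: 16, 2: 8, 3: 4, 4: 2, 5: 1}
    total = 0
    for exams in enrolments.values():
        for i, j in combinations(exams, 2):
            total += weights.get(abs(slot_of[i] - slot_of[j]), 0)
    return total / len(enrolments)
```

For example, with enrolments `{'s1': [0, 1], 's2': [1, 2]}` and slots `{0: 0, 1: 1, 2: 3}`, student s1 contributes w_1 = 16 and s2 contributes w_2 = 8, giving (16 + 8) / 2 = 12.0.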
Results from Pilar Moreno Diaz, Universidad Alfonso X el Sabio, Feb 2017
University of Nottingham Benchmark Data

Burke, Newall and Weare in 1996 also introduced exam timetabling data from the University of Nottingham. In this problem, the total number of students in the exams assigned to each timeslot must not exceed the total room capacity of 1550. The objective is to minimise the number of students sitting two consecutive exams on the same day. Later, in 1998, the objective function was extended to also consider two consecutive exams overnight.
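A sketch of this variant's feasibility check and 1996 objective, under assumed conventions: slots are numbered consecutively and grouped into days of `slots_per_day` slots, where `slots_per_day` is an illustrative parameter, not a value from the original data:

```python
from itertools import combinations

def nottingham_cost(slot_of, exam_sizes, enrolments,
                    slots_per_day=3, capacity=1550):
    """Reject timetables whose seated students exceed the 1550-seat
    total room capacity in any slot, then count students sitting
    exams in adjacent slots of the same day."""
    seated = {}
    for exam, slot in slot_of.items():
        seated[slot] = seated.get(slot, 0) + exam_sizes[exam]
    if any(n > capacity for n in seated.values()):
        raise ValueError("total room capacity exceeded")
    cost = 0
    for exams in enrolments.values():
        for i, j in combinations(exams, 2):
            a, b = sorted((slot_of[i], slot_of[j]))
            # adjacent slots count only when both fall on the same day
            if b - a == 1 and a // slots_per_day == b // slots_per_day:
                cost += 1
    return cost
```

The 1998 extension would drop the same-day check for the overnight pair (last slot of one day, first slot of the next), counting it as well.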
University of Melbourne Benchmark Data

Merlot, Boland, Hughes and Stuckey introduced exam timetabling data sets from the University of Melbourne at the PATAT conference in 2002. Two data sets were introduced: one with 521 exams, 28 sessions, 20656 students and 62248 enrolments, and one with 562 exams, 31 sessions, 19816 students and 60637 enrolments. For these data sets, there are two exam sessions on each of the five workdays, and the capacity of each session varies. The objective is to minimise the number of occasions on which a student has two exams consecutively, either on the same day or overnight. The availability of sessions is restricted for some of the exams; in one problem instance this ruled out any feasible solution, so an alternative data set that admits feasible solutions was created. These data sets can be downloaded from http://www.or.ms.unimelb.edu.au/timetabling. The website also presents results for these data sets and for other problems, such as the University of Toronto problems.
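The Melbourne objective can be sketched similarly. Here sessions are assumed to be numbered consecutively through the examination period, so a gap of one session captures both same-day and overnight adjacency (any weekend gaps are not modelled), and a per-session capacity table replaces the single room capacity; all names and data structures are illustrative:

```python
from itertools import combinations

def melbourne_cost(session_of, exam_sizes, enrolments, capacities):
    """Count occasions of a student sitting exams in adjacent sessions
    (same day or overnight), after checking that each session's seated
    students fit its own capacity."""
    seated = {}
    for exam, session in session_of.items():
        seated[session] = seated.get(session, 0) + exam_sizes[exam]
    if any(n > capacities[s] for s, n in seated.items()):
        raise ValueError("session capacity exceeded")
    return sum(1
               for exams in enrolments.values()
               for i, j in combinations(exams, 2)
               if abs(session_of[i] - session_of[j]) == 1)
```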
A Random Exam Timetabling Problem Generator

An exam timetabling instance generator was developed, and a benchmark data set of 18 problem instances was generated with it. The 18 problems comprise 9 small (< 100 exams) and 9 large (>= 500 exams) instances. The problems are generated with conflict density values ranging from 6% to 47% in 5% intervals. The number of students and their enrolments vary with the problem size and conflict density. The problems have the same constraints and use the same objective function as the Toronto benchmark data set described in the previous section. The problem generator and the 18 new random problems are available to the public to encourage comparisons and feedback.

Last updated 1 June 2008
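The generator itself is not reproduced here, but the idea of targeting a given conflict density can be illustrated with a toy sketch (my own, not the authors' code) that draws each off-diagonal conflict pair with probability equal to the target density:

```python
import random
from itertools import combinations

def random_conflict_matrix(n_exams, density, seed=0):
    """Symmetric 0/1 conflict matrix with zero diagonal whose expected
    conflict density approximates the `density` target."""
    rng = random.Random(seed)
    C = [[0] * n_exams for _ in range(n_exams)]
    for i, j in combinations(range(n_exams), 2):
        if rng.random() < density:
            C[i][j] = C[j][i] = 1
    return C
```

A full instance generator would additionally have to produce student enrolments consistent with the sampled matrix; note also that the realised density sits slightly below the target because the diagonal is always zero.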