G53OPS - Operating Systems

Design Issues for Paging

In this section we are going to look at some other aspects of designing a paging system so that we can achieve good performance.

Working Set Model

The most obvious way to implement a paging system is to start a process with none of its pages in memory. When the process starts to execute it will try to fetch its first instruction, which will cause a page fault. Other page faults will quickly follow as the process accesses the stack and global variables and executes further instructions. After a period of time the process should find that most of the pages it needs are in memory, and page faults become much rarer.

This type of paging is known as demand paging as pages are brought into memory on demand.
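
As a concrete illustration, here is a minimal sketch of demand paging in Python. The reference string and page numbers are illustrative, and memory is assumed large enough that no page ever needs to be evicted:

def demand_page(reference_string):
    resident = set()              # pages currently in memory
    fault_trace = []              # 1 = page fault, 0 = hit, per reference
    for page in reference_string:
        if page not in resident:
            resident.add(page)    # page brought in from disc on demand
            fault_trace.append(1)
        else:
            fault_trace.append(0)
    return fault_trace

# A process touching its code (page 0), stack (7) and globals (3):
# faults cluster at the start, then die away once the pages are resident.
print(demand_page([0, 7, 3, 0, 7, 3, 3, 0, 7, 0, 3, 7]))
# -> [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]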

The reason that page faults decrease (and then stabilise) is that processes normally exhibit locality of reference. This means that during any particular phase of its execution a process uses only a small fraction of the pages available to it. The set of pages that it is currently using is called its working set (Denning, 1968a; Denning, 1980).
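
The definition can be made operational: the working set at time t can be taken as the set of pages touched by the k most recent memory references. A minimal sketch, with an illustrative reference string and window size:

def working_set(reference_string, t, k):
    # Pages referenced in the window of the k most recent references
    # ending at (and including) reference number t.
    start = max(0, t - k + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 2, 2, 4, 5, 4, 5]
print(working_set(refs, t=4, k=4))   # {1, 2, 3}  early phase
print(working_set(refs, t=9, k=4))   # {4, 5}     the locality has shifted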

If the entire working set is in memory then no page faults will occur. Only when the process moves on to the next phase of its execution (e.g. the next pass of a compiler) will page faults begin to occur, as pages not in the existing working set are brought into memory.

If the memory of the computer is not large enough to hold the entire working set, pages will constantly be copied out to disc and subsequently retrieved. This drastically slows a process down, as a disc access takes vastly longer than executing an instruction.

A process which causes page faults every few instructions is said to be thrashing (Denning, 1968b).
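
The effect is easy to reproduce. The sketch below counts page faults under LRU replacement (an illustrative choice; these notes do not mandate a particular replacement algorithm) as a working set of three pages is cycled through with and without enough page frames to hold it:

from collections import OrderedDict

def lru_fault_count(reference_string, frames):
    memory = OrderedDict()             # insertion order doubles as recency order
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)   # hit: mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict the least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3] * 20                        # working set of size 3, reused repeatedly
print(lru_fault_count(refs, frames=3))       # 3  - faults only while loading
print(lru_fault_count(refs, frames=2))       # 60 - every single reference faults

Being just one frame short of the working set turns every reference into a disc transfer: this is thrashing in its purest form.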

In a system that allows many processes to run at the same time (or at least gives that illusion) it is common to move all the pages of a process to disc (i.e. swap it out). When the process is restarted we have to decide what to do. Do we simply allow demand paging, so that as the process raises page faults its pages are gradually brought back into memory? Or do we move its entire working set into memory so that it can continue with minimal page faults?

It will come as no surprise that the second option is to be preferred. We would like to avoid a process raising a flurry of page faults every time it is restarted. To achieve this, the paging system has to keep track of each process's working set so that it can be loaded into memory before the process is restarted.

This approach is called the working set model (Denning, 1970). Its aim, as we have stated, is to avoid page faults being raised. Loading the pages before letting the process run is also known as prepaging.
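
A sketch of the bookkeeping involved. Everything here is hypothetical (a real implementation would live in the kernel's process table, and load_page stands in for a disc read):

class Process:
    def __init__(self):
        self.resident_pages = set()       # pages currently in memory
        self.saved_working_set = set()    # snapshot taken when swapped out

def swap_out(proc):
    # Record the working set before evicting all of the process's pages.
    proc.saved_working_set = set(proc.resident_pages)
    proc.resident_pages.clear()

def swap_in(proc, load_page):
    # Prepaging: reload the recorded working set before the process runs,
    # rather than letting it fault the pages back in one at a time.
    for page in proc.saved_working_set:
        load_page(page)                   # hypothetical disc-read routine
        proc.resident_pages.add(page)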

A problem arises when we try to implement the working set model: we need to know which pages make up the working set.

One solution is to use the aging algorithm described above. Any page whose counter contains a 1 in any of its n high-order bits is deemed to be a member of the working set. The value of n has to be determined experimentally, although performance has been found not to be very sensitive to the exact value.
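
A sketch of that scheme. The 8-bit counter width, the tick routine and all names are illustrative; in a real system the R (referenced) bits are set by the hardware and sampled at each clock interrupt:

AGE_BITS = 8                                  # width of each page's ageing counter

def clock_tick(counters, referenced):
    # One tick of the ageing algorithm: shift every counter right, then
    # load the R bit into the top position for each page referenced
    # during the last tick.
    for page in counters:
        counters[page] >>= 1
        if page in referenced:
            counters[page] |= 1 << (AGE_BITS - 1)

def current_working_set(counters, n):
    # A page is in the working set if any of the n high-order bits of its
    # counter is 1, i.e. it was referenced within the last n clock ticks.
    mask = ((1 << n) - 1) << (AGE_BITS - n)
    return {page for page, c in counters.items() if c & mask}

counters = {0: 0, 1: 0, 2: 0}
clock_tick(counters, referenced={0, 1})       # tick 1: pages 0 and 1 used
clock_tick(counters, referenced={0})          # tick 2: only page 0 used
print(current_working_set(counters, n=1))     # {0}     used in the last tick
print(current_working_set(counters, n=2))     # {0, 1}  used in the last two ticks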
