G53OPS - Operating Systems

This course is run at The University of Nottingham within the School of Computer Science & IT. The course is run by Graham Kendall (EMAIL : gxk@cs.nott.ac.uk)


Disk Space Management

This section is based on (Tanenbaum, 1992), pages 170-172.

In the discussions above we saw that a contiguous allocation scheme is unrealistic except in exceptional circumstances (in which case it proves to be a very good scheme).

Therefore, allocating files as a list of blocks seems like a good idea. However, until now, we have not discussed what block size to use. In this section we will consider this question.

We will also look at how we keep track of which blocks are free on the disc. We obviously need this information in order to allocate blocks as necessary.
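As a preview, one common structure for recording free blocks is a bitmap: one bit per disc block. Below is a minimal sketch of the idea; the disc size and function names are invented for illustration, not part of any real file system.

```python
# Minimal sketch of a free-block bitmap (all sizes invented for illustration).

NUM_BLOCKS = 64  # a tiny "disc" of 64 blocks

# One bit per block: 1 = free, 0 = allocated. Start with every block free.
bitmap = (1 << NUM_BLOCKS) - 1

def allocate_block():
    """Find the lowest-numbered free block, mark it allocated, return it."""
    global bitmap
    for block in range(NUM_BLOCKS):
        if bitmap & (1 << block):
            bitmap &= ~(1 << block)  # clear the bit: block now in use
            return block
    return None  # disc full

def free_block(block):
    """Mark a previously allocated block as free again."""
    global bitmap
    bitmap |= 1 << block

b0 = allocate_block()  # first allocation returns block 0
b1 = allocate_block()  # then block 1
free_block(b0)
b2 = allocate_block()  # block 0 is the lowest free block again
```

The bitmap needs only one bit per block, so the entire free-space map for a disc fits in a small, fixed amount of space.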

Block Size

Whatever block size we choose, every file must occupy at least this amount of space (as each file must consist of at least one block). If we choose a large allocation unit, such as a cylinder (which is say 32K), then even a 1K file will occupy 32K of the disc.
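The wasted space in that example can be checked with a short calculation. A file always occupies a whole number of blocks, so the space used is the file size rounded up to the next block boundary (the helper name here is invented):

```python
import math

def space_used(file_size, block_size):
    """Disc space a file actually occupies: a whole number of blocks."""
    blocks = max(1, math.ceil(file_size / block_size))
    return blocks * block_size

# A 1K file stored in 32K allocation units occupies the full 32K,
# wasting 31K of the disc (internal fragmentation).
used = space_used(1024, 32 * 1024)
wasted = used - 1024
print(used, wasted)  # 32768 bytes used, 31744 bytes wasted
```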

However, choosing a small allocation unit (of say 1K) means that files will occupy many blocks, which results in more time spent accessing the file as more blocks have to be located and read.

To take this argument further, accessing a block in a random part of the disc means the read/write heads first have to move to the correct cylinder/track (the seek time). Next the correct sector has to come round so that it is under the read/write heads (the rotational delay, or latency). Only then can the block actually be transferred.

Therefore, there is a trade-off between block size, fast access and wasted space. The usual compromise is to use a block size of 512 bytes, 1K bytes or 2K bytes.
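The trade-off can be illustrated numerically. Assuming a 30 ms average seek, a 16.67 ms rotation time and 32K per track (plausible figures for discs of that era, but invented here, not taken from the text), the effective data rate for reading one randomly placed block grows with block size, while the discussion above shows wasted space grows too:

```python
def data_rate(block, seek_ms=30.0, rotation_ms=16.67, track_bytes=32 * 1024):
    """Effective data rate (bytes/ms) for reading one block at random.

    Time = seek + half a rotation (average rotational delay) + transfer,
    where the transfer time is the fraction of a track the block occupies.
    """
    transfer_ms = (block / track_bytes) * rotation_ms
    total_ms = seek_ms + rotation_ms / 2 + transfer_ms
    return block / total_ms

# Seek and latency dominate for small blocks, so doubling the block size
# nearly doubles the data rate; the usual 512B-2K sizes trade some of
# that speed for far less internal fragmentation.
for block in (512, 1024, 2048, 32 * 1024):
    print(block, round(data_rate(block), 1))
```

The fixed per-access cost (seek plus latency) is why large blocks read faster: the mechanical delays are paid once however much data follows.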

If you are interested, a study of block sizes is reported in (Mullender, 1984).


 Last Updated : 24/01/2002