G53OPS - Operating Systems

This course is run at The University of Nottingham within the School of Computer Science & IT. The course is run by Graham Kendall (EMAIL : gxk@cs.nott.ac.uk)


History of Operating Systems


This section is based on (Tanenbaum, 1992), pages 5-12.

In this section we take a brief look at the history of operating systems - which is almost the same as looking at the history of computers.

If you are particularly interested in the history of computers you might like to read (Levy, 1994). Although the title of the book suggests activities of an illegal nature, the term hacker originally referred to people who had an intimate knowledge of computers, were addicted to using them and were constantly extending their knowledge.

You are probably aware that Charles Babbage is credited with designing the first digital computer, which he called the Analytical Engine. It is a matter of regret that he never managed to build it: the design was entirely mechanical, and the technology of the day could not produce the components to the required precision. Of course, Babbage's machine did not have an operating system.

First Generation (1945-1955)

Like many developments, the first digital computer was developed due to the motivation of war. During the second world war many people were developing automatic calculating machines. For example:

· By 1941 a German engineer (Konrad Zuse) had developed a computer (the Z3) that was used in the design of airplanes and missiles.

· In 1943, the British had built a code breaking computer called Colossus which decoded German messages (in fact, Colossus only had a limited effect on the development of computers as it was not a general purpose computer - it could only break codes - and its existence was kept secret until long after the war ended).

· By 1944, Howard H. Aiken, working with IBM, had built an electromechanical calculator that created ballistic charts for the US Navy. This computer contained about 500 miles of wiring and was about half as long as a football field. Called the Harvard IBM Automatic Sequence Controlled Calculator (Mark I, for short), it took between three and five seconds to do a calculation and was inflexible in that the sequence of calculations could not be changed. But it could carry out basic arithmetic as well as evaluate more complex equations.

· ENIAC (Electronic Numerical Integrator and Computer) was developed by John Presper Eckert and John Mauchly. It consisted of 18,000 vacuum tubes, 70,000 soldered resistors and five million soldered joints. It consumed so much electricity (160kW) that an entire section of Philadelphia saw its lights dim whilst it was running. ENIAC was a general purpose computer that ran about 1,000 times faster than the Mark I.

· In 1945 John von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC), which had a memory that held a program as well as data. In addition, its central processing unit allowed all computer functions to be coordinated through a single source. The UNIVAC I (Universal Automatic Computer), built by Remington Rand in 1951, was one of the first commercial computers to make use of these advances.

These first computers filled entire rooms with thousands of vacuum tubes. Like the Analytical Engine they did not have an operating system; they did not even have programming languages, and programmers had to physically wire the computer to carry out their intended instructions. Programmers also had to book time on the computer, as each needed dedicated use of the machine.

Second Generation (1955-1965)

Vacuum tubes proved very unreliable, and a programmer wishing to run a program could quite easily spend all their time searching for and replacing tubes that had blown. The mid fifties saw the development of the transistor which, as well as being smaller than the vacuum tube, was much more reliable.

It now became feasible to manufacture computers that could be sold to customers willing to part with their money. Of course, the only people who could afford computers were large organisations who needed large air conditioned rooms in which to place them.

Now, instead of programmers booking time on the machine, the computers were under the control of computer operators. Programs were submitted on punched cards that were placed onto a magnetic tape. This tape was given to the operators who ran the job through the computer and delivered the output to the expectant programmer.

As computers were so expensive, methods were developed that allowed the computer to be as productive as possible. One method of doing this (which is still in use today) is the concept of batch jobs. Instead of submitting one job at a time, many jobs were placed onto a single tape and these were processed one after another by the computer. The ability to do this can be seen as the first real operating system (although, as we said above, depending on your view of an operating system, much of the complexity of the hardware had been abstracted away by this time).

Third Generation (1965-1980)

The third generation of computers is characterised by the use of Integrated Circuits as a replacement for transistors. This allowed computer manufacturers to build systems that users could upgrade as necessary. At this time IBM introduced its System/360 range and ICL introduced its 1900 range (this would later be updated to the 2900 range, the 3900 range and the SX range, which is still in use today).

Up until this time, computers were single tasking. The third generation saw the start of multiprogramming. That is, the computer could give the illusion of running more than one task at a time. Being able to do this allowed the CPU to be used much more effectively. When one job had to wait for an I/O request, another program could use the CPU.
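
As a rough, back-of-envelope model (not part of the original notes): if each job in memory spends a fraction p of its time waiting for I/O, and the n jobs in memory wait independently of one another, the CPU is only idle when all n jobs are waiting at once, so CPU utilisation is roughly 1 - p^n. The short Python sketch below simply evaluates this approximation.

def cpu_utilisation(p, n):
    # The CPU is idle only when all n jobs are waiting for I/O at the same
    # time, which happens with probability p**n if the waits are independent.
    return 1 - p ** n

# Jobs that spend 80% of their time waiting on cards, tapes or discs:
for n in (1, 2, 4):
    print(n, "job(s):", round(100 * cpu_utilisation(0.8, n)), "% of CPU used")

With a single such job the CPU would be busy only about 20% of the time; keeping four in memory pushes this towards 60%, which is the whole point of multiprogramming.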

The concept of multiprogramming led to a need for a more complex operating system. One was now needed that could schedule tasks and deal with all the problems that this brings (which we will be looking at in some detail later in the course).

In implementing multiprogramming, the system was confined by the amount of physical memory that was available (unlike today where we have the concept of virtual memory).

Another feature of third generation machines was that they implemented spooling. This allowed punched cards to be read onto disc as soon as they were brought into the computer room, eliminating the need to store the jobs on tape, with all the problems that brought.

Similarly, the output from jobs could also be stored to disc, thus allowing programs that produced output to run at the speed of the disc, and not the printer.
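
As a hypothetical illustration (in Python, not anything taken from these early systems), spooling can be thought of as a queue on disc sitting between the job and a slow device, so that neither side waits directly on the other's speed:

import queue, threading, time

spool = queue.Queue()        # stands in for the spool files held on disc

def job():
    # The job writes its output at disc speed and finishes quickly.
    for line in ("PAYROLL RUN", "TOTAL: 42"):
        spool.put(line)
    spool.put(None)          # sentinel: end of output

def printer():
    # The printer drains the spool at its own (much slower) pace.
    while True:
        line = spool.get()
        if line is None:
            break
        time.sleep(0.5)      # slow line printer
        print(line)

threading.Thread(target=printer).start()
job()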

Although third generation machines were far superior to first and second generation machines, they did have a downside. Up until this point programmers were used to giving their job to an operator (in the case of second generation machines) and watching it run (often through the computer room door, which the operator kept closed, while the programmers pressed their noses up against the glass). The turnaround of jobs was fairly fast.
Now, this changed. With the introduction of batch processing the turnaround could be hours, if not days.

This problem led to the concept of time sharing. This allowed programmers to access the computer from a terminal and work in an interactive manner.

Obviously, with the advent of multiprogramming, spooling and time sharing, operating systems had to become a lot more complex in order to deal with all these issues.

Fourth Generation (1980-present)

The late seventies saw the development of Large Scale Integration (LSI). This led directly to the development of the personal computer (PC). These computers were (originally) designed to be single user and highly interactive, and to provide graphics capability.

One of the requirements for the original PC produced by IBM was an operating system and, in what is probably regarded as the deal of the century, Bill Gates supplied MS-DOS, on which he built his fortune.

In addition, mainly on non-Intel processors, the UNIX operating system was being used.

It is still (largely) true today that there are mainframe operating systems (such as VME which runs on ICL mainframes) and PC operating systems (such as MS-Windows and UNIX), although the edges are starting to blur. For example, you can run a version of UNIX on ICL's mainframes and, similarly, ICL were planning to make a version of VME that could be run on a PC.

Fifth Generation (Sometime in the future)

If you look through the descriptions of the computer generations you will notice that each has been influenced by new hardware that was developed (vacuum tubes, transistors, integrated circuits and LSI).

The fifth generation of computers may be the first that breaks with this tradition, in that advances in software will be as important as advances in hardware.

One view is that a fifth generation computer will be one that is able to interact with humans in a way that is natural to us. No longer will we use mice and keyboards; instead we will be able to talk to computers in the same way that we communicate with each other. In addition, we will be able to speak in any language and the computer will be able to translate into any other language. Computers will also be able to reason in a way that imitates humans.

Just being able to accept (and understand!) the spoken word and carry out reasoning on that data requires many things to come together before we have a fifth generation computer. For example, advances need to be made in AI (Artificial Intelligence) so that the computer can mimic human reasoning. It is also likely that computers will need to be more powerful. Maybe parallel processing will be required. Maybe a computer based on a non-silicon substance will be needed to fulfil that requirement (as silicon has a theoretical limit as to how fast it can go).

This is one view of what will make a fifth generation computer. At the moment, as we do not have any, it is difficult to provide a reliable definition.

Another View

The view of how computers have developed, with regard to where the boundaries between the generations lie, differs slightly depending on who you ask. Ask somebody else and they might prefer the slightly amended model below.

Most commentators agree on what constitutes the first generation: machines that were developed during the war and were characterised by their use of vacuum tubes.

Similarly, most people agree that the transistor heralded the second generation.

And, the third generation came about because of the development of the IC and operating systems that allowed multiprogramming. But, in the model above, we stated that the third generation ran from 1965 to 1980.

Some people would argue that the fourth generation actually started in 1971 with the introduction of LSI, then VLSI (Very Large Scale Integration) and then ULSI (Ultra Large Scale Integration).

Really, all we are arguing about is when the PC revolution started. Was it in the early 1970s, when LSI first became available? Or was it in 1981, when the IBM PC was launched?

Case Study

To show, via an example, how an operating system developed, we give a brief history of ICL's mainframe operating systems.

One of ICL's first operating systems was known as manual exec (short for executive). It ran on its 1900 mainframe computers, provided a level of abstraction above the hardware and also allowed multiprogramming.

However, it was very much a manual operating system. The operators had to load and run each program, using commands such as these:

LO#RA15#REP3
GO#RA15 21

The first instruction told the computer to load the program called RA15 from a program library called REP3. This loaded the program from disc into memory.
The GO instruction told the program RA15 to start running, using entry point 21. This (typically) told the program to read one or more punched cards from the card reader, which held information to control the program.

The important point is that the computer operator had control over every program in the computer. Each program had to be manually loaded into memory, initiated and finally deleted from the memory of the computer (which was typically 32K). In between, any prompts had to be dealt with. This might mean allowing the computer to use tape decks, allowing the program to print special stationery or dealing with any unusual events.

ICL then brought out an operating system they called GEORGE (GEneral ORGanisational Environment). The first version was called George 1 (G1). G2 and G2+ quickly followed.

The idea behind G1/2/2+ was that it ran on top of the executive, so it was not an operating system as such (in the same way that Windows 3.1 is not a true operating system, as it is only a GUI that runs on top of DOS).

What G2+ (we'll ignore the previous versions for now) allowed you to do was submit jobs to the machine; G2+ would then schedule those jobs and process them accordingly. Some of the features of G2+ included:

· It allowed you to batch many programs into a single job. For example, you could run a program that extracted data from a masterfile, run a sort and then run a print program to print the results. Under manual exec you would need to run each program manually. It was not unusual for a typical job to process twenty or thirty separate programs.
· You could write parameterised macros (or JCL - Job Control Language) so that you could automate tasks.
For example, you could capture prompts that would normally be sent to the operator and have the macro answer those prompts.
· You could provide parameters at the time you submitted the job so that the jobs could run without user intervention.
· You could submit many jobs at the same time so that G2+ would run them one after another.
· You could adjust the scheduling algorithm (via the operators console) so that an important job could be run next - rather than waiting for all the jobs in the input queue to complete.
· You could inform G2+ of the requirements of each job so that it would not run (say) two jobs which both required four tape decks when the computer only had six tape decks (a simple sketch of this idea follows the list).
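
As a purely illustrative sketch (written in Python; no attempt is made to reproduce real G2+ job descriptions), scheduling by declared requirements amounts to only starting a job when the devices it asked for are actually free:

free_decks = 6
job_queue = [("SORT", 4), ("MERGE", 4), ("PRINT", 1)]   # (job name, tape decks needed)

started = []
for name, decks in job_queue:
    if decks <= free_decks:
        started.append(name)
        free_decks -= decks          # the job holds its decks until it finishes
    else:
        print(name, "must wait: only", free_decks, "deck(s) free")
print("started now:", started)

Here SORT and PRINT can start straight away, but MERGE is held back because running it alongside SORT would need eight decks on a six-deck machine.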

Under G2+, the operators still looked after individual jobs (albeit, they now consisted of several programs).

When ICL released George 3 (G3) and later G4, all this changed. The operators no longer looked after individual jobs. Instead they looked after the system as a whole.

Jobs could now be submitted via interactive terminals. Whereas the operators used to submit the jobs, this role was now typically carried out by a dedicated scheduling team, who would set up the workload that had to be run overnight and would set up dependencies between the jobs.

In addition, development staff would be able to issue their own batch jobs and also run jobs in an interactive environment.

If there were any problems with any of the jobs, the output would either go to the development staff or to the technical support staff where the problem would be resolved and the job resubmitted.

Operators, under this type of operating system, were, in some people's opinion, little more than "tape monkeys", although the amount of technical knowledge held by the operators varied greatly from site to site.

In addition to being an operating system in its own right, G3 had the following features:


· To use the machine you had to run the job in a user (what we would now call a user account). This is a widely used concept today but was not a requirement of G2+.
· The Job Control Language (JCL) was much more extensive than that of G2+.
· It allowed interactive sessions.
· It had a concept of filestore. When you created a file you had no idea where it was stored; G3 simply placed it in filestore. This was a vast amount of disc space used to store files. In fact the filestore was virtual, in that some of it was on tape. Which files were placed on tape was controlled by G3. For example, you could set the parameters so that files over a certain size, or files that had not been used for a certain length of time, were more likely to be placed onto tape (a sketch of this kind of policy follows the list). If your job requested a file that was in filestore but had been copied to tape, the operator would be asked to load that tape. The operator had no idea what file was being requested or who it was for (although they could find out); G3 simply asked for a TSN (Tape Serial Number) to be loaded.
· The operators ran the system, rather than individual jobs.
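
The migration policy described above can be sketched as follows (hypothetical Python with made-up thresholds; G3's actual parameters are not reproduced here):

def should_migrate_to_tape(size_bytes, days_since_last_use):
    # Made-up policy: large files, or files untouched for a long time, are
    # the most likely candidates to be moved from disc to tape.
    SIZE_LIMIT = 10_000_000      # bytes
    AGE_LIMIT = 90               # days
    return size_bytes > SIZE_LIMIT or days_since_last_use > AGE_LIMIT

print(should_migrate_to_tape(50_000, 200))   # True  - old, rarely used file
print(should_migrate_to_tape(50_000, 3))     # False - small and recently used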

After G3/G4, ICL released their VME (Virtual Machine Environment) operating system. This is still the operating system used on ICL mainframes today. VME, as its name suggests, creates virtual machines for jobs to run in. If you log onto (or run a job on) VME, a virtual machine will be created for your session. In addition, VME is written to cater for the many different workloads that mainframes have to perform. It supports databases (using ICL's DBMS - Database Management System - and, more recently, relational databases such as Ingres and Oracle), TP (Transaction Processing) systems as well as batch and interactive working.

The job control language, which under VME is called SCL (System Control Language), is a lot more sophisticated, and you can often carry out operations such as file I/O without having to use another language.

There is still the concept of filestore but, due to the (relatively) low cost of disc space and the problems associated with having to wait for tapes, all filestore is now on disc. In addition, the amount of filestore available to a user or group of users is under the control of the operating system (and thus the technical support teams).

Like G3, the operators control the entire system and are not normally concerned with individual jobs. In fact, there is a move towards lights-out working. This removes the need for operators entirely; if there are any problems, VME will telephone a pager.
