Tom Kelliher, CS42

Sept. 4, 1996

Class Objectives

Why study and write operating systems?

Some things I hope you take with you:

  1. An understanding of what an OS does and how it does it.
  2. Concurrency (a new concept for you).
  3. The design & review process.
  4. Practical applications of algorithms and data structures.

OS As Interface

  1. High level view: virtual machine abstraction --- convenient ``user'' interface. Abstractions: files, applications. I/O devices integrated into filesystem.
  2. Low level: management of real resources: CPU cycles, memory, disk space, device allocations.
  3. Secondary concerns: efficiency, fairness.


  1. Multiprogramming, protection and security.
  2. Virtual memory.
  3. File systems.

Concurrency and Synchronization

Understanding concurrency, pitfalls, mechanisms of control.

Can there really be two processes/threads executing simultaneously?

Example: Banking system where account updates can occur concurrently and independently.

Deposit process:

temp1 = balance;
temp1 += deposit;
balance = temp1;

Withdrawal process:

temp2 = balance;
temp2 -= withdrawal;
balance = temp2;

  1. No concurrency, no problem.
  2. Consider interleaving: if both processes read balance before either writes it back, one of the updates is lost.

Layering/Abstraction Within a Computer System

  1. Hardware: CPU, memory, I/O.
  2. Operating system: kernel, file system, device handlers.
  3. Application programs: editors, compilers, workbench tools.
  4. Users: people, other programs, computers.

The ``Hello world'' program:

  1. Compiled into assembly code.
  2. Assembled into machine code.
  3. Written to a file.
  4. Loaded into memory.
  5. Linked against system libraries.
  6. Executes.
  7. Makes supervisor calls to access I/O devices through OS.

Historical Developments

  1. Common device drivers:
    1. Reinventing the wheel.
    2. Abstract I/O device interface for software.
  2. Resident monitors:
    1. Keep expensive hardware utilized.
    2. Automatic job sequencing --- compile, run user program.
    3. Monitor is always resident in memory.
  3. System parallelism:
    1. Overlap processing of one job with I/O of another.
    2. Off-line processing (card to tape, vice versa).
    3. Spooling.
  4. Multiprogramming (batch):
    1. Overlap processing yields multiple jobs in memory simultaneously --- job pool.
    2. CPU idle when job does I/O.
    3. Automatically switch (CPU scheduling) to ``next'' job.
    4. Job runs until completion or I/O.
    5. Protection?
  5. Timesharing:
    1. Batch systems aren't productive for program development.
    2. Preemptive CPU scheduling.
    3. Brief quantum for each process.
    4. Response time important.
  6. Realtime systems:
    1. Hard deadlines for process completion.
    2. Example: flight surface control on the Space Shuttle.
  7. Workstations:
    1. Essentially, a single-user computer.
    2. Mainframe features trickle down.
    3. Why multiprogram a workstation?
      1. Increase productivity.
      2. Allows modular system design --- consider the windows in a GUI.
  8. Parallel processing:
    1. Tightly coupled multiprocessors.
    2. Shared memory space.
    3. CPU scheduling issues.
    4. Working set locality.
    5. Finding parallelism opportunities within a problem.
  9. Distributed systems:
    1. Loosely-coupled, independent systems.
    2. Private memory spaces.
    3. Client/server computing.
    4. Reliability, resource sharing, load balancing, communication, transparency.
    5. Granularity of parallelism.
    6. Network latency.
