CPU Scheduling

Tom Kelliher, CS 311

Mar. 2, 2012


From last time:

  1. Processes and threads; context switching.


Outline:

  1. Traditional process scheduling.

  2. Comparison criteria.

  3. Priority functions.

  4. Thread scheduling.

  5. Multiprocessor scheduling issues.

  6. Gantt chart examples.


Traditional Process Scheduling


What are the three schedulers, and how do they function?

Model of a process: CPU-I/O burst cycles:


Distribution of bursts.

CPU bursts terminate due to:

  1. Process waits on event (blocked, suspended).

  2. Process' quantum expires (back to ready Q).

Preemptive Scheduling

  1. Non-preemptive scheduling.

    Context switches occur:

    1. Running process terminates.

    2. Running process blocks.

    I.e., running process controls the show.

    New process takes over if running process blocks.

  2. Preemptive scheduling.

    Principle: Highest priority ready process runs.

    Quantum timers come into play.

    Additional context switches:

    1. Higher priority process changes state from blocked to ready, preempting running process.

    2. Quantum expires (kernel preempts).

    Higher overhead.

  3. Selective preemptive scheduling.

Comparison Criteria

User oriented, performance related criteria.

User oriented, other criteria.

System oriented, performance related criteria.

System oriented, other criteria.

The Priority Function

Examples of Priority Functions

Implemented using queues or priority queues.

  1. FCFS, FIFO -- non-preemptive. Run oldest process. The standard batch priority function.

  2. LIFO -- non-preemptive. Run newest process. Not very useful.

  3. SJF -- shortest job first. Non-preemptive. Run process with shortest required CPU time.

    Provably optimal with respect to average turnaround and waiting time.


  4. SRT -- (shortest remaining time) preemptive version of SJF.

  5. RR -- (round robin) preemptive FCFS with a time quantum limitation. Used in time sharing systems.

  6. Multi-level queues -- prioritized set of queues, $Q_1$ to $Q_n$.

    1. Processes in queue $i$ always have priority over processes in queues $j > i$.

    2. A process remains within the same queue.

    3. Each queue may have its own scheduling algorithm.

    4. Alternative: each queue gets some fixed slice of the total CPU cycles.

    5. Example: Queue for interactive jobs, RR scheduling; queue for batch jobs, FCFS.

  7. Multi-level feedback queues -- similar to multi-level queues, except that a process can move between different queues, based upon CPU usage.

    1. Must specify rules for moving the processes between queues.

    2. Ordinarily, lower priority queues have larger quanta, etc.

    3. Linux uses this method, with a base quantum of 100 ms. 140 priorities and run queues. A limited amount of dynamism for non-realtime tasks. Higher priority tasks have longer quanta, but get ``expired,'' preventing starvation.
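The SJF optimality claim above can be checked by brute force: for a set of jobs all ready at time 0, no ordering beats shortest-first on average waiting time. A minimal sketch (the burst lengths here are made up for illustration):

```python
from itertools import permutations

def avg_waiting(bursts):
    """Average waiting time when jobs run to completion in the given order."""
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed     # this job waited for all earlier bursts
        elapsed += b
    return total / len(bursts)

bursts = [6, 3, 8, 1]        # hypothetical CPU burst lengths, all ready at t = 0
sjf = sorted(bursts)         # shortest job first
best = min(permutations(bursts), key=avg_waiting)
print(avg_waiting(sjf), avg_waiting(best))
```

Running all $4! = 24$ orderings confirms that none has a smaller average wait than the sorted (SJF) order.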

Scheduling Examples

Suppose the following jobs arrive for processing at the times indicated and run with the specified CPU bursts (at the end of a burst a process waits for one time unit on a resource). Assume that a just-created job enters the ready queue after any job entering the ready queue from the wait queue.

Job   Arrival Time   CPU Bursts
 1         0           1, 2
 2         1           1, 3
 3         2           1, 1

Calculate the average turnaround time for each of the scheduling disciplines listed:

  1. First Come First Served.
  2. Shortest Remaining Time (assume that the running time is the sum of the CPU bursts).
  3. Round robin with a quantum of 1.

Don't forget the ``bubble'' cycles (where no process is runnable), if required.
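The FCFS case can be worked mechanically. A sketch of a simulator, assuming the one-unit resource wait occurs after every burst except the last (the final burst ends with termination), and using the stated tie rule:

```python
from collections import deque

def fcfs(jobs):
    """Non-preemptive FCFS with I/O waits.

    jobs: list of (arrival_time, [cpu_bursts]).  After every burst except
    the last, the job waits 1 time unit on a resource, then re-enters the
    ready queue.  On a tie, a job returning from the wait queue enters the
    ready queue ahead of a just-created job.  Returns {job: completion time}.
    """
    n = len(jobs)
    ready = deque()              # (job index, index of next burst)
    wakeups = []                 # (wake time, job, burst) after I/O waits
    created = [False] * n
    done = {}
    t = 0
    while len(done) < n:
        # Admit pending jobs in time order; wakeups beat creations on ties.
        pending = [(w, 0, j, b) for (w, j, b) in wakeups if w <= t]
        wakeups = [w for w in wakeups if w[0] > t]
        for j, (arr, _) in enumerate(jobs):
            if not created[j] and arr <= t:
                created[j] = True
                pending.append((arr, 1, j, 0))
        for _, _, j, b in sorted(pending):
            ready.append((j, b))
        if not ready:
            t += 1               # "bubble" cycle: nothing runnable
            continue
        j, b = ready.popleft()
        t += jobs[j][1][b]       # run the whole burst (non-preemptive)
        if b + 1 == len(jobs[j][1]):
            done[j] = t          # last burst: job terminates
        else:
            wakeups.append((t + 1, j, b + 1))
    return done

jobs = [(0, [1, 2]), (1, [1, 3]), (2, [1, 1])]
done = fcfs(jobs)
print(done, sum(done[j] - jobs[j][0] for j in done) / len(jobs))
```

Under these assumptions the three jobs complete at times 4, 8, and 9, for an average turnaround of 6; the SRT and RR cases need different dispatch logic in the loop.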

Thread Scheduling

Kernel-level (system scope) vs. user-level (process scope) threads.

pthread possibilities (implementation dependent):

  1. Quantum allocation.

  2. Process scope thread priorities; starvation.

  3. Process scope threads with same priority: FIFO (no preemption) or RR (preemption) algorithms available.

Multiprocessor Scheduling Issues

  1. Symmetric vs. asymmetric multiprocessing: $1$ or $n$ run queues.

  2. Processor affinity: maximize cache hit rates vs. load balancing vs. specialized devices attached to a single CPU.

  3. Hyperthreading to reduce memory stall-forced CPU idling.

  4. Virtualization: When a process quantum on a guest OS isn't all it's cracked up to be.
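The affinity-versus-load-balancing tension in item 2 can be sketched as a dispatch heuristic: keep a waking task on the CPU whose cache is still warm for it, unless the run queues have drifted too far apart. The function and its threshold are hypothetical, not any particular kernel's policy:

```python
def pick_cpu(last_cpu, queue_lens):
    """Pick a run queue for a waking task.

    Prefer the task's last CPU (warm cache) unless its queue is at least
    IMBALANCE entries longer than the shortest queue, in which case
    migrate the task for load balance.  IMBALANCE is a made-up threshold.
    """
    IMBALANCE = 2
    shortest = min(range(len(queue_lens)), key=queue_lens.__getitem__)
    if last_cpu is not None and \
            queue_lens[last_cpu] - queue_lens[shortest] < IMBALANCE:
        return last_cpu          # affinity: cache hits beat perfect balance
    return shortest              # queues too uneven: migrate
```

For example, a task last on CPU 0 stays there when the queues are `[2, 1]`, but migrates to CPU 1 when they are `[4, 1]`.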

Thomas P. Kelliher 2012-03-01