CPU Scheduling

Tom Kelliher, CS42

Sept. 25, 1996

CPU Scheduling

Examples of Priority Functions, Continued

  1. Multi-level queues --- a prioritized set of queues, 1 to n.

    1. Processes in queue i always have priority over processes in queues > i.

    2. A process remains within the same queue.

    3. Each queue may have its own scheduling algorithm.

    4. Alternative: each queue gets some fixed slice of the total CPU cycles.

    5. Example: Queue for interactive jobs, RR scheduling; queue for batch jobs, FCFS.

  2. Multi-level feedback queues --- similar to multi-level queues, except that a process can move between different queues, based upon CPU usage.

    1. Must specify rules for moving processes between queues (see the sketch after this list).

    2. Ordinarily, lower-priority queues have larger quanta, etc.

    3. Unix uses this method, with a 100 ms quantum for all queues; 128 priorities are mapped onto 32 queues.
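
A minimal sketch of the feedback rules in C. The number of queues, the quanta, and the process structure are my own illustrative choices, not part of these notes: the dispatcher always serves the highest-priority non-empty queue, and a process that uses its whole quantum is demoted one level.

#include <stddef.h>

#define NQUEUES 3                                /* hypothetical: three priority levels */

const int quantum[NQUEUES] = {10, 20, 40};       /* lower priority, larger quantum      */

typedef struct process {
   int pid;
   int level;                                    /* index of this process's queue       */
   struct process *next;
} process;

process *head[NQUEUES], *tail[NQUEUES];          /* one FIFO ready queue per level      */

void enqueue(process *p)                         /* append p to its level's queue       */
{
   p->next = NULL;
   if (tail[p->level] == NULL)
      head[p->level] = p;
   else
      tail[p->level]->next = p;
   tail[p->level] = p;
}

process *pick_next(void)                         /* serve the highest non-empty queue;  */
{                                                /* run it for quantum[p->level] units  */
   for (int i = 0; i < NQUEUES; i++)
      if (head[i] != NULL) {
         process *p = head[i];
         if ((head[i] = p->next) == NULL)
            tail[i] = NULL;
         return p;
      }
   return NULL;                                  /* no runnable process                 */
}

void quantum_expired(process *p)                 /* feedback: demote after a full quantum */
{
   if (p->level < NQUEUES - 1)
      p->level++;
   enqueue(p);
}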

Scheduling Examples

Suppose the following jobs arrive for processing at the times indicated and run with the specified CPU bursts (at the end of a burst a process waits for one time unit on a resource). Assume that a just-created job enters the ready queue after any job entering the ready queue from the wait queue.

Calculate the average turnaround time for each of the scheduling disciplines listed:

  1. First Come First Served.
  2. Shortest Remaining Time (assume that the running time is the sum of the CPU bursts).
  3. Round robin with a quantum of 1.

Don't forget the ``bubble'' cycles (where no process is runnable), if required.
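
The job table itself is not reproduced here. As a sketch of how the FCFS turnaround calculation goes, the program below uses made-up arrival times and a single CPU burst per job (ignoring the one-unit resource wait); turnaround time is completion time minus arrival time, averaged over all jobs.

#include <stdio.h>

/* Hypothetical jobs (not the table from the exercise): arrival time and
   one CPU burst each; the one-unit resource wait is ignored.            */
struct job { int arrival, burst; };

int main(void)
{
   struct job jobs[] = { {0, 5}, {1, 3}, {2, 8} };
   int n = sizeof jobs / sizeof jobs[0];
   int clock = 0, total = 0;

   for (int i = 0; i < n; i++) {                 /* FCFS: run in arrival order        */
      if (clock < jobs[i].arrival)
         clock = jobs[i].arrival;                /* idle ("bubble") until arrival     */
      clock += jobs[i].burst;                    /* completion time of job i          */
      total += clock - jobs[i].arrival;          /* turnaround = completion - arrival */
   }

   printf("average turnaround = %.2f\n", (double) total / n);
   return 0;
}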

Process Synchronization

Recall cooperating processes --- they affect or are affected by other processes through access to shared variables.

E.g., threads.

Example

Producer/Consumer (again).

const int N = 10;
int in = 0, out = 0, count = 0;     /* count is shared by producer and consumer */
data buffer[N];

producer()
{
   data next;

   while (1)
   {
      /* produce an item in next */

      while (count == N)            /* busy-wait while the buffer is full */
         ;

      buffer[in] = next;
      in = (in + 1) % N;
      ++count;
   }
}

consumer()
{
   data next;

   while (1)
   {
      while (count == 0)            /* busy-wait while the buffer is empty */
         ;

      next = buffer[out];
      out = (out + 1) % N;
      --count;

      /* consume the item in next */
   }
}

What variables are being shared?

What is the nature of the sharing?
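
The shared variables are buffer, in, out, and count; the danger lies in ++count and --count, which are load/add/store sequences rather than single atomic operations. A minimal POSIX-threads sketch (my own example, not part of these notes; compile with cc -pthread) showing the lost-update problem on an unprotected counter:

#include <pthread.h>
#include <stdio.h>

int count = 0;                       /* shared, unprotected */

/* ++count is really load, add, store; interleaved with --count,
   one of the two updates can be lost.                            */
void *inc(void *arg)
{
   (void) arg;
   for (int i = 0; i < 1000000; i++)
      ++count;
   return NULL;
}

void *dec(void *arg)
{
   (void) arg;
   for (int i = 0; i < 1000000; i++)
      --count;
   return NULL;
}

int main(void)
{
   pthread_t a, b;

   pthread_create(&a, NULL, inc, NULL);
   pthread_create(&b, NULL, dec, NULL);
   pthread_join(a, NULL);
   pthread_join(b, NULL);
   printf("count = %d (expected 0)\n", count);   /* usually nonzero */
   return 0;
}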

Reasons for Synchronizing Processes

  1. For getting shared access to resources (variables, buffers, devices, etc.).

    Critical variables; critical sections.

  2. For communicating.

Cases where cooperating processes need not synchronize to share resources:

  1. All processes are read only.

  2. All processes are write only (CRT screen).

  3. One process writes (atomically), all other processes read.

What does atomic mean?

Characteristics of ``trouble situations:''

  1. Multiple processes doing updates non-atomically.

  2. Non-atomic write processes coupled with read or update processes.

The Critical Section Problem

The overlapping portion of each process, where the shared variables are being accessed.

Necessary conditions for a solution to the c.s. problem:

  1. Mutual Exclusion --- if Pi is executing in one of its critical sections, then no other process Pj, j != i, is executing in its critical sections.

  2. Progress --- a process operating outside of its critical section cannot prevent other processes from entering theirs; when several processes attempt to enter their critical sections simultaneously, the decision of which one enters must be made eventually.

  3. Bounded Waiting --- a process attempting to enter its critical region will be able to do so eventually.

Assumptions:

  1. No assumptions are made about the relative speeds of the processes.

  2. No process may remain in its critical section indefinitely (i.e., it may not terminate in its critical section).

  3. A memory operation (read or write) is atomic --- it cannot be interrupted. For now, we do not assume indivisible read-modify-write (RMW) cycles; see the sketch following this list.
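
As one illustration of what can be done under exactly these assumptions --- atomic loads and stores, but no indivisible RMW cycles --- here is a sketch of Peterson's two-process algorithm, using the mutexbegin/mutexend names from the model in the next section. This is offered as background; it is not the hardware approach developed below.

/* Peterson's algorithm for two processes, i = 0 and i = 1.
   Relies only on atomic loads and stores of flag[] and turn.
   (On modern hardware, memory fences would also be required.)  */

volatile int flag[2] = {0, 0};          /* flag[i]: Pi wants to enter      */
volatile int turn = 0;                  /* which process defers            */

void mutexbegin(int i)                  /* CS entry for process i          */
{
   int j = 1 - i;

   flag[i] = 1;                         /* announce intent                 */
   turn = j;                            /* give the other process priority */
   while (flag[j] && turn == j)         /* wait while Pj wants in and has priority */
      ;
}

void mutexend(int i)                    /* CS exit for process i           */
{
   flag[i] = 0;
}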

Model

(for n processes, P0, ..., Pn-1)

Pi:
do {
   mutexbegin;       /* CS entry */
   CSi;
   mutexend;         /* CS exit  */
   non-CS;           /* remainder section */
} while (!done);

Hardware Solutions

Fetch-and-increment (FAI) solution to the CS problem for n processes (a runnable C11 translation is sketched after the correctness proof):

int nextTicket = 0, serving = 0;    /* shared control variables */

mutexbegin()
{
   int myTicket;

   myTicket = FAI(nextTicket);      /* atomically fetch and increment: take a ticket */

   while (myTicket != serving)      /* spin until this ticket is being served */
      ;
}

mutexend()
{
   ++serving;                       /* admit the holder of the next ticket */
}

Correctness proof:

  1. Mutual exclusion: by contradiction --- if two processes were in their CSs at the same time, each would hold a ticket equal to the current value of serving, so their tickets would be equal, contradicting the distinct values returned by FAI.

    Assumption: no process ``plays'' with control variables.

  2. Progress: by inspection, there's no involvement from processes operating outside their CSs.

  3. Bounded waiting: myTicket is at most n more than serving.
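
FAI above is pseudocode for a fetch-and-increment instruction. A runnable sketch of the same ticket scheme using C11 <stdatomic.h> (my own translation, not part of the original notes):

#include <stdatomic.h>

atomic_uint nextTicket = 0, serving = 0;         /* shared control variables */

void mutexbegin(void)
{
   /* atomic_fetch_add returns the previous value: this caller's ticket. */
   unsigned int myTicket = atomic_fetch_add(&nextTicket, 1);

   while (atomic_load(&serving) != myTicket)
      ;                                          /* spin until served        */
}

void mutexend(void)
{
   atomic_fetch_add(&serving, 1);                /* admit the next ticket    */
}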


