Interactions Between Processes

Tom Kelliher, CS42

Sept. 18, 1996

Cooperating Processes

Types of processes:

  1. Independent --- No sharing.

  2. Cooperating --- Share data.

Reasons for designing cooperating processes:

  1. Information sharing (shared file).

  2. Computation speedup.

  3. Modularity.

  4. Convenience. Example: editing, compiling, printing in parallel. (Aren't these just multiple independent processes?)

Bounded Buffer Problem

Two cooperating processes: producer, consumer.

Global data:

const int N = 10;   /* buffer capacity */

int buffer[N];      /* circular buffer */
int in = 0;         /* next slot to fill */
int out = 0;        /* next slot to empty */
int full = 0;       /* number of filled slots */
int empty = N;      /* number of empty slots */

Producer:

while (1)
{
   while (empty == 0)       /* busy-wait until a slot is free */
      ;

   buffer[in] = inData;     /* inData: the item just produced */
   in = (in + 1) % N;       /* advance circularly */
   --empty;
   ++full;
}

Consumer:

while (1)
{
   while (full == 0)        /* busy-wait until a slot is filled */
      ;

   outData = buffer[out];   /* outData: the item to consume */
   out = (out + 1) % N;     /* advance circularly */
   --full;
   ++empty;
}
Is there potential for trouble here?
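
Yes, assuming the producer and consumer can be preempted at any point: the updates to full and empty are not atomic. Each ++ or -- typically compiles to a load, modify, store sequence, so the two processes can interleave and lose an update. A sketch of one bad interleaving, with full initially 3 (register names are illustrative):

   Producer (++full)                 Consumer (--full)
   r1 = full      (r1 == 3)
                                     r2 = full      (r2 == 3)
                                     r2 = r2 - 1    (r2 == 2)
                                     full = r2      (full == 2)
   r1 = r1 + 1    (r1 == 4)
   full = r1      (full == 4)        <-- consumer's update is lost

The same hazard applies to empty.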

Threads

Heavyweight process --- expensive context switch.

Thread:

  1. Lightweight process.

  2. Consists of a PC, general-purpose register state, and a stack.

  3. Shares code, heap, resources with peer threads.

  4. Easy context switches.

Task: peer threads, shared memory and resources.

Can peer threads scribble over each other?

What about non-peer threads?
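
A minimal sketch of the sharing, assuming POSIX threads (pthreads) rather than any particular thread package from the text: both peer threads update the same global variable, so peer threads can indeed scribble over each other's data. Non-peer threads belong to a different task (a separate address space) and cannot touch this task's globals.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                        /* global data: visible to all peer threads */

void *peer(void *arg)
{
   (void) arg;                         /* unused */
   shared += 10;                       /* any peer may modify the shared global */
   return NULL;
}

int main(void)
{
   pthread_t t1, t2;

   pthread_create(&t1, NULL, peer, NULL);
   pthread_create(&t2, NULL, peer, NULL);
   pthread_join(t1, NULL);
   pthread_join(t2, NULL);

   printf("shared = %d\n", shared);    /* both updates land in one address space */
   return 0;
}

Note that shared += 10 is itself unsynchronized, so these peers face the same lost-update hazard as the bounded buffer above.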

User-level threads:

  1. Implemented in user-level libraries; thread operations require no system calls.

  2. Kernel only knows about the task.

  3. Threads schedule themselves within the task.

  4. Advantage: fast context switch.

  5. Disadvantages:
    1. Unbalanced kernel-level scheduling: the kernel schedules tasks, not threads, so a task with many threads gets no more CPU than a task with one.

    2. If one thread blocks on a system call, its peer threads are also blocked (the kernel blocks the entire task).

Kernel-level threads:

  1. Kernel knows of individual threads.

  2. Advantage: If a thread blocks, its peers can still proceed.

  3. Disadvantage: Slower context switch (kernel involved).

How do threads compare to processes?

  1. Context switch time.

  2. Shared data space. (Improved throughput for file server: shared data, quicker response.)

Example: Solaris 2

User-level threads are multiplexed upon lightweight processes (LWPs), which the kernel schedules.

IPC Mechanisms

Basics: send(), receive() primitives.

Design Issues:

  1. Link establishment mechanisms:
    1. Direct or indirect naming (see the sketch after this list).

    2. Circuit or no circuit.

  2. More than two processes per link (multicasting).

  3. Link buffering:
    1. Zero capacity.

    2. Bounded capacity.

    3. Infinite capacity.

  4. Variable- or fixed-size messages.

  5. Unidirectional or bidirectional links (symmetry).

  6. Resolving lost messages.

  7. Resolving out-of-order messages.

  8. Resolving duplicated messages.
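
To illustrate the first design issue, here are hypothetical prototypes in the style of the mailbox primitives below; with direct naming the link is established by naming the other process, while with indirect naming both sides name an intermediary (a mailbox or port):

/* Direct naming: sender and receiver name each other. */
int Send(int destProcess, char* message);
int Receive(int srcProcess, char* message);

/* Indirect naming: both sides name a shared mailbox. */
int Send(int mb, char* message);
int Receive(int mb, char* message);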

Mailboxes --- An Indirect Communication Mechanism

Resources owned by kernel.

Messages kept in a queue.

Assume:

  1. Only allocating process may execute receive.

  2. Any process (including the "owner") may send.

  3. Variable-sized messages.

  4. Infinite capacity.

Primitives (a short usage sketch follows the list):

  1. int AllocateMB(void)

  2. int Send(int mb, char* message)

  3. int Receive(int mb, char* message)

  4. int FreeMB(int mb)
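
A sketch of a single process exercising the primitives (MSGSIZE and the return conventions are assumptions; error checking omitted):

char message[MSGSIZE];

int mb = AllocateMB();     /* kernel creates the mailbox, returns its number */

Send(mb, "hello");         /* any process that knows mb may send */
Receive(mb, message);      /* only the allocating process may receive */

FreeMB(mb);                /* release the kernel-owned queue and mailbox */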

Example: Process Synchronization

Consider:

Process1()
{
   ...
   S1;
   ...
}

Process2()
{
   ...
   S2;
   ...
}
How can we guarantee that S1 executes before S2?
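
One answer, using the mailbox primitives above: have Process2 allocate a mailbox (it must be the allocator, since only the allocating process may receive) and block on Receive until Process1 signals that S1 has completed. This assumes Receive blocks on an empty mailbox; syncMB and the message contents are illustrative.

int syncMB;                   /* allocated by Process2, number made known to Process1 */

Process1()
{
   ...
   S1;
   Send(syncMB, "done");      /* signal that S1 has executed */
   ...
}

Process2()
{
   char message[MSGSIZE];
   ...
   Receive(syncMB, message);  /* blocks until Process1's signal arrives */
   S2;
   ...
}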

Example: Tape Drive Allocation and Use

The situation: user processes request tape drives from a tape allocator process via its mailbox (Tamb); for each drive allocated, the allocator forks a handler daemon, and the user process then talks to the drive through the daemon's mailbox.

Tape allocator process:

initialize();
while (1)
{
   Receive(Tamb, message);

   if (message is a request)
   {
      if (there are enough tape drives)
         for each tape drive being allocated
         {
            fork a handler daemon;
            send daemon mb # in message to requesting process;
            update lists;
         }
      else
         send a rejection message;
   }
   else if (message is a return)
   {
      update lists;
      send an ack message;
   }
   else
      ignore illegal messages;
}

Summary of user process actions (a pseudocode sketch of this side follows the list):

  1. Send request to tape allocator.

  2. Receive message back giving mailbox(es) to use in communicating with tape drive(s).

  3. Start sending/receiving with tape drive daemon(s).

  4. Close tape drives.

  5. Send message to tape allocator returning tape drive(s).
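
A pseudocode sketch of the user-process side, in the same style as the allocator above. Tamb is the allocator's mailbox from that pseudocode; myMB, driveMB, DaemonMB, MSGSIZE, and the message variables are illustrative assumptions.

int myMB = AllocateMB();        /* reply mailbox; only this process may receive from it */
int driveMB;
char message[MSGSIZE];

Send(Tamb, requestMsg);         /* 1. request a drive; requestMsg names myMB as the reply box */
Receive(myMB, message);         /* 2. reply carries the handler daemon's mailbox number */
driveMB = DaemonMB(message);    /*    DaemonMB: assumed helper that extracts that number */

Send(driveMB, tapeOp);          /* 3. tape operations go to the daemon ... */
Receive(myMB, message);         /*    ... its replies come back on myMB    */

Send(driveMB, closeMsg);        /* 4. close the drive */

Send(Tamb, returnMsg);          /* 5. return the drive to the allocator */
Receive(myMB, message);         /*    allocator's ack */
FreeMB(myMB);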


