Interactions Between Processes
Tom Kelliher, CS42
Sept. 18, 1996
Types of processes:
- Independent --- No sharing.
- Cooperating --- Share data.
Reasons for designing cooperating processes:
- Information sharing (shared file).
- Computation speedup.
- Modularity.
- Convenience. Example: editing, compiling, printing in parallel.
(Aren't these just multiple independent processes?)
Two cooperating processes: producer, consumer.
Global data:
const int N = 10;
int buffer[N];     /* circular buffer */
int in = 0;        /* next slot to fill */
int out = 0;       /* next slot to empty */
int full = 0;      /* count of filled slots */
int empty = N;     /* count of empty slots */
Producer:
while (1)
{
    while (empty == 0)
        ;                    /* busy-wait for an empty slot */
    buffer[in] = inData;
    in = (in + 1) % N;       /* advance circularly */
    --empty;
    ++full;
}
Consumer:
while (1)
{
    while (full == 0)
        ;                    /* busy-wait for a full slot */
    outData = buffer[out];
    out = (out + 1) % N;     /* advance circularly */
    --full;
    ++empty;
}
Is there potential for trouble here?
Heavyweight process --- expensive context switch.
Thread:
- Lightweight process.
- Consists of a PC, general-purpose register state, and a stack.
- Shares code, heap, resources with peer threads.
- Easy context switches.
Task: a collection of peer threads sharing memory and resources.
Can peer threads scribble over each other?
What about non-peer threads?
User-level threads:
- Implemented in user-level libraries; no system calls.
- Kernel only knows about the task.
- Threads schedule themselves within task.
- Advantage: fast context switch.
- Disadvantages:
- Unbalanced kernel-level scheduling (the kernel allocates CPU time to tasks, not to individual threads).
- If one thread blocks on a system call, peer threads are also
blocked.
Kernel-level threads:
- Kernel knows of individual threads.
- Advantage: If a thread blocks, its peers can still proceed.
- Disadvantage: Slower context switch (kernel involved).
How do threads compare to processes?
- Context switch time.
- Shared data space. (Example: a file server gains throughput because its threads share cached data and respond more quickly.)
User-level threads can also be multiplexed upon lightweight processes, combining the advantages of both approaches.
Message passing basics: send() and receive() primitives.
Design Issues:
- Link establishment mechanisms:
- Direct or indirect naming.
- Circuit or no circuit.
- More than two processes per link (multicasting).
- Link buffering:
- Zero capacity.
- Bounded capacity.
- Infinite capacity.
- Variable- or fixed-size messages.
- Unidirectional or bidirectional links (symmetry).
- Resolving lost messages.
- Resolving out-of-order messages.
- Resolving duplicated messages.
Mailboxes:
- Resources owned by the kernel.
- Messages kept in a queue.
Assume:
- Only allocating process may execute receive.
- Any process (including the "owner") may send.
- Variable-sized messages.
- Infinite capacity.
Primitives:
- int AllocateMB(void)
- int Send(int mb, char* message)
- int Receive(int mb, char* message)
- int FreeMB(int mb)
Consider:
Process1()
{
    ...
    S1;
    ...
}

Process2()
{
    ...
    S2;
    ...
}
How can we guarantee that S1 executes before S2?
The situation: user processes request tape drives from an allocator process via message passing.
Tape allocator process:
initialize();
while (1)
{
    Receive(Tamb, message);
    if (message is a request)
        if (there are enough tape drives)
            for each tape drive being allocated
            {
                fork a handler daemon;
                send daemon mb # in message to requesting process;
                update lists;
            }
        else
            send a rejection message;
    else if (message is a return)
    {
        update lists;
        send an ack message;
    }
    else
        ignore illegal messages;
}
Summary of user process actions:
- Send request to tape allocator.
- Receive message back giving mailbox(es) to use in communicating with
tape drive(s).
- Start sending/receiving with tape drive daemon(s).
- Close tape drives.
- Send message to tape allocator returning tape drive(s).