Homework 2 Solution

CS42

  1. (10 pts.) What two advantages do threads have over multiple processes? What major disadvantage do they have? Suggest one application, not discussed in class, that would benefit from the use of threads, and one that would not.

    The major conceptual breakthrough with threads is a usable shared address space. From this, the two most important advantages are:

    1. Threads can share data directly and cheaply, since they all see the same address space.
    2. Thread creation and context switching are far less expensive than for processes, since no new address space need be built or switched.

    Another advantage is conferred when multiple processors are available: a task can run on several processors simultaneously.

    Data sharing is a double-edged sword; I consider it also to be a major disadvantage of threads. I've seen what happens when user-level threads walk over each other's stacks; these interactions can introduce extremely subtle bugs which are mind-numbingly difficult to track down. Processes, by contrast, are insulated from each other, and any interaction can be minimized and controlled.
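
    To make the data-sharing point concrete, here's a minimal sketch using POSIX threads (the counter/worker names are purely illustrative, not from the assignment): two threads increment a shared counter, and without the mutex some increments can be lost; this is exactly the kind of subtle interaction described above.

    #include <pthread.h>
    #include <stdio.h>

    // Shared state: visible to both threads because they share one address space.
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
       for (int i = 0; i < 1000000; i++) {
          pthread_mutex_lock(&lock);    // omit this and increments may be lost
          counter++;
          pthread_mutex_unlock(&lock);
       }
       return NULL;
    }

    int main(void) {
       pthread_t t1, t2;
       pthread_create(&t1, NULL, worker, NULL);
       pthread_create(&t2, NULL, worker, NULL);
       pthread_join(t1, NULL);
       pthread_join(t2, NULL);
       printf("counter = %ld\n", counter);   // 2000000 with the mutex held
       return 0;
    }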

    ``Fair'' thread management is also a problem: user-level threads can suffer from starvation, while kernel-level threads begin to approach the complexity of processes.

    The best ``application'' that I can think of for threads is the kernel. System call ``throughput'' can be greatly increased if the kernel is threaded. Another big win is that the kernel can run on a multiprocessor system. Every operating system that I can think of which runs on a multiprocessor is threaded. Server applications (NFS) are good candidates for threads.

    I'd say that whoami in Unix is a pretty poor candidate for threading. (I hope you weren't expecting something of an earth-shattering nature.)

  2. (15 pts.) Consider a system in which mailboxes are used for IPC. A kernel-level disk process is associated with the single disk drive in the system and it handles read/write requests from user-level processes. A system library provides fread() and fwrite() functions for user-level processes to access a disk. The functions are defined:
    1. int fread(int sector, char *buff) --- Read data sector sector from the disk into buff. Block until the operation completes, returning 1 on success and 0 on failure.
    2. int fwrite(int sector, char *data) --- Write data from data to disk sector sector, blocking until the operation completes. Return 1 on success and 0 on failure.

    Write pseudo-code for the disk process and the two system level library functions. Clearly indicate the content of mailbox messages.

    As an example, picture three user processes, each with its own reply mailbox, all sending read and write requests to the disk process's single request mailbox.

    We can make several observations. The disk process is a ``well-known'' process, so every fread() and fwrite() knows which mailbox to send its request to. On the other hand, the disk process must know which user mailbox to use for the reply, so the user's mailbox number must be part of the request. The request mailbox may well contain a mix of read and write requests at any given time.

    A disk operation request message will contain these fields:

    1. the requesting process' reply mailbox number (MyMbox),
    2. the operation type (Read or Write),
    3. the sector number, and
    4. a pointer to the user's buffer (the destination for a read, the source for a write).

    Send() does not block, so another mechanism must be used to block fread() and fwrite(). The best choice is to perform a Receive(), awaiting the status reply from the disk process.

    The disk process communicates directly with the disk. It will perform all StartRead()s and StartWrite()s and busy wait on a disk.busy flag. Its request mailbox number is kept in the global variable DiskProcMbox, which is available to all processes.

    Here are the functions:

    int fread(int sector, char *buff) {
    
       buffer reply;
    
       // Assume every user process has a variable, MyMbox, which
       // is the mailbox number for the process' replies
    
       Send(DiskProcMbox, "<MyMbox>,<Read>,<sector>,<buff>");
       Receive(MyMbox, &reply);
    
       return reply == "OK";
    }
    

    int fwrite(int sector, char *data) {
    
       buffer reply;
    
       Send(DiskProcMbox, "<MyMbox>,<Write>,<sector>,<data>");
       Receive(MyMbox, &reply);
    
       return reply == "OK";
    }
    

    Here's the pseudo-code for the disk process:

    DiskProcess() {
    
       buffer request;
    
       while (1) {
    
          Receive(DiskProcMbox, &request);   // get next request
    
          // process next request
    
          if (request.type == "Write")
             StartWrite(disk, request.sector, request.data);
          else if (request.type == "Read")
             StartRead(disk, request.sector, request.buff);
          else {
             InvalidRequest();   // terminate offending process
             continue;   // skip busy wait and Send()
          }
    
          while (disk.busy)   // wait for I/O to complete
             ;
    
          Send(request.mbox, "<disk.status>");   // disk.status is either "OK"
                                                 // or "ERROR"
       }
    }
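
    In a real implementation the request would more likely be a struct than a formatted string. Here's a rough sketch of one possible layout (the names are illustrative, not prescribed by the assignment):

    typedef enum { Read, Write } op_t;

    typedef struct {
       int   mbox;     // requester's reply mailbox (MyMbox)
       op_t  type;     // Read or Write
       int   sector;   // disk sector to transfer
       char *buff;     // user buffer: destination for reads, source for writes
    } disk_request_t;

    // fread() would then build and send a request along these lines:
    //    disk_request_t req = { MyMbox, Read, sector, buff };
    //    Send(DiskProcMbox, &req);
    //    Receive(MyMbox, &reply);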
    

  3. (15 pts.) Consider the following set of processes:

    Draw four Gantt charts illustrating the execution of these processes using FCFS (non-preemptive), preemptive SJF, non-preemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1).

    Also, for each scheduling algorithm, answer the following:

    1. What is the turnaround time of each process?

    2. What is the waiting time of each process?

    Which algorithm results in the smallest average waiting time?

    Preemptive SJF (SRT) results in the smallest average waiting time.
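
    For reference, turnaround time is completion time minus arrival time, and waiting time is turnaround time minus total CPU burst time. The sketch below uses a made-up process set (not the assignment's table) to show how those numbers fall out of an FCFS schedule once the completion times are known:

    #include <stdio.h>

    // Hypothetical processes, all arriving at time 0, served in FCFS order.
    struct proc { const char *name; int arrival, burst; };

    int main(void) {
       struct proc p[] = { {"P1", 0, 5}, {"P2", 0, 3}, {"P3", 0, 8} };
       int time = 0;

       for (int i = 0; i < 3; i++) {
          time += p[i].burst;                        // FCFS completion time
          int turnaround = time - p[i].arrival;      // completion - arrival
          int waiting    = turnaround - p[i].burst;  // turnaround - burst
          printf("%s: turnaround = %d, waiting = %d\n", p[i].name, turnaround, waiting);
       }
       return 0;
    }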

  4. (10 pts.) The first known correct software solution to the CS problem was proposed by Dekker:
    int flag[2] = { FALSE, FALSE };  /* flag[i] indicates that Pi wants to */
                                     /*  enter its critical section */
    int turn = 0;                    /* turn indicates which process has */
                                     /*  priority in entering its critical */
                                     /*  section */
    
    flag[i] = TRUE;
    while (flag[1 - i])
       if (turn == 1 - i) {
          flag[i] = FALSE;
          while (turn == 1 - i)
             ;
          flag[i] = TRUE;
       }
    CSi;
    turn = 1 - i;
    flag[i] = FALSE;
    
    Prove that the algorithm satisfies the three requirements for a solution to the CS problem.

    1. Mutual Exclusion --- Assume both processes are executing in their critical sections. Note that each process sets its flag before entering its critical section and leaves it set until after it exits. Therefore, both flags are set. Note also that a process enters its critical section only if the other process's flag is reset. Therefore, both flags are reset, a contradiction.

    2. Progress --- The flag of a process not in its critical section is reset, so it cannot block a process attempting to enter its own critical section. If both processes attempt to enter their critical sections at approximately the same time, the turn-checking code will cause one of them to reset its flag, permitting the other to enter its critical section.

    3. Bounded Waiting --- If one process is waiting at the spin loop while the other is executing its critical section, the waiting process will be the next to enter its critical section. This is because the exiting process hands priority to the other by setting turn to the other process's index in the exit code.
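
    As a sanity check, the algorithm can be exercised with two POSIX threads. One caveat: on modern hardware the plain int flags of the pseudo-code are not sufficient; sequentially consistent atomic loads and stores (or explicit fences) are needed for the reasoning above to hold. A rough C11 sketch (the thread and variable names are illustrative):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int flag[2];        // flag[i]: Pi wants to enter
    static atomic_int turn;           // which process has priority
    static long counter = 0;          // shared data touched only inside the CS

    static void *proc(void *arg) {
       int i = *(int *)arg;
       for (int k = 0; k < 100000; k++) {
          atomic_store(&flag[i], 1);               // entry section (Dekker)
          while (atomic_load(&flag[1 - i])) {
             if (atomic_load(&turn) == 1 - i) {
                atomic_store(&flag[i], 0);
                while (atomic_load(&turn) == 1 - i)
                   ;                               // wait for our turn
                atomic_store(&flag[i], 1);
             }
          }
          counter++;                               // critical section
          atomic_store(&turn, 1 - i);              // exit section
          atomic_store(&flag[i], 0);
       }
       return NULL;
    }

    int main(void) {
       pthread_t t[2];
       int id[2] = { 0, 1 };
       pthread_create(&t[0], NULL, proc, &id[0]);
       pthread_create(&t[1], NULL, proc, &id[1]);
       pthread_join(t[0], NULL);
       pthread_join(t[1], NULL);
       printf("counter = %ld (expect 200000)\n", counter);
       return 0;
    }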



Thomas P. Kelliher
Wed Oct 16 08:06:47 EDT 1996