Virtual Memory I

Tom Kelliher, CS42

Nov. 1, 1996

  1. Virtual memory --- what is it?

  2. What are the advantages?
    1. A program's logical address space can be larger than physical memory.

    2. Degree of multiprogramming can be increased (e.g., with 40 frames of memory, allocate only 5 frames to each process with a 10-page address space, so 8 processes can be resident instead of 4).

    3. Less I/O needed to load/swap a process.

  3. Demand paging.

  4. Why does it work?
    1. A lot of code is rarely run (error-handling routines).

    2. Oversizing of data structures.

    3. Locality of reference:
      1. Spatial.

      2. Temporal.

What we'll consider:

  1. System support.

  2. Page fault sequence.

  3. Replacement policies.

  4. Placement (allocation) policies.

System Support for Virtual Memory

  1. Kernel support.

  2. MMU support.

  3. CPU support.

Kernel Support

  1. Page fault handler.

  2. Page placement policies.

  3. Page replacement policies.

MMU Support

Page table changes:

  1. Valid/invalid bit takes on greater role:
    1. Valid: frame field contains frame number.

    2. Invalid: frame field contains the page's block number on the paging device, or the reference itself is invalid.

  2. Dirty bit: page has been modified.

  3. Read/write bit.

  4. Access counter: (sometimes) for implementing LRU.
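
A rough C sketch of how these fields might be packed into a page table entry; the field names and widths are illustrative assumptions, not any particular MMU's layout:

    #include <stdint.h>

    /* Illustrative page table entry, one 32-bit word per page.       */
    /* Field names and widths are assumptions, not a real MMU format. */
    typedef struct {
        uint32_t valid    : 1;   /* 1: frame field holds a frame number;
                                    0: page is on the paging device (or unmapped) */
        uint32_t dirty    : 1;   /* set by the MMU on any store to the page       */
        uint32_t writable : 1;   /* 0: read-only; a store traps                   */
        uint32_t accessed : 5;   /* (sometimes) reference bits/counter for LRU    */
        uint32_t frame    : 24;  /* frame number if valid, else block number on
                                    the paging device                             */
    } pte_t;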

Traps generated:

  1. Memory fault.

  2. Page fault.

  3. Write on read-only fault.

CPU Support

Instructions must be restartable:

  1. May page fault on instruction fetch.

  2. May page fault on operand fetch/store. State may have been modified. Design approaches:
    1. Checkpoint state.

    2. Ensure that all required pages are in memory before proceeding.

Page Fault Sequence

  1. Memory reference generated.

  2. MMU looks up the page table entry (TLB first, then memory).

  3. Valid bit examined. If set, get frame number and finish.

  4. Otherwise, generate page fault trap.

  5. Kernel page fault handler called as result of trap.

  6. Kernel examines page table entry.

  7. If non-mapped, generate page violation trap.

  8. Otherwise, locate a frame for the incoming page (possibly designate a victim frame and page it out first).

  9. Schedule disk I/O.

  10. Re-schedule CPU.

  11. Disk I/O completes, interrupt generated.

  12. Kernel interrupt handler called and determines source of interrupt.

  13. Update page table and put process back on ready queue.

  14. Faulting instruction re-started.
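
The cycle of fault, service, and restart (and the need for restartable instructions) can be mimicked in user space. The Linux-specific sketch below maps a region with no permissions, lets a SIGSEGV handler "page in" the faulting page with mprotect, and relies on the kernel restarting the store when the handler returns. Error checking is omitted, and calling mprotect from a signal handler is tolerable only in a demonstration like this:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static size_t page_size;

    /* "Page fault handler": map in the page containing the faulting address. */
    static void handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        uintptr_t page = (uintptr_t)info->si_addr & ~(page_size - 1);
        mprotect((void *)page, page_size, PROT_READ | PROT_WRITE);
        /* Returning restarts the faulting instruction, which now succeeds. */
    }

    int main(void)
    {
        page_size = (size_t)sysconf(_SC_PAGESIZE);
        char *region = mmap(NULL, 4 * page_size, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        region[0] = 'x';              /* faults, gets "paged in", restarts */
        region[2 * page_size] = 'y';  /* faults on a different page        */
        printf("survived two page faults\n");
        return 0;
    }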

Demand Paging Performance

Effective access time = (1 - p) * ma + p * (page fault service time),

where:

  1. p is the page fault rate.

  2. ma is the main memory access time: 100 ns or less.

  3. Page fault service time: 25 ms or more.

  1. If page fault rate is 1/1,000, effective access time is 25 microseconds!!!

  2. If we want only a 10% penalty (110 ns), page fault rate must be less than 1/2,500,000.
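
The arithmetic behind these two figures, spelled out as a small C program (using the 100 ns and 25 ms values quoted above):

    #include <stdio.h>

    int main(void)
    {
        double ma    = 100e-9;  /* main memory access time: 100 ns */
        double fault = 25e-3;   /* page fault service time: 25 ms  */

        /* effective access time = (1 - p) * ma + p * fault service time */
        double p   = 1.0 / 1000.0;
        double eat = (1.0 - p) * ma + p * fault;
        printf("p = 1/1,000      -> EAT = %.1f us\n", eat * 1e6);   /* ~25 us  */

        p   = 1.0 / 2500000.0;
        eat = (1.0 - p) * ma + p * fault;
        printf("p = 1/2,500,000  -> EAT = %.0f ns\n", eat * 1e9);   /* ~110 ns */
        return 0;
    }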

Swap Space Policies

  1. Copy the entire image into swap at process start-up; demand paging is then done from the swap device. Wastes swap space and adds start-up I/O, but the swap device is faster than the filesystem.

  2. Demand page from filesystem. Read-only pages are never swapped out, just overwritten and re-read from filesystem. Conserves swap space, uses slower filesystem.

  3. Demand page from filesystem, swap out to swap device. Only demanded pages are read from filesystem, only necessary pages are replaced to swap device.

Replacement Policies

  1. What happens if a page fault occurs and all frames are in use?

  2. Must select a victim frame:
    1. Page-out victim frame.

    2. Update victim process' page table.

    3. Page-in faulted page.

  3. How do we select the victim frame?

  4. Comparison criteria for replacement algorithms.

Reference Strings

  1. What is it?

  2. Where do I get one?

  3. What about redundancy?
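
One way to obtain a reference string, and one answer to the redundancy question: record memory addresses, reduce each to a page number, and collapse immediately repeated page numbers, since back-to-back references to the same page cannot cause extra faults. A sketch assuming 4 KB pages and a made-up address trace:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* assume 4 KB pages */

    int main(void)
    {
        /* A recorded address trace (made-up values for illustration). */
        uintptr_t trace[] = { 0x1234, 0x1238, 0x2123, 0x2200, 0x7456, 0x1300 };
        int n = sizeof trace / sizeof trace[0];

        uintptr_t last = (uintptr_t)-1;
        for (int i = 0; i < n; i++) {
            uintptr_t page = trace[i] >> PAGE_SHIFT;   /* address -> page number */
            if (page == last)
                continue;                              /* collapse repeats       */
            printf("%lu ", (unsigned long)page);
            last = page;
        }
        printf("\n");                                  /* prints: 1 2 7 1        */
        return 0;
    }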

FIFO Replacement

  1. In the set of candidate victim pages, select the "oldest" page.

  2. Example reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Three frames allocated.

  3. Belady's anomaly:
    1. Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.

    2. Three frames allocated.

    3. Four frames allocated.

  4. Stack property: the set of pages in memory with n frames allocated is a subset of the set of pages in memory with n + 1 frames allocated. FIFO lacks this property, which is why Belady's anomaly can occur; stack algorithms such as LRU and optimal cannot exhibit the anomaly.
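
The fault counts are easy to verify with a small, self-contained FIFO simulator, sketched below in C (it assumes at most 16 frames). On the anomaly string it reports 9 faults with three frames but 10 with four; on the example string above with three frames it reports 15.

    #include <stdio.h>

    /* Count FIFO page faults for a reference string and a frame count (<= 16). */
    static int fifo_faults(const int *refs, int n, int nframes)
    {
        int frames[16];
        int used = 0, next = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];      /* free frame available      */
            } else {
                frames[next] = refs[i];        /* evict the oldest resident */
                next = (next + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[]   = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 };
        int belady[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        int nr = sizeof refs / sizeof refs[0];
        int nb = sizeof belady / sizeof belady[0];

        printf("example string, 3 frames: %d faults\n", fifo_faults(refs, nr, 3));    /* 15 */
        printf("anomaly string, 3 frames: %d faults\n", fifo_faults(belady, nb, 3));  /* 9  */
        printf("anomaly string, 4 frames: %d faults\n", fifo_faults(belady, nb, 4));  /* 10 */
        return 0;
    }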

Optimal Replacement

  1. Replace the page which won't be used for the longest time.

  2. Example reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Three frames allocated.

  3. Implementation?
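
On the implementation question: optimal replacement needs to know the future of the reference string, so it can't be run online; it is computed offline over a recorded string as a yardstick for other policies. A sketch of that offline computation follows (it assumes at most 16 frames; on the example string with three frames it reports 9 faults).

    #include <stdio.h>

    /* Optimal (Belady's MIN): on a fault with all frames full, evict the
     * resident page whose next use lies farthest in the future.           */
    static int opt_faults(const int *refs, int n, int nframes)
    {
        int frames[16];
        int used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];
                continue;
            }
            /* Choose the victim: the page not needed for the longest time. */
            int victim = 0, farthest = -1;
            for (int j = 0; j < used; j++) {
                int next = n;                        /* "never used again"  */
                for (int k = i + 1; k < n; k++)
                    if (refs[k] == frames[j]) { next = k; break; }
                if (next > farthest) { farthest = next; victim = j; }
            }
            frames[victim] = refs[i];
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 };
        int n = sizeof refs / sizeof refs[0];
        printf("optimal, 3 frames: %d faults\n", opt_faults(refs, n, 3));  /* 9 */
        return 0;
    }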

LRU Replacement

  1. Approximation to optimal: replace the page which hasn't been used for the longest time.

  2. A "reversal" of optimal: apply the optimal rule looking backward in time instead of forward.

  3. Example reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Three frames allocated.

  4. Implementation:
    1. Counters: hardware support required.

    2. Stack. Expensive.

    3. Concatenation of periodically sampled reference bits as an approximation to a counter.

    4. Second chance (clock) algorithm: FIFO, but skip over page if reference bit set (reset reference bit).
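
A minimal simulation of the second-chance (clock) variant just described, run on the example reference string. The frame count and the choice to set the reference bit when a page is first brought in are assumptions of this sketch.

    #include <stdio.h>

    /* Second-chance (clock): frames form a circle swept by a hand; a fault
     * skips (and clears) referenced frames until it finds a victim.        */
    #define NFRAMES 3

    static int frames[NFRAMES];
    static int refbit[NFRAMES];
    static int used = 0, hand = 0;

    static int clock_access(int page)               /* returns 1 on a page fault */
    {
        for (int j = 0; j < used; j++)
            if (frames[j] == page) { refbit[j] = 1; return 0; }   /* hit */

        if (used < NFRAMES) {                       /* free frame available      */
            frames[used] = page;
            refbit[used++] = 1;
            return 1;
        }
        while (refbit[hand]) {                      /* referenced pages get a    */
            refbit[hand] = 0;                       /* second chance             */
            hand = (hand + 1) % NFRAMES;
        }
        frames[hand] = page;                        /* evict and replace         */
        refbit[hand] = 1;
        hand = (hand + 1) % NFRAMES;
        return 1;
    }

    int main(void)
    {
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 };
        int faults = 0;
        for (int i = 0; i < (int)(sizeof refs / sizeof refs[0]); i++)
            faults += clock_access(refs[i]);
        printf("clock, %d frames: %d faults\n", NFRAMES, faults);
        return 0;
    }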

Page Buffering Optimizations

  1. Keep a small pool of empty frames so paging-in can occur without waiting for victim page-out.

  2. When idle, write dirty pages out and clear dirty bit.

  3. Keep track of what's in free frames, so page-ins can possibly use an old, free frame.


