Virtual Memory II
Tom Kelliher, CS 318
Apr. 27, 1998
Announcements:
From last time:
-
Outline:
-
Assignment:
- Maintain frame list as a ``stack.'' Implementation?
- Timestamp page table entry on each access. Implementation?
- Simulating the time stamp by recording and concatenating reference
bits (see the sketch after this list).
- Implementation: system daemon and timer interrupt.
- How often to record?
- How many bits?
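A minimal C sketch of the reference-bit recording idea, assuming a
hypothetical page-table entry with a hardware-set reference bit and an
8-bit software history byte per page; names such as struct pte, NPAGES,
and age_reference_bits are illustrative, not any particular system's API.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NPAGES 64

struct pte {
    unsigned referenced : 1;   /* set by hardware on each access      */
    uint8_t  history;          /* software-maintained aging register  */
};

static struct pte page_table[NPAGES];

/* Called from the timer-interrupt daemon every recording interval:
 * shift each history byte right and record the latest reference bit
 * in the high-order position, then clear the hardware bit.           */
void age_reference_bits(void)
{
    for (size_t i = 0; i < NPAGES; i++) {
        page_table[i].history =
            (uint8_t)((page_table[i].history >> 1) |
                      (page_table[i].referenced << 7));
        page_table[i].referenced = 0;
    }
}

/* Victim selection: the page with the smallest history value has
 * gone the longest without a recorded reference.                    */
size_t approx_lru_victim(void)
{
    size_t victim = 0;
    for (size_t i = 1; i < NPAGES; i++)
        if (page_table[i].history < page_table[victim].history)
            victim = i;
    return victim;
}

int main(void)
{
    /* Simulate four recording intervals: every page is referenced in
     * the first interval, but only page 3 is referenced afterwards.  */
    for (int t = 0; t < 4; t++) {
        for (size_t i = 0; i < NPAGES; i++)
            page_table[i].referenced = (t == 0 || i == 3);
        age_reference_bits();
    }
    /* Pages other than 3 now share the smallest history value; the
     * scan reports the first of them.                                */
    printf("approximate LRU victim: page %zu\n", approx_lru_victim());
    return 0;
}

More history bits and more frequent recording give a finer ordering, at
the cost of more daemon overhead.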
Second-chance replacement, AKA the clock algorithm (sketch after this list).
- Maintain process' frame list in a circular queue.
- The ``victim'' pointer.
- Victim page referenced: skip over, resetting reference bit.
- All pages referenced: pure FIFO.
- Enhancement: examine (reference bit, dirty bit):
- Four combinations --- priorities?
- Possibly several passes through the queue.
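A minimal C sketch of the basic clock sweep over a circular frame list,
assuming an illustrative frames[] array and a software-visible reference
bit; the enhanced variant would also consult the dirty bit and may make
several passes with different (reference, dirty) priorities.

#include <stddef.h>
#include <stdio.h>

#define NFRAMES 32

struct frame {
    int referenced;   /* reference bit, set by hardware on access */
    int page;         /* page currently held in this frame        */
};

static struct frame frames[NFRAMES];
static size_t victim_hand = 0;   /* the "victim" pointer */

/* Sweep the circular queue: a referenced frame gets a second chance
 * (its bit is cleared and the hand moves on); the first unreferenced
 * frame becomes the victim. If every frame is referenced, one full
 * pass clears all the bits and the selection degenerates to FIFO.    */
size_t clock_select_victim(void)
{
    for (;;) {
        struct frame *f = &frames[victim_hand];
        if (!f->referenced) {
            size_t victim = victim_hand;
            victim_hand = (victim_hand + 1) % NFRAMES;
            return victim;
        }
        f->referenced = 0;                       /* skip over, reset bit */
        victim_hand = (victim_hand + 1) % NFRAMES;
    }
}

int main(void)
{
    /* Mark a few frames referenced and watch the hand skip them. */
    frames[0].referenced = 1;
    frames[1].referenced = 1;
    printf("victim frame: %zu\n", clock_select_victim());   /* frame 2 */
    printf("victim frame: %zu\n", clock_select_victim());   /* frame 3 */
    return 0;
}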
How are frames allocated to processes?
Issues:
- Minimum # of frames.
- Static vs. dynamic policies.
- Global vs. local.
- Thrashing.
- Optimizations.
- Interaction with I/O subsystem.
Must have sufficient frames in memory to execute an instruction.
Requirements:
- Instruction fetch: One or two frames.
- Operand fetch/store: One or two frames per operand.
- Massive indirection?
- Equal allocation.
- How realistic?
- Proportional allocation.
- Allocated space according to need.
- Process i needs s_i pages.
- Total need is S = s_1 + s_2 + ... + s_n pages.
- Memory broken into m frames.
- Process i granted a_i = (s_i / S) * m frames (see the sketch after
this list).
- How realistic?
- Can be generalized to priority schemes.
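A minimal C sketch of the proportional computation with made-up need
values; because the division truncates, a few frames may be left over
for the allocator to hand out separately.

#include <stdio.h>

int main(void)
{
    int needs[] = { 10, 127 };               /* s_i: per-process need  */
    int nproc = sizeof needs / sizeof needs[0];
    int m = 62;                              /* total frames available */

    int S = 0;                               /* S = sum of the s_i     */
    for (int i = 0; i < nproc; i++)
        S += needs[i];

    for (int i = 0; i < nproc; i++) {
        int a_i = needs[i] * m / S;          /* a_i = (s_i / S) * m    */
        printf("process %d: s_i = %3d  ->  a_i = %2d frames\n",
               i, needs[i], a_i);
    }
    return 0;
}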
Problems with static policies.
Suppose we re-adjusted the proportional allocation at intervals. Any improvement?
If a victim page must be selected, what is the candidate pool?
- Local: page frames of only the faulting process.
Process' fault rate not dependent on other processes' behavior.
- Global: any page frame.
- Frame stealing.
- Interdependence of processes.
- Combined policy (prioritized).
- Effect of CPU utilization on degree of multiprogramming.
- Effect of degree of multiprogramming on frame distribution.
- Effect of frame distribution on page fault rate.
- Effect of page fault rate on CPU utilization.
CPU utilization low due to paging, so more processes started, making memory
situation worse, leading to more paging, lowering CPU utilization, so more
processes started, ...
Attempt to adapt to process behavior.
Action on memory over-commitment?
Idea:
- ``Locality.''
- Number of pages for each locality.
- This is the working set.
How do we determine the working set?
- Delta: the working set window, measured in references.
- T: the working set interval, i.e., how often to sample.
- Every T time units, examine the last Delta references and
determine the number of unique pages referenced. That number is the
working set size (see the sketch after this list).
- How do we implement this?
- System daemon and timer interrupt.
- Concatenate a few reference bits.
- What happens if the window straddles two localities, etc.? (Working
set aliasing)
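A minimal C sketch of the working-set-size computation over an
explicitly recorded reference string; a real implementation would
approximate the last Delta references from a few sampled reference bits
rather than record them exactly. DELTA, MAXPAGE, and the reference
string are illustrative.

#include <stdio.h>

#define DELTA   10        /* working set window, in references */
#define MAXPAGE 256

/* Count the distinct pages among the last DELTA references. */
int working_set_size(const int *refs, int nrefs)
{
    char seen[MAXPAGE] = { 0 };
    int start = nrefs > DELTA ? nrefs - DELTA : 0;
    int wss = 0;

    for (int i = start; i < nrefs; i++) {
        if (!seen[refs[i]]) {
            seen[refs[i]] = 1;
            wss++;
        }
    }
    return wss;
}

int main(void)
{
    /* Illustrative reference string: the recent locality is {1,2,5,6,7}. */
    int refs[] = { 1, 2, 1, 5, 6, 2, 1, 5, 7, 7, 7, 7, 5, 1 };
    int n = sizeof refs / sizeof refs[0];

    printf("working set size over last %d references: %d\n",
           DELTA, working_set_size(refs, n));
    return 0;
}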
Page fault frequency:
- Ultimate goal: maintain each process' page fault rate within some
target region.
- Implementation: track each process' fault rate (see the sketch after
this list):
- If above the target region: allocate more frames to the process.
- If below the target region: de-allocate frames.
- Implementation?
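A minimal C sketch of the fault-rate adjustment, assuming illustrative
thresholds and per-process counters; the frames field stands in for
whatever allocation mechanism the system actually provides.

#include <stdio.h>

#define PFF_UPPER 0.10   /* faults per reference: too many faults  */
#define PFF_LOWER 0.01   /* faults per reference: frames to spare  */

struct proc_mem {
    unsigned long faults;      /* faults since last check      */
    unsigned long references;  /* references since last check  */
    int frames;                /* current frame allocation     */
};

/* Run periodically for each process by a system daemon. */
void pff_adjust(struct proc_mem *p)
{
    if (p->references == 0)
        return;

    double rate = (double)p->faults / (double)p->references;

    if (rate > PFF_UPPER)
        p->frames++;           /* above target region: grant a frame   */
    else if (rate < PFF_LOWER && p->frames > 1)
        p->frames--;           /* below target region: reclaim a frame */

    p->faults = p->references = 0;   /* start a new measurement window */
}

int main(void)
{
    struct proc_mem p = { .faults = 30, .references = 200, .frames = 8 };

    pff_adjust(&p);            /* fault rate 0.15 > PFF_UPPER: grow */
    printf("frames after adjustment: %d\n", p.frames);   /* prints 9 */
    return 0;
}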
- Pre-paging.
- Page Size:
- Reasons for large page size:
- Smaller page table.
- Maximize I/O efficiency.
- Reasons for small page size:
- Better match localities.
- Decrease internal fragmentation.
- Page sizes have increased over time: faster CPUs and larger
memories make page faults relatively more costly, favoring fewer,
larger pages.
- Program Structure:
- Burroughs Algol: Each row of a 2-D array is allocated in a
separate segment.
- A 1-D array is allocated in a single segment.
- For the best speed, simulate a 2-D array with a 1-D array (see the
sketch after this list).
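The analogous effect in C, which stores a 2-D array in row-major order:
the traversal that matches the storage layout stays within one locality,
while the mismatched traversal touches a different row (and potentially
a different page) on every access. A minimal sketch with illustrative
dimensions.

#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];

int main(void)
{
    /* Good locality: the inner loop sweeps one row contiguously. */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = 0;

    /* Poor locality: the inner loop jumps COLS * sizeof(int) bytes
     * between consecutive accesses, so with a small frame allocation
     * every access may fault on a different page.                    */
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            a[i][j] = 1;

    printf("%d\n", a[ROWS - 1][COLS - 1]);
    return 0;
}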
What happens if I/O is occurring to a page and it gets paged out?
Solutions:
- Lock buffer pages in memory (sketch after this list).
- All I/O occurs to kernel space with memory copies to/from user space.
- Copies expensive; is there another way?
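A minimal user-space sketch of the page-locking alternative using POSIX
mlock(), which pins the buffer's pages so they cannot be chosen as
victims while the read is in progress; inside the kernel the frame
itself would be pinned, so this is only an analogy. The buffer size and
the use of standard input are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

#define BUFSIZE 8192

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *buf;

    /* Page-align the buffer so mlock() covers whole pages. */
    if (posix_memalign((void **)&buf, (size_t)pagesz, BUFSIZE) != 0)
        return 1;

    /* Pin the buffer's pages in physical memory for the duration of
     * the transfer.                                                  */
    if (mlock(buf, BUFSIZE) != 0) {
        perror("mlock");
        free(buf);
        return 1;
    }

    ssize_t n = read(STDIN_FILENO, buf, BUFSIZE);
    if (n > 0)
        printf("read %zd bytes into a locked buffer\n", n);

    munlock(buf, BUFSIZE);     /* release the pages once I/O completes */
    free(buf);
    return 0;
}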