Memory Management I

Tom Kelliher, CS 318

Mar. 11, 1998

Announcements: Homework 2 due Friday.

From last time:

  1. CPU scheduling: criteria, algorithms.

  2. Gantt chart examples.

Outline:

  1. Address binding, loading, libraries, and overlays.

  2. Logical and physical address spaces.

  3. Swapping.

  4. Contiguous and non-contiguous allocation.

Assignment: Read Chapter 8.

Assumption: several processes competing for memory. Questions:

  1. How is memory allocated among processes?

  2. Sharing?

  3. Contiguous/non-contiguous allocation?

  4. Fixed- or variable-sized allocation chunks?

  5. Over-commitment. What happens?

Preliminaries

Address Binding

  1. Sometimes, a program can't be loaded ``just anywhere'' in memory.

  2. When does a symbolic name become associated with a memory location?

  3. How is that location addressed? (Direct, indexed, indirect, etc.)

Address binding options and consequences

  1. Compile time. Absolute (direct) addressing used. Program must be loaded into memory at a fixed location. MS-DOS .COM programs.

  2. Load time. Compiler/linker insert ``dummy'' references in the code. The loader, knowing the actual load address, finds and fixes up the dummy references (see the relocation sketch after this list).
    1. Addressing modes?

    2. Relocation? Must re-load the program to relocate it.

  3. Run time. PC-relative or base addressing; a base register points to the load address.
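
A minimal sketch of the load-time fix-up step, assuming the linker emits a relocation table listing every word in the image that holds an address. The image layout and all names below are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical program image: code/data words plus a relocation
       table naming every word that holds an address.  The linker
       filled those words assuming a load address of 0 ("dummy"
       references). */
    struct image {
        uint32_t *words;      /* the program's code and data          */
        size_t    nwords;
        size_t   *reloc;      /* indices of words that hold addresses */
        size_t    nreloc;
    };

    /* Loader fix-up: once the load address is known, add it to every
       word named in the relocation table. */
    static void fix_up(struct image *img, uint32_t load_addr)
    {
        for (size_t i = 0; i < img->nreloc; i++)
            img->words[img->reloc[i]] += load_addr;
    }

    int main(void)
    {
        uint32_t words[] = { 0x0000, 0x0008, 0x002a };  /* word 1 holds an address */
        size_t   reloc[] = { 1 };
        struct image img = { words, 3, reloc, 1 };

        fix_up(&img, 0x40000);                  /* pretend load address */
        printf("fixed-up address: 0x%x\n", (unsigned) words[1]);
        return 0;
    }

Relocating the program later means re-running this fix-up against a fresh copy of the image, which is why load-time binding requires re-loading to relocate.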

Dynamic Loading

  1. On-demand module loading --- ``lazy'' loading.

  2. Load only what's necessary (see the dlopen() sketch below).
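
On Unix systems, dlopen()/dlsym() let a program do this explicitly. A minimal sketch; the choice of library and symbol (libm and cos) is just an example, and the exact soname varies by system.

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Load the math library only when (and if) it is needed. */
        void *lib = dlopen("libm.so.6", RTLD_LAZY);
        if (lib == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look up the routine by name and call it through a pointer. */
        double (*cosine)(double) = (double (*)(double)) dlsym(lib, "cos");
        if (cosine != NULL)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(lib);
        return 0;
    }

On many systems this must be linked with -ldl.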

Dynamic Linking

  1. Static linking --- link at compile-time.
    1. Larger binaries --- wasted disk space.

    2. Must re-link binaries when libraries are updated.

  2. Dynamic linking (shared libraries) --- link at run-time.
    1. Smaller binaries.

    2. Stubs: locate the library routine in memory (or load it from disk) on first call; see the sketch after this list.

    3. Updating a library fixes faults in every binary that uses it. (With static linking, all binaries would have to be re-linked.)

    4. Shared library versioning: major versions (new version number, incompatible changes) and minor versions (same version number, compatible changes).

    5. DLLs in the Wintel world; version mismatches break programs.

  3. Compiler issue: code scheduling for superscalar processors with out-of-order execution.
    1. Static (compile-time) optimization. VLIW model.

    2. Dynamic (run-time) optimization. DIF model.

    3. Intel IA-64.
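
A rough sketch of the stub idea in item 2.2: the call goes through a pointer that, on first use, resolves the real routine and patches itself so later calls go direct. The real mechanism (e.g., a procedure linkage table) is more involved; every name below is invented.

    #include <stdio.h>

    static double real_sqrt_impl(double x);   /* stands in for the library code */
    static double sqrt_stub(double x);        /* stub used until resolved       */

    /* All calls go through this pointer; it starts out aimed at the stub. */
    static double (*sqrt_entry)(double) = sqrt_stub;

    static double real_sqrt_impl(double x)
    {
        /* Newton's method, standing in for the shared-library routine. */
        double g = x > 1.0 ? x : 1.0;
        for (int i = 0; i < 20; i++)
            g = 0.5 * (g + x / g);
        return g;
    }

    static double sqrt_stub(double x)
    {
        /* "Locate" the library routine, patch the entry pointer so
           later calls skip the stub, then call the real code. */
        sqrt_entry = real_sqrt_impl;
        return real_sqrt_impl(x);
    }

    int main(void)
    {
        printf("%f\n", sqrt_entry(2.0));   /* first call goes via the stub */
        printf("%f\n", sqrt_entry(9.0));   /* later calls go direct        */
        return 0;
    }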

Overlays

Umm, my program is larger than physical memory. What do I do? Overlays: keep in memory only the instructions and data needed at the moment, re-using the same region for different pieces. Example: a two-pass assembler, whose pass 1 and pass 2 code can occupy the same memory in turn.

Logical and Physical Addresses

Logical = Virtual

Recall:

  1. Program (CPU) generates logical addresses.

  2. MMU converts them to physical addresses.

  3. Compile-time, load-time binding: physical address = logical address.

Simple MMU scheme: a relocation (base) register holds the process's load address; the MMU adds it to every logical address generated by the CPU to form the physical address.

Note: The program can utilize compile-time binding --- it is compiled as if loaded at address 0, and the relocation register handles placement.
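
A minimal sketch of that translation in software, with a limit register added for protection; the structure and function names are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Per-process MMU state in the simple scheme: every logical
       address is checked against the limit, then relocated by the
       base. */
    struct mmu {
        uint32_t base;    /* physical address where the process is loaded */
        uint32_t limit;   /* size of the process's logical address space  */
    };

    static uint32_t translate(const struct mmu *m, uint32_t logical)
    {
        if (logical >= m->limit) {          /* protection check  */
            fprintf(stderr, "addressing error: %u\n", (unsigned) logical);
            exit(1);                        /* trap to the OS    */
        }
        return m->base + logical;           /* relocation        */
    }

    int main(void)
    {
        struct mmu m = { 0x40000, 0x10000 };   /* loaded at 0x40000, 64 KB */
        printf("logical 0x14 -> physical 0x%x\n",
               (unsigned) translate(&m, 0x14));
        return 0;
    }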

Swapping

  1. Process must be entirely in memory to execute.

  2. If in the ready Q, can swap out to the ``backing store.''

  3. If in I/O wait, must ``lock'' it in memory. Why?

  4. Must a process be swapped in to the same memory space? Depends on loader design.

  5. What needs to be swapped out, exactly? The entire partition? The entire process space?

  6. Swapping in Unix occurs when:
    1. The system page map has become too fragmented to allocate page tables for some process.

    2. There are at least two runnable processes, the average amount of free memory has been less than the desired amount (desfree) over the last 5 and 30 seconds, and either the paging rate is excessive or the short-term average of free memory is less than minfree (see the sketch after this list).

    3. A swapped-out process is ready to run and a process is found in memory that has been sleeping for at least 20 seconds; or the swapped-out process has been out for at least 10 seconds and the process chosen for swapping out has been in memory at least 20 seconds.
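
The second condition, written out as a C sketch. The threshold names desfree and minfree come from the notes above; the structure, field names, and sample numbers are invented placeholders.

    #include <stdio.h>

    /* Rough encoding of swapping condition 6.2. */
    struct mem_stats {
        int runnable;      /* processes ready to run                  */
        int avg_free_5s;   /* average free memory over last 5 s (KB)  */
        int avg_free_30s;  /* average free memory over last 30 s (KB) */
        int free_now;      /* short-term average free memory (KB)     */
        int paging_rate;   /* recent page-out rate                    */
    };

    static const int desfree = 512;          /* desired free memory (KB)   */
    static const int minfree = 128;          /* bare-minimum free memory   */
    static const int max_paging_rate = 200;  /* "excessive" paging cut-off */

    static int should_swap_out(const struct mem_stats *m)
    {
        if (m->runnable < 2)
            return 0;
        if (m->avg_free_5s >= desfree || m->avg_free_30s >= desfree)
            return 0;
        return m->paging_rate > max_paging_rate || m->free_now < minfree;
    }

    int main(void)
    {
        struct mem_stats m = { 3, 300, 400, 100, 50 };
        printf("swap out? %d\n", should_swap_out(&m));
        return 0;
    }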

Contiguous Allocation

Key: contiguous.

What are we leading up to, here?

Single User Partition

  1. Only one process in memory at a time.

  2. Must swap to do a context switch. Speed?

  3. Unused memory: internal fragmentation.

Multiple User Partitions

Fixed Partitions

  1. Partitions usually of differing sizes.

  2. Job placement policy.

  3. Can swap jobs in and out of partitions.

  4. Internal fragmentation.

Variable Partitions

  1. Number of partitions varies.

  2. Partitions allocated according to process requirements.

  3. What if a process grows?

  4. Free list of unallocated holes.

  5. Placement policies (see the sketch after this list):
    1. First fit. Best time.

    2. Best fit.

    3. Worst fit --- worst in both time and storage utilization.

  6. External fragmentation.

  7. Combining adjacent holes.

  8. Memory compaction --- requires relocatable (run-time bound) code. Goal: minimize copying.
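
A small sketch of the three placement policies over a free list of hole sizes. The hole array and request size are made up, and a real allocator would also split the chosen hole and coalesce adjacent holes on release.

    #include <stdio.h>

    #define NHOLES 5

    /* Free list of holes (variable partitions): size 0 marks a slot
       that is not currently a hole. */
    static int hole[NHOLES] = { 120, 40, 300, 75, 0 };

    enum policy { FIRST_FIT, BEST_FIT, WORST_FIT };

    /* Return the index of the hole chosen for a request, or -1. */
    static int place(int request, enum policy p)
    {
        int chosen = -1;

        for (int i = 0; i < NHOLES; i++) {
            if (hole[i] == 0 || hole[i] < request)
                continue;                   /* not a hole, or too small */
            if (p == FIRST_FIT)
                return i;                   /* first big-enough hole    */
            if (chosen == -1 ||
                (p == BEST_FIT  && hole[i] < hole[chosen]) ||
                (p == WORST_FIT && hole[i] > hole[chosen]))
                chosen = i;                 /* smallest / largest so far */
        }
        return chosen;
    }

    int main(void)
    {
        printf("first fit for 60: hole %d\n", place(60, FIRST_FIT));
        printf("best  fit for 60: hole %d\n", place(60, BEST_FIT));
        printf("worst fit for 60: hole %d\n", place(60, WORST_FIT));
        return 0;
    }

First fit can stop at the first adequate hole; best and worst fit must examine the entire list unless it is kept sorted by size.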


