DMA, Introduction to Caches

Tom Kelliher, CS26

Nov. 12, 1996

  1. We'll skip 4.5.2--4.7.

Direct Memory Access


How does a disk perform a transfer?

CPU driven method:

  1. Prepare memory buffer, pointer into buffer, and count.

  2. Send disk a command through I/O registers:
    1. Cylinder, head, sector.

    2. Read or write.

    3. Go.

  3. Do other things.

  4. Repeat:
    1. Receive interrupt.

    2. Transfer byte between disk and memory.

    3. Update pointer, count.

One possible speed-up: block transfers.
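The per-interrupt work in step 4 could look like this minimal C sketch (the names `disk_data`, `buf_ptr`, and `count` are illustrative, not from any real driver; a real handler would read the controller's data register and acknowledge the interrupt):

```c
#include <stdint.h>
#include <stddef.h>

volatile uint8_t disk_data;  /* stand-in for the disk controller's data register */
uint8_t *buf_ptr;            /* pointer into the memory buffer */
size_t   count;              /* bytes remaining in the transfer */

/* One interrupt per byte: steps 4.2 and 4.3 above. */
void disk_interrupt(void) {
    *buf_ptr++ = disk_data;      /* transfer byte between disk and memory */
    if (--count == 0) {
        /* transfer complete: wake the blocked process */
    }
}
```

With block transfers, the same handler would move a sector's worth of bytes per interrupt instead of one.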

How efficient is this? Assume:

  1. Disk:
    1. 3600 RPM.

    2. 512 bytes/sector.

    3. 56 sectors/track.

    4. How many bytes/sec.?

  2. CPU:
    1. 100 MHz.

    2. 200 instructions/interrupt.

  1. The problem here?

  2. DMA removes the burden.

  3. DMA controller is a bus master --- arbitration required.

DMA Hardware

DMA architecture (2 channels)

DMA channel interface:

Is it memory mapped?

Programming a DMA Channel

  1. Write starting address, count registers.

  2. Write control register.

  3. Interrupt will be received upon completion.

Is the order important?
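As a C sketch, with a hypothetical memory-mapped register layout (the struct offsets and bit positions are illustrative; real controllers differ):

```c
#include <stdint.h>

/* Hypothetical register block for one DMA channel. */
struct dma_channel {
    volatile uint32_t addr;    /* starting memory address of the buffer */
    volatile uint32_t count;   /* number of bytes to transfer           */
    volatile uint32_t control; /* direction, interrupt enable, go bit   */
};

#define DMA_READ (1u << 0)     /* device -> memory */
#define DMA_IEN  (1u << 1)     /* interrupt on completion */
#define DMA_GO   (1u << 2)     /* start the transfer */

/* The order matters: address and count must be valid before the GO
   bit in the control register starts the transfer. */
void dma_start_read(struct dma_channel *ch, uint32_t buf, uint32_t n) {
    ch->addr    = buf;
    ch->count   = n;
    ch->control = DMA_READ | DMA_IEN | DMA_GO;   /* written last */
}
```

In a memory-mapped scheme the struct would sit at the channel's bus address; an ordinary struct in RAM can stand in for it when experimenting.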

Schematic operation:

  1. CPU reserves an area of memory as a buffer for the I/O (assume a read is performed).

  2. CPU loads the starting address of the buffer into the DMA controller.

  3. CPU loads the transfer count into the DMA controller (assume that it's the same as the buffer size).

  4. Concurrently:
    1. DMA controller transfers data between the device and the buffer, updating its address and count registers.

    2. CPU continues executing other processes (the requesting process is blocked).

    Memory arbitration problems here.

  5. I/O device interrupts CPU upon completion.

  6. CPU receives interrupt, checks status, schedules formerly blocked process.

Bus Arbitration

  1. Only one device --- bus master --- can control bus.

  2. CPU and DMA controller are bus masters.

  3. How is control passed back and forth?

Centralized arbitration:

Operation. Assume CPU is bus master at start:

  1. DMA n asserts bus request.

  2. CPU accepts request, asserts BG.

  3. BG daisy chains until reaching requesting controller.

  4. Controller releases bus request, waits for bus busy to go away.

  5. Controller asserts bus busy and begins using bus.

  6. Controller releases bus busy when done.
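A toy model of the grant daisy chain: BG enters at the controller nearest the CPU and propagates down the chain until it reaches the first controller with its request asserted, so priority is fixed by position on the chain.

```c
/* Toy daisy-chain model: request[i] is 1 if controller i has asserted
   bus request.  The grant stops at the first (closest) requester;
   controllers further down the chain never see it. */
int daisy_chain_grant(const int request[], int n) {
    for (int i = 0; i < n; i++)
        if (request[i])
            return i;      /* this controller takes the bus */
    return -1;             /* no requester; CPU keeps the bus */
}
```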

Decentralized arbitration?

Memory Organization



  1. Addressing conventions for 32-bit memory:

  2. CPU/memory behavior on word/byte accesses. Consider writing a single byte.

  3. Memory organization:

  4. Memory access is a bottleneck to CPU operation. Speed-ups:
    1. Caches.

    2. Interleaving. Pipelined access to multiple memory banks. Example:
      1. 200 ns. RAM.

      2. 50 ns. CPU cycle.

      3. No interleaving vs. 4-way interleaved.

Memory Cell Array Organization


Static RAM

  1. No address bit sharing.

  2. Memory cell organization:

    How many transistors?

  3. Reading, Writing?

A static RAM:

Dynamic RAM

  1. Row, Column share address lines --- must strobe and latch.

  2. Memory cell organization:

    How many transistors?

  3. Reading, Writing?

  4. Refresh. Stalls.

A dynamic RAM:

  1. Fast page mode DRAMs.


Static, Dynamic RAM Comparison

  1. Faster?

  2. Denser?


Caches use static.

Main memory uses dynamic.

Design techniques:

  1. Modular.

  2. Scalable.

  3. Reducing number of drawn transistors.

A 4Mx32 DRAM Memory System

Uses DRAMs in a chip array.

The Memory Hierarchy

  1. Registers --- flip-flops.

  2. L1 cache --- on-chip SRAM.

  3. L2 cache --- off-chip SRAM.

  4. Main memory --- DRAM.

  5. Secondary storage --- magnetic disk.

Size, speed, cost? Management?

Thomas P. Kelliher
Mon Nov 11 15:28:37 EST 1996