Table 2. Cache Terms and Definitions (Continued)

DMA
    Direct Memory Access. Typically, a DMA operation copies a block of memory
    from one range of addresses to another, or transfers data between a
    peripheral and memory. On the C64x DSP, DMA transfers are performed by the
    enhanced DMA (EDMA) engine. These DMA transfers occur in parallel with
    program execution. From a cache coherence standpoint, EDMA accesses can be
    considered accesses by a parallel processor.
Eviction
    The process of removing a line from the cache to make room for newly
    cached data. Eviction can also occur under user control by requesting a
    writeback-invalidate for an address or range of addresses from the cache.
    The evicted line is referred to as the victim. When a victim line is dirty
    (that is, it contains updated data), the data must be written out to the
    next-level memory to maintain coherency.
Execute packet
    A block of instructions that begin execution in parallel in a single
    cycle. An execute packet may contain between 1 and 8 instructions.
Fetch packet
    A block of 8 instructions that are fetched in a single cycle. One fetch
    packet may contain multiple execute packets, and thus may be consumed over
    multiple cycles.
First-reference miss
    A cache miss that occurs on the first reference to a piece of data.
    First-reference misses are a form of compulsory miss.
A cache that allows any memory address to be stored at any location within the
cache. Such caches are very flexible, but usually not practical to build in hardware.
They contrast sharply with direct-mapped caches and set-associative caches, both of
which have much more restrictive allocation policies. Conceptually, fully-associative
caches are useful for distinguishing between conflict misses and capacity misses
when analyzing the performance of a direct-mapped or set-associative cache. In
terms of set-associative caches, a fully-associative cache is equivalent to a
set-associative cache that has as many ways as it does line frames, and that has
only one set.
Higher-level memory
    In a hierarchical memory system, higher-level memories are memories that
    are closer to the CPU. The highest level in the memory hierarchy is
    usually the Level 1 caches. The memories at this level exist directly next
    to the CPU. Higher-level memories typically act as caches for data from
    lower-level memory.
Hit
    A cache hit occurs when the data for a requested memory location is
    present in the cache. The opposite of a hit is a miss. A cache hit
    minimizes stalling, since the data can be fetched from the cache much
    faster than from the source memory. The determination of hit versus miss
    is made on each level of the memory hierarchy separately: a miss in one
    level may hit in a lower level.
14    TMS320C64x Two-Level Internal Memory    SPRU610B