2 Cache Terms and Definitions
Table 2 lists the terms used throughout this document that relate to the
operation of the C64x two-level memory hierarchy.
Table 2. Cache Terms and Definitions

Allocation
  The process of finding a location in the cache to store newly cached data.
  This process can include evicting data that is presently in the cache to
  make room for the new data.

Associativity
  The number of line frames in each set. This is specified as the number of
  ways in the cache.

Capacity miss
  A cache miss that occurs because the cache does not have sufficient room to
  hold the entire working set for a program. Compare with compulsory miss and
  conflict miss.

Clean
  A cache line that is valid and that has not been written to by upper levels
  of memory or the CPU. The opposite state for a valid cache line is dirty.

Coherence
  Informally, a memory system is coherent if any read of a data item returns
  the most recently written value of that data item. This includes accesses
  by the CPU and the EDMA. Cache coherence is covered in more detail in
  section 8.1.

Compulsory miss
  Sometimes referred to as a first-reference miss. A compulsory miss is a
  cache miss that must occur because the data has had no prior opportunity to
  be allocated in the cache. Typically, compulsory misses for particular
  pieces of data occur on the first access of that data. However, some cases
  can be considered compulsory even if they are not the first reference to
  the data. Such cases include repeated write misses on the same location in
  a cache that does not write allocate, and cache misses to noncacheable
  locations. Compare with capacity miss and conflict miss.

Conflict miss
  A cache miss that occurs due to the limited associativity of a cache,
  rather than due to capacity constraints. A fully-associative cache is able
  to allocate a newly cached line of data anywhere in the cache. Most caches
  have much more limited associativity (see set-associative cache), and so
  are restricted in where they may place data. This results in additional
  cache misses that a more flexible cache would not experience.

Direct-mapped cache
  A direct-mapped cache maps each address in the lower-level memory to a
  single location in the cache. Multiple locations may map to the same
  location in the cache. This is in contrast to a multi-way set-associative
  cache, which selects a place for the data from a set of locations in the
  cache. A direct-mapped cache can be considered a single-way set-associative
  cache.

Dirty
  In a writeback cache, writes that reach a given level in the memory
  hierarchy may update that level, but not the levels below it. Thus, when a
  cache line is valid and contains updates that have not been sent to the
  next lower level, that line is said to be dirty. The opposite state for a
  valid cache line is clean.
SPRU610BTMS320C64x Two-Level Internal Memory13