3.6.3.5. Level 1 Memory System
Instruction Cache
- 64-byte instruction side cache line length
- 4-way set associative
- 128-bit read interface to the L2 memory system
Data Cache
- 64-byte data side cache line length
- 4-way set associative
- Read buffer that services both the Data Cache Unit (DCU) and the Instruction Fetch Unit (IFU)
- 64-bit read path from the L1 data memory system to the data path
- 128-bit write path from the data path to the L1 memory system
- Merging store buffer capability for writes to all memory types
- Data side prefetch engine that detects stride-based access patterns, both constant strides and more general stride patterns, with multiple streams allowed in parallel
Data Cache Unit
The data cache unit (DCU) manages all load and store operations. The L1 data cache RAMs are protected by ECC, using a single-error correct, double-error detect (SECDED) scheme. The DCU includes a combined local and global exclusive monitor that is used by Load-Exclusive and Store-Exclusive instructions.
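To illustrate how the exclusive monitor is typically exercised, the following C sketch implements a simple try-lock with a compare-and-swap. On AArch64, compilers lower this either to a Load-Exclusive/Store-Exclusive (LDAXR/STXR) retry loop, which relies on the exclusive monitor, or to a single LSE atomic instruction when that extension is used. The names lock_word, try_lock, and unlock are illustrative and not part of this document.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical lock variable: 0 = free, 1 = taken. */
static atomic_uint lock_word = 0;

/* Try to take the lock once; returns true on success.
 * Without the LSE atomics extension, this compare-and-swap is typically
 * compiled to an LDAXR ... STXR retry loop: the load-exclusive sets the
 * exclusive monitor, and the store-exclusive succeeds only if no other
 * agent has written the location in between.
 */
static bool try_lock(void)
{
    unsigned expected = 0;
    return atomic_compare_exchange_strong_explicit(
        &lock_word, &expected, 1u,
        memory_order_acquire, memory_order_relaxed);
}

/* Release the lock; a plain store-release needs no exclusive access. */
static void unlock(void)
{
    atomic_store_explicit(&lock_word, 0u, memory_order_release);
}
```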
Store Buffer
The store buffer (STB) holds store operations after they have left the load/store pipeline in the data cache unit (DCU) and have been committed by the data processing unit (DPU). The STB can request access to the L1 data cache to initiate line fills, or write out to the L2 and L3 memory systems. The STB is also used to queue maintenance operations before they are broadcast to other cores in the cluster.
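As a minimal sketch of the merging behavior mentioned above, the C fragment below issues a run of adjacent byte stores to one 64-byte cache line. After the stores are committed they sit in the STB, where writes to the same line can be merged before being written to the L1 data cache or sent downstream. The function and buffer names are illustrative only.

```c
#include <stdint.h>

/* Fill one 64-byte cache line with individual byte stores.
 * The volatile qualifier keeps the compiler from collapsing the loop
 * itself, so any coalescing of these narrow writes into wider
 * transactions is done by the merging store buffer in hardware.
 */
void fill_line(volatile uint8_t *buf)
{
    for (int i = 0; i < 64; i++) {
        buf[i] = (uint8_t)i;
    }
}
```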
Bus Interface Unit
The bus interface unit (BIU) contains the interface to the L2 memory system and buffers to decouple the interface from the L1 data cache and STB.
Data Prefetching
The Cortex*-A55 core has a data prefetch mechanism that looks for cache line fetches with regular patterns. If the data prefetcher detects a pattern, then it signals to the memory system that memory accesses from a specified address are likely to occur soon. The memory system responds by starting new line fills to fetch the predicted addresses ahead of the demand loads.
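The sketch below contrasts an access pattern that a stride-based prefetcher can follow with one that it cannot. This is illustrative C rather than code from this document; the function names, types, and the stride parameter are assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-stride reads: successive cache line fetches follow a regular
 * pattern, so a hardware prefetcher can predict upcoming addresses and
 * start line fills before the demand loads reach them.
 */
uint64_t sum_strided(const uint64_t *data, size_t n, size_t stride)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i += stride) {
        sum += data[i];
    }
    return sum;
}

/* Pointer chasing: the next address depends on the value just loaded, so
 * there is no regular stride for the prefetcher to lock onto and each
 * cache miss is exposed to the full memory latency.
 */
struct node {
    struct node *next;
    uint64_t value;
};

uint64_t sum_list(const struct node *head)
{
    uint64_t sum = 0;
    for (const struct node *p = head; p != NULL; p = p->next) {
        sum += p->value;
    }
    return sum;
}
```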