Cache Memory Organization
In high-speed computing, data is made available to processors in small temporary memories known as caches. These memories are placed much closer to the CPU than the main memory on the device.
(2) Direct Mapped Cache Memory.
Direct Mapped (also known as 1-way set associative): A fixed mapping from cache lines within a cache page of main memory to cache locations, so one location in the cache can hold only one corresponding entry from the main-memory cache pages. Less complex, but also less efficient.
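The fixed mapping can be sketched in a few lines. This is an illustrative model, not the article's hardware: the geometry (32 B lines, 1024 lines) and the function name are assumptions.

```python
# Sketch: how a direct-mapped cache splits a byte address.
# Assumed geometry (not from the article): 32 B lines, 1024 lines.
LINE_SIZE = 32      # bytes per cache line -> 5 offset bits
NUM_LINES = 1024    # lines in the cache   -> 10 index bits

def split_address(addr: int):
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % NUM_LINES   # fixed slot: one possible location
    tag = addr // (LINE_SIZE * NUM_LINES)     # identifies which block occupies the slot
    return tag, index, offset
```

Two addresses whose tags differ but whose index fields match always collide on the same cache location, which is why direct mapping is simpler but less efficient than an associative design.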

(3) Set-Associative Cache Memory.
Set associative (2-way or 4-way): A hybrid of the two architectures above. Cache memory is partitioned into 2 or 4 equal parts called ways; a block maps to a fixed set, as in direct mapping, but can occupy any way within that set, as in a fully associative cache.
Cache memory does not act as permanent storage; rather, it is transparent to the processor. Its primary role is to act as temporary storage for data with a high probability of future memory accesses.

(1) Associative Cache Memory.

Only cache lines are involved, no cache pages. Any cache line from main memory can be placed at any location in the cache. Many comparators are required, one per cache line.
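The role of those comparators can be sketched as follows. In hardware every TAG entry is compared in parallel; this software model replaces the parallel comparators with a loop, and all names are illustrative assumptions, not from the article.

```python
# Sketch of a fully associative lookup. In hardware each TAG-RAM entry has
# its own comparator working in parallel; here the comparators are a loop.
LINE_SIZE = 32  # assumed line size in bytes

def lookup(tag_ram, addr):
    tag = addr // LINE_SIZE            # no index field: any line may hold any block
    for line, stored_tag in enumerate(tag_ram):
        if stored_tag == tag:          # comparator match -> cache hit
            return line
    return None                        # no comparator matched -> cache miss
```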


[Figure: Fully associative cache memory and TAG]

[Figure: Block diagram of a fully associative cache]

[Figure: Address-match circuitry of an associative cache]

[Figure: Complete hardware of a fully associative cache]

Cache memory is further classified by its proximity to the processor.

L1 Cache: it is on the same chip as the microprocessor.
L2 Cache: it is usually on a separate static RAM (SRAM) chip.

Next topic: Associative Cache Memory

What’s the difference between Write-Through and Write-Back Caches?

Hint: Write-through: All writes to the cache also go to main memory. Advantage: avoids cache-coherence issues for multi-threaded/multi-processor systems. Disadvantage: it is slow, since every write must reach main memory.

Write-back: Writes to the cache do not immediately go to main memory; modified lines are written back later, on eviction. Advantage: fast. Disadvantage: needs logic to manage cache coherency.
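The contrast can be shown with a toy model. This is a deliberately simplified sketch; the dictionaries, the dirty set, and the function names are all illustrative assumptions.

```python
# Toy contrast of the two write policies (illustrative only).
def write_through(cache, memory, addr, value):
    cache[addr] = value
    memory[addr] = value        # every write also reaches main memory:
                                # slow, but memory is always coherent

def write_back(cache, dirty, addr, value):
    cache[addr] = value
    dirty.add(addr)             # main memory is updated later, on eviction:
                                # fast, but needs logic to track dirty lines
```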


Cache size is 64 KB, block size is 32 B, and the cache is two-way set associative. For a 32-bit physical address, give the division between block offset, index, and tag.

No. of blocks = 64 KB / 32 B = 2K blocks = 2048

2-way set associative: 2K / 2 = 1K sets for the addressing space

1K = 2^10, so 10 bits for the index.

32 B block = 2^5, so 5 bits for the block offset.

Remaining tag bits = 32 - (10 + 5) = 17 bits
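The arithmetic above can be checked in a few lines; the variable names are illustrative.

```python
# Check of the worked example: 64 KB cache, 32 B blocks,
# 2-way set associative, 32-bit physical address.
cache_size = 64 * 1024   # bytes
block_size = 32          # bytes
ways = 2
addr_bits = 32

blocks = cache_size // block_size                 # total blocks in the cache
sets = blocks // ways                             # sets = blocks / ways
offset_bits = block_size.bit_length() - 1         # log2(32)   = 5
index_bits = sets.bit_length() - 1                # log2(1024) = 10
tag_bits = addr_bits - index_bits - offset_bits   # what remains of the address
```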


What is snooping and snarfing?

Snooping: the cache monitors the address lines for bus transactions to memory.

Snarfing: the cache also monitors the data lines and updates its own contents.


What is cache hit and cache miss?

Cache hit: for a processor address request, the corresponding data is available in the cache.

Cache miss: the data is not available in the cache and must be fetched from main memory.


What are main cache components?

Cache components:

SRAM: the main storage for data brought in from main memory.

Tag RAM: a small SRAM that stores the main-memory addresses (tags) of the data held in the cache.

Cache controller: manages the SRAM and Tag RAM, read control (snooping, snarfing), write policies, and cache hit/miss decisions.



What is ACBF (hex) divided by 16? Hint: ACB (hex); dividing by 16 shifts the number right by one hex digit, with remainder F.


Convert 65 (hex) to binary? Hint: expand each hex digit to 4 bits: 6 → 0110, 5 → 0101, giving 0110 0101.
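Both hex questions above can be verified in a couple of lines:

```python
# 0xACBF divided by 16: a divide by 16 drops the last hex digit.
quotient = 0xACBF // 16
print(hex(quotient))        # 0xacb

# 0x65 in binary: each hex digit expands to 4 bits (6 -> 0110, 5 -> 0101).
print(format(0x65, "08b"))  # 01100101
```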



Cache memory related interview questions

What is a cache? Hint: see the details above.