NPTEL Advanced Computer Architecture Week 8 Assignment Answers 2025
1. If the row address is n bits, there are __ rows in an SRAM array.
- n
- log (n)
- 2^n
- n^2
Answer :-
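The relationship being tested here is that an n-bit row address can select one of 2^n rows, typically via a one-hot row decoder. A minimal sketch (the function name and structure are illustrative, not from the course):

```python
# Sketch of a one-hot row decoder: an n-bit row address asserts
# exactly one of 2**n word lines in the SRAM array.
def decode_row(address, n):
    """Return a one-hot list of 2**n word lines for an n-bit row address."""
    word_lines = [0] * (2 ** n)  # one word line per row
    word_lines[address] = 1      # assert only the addressed row
    return word_lines

wl = decode_row(5, n=3)          # 3 address bits -> 2**3 = 8 rows
print(len(wl), wl.index(1))      # 8 rows; word line 5 asserted
```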
2. Each SRAM cell is connected to a bit line pair such that each bit line stores __________ value(s). During a read operation from an SRAM array, when a word line (WL) is set to 1, it enables __________ SRAM cells in that row.
- complementary, selective
- complementary, all the
- same, selective
- same, all the
Answer :-
3. What is the minimum number of transistors required to build a functional CAM cell?
- 6
- 8
- 10
- 12
Answer :-
4. Choose the incorrect statement.
- The MSHR is a hardware structure.
- MSHRs are essential for non-blocking caches.
- An MSHR keeps a record of all the accesses to a missed cache block.
- An MSHR has a single miss queue for all cache blocks.
Answer :-
5. Which of the following is not true about cache banks?
- Banking allows parallel access.
- A bank is an independent array.
- Banking requires less area.
- Banking incurs higher routing and decoding overheads.
Answer :-
6. Consider the following statements about results derived using the Elmore delay model and select the most appropriate option.
S1: The latency of a wire is proportional to the square of its length.
S2: SRAM banks can be modeled as a set of simple circuit elements.
- Only S1 is true
- Only S2 is true
- Both S1 and S2 are true
- Both S1 and S2 are false
Answer :-
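Statement S1 can be checked numerically: the Elmore delay of a wire modeled as a uniform RC ladder is the sum, over each resistor, of that resistance times the total capacitance downstream of it. Doubling the wire length doubles both the total resistance and the total capacitance, so the delay grows roughly quadratically. A small sketch under these assumptions (per-segment r and c values are arbitrary placeholders):

```python
def elmore_delay(n_segments, r_seg=1.0, c_seg=1.0):
    """Elmore delay of a uniform RC ladder: for each series resistor,
    add (resistance) * (capacitance downstream of that resistor)."""
    delay = 0.0
    for i in range(1, n_segments + 1):
        downstream_cap = (n_segments - i + 1) * c_seg  # caps after resistor i
        delay += r_seg * downstream_cap
    return delay

d1 = elmore_delay(10)          # wire of length L (10 segments)
d2 = elmore_delay(20)          # wire of length 2L (20 segments)
print(d1, d2, d2 / d1)         # 55.0 210.0 ~3.8x, i.e. nearly quadratic
```

Because the ratio approaches 4x as segment count grows, latency is proportional to the square of wire length, which is why long SRAM wires are segmented with repeaters in practice.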
7. In an MSHR, if the secondary miss is a ____________, it is just appended to the tail of the miss queue, whereas if the secondary miss is a __________, we search the earlier writes in the miss queue to the same set of bytes. If we find such an entry, its value is _________________.
- Read, write, forwarded
- Read, write, dropped
- Write, read, forwarded
- Write, read, dropped
Answer :-
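The mechanism this question describes can be sketched in a few lines. This is a hedged illustration, not the course's implementation: class and method names are invented, and the sketch shows one common policy in which secondary writes are buffered at the tail of the miss queue while a secondary read searches earlier buffered writes to the same bytes and forwards the value on a match.

```python
# Illustrative MSHR entry for one missed cache block (names are assumptions).
class MSHREntry:
    def __init__(self, block_addr):
        self.block_addr = block_addr  # address of the missed cache block
        self.miss_queue = []          # (op, offset, value) in program order

    def secondary_write(self, offset, value):
        # A secondary write is just appended to the tail of the miss queue.
        self.miss_queue.append(("write", offset, value))

    def secondary_read(self, offset):
        # A secondary read searches earlier writes to the same bytes,
        # newest first, and forwards the buffered value on a match.
        for op, off, val in reversed(self.miss_queue):
            if op == "write" and off == offset:
                return val                        # forwarded value
        self.miss_queue.append(("read", offset, None))
        return None                               # must wait for the fill

entry = MSHREntry(block_addr=0x1000)
entry.secondary_write(offset=4, value=42)
print(entry.secondary_read(offset=4))  # 42: forwarded from the earlier write
```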
8. Consider the following statements and select the most appropriate option.
S1: Implementing replacement policies is tricky in skewed-associative caches.
S2: Pipelined caches provide a reduced access latency per memory access request compared to non-pipelined caches.
- Only S1 is true
- Only S2 is true
- Both S1 and S2 are true
- Both S1 and S2 are false
Answer :-
9. Choose the pair of operations that can “specifically” happen in parallel in a VIPT cache.
- Tag comparison, Reading the data array
- Address Translation, Tag comparison
- Address Translation, Reading the data array
- Address Translation, Data block selection
Answer :-
10. Which among the following is not an input to the CACTI tool?
- Cache size in bytes
- Block size in bytes
- Associativity
- Total wire length
Answer :-