Direct Mapping, Fully Associative Mapping & k-Way Set Associative

Explaining the different Cache Mapping Techniques
by

Kimberley McDowell

on 12 March 2016

Transcript of Direct Mapping, Fully Associative Mapping & k-Way Set Associative

Direct Mapping, Fully Associative Mapping & k-Way Set Associative Mapping Techniques.
A cache memory has a line size of four 8-bit words and a capacity of 32K words. The size of main memory is 16M words. Addressing is done at the byte level. Determine the addressing format using direct, fully associative and two-way set associative mapping techniques.

Outline the steps taken by the CPU to determine whether the contents of a requested main memory address reside within the cache.
Cache Mapping Techniques
K-WAY SET ASSOCIATIVE MAPPING
Set associative mapping is essentially a compromise between fully associative mapping and direct mapping, utilizing the 'best of both worlds'.

Lines of cache are grouped into sets, and each direct-mapped set is called a 'way'. A block of main memory can be mapped into any of the lines of the specific set to which that block is assigned.

2 lines per set is 2-way set associative mapping, 4 lines per set is 4-way set associative mapping, and so on. Note that the lines per set increase in powers of 2.

In the event of a cache miss, a replacement algorithm such as Least Recently Used (LRU) chooses which line of the set is replaced by the fresh read from main memory.
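The LRU behaviour described above can be sketched in a few lines of Python. This is my own minimal illustration, not from the slides: one cache set is modelled as an OrderedDict of tags, and the least recently used tag is evicted on a miss once the set is full.

```python
from collections import OrderedDict

def access(cache_set, tag, ways=2):
    """Simulate one access to a k-way set; return True on a cache hit.

    cache_set maps tag -> block data, ordered from least to most
    recently used (illustrative sketch, not a real cache model).
    """
    if tag in cache_set:
        cache_set.move_to_end(tag)     # hit: mark as most recently used
        return True
    if len(cache_set) >= ways:
        cache_set.popitem(last=False)  # miss on a full set: evict the LRU line
    cache_set[tag] = "block read from main memory"
    return False
```

With 2 ways, accessing tags 1, 2, 2, 3 gives miss, miss, hit, miss, and the last access evicts tag 1, the least recently used.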

Direct Mapping
Direct Mapping is a method used to store information in cache for quick access by the processor.

It is the way in which memory blocks are stored/mapped to the lines in cache.

Essentially, each block in main memory maps to a specific line in cache.
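As a concrete sketch (my own, using an arbitrary 8-line cache rather than a figure from the presentation), the fixed line for a block is simply the block number modulo the number of cache lines:

```python
def direct_map(block_number, num_lines):
    """Direct mapping: each main memory block has exactly one possible cache line."""
    return block_number % num_lines

# With 8 cache lines, blocks 3, 11 and 19 all map to (and compete for) line 3.
lines = [direct_map(b, 8) for b in (3, 11, 19)]
```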

Test Your Knowledge!
Fully Associative Mapping

With fully associative mapping, a block in main memory can be stored to any line in cache.

To determine whether a block of main memory is in cache, a comparison between the tag bits of the memory address and the tags of all cache lines is made in parallel.

If there is a cache miss, a cache replacement algorithm is used to copy the required memory block into cache.

Advantages
Fully associative mapping allows for flexible block placement such that main memory blocks can be mapped to any line of cache. This differs from direct mapping, which limits each main memory block to a specific line of cache.

This technique allows replacement algorithms to be utilized, making the access of data much more efficient. Examples include the Least Recently Used (LRU) algorithm and the Least Frequently Used (LFU) algorithm.
Key Terms
The Most Significant Bits (s) – Represents the number of bits required to uniquely identify a block in Main Memory; it is further broken down into the Tag & the Line.

Tag – The number of bits required to uniquely identify a Memory Address Block.

Line – The number of bits required to uniquely identify a line of cache.

Word – Represents the least significant bits which uniquely identify a word on a line of Cache.


By: Cheney Antoine, Shem Best, Kimberley McDowell, Terris Scott & Latrisha Thorington
Objectives
Explain the 3 mapping techniques (direct, fully associative & k-way set associative)

Discuss the advantages and disadvantages of the three mapping techniques

Explain how main memory blocks are divided and interpreted by cache
What is Cache Mapping?
Cache mapping is the way in which main memory blocks are copied into cache. Mapping is necessary because there are fewer cache lines than main memory blocks, and there needs to be some way of determining where and how the main memory blocks will be copied to the different cache lines.

Furthermore, we need to be able to determine which main memory block currently occupies a cache line.

The choice of mapping technique determines how cache is organized. The three mapping techniques are:

Direct Mapping

Fully Associative Mapping

K-Way Set-Associative Mapping

The Concept of Computer Memory
There are two computer memory categories:

External

Internal
Internal memory can be divided into three main categories:

Registers

Cache

Main Memory
Main Memory
Main memory is used to store programs and data during the computer's operation. It consists of RAM and ROM chips. There are two forms of RAM chips:

Dynamic

Static

Dynamic RAM (DRAM) is used in main memory while static RAM (SRAM) is used in cache.
Cache Memory
Cache memory is used by the CPU to store frequently used data and instructions, by using the Principle of Locality of Reference (see reference manual for more details).

This allows the average execution time to be reduced because cache is faster than main memory. Cache is placed between main memory and the CPU. There can be different levels of cache, for example level 1 cache (L1) and level 2 cache (L2). These levels follow the memory hierarchy, so L1 cache will be faster, smaller and more expensive than L2 cache.


When the CPU needs to access memory to retrieve some data, it first searches cache. Cache contains a copy of portions of main memory.

If the data is found in cache, it is delivered to the CPU. If the data isn't found in cache, main memory is searched, and the portion of main memory containing the required data is copied and transferred into cache. The data is then delivered to the CPU. This idea of transferring the portion of main memory where the data item was found to cache is based on the principle of locality of reference.
The CPU, Cache & Main Memory Connection
How Data is Stored in Memory
How Main Memory Address Blocks are Divided For Direct Mapping
A Main Memory Address (s + w bits):

Tag (s - r bits) | Line (r bits) | Word (w bits)
Advantages
Only one comparison is needed to detect a cache hit.

Disadvantages
Each block in main memory is mapped to a single line of cache. This means that if a program repeatedly accesses 2 blocks that map to the same line, the probability of a cache miss is very high (low hit ratio).
The smallest unit of information in a computer is a bit, which can be either a 0 or a 1. However, data isn't usually stored as individual bits.

A byte is 8 bits. This means one character can be represented as a byte, because there are enough combinations to allow for this.

The processor, however, works with a unit called a word. A word has a fixed number of bits that is stored within each memory address of a computer and is taken as a unit. The size of a word differs depending on the machine.
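A quick numeric check of these units (a sketch of mine; the 32-bit word size is just an assumed example, since word size varies by machine):

```python
BITS_PER_BYTE = 8  # a byte is 8 bits

# An n-bit unit can represent 2**n distinct values, so a byte has
# 256 combinations -- enough to represent one character.
byte_values = 2 ** BITS_PER_BYTE

# Word size is machine dependent; assuming a 32-bit word, it holds 4 bytes.
word_bits = 32
word_bytes = word_bits // BITS_PER_BYTE
```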
Let’s check out an Example!!
QUESTION
The main memory of a computer system consists of 256 blocks of 128 Kbits. The cache memory has a line size of sixteen 32-bit words and a capacity of 32 K words.

Assuming that the addressing is done at the byte level, show the format of main memory addresses using direct mapping.
Step 1
LIST KEY INFORMATION

Main Memory: 256 blocks of 128 Kbits
Cache Memory Line Size: 16 32-bit words
Cache Capacity: 32K words
Byte level addressing
Using Direct Mapping


Advantages
Simple to implement

Inexpensive – Why? Because only one comparison is needed to detect a cache hit.

Disadvantages
It is not flexible due to the nature of how it operates.

Fixed location for a given block – What does this mean? Two blocks that map to the same line cannot be cached at the same time.
CALCULATING (s+w) BITS
Information Used: Main Memory: 256 blocks of 128 Kbits

2^(s+w) = 256 x 128K = 2^8 x 2^7 x 2^10 = 2^25 bits
(We need to convert these bits to bytes; 8 bits = 1 byte, and 8 = 2^3)
2^25 / 2^3 = 2^22 bytes

Therefore (s+w) = 22 bits
CALCULATING (w) BITS
Information used: Cache Memory Line Size: 16 32-bit words

2^w = 16 x 32 = 2^4 x 2^5 = 2^9 bits
(We need to convert these bits to bytes)
2^9 / 2^3 = 2^6 bytes

Therefore (w) = 6 bits
(s+w) = 22 bits
CALCULATING (r) BITS
Information used: Cache Capacity: 32K words; Cache Memory Line Size: 16 32-bit words

Formula required for (r) bits: 2^r = Cache size / Line size

2^r = 32K words / 16 words = (2^5 x 2^10) / 2^4 = 2^15 / 2^4 = 2^11

Therefore r = 11 bits
(s+w) = 22 bits
CALCULATING (s-r) BITS
Formula required for (s-r) bits: (s - r) = (s + w) - (w + r)

(s - r) = 22 - (6 + 11) = 22 - 17 = 5

Therefore (s-r) = 5 bits
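The steps above can be checked programmatically. The sketch below is my own (function name and parameters are made up for illustration); it recomputes the direct-mapping field widths for the worked example using base-2 logarithms:

```python
from math import log2

def direct_mapping_format(memory_bits, line_bits, cache_words, line_words):
    """Return (tag, line, word) field widths for a byte-addressed,
    direct-mapped cache (illustrative sketch; sizes must be powers of 2)."""
    s_plus_w = int(log2(memory_bits // 8))    # total address bits (byte level)
    w = int(log2(line_bits // 8))             # word/offset bits per line
    r = int(log2(cache_words // line_words))  # line (index) bits
    return s_plus_w - w - r, r, w             # tag = (s + w) - (w + r)

# Worked example: 256 blocks of 128 Kbits, lines of 16 32-bit words, 32K-word cache
tag, line, word = direct_mapping_format(
    memory_bits=256 * 128 * 1024,
    line_bits=16 * 32,
    cache_words=32 * 1024,
    line_words=16,
)
```

This reproduces the 5 / 11 / 6 split derived above.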
Thanks for checking out Direct Mapping, we hope you learned a lot!
Now on to Fully Associative Mapping!
How Main Memory Address Blocks are Divided For Fully Associative Mapping
A Main Memory Address
QUESTION
The main memory of a computer system consists of 256 blocks of 128 Kbits. The cache memory has a line size of sixteen 32-bit words and a capacity of 32 K words.

Assuming that the addressing is done at the byte level, show the format of main memory addresses using fully associative mapping.
What do we know?
KEY INFORMATION

Main Memory: 256 blocks of 128 Kbits.

Cache Memory Line Size: 16 32-bit words.

Cache Capacity: 32K words.

Addressing done at the Byte level.

Using Fully Associative Mapping scheme.

Step 1
CALCULATING (s+w) BITS
2^(s+w) = 256 x 128K = 2^8 x 2^7 x 2^10 = 2^25 bits
(At the byte level, so divide by 8 to get the number of bytes)
2^25 / 2^3 = 2^22 bytes

Therefore s + w = 22 bits
Let's check out an Example!!

Main Memory Address (s + w = 22 bits):

Tag (s bits) | Word (w bits)
Thanks for checking out Fully Associative Mapping, we hope you learned a lot!
Now on to K-Way Set Associative Mapping!
QUESTION
The main memory of a computer system consists of 256 blocks of 128 Kbits. The cache memory has a line size of sixteen 32-bit words and a capacity of 32 K words.

Assuming that the addressing is done at the byte level, show the format of main memory addresses using 4-way set associative mapping.
What We Know
KEY INFORMATION

Main Memory: 256 blocks of 128 Kbits.

Cache Memory Line Size: 16 32-bit words.

Cache Capacity: 32K words.

Addressing done at the Byte level.

Using Set-Associative Mapping.
Step 1
CALCULATING (S + W) BITS
Let's Check Out An Example!!
Cache – Refers to small high speed memory which the CPU uses to access data.

Cache Hit – When the required data resides in Cache.

Hit Ratio – The percentage of all memory accesses found in level 1 Cache.

Main Memory Address (s+w) – The total number of bits required to represent a Memory Address.

Main Memory Block – Data stored in Main Memory.

Cache Replacement Algorithm – An algorithm used to replace current data residing in cache.

THANK YOU FOR LEARNING WITH US!
[Diagram: cache lines indexed by the LINE bits and labelled with TAG bits, shown alongside main memory. This is how cache interprets the main memory address.]
VICTIM CACHE
Victim cache improves on the disadvantages of direct mapping.

Victim cache is fully associative.

Instead of writing data back to main memory when a new block needs to be placed into a specified line of cache, the evicted data is placed into the victim cache. This improves the hit ratio.
CALCULATING (s+w) BITS
Main Memory Address: 2^25 bits / 2^3 = 2^22 bytes, so (s+w) = 22 bits

CALCULATING (w) BITS
2^9 bits / 2^3 = 2^6 bytes, so w = 6 bits
*REMEMBER, 8 BITS = 1 BYTE, and 8 = 2^3

CALCULATING (r) BITS
2^15 / 2^4 = 2^11, so r = 11 bits

CALCULATING (s-r) BITS
22 - (6 + 11) = 5, so (s-r) = 5 bits
Disadvantages
Disadvantages of this technique include the fact that complex circuitry is required to examine the tags of all cache lines simultaneously.
CALCULATING (s+w) BITS
2^(s+w) = 2^8 x 2^7 x 2^10 = 2^25 bits; 2^25 / 2^3 = 2^22 bytes, so (s+w) = 22 bits

CALCULATING (w) BITS
Information Used: Line Size: 16 32-bit words

2^w = 16 x 32 = 2^4 x 2^5 = 2^9 bits
(At byte level, so divide by 8 to get the number of bytes)
2^9 / 2^3 = 2^6 bytes

Therefore w = 6 bits
CALCULATING (s) BITS
(s + w) = 22 bits, w = 6 bits

s represents the Tag and uniquely identifies a block in memory.

s = (s + w) - w = 22 - 6 = 16

Therefore s = 16 bits
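For fully associative mapping, only the word offset is removed from the address and everything else is tag. A small sketch of mine (names are illustrative, not from the slides) following the steps above:

```python
from math import log2

def fully_associative_format(memory_bits, line_bits):
    """Return (tag, word) field widths for a byte-addressed,
    fully associative cache (illustrative sketch; powers of 2 assumed)."""
    s_plus_w = int(log2(memory_bits // 8))  # total address bits (byte level)
    w = int(log2(line_bits // 8))           # word/offset bits
    return s_plus_w - w, w                  # s = (s + w) - w

# Same worked example: 256 blocks of 128 Kbits, lines of 16 32-bit words
tag, word = fully_associative_format(256 * 128 * 1024, 16 * 32)
```

This reproduces the Tag (16 bits) | Word (6 bits) split derived above.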
How Main Memory Address Blocks are Divided For
K-Way Set Associative Mapping
A Main Memory Address (s + w bits):

Tag (s - d bits) | Set (d bits) | Word (w bits)

This is how cache interprets the main memory address.
Advantages
With fully associative mapping, the Tag is relatively long and must be compared to every line in the cache. With set-associative mapping, however, the Tag is much smaller and is only compared to the Tags within a single set.

This technique is also very robust: as the number of lines per set increases, the hit ratio increases.
Disadvantages
The major disadvantage of this technique is that as the number of lines per set increases, the overall search speed decreases.
Main Memory Address (s + w bits)
2^(s+w) = 256 x 128K = 2^8 x 2^7 x 2^10 = 2^25 bits
(At the byte level, so divide by 8 to get the number of bytes)
2^25 / 2^3 = 2^22 bytes

Therefore (s + w) = 22 bits
Information Used: Main Memory: 256 blocks of 128 Kbits

CALCULATING (w) BITS
(s + w) = 22 bits
Information Used: Line Size: 16 32-bit words

2^w = 16 x 32 = 2^4 x 2^5 = 2^9 bits
(At byte level, so divide by 8 to get the number of bytes)
2^9 / 2^3 = 2^6 bytes

Therefore w = 6 bits
CALCULATING (d) BITS
(s + w) = 22 bits
Information used: Cache Capacity: 32K words; Cache Memory Line Size: 16 32-bit words

Number of lines: 32K words / 16 words = (2^5 x 2^10) / 2^4 = 2^15 / 2^4 = 2^11, so 11 bits are needed to represent a line.

To figure out d, the number of sets must be worked out. With 4-way set associative mapping, there are 4 = 2^2 lines per set:

2^d = 2^11 / 2^2 = 2^9

Therefore d = 9 bits
Step 4
CALCULATING (s-d) BITS
Main Memory Address: (s + w) = 22 bits
Formula required for (s-d) bits: (s - d) = (s + w) - (w + d)

(s - d) = 22 - (6 + 9) = 22 - 15 = 7

Therefore (s-d) = 7 bits
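The k-way calculation generalizes both earlier results: the line bits are split into set bits (d) and extra tag bits. A sketch of mine (illustrative names, not from the slides) for the 4-way example:

```python
from math import log2

def set_associative_format(memory_bits, line_bits, cache_words, line_words, ways):
    """Return (tag, set, word) field widths for a byte-addressed,
    k-way set associative cache (illustrative sketch; powers of 2 assumed)."""
    s_plus_w = int(log2(memory_bits // 8))  # total address bits (byte level)
    w = int(log2(line_bits // 8))           # word/offset bits
    lines = cache_words // line_words       # number of cache lines
    d = int(log2(lines // ways))            # set index bits
    return s_plus_w - w - d, d, w           # tag = (s + w) - (w + d)

# Worked example with 4 ways: 256 blocks of 128 Kbits, 16 32-bit words/line,
# 32K-word cache
tag, set_bits, word = set_associative_format(
    256 * 128 * 1024, 16 * 32, 32 * 1024, 16, ways=4
)
```

This reproduces the Tag (7 bits) | Set (9 bits) | Word (6 bits) split derived above.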
Thanks for checking out Set Associative Mapping, we hope you learned a lot!
Direct Mapping
TRUE OR FALSE? Direct Mapping implies that the number of lines in cache must be equal to the number of blocks in main memory.
Answer: FALSE

Fully Associative Mapping
How does the CPU interpret the memory address when using the fully associative mapping technique?
a. Tag, Line & Word
b. Tag & Word
c. Tag, Line, Set & Word
d. Tag, Set & Word
Answer: b

K-Way Set Associative Mapping
How is cache organized in set associative mapping?
Answer: Cache is organized into sets, which each contain the same number of lines.

Practice a Question
Tag (7 bits) | Set (9 bits) | Word (6 bits)