Symmetric Shared Memory Architecture

by fj fj on 5 January 2013

Centralized shared-memory multiprocessor, or symmetric shared-memory multiprocessor (SMP):

Multiple processors are connected to a single centralized memory. Since all processors see the same memory organization, this is called uniform memory access (UMA).

Shared memory: all processors can access the entire memory address space.
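
As a minimal illustration of the single shared address space, here is a sketch using POSIX threads (the array and function names are invented for this example): several threads store into one global array, and the main thread then reads every element through ordinary loads, with no explicit communication code.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* One array in the single shared address space: every thread can
   load and store any element of it. */
static int shared_data[NTHREADS];

static void *worker(void *arg) {
    int id = *(int *)arg;
    shared_data[id] = id * 10;   /* ordinary store into shared memory */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    /* The main thread sees every update because all threads share
       one address space. */
    for (int i = 0; i < NTHREADS; i++)
        printf("shared_data[%d] = %d\n", i, shared_data[i]);
    return 0;
}

On a UMA machine every thread reaches this array at the same cost regardless of which processor it runs on; compile with the -pthread flag.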

Can the centralized memory become a bandwidth bottleneck? Not if you have large caches and employ fewer than a dozen processors.

Models for Communication

Shared-memory:
Well-understood programming model
Communication is implicit and hardware handles protection
Hardware-controlled caching

Message-passing:
No cache coherence, so the hardware is simpler
Explicit communication makes it easier for the programmer to restructure code (see the sketch below)
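
To make the contrast concrete, the following sketch passes a message between two processes over a POSIX pipe: unlike the shared-memory sketch above, the data must be packaged and moved with explicit send and receive operations (write and read stand in for them here). This illustrates the programming model only, not any particular message-passing library.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {          /* child acts as the sender */
        close(fd[0]);
        int value = 42;
        /* Explicit communication: the sender packages the data and
           invokes a send primitive (here, write on the pipe). */
        write(fd[1], &value, sizeof value);
        close(fd[1]);
        return 0;
    }

    /* parent acts as the receiver */
    close(fd[1]);
    int received = 0;
    read(fd[0], &received, sizeof received);   /* explicit receive */
    close(fd[0]);
    wait(NULL);

    printf("received %d via an explicit message\n", received);
    return 0;
}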

Cache Coherence

Directory-based: a single location (the directory) keeps track of the sharing status of a block of memory (a rough sketch follows below).

Snooping: every cache block is accompanied by the sharing status of that block; all cache controllers monitor the shared bus so they can update the sharing status of the block, if necessary.
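
As a rough data-structure sketch of the directory-based approach, one directory entry per memory block can hold a sharing state plus a bit vector of the processors that hold a copy. The state names and fields below are assumptions made up for this example, not a specific machine's protocol.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical directory entry, one per memory block. */
enum dir_state { UNCACHED, SHARED, EXCLUSIVE };

struct dir_entry {
    enum dir_state state;   /* sharing status of the block          */
    uint32_t sharers;       /* bit p set => processor p has a copy  */
};

/* Processor p reads the block: record it as a sharer. */
static void dir_read(struct dir_entry *e, int p) {
    e->sharers |= 1u << p;
    e->state = SHARED;
}

/* Processor p writes the block: every other copy must be
   invalidated, leaving p as the sole (exclusive) holder. */
static void dir_write(struct dir_entry *e, int p) {
    uint32_t others = e->sharers & ~(1u << p);
    if (others)
        printf("send invalidations to processors in mask 0x%x\n", others);
    e->sharers = 1u << p;
    e->state = EXCLUSIVE;
}

int main(void) {
    struct dir_entry block = { UNCACHED, 0 };
    dir_read(&block, 0);    /* P0 reads  -> SHARED, sharers {P0}       */
    dir_read(&block, 2);    /* P2 reads  -> SHARED, sharers {P0, P2}   */
    dir_write(&block, 0);   /* P0 writes -> invalidate P2, EXCLUSIVE   */
    printf("final state=%d, sharers=0x%x\n", block.state, block.sharers);
    return 0;
}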

Cache Coherence Protocols

Write-invalidate: a processor gains exclusive access to a block before writing by invalidating all other copies.
Maintains consistency by reading from local caches until a write occurs.

Write-update: when a processor writes, it updates other shared copies of that block.
Maintains consistency by immediately updating all copies in all caches.

An invalidation protocol works on cache blocks, while an update protocol must work on individual words.
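
The toy simulation below contrasts the two policies, assuming write-through caches and a one-word block purely for brevity (none of the names come from a real protocol): under write-invalidate, CPU-0's write drops CPU-1's copy, which is re-fetched from memory on the next read, while under write-update the new value is pushed into CPU-1's cache immediately.

#include <stdbool.h>
#include <stdio.h>

#define NCACHES 2

struct cache_line { bool valid; int value; };

static struct cache_line cache[NCACHES];
static int memory_word;               /* the single shared word X */

static int cpu_read(int p) {
    if (!cache[p].valid) {            /* miss: fetch from memory  */
        cache[p].value = memory_word;
        cache[p].valid = true;
    }
    return cache[p].value;
}

static void cpu_write(int p, int v, bool write_update) {
    cache[p].value = v;
    cache[p].valid = true;
    memory_word = v;                  /* write-through to memory  */
    for (int q = 0; q < NCACHES; q++) {
        if (q == p || !cache[q].valid)
            continue;
        if (write_update)
            cache[q].value = v;       /* update: push the new word */
        else
            cache[q].valid = false;   /* invalidate: drop the copy */
    }
}

int main(void) {
    for (int mode = 0; mode < 2; mode++) {
        bool write_update = (mode == 1);
        cache[0] = cache[1] = (struct cache_line){ false, 0 };
        memory_word = 1;

        cpu_read(0);                        /* both CPUs cache X = 1 */
        cpu_read(1);
        cpu_write(0, 7, write_update);      /* CPU-0 writes X = 7    */

        bool had_copy = cache[1].valid;     /* true only under update */
        printf("%s: CPU-1 reads %d (its copy was %s by the write)\n",
               write_update ? "write-update" : "write-invalidate",
               cpu_read(1),
               had_copy ? "updated in place" : "invalidated");
    }
    return 0;
}
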
Memory Organization

[Figure: structure of the centralized shared-memory multiprocessor. Four processors, each with its own cache, connect through a shared interconnect to main memory and the I/O system.]

Support the caching of both shared and private data:
Private data is used by a single processor.
Shared data is used by multiple processors.

The value of X as seen by each cache and by main memory, assuming write-through caches:

Time  Event                Cache-A  Cache-B  Memory
0                          -        -        1
1     CPU-A reads X        1        -        1
2     CPU-B reads X        1        1        1
3     CPU-A stores 0 in X  0        1        0

Two different processors can have two different values for the same location. This is generally referred to as the cache-coherence problem.
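
The short simulation below replays the table with two private write-through caches and no coherence mechanism at all; after CPU-A's store, CPU-B keeps reading the stale value 1 out of its own cache. It is a toy model of the scenario only, and the variable names are invented.

#include <stdio.h>

static int memory_x = 1;                 /* value of X in main memory      */
static int cache_a = -1, cache_b = -1;   /* -1 means "X is not cached yet" */

static int read_x(int *cache) {
    if (*cache == -1)
        *cache = memory_x;               /* miss: fill from memory */
    return *cache;
}

static void print_row(int t, const char *event) {
    printf("%d  %-20s  A=%2d  B=%2d  mem=%d\n",
           t, event, cache_a, cache_b, memory_x);
}

int main(void) {
    print_row(0, "(initial)");
    read_x(&cache_a);  print_row(1, "CPU-A reads X");
    read_x(&cache_b);  print_row(2, "CPU-B reads X");

    cache_a = 0;                         /* CPU-A stores 0: its own cache   */
    memory_x = 0;                        /* and memory (write-through), but
                                            nothing notifies cache B        */
    print_row(3, "CPU-A stores 0 in X");

    printf("CPU-B now reads X = %d (stale)\n", read_x(&cache_b));
    return 0;
}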

A memory system is coherent if:

Write propagation: P1 writes to X; no other processor writes to X; sufficient time elapses; P2 reads X and receives the value written by P1.

Write serialization: two writes to the same location by two processors are seen in the same order by all processors.