Efficient Memory Management Techniques in Cloud Computing

by Hisham Raslan
1 February 2014


Efficient Memory Management Techniques in Cloud Computing

Sharing Memory
Several VMs may be running instances of the same guest OS, have the same applications or components loaded, or contain common data.

The hypervisor exploits these sharing opportunities, so that server workloads running in VMs on a single machine often consume less memory than they would running on separate physical machines.
I/O Page Remapping
Some high-end systems provide hardware support that can be used to remap memory for data transfers using a separate I/O MMU.

More commonly, support for I/O involving “high” memory above the 4 GB boundary involves
copying the data through a temporary bounce buffer in “low” memory.

Unfortunately, copying can impose significant overhead resulting in increased latency,
reduced throughput, or increased CPU load.

This problem is exacerbated by virtualization, since even pages from virtual machines configured with less than 4 GB of “physical” memory may be mapped to machine pages residing in high memory [7].

Fortunately, this same level of indirection in the virtualized memory system can be exploited to transparently remap guest pages between high and low memory.
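As a rough sketch of the overhead being described, the C fragment below shows the extra copy a bounce buffer introduces when the target page lies above the 4 GB boundary; the helpers alloc_low_page and dma_to_device are hypothetical. Transparent remapping avoids exactly this copy.

    /* Illustrative bounce-buffer path for I/O to "high" memory.
     * If the device cannot address the real target page, the data is
     * copied through a temporary buffer in "low" memory. */
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE   4096
    #define LOW_MEM_TOP (4ULL << 30)   /* 4 GB boundary */

    extern void *alloc_low_page(void);                 /* assumed helper */
    extern void dma_to_device(const void *low_addr);   /* assumed helper */

    void io_write_page(const void *page_data, uint64_t machine_addr)
    {
        if (machine_addr + PAGE_SIZE <= LOW_MEM_TOP) {
            dma_to_device(page_data);            /* device can reach it directly */
        } else {
            void *bounce = alloc_low_page();     /* temporary "low" memory buffer */
            memcpy(bounce, page_data, PAGE_SIZE);/* extra copy: latency + CPU cost */
            dma_to_device(bounce);
        }
    }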

Memory Reclamation
The hypervisor supports overcommitment of memory to facilitate a higher degree of server consolidation than would be possible with simple static partitioning.

Overcommitment means that the total size configured for all running virtual machines exceeds the total amount of actual machine memory.

The system manages the allocation of memory to VMs automatically based on configuration parameters and system load.
Memory Virtualization
A guest operating system that executes within a virtual machine expects a zero-based physical address space, as provided by real hardware.

The hypervisor gives each VM this illusion, virtualizing physical memory by
adding an extra level of address translation.

The hypervisor maintains a pmap data structure for each VM to translate “physical” page numbers (PPNs) to machine page numbers (MPNs) [3].
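A minimal sketch of such a per-VM translation structure in C, assuming a simple array indexed by PPN; the names and layout are illustrative, not the hypervisor's actual data structure.

    #include <stdint.h>

    #define INVALID_MPN UINT64_MAX

    typedef struct {
        uint64_t *ppn_to_mpn;   /* indexed by PPN, holds an MPN or INVALID_MPN */
        uint64_t  num_pages;    /* size of the VM's "physical" space, in pages */
    } pmap_t;

    /* Translate a guest PPN to an MPN; returns INVALID_MPN if unmapped. */
    static uint64_t pmap_translate(const pmap_t *pmap, uint64_t ppn)
    {
        if (ppn >= pmap->num_pages)
            return INVALID_MPN;
        return pmap->ppn_to_mpn[ppn];
    }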
Demand Paging
When ballooning is not possible or insufficient, the system falls back to a paging mechanism.

The hypervisor's swap daemon receives information about target swap levels for each VM from a higher-level policy module.

It manages the selection of candidate pages and coordinates asynchronous page outs to a swap area on disk.

Conventional optimizations are used to maintain free slots and cluster disk writes. A randomized page replacement policy is used to prevent pathological interference with the native guest OS memory management algorithms.
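A minimal sketch of randomized candidate selection, assuming the swap daemon keeps a list of the VM's resident PPNs; the names are hypothetical.

    #include <stdint.h>
    #include <stdlib.h>

    /* Pick a uniformly random resident guest page as a page-out candidate,
     * so that no particular guest OS replacement pattern is systematically
     * penalized. */
    uint64_t pick_swap_candidate(const uint64_t *resident_ppns, size_t num_resident)
    {
        size_t idx = (size_t)(rand() % num_resident);
        return resident_ppns[idx];
    }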
Page Replacement
When memory is overcommitted, the hypervisor must employ some mechanism to reclaim space from one or more virtual machines.

The standard approach used by earlier virtual machine systems is to introduce another level of paging, moving some VM “physical” pages to a swap area on disk [4].

Unfortunately, an extra level of paging requires a meta-level page replacement policy: the virtual machine system must choose not only the VM from which to revoke memory, but also which of its particular pages to reclaim.

Share-Based Allocation
In proportional-share frameworks, resource rights are encapsulated by shares, which are owned by clients that consume resources.

A client is entitled to consume resources proportional to its share allocation; it is guaranteed a minimum resource fraction equal to its fraction of the total shares in the system.

Shares represent relative resource rights that depend on the total number of shares contending for a resource.

Client allocations degrade gracefully in overload situations, and clients proportionally benefit from extra resources when some allocations are underutilized.
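As a small illustration, the guaranteed minimum can be computed directly from the share ratio; the names below are illustrative.

    #include <stdint.h>

    /* A client's guaranteed memory under proportional-share allocation:
     * guaranteed fraction = client_shares / total_shares. */
    uint64_t guaranteed_bytes(uint64_t client_shares, uint64_t total_shares,
                              uint64_t total_memory_bytes)
    {
        return (uint64_t)((double)client_shares / (double)total_shares
                          * (double)total_memory_bytes);
    }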

Content-Based Page Sharing
The basic idea is to identify page copies by their contents.

Pages with identical contents can be shared regardless of when, where, or how those contents were generated.

This general-purpose approach has two key advantages:

First, it eliminates the need to modify, hook, or even understand guest OS code.

Second, it can identify more opportunities for sharing; by definition, all potentially shareable pages can be identified by their contents.
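A minimal sketch of the lookup, assuming a hash of page contents is used as a hint and a full byte comparison confirms the match; hash_page and lookup_shared are hypothetical helpers.

    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define PAGE_SIZE 4096

    uint64_t hash_page(const uint8_t *page);        /* assumed helper */
    const uint8_t *lookup_shared(uint64_t hash);    /* assumed helper: NULL if no match */

    bool can_share(const uint8_t *candidate)
    {
        const uint8_t *existing = lookup_shared(hash_page(candidate));
        /* Identical hashes do not guarantee identical pages, so verify
         * the full contents before the pages are actually shared. */
        return existing != NULL && memcmp(existing, candidate, PAGE_SIZE) == 0;
    }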
Transparent Page Sharing
Transparent page sharing is a method for eliminating redundant copies of pages, such as code or read-only data, across virtual machines.

Once copies are identified, multiple guest “physical” pages are mapped to the same machine page and marked copy-on-write [5].

Writing to a shared page causes a fault that generates a private copy.
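A rough sketch of handling such a write fault, assuming hypothetical helpers for machine-page allocation and pmap updates.

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    extern uint8_t *mpn_to_addr(uint64_t mpn);                      /* assumed helper */
    extern uint64_t alloc_machine_page(void);                       /* assumed helper */
    extern void pmap_set(uint64_t ppn, uint64_t mpn, int writable); /* assumed helper */

    /* On a write fault to a shared page, remap the faulting guest PPN to a
     * fresh private machine page holding a copy of the shared contents. */
    void handle_cow_fault(uint64_t ppn, uint64_t shared_mpn)
    {
        uint64_t private_mpn = alloc_machine_page();
        memcpy(mpn_to_addr(private_mpn), mpn_to_addr(shared_mpn), PAGE_SIZE);
        pmap_set(ppn, private_mpn, /*writable=*/1);   /* break the sharing */
    }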

Brief introduction

Efficient memory management is one of the hot topics in cloud computing because of the need for integrated data handling and the demand for optimized memory management algorithms.


The trending Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) paradigms need smart memory management protocols integrated into the cloud in order to mitigate latency and load-balancing issues.

Virtual Machine in Cloud Computing
VMs have been used for decades to allow multiple copies of potentially different operating systems to run concurrently on a single hardware platform.

Each virtual machine (VM) is given the illusion of being a dedicated physical machine that is fully protected and isolated from other virtual machines [1].
Virtual Machine Monitor (Cloud Hypervisor)
The VMM is a software layer that virtualizes hardware resources, exporting a virtual hardware interface that reflects the underlying machine architecture.

Let me introduce some basic concepts about my research scope.

Virtual Machine Monitor (Cloud Hypervisor)

Designed to multiplex hardware resources efficiently among virtual machines.

It provides scalability and fault containment for commodity operating systems running on large-scale shared-memory multiprocessors [2].


How do VMMs efficiently manage the memory resource?


Memory Ballooning
The hypervisor uses a ballooning technique to achieve predictable performance by coaxing the guest OS into cooperating with it when possible.
A small balloon module is loaded into the guest OS as a pseudo-device driver or kernel service.
When the server wants to reclaim memory, it instructs the driver to inflate the balloon by allocating pinned physical pages within the VM.
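A rough sketch of the inflate path, assuming hypothetical guest and hypervisor interfaces (guest_alloc_pinned_page, hypervisor_release_ppn).

    #include <stdint.h>
    #include <stddef.h>

    extern uint64_t guest_alloc_pinned_page(void);     /* assumed guest-kernel API */
    extern void hypervisor_release_ppn(uint64_t ppn);  /* assumed hypercall */

    /* Pinning pages inside the guest forces the guest OS to reclaim memory
     * from its own caches; the pinned PPNs are handed to the hypervisor,
     * which can then reclaim the backing machine pages. */
    void balloon_inflate(size_t num_pages)
    {
        for (size_t i = 0; i < num_pages; i++) {
            uint64_t ppn = guest_alloc_pinned_page();  /* guest can no longer use it */
            hypervisor_release_ppn(ppn);               /* backing MPN may be reclaimed */
        }
    }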
Reclaiming Idle Memory
A significant limitation of pure proportional-share algorithms is that they do not incorporate any information
about active memory usage or working sets.

Memory is effectively partitioned to maintain specified ratios.
However, idle clients with many shares can hoard memory unproductively, while active clients with few shares
suffer under severe memory pressure.

The basic idea is to charge a client more for an idle page than for one it is actively using.
When memory is scarce, pages will be reclaimed preferentially from clients that are not actively using their full
allocations [6].

The tax rate specifies the maximum fraction of idle pages that may be reclaimed from a client.
If the client later starts using a larger portion of its allocated memory, its allocation will increase, up to its full share.
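As an illustration, one way to express this charging scheme is a shares-per-page ratio in which idle pages are charged at a higher rate determined by the tax rate tau; the formulation below is a sketch, not necessarily the exact policy in [6].

    /* Idle-memory tax sketch: a client hoarding idle memory gets a lower
     * effective shares-per-page ratio and therefore loses pages first when
     * memory is scarce. tau is the tax rate in [0, 1). */
    double shares_per_page(double shares, double pages_allocated,
                           double active_fraction, double tau)
    {
        double idle_cost = 1.0 / (1.0 - tau);          /* idle pages cost more */
        double charged = pages_allocated *
            (active_fraction + idle_cost * (1.0 - active_fraction));
        return shares / charged;                       /* lowest ratio is reclaimed first */
    }

With tau = 0, idle and active pages cost the same, recovering pure proportional sharing; as tau approaches 1, idle pages become arbitrarily expensive and are reclaimed first.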

Thank you,

Presented by:
Hisham Gamal Raslan
References

[1] Robert P. Goldberg. “Survey of Virtual Machine Research,” IEEE Computer, 7(6), June 1974.

[2] Jeremy Sugerman, Ganesh Venkitachalam, and Beng-Hong Lim. “Virtualizing I/O Devices on VMware Workstation’s
Hosted Virtual Machine Monitor,” Proc. Usenix Annual Technical Conference, June 2001.

[3] Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau. “Information and Control in Gray-Box Systems,”
Proc. Symposium on Operating System Principles, October 2001.

[4] Kinshuk Govil, Dan Teodosiu, Yongqiang Huang, and Mendel Rosenblum. “Cellular Disco: Resource Management Using Virtual Clusters on Shared-Memory Multiprocessors,” Proc. Symposium on Operating System Principles, December 1999.

[5] Edouard Bugnion, Scott Devine, Kinshuk Govil, and Mendel Rosenblum. “Disco: Running Commodity Operating systems on Scalable Multiprocessors,” ACM Transactions on Computer Systems, 15(4), November 1997.

[6] David G. Sullivan and Margo I. Seltzer. “Isolation with Flexibility: A Resource Management Framework for Central Servers,” Proc. Usenix Annual Technical Conference, June 2000.

[7] Intel Corporation. IA-32 Intel Architecture Software Developer’s Manual. Volumes I, II, and III, 2001.