Cloud Computing


Shreyas Kulkarni

on 2 December 2014

Transcript of Cloud Computing


Cloud Computing
Cloud is a network of remote servers.

The key word here is remote.

They are not present on the local system.

Cloud can also be remote IT infrastructure.
What is Cloud?
Cloud computing is computing in which this network of remote servers is used to provide services to clients.

Cloud computing is computing in which large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources.

These component services should be adjustable daily to the needs of the customer and offered with the utmost availability and security.

What is Cloud Computing?
1. Public Clouds

2. Private Clouds

3. Hybrid Clouds.

Types of Cloud
Examples of Public clouds:

1. Amazon AWS
2. Microsoft
3. Google.

All of the above own, operate, and maintain their infrastructure at data centers.

Connection to these clouds is over the Internet.
Private cloud is cloud infrastructure operated solely for a single organization.
The cloud can be maintained by the organization itself or by a third-party vendor.
Private Cloud
The Private Cloud can be hosted internally or externally.
A cloud is called a "public cloud" when the services are rendered over a network that is open for public use.
Public Cloud
Public cloud services may be free or offered on a pay-per-usage model.

Technically there may be little or no difference between public and private cloud architecture.
Security considerations, however, may be substantially different for services offered to a public audience.
Hybrid cloud is a composition of two or more clouds (private, community or public).

These clouds remain separate (distinct).

They offer the benefits of multiple deployment models.

e.g. a hybrid cloud made up of a public and a private cloud offers the benefits of both deployment models.
Hybrid Clouds
Gartner's Definition of Hybrid Clouds:

Hybrid cloud is composed of some combination of private, public and community cloud services, from different service providers.

A hybrid cloud service crosses isolation and provider boundaries so that it can’t be simply put in one category of private, public, or community cloud service.
Hybrid Cloud Example
An organization may store sensitive client data in-house on a private cloud application.

This data may be interconnected to a business intelligence application provided on a public cloud as a software service.
Community Clouds
Community cloud shares infrastructure between several organizations from a specific community with common concerns.
These clouds are managed internally or by a third party.

They may be hosted internally or externally.
Community Clouds
Costs are spread over fewer users than in a public cloud (but more than in a private cloud), so only some of the cost-savings potential of cloud computing is realized.
Distributed Clouds
Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to a single network or hub service.

Examples include distributed computing platforms such as BOINC and Folding@home.
History of Cloud Computing
August 24, 2006 will go down in history as the birthday of cloud computing.

On this day Amazon released a test version of its Elastic Compute Cloud (EC2).

EC2 provided flexible computing resources to developers who did not wish to maintain their own IT infrastructure.
History of Cloud Computing
Interesting Facts:

Cloud computing became popular as a term in 2007, when the word was introduced into the Oxford dictionary.

Dell attempted to trademark the term; it initially succeeded, but the trademark was later revoked.

By 2008 the scope of cloud computing had grown from simple infrastructure services to one that involved entire applications.
Historical Perspective of Cloud Computing
At the root of the evolution of cloud computing is the shift of IT services from a local computer to the Internet.

Cloud computing realized an idea that Sun Microsystems had already hit upon long before the cloud computing hype: "The network is the computer."

As the cloud is considered to be an invisible computer, it is supposed to be transparent to the user/client.
Cloud Computing is more an evolution than a revolution.
Understanding the Cloud
Cloud providers bundle together a whole series of components for their customers.

Let us take a look at the various components of a cloud.

Components of Cloud
These components should be offered as services on the internet

These components should be easily usable
Why Cloud Computing
Business needs are the main driver of cloud computing.

In 2008 Gartner predicted that cloud computing would transform the market, and today it has.

Companies have started focusing on cloud computing, and there have been tremendous advancements in the cloud.

Businesses must be able to act quickly and assuredly in the market; cloud computing gives them that power.
Why Cloud Computing
Improving Cost Structure:

Companies are always under cost pressure and have to adjust to the financial market.

At a time when cloud computing was picking up, the market was in recession.

Companies were forced, as a result, to adjust or even improve their cost structures.
Why Cloud Computing
Improving Cost Structure:

Cloud providers claim significant cost reductions.

A public-cloud delivery model converts capital expenditure to operational expenditure.

This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks.

Why Cloud Computing
Dealing with Changes in the market:

Companies must position themselves in increasingly dynamic markets.

Companies need to react to these changes if they want to operate successfully in the market.

Why Cloud Computing
Dealing with changes in the market:

Thus, the pressure increases not only on the management of a company, but also on its ICT, which has to be able to adapt quickly and flexibly to new circumstances in all business processes supported and mapped by ICT.

Cloud computing allows a company's infrastructure to be flexible, so that the company can react quickly and appropriately to changes in the market.
Why Cloud Computing
Realizing increases in productivity:
Business processes and the ICT of a company are very much linked together.

ICT services have to be readily available for the company's business processes to avoid delays.

Delays in the business processes will cause a drastic reduction in the productivity of the company.

A decrease in productivity will affect profits.
Why Cloud Computing
Overview of Cloud Computing Drivers
The concept of cloud computing allows ICT services to be readily available in a flexible and efficient way.

Cloud computing is capable of improving the effectiveness and efficiency of these ICT services, and also of reducing their costs.

This benefits the productivity of an organization.

Characteristics of Cloud
Agility improves with users' ability to re-provision technological infrastructure resources.
A public-cloud delivery model converts capital expenditure to operational expenditure.

This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks.
Characteristics of the Cloud
Device and Location Independence

Cloud infrastructure is provided by a third party and users can connect to the cloud from anywhere.

This characteristic enables users to access the application on the cloud using a web browser, regardless of the device they are using.

Maintenance is easy, because applications do not need to be installed on each person's computer and can be accessed from different places.
Characteristics of Cloud

Allows sharing of resources and costs across a large pool of users.

This allows for:
Increased peak-load capacity
Improved utilization and efficiency


Performance of any application or service can be monitored on the cloud.
Characteristics of Cloud
The cloud allows users to work on the same data at the same time, which can result in an increase in productivity.

The cloud, as mentioned earlier, plays an important role in an organization's ICT services. This helps business processes, which results in a productivity increase.
Cloud Computing Architectures
Best Practices:
In the following section we will discuss some relatively new concepts, such as elasticity, and certain best practices in designing an architecture for the cloud.
Business Benefits of Cloud
Almost zero upfront infrastructure investment

Just-in-time infrastructure

More efficient resource utilization

Usage-based costing

Reduced time to market
Technical Benefits of Cloud
Automation

Proactive scaling

More efficient development lifecycle

Improved testability

Disaster recovery and business continuity

"Overflow" the traffic to the cloud
Cloud Concepts for Scalable Architectures
The cloud reinforces some old concepts of building highly scalable Internet architectures and introduces some new
concepts that entirely change the way applications are built and deployed.

The cloud changes several processes, patterns, practices, philosophies and reinforces some traditional service-oriented architectural principles.

Let us take a look at some new philosophies that the cloud brings in when it comes to architecture.
Building Scalable Architectures
The cloud is designed to provide conceptually infinite scalability.

To leverage all that scalability in infrastructure, the architecture has to be scalable. Both have to work together.

Designers need to refactor the architecture to take advantage of the scalable infrastructure.
Characteristics of a Truly Scalable Service
Increasing resources results in a proportional increase in performance

A scalable service is capable of handling heterogeneity

A scalable service is operationally efficient

A scalable service is resilient

A scalable service should become more cost-effective when it grows (cost per unit reduces as the number of units increases)
Different Approaches for Scaling
Scale-up approach:
Investing heavily in larger and more powerful computers (vertical scaling) to accommodate the demand.
Costs a fortune.

The traditional scale-out approach:
Creating an architecture that scales horizontally and investing in infrastructure in
small chunks.

This still requires predicting the demand at regular intervals and then deploying infrastructure in chunks to meet the demand. This often leads to excess capacity.
Automated Elasticity
Cloud Elasticity
Elasticity is one of the fundamental properties of the cloud.

Elasticity is the power to scale computing resources up and down easily and with minimal friction.

It is important to understand that elasticity will ultimately drive most of the benefits of the cloud.

As a cloud architect, you need to internalize this concept and work it into your application architecture.

Designing intelligent elastic cloud architectures, so that infrastructure runs only when you need it, is an art in itself.

Elasticity should be one of the architectural design requirements or a system property.
Specific Techniques to Implement Elasticity
Not Fearing Constraints:
Cloud might not have the exact specification of the resource that you have on-premise.

You should not feel constrained when using cloud resources: even if you do not get an exact replica of your on-premise hardware in the cloud, you have the ability to get more of those resources in the cloud to compensate.

When you combine the on-demand provisioning capabilities with this flexibility, you will realize that apparent constraints can actually be broken in ways that improve scalability and overall system performance.

Cloud Best Practices
We will take a look at a few best practices for building an application in the cloud:

Design for failure

Decouple your components

Implement elasticity

Think parallel

Keep dynamic data closer to the compute and static data closer to the end-user

Design For Failure
Be a pessimist when designing architectures in the cloud; assume things will fail.

In other words, always design, implement and deploy for automated recovery from failure.

Realize that things fail over time and incorporate that thinking into your architecture.

Build mechanisms to handle that failure before disaster strikes.

To deal with a scalable infrastructure, you will end up creating a fault-tolerant architecture that is optimized for the cloud.

Mechanisms to handle Failure
1. Have a coherent backup and restore strategy for your data, and automate it.

2. Build process threads that resume on reboot.

3. Allow the state of the system to re-sync by reloading messages from queues.

4. Keep pre-configured and pre-optimized virtual images to support (2) and (3) on launch/boot.

5. Avoid in-memory sessions or stateful user context; move that to data stores.
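The resume-on-reboot idea in points (2) and (4) can be sketched in Python. This is a minimal illustration under assumed names (the checkpoint path, `process_items`, and the JSON format are all hypothetical), not any provider's recovery API:

```python
import json
import os
import tempfile

# Hypothetical checkpoint location; a real service would use durable storage.
CHECKPOINT = os.path.join(tempfile.gettempdir(), "job_checkpoint.json")

def load_checkpoint():
    """Return the last saved state, or a fresh one on first boot."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_item": 0}

def save_checkpoint(state):
    """Persist progress so a restarted process resumes instead of starting over."""
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def process_items(items):
    """Process items from wherever the last run stopped."""
    state = load_checkpoint()
    for i in range(state["next_item"], len(items)):
        _ = items[i].upper()          # stand-in for real per-item work
        state["next_item"] = i + 1
        save_checkpoint(state)        # progress survives a crash or reboot
    return state["next_item"]
```

If the process dies mid-run, the next boot reloads the checkpoint and continues from the first unprocessed item rather than redoing completed work.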
Decouple your components
The cloud reinforces the SOA design principle that the more loosely coupled the components of a system are, the bigger and better it scales.

The key is to build components that do not have tight dependencies on each other.

If one component were to die (fail), sleep (not respond) or remain busy (slow to respond) for some reason, the other components in the system are built so as to continue to work as if no failure is happening.
Decouple your Systems
One can build a loosely coupled system using messaging queues.

A queue/buffer is used to connect any two components together.

It can support concurrency, high availability, and load spikes.
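A minimal sketch of the queue/buffer idea, using Python's standard `queue` and `threading` modules in place of a real cloud messaging service; the component names are hypothetical:

```python
import queue
import threading

# A bounded buffer decouples the producer component from the consumer
# component: a slow consumer does not block the producer until the
# buffer is completely full, so it absorbs load spikes.
buffer = queue.Queue(maxsize=100)
results = []

def producer(n):
    for i in range(n):
        buffer.put(i)       # enqueue work items
    buffer.put(None)        # sentinel: no more work

def consumer():
    while True:
        item = buffer.get()
        if item is None:
            break
        results.append(item * 2)   # stand-in for real processing

t1 = threading.Thread(target=producer, args=(5,))
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

Because the two components interact only through the queue, either side could be restarted or scaled out without the other needing to know.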

Implement Elasticity
Elasticity can be implemented in three ways:

1. Proactive Cyclic Scaling:
Periodic scaling that occurs at a fixed interval.

2. Proactive Event-based Scaling:
Scaling just when you are expecting a big surge of traffic requests due to a scheduled business event.

3. Auto-scaling based on demand:
By using a monitoring service, your system can send triggers to take appropriate actions so that it scales up or down based on metrics (utilization of the servers or network i/o, for instance).
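A toy illustration of the demand-based case: given the current fleet size and a measured CPU utilization, decide how many instances to run. The formula, target utilization, and bounds here are illustrative assumptions, not any monitoring service's actual policy:

```python
def desired_instances(current, cpu_utilization, target=0.6, min_n=1, max_n=10):
    """Size the fleet so average CPU utilization moves toward the target.

    All thresholds are illustrative, not a real provider's defaults.
    """
    # Total load is roughly current * utilization; divide by the target
    # per-instance utilization to get a fleet size that absorbs it.
    needed = round(current * cpu_utilization / target)
    # Clamp to configured bounds so the fleet never scales to zero
    # or grows without limit.
    return max(min_n, min(max_n, needed))
```

For example, four instances running hot at 90% CPU would be grown to six, while four instances idling at 30% would be shrunk to two.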
Implement Elasticity
To implement “Elasticity”, one has to first automate the deployment process and streamline the configuration and build process.

1. Automate your infrastructure.

2. Bootstrap your instances.
Think Parallel
The cloud makes parallelism effortless.

It is advisable to not only implement parallelism wherever possible but also to automate it, because the cloud allows you to create a repeatable process very easily.

In the cloud it becomes even more important to leverage parallelization.

A general best practice, in the case of a web application, is to distribute the incoming requests across multiple asynchronous web servers using a load balancer.
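The fan-out idea can be sketched with Python's standard `concurrent.futures` pool standing in for a load balancer and a fleet of web servers; `handle_request` is a hypothetical stand-in for real per-request work:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(req):
    """Stand-in for per-request work (e.g. fetching and rendering a page)."""
    return req * req

requests = list(range(8))

# Fan the independent requests out across a pool of workers, the same
# idea a load balancer applies across multiple web servers.
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, requests))
```

Because the requests are independent, adding workers (or servers) increases throughput without changing the per-request logic.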

Keep dynamic data closer to the compute and static data closer to the end-user
In general it’s a good practice to keep your data as close as possible to your compute or processing elements to reduce latency.

In the cloud, this best practice is even more relevant and important because you often have to deal with Internet latencies.

In the cloud, you are paying for bandwidth in and out of the cloud by the gigabyte of data transfer and the cost can add up very quickly.
Service Models
1) User Cloud, aka Software as a Service (SaaS)
e.g. Salesforce.com, Google Docs, Red Hat Network/RHEL

2) Development Cloud, aka Platform as a Service (PaaS)
e.g. VMware CloudFoundry, Google AppEngine, Windows Azure, Rackspace Sites, Red Hat OpenShift, ActiveState Stackato, AppFog

3) Systems Cloud, aka Infrastructure as a Service (IaaS)
e.g. EC2, Rackspace CloudFiles, OpenStack, CloudStack, Eucalyptus, OpenNebula

Software as a Service
1) Defined as software that is deployed over the Internet.

Web access to commercial software
Software is managed from a central location
Software is delivered in a "one-to-many" model
Users are not required to handle software upgrades and patches
APIs allow for integration between different pieces of software
Platform as a Service
1)PaaS can be defined as a computing platform that allows the creation of web applications quickly and easily without the complexity of buying and maintaining the software and infrastructure beneath it.

Services to develop applications in the same IDE
Multi-tenant architecture
Built-in scalability of deployed software
Integration with databases and web services via common standards
Support for development team collaboration
Tools to handle billing and subscription management
Infrastructure as a Service
1) IaaS is a way of delivering Cloud Computing infrastructure – servers, storage, network and operating systems – as an on-demand service.

2) Characteristics:
Resources are distributed as a service
Allows for dynamic scaling
Has a variable cost, utility pricing model
Generally includes multiple users on a single piece of hardware
Datacenters and Operating Costs
1) What are datacenters?

2) A cloud datacenter is much simpler and more cost-effective to organize than a traditional datacenter.

3) Costs involved in traditional datacenters: $10-25 million/year
42%: hardware, software, disaster recovery arrangements, uninterrupted power supplies, and networking
58%: cooling, property and labor costs, taxes, etc.

4) Estimates for setting up a datacenter for a cloud infrastructure:
Labor costs: 6%
Power distribution and cooling: 20%
Computing costs: 48%
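For illustration, the quoted 42%/58% split for a traditional datacenter can be applied to a figure inside the stated $10-25M range; the $15M figure and the helper name are assumptions for the example, not data from the source:

```python
def split_costs(annual_cost):
    """Split a traditional datacenter budget using the percentages quoted above."""
    return {
        # 42%: hardware, software, DR arrangements, UPS, and networking
        "hardware_software_dr_ups_network": round(0.42 * annual_cost),
        # 58%: cooling, property and labor costs, taxes, etc.
        "cooling_property_labor_taxes": round(0.58 * annual_cost),
    }

parts = split_costs(15_000_000)   # illustrative $15M/year figure
```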

Datacenter for the Cloud
Three salient modules:

Building blocks: similar to traditional datacenters, with a few enhanced capabilities
10 Gigabit Ethernet
Unified fabric
Unified computing
Key software components
Key facilities components

PUE and Energy Efficiency
1) Datacenters use the PUE (Power Usage Effectiveness) metric to measure efficiency.

2) A PUE of 2.0 means that for every watt of IT power, an additional watt is consumed to cool and distribute power to the IT equipment. A PUE closer to 1.0 means nearly all of the energy is used for computing.

3) According to the Uptime Institute's 2014 Data Center Survey, the average PUE of respondents' largest data centers is around 1.7.

More information: http://www.google.com/about/datacenters/efficiency/internal/
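The PUE arithmetic from the points above as a one-line helper; the function name is ours, but the definition (total facility power divided by IT equipment power) is the standard one:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 2000 kW overall for 1000 kW of IT load has PUE 2.0:
# one extra watt of cooling/distribution per watt of computing.
# At the survey's 1.7 average, 1700 kW total would serve 1000 kW of IT load.
```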
Cloud infrastructure architecture
Phased evolution of the cloud
Virtual Machine & its Components
1) Why virtualize?
Consolidation of servers
Support heterogeneous and legacy OSes
Rapid deployment and provisioning
Testing/Debugging before going to production.
Recovery and backup
Load balancing

2) Virtual machines are the basic computing components of your virtual infrastructure.
Overview of Virtualization Techniques
1) Guest operating system virtualization

2) Shared kernel virtualization

3) Kernel level virtualization

4) Hypervisor virtualization
Paravirtualization
Full virtualization
Hardware virtualization
Hypervisor Layout
VMware vSphere
1) A no-cost, production-ready hypervisor from VMware that lets you virtualize your servers.

2) Built on the proprietary VMware ESXi architecture

3) Allows thin provisioning

4) OS-independent, with a small install footprint

5) e.g. applications: enterprise email, small-scale databases, and ERP systems.

vSphere capabilities
1) Hot pluggable PCIe SSD devices.

2) Support for Reliable Memory technology

3) Improved virtual machine compatibility

4) Expanded virtual graphics support: modern distros are enabled to support technologies such as:
OpenGL 2.1
DRM Kernel mode setting

5) vMotion: allows live migration of virtual machines from one host to another without shutting down the migrating VM.

6) High Availability

7) Distributed Resource Scheduler: dynamically allocates resources to high priority applications.

Networking for the Cloud
1) VMware vSphere Distributed Switch

2) One virtual switch for the entire vSphere environment greatly simplifies management.

3) Enhanced LACP

4) Traffic filtering

5) DSCP Marking support

Link Aggregation Control Protocol
1) LACP, a subcomponent of IEEE 802.3ad, provides additional functionality for link aggregation groups (LAGs) by aggregating one or more Ethernet interfaces.

2) Comprehensive load-balancing algorithm support.

3) Support for multiple link aggregation groups: 64 LAGs/host & 64 LAGs/vSphere switch

4) Configuration templates
LACP example – 2 LAGs
Traffic filtering in vSphere distributed switch

1) Ability to filter packets based on the parameters of the packet header.

2) This capability, also referred to as Access Control Lists (ACLs), is used to provide port-level security.
Quality of Service Tagging
1) Class of Service (CoS) applied on Ethernet packets.

2) DSCP (Differentiated Services Code Point) applied on IP packets.

3) Helps reserve bandwidth for important traffic types
Xen Project
1) Large user base

2) Powers some of the largest clouds in production.

3) Not just for servers

4) OS agnostic and device driver isolation

5) Makes use of paravirtualization and PVH mode

6) Not included in the Linux kernel

7) Xen packages are included in all Linux distros (except RHEL6)
Xen hypervisor architecture
Xen components in detail
1) Xen project hypervisor

2) Guest domains / virtual machines

3) Control domain

4) Toolstack and Console

5) Xen project enabled operating systems
Characteristics of Cloud
On-Demand Self Service
Broad Network Access
Resource Pooling
Rapid Elasticity
Measured Service

On-Demand Self Service
Refers to the service provided by cloud computing vendors that enables the provision of cloud resources
Prime feature of most cloud offerings where the user can scale the required infrastructure up to a substantial level
Most users begin by using limited resources and increase them over time

Broad Network Access
Refers to resources hosted in a private cloud network that are available for access from a wide range of devices
Companies that have broad network access within a cloud network need to deal with certain security issues that arise
In a private cloud, secure data is accessed only by company employees within a company's own firewall

Resource Pooling
Resource pooling is an IT term used in cloud computing environments to describe a situation in which providers serve multiple clients, customers or "tenants" with provisional and scalable services
The kinds of services that can apply to a resource pooling strategy include data storage services, processing services and bandwidth provided services

Resource Pooling Cont’d
Resource pooling, the sharing of computing capabilities, leads to increased resource utilization rates
Pooling resources on the software level means that a consumer is not the only one using the software
The software must be designed to partition itself and provide scalable services to multiple unrelated tenants

Rapid Elasticity
Rapid elasticity is a cloud computing term for scalable provisioning, or the ability to provide scalable services
Rapid elasticity allows users to automatically request additional space in the cloud or other types of services
In a sense, cloud computing resources appear to be infinite or automatically available
Requests that come from multiple sources can also be demanding and require precise administration

Service Statelessness
Service statelessness is a design principle that is applied within the service-orientation design paradigm for designing scalable services by separating them from their state data whenever possible
The interaction of any two software programs involves keeping track of the interaction-specific data as each subsequent interaction may depend upon the outcome of the previous interaction
This becomes more important in distributed architectures where the client and the server do not exist physically on the same machine
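A minimal Python contrast illustrating the principle above: in the stateless version the caller carries the state with each call, so any replica of the service could handle any request. Both class and function names are hypothetical:

```python
# Stateful design: the service object remembers the running total, so every
# call from a given client must reach this same instance.
class StatefulCounter:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n
        return self.total

# Stateless design: the interaction-specific data (the total) travels with
# the request, so the function holds no state between calls and any replica
# behind a load balancer can serve it.
def stateless_add(total, n):
    return total + n
```

In a distributed architecture, the stateless form is what lets requests be routed to any machine; any state that must persist is pushed out to a shared data store instead of held in the service.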

Measured Service
This is a reference to services where the cloud provider measures or monitors the provision of services
The general idea is that in automated remote services, these measurement tools will provide both the customer and the provider with an account of what has been used
In more traditional systems, items like invoices and service change agreements would fill these same roles

A Generic 3-Tier Application
3 Tier Cloud Application
A Distributed Application is decomposed into application components to scale individual application functions independently
There can be many differentiating factors of application tiers
Each tier is elastically scaled independently

SPARC Cloud Architecture
SPARC is a RISC architecture technology for microprocessors developed by Sun Microsystems, which introduced it in 1987
It is generally identified with the Solaris OS
The SPARC architecture is designed to optimize both 32-bit and 64-bit implementations
SPARC is a highly-scalable open architecture designed to offer fast execution rates

The word "scalable" in SPARC means the register stack can be scaled up to 512 registers, or 32 windows, to minimize processor loads
It can also be scaled down to minimize interference and context switching time
Since its release, there have been several revisions to the SPARC architecture
SPARC has introduced many new features in version 8, which includes multiply and divide functionality and a 128-bit quad-precision register

3-Tier Architecture
Oracle SPARC Supercluster T4
One of the most popular clusters used for distributed systems and cloud applications
Used with Oracle VM Server and Solaris 11 OS
Easy, GUI-based provisioning of new VMs
Automated HA failover in the event of physical server failures
Automatic load balancing across a cluster of VM hosts
Complete end-to-end monitoring

SPARC Server Platform
Four SPARC T4 processors – each processor comes with eight cores and eight threads per core.
1 TB of memory – 64 of the latest 16 GB DDR3 memory DIMMs.
Eight disk drives – six 600 GB SAS2 disk drives and two 300 GB SSD disk drives.
Sun PCIe Dual Port QDR InfiniBand Host Channel Adapters
Sun Dual 10 GbE SFP+ PCIe 2.0 Low Profile network interface cards

Virtualization Technology: Oracle VM Server for SPARC
Hard CPU and memory resource constraints
Guest OS fault isolation
Live migration of guests among hosts in the server pool
Automatic HA restart of guests on other hosts in the server pool after a physical server failure

Datacenter Management: Enterprise Manager Ops Center 12c
Intelligent management of the Oracle stack and Engineered Systems
Repeatable and consistent deployment of operating systems and applications
Rapid deployment of a virtualized and physical infrastructure
Life cycle management and compliance reporting
Integration with My Oracle Support (MOS) for call logging and tracking

Network and Storage Infrastructure
Sun Network 10GbE Switch 72p Top of Rack (ToR) switches provide excellent throughput between physical server and storage nodes in this implementation
Oracle’s Sun ZFS Storage 7320 or 7420 Appliance provides performant, highly-available storage for this implementation
Dual storage heads provide clustered access to a ZFS storage pool configured with the desired mix of redundancy and capacity.

Applications Run on T4-4 Supercluster
Database Domain: A domain dedicated to running Oracle Database 11g Release 2, using Oracle Exadata Storage Servers for database storage.
This domain must run Oracle Solaris 11
Application Domain: A domain dedicated to running applications on either Oracle Solaris 11 or Oracle Solaris 10

OpenStack: IaaS
The OpenStack project is an open source cloud computing platform for all types of clouds
Simple to implement
Massively scalable
Feature rich
Provides an Infrastructure-as-a-Service solution through a set of interrelated services

Conceptual Architecture
OpenStack Services
OpenStack Services Cont’d
Overview of the Topics
1) Cloud Service Models

2) Datacenters and their Architecture

3) VMWare vSphere

4) Xen OpenSource Project
1) http://www.rackspace.com/knowledge_center/whitepaper/understanding-the-cloud-computing-stack-saas-paas-iaas


3) http://searchservervirtualization.techtarget.com/definition/hypervisor?int=off

5) https://www.paloaltonetworks.com/resources/learning-center/what-is-a-data-center.html

6) http://www.virtuatopia.com/index.php/An_Overview_of_Virtualization_Techniques

7) http://wiki.xen.org/wiki/Xen_Overview#Introduction_to_Xen_Project_Architecture

8) http://wiki.qemu.org/Main_Page

9) http://wiki.xen.org/wiki/Xen_Overview#What_is_the_Xen_Project_Hypervisor.3F

10) http://www.cisco.com/web/strategy/docs/gov/CiscoCloudComputing_WP.pdf

11) http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

12) http://www.vmware.com/files/pdf/techpaper/VMware-Whats-New-in-VMware-vSphere-Networking.pdf