CloudCmp: Comparing Public Cloud Providers
Case Study: E-commerce Website
PaaS, IaaS, SaaS
Customers outsource their computation and storage to cloud providers.
Pay as you go.
Clouds scale with demand.
Introduction & Motivation
A systematic comparator of the performance and cost of cloud providers.
Strives to ensure fairness
Provides customers with the performance-cost trade-off
Provides providers with specific areas of improvement
Enables predicting application performance without having to port the application onto every cloud provider.
CloudCmp: http://www.cloudcmp.net
Design Goals
Guide a customer's choice of cloud provider.
Relevant to cloud providers.
Thoroughness vs. measurement cost.
Coverage vs. development cost.
Compliant with acceptable use policies.
Provide performance and cost information about various cloud providers.
Help a provider identify its underperforming services.
Provide a fair comparison among providers by characterizing all of them with the same set of workloads and metrics.
To minimize measurement cost, each provider is measured periodically, at different times of day, across all its locations.
To restrict development cost, cloud providers are chosen based on two criteria:
Representativeness
Identifying Common Services
MEASUREMENT METHODOLOGY
CHOOSING PERFORMANCE METRICS
Elastic Compute Cluster
Benchmark Finishing Time
Scaling Latency
Benchmark finishing time: measures how long the instance takes to complete the benchmark tasks.
Cost per benchmark: the monetary cost to complete each benchmark task.
Scaling latency: the time taken by a provider to allocate a new instance after a customer requests it.
Operation response time
Time to consistency
Cost per operation
Persistent Storage
Intra-cloud Network
Path Capacity
Path Latency
Optimal wide-area network latency
Wide-area Network
Three types of storage service:
Robustness to failure
However, no strong consistency guarantees.
Two pricing models:
Based on the CPU cycles consumed to run an operation
Fixed per-operation cost regardless of the operation's complexity.
Operation response time: how long it takes for a storage operation to finish.
Time to consistency: the time between when a datum is written to the storage service and when all reads for the datum return consistent and valid results.
Cost per operation: how much each operation costs.
The intra-cloud network connects a customer's instances among themselves and with the shared services offered by the cloud.
All providers promise high intra-datacenter bandwidth.
Path capacity is measured as TCP throughput.
The wide-area network is the collection of network paths between a cloud's data centers and external hosts on the Internet.
Optimal wide-area network latency: the minimum latency between a vantage point and any provider-owned data center.
The elastic compute cluster provides virtual instances that host and run a customer's applications.
Two types of charging models:
Based on how long an instance remains allocated (AWS, Azure, and Cloud Servers)
Based on how many CPU cycles a customer's application consumes (AppEngine)
Elastic == Scaling
Transparent Scaling
Elastic compute cluster
Wide-area network
Computation Metrics
Benchmark tasks
Benchmark finishing time
Cost per benchmark
Scaling latency
IMPLEMENTATION
Java-based benchmark tasks
Includes several CPU-intensive tasks such as cryptographic operations and scientific computations.
Each benchmark task runs in a single thread and finishes within 30 seconds.
Run the benchmark tasks on each of the virtual instance types provided by the clouds and measure their finishing times.
Based on time:
Compute the cost of each benchmark task using the task's finishing time and the published per-hour price.
Scripts repeatedly request new virtual instances and record the time from the request until the instance is available to use.
Benchmark tasks
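As a rough illustration of how a benchmark's finishing time and per-task cost can be derived, the sketch below times a hypothetical CPU-intensive task (repeated SHA-256 hashing, standing in for the cryptographic benchmarks mentioned above) and converts the finishing time into a cost using a placeholder per-hour price; neither the task nor the price comes from CloudCmp itself.

```java
import java.security.MessageDigest;

public class BenchmarkTask {
    // Hypothetical CPU-intensive benchmark task: repeated SHA-256 hashing.
    // Stands in for the cryptographic/scientific tasks the tool runs.
    static void cryptoTask(int rounds) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] data = new byte[1024];
        for (int i = 0; i < rounds; i++) {
            data = md.digest(data); // each round hashes the previous digest
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        cryptoTask(100_000);
        double finishingSec = (System.nanoTime() - start) / 1e9;

        // Cost per task = finishing time x published per-hour price.
        // 0.10 USD/hour is a placeholder, not any real provider's rate.
        double pricePerHour = 0.10;
        double costPerTask = finishingSec / 3600.0 * pricePerHour;
        System.out.printf("finishing time: %.3f s, cost: $%.8f%n",
                finishingSec, costPerTask);
    }
}
```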
Time to consistency
Cost per operation
Storage Metrics
Wrote our own Java-based client based on the reference implementations from the providers.
Uses persistent HTTP connections to avoid SSL and other connection set-up overheads.
Also vary the size of the data fetched to understand the latency vs. throughput bottlenecks of the storage service.
Operation response time: measure the time from when the client instance begins the operation to when the last byte reaches the client.
Throughput: measure the maximum rate that a client instance obtains from the storage service.
Time to consistency: write an object to the storage service (a row in a table, a blob, or a message in a queue).
Repeatedly read the object and measure how long it takes before the read returns the correct result.
Use the published prices and the billing API to obtain the cost per storage operation.
Network Metrics
Use standard tools such as iperf and ping to measure throughput and path latency.
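The write-then-poll consistency measurement for storage can be sketched as follows. The `Store` interface is a hypothetical stand-in for a provider's table/blob/queue client, and the in-memory map exists only to make the example self-contained; a real run would plug in an actual cloud storage client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class ConsistencyProbe {
    // Minimal storage abstraction; both methods are hypothetical
    // placeholders for a provider's storage API.
    interface Store {
        void put(String key, String value) throws Exception;
        String get(String key) throws Exception;
    }

    // Write a datum, then poll reads until the new value comes back.
    // The elapsed time approximates the "time to consistency" metric.
    static long timeToConsistencyMillis(Store store, String key, String value)
            throws Exception {
        long start = System.nanoTime();
        store.put(key, value);
        while (!value.equals(store.get(key))) {
            TimeUnit.MILLISECONDS.sleep(10); // poll interval
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // In-memory store for demonstration purposes only.
        Map<String, String> m = new ConcurrentHashMap<>();
        Store local = new Store() {
            public void put(String k, String v) { m.put(k, v); }
            public String get(String k) { return m.get(k); }
        };
        System.out.println("consistent after "
                + timeToConsistencyMillis(local, "probe", "v1") + " ms");
    }
}
```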
To prevent TCP throughput from being bottlenecked by flow control, they control the sizes of the TCP send and receive windows.
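A minimal sketch of a TCP throughput measurement in the same spirit, assuming a sender and a sink (here both on loopback so the example is self-contained; a real measurement would place them on separate instances). The buffer-size call is only a hint to the OS; actual window sizing is platform-dependent, and this is not CloudCmp's actual tooling (the transcript says iperf is used).

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThroughputProbe {
    // Stream a fixed payload over a TCP socket and time it; throughput
    // is payload size over elapsed time, in Mbps. The buffer-size hint
    // aims to keep flow control from becoming the bottleneck.
    static double sendMbps(Socket sock, int totalBytes) throws Exception {
        sock.setSendBufferSize(1 << 20); // hint: 1 MiB send buffer
        byte[] chunk = new byte[64 * 1024];
        OutputStream out = sock.getOutputStream();
        long start = System.nanoTime();
        int sent = 0;
        while (sent < totalBytes) {
            int n = Math.min(chunk.length, totalBytes - sent);
            out.write(chunk, 0, n);
            sent += n;
        }
        out.flush();
        double secs = (System.nanoTime() - start) / 1e9;
        return sent * 8.0 / 1e6 / secs;
    }

    public static void main(String[] args) throws Exception {
        // Loopback demo: a sink thread drains what the sender writes.
        try (ServerSocket srv = new ServerSocket(0)) {
            Thread sink = new Thread(() -> {
                try (Socket s = srv.accept()) {
                    InputStream in = s.getInputStream();
                    byte[] buf = new byte[64 * 1024];
                    while (in.read(buf) != -1) { /* drain */ }
                } catch (Exception ignored) { }
            });
            sink.start();
            try (Socket sock = new Socket("127.0.0.1", srv.getLocalPort())) {
                System.out.printf("loopback throughput: %.0f Mbps%n",
                        sendMbps(sock, 16 * 1024 * 1024));
            }
            sink.join();
        }
    }
}
```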
To measure optimal wide-area network latency, they instantiate an instance in each data center owned by the provider and ping these instances from over 200 vantage points.
RESULTS: Elastic Compute Cluster
C4.1 and C1.1 are under the same pricing model; the former is 30% more expensive but twice as fast as the latter.
For C1, the high-end instances (C1.2 and C1.3) have shorter finishing times in both single- and multi-threaded CPU/memory tests.
C2's instances have the same performance regardless of price.
RESULTS: Performance at cost
The figure shows the monetary cost to run each task.
C1.1 is not as cost-effective as C1.2, because the latter has much higher performance due to a faster CPU or lower contention.
RESULTS: Persistent Storage — Table Storage
The figure shows the distributions of response time for each type of operation on a large table.
C3 is slightly slower than the other two providers.
C1's service has a significantly shorter response time than the other two.
C4's service has a very long response time.
RESULTS: Persistent Storage — Blob Storage
The figure shows the response time distribution for uploading and downloading one blob.
When the blob is small, C4 has the best performance among the three providers.
When the blob is large, C1's average performance is better than the others.
RESULTS: Persistent Storage — Queue Storage
The figure shows the comparison of the queue services of C1 and C4.
C4 is slightly faster at sending messages, while C1 is faster at retrieving messages.
Both services charge similarly (1 cent per 10K operations).
RESULTS:
Intra-cloud Network
The figure shows the TCP throughput between two instances in the same data center.
C1 and C4 provide very high TCP throughput.
C2 has much lower throughput, probably due to throttling or an under-provisioned network.
The next figure shows the TCP throughput across data centers of the same provider.
Both C1 and C4 have median inter-datacenter TCP throughput higher than 200 Mbps, while C2's throughput is much lower.
Why CloudCmp?
Cloud providers differ in pricing models.
Diversity in Cloud Providers leads to confusion.
Need a systematic way to compare cloud providers.
CASE STUDY: E-Commerce Website
A Java implementation of the storage-intensive TPC-W benchmark was used and ported to various cloud providers.
The major performance goal was to minimize page generation time.
From CloudCmp's comparison results for table storage, C1 offers the lowest table service response time among all providers.
The figure shows the page generation time for all 12 pages.