


Load Balancing in Cloud Computing

Major Project S7

Aviral Nigam

on 14 January 2016


Transcript of Load Balancing in Cloud Computing

LOAD BALANCING IN CLOUDS

Load balancing is the process of distributing load among the various nodes of a distributed system (data centers/servers) to improve both resource utilization and job response time, i.e., to maximize the availability of resources and reduce downtime.

Several load-balancing algorithms have been proposed over the years, including honeybee foraging, biased random sampling, ant colony optimization, and active clustering.

INTRODUCTION

Cloud - a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources, based on service-level agreements established through negotiation between the service provider and consumers.

Load balancing - a computer-networking methodology for distributing workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, in order to optimize resource utilization, maximize throughput, minimize response time, and avoid overload.

Project Guide: Vinod Pathari (Asst. Professor)
Group Members: Aviral Nigam, Snehal Chauhan, Varsha Murali

PROBLEM STATEMENT

To study the existing load-balancing algorithms in cloud computing and to implement a new hybrid load-balancing algorithm. This includes comparing the algorithms above and drawing conclusions based on their execution times.

PROJECTED ALGORITHMS

Biased Random Sampling
Honeybee Foraging
Proposed new hybrid algorithm

BIASED RANDOM SAMPLING

Server network - a graph that represents the availability of resources at each server (node).

The free resources (in-degree) of a server are updated as follows:
Allocation of a job - the in-degree is decremented (fewer free resources)
Completion of a job - the in-degree is incremented (the allocated resources are freed)

HONEYBEE FORAGING

Scout - the first job sent into the network, which explores the resource availability at each server.
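The in-degree bookkeeping described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's code; the `Server` class and its method names are assumptions introduced here.

```python
# Minimal sketch of biased-random-sampling bookkeeping: each server node
# tracks its free resources as an "in-degree" in the server-network graph.
# The Server class and method names are illustrative, not the project's code.

class Server:
    def __init__(self, name, capacity):
        self.name = name
        self.indegree = capacity  # free resources currently available here

    def allocate(self, job):
        """Allocation of a job: in-degree is decremented (fewer free resources)."""
        if self.indegree == 0:
            raise RuntimeError(f"{self.name} has no free resources")
        self.indegree -= 1

    def complete(self, job):
        """Completion of a job: in-degree is incremented (resources freed)."""
        self.indegree += 1


s = Server("node-0", capacity=3)
s.allocate("job-1")
s.allocate("job-2")
print(s.indegree)  # 1 free resource left
s.complete("job-1")
print(s.indegree)  # back to 2
```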

Fitness function - the function that allocates a job to its corresponding best-fit server.

Advert board - holds information on each server and the current status of its resources; it is updated whenever a job is attached to a server.

/* Scout on entering network: explores and populates the advert board */
/* For all other incoming jobs: */
node = fitness(job)
UpdateAdvertBoard(node, job)

PROPOSED HYBRID ALGORITHM

A combination of the two algorithms discussed above: if the job size equals 1, Biased Random Sampling is selected; otherwise, Honeybee Foraging is selected.

/* Jobs on entering network */
if jobsize == 1 then
    BiasedRandomSampling(job)
else
    HoneybeeForaging(job)
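The hybrid dispatch rule, together with a best-fit `fitness` function for the honeybee path, might look like the following Python sketch. The advert board is modeled here as a plain dict of free-resource counts, and the biased-random-sampling stand-in simply weights servers by free resources; all names and data structures are assumptions for illustration.

```python
# Sketch of the hybrid dispatch: single-unit jobs go to biased random
# sampling, larger jobs to honeybee-style best-fit. The advert board is
# modeled as a dict {server: free_resources}; all names are illustrative.
import random

def fitness(job_size, advert_board):
    """Best-fit server: the smallest free capacity that still fits the job."""
    candidates = [s for s, free in advert_board.items() if free >= job_size]
    return min(candidates, key=lambda s: advert_board[s]) if candidates else None

def biased_random_sampling(advert_board):
    """Stand-in for the random-walk variant: pick a server weighted by free resources."""
    servers = list(advert_board)
    return random.choices(servers, weights=[advert_board[s] for s in servers])[0]

def dispatch(job_size, advert_board):
    # Jobs on entering the network:
    if job_size == 1:
        node = biased_random_sampling(advert_board)
    else:
        node = fitness(job_size, advert_board)
    if node is not None:
        advert_board[node] -= job_size  # update the advert board
    return node

board = {"s1": 4, "s2": 2, "s3": 6}
print(dispatch(2, board))  # best fit: s2 (smallest server that fits the job)
```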
ANALYSIS (BIASED RANDOM SAMPLING)

Average running time = 10870 ms (for a batch of 100 jobs).
Inconsistency in the graph occurs because the resources are checked only at the last step, after the random walk has been taken.

ANALYSIS (HONEYBEE FORAGING)

Average running time = 10140 ms (for a batch of 100 jobs).
Inconsistency occurs due to deadlock. A graph traversal plus an array traversal (constant time) yields the more consistent regions.

ANALYSIS (PROPOSED HYBRID ALGORITHM)

Average running time = 17617 ms (for a batch of 100 jobs).
Biased Random Sampling allocates to the server with the maximum resources, while Honeybee Foraging allocates to the best-fit server, i.e., in the best case a job uses all of its server's resources exactly. This contradiction between the two allocation strategies creates the inconsistency.

FUTURE WORK

Currently the algorithms produce the best allocation for individual jobs, but in clouds where a batch of jobs is taken collectively, the batch as a whole may not get the best result. The objective is to tackle this issue using game theory.

SERVER NETWORK

A graph acts as the server network for the analysis, with the random-walk length bound set to b = log n.

/* For any new node that receives a process/token */
if job.walklength < b then
    neighbour = RandomSelect(node)   /* forward the job to the neighbour */
else
    /* end of walk: check the node's resources and allocate the job here */
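The bounded random walk above can be fleshed out as a small Python sketch. The graph is an adjacency dict, the in-degree map tracks free resources, and resources are checked only at the final node of the walk, matching the behaviour noted in the analysis; the function and parameter names are assumptions introduced here.

```python
# Sketch of the bounded random walk used by biased random sampling:
# a job walks b = log(n) hops through the server network, then is
# allocated at the final node if resources remain. Names are illustrative.
import math
import random

def random_walk_allocate(graph, indegree, start, rng=random):
    """graph: {node: [neighbours]}, indegree: {node: free resources}."""
    b = max(1, int(math.log(len(graph))))  # walk-length threshold b = log n
    node = start
    for _ in range(b):                     # while job.walklength < b
        node = rng.choice(graph[node])     # neighbour = RandomSelect(node)
    # resources are checked only at the last step of the walk
    if indegree[node] > 0:
        indegree[node] -= 1
        return node
    return None  # allocation failed; the caller may retry with a new walk

graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
indegree_map = {"a": 1, "b": 1, "c": 1}
print(random_walk_allocate(graph, indegree_map, "a"))  # allocates at "b"
```

Checking resources only at the walk's end is what produces the inconsistency reported for this algorithm: the walk can terminate at a node that turns out to have no free resources.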