13 PCA Queuing Theory-Part 1

by Keith Smith on 15 May 2011

Transcript of 13 PCA Queuing Theory-Part 1

Performance Engineering Fundamentals: Queuing Theory, Part 1

Remember this diagram? What is "queuing theory"? It's the mathematical study of waiting lines, or queues. By trying to understand queues better, it also explores the behavior of systems that have queues.

So, when do queues form? They form when arrivals occur faster than servicing ... or when you're in a hurry. Getting sick of it yet?

We've added service rate to the diagram; queuing theory relies more on service rate than on service time. In fact, queuing theory has specific notation for all this. The queuing behavior of a system can be described by six main parameters, A/S/m/K/N/D, where ...

  • A is the interarrival time distribution
  • S is the service time distribution
  • m is the number of servers
  • K is the maximum number in the system
  • N is the total pool of requests
  • D is the queuing discipline

Wait, what's "interarrival time distribution"? We'll get to that in a moment; the service time distribution, S, is kinda the same thing as the interarrival time distribution, just for service times. K, the maximum number in the system, includes both the ones being serviced as well as the ones in the queue, similar to the "number in system" value from Little's Law; again, we usually assume this is infinity. N, the total pool of requests, is also usually just assumed to be infinity. It's not exactly realistic, but it makes the math easier. D, the queuing discipline, means what order we process the items in the queue. FIFO, LIFO, SIRO, priority queuing, ... are all possibilities. FIFO is [unrealistically] assumed.

Interarrival time (and service time) distributions

Going back to our original diagram, we can look at this as if there are two competing processes at work here. The first process is shooting service requests at the system that it wants the server to process. The second process is trying to clear out all the tasks that are so thoughtlessly being sent its direction. Queuing theory is really all about studying how these two processes balance out under a variety of different circumstances and assumptions. Make sense?

Given that, the first process we come across is the arrival process, that is, how things arrive to be serviced. And when we say that, we're really concerned about two characteristics:
  • What's the rate at which they arrive?
  • How "randomly" are they arriving?

Sounds like a probability's involved here, doesn't it? Probability is involved. This is an example of an arrival process: the time between each arrival is a random variable, and to keep things simple, we [nearly] always assume that each random variable comes from the same probability distribution. Similarly, each service time is a random variable. The collection of interarrival times is an arrival process.

Given that we typically assume specific values or conditions for K, N, and D, we'll often drop them from the notation, giving us A/S/m. This notation is called Kendall notation (from David G. Kendall).
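As a quick worked reading of the notation (a standard convention, not spelled out in this transcript): the M/M/1 model we'll arrive at below expands to M/M/1/∞/∞/FIFO, i.e., exponentially distributed (memoryless) interarrival times, exponentially distributed service times, one server, no limit on the number in the system, an unbounded pool of requests, and first-in-first-out service.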
So, now that we have our notation, what values do we plug in for A, S, and m? It turns out that there are a ton of options for these parameters. Fortunately, only three are very common:

  • M = memoryless. "Memoryless" has a fairly complicated definition in probability theory involving characteristics of the conditional distribution; it turns out that the only continuous distribution with this property is the exponential. "M" can also stand for "Markovian".
  • D = deterministic. This is the same as "constant"; that is, the interval between arrivals (or the service time) is a constant.
  • G = general. This means that any distribution will work, random or constant.

Many systems have behavior that, if not memoryless, is pretty close, so the exponential distribution is used most often for both A and S. And we'll start with a single server, which gives us our basic queuing model: M/M/1.
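Here's a minimal Python sketch of what that "M" assumption looks like in practice, using made-up arrival and service rates: interarrival and service times are drawn from exponential distributions, and a quick empirical check shows the memoryless property (the time still to wait, given you've already waited s, looks like a fresh draw).

    import random

    random.seed(42)

    # Illustrative (made-up) rates: lambda = arrivals/sec, mu = completions/sec.
    arrival_rate = 5.0
    service_rate = 8.0

    # "M" (memoryless / Markovian): interarrival and service times are drawn
    # from exponential distributions with means 1/lambda and 1/mu.
    interarrivals = [random.expovariate(arrival_rate) for _ in range(100_000)]
    services = [random.expovariate(service_rate) for _ in range(100_000)]

    print("mean interarrival time:", sum(interarrivals) / len(interarrivals))  # ~ 1/5
    print("mean service time:     ", sum(services) / len(services))            # ~ 1/8

    # Memoryless check: for samples that already exceed s, the *remaining* time
    # beyond s has the same mean as a fresh sample (~ 1/lambda again).
    s = 0.1
    remaining = [x - s for x in interarrivals if x > s]
    print("mean remaining beyond s:", sum(remaining) / len(remaining))         # ~ 1/5

Among continuous distributions only the exponential has this "no memory of how long you've already waited" property, which is why it pairs so naturally with the M's in M/M/1.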
So, how do we use all this? Well, given:
  • an arrival rate, and
  • a service rate,

we can calculate the utilization. And given the utilization, we can calculate the rest of the queue's behavior.
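For a standard M/M/1 queue, the textbook results are as follows; here's a minimal Python sketch (with the same illustrative, made-up rates as above), using Little's Law, L = λW, to turn counts into times.

    # Standard M/M/1 formulas; stable only when arrival_rate < service_rate.
    arrival_rate = 5.0   # lambda: requests per second (illustrative)
    service_rate = 8.0   # mu: completions per second (illustrative)

    utilization = arrival_rate / service_rate              # rho = lambda / mu
    num_in_system = utilization / (1 - utilization)        # L  = rho / (1 - rho)
    num_in_queue = utilization ** 2 / (1 - utilization)    # Lq = rho^2 / (1 - rho)

    # Little's Law: number in system = arrival rate * time in system (L = lambda * W).
    time_in_system = num_in_system / arrival_rate          # W  = 1 / (mu - lambda)
    time_in_queue = num_in_queue / arrival_rate            # Wq = rho / (mu - lambda)

    print(f"utilization:      {utilization:.3f}")
    print(f"number in system: {num_in_system:.3f}")
    print(f"number in queue:  {num_in_queue:.3f}")
    print(f"time in system:   {time_in_system:.3f} s")
    print(f"time in queue:    {time_in_queue:.3f} s")

Notice from L = rho / (1 - rho) that the number in the system grows without bound as the utilization approaches 1.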