Queueing theory

From Wikipedia, the free encyclopedia

Queueing theory (also commonly spelled queuing theory) is the mathematical study of waiting lines (or queues).

The theory enables mathematical analysis of several related processes, including arriving at the (back of the) queue, waiting in the queue (essentially a storage process), and being served by the server(s) at the front of the queue. The theory permits the derivation and calculation of several performance measures including the average waiting time in the queue or the system, the expected number waiting or receiving service and the probability of encountering the system in certain states, such as empty, full, having an available server or having to wait a certain time to be served.

Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide service. It is applicable in a wide variety of situations that may be encountered in business, commerce, industry, public service and engineering. Applications are frequently encountered in customer service situations as well as in transport and telecommunications. Queueing theory is directly applicable to intelligent transportation systems, call centers, PABXs, telecommunication networks and servers, the queueing of telecommunications terminals at mainframe computers, and traffic flow.

Spelling

The word queue comes, via French, from the Latin cauda, meaning tail. Most researchers in the field prefer the spelling 'queueing' over 'queuing',[1] although the latter is somewhat more common in other contexts.

History

Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on queueing theory in 1909.

David G. Kendall introduced an A/B/C queueing notation in 1953.

Notation

Notation for describing the characteristics of a queueing model was first suggested by David G. Kendall in 1953. Kendall's A/B/C notation can be found in all standard modern works on queueing theory, for example, Tijms.[2] The A/B/C notation designates a queueing system having A as interarrival time distribution, B as service time distribution, and C as number of servers. So, for instance, G/D/1 would indicate a general (arbitrary) arrival process, a deterministic (constant time) service process and a single server. More details on this notation are given in the article about queueing models.
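
For illustration, the following minimal Python sketch (the parse_kendall helper and its small symbol table are purely hypothetical, written for this example and not part of any standard library) splits an A/B/C descriptor into its three fields:

    # Interpreting an A/B/C (Kendall) descriptor; the symbol table is illustrative, not exhaustive.
    CODES = {
        "M": "Markovian (Poisson arrivals / exponential service)",
        "D": "Deterministic (constant)",
        "G": "General (arbitrary distribution)",
    }

    def parse_kendall(descriptor):
        """Split a descriptor such as 'G/D/1' into its three fields."""
        a, b, c = descriptor.split("/")
        return {
            "interarrival time distribution": CODES.get(a, a),
            "service time distribution": CODES.get(b, b),
            "number of servers": int(c),
        }

    print(parse_kendall("G/D/1"))
    # {'interarrival time distribution': 'General (arbitrary distribution)',
    #  'service time distribution': 'Deterministic (constant)',
    #  'number of servers': 1}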

Application to telephony

Public switched telephone networks (PSTNs) are designed to accommodate the offered traffic intensity with only a small loss. The performance of loss systems is quantified by their grade of service (GoS), which rests on the assumption that if insufficient capacity is available, the call is refused and lost.[3] Alternatively, overflow systems make use of alternative routes to divert calls via different paths; even these systems have a finite maximum traffic-carrying capacity.[3]
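
For a pure loss system of this kind, the GoS is commonly estimated with the Erlang B formula, which gives the blocking probability when a traffic of E erlangs is offered to m circuits. A minimal Python sketch of the usual recursive form (the traffic figures in the example are arbitrary):

    def erlang_b(offered_erlangs, circuits):
        """Blocking probability of a loss system (Erlang B), computed with the standard recursion."""
        b = 1.0  # B(E, 0) = 1
        for m in range(1, circuits + 1):
            b = (offered_erlangs * b) / (m + offered_erlangs * b)
        return b

    # Example: 10 erlangs offered to 15 circuits.
    print(erlang_b(10.0, 15))  # about 0.036, i.e. roughly 3.6% of call attempts are blocked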

However, the use of queueing in PSTNs allows the systems to queue their customers' requests until free resources become available. This means that if traffic intensity levels exceed available capacity, customers' calls are no longer lost; they instead wait until they can be served.[4] This method is used in queueing customers for the next available operator.

A queueing discipline determines the manner in which the exchange handles calls from customers.[4] It defines the way they will be served, the order in which they are served, and the way in which resources are divided among the customers.[4][5] Three queueing disciplines are described below, followed by a small simulation sketch:

  • First In First Out – This principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.[5]
  • Last In First Out – This principle also serves customers one at a time; however, the customer with the shortest waiting time (the most recent arrival) is served first.[5]
  • Processor Sharing – Customers are served equally. Network capacity is shared between customers and they all effectively experience the same delay.[5]
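
As an illustration of how the first two disciplines differ, the following minimal single-server simulation (a sketch written for this example, with made-up arrival and service times, not drawn from the cited sources) computes each customer's waiting time under FIFO and LIFO service:

    def waiting_times(arrivals, services, discipline="FIFO"):
        """Single-server, non-preemptive simulation; returns each customer's time spent waiting."""
        remaining = list(range(len(arrivals)))
        waits = [0.0] * len(arrivals)
        clock = 0.0
        while remaining:
            present = [i for i in remaining if arrivals[i] <= clock]
            if not present:                      # server idles until the next arrival
                clock = min(arrivals[i] for i in remaining)
                present = [i for i in remaining if arrivals[i] <= clock]
            pick = min if discipline == "FIFO" else max   # earliest vs latest arrival
            i = pick(present, key=lambda j: arrivals[j])
            waits[i] = clock - arrivals[i]
            clock += services[i]
            remaining.remove(i)
        return waits

    arrivals = [0.0, 1.0, 1.5, 2.0]   # made-up arrival times
    services = [3.0, 2.0, 1.0, 1.0]   # made-up service times
    print(waiting_times(arrivals, services, "FIFO"))  # [0.0, 2.0, 3.5, 4.0]
    print(waiting_times(arrivals, services, "LIFO"))  # [0.0, 4.0, 2.5, 1.0]

In this example the order of service changes while the total work done by the server does not; processor sharing would instead serve all waiting customers simultaneously at a reduced rate.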

Queueing is handled by control processes within exchanges, which can be modelled using state equations.[4][5] Queueing systems use a particular form of state equation known as a Markov chain, which models the system in each of its states; a minimal single-server example is worked through after the list below.[4] Incoming traffic to these systems is modelled via a Poisson distribution and is subject to Erlang's queueing theory assumptions, namely:[3]

  • Pure-Chance Traffic – Call arrivals and departures are random and independent events.[3]
  • Statistical Equilibrium – Probabilities within the system do not change.[3]
  • Full Availability – All incoming traffic can be routed to any other customer within the network.[3]
  • Congestion is cleared as soon as servers are free.[3]
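
As a minimal worked example (assuming the simplest single-server case, the M/M/1 queue, with Poisson arrivals at rate \lambda and exponential service at rate \mu; an illustration rather than material from the cited sources), the cut equations of the underlying Markov chain and their solution are

    \lambda\, p_{n-1} = \mu\, p_n \quad (n \ge 1),
    \qquad
    p_n = (1 - \rho)\,\rho^{\,n},
    \qquad
    \rho = \frac{\lambda}{\mu} < 1,

where p_n is the steady-state probability of finding n calls in the system; the statistical-equilibrium assumption is what makes these probabilities time-independent.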

Classic queueing theory involves complex calculations to determine call waiting time, service time, server utilisation and many other metrics which are used to measure queueing performance.[4][5]
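
As one concrete example of such a calculation, the Erlang C formula gives the probability that an arriving call must wait when a erlangs of traffic are offered to c servers, and from it the mean waiting time follows. The sketch below (the numbers in the example are arbitrary) assumes an M/M/c delay system whose offered load is less than the number of servers:

    from math import factorial

    def erlang_c(offered_erlangs, servers):
        """Probability that an arriving call must wait (Erlang C), for offered load < servers."""
        a, c = offered_erlangs, servers
        top = (a ** c / factorial(c)) * (c / (c - a))
        bottom = sum(a ** k / factorial(k) for k in range(c)) + top
        return top / bottom

    def mean_wait(offered_erlangs, servers, mean_service_time):
        """Average time spent waiting in the queue for an M/M/c delay system."""
        a, c = offered_erlangs, servers
        return erlang_c(a, c) * mean_service_time / (c - a)

    # Example: 4 erlangs offered to 5 servers, with a 3-minute average call duration.
    print(round(erlang_c(4.0, 5), 3))        # about 0.55: more than half of the calls must wait
    print(round(mean_wait(4.0, 5, 3.0), 2))  # about 1.7 minutes of average waiting in the queue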

Queueing networks

Queues can be chained to form queueing networks where the departures from one queue enter the next queue. Queueing networks can be classified into two categories: open queueing networks and closed queueing networks. Open queueing networks have an external input and an external final destination. Closed queueing networks are completely contained: the customers circulate continually and never leave the network.
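
As an illustrative sketch of an open network (assuming two single-server FIFO stations in tandem with fixed service times; the figures are made up for the example), the departure times of the first queue become the arrival times of the second:

    def fifo_departures(arrivals, service_time):
        """Departure times from a single FIFO server with a fixed service time per customer."""
        departures, free_at = [], 0.0
        for t in arrivals:
            start = max(t, free_at)          # wait if the server is still busy
            free_at = start + service_time
            departures.append(free_at)
        return departures

    arrivals = [0.0, 1.0, 2.5, 3.0]            # external arrivals to station 1
    station1 = fifo_departures(arrivals, 1.5)  # departures from station 1 ...
    station2 = fifo_departures(station1, 1.0)  # ... become the arrivals at station 2
    print(station1)  # [1.5, 3.0, 4.5, 6.0]
    print(station2)  # [2.5, 4.0, 5.5, 7.0]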

Role of Poisson process, exponential distributions

A useful queueing model both (a) represents a real-life system with sufficient accuracy and (b) is analytically tractable. A queuing model based on the Poisson process and its companion exponential probability distribution often meets these two requirements. A Poisson process models random events (such as a customer arrival, a request for action from a web server, or the completion of the actions requested of a web server) as emanating from a memoryless process. That is, the length of the time interval from the current time to the occurrence of the next event does not depend upon the time of occurrence of the last event. In the Poisson probability distribution, the observer records the number of events that occur in a time interval of fixed length. In the (negative) exponential probability distribution, the observer records the length of the time interval between consecutive events. In both, the underlying physical process is memoryless.
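
A short simulation sketch (using Python's standard random module; the rate and time horizon are arbitrary choices) shows the two views of the same memoryless process: exponentially distributed gaps between consecutive events, and Poisson-distributed counts of events per fixed-length interval:

    import random

    rate = 2.0          # average number of events per unit time
    horizon = 10_000.0  # length of the simulated period

    # Build event times by summing exponential inter-arrival gaps (mean gap = 1/rate).
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            break
        times.append(t)

    gaps = [b - a for a, b in zip(times, times[1:])]
    counts = [0] * int(horizon)
    for t in times:
        counts[int(t)] += 1   # events falling in each unit-length interval

    print(sum(gaps) / len(gaps))       # mean gap, close to 1/rate = 0.5
    print(sum(counts) / len(counts))   # mean count per interval, close to rate = 2.0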

Models based on the Poisson process often respond to inputs from the environment in a manner that mimics the response of the system being modeled to those same inputs. The analytically tractable models that result yield both information about the system being modeled and the form of their solution. Even a queueing model based on the Poisson process that does a relatively poor job of mimicking detailed system performance can be useful. The fact that such models often give "worst-case" scenario evaluations appeals to system designers who prefer to include a safety factor in their designs. Also, the form of the solution of models based on the Poisson process often provides insight into the form of the solution to a queueing problem whose detailed behavior is poorly mimicked. As a result, queueing systems are frequently modeled as Poisson processes through the use of the exponential distribution.

Limitations of mathematical approach

Classic queueing theory is often too mathematically restrictive to be able to model all real-world situations exactly. This restriction arises because the underlying assumptions of the theory do not always hold in the real world.

For example, the mathematical models often assume infinite numbers of customers, infinite queue capacity, or no bounds on inter-arrival or service times, when it is quite apparent that these bounds must exist in reality. Often, although the bounds do exist, they can be safely ignored because the differences between the real-world situation and the theory are not statistically significant, as the probability that such boundary situations might occur is remote compared to the expected normal situation. In other cases the theoretical solution may either prove intractable or be insufficiently informative to be useful.

Alternative means of analysis have thus been devised in order to provide some insight into problems that do not fall under the mathematical scope of queueing theory, though they are often scenario-specific because they generally consist of computer simulations or analysis of experimental data. See network traffic simulation.

References

  1. ^ Spelling of queueing/queuing
  2. ^ Tijms, H.C., "Algorithmic Analysis of Queues", Chapter 9 in A First Course in Stochastic Models, Wiley, Chichester, 2003.
  3. ^ a b c d e f g Flood, J.E. Telecommunications Switching, Traffic and Networks, Chapter 4: Telecommunications Traffic, New York: Prentice-Hall, 1998.
  4. ^ a b c d e f Bose S.J., Chapter 1 - An Introduction to Queueing Systems, Kluwer/Plenum Publishers, 2002.
  5. ^ a b c d e f Penttinen A., Chapter 8 – Queueing Systems, Lecture Notes: S-38.145 - Introduction to Teletraffic Theory.

Further reading

  • Gross, Donald (1998). Fundamentals of Queueing Theory. Wiley.
  • Deitel, Harvey M. (1984) [1982]. An Introduction to Operating Systems (revised first ed.). Addison-Wesley. p. 673. ISBN 0-201-14502-2. Chap. 15, pp. 380–412.