What is queueing theory? Queueing theory is the mathematical study of waiting lines, or queues. It is considered a branch of operations research because its results are often used to make business decisions about the resources needed to provide service. The theory enables mathematical analysis of several related processes: arriving at the end of the queue, waiting in the queue (essentially a storage process), and being served at the front of the queue. It permits the derivation and calculation of several performance measures, including the average waiting time in the queue or in the system, the expected number of customers waiting or receiving service, and the probability of finding the system in certain states, such as empty, full, having an available server, or requiring a wait of a certain length before service begins.
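For the simplest queueing model, the single-server M/M/1 queue (Poisson arrivals, exponential service), the performance measures named above have well-known closed-form expressions. The following Python sketch is illustrative only; the function name and the example rates are assumptions, not taken from this article:

```python
def mm1_metrics(lam, mu):
    """Steady-state measures for an M/M/1 queue with arrival rate `lam`
    and service rate `mu`. Requires lam < mu for the queue to be stable."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                  # server utilization
    return {
        "rho": rho,                 # fraction of time the server is busy
        "L":   rho / (1 - rho),     # expected number in the system
        "Lq":  rho**2 / (1 - rho),  # expected number waiting in the queue
        "W":   1 / (mu - lam),      # expected time in the system
        "Wq":  rho / (mu - lam),    # expected waiting time in the queue
        "p0":  1 - rho,             # probability the system is empty
    }

# Hypothetical example: 4 arrivals per hour, 5 services per hour.
m = mm1_metrics(4, 5)   # rho = 0.8, L = 4.0, W = 1.0 hour
```

Note how quickly congestion grows as utilization approaches 1: at rho = 0.8 an average of four customers are already in the system.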
Queues form when customers or products arrive faster than they can be individually served. The picture below illustrates a simple example with a single server. Anyone who has been to a bank, a grocery store, a tollbooth, or a fast-food restaurant has experienced a queue.
Queueing theory is particularly important to companies that manufacture goods and/or provide services. Throughout a production system, machines and laborers have varying service times per component. When entities arrive for machining or processing faster than they can be served, a queue forms; where multiple machines exist, the production system can experience multiple queues. Companies that understand queueing theory and apply the corresponding analytical methods can therefore establish clear measures of performance and, subsequently, optimize their machine or operator processing orders, which ultimately leads to greater efficiency and overall effectiveness. Below are just a few of the terms and characteristics that define a variety of queueing models:
This queueing model consists of a single server and an infinite queue. The graph below depicts the arrival and servicing of entities over time, illustrating stochastic arrival times, stochastic service times, and periods when the server is idle. At “t1” the first entity arrives; since the server is idle, the number in the queue is zero. At “t2” and “t3” additional entities arrive. Because the server is still processing the first entity, a queue forms and grows with each arrival. At “t4” service is completed on the first entity and processing begins on the next, reducing the size of the queue. The process continues with subsequent arrivals and service completions. Charting queue size over time allows companies to accurately determine one key measure of performance, server utilization, calculated as the server's non-idle time divided by total time, expressed as a percentage. Other key measures of performance are listed below.
In situations where arrival and service times are stochastic, probability theory is used to determine “best fit” distributions, which can then be used in modeling scenarios. As for software, a number of programs allow companies to simulate various queueing models, which assists in steady-state analysis.
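As a minimal sketch of the “best fit” idea, assuming the observed interarrival times follow an exponential distribution (a common starting assumption, not a claim from this article), the maximum-likelihood estimate of the arrival rate is simply the reciprocal of the sample mean:

```python
import random

def fit_exponential_rate(samples):
    """Maximum-likelihood estimate of the rate of an exponential
    distribution: the number of observations divided by their sum
    (i.e., the reciprocal of the sample mean)."""
    return len(samples) / sum(samples)

# Hypothetical data: interarrival times drawn from a known distribution,
# then "fitted" to recover the rate.
rng = random.Random(42)
true_rate = 2.0
interarrivals = [rng.expovariate(true_rate) for _ in range(50_000)]
estimated_rate = fit_exponential_rate(interarrivals)
```

In practice one would also test the goodness of fit (e.g., with a chi-square or Kolmogorov-Smirnov test) before committing to a distribution in a simulation model.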
Queueing theory can provide critical insights into areas such as:
Remember, operations research techniques are designed to provide scientific solutions to company problems. However, it is through great leadership and management that companies obtain a competitive advantage. OSI can assist you in optimizing both.