Simple Strategies to Eliminate Network Congestion Headaches!

Congestion in networks that serve time-sensitive applications is an undesirable phenomenon. IP networks nowadays carry broadband voice and video. These applications have strict requirements regarding delay, delay variation and packet loss. Special, tactical measures need to be taken to fulfill those requirements. The only way to accomplish this is to avoid congestion in the network.

The difficult part is that congestion can happen anywhere in the network. Speed mismatches between two links and traffic aggregation towards a single link are the main causes of congestion. Special queuing disciplines are needed to classify traffic, and a dropping mechanism must take the necessary actions during congestion so that high-priority traffic is served first.

So today, I’ll give you a brief description of the various congestion avoidance techniques and show you how to choose the right one to avert your congestion headaches!

Can You Guess What Causes TCP Slow Start?

The answer is pretty obvious: congestion! The TCP protocol can adjust its parameters according to the state of the network: if the network is congested, TCP reduces its transmission rate. It can do this because the sending TCP entity maintains a so-called congestion window and monitors timeouts for signs of congestion between sender and receiver. A timeout means that the sender has not received an acknowledgment, within a specified time interval, for a segment that has already been sent. If a timeout is observed, TCP goes into slow start as a congestion-recovery measure, shrinking the congestion window and growing it again from a small initial value. This process is repeated until timeouts stop. This is how TCP manages congestion.
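The behavior above can be sketched as a toy model. This is not a faithful implementation of TCP congestion control (see RFC 5681 for the real rules); the function names and the simple doubling/halving logic are illustrative assumptions only.

```python
def simulate_slow_start(threshold, max_rounds):
    """Toy model of TCP slow start: the congestion window (cwnd, in
    segments) doubles every round-trip until it reaches the slow-start
    threshold, then grows linearly (congestion avoidance)."""
    cwnd = 1
    history = [cwnd]
    for _ in range(max_rounds):
        if cwnd < threshold:
            cwnd *= 2   # exponential growth during slow start
        else:
            cwnd += 1   # linear growth in congestion avoidance
        history.append(cwnd)
    return history

def on_timeout(cwnd):
    """A timeout signals congestion: halve the threshold and restart
    slow start from a window of one segment."""
    return 1, max(cwnd // 2, 2)
```

For example, with a threshold of 16 segments the window grows 1, 2, 4, 8, 16 and then creeps up linearly; a timeout at that point would reset the window to 1 and halve the threshold to 8.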

What is Queuing?

The diagram below shows a generic example of queuing. A router or a network device in general contains a set of queues and a forwarder or scheduler. The queues represent device memory that is used to hold arriving traffic until it is forwarded. The forwarder removes traffic from the queues in the form of packets and sends these packets to the next node.

Network Queuing

Whenever packets enter a device faster than they can exit it, congestion occurs. Queue scheduling algorithms are then used to pace the rate at which packets on different queues are forwarded out of the device. In the simplest case, all packets on a congested egress device interface form a single queue and are forwarded out of the device in a first in, first out (FIFO) manner. In other cases, traffic is classified into multiple queues. A queue-servicing algorithm then determines the order and rate at which packets are forwarded from each of the queues.

Step One: Choosing a Queuing Mechanism

FIFO Queuing

  • FIFO (first in, first out) is the simplest queuing scheme. All incoming traffic at the network device is placed on a single queue, and the scheduler forwards packets in the order in which they arrived. Of course, QoS (Quality of Service) cannot be supported this way, because there is no way to treat certain traffic preferentially over other traffic. Furthermore, more aggressive flows will experience better service than less aggressive flows.
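A FIFO queue is a few lines of code; the sketch below, with illustrative class and method names, shows the arrival-order property directly.

```python
from collections import deque

class FifoQueue:
    """Single FIFO queue: packets leave in exactly the order they arrived,
    with no differentiation between flows."""
    def __init__(self):
        self.q = deque()

    def enqueue(self, packet):
        self.q.append(packet)

    def dequeue(self):
        # Return the oldest packet, or None if the queue is empty.
        return self.q.popleft() if self.q else None
```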

Priority Queuing (PQ)

  • PQ, also known as strict priority queuing, is another queuing scheme, where queues are ordered according to priority levels. The forwarder first forwards packets from the highest-priority queue. When no packets are left in the highest-priority queue, the forwarder removes a packet from the next highest-priority queue. Although this scheme provides better service to the highest-priority flow, it also has the effect of starving all lower-priority flows.
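The strict-priority rule can be sketched as follows (names are illustrative; lower index means higher priority in this sketch). Note how a busy high-priority queue completely starves the lower ones, exactly as described above.

```python
from collections import deque

class PriorityScheduler:
    """Strict priority queuing: always serve the highest-priority
    non-empty queue. Queue index 0 is the highest priority."""
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Scan from highest to lowest priority; serve the first
        # non-empty queue found.
        for q in self.queues:
            if q:
                return q.popleft()
        return None
```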

Fair Queuing (FQ)

  • The FQ scheme provides fairness to best-effort traffic by serving the output queue of each flow in a round-robin fashion. This way, greedy flows are implicitly penalized with increasing delays. A disadvantage of this queuing scheme is that flows with large packets are favored in terms of throughput in comparison to flows with small packets.
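A minimal per-flow round-robin sketch is shown below (class and flow names are illustrative). A flow with many queued packets gets no more than one packet per pass, so its extra packets simply wait longer.

```python
from collections import deque

class FairQueueScheduler:
    """Fair queuing sketch: one queue per flow, served round-robin,
    one packet per flow per visit."""
    def __init__(self):
        self.flows = {}   # flow id -> deque of packets
        self.order = []   # round-robin visit order
        self.index = 0

    def enqueue(self, flow, packet):
        if flow not in self.flows:
            self.flows[flow] = deque()
            self.order.append(flow)
        self.flows[flow].append(packet)

    def dequeue(self):
        # Visit each flow at most once per call, starting where the
        # previous call left off.
        for _ in range(len(self.order)):
            flow = self.order[self.index]
            self.index = (self.index + 1) % len(self.order)
            if self.flows[flow]:
                return self.flows[flow].popleft()
        return None
```

With three packets queued for flow A and one for flow B, the service order interleaves them (A, B, A, A) instead of draining A first as FIFO would.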

Deficit Round Robin (DRR)

  • The DRR queuing scheme is similar to FQ, but it also takes packet sizes into account. Queues are still served in a round-robin manner, but each queue receives a fixed byte quantum per round, so large packets cannot deplete the service observed by the rest of the packets.
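The quantum-and-deficit mechanism can be sketched like this (quantum value and flow names are illustrative). Each round, a queue's deficit counter grows by the quantum; a packet is sent only when enough credit has accumulated to cover its size, so a 1000-byte packet simply waits an extra round rather than crowding out smaller packets.

```python
from collections import deque

class DeficitRoundRobin:
    """DRR sketch: per-flow queues served round-robin, with a per-flow
    deficit counter that accumulates a byte quantum each round."""
    def __init__(self, quantum):
        self.quantum = quantum
        self.queues = {}    # flow -> deque of (packet, size_bytes)
        self.deficit = {}
        self.order = []

    def enqueue(self, flow, packet, size):
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.deficit[flow] = 0
            self.order.append(flow)
        self.queues[flow].append((packet, size))

    def serve_round(self):
        """One round-robin pass: each backlogged flow sends packets
        as long as its accumulated deficit covers the packet size."""
        sent = []
        for flow in self.order:
            q = self.queues[flow]
            if not q:
                continue
            self.deficit[flow] += self.quantum
            while q and q[0][1] <= self.deficit[flow]:
                packet, size = q.popleft()
                self.deficit[flow] -= size
                sent.append(packet)
            if not q:
                self.deficit[flow] = 0   # idle flows don't hoard credit
        return sent
```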

Weighted Fair Queuing (WFQ)

  • WFQ queuing scheme can be used to provide unequal capacity share among flows. Unequal sharing is not unfair sharing. For example, audio flows generally need less forwarding capacity than data flows, but are also less delay-tolerant. With WFQ, weights can be assigned to different queues according to their requested resources.
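True WFQ schedules packets by computed virtual finish times; the sketch below is a deliberately simpler weighted round robin that only captures the unequal-share idea from the paragraph above. The weights and flow names are illustrative assumptions.

```python
from collections import deque

class WeightedRoundRobin:
    """Simplified stand-in for WFQ: each flow may send up to `weight`
    packets per round, giving unequal but controlled capacity shares.
    (Real WFQ uses virtual finish times rather than packet counts.)"""
    def __init__(self):
        self.queues = {}
        self.weights = {}
        self.order = []

    def add_flow(self, flow, weight):
        self.queues[flow] = deque()
        self.weights[flow] = weight
        self.order.append(flow)

    def enqueue(self, flow, packet):
        self.queues[flow].append(packet)

    def serve_round(self):
        # Each flow sends up to its weight in packets per round.
        sent = []
        for flow in self.order:
            for _ in range(self.weights[flow]):
                if self.queues[flow]:
                    sent.append(self.queues[flow].popleft())
        return sent
```

Giving a data flow weight 3 and a voice flow weight 1, one round serves three data packets and one voice packet: unequal sharing, but each flow still gets its configured share.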

Step Two: Choosing a Dropping Scheme

Queues form in network devices as a result of congestion. As queues grow deeper, more memory is required to hold the queued packets, and those packets experience excess delays. At some point there is no more device memory to hold the packets in the queues, and packets must be dropped.

Dropping schemes are designed to limit queue depths by selecting packets to be dropped as queue depths surpass certain preconfigured thresholds before device memory is exhausted and before congestion becomes uncontrollable. For this reason, dropping schemes can be considered congestion-avoidance schemes. A few examples of dropping schemes are presented below.

Tail Dropping

  • Tail dropping is the simplest dropping scheme, where packets are dropped from the tails of the various packet queues. This type of dropping scheme offers no traffic differentiation treatment.
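A tail-drop queue is a FIFO with a capacity limit; the sketch below (names illustrative) drops every arrival once the queue is full, regardless of which flow it belongs to.

```python
from collections import deque

class TailDropQueue:
    """FIFO queue with a fixed capacity: once full, every new arrival
    is dropped from the tail, with no flow differentiation."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.q) >= self.capacity:
            self.dropped += 1
            return False          # packet dropped
        self.q.append(packet)
        return True               # packet accepted

    def dequeue(self):
        return self.q.popleft() if self.q else None
```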

Random Early Detection (RED)

  • Instead of waiting for a queue to fill and then dropping from its tail (tail dropping), the RED scheme drops arriving packets at random, with a probability that grows as the average queue depth increases. This way the problem of dropping whole bursts of packets from a certain flow is avoided, and fairness is achieved because the flows utilizing greater shares of the forwarding capacity contribute more packets and therefore suffer more of the random drops. RED supports DSCP-based queuing as well.
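The core RED drop decision can be sketched as below. Real RED operates on an exponentially weighted moving average of the queue depth and applies a count-based probability correction; this sketch, with illustrative parameter names, shows only the threshold-and-probability idea.

```python
import random

def red_drop(avg_depth, th_min, th_max, max_p):
    """RED drop decision for one arriving packet: never drop below
    th_min, always drop at or above th_max, and in between drop with
    a probability rising linearly toward max_p."""
    if avg_depth < th_min:
        return False
    if avg_depth >= th_max:
        return True
    p = max_p * (avg_depth - th_min) / (th_max - th_min)
    return random.random() < p
```

With thresholds of 10 and 30 packets, an average depth of 5 never drops, a depth of 35 always drops, and a depth of 20 drops with probability max_p / 2.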

Weighted Random Early Detection (WRED)

  • In WRED dropping scheme, all traffic flows are considered to share a single queue (either real or virtual). Traffic is not dropped from a flow until the queue reaches a threshold associated with that flow. The queue can be represented as a single queue divided in various regions through a number of thresholds. See the example below:

    Weighted Random Early Detection (WRED)

    When queue occupancy is less than the minimum thresholds (THmin1, THmin2), packets are added to the queue. When queue occupancy is greater than or equal to a flow's minimum threshold and less than maximum threshold 1 (THmax1), packets are dropped at random based on a calculated probability value. With queue occupancy greater than THmax1, packets of flow A are always dropped. Packets of flow B continue to be dropped at random until queue occupancy reaches maximum threshold 2 (THmax2), beyond which all packets of flow B are dropped.

    In the WRED scheme, flows that are associated with higher queue thresholds (and/or lower drop probabilities) will experience improved QoS over flows with lower queue thresholds (and/or higher drop probabilities). Like RED, WRED supports DSCP-based queuing for Differentiated Services capable implementations.
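The per-flow threshold logic described above can be sketched as a drop-probability function over the shared queue depth. The two flow profiles are purely illustrative numbers, not recommended configuration values.

```python
def wred_drop(queue_depth, flow_profile):
    """WRED sketch: each flow has its own (th_min, th_max, max_p)
    profile evaluated against the shared queue's current depth.
    Returns the drop probability for an arriving packet of that flow."""
    th_min, th_max, max_p = flow_profile
    if queue_depth < th_min:
        return 0.0   # below the flow's minimum threshold: always enqueue
    if queue_depth >= th_max:
        return 1.0   # above the flow's maximum threshold: always drop
    return max_p * (queue_depth - th_min) / (th_max - th_min)

# Illustrative profiles: flow A has lower thresholds, so it starts
# being dropped earlier than flow B as the shared queue fills.
FLOW_A = (20, 40, 0.2)
FLOW_B = (30, 60, 0.1)
```

At a queue depth of 50, flow A packets are always dropped (past THmax1) while flow B packets are still dropped only probabilistically, matching the two-flow example above.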

Congestion: The Worst Enemy for Broadband Services

Choosing the right congestion mechanism is not a simple process. To benefit from such a mechanism, you should first explore and understand your network; by doing that you should be able to identify possible congestion bottlenecks. You should also distinguish traffic according to its QoS requirements.

Choose a combination of queuing and dropping mechanisms and test it under rough conditions to see how it reacts. Then choose another combination and perform some more tests. Testing, gathering results and evaluating them will give you confidence in the stability and reliability of your choice.

Broadband services would not exist today if congestion avoidance techniques were not implemented; therefore their contribution to customer satisfaction is enormous.


