Today’s real-time multimedia applications are sensitive to delivery delays, so networks must provide Quality of Service (QoS), a set of mechanisms for controlling and managing network resources. Quality issues on a network infrastructure are typically caused by a lack of bandwidth, packet loss, latency, and jitter.
Traffic along a path is limited by the lowest-bandwidth link. When that link’s capacity is reached, the link becomes congested and traffic is dropped.
Increasing the link bandwidth capacity is not always viable because of financial or technological limitations. Instead, QoS mechanisms, such as policing and priority queuing, can be implemented to prioritize network traffic based on its importance.
Video and voice traffic and other business-critical traffic should be prioritized and allocated adequate bandwidth, and the least important traffic should be allotted the remaining bandwidth. In this way, network performance is optimized.
Packet loss is typically caused by interface congestion: when an interface’s queues fill up, routers and switches start dropping packets. It can be mitigated by implementing the following:
- Increase the link’s speed.
- Utilize QoS congestion-avoidance and congestion-management mechanisms.
- Integrate traffic policing to discard low-priority packets and permit high-priority packets.
- Implement traffic shaping to delay excess packets instead of dropping them. However, shaping is not advised for real-time traffic because it relies on queuing, which introduces jitter.
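To make the policing-versus-shaping distinction concrete, here is a minimal token-bucket policer sketch in Python. This is a generic illustration of the policing concept, not Cisco’s exact implementation: tokens accrue at the committed rate, a conforming packet spends tokens and is permitted, and an exceeding packet is dropped rather than delayed (shaping would queue it instead).

```python
class TokenBucketPolicer:
    """Generic token-bucket policer sketch (illustrative, not Cisco's)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8      # token refill rate in bytes per second
        self.burst = burst_bytes      # bucket depth (committed burst size)
        self.tokens = burst_bytes     # start with a full bucket
        self.last_time = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        elapsed = now - self.last_time
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last_time = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True               # conforming packet: permit
        return False                  # exceeding packet: drop (no queuing)

policer = TokenBucketPolicer(rate_bps=1_000_000, burst_bytes=1500)
print(policer.allow(1500, now=0.0))    # first packet fits the burst -> True
print(policer.allow(1500, now=0.001))  # bucket not yet refilled -> False
```

Because the policer never queues, it adds no delay and no jitter, which is why policing (unlike shaping) is safe for real-time traffic.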
QoS Latency and Jitter
Network latency, or one-way end-to-end delay, is the time it takes for data packets to go from the source to the destination over a network. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) G.114 recommendation states that latency should not exceed 400 ms for general network planning and 150 ms for real-time traffic.
However, Cisco and ITU also stated that real-time traffic quality does not significantly degrade until the latency exceeds 200 ms.
The causes of network latency include the following:
- Fixed Latency
  - Propagation Delay
  - Serialization Delay
  - Processing Delay
- Variable Latency
  - Delay Variation or Jitter
The propagation delay is the time it takes for a packet to travel from a source to a destination at the speed of light via a medium, like copper wires or fiber optic cables.
In a vacuum, light travels at 299,792,458 meters per second. However, copper and fiber optic cables are not a vacuum, so light slows down by a factor referred to as the refractive index. The larger the refractive index, the slower light travels through the medium.
The formula for the speed of light through a medium (v) is the speed of light in a vacuum (c), divided by the refractive index (n):
v = c / n
An optic fiber cable has an average refractive index value of 1.5, and the speed of light is approximately 300,000,000 meters per second. Using the formula to calculate the speed of light through a fiber optic cable:
v = 300,000,000 / 1.5
v = 200,000,000 meters per second.
The propagation delay is calculated by dividing the path length by the speed of light through the medium (v). So if a fiber optic cable were placed around the Earth’s equatorial circumference (approximately 40,075 km), the propagation delay would be around 200 ms (40,075,000 m ÷ 200,000,000 m/s ≈ 0.2 s), which is still acceptable even for real-time network traffic.
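The two calculations above can be reproduced in a few lines of Python, using the article’s rounded figures for the speed of light and the refractive index of fiber:

```python
C = 300_000_000          # speed of light in a vacuum, m/s (rounded)
N_FIBER = 1.5            # average refractive index of optical fiber

v = C / N_FIBER          # speed of light through fiber: ~200,000,000 m/s

equator_m = 40_075_000   # Earth's equatorial circumference in meters
delay_s = equator_m / v  # propagation delay over an equator-length fiber

print(f"v = {v:,.0f} m/s")                             # v = 200,000,000 m/s
print(f"propagation delay = {delay_s * 1000:.3f} ms")  # ~200.375 ms
```

Note that keeping the units consistent (meters and meters per second) is what makes the result come out directly in seconds.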
Fiber optic cables are not always installed using the shortest path between two points. Moreover, repeaters and amplifiers can also cause additional delays. Therefore, check the service provider’s Service-Level Agreement (SLA) to estimate and plan for the network latency.
For satellites, the propagation delay is computed using the time it takes a radio wave traveling at the speed of light from the Earth’s surface to go to a satellite and then back to the Earth’s surface. It could take multiple satellite hops, resulting in a delay exceeding 400 ms. The only solution is to find another satellite provider offering a lower propagation delay.
Serialization delay is the time it takes to put all of a packet’s bits on a link. The delay is fixed for a given packet size and link speed, and it decreases as the link speed increases. The formula for calculating serialization delay (s) is the packet size in bits divided by the line speed in bits per second. You might need to convert units before computing.
For example, to calculate the serialization delay for a 1,024-byte packet over a 100 Mbps line, the computation will be:
s = packet size / line speed
s = 1,024 bytes / 100 Mbps
s = 8,192 bits / 100,000,000 bps
s = 0.00008192 seconds
Then, the serialization delay can be converted into milliseconds (ms) or microseconds (μs):
0.00008192 s × 1,000 = 0.08192 ms
0.08192 ms × 1,000 = 81.92 μs
Therefore, it would take 81.92 μs to serialize a 1024-byte packet into a 100 Mbps interface.
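The same computation can be wrapped in a small helper function (the function name is ours, chosen for illustration) that handles the byte-to-bit and second-to-microsecond conversions:

```python
def serialization_delay_us(packet_bytes, line_bps):
    """Serialization delay in microseconds: packet bits / line speed."""
    bits = packet_bytes * 8          # convert bytes to bits
    seconds = bits / line_bps        # delay in seconds
    return seconds * 1_000_000       # convert seconds to microseconds

# 1,024-byte packet on a 100 Mbps link, as in the worked example:
print(serialization_delay_us(1024, 100_000_000))  # 81.92
```

Doubling the line speed halves the result, which matches the point that serialization delay decreases as link speed increases.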
Processing delay is the fixed time required by network devices, such as a routing or switching device, to receive a packet from an input interface and put it in the output queue of the output interface. The processing delay is affected by the following:
- CPU Speed
- CPU Utilization
- IP Packet Switching Mode
- Router Architecture
- Configured Features on Interfaces
Delay variation or jitter is the difference in the network latency between packets in a single traffic flow. For instance, if the first packet takes 30 ms to travel from source to destination, and the next packet takes 80 ms, then the jitter is 50 ms.
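The definition above, jitter as the difference in delay between consecutive packets of the same flow, is easy to express in code. This sketch reproduces the 30 ms / 80 ms example:

```python
def jitter_ms(delays_ms):
    """Absolute delay differences between consecutive packets of one flow."""
    return [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]

print(jitter_ms([30, 80]))      # [50]  <- the example from the text
print(jitter_ms([30, 80, 60]))  # [50, 20]
```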
The following reasons mainly affect delay variation:
- Queuing Delay
- De-jitter Buffers
- Packet Size
Jitter is created when queuing produces different delays for packets in the same flow. Under network congestion, the queuing delay is influenced by the number and size of packets in the queue, the link speed, and the queuing method.
A de-jitter buffer can absorb variations in packet arrival delay of up to about 30 ms. If a packet arrives outside that 30 ms window, it is discarded, and the overall quality suffers.
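A minimal playout-buffer sketch makes the discard rule concrete. This is a simplification under stated assumptions: the function name and the fixed nominal delay are ours, and real de-jitter buffers adapt their depth dynamically; here, any packet whose delay exceeds the nominal delay by more than the 30 ms buffer depth is dropped.

```python
BUFFER_MS = 30  # de-jitter buffer depth from the text (~30 ms)

def playout(packets, nominal_delay_ms):
    """packets: list of (seq, one_way_delay_ms). Returns (played, dropped)."""
    played, dropped = [], []
    for seq, delay in packets:
        if delay - nominal_delay_ms <= BUFFER_MS:
            played.append(seq)   # in time; the buffer smooths the variation
        else:
            dropped.append(seq)  # too late for playout; discard the packet
    return played, dropped

print(playout([(1, 40), (2, 65), (3, 95)], nominal_delay_ms=40))
# -> ([1, 2], [3]): packet 3 arrived 55 ms late, beyond the 30 ms buffer
```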