QoS Mechanisms
QoS mechanisms are the tools a network uses to manage its resources, and they play a crucial role in network convergence: they allow voice, video, and data to share the same network while still performing acceptably for users. QoS lets different traffic types compete for network resources in a controlled way, so that important applications such as voice, video, and critical data can receive preferential treatment from network devices and their quality does not degrade unacceptably.
QoS is essential for making network convergence work well and relies on several key mechanisms: marking, policing and shaping, congestion management, congestion avoidance, and link efficiency. By combining these mechanisms, network administrators can create a QoS framework that ensures critical applications receive the necessary resources while efficiently managing network traffic and avoiding congestion.
Marking
Once classified, traffic can be marked with specific values or tags. These marks indicate the priority or treatment the traffic should receive as it travels through the network. Commonly used standards include Differentiated Services Code Point (DSCP) or IP precedence.
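As a rough illustration, an application can request a DSCP marking on its own traffic by setting the IP TOS byte on a socket. The sketch below is a minimal example (Linux-oriented; the DSCP value EF = 46 comes from the DiffServ standard, and the two-bit shift accounts for the ECN bits in the lower part of the TOS byte):

```python
import socket

# DSCP EF (Expedited Forwarding, decimal 46) occupies the upper six
# bits of the IP TOS byte, so it is shifted left past the two ECN bits.
DSCP_EF = 46
tos_value = DSCP_EF << 2  # 184

# Create a UDP socket and mark its outgoing packets with DSCP EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_value)

# Any datagram sent on this socket now carries the EF marking, which
# downstream devices can match when applying their QoS policies.
sock.close()
```

In practice, markings set by end hosts are often overwritten at the network edge, where trusted devices re-mark traffic according to administrator-defined policy.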
Policing and Shaping
Network administrators define QoS policies that specify how traffic should be treated, including rules for prioritization, bandwidth allocation, and traffic shaping. Policing enforces traffic rate limits by dropping or remarking packets that exceed defined thresholds, which controls congestion and protects priority traffic. Shaping, on the other hand, regulates the flow of traffic to match the desired QoS profile by buffering and smoothing bursts that could otherwise cause congestion.
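Policers are commonly modeled as a token bucket: tokens accumulate at the committed rate, and a packet conforms only if enough tokens are available. The following is a simplified single-rate sketch (class name and parameters are illustrative, not from any vendor API):

```python
import time

class TokenBucketPolicer:
    """Single-rate policer: drop packets exceeding a committed rate."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.capacity = burst_bytes   # committed burst size (Bc)
        self.tokens = burst_bytes     # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # conforming: transmit
        return False      # exceeding: drop (or remark, in a real policer)
```

A shaper uses the same bucket logic but, instead of dropping an exceeding packet, delays it in a queue until enough tokens accumulate, which is why shaping smooths bursts while policing cuts them off.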
Congestion Management
When packets arrive at an egress interface faster than the interface can transmit them, congestion arises. In non-congested situations, packets are processed and sent out as soon as they arrive; when network resources become insufficient and congestion occurs, congestion management techniques come into play.
These mechanisms prioritize traffic based on QoS policies, ensuring that critical applications receive preferential treatment during periods of high demand. Congestion management tools such as Scheduling and Queuing are used to prioritize and manage network traffic during periods of congestion, ensuring critical data is delivered promptly while avoiding network congestion collapse.
Scheduling is a QoS mechanism that determines the order in which packets are transmitted, optimizing resource allocation for various traffic classes. This process happens whether or not there is congestion, meaning if the link is uncongested, packets are transmitted as they arrive at the interface. Strict Priority, Round-robin, and Weighted Fair are the most commonly used scheduling tools.
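To make the scheduling idea concrete, here is a minimal weighted round-robin sketch: each class queue is visited in turn and may send up to its weight in packets per round. This is an illustrative simplification (real schedulers typically weight by bytes or bandwidth, not packet counts):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Yield packets from each queue in proportion to its weight.

    `queues` is a list of deques and `weights` a parallel list of ints;
    each pass dequeues up to `weight` packets from each queue.
    """
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    yield q.popleft()

# Three traffic classes: voice gets three slots per round, data one.
voice = deque(["v1", "v2", "v3"])
video = deque(["x1", "x2"])
data  = deque(["d1", "d2", "d3"])
order = list(weighted_round_robin([voice, video, data], [3, 2, 1]))
# order: ["v1", "v2", "v3", "x1", "x2", "d1", "d2", "d3"]
```

Strict priority would instead always drain the highest-priority queue first, which gives the best latency to that class but can starve the others; weighted schemes trade a little latency for fairness.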
Queuing, or buffering, on the other hand, organizes packets into output queues and is used primarily during congestion. Rather than transmitting packets strictly in arrival order, the device can dequeue higher-priority packets first so they move through the egress interface more quickly.
Although many different queuing mechanisms are available, such as First-In, First-Out (FIFO), Priority Queuing (PQ), Custom Queuing (CQ), and Weighted Fair Queuing (WFQ), the newer mechanisms Class-Based Weighted Fair Queuing (CBWFQ) and Low-Latency Queuing (LLQ) are recommended for today's rich-media networks.
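The core idea behind LLQ can be sketched in a few lines: one strict-priority queue (typically voice) is always serviced first, while the remaining class queues share what is left. This sketch simplifies the CBWFQ portion to plain round-robin and omits the policer that real routers apply to the priority queue so it cannot starve the other classes:

```python
from collections import deque

class LLQ:
    """Simplified Low-Latency Queuing: a strict-priority queue serviced
    ahead of a set of per-class queues (class names are illustrative)."""

    def __init__(self):
        self.priority = deque()   # e.g. voice (EF-marked traffic)
        self.classes = {}         # class name -> queue

    def enqueue(self, pkt, cls=None):
        if cls is None:
            self.priority.append(pkt)       # priority (voice) traffic
        else:
            self.classes.setdefault(cls, deque()).append(pkt)

    def dequeue(self):
        # The priority queue is always drained first; real LLQ also
        # polices it so it cannot monopolize the link.
        if self.priority:
            return self.priority.popleft()
        # Remaining classes serviced in turn; real CBWFQ weights them
        # by their configured bandwidth shares.
        for q in self.classes.values():
            if q:
                return q.popleft()
        return None
```

Even with data already waiting, a newly arrived voice packet is transmitted first, which is exactly the low-latency guarantee LLQ provides.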
Congestion Avoidance
To prevent congestion, congestion avoidance mechanisms monitor network conditions and take proactive measures. For example, Random Early Detection (RED) drops packets before congestion becomes severe, signaling to senders to reduce their transmission rates.
Weighted Random Early Detection (WRED), on the other hand, is a QoS mechanism that builds upon RED by allowing more fine-grained control over packet drop probabilities. Unlike RED, WRED classifies traffic into different queues and assigns distinct drop probabilities to each queue, enabling more precise traffic management.
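The RED/WRED behavior follows a simple drop curve: no drops below a minimum threshold, a linearly increasing drop probability between the minimum and maximum thresholds, and forced drops beyond the maximum. The sketch below shows that curve, with two illustrative per-class profiles (the threshold and probability values are made up for the example, not vendor defaults):

```python
def wred_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Linear RED drop curve: 0 below min_th, ramping to max_p at
    max_th, and a forced drop (probability 1.0) beyond max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

# WRED assigns each class its own thresholds: high-drop-precedence
# traffic gets an earlier, steeper curve than low-drop-precedence
# traffic at the same average queue depth.
profiles = {
    "af11": (10, 40, 0.10),   # low drop precedence: lenient
    "af13": (5,  20, 0.20),   # high drop precedence: aggressive
}
qlen = 15
probs = {cls: wred_drop_probability(qlen, *p)
         for cls, p in profiles.items()}
# At the same queue depth, af13 is dropped more aggressively than af11.
```

Because TCP senders slow down when they detect loss, dropping a few packets early this way nudges flows to back off before the queue overflows and tail-drops everything at once.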
Link Efficiency
QoS mechanisms can optimize link usage by ensuring that bandwidth is efficiently utilized. This involves techniques like compressing data or using header compression to reduce overhead, making more bandwidth available for user data.
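As a small illustration of the bandwidth-versus-CPU trade behind payload compression, the snippet below compresses a repetitive payload with Python's standard zlib module (the telemetry-style payload is invented for the example):

```python
import zlib

# A repetitive payload compresses well; compression trades CPU cycles
# for link bandwidth, which pays off mainly on slow WAN links.
payload = b"sensor=42;status=ok;" * 100   # 2000 bytes of telemetry
compressed = zlib.compress(payload, level=6)

ratio = len(compressed) / len(payload)
# For redundant data the ratio is well under 1.0, freeing bandwidth
# for other traffic on the same link; the receiver must decompress
# before use, adding a small processing delay at each end.
```

Header compression works on the same principle but targets protocol overhead instead of payload: for small voice packets, compressing the RTP/UDP/IP headers removes most of the per-packet overhead on the link.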