DHCP and TCP/IP
Lesson 6: QoS Connections
Objective: Define the process of setting up QoS connections.

QoS Connections and Network Bandwidth

Linux and Unix-based systems have robust and flexible QoS capabilities. In fact, they often offer more granular control and advanced features compared to Windows. Here's why:
  • Traffic Control (tc): Linux uses the tc command-line utility to manage QoS. This tool provides a powerful framework for shaping traffic, prioritizing applications, and managing bandwidth. It allows for very specific rules and configurations.
  • Queuing Disciplines (qdiscs): Linux supports various queuing disciplines (qdiscs) like HFSC, HTB, and CBQ, which offer different algorithms for managing network queues and prioritizing traffic. These provide fine-grained control over how network traffic is handled.
  • Flexibility: Linux's open-source nature allows for greater flexibility in customizing QoS implementations. You can even write your own scripts and programs to interact with tc and tailor QoS to your specific needs.

How QoS is used on Linux/Unix:
  • Prioritize applications: You can give priority to VoIP or video streaming over less latency-sensitive applications like file downloads.
  • Bandwidth management: Limit the bandwidth used by certain applications or users to ensure fair network usage.
  • Traffic shaping: Smooth out bursts of network traffic to prevent congestion and improve overall network performance.
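
As a loose sketch of how these ideas come together on Linux, the following Python script shells out to tc to build an HTB class hierarchy: the link is capped at 100 Mbit/s, a 30 Mbit/s higher-priority class is reserved for VoIP signaling traffic (port 5060), and everything else falls into a default class. The interface name, rates, and port are assumptions to adjust for your own network, and the commands require root privileges.

import subprocess

DEV = "eth0"  # assumed interface name; change to match your system

# HTB setup: a 100 Mbit root class, a high-priority 30 Mbit class for
# VoIP signaling (SIP, port 5060), and a default class for other traffic.
TC_RULES = [
    f"tc qdisc add dev {DEV} root handle 1: htb default 20",
    f"tc class add dev {DEV} parent 1: classid 1:1 htb rate 100mbit",
    f"tc class add dev {DEV} parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit prio 0",
    f"tc class add dev {DEV} parent 1:1 classid 1:20 htb rate 70mbit ceil 100mbit prio 1",
    f"tc filter add dev {DEV} parent 1: protocol ip prio 1 u32 "
    f"match ip dport 5060 0xffff flowid 1:10",
]

for rule in TC_RULES:
    subprocess.run(rule.split(), check=True)  # must be run as root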

In summary: Linux and Unix offer powerful and versatile QoS capabilities. While Windows provides some QoS functionality, the level of control and customization available on Linux/Unix systems is generally much greater.
Implementing QoS enables real-time programs to make the most efficient use of network bandwidth. The goal of a QoS implementation is a guaranteed delivery system for network traffic, such as IP packets.

QoS mechanisms

The set of mechanisms QoS uses to set up a delivery system for network traffic includes services and protocols. The following list describes these mechanisms.



1) QoS Admission Control Service (QoS ACS): QoS ACS administers the subnet bandwidth resources necessary to ensure QoS transmission of data for a server. In the QoS Admission Control console, an administrator configures Enterprise Settings and Subnetwork Settings for individual subnets (for example, 192.168.0.0/24 and 192.168.1.0/24) and assigns policies per User/OU ("Un-Authenticated User" and "Any Authenticated User"), each with a Direction of "Send & Receive", so bandwidth policies apply to both incoming and outgoing traffic. This lets administrators prioritize critical traffic and maintain service quality on managed subnets.
2) Subnet bandwidth management (SBM): SBM is a service that manages the use of network resources on a shared segment, or subnet.

3) Reservation (RESV) messages: RSVP is the signaling protocol that enables the sender and receiver to set up a reserved QoS highway between them. The RESV message carries the reservation request to each router and switch along the communication path between the sender and receiver.

4) Traffic control: Traffic control chooses the traffic lane across which the packets travel. The traffic control service has two components that work together to determine the traffic lane. The first is the packet classifier, which separates packets into queues based on their priority and tells the packet scheduler how fast to empty the queues.

5) Packet scheduler: The packet scheduler manages the queues set up by the packet classifier. It retrieves packets from the queues and sends them across the QoS-reserved highway.
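
To make the relationship between the classifier and the scheduler concrete, here is a minimal Python sketch. The DSCP-to-priority mapping and the single shared heap are simplifying assumptions for illustration, not how a real traffic control service is implemented.

import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class QueuedPacket:
    priority: int                       # lower number = higher priority
    seq: int                            # preserves FIFO order within a priority
    payload: bytes = field(compare=False)

class PacketClassifier:
    """Separates packets into priority queues (toy model)."""
    def __init__(self):
        self.queue = []                 # one heap standing in for per-class queues
        self._seq = count()

    def classify(self, payload: bytes, dscp: int):
        # Map a DSCP code point to an internal priority (assumed mapping;
        # 46 is the Expedited Forwarding code point used for voice).
        priority = 0 if dscp >= 46 else 1
        heapq.heappush(self.queue, QueuedPacket(priority, next(self._seq), payload))

class PacketScheduler:
    """Drains the classifier's queues, highest priority first."""
    def __init__(self, classifier: PacketClassifier):
        self.classifier = classifier

    def send_next(self):
        if self.classifier.queue:
            pkt = heapq.heappop(self.classifier.queue)
            print(f"sending priority={pkt.priority} packet ({len(pkt.payload)} bytes)")

# Usage: a VoIP-marked packet leaves before bulk traffic queued earlier.
clf = PacketClassifier()
clf.classify(b"bulk download chunk", dscp=0)
clf.classify(b"voip frame", dscp=46)
sched = PacketScheduler(clf)
sched.send_next()   # voip frame goes first
sched.send_next()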


Setting up a QoS connection

RSVP enables end nodes to communicate with each QoS-aware network device included in the hop path between RSVP session members, and to negotiate QoS parameters and network usage admission. The RSVP protocol is used to exchange PATH and RESV messages with the network.
A PATH message, which the sender initiates, describes the QoS parameters of the traffic, the sender's address, and the destination address of the traffic. The reserve (RESV) message, returned by the receiver, describes the QoS parameters of the traffic to be received. When the sender receives the RESV message, the QoS data flow begins.
The QoS service provider constructs and periodically updates the PATH and RESV messages on behalf of an application. You can also configure sending applications, such as those controlling multicast transmissions, to begin sending immediately on a best-effort basis, which is then upgraded to QoS on receipt of the RESV message.
The following image illustrates a basic QoS connection.
Basic QoS connection

Example of a Sending Host

For example, a sending host that wants to reserve bandwidth sends path messages toward the intended recipient, through an RSVP-enabled Windows Sockets (WinSock) service provider. These path messages, which describe the bandwidth requirements and relevant parameters of the data to be sent, are propagated to all intermediate routers along the path.
A receiving host confirms the flow and network path by sending RESV messages back through the network, describing the bandwidth characteristics of data from the sender. As these reserve messages propagate back toward the sender, intermediate routers decide whether or not to accept the proposed reservation and commit bandwidth resources. If all routers commit the required resources, the sender can begin sending data.
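
One way to visualize this exchange is the short Python sketch below, which models PATH messages traveling toward the receiver and RESV messages traveling back, with each router performing a simple admission check before committing bandwidth. The addresses, bandwidth figures, and two-router topology are illustrative assumptions; real RSVP routers keep soft state and refresh reservations periodically.

from dataclasses import dataclass

@dataclass
class PathMsg:
    sender: str
    receiver: str
    rate_kbps: int          # traffic spec: requested bandwidth

@dataclass
class ResvMsg:
    rate_kbps: int          # the reservation the receiver asks for

class Router:
    def __init__(self, name: str, free_kbps: int):
        self.name = name
        self.free_kbps = free_kbps

    def forward_path(self, msg: PathMsg) -> None:
        # PATH messages only install path state; nothing is committed yet.
        print(f"{self.name}: noted PATH from {msg.sender} ({msg.rate_kbps} kbps)")

    def admit_resv(self, msg: ResvMsg) -> bool:
        # Admission control: commit bandwidth only if the router can spare it.
        if msg.rate_kbps <= self.free_kbps:
            self.free_kbps -= msg.rate_kbps
            print(f"{self.name}: reserved {msg.rate_kbps} kbps")
            return True
        print(f"{self.name}: reservation rejected")
        return False

# Sender -> R1 -> R2 -> Receiver
routers = [Router("R1", free_kbps=1000), Router("R2", free_kbps=512)]
path = PathMsg(sender="10.0.0.5", receiver="10.0.1.9", rate_kbps=256)
for r in routers:                                   # PATH travels toward the receiver
    r.forward_path(path)

resv = ResvMsg(rate_kbps=path.rate_kbps)
admitted = all(r.admit_resv(resv) for r in reversed(routers))  # RESV travels back
if admitted:
    print("sender received RESV: QoS data flow begins")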
The next lesson wraps up this module.
