Define how to optimize IP performance on a network.
Optimizing IP Network Performance
Optimizing the flow of TCP/IP data within an internetwork requires classifying the traffic flow so that you can understand where configuring and tuning the TCP/IP implementation might provide performance gains.
Recognizing Traffic Patterns
Transactions between hosts across a network vary from simple datagram interactions with low packet counts to complex authenticated transfers that involve security checks and verification. In general, you can categorize packet traffic into two major groups, both of which are sensitive to particular characteristics of a network:
Delay- and latency-sensitive traffic
Bandwidth-sensitive traffic
The following descriptions provide examples of each traffic pattern.
Delay- and latency-sensitive traffic consists mainly of single-packet transfers that must be acknowledged before communication can continue.
Bandwidth-sensitive traffic consists principally of unidirectional communications where a large amount of traffic flows in one direction and acknowledgments flow in the other.
Consider a latency-sensitive traffic example on a 10 megabits per second (Mbps) local area network (LAN) segment where delay is essentially zero.
The transaction time in the previous example is dominated by domain controller response times. This level of performance is typical of LAN-based environments.
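To see why the same exchange behaves differently once delay is introduced, the following Python sketch models a chatty, latency-sensitive transaction. The round-trip count, round-trip times, and server processing time are illustrative assumptions, not measured values.

    # Rough model of a latency-sensitive (chatty) transaction: total time grows
    # with the number of round trips, because each request must be acknowledged
    # before the next one can be sent. All figures are illustrative assumptions.

    def transaction_time_ms(round_trips, rtt_ms, server_processing_ms):
        """Total time for an exchange of sequential request/response pairs."""
        return round_trips * (rtt_ms + server_processing_ms)

    ROUND_TRIPS = 20           # assumed packet exchanges in an authenticated logon
    SERVER_PROCESSING_MS = 10  # assumed per-request domain controller response time

    lan = transaction_time_ms(ROUND_TRIPS, rtt_ms=1, server_processing_ms=SERVER_PROCESSING_MS)
    wan = transaction_time_ms(ROUND_TRIPS, rtt_ms=100, server_processing_ms=SERVER_PROCESSING_MS)

    print(f"LAN, 1 ms round trip:   {lan} ms")   # 220 ms, dominated by server response time
    print(f"WAN, 100 ms round trip: {wan} ms")   # 2,200 ms, dominated by network delay

On the LAN, server response time accounts for most of the total; over the assumed WAN link, the same exchange takes roughly ten times longer even though no additional data is sent.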
Many applications exhibit characteristics of both traffic types and must be designed to minimize performance limitations when used in a wide area network (WAN) environment.
TCP/IP Performance Factors
The TCP/IP implementation in Windows® 2000 is largely self-tuning, but some design choices made for both the network infrastructure and the software installation can influence the performance that you will ultimately achieve. In particular, when WANs span large distances, the delay through a network becomes a significant factor in any design considerations. The principal factors that influence TCP/IP performance are:
TCP/IP receive window size: This is the buffer required to receive packets in a TCP stream before an acknowledgment is sent. For Ethernet-based TCP connections, the window is normally set to 17,520 bytes, or 16 KB rounded up to 12 Maximum Segment Size (MSS) segments. Where network delay is high, you can increase the minimum window size offered for a connection by modifying the registry (see the sketch after this list).
Delay/Bandwidth product: High bandwidth/high delay networks, such as satellite links, require special consideration when you are configuring the network transports and designing the applications being used. When network delay becomes significant, always select the largest bandwidth available for links to maximize performance.
Packet loss on the network: This is usually caused by network errors or congestion in routers.
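The following Python sketch illustrates the first two factors: it derives the default Ethernet receive window described above and compares it with the delay/bandwidth product of a hypothetical satellite link. The link speed and round-trip delay are assumed values used only for illustration.

    import math

    MSS = 1460                # Ethernet TCP Maximum Segment Size (1,500-byte MTU minus headers)

    # Default receive window: 16 KB rounded up to an even multiple of the MSS.
    default_window = math.ceil(16 * 1024 / MSS) * MSS
    print(f"Default Ethernet receive window: {default_window} bytes")   # 17,520 bytes (12 x MSS)

    # Delay/bandwidth product: the amount of unacknowledged data needed to keep
    # a high-delay link full. Link figures below are assumptions.
    link_bps = 2_000_000      # assumed 2 Mbps satellite link
    rtt_seconds = 0.5         # assumed 500 ms round-trip delay
    delay_bw_product = int(link_bps / 8 * rtt_seconds)
    print(f"Delay/bandwidth product: {delay_bw_product} bytes")         # 125,000 bytes

    # Because the delay/bandwidth product far exceeds the default receive window,
    # the sender stalls waiting for acknowledgments unless the window is raised,
    # for example through the TcpWindowSize registry value in Windows 2000.

In this example, the connection could fill only about 14 percent of the link (17,520 bytes per 500 ms round trip, or roughly 0.28 Mbps of the 2 Mbps link) until the receive window is increased.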
Certain other factors also influence performance, but because they are part of the existing OSI layer one and layer two infrastructure, you may not be able to configure them.
Factors That Influence Performance
Maximum Transmission Unit (MTU). This is usually set by the underlying network technology. For example, Ethernet provides a 1,500-byte MTU, whereas Token Ring can support up to 17,914 bytes.
Maximum Segment Size (MSS). This is the TCP payload that can be carried within the MTU. For example, the MSS for an Ethernet MTU of 1,500 bytes is 1,460 bytes.
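As a quick check of the relationship between the two values, the following sketch derives the MSS from the MTU by subtracting the standard 20-byte IP and 20-byte TCP headers (assuming no header options).

    IP_HEADER = 20    # bytes, without options
    TCP_HEADER = 20   # bytes, without options

    def mss_from_mtu(mtu_bytes):
        """TCP payload that fits in a single link-layer frame of the given MTU."""
        return mtu_bytes - IP_HEADER - TCP_HEADER

    print(f"Ethernet MSS:   {mss_from_mtu(1500)} bytes")    # 1,460 bytes
    print(f"Token Ring MSS: {mss_from_mtu(17914)} bytes")   # 17,874 bytes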
Note:
In network environments that include links with large delay components, your network design may need to place network services, authentication, and application servers on both sides of those links to achieve acceptable client performance. This situation is common when deciding on the placement of domain controllers, WINS servers, DHCP servers, and DNS servers.
The next lesson focuses on how to optimize remote subnets.
Optimizing IP Network Performance - Exercise
Click the Exercise link below to apply what you know about identifying traffic patterns as a first step in improving IP performance.