Understanding Application Performance on the Network | Part 3

TCP Slow-Start

In Part II, we discussed performance constraints caused by both bandwidth and congestion. Purposely omitted was a discussion of packet loss, which is often an inevitable result of heavy network congestion. I'll use this blog entry on TCP slow-start to introduce the Congestion Window (CWD), which is fundamental to Part IV's in-depth review of Packet Loss.

TCP Slow-Start
TCP uses a slow-start algorithm as it tries to understand the characteristics (bandwidth, latency, congestion) of the path supporting a new TCP connection. In most cases, TCP has no inherent understanding of the characteristics of the network path; it could be a switched connection on a high-speed LAN to a server in the next room, or it could be a low-bandwidth, already congested connection to a server halfway around the globe. In an effort to be a good network citizen, TCP uses a slow-start algorithm based on an internally maintained congestion window (CWD), which identifies how many packets may be transmitted without being acknowledged; as the data carried in transmitted packets is acknowledged, the window increases. The CWD typically begins at two packets, allowing an initial transmission of two packets and then ramping up quickly as acknowledgements are received.

Figure: At the beginning of a new TCP connection, the CWD starts at two packets and increases as acknowledgements are received.
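To make that ramp-up concrete, here is a minimal sketch (in Python) of the classic slow-start growth pattern. The two-segment starting window and the one-extra-segment-per-ACK growth rule, which roughly doubles the window each round trip, are assumptions based on the description above; real TCP stacks vary (many now begin with ten segments).

```python
def slow_start_growth(initial_cwd=2, round_trips=6):
    """Return the CWD (in segments) at the start of each round trip during slow-start."""
    cwd = initial_cwd
    history = []
    for _ in range(round_trips):
        history.append(cwd)
        cwd *= 2  # each ACKed segment allows one more segment, roughly doubling per round trip
    return history

print(slow_start_growth())  # [2, 4, 8, 16, 32, 64]
```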

The CWD will continue to increase until one of three conditions is met:

Condition                              | Determined by              | Blog discussion
Receiver's TCP Window limit            | Receiver's TCP Window size | Part VII
Congestion detected (via packet loss)  | Triple Duplicate ACK       | Part IV
Maximum write block size               | Application configuration  | Part VIII

Generally, TCP slow-start will not be a primary or significant bottleneck. Slow-start occurs once per TCP connection, so for many operations there may be no impact at all. Nevertheless, we will address the theoretical case of a TCP slow-start bottleneck, discuss some influencing factors, and then present a real-world case.

The Maximum Segment Size and the CWD
The Maximum Segment Size (MSS) identifies the maximum TCP payload that can be carried by a packet; this value is exchanged as a TCP option when a new connection is established. Probably the most common MSS value is 1460 bytes, but smaller sizes may be used to allow for VPN headers or to support different link protocols. Beyond the additional protocol overhead introduced by a reduced MSS, there is also an impact on the CWD, since the slow-start algorithm uses packets as its flow-control metric.
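As a rough illustration of that packet-count effect, the sketch below compares how many packets, and how much header overhead, the same payload requires at a few MSS values. The 64 KB payload and the 40-byte IP/TCP header figure are assumptions for illustration, not numbers from the article.

```python
import math

def packet_count(payload_bytes, mss):
    """Number of TCP segments needed to carry the payload at a given MSS."""
    return math.ceil(payload_bytes / mss)

payload = 65535  # one 64 KB receive window's worth of data (assumed figure)
for mss in (1460, 1380, 536):
    pkts = packet_count(payload, mss)
    overhead = pkts * 40  # 20-byte IP + 20-byte TCP header per packet, no options
    print(f"MSS {mss:>4}: {pkts:>4} packets, ~{overhead} bytes of header overhead")
```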

We can consider the CWD's exchanges of data packets and subsequent ACKs as TCP turns, or TCP round trips; each exchange incurs the round-trip path delay. Therefore, one of the primary factors influencing the impact of TCP slow-start is network latency. A smaller MSS value results in a larger number of packets, and therefore additional TCP turns, as the sending node increases the CWD to reach its upper limit. With a small MSS (536 bytes) and high path delay (200 msec), slow-start might introduce about 3 seconds of delay to an operation as the CWD increases to a receive window limit of 65 KB.
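A back-of-the-envelope estimator of that ramp-up time might look like the sketch below. The two-segment starting window, the per-round-trip growth factor (roughly 1.5 with delayed ACKs, 2.0 with an ACK per segment), and the 65 KB window target are my assumptions, so treat the output as an order-of-magnitude figure rather than the article's exact calculation.

```python
import math

def slow_start_turns(rwin_bytes, mss, initial_cwd=2, growth=1.5):
    """Round trips needed for the CWD (in segments) to cover the receive window."""
    target_segments = math.ceil(rwin_bytes / mss)
    cwd, turns = initial_cwd, 0
    while cwd < target_segments:
        cwd = math.floor(cwd * growth)  # window grows each TCP turn
        turns += 1
    return turns

rtt = 0.200  # 200 ms path delay
turns = slow_start_turns(rwin_bytes=65535, mss=536, growth=1.5)
print(f"{turns} TCP turns x {rtt*1000:.0f} ms RTT ≈ {turns*rtt:.1f} s of slow-start ramp-up")
```

With delayed ACKs (growth of 1.5) this estimates roughly 2 seconds of ramp-up; slower growth or additional handshake turns push the figure toward the 3 seconds cited above.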

How Important Is TCP Slow-Start?
Even a 3-second delay, while significant, is probably not interesting for large file transfers, or for applications that reuse TCP connections. But let's consider a simple web page with 20 page elements, averaging about 120 KB in size. A misconfigured proxy server prevents persistent TCP connections, so we'll need 20 new TCP connections to load the page. Each connection must ramp up through slow-start as content is downloaded. With a small MSS and/or high latency, each page component will experience a significant slow-start delay.
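To see how that compounds at the page level, the rough aggregation below assumes a 2.2-second per-connection ramp (taken from the estimator sketch above) and a hypothetical per-host limit of six parallel connections; both numbers are illustrative, not measurements from the article.

```python
import math

per_connection_ramp = 2.2  # seconds of slow-start ramp per connection (assumed, from the sketch above)
connections = 20           # one new TCP connection per page element, since persistence is disabled
parallel = 6               # hypothetical per-host parallel connection limit

waves = math.ceil(connections / parallel)
print(f"~{waves * per_connection_ramp:.1f} s of slow-start overhead "
      f"across {waves} waves of up to {parallel} parallel connections")
```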


More Stories By Gary Kaiser

Gary Kaiser is a Subject Matter Expert in Network Performance Analysis at Compuware APM. He has global field enablement responsibilities for performance monitoring and analysis solutions embracing emerging and strategic technologies, including WAN optimization, thin client infrastructures, network forensics, and a unique performance management maturity methodology. He is also a co-inventor of multiple analysis features, and continues to champion the value of software-enabled expert network analysis.
