Jim Gettys has a nice tale of what he calls 'bufferbloat'. Instinctively, it seems like bigger buffers should result in less packet loss. As long as you can buffer it, the other guy doesn't have to retransmit, right? But that is not the way TCP works. It's going to retransmit if you don't acknowledge fast enough. And if you clog the buffers, it's going to take a long time before the endpoint can acknowledge the data.
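To make that concrete, here's a back-of-the-envelope sketch (my numbers, not Gettys'): a packet that arrives behind a full buffer has to wait for the whole buffer to drain before it even reaches the wire, so the delay is just buffer size over link rate.

```python
# Illustrative sketch: latency added by a full buffer sitting in front of a
# slow link. The buffer size and link rate below are assumed example figures.

def buffer_delay_s(buffer_bytes: int, link_rate_bps: int) -> float:
    """Seconds for a packet arriving behind a full buffer to drain out."""
    return (buffer_bytes * 8) / link_rate_bps

# A 256 KB buffer in front of a 1 Mbit/s uplink:
delay = buffer_delay_s(256 * 1024, 1_000_000)
print(f"{delay:.2f} s of added latency")  # roughly 2.10 s
```

Two extra seconds of queuing delay on every round trip is more than enough to make TCP's retransmit timers misfire.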
One interesting anecdote to me (and it isn't really a conclusion) is that the world's love affair with Windows XP (which has an ancient TCP stack) may actually be helping the internet at large, even though the Vista TCP stack is measurably a better stack:
The most commonly used system on the Internet today remains Windows XP, which does not implement window scaling and will never have more than 64KB in flight at once. But the bufferbloat will become much more obvious and common as more users switch to other operating systems and/or later versions of Windows, any of which can saturate a broadband link with but a single TCP connection.
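The 64KB cap matters because a TCP connection's throughput is bounded by window size over round-trip time. A rough sketch, with illustrative numbers of my own choosing:

```python
# Why XP's fixed 64 KB window caps throughput, while a window-scaled stack
# does not. RTT and window figures are assumed examples, not measurements.

def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput: at most one window per round trip."""
    return (window_bytes * 8) / rtt_s

rtt = 0.100  # assume a 100 ms path
xp = max_throughput_bps(64 * 1024, rtt)        # XP: no window scaling
scaled = max_throughput_bps(1024 * 1024, rtt)  # e.g. a 1 MB scaled window

print(f"XP ceiling:     {xp / 1e6:.1f} Mbit/s")      # ~5.2 Mbit/s
print(f"Scaled ceiling: {scaled / 1e6:.1f} Mbit/s")  # ~83.9 Mbit/s
```

So on a path like this, an XP box tops out around 5 Mbit/s per connection, while a window-scaling stack can happily fill the link and the bloated buffers behind it.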
Gettys did conclude that this was a problem for video downloads, which is something everyone is doing these days. He's not wrong, but real video services may not be as subject to this as it seems. Video services live and die by bandwidth costs, so to control those costs they avoid simply transmitting the whole video – instead they dribble it out deliberately, at the application layer. If they depended on TCP for throttling, he'd be right, but I don't think many large-scale video services work this way. Need more data!
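That application-layer dribbling might look something like this minimal sketch (an assumption about the general technique, not any real service's code): send the file in small chunks with sleeps in between, so the application sets the rate rather than letting TCP blast the whole thing into the network's buffers.

```python
# Hypothetical sketch of application-layer pacing: push data through `send`
# at roughly `target_bps` instead of as fast as TCP will take it.
import time

def dribble(data: bytes, send, target_bps: int, chunk_size: int = 16 * 1024):
    """Send `data` in chunks, sleeping between them to hold a target rate."""
    delay = (chunk_size * 8) / target_bps  # seconds per chunk at target rate
    for i in range(0, len(data), chunk_size):
        send(data[i:i + chunk_size])
        time.sleep(delay)
```

A sender paced like this never keeps much data in flight, so it rarely fills the bloated buffers Gettys is worried about.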
Anyway, a great read.