Queueing theory for TCP

[paper]
ORC seminar at MIT, 13 Sep 2007, [program] [slides ppt]
INFORMS applied probability, 11 July 2007, [program] [slides ppt]
UIUC Stochastic Networks Conference, 23 June 2006, [program] [slides ppt] [slides pdf]
IZS, February 2006, [slides ppt]
Hamilton Institute congestion control workshop, 28 September 2005, [program] [slides ppt] [slides pdf]
ECOC, 27 September 2005, [program] [slides ppt]
Coseners Multiservice Networks, 7 July 2005, [program] [slides ppt]
Mathematics of Networks meeting, Imperial College, [program] [slides ppt]
Stanford Operations Research Colloquium, 6 April 2005, [program] [slides ppt]
UCL Networks research group, 15 December 2004, [program] [slides ppt]

Abstract.

Most traffic in the Internet is controlled by TCP, an algorithm which adjusts the transmission rate of a traffic flow in response to the congestion it perceives in the network. In the core of the Internet, where there are many TCP flows, the aggregate traffic behaves predictably, both at a stochastic level and a fluid level. This predictability is the basis for a queueing theory for TCP.

I will describe the natural limiting regime which permits us to obtain limit theorems for TCP, and show how the stochastic and fluid parts of the limit mesh together. In fact, there are three different ways they can mesh together, depending on how big the buffer is -- by changing the buffer size we obtain different features of the TCP models of Baccelli, Kelly, and Srikant. I will also describe the sorts of traffic descriptors which are appropriate for capturing the burstiness of TCP traffic.
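
To give a feel for why the aggregate becomes predictable, here is a minimal simulation sketch in Python. It is a toy AIMD (additive-increase, multiplicative-decrease) model with made-up parameters, not any of the models from the talk: as the number of flows grows, fluctuations in the per-flow aggregate rate die away and the aggregate follows a smooth fluid path.

    import random

    def aggregate_rate(num_flows, steps=500, per_flow_capacity=10.0, seed=1):
        """Toy AIMD model: each flow increases its rate additively every
        step and halves it on a loss; losses become likely only as the
        shared link approaches capacity."""
        rng = random.Random(seed)
        rates = [1.0] * num_flows
        history = []
        for _ in range(steps):
            load = sum(rates) / (per_flow_capacity * num_flows)
            p_loss = min(1.0, max(0.0, load - 0.9))  # loss kicks in near capacity
            for i in range(num_flows):
                if rng.random() < p_loss:
                    rates[i] /= 2.0   # multiplicative decrease on loss
                else:
                    rates[i] += 0.1   # additive increase otherwise
            history.append(sum(rates) / num_flows)
        return history

    # Fluctuations in the per-flow aggregate shrink as the number of flows grows.
    for n in (10, 100, 1000):
        tail = aggregate_rate(n)[-100:]
        print(f"{n:5d} flows: mean {sum(tail) / len(tail):.2f}, "
              f"spread {max(tail) - min(tail):.3f}")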

I will finish with a practical application of this work, an answer to the question: how big should buffers be in core Internet routers?

Abstract.

Most traffic in the Internet is controlled by TCP, an algorithm which adjusts the transmission rate of a traffic flow in response to the congestion it perceives in the network. In core Internet routers, which carry many TCP flows, the aggregate traffic behaves predictably, both at a stochastic level (Cao, Cleveland, Lin, Sun [paper]) and a fluid level (Misra, Gong, Towsley [crippled abstract]; Baccelli, McDonald, Reynier [paper]). This is the basis for a growing literature on Internet congestion control [bibliography].

This predictability can be used to address the question: how big do buffers need to be in Internet routers? I will describe three different buffer sizing rules, whose analysis draws on tools from optimization, queueing theory, statistical physics, and dynamical systems theory. The natural limiting regime is one in which link speeds grow and the number of flows increases in proportion; which rule is appropriate depends on how the buffer size scales in this limit.
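
For orientation, the three rules can be written down as follows; here C is the link capacity, RTT the mean round-trip time, N the number of flows, and W_max the peak TCP window. The precise forms below are taken from the buffer-sizing literature (the middle rule is due to Appenzeller, Keshav and McKeown), so treat them as background rather than as a summary of these slides.

    \[ B_{\text{large}} = C \times \overline{\mathit{RTT}} \qquad \text{(the current rule of thumb: one bandwidth-delay product)} \]
    \[ B_{\text{intermediate}} = \frac{C \times \overline{\mathit{RTT}}}{\sqrt{N}} \qquad \text{(divide the rule of thumb by } \sqrt{N}\text{)} \]
    \[ B_{\text{small}} = O(\log W_{\max}) \qquad \text{(a few dozen packets, independent of line speed)} \]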

It turns out that buffers can be made far smaller than the current standard recommends: the buffer in a high-end router could be reduced from 10 Gbytes to 20 kbytes, and this reduction gives a slight improvement in performance. Buffers this small open the way to all-optical routers.
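
As a quick sanity check on these orders of magnitude, here is a back-of-the-envelope calculation in Python. The link speed, round-trip time and flow count are illustrative assumptions, not parameters taken from the talk.

    # Buffer sizes under the three sizing rules, for assumed parameters.
    C_bps = 40e9        # assumed link capacity: 40 Gb/s
    rtt_s = 0.25        # assumed mean round-trip time: 250 ms
    n_flows = 50_000    # assumed number of concurrent long-lived flows
    pkt_bytes = 1_500   # typical packet size

    rule_of_thumb = C_bps * rtt_s / 8       # bandwidth-delay product, in bytes
    intermediate = rule_of_thumb / n_flows ** 0.5
    tiny = 20 * pkt_bytes                   # roughly 20 packets

    print(f"rule of thumb:     {rule_of_thumb / 1e9:.2f} GB")
    print(f"divide by sqrt(N): {intermediate / 1e6:.1f} MB")
    print(f"tiny buffers:      {tiny / 1e3:.0f} kB")

With these assumed numbers the three rules give roughly 1.25 GB, 5.6 MB and 30 kB respectively, which matches the gigabytes-to-kilobytes gap quoted above.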