Abstract
This paper presents a distributed, end-to-end congestion control protocol for use in high-traffic, packet-switched networks. The network is represented as a stochastic single-server queue, with arrival rates as the control variables. A time-stamp-based measure of network state called warp is defined and shown to be an estimator of network utilization. Congestion is modeled explicitly using unimodal load-throughput functions, and their monotonicity property is exploited to yield characterizations of the interplay between optimality and stability. First, a protocol based on “perfect” information is analyzed; its performance is then shown to be emulated by one that uses only locally computable, delayed information. Both are subject to the unimodality assumption, which has the effect of inducing a division of the phase space into stable and unstable regions, with the optimal operating point lying at their boundary. Protocols are devised for dealing with each regime separately: the rate adjustment protocol is the control that guides the system to the optimal operating point, while the proactive and reactive rate protocols deal with maintaining near-optimal throughput in the long run. Protocols for handling fairness and structural perturbation augment the basic suite. The potential instability of the optimal operating point, even under control, carries implications for the different costs associated with proactive versus reactive congestion control mechanisms and their feasibility in high-speed networks. The analysis is supported by simulations showing the global dynamical properties of the system.
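The abstract's central geometric picture is a unimodal load-throughput curve whose peak is the optimal operating point: below it, throughput increases with offered load (the stable regime); above it, throughput collapses (the unstable regime). The paper's actual load-throughput functions are not given here; the sketch below uses a hypothetical curve of the form T(λ) = λe^{−λ/c}, chosen only because it is unimodal, to illustrate locating the boundary between the two regimes.

```python
import math

def throughput(load, capacity=1.0):
    # Hypothetical unimodal load-throughput function (illustrative
    # assumption, not the paper's model): throughput grows with offered
    # load, peaks, then degrades as congestion sets in.
    return load * math.exp(-load / capacity)

# Coarse grid search for the optimal operating point, i.e. the
# boundary between the stable and unstable regions of the phase space.
loads = [i / 1000 for i in range(1, 5001)]
optimal_load = max(loads, key=throughput)

# Left of the peak, raising the load raises throughput (stable regime);
# right of the peak, raising the load lowers throughput (unstable regime).
```

For this particular curve the maximizer is λ = c, so with `capacity=1.0` the grid search returns a load near 1.0; the congestion control problem the abstract describes is, in effect, steering the arrival rate toward this peak and holding it there despite delayed feedback.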
