On the trade-off between control rate and congestion in single server systems
Main Authors: , ,
Other Authors:
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers, 2010
Online Access: http://hdl.handle.net/1721.1/54690, https://orcid.org/0000-0002-6108-0222, https://orcid.org/0000-0001-8238-8130
Summary: The goal of this paper is to characterize the tradeoff between the rate of control and network congestion for flow control policies. We consider a simple model of a single-server queue with congestion-based flow control. The input rate at any instant is decided by a flow control policy, based on the queue occupancy. We identify a simple 'two-threshold' control policy, which achieves the best possible congestion probability for any rate of control. We show that in the absence of control channel errors, the control rate needed to ensure the optimal decay exponent for the congestion probability can be made arbitrarily small. However, if control channel errors occur probabilistically, we show the existence of a critical error probability threshold beyond which the congestion probability undergoes a drastic increase due to the frequent loss of control packets. Finally, we determine the optimal amount of error protection to apply to the control signals by using a simple bandwidth sharing model.
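The two-threshold policy described in the summary can be illustrated with a discrete-time simulation sketch. This is a minimal interpretation, not the paper's model: all parameter names (`lo`, `hi`, `lam_high`, `lam_low`, `mu`) and values are assumptions chosen for illustration. The key feature it shows is that control messages are sent only at threshold crossings, so the rate of control can be kept low while still bounding queue congestion.

```python
import random

def simulate_two_threshold(steps, lo, hi, lam_high, lam_low, mu, seed=0):
    """Sketch of a two-threshold flow control policy (illustrative only).

    The controller sets the high input rate `lam_high` when the queue
    drains to the lower threshold `lo`, and the low rate `lam_low` when
    the queue reaches the upper threshold `hi`.  A control message is
    sent only at these crossings.
    """
    rng = random.Random(seed)
    q = 0
    rate = lam_high        # input rate currently chosen by the controller
    control_msgs = 0       # number of control signals sent
    congested = 0          # time steps spent at or above the upper threshold
    for _ in range(steps):
        # Bernoulli arrival with probability `rate`; Bernoulli service
        # with probability `mu` when the queue is nonempty.
        if rng.random() < rate:
            q += 1
        if q > 0 and rng.random() < mu:
            q -= 1
        # Two-threshold rule: switch the input rate only when a threshold
        # is crossed, so control is infrequent when `lo` and `hi` are far apart.
        if q >= hi and rate != lam_low:
            rate = lam_low
            control_msgs += 1
        elif q <= lo and rate != lam_high:
            rate = lam_high
            control_msgs += 1
        if q >= hi:
            congested += 1
    return control_msgs, congested / steps
```

Widening the gap between the two thresholds reduces how often control messages are sent, at the cost of letting the queue wander higher before being throttled, which is the tradeoff the paper characterizes.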