TCP Congestion Control: Theory and Tuning Practice

TCP (Transmission Control Protocol) is one of the most important transport protocols on today's Internet; its defining feature is reliable data delivery.
Under heavy load, however, TCP transfers are prone to congestion, which degrades network performance and causes data loss.
Congestion control is therefore a key part of network performance tuning.

How TCP congestion control works. TCP congestion control is built on feedback from the network: when congestion appears, TCP lowers its sending rate to relieve the load.
It comprises four basic algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.

Slow start is the most basic of the four. A connection begins sending at a low rate and ramps up quickly (the congestion window roughly doubles every round-trip time), adjusting to network feedback as it goes; once the congestion threshold is reached or congestion is signaled, the sender backs off to relieve the pressure.
Congestion avoidance takes over from slow start above that threshold: the congestion window now grows only linearly, and when congestion occurs the sender both slows down and shrinks the congestion window to reduce further congestion.
Fast retransmit avoids waiting for a retransmission timeout: when the sender concludes from the acknowledgment stream (in practice, three duplicate ACKs) that a segment has been lost, it retransmits immediately rather than letting data linger in the network.
This greatly shortens loss-recovery time and improves the timeliness of data transfer.
Fast recovery refines fast retransmit further: duplicate ACKs show that later segments are still arriving at the receiver, so after the retransmission the sender skips slow start and instead ramps its rate back up gradually according to network feedback.

Tuning practice. Tuning TCP congestion control is a complex job that must weigh topology, traffic mix, network load, and other factors together.
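The interplay of the four algorithms above can be sketched in a few lines. This is a toy model of the congestion-window dynamics, not a real TCP stack; the function name and the fixed loss round are illustrative only.

```python
# Minimal sketch (not a real TCP stack): congestion-window evolution
# under slow start, congestion avoidance, and one loss event.

def next_cwnd(cwnd, ssthresh, loss):
    """Return (cwnd, ssthresh) after one round-trip time."""
    if loss:                       # multiplicative decrease on loss
        ssthresh = max(cwnd // 2, 2)
        return ssthresh, ssthresh  # fast recovery: resume at ssthresh
    if cwnd < ssthresh:            # slow start: double per RTT
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh      # congestion avoidance: +1 MSS per RTT

cwnd, ssthresh = 1, 16
trace = []
for rtt in range(10):
    loss = (rtt == 6)              # pretend a loss happens in round 6
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    trace.append(cwnd)
print(trace)                       # exponential, then linear, then halved
```

The trace shows the window doubling up to the threshold, creeping up by one segment per RTT, then being halved at the loss and growing linearly again.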
The TCP Congestion Threshold (ssthresh)

TCP (Transmission Control Protocol) is a transport-layer protocol (not an application-layer one) used to move data between two network nodes.
It regulates the transmission rate with a family of congestion control algorithms so as to balance throughput against latency.
One important parameter of these algorithms is the congestion threshold, better known as the slow-start threshold (ssthresh). It should not be confused with the receiver's advertised window: the advertised window is a limit imposed by the receiver, while ssthresh is maintained by the sender.
ssthresh is a key quantity in TCP congestion control: it governs how fast the amount of data the sender may have in flight grows, and so has a direct bearing on performance.
While in slow start, TCP grows the congestion window exponentially until it reaches ssthresh, at which point growth becomes linear; this is how the sending rate is ramped up under control.
When packet loss is encountered, ssthresh is cut sharply (typically to half the current window) to throttle the sender.
In this way TCP adapts its sending rate automatically to the congestion observed in the network.
Tuning the threshold can therefore improve network efficiency and transfer performance.
A related but independent parameter is the maximum segment size (MSS), the largest amount of data that can be carried in a single transmission.
Changing ssthresh does not change the MSS; it changes how many segments may be outstanding, and that is what affects transfer performance.
The threshold also affects throughput.
Raising it lets the sender stay in the fast, exponential ramp-up longer, which can increase throughput; lowering it forces an earlier switch to slow, linear growth and so restrains the sending rate.
It affects latency as well: a threshold set too high can overfill queues along the path and inflate delay, while a conservative setting keeps queues short at some cost in throughput.
In short, the congestion threshold (ssthresh) is a very important parameter of TCP congestion control with a major impact on network performance.
Set correctly, it improves transfer efficiency and keeps sending and receiving delay low, improving overall transmission performance.
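The effect of the threshold on ramp-up time is easy to see numerically. The sketch below (illustrative only; window growth simplified to doubling below ssthresh and +1 segment per RTT above it) counts how many round trips are needed to reach a target window under two threshold settings.

```python
# Illustrative sketch: how the slow-start threshold changes ramp-up time.

def rtts_to_reach(target, ssthresh, cwnd=1):
    """Round trips until the congestion window reaches `target`."""
    rtts = 0
    while cwnd < target:
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        rtts += 1
    return rtts

# With a higher threshold the exponential phase lasts longer,
# so the target window is reached in far fewer round trips.
fast = rtts_to_reach(target=64, ssthresh=64)   # exponential all the way
slow = rtts_to_reach(target=64, ssthresh=8)    # linear from window 8
print(fast, slow)
```

With ssthresh at 64 the target is hit in 6 RTTs; with ssthresh at 8, the long linear phase stretches this to 59 RTTs.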
CCIE R&S Theory, Topic by Topic: Security

Security — Contents: Log and log-input; remark; Autocommand; Privilege Levels; Time-based ACLs; Reflexive ACLs; Context-Based Access Control (CBAC); Port to Application Mapping (PAM); Lock-and-Key Security (Dynamic ACL); TCP Intercept; Unicast Reverse Path Forwarding (uRPF); AAA; IP Source Tracker; Secure Shell (SSH); Intrusion Prevention System (IPS); Zone-Based Policy Firewall; Control Plane Policing (CoPP).

Log and log-input

Overview. After a router has forwarded traffic, there is normally no way for an administrator to find out which hosts it forwarded traffic for.
However, by configuring an ACL on an interface and having the ACL log the traffic it matches, the router can record which hosts sent data and how much; the payload itself cannot be recorded.
To achieve this, use the log or log-input keyword when configuring the ACL entries, then apply the ACL to an interface.
The difference between the two: log records only the source and destination IP addresses of matching packets, while log-input additionally records the source MAC address.

Configuration

1. Using log in an ACL. Task: configure R1 to permit packets from R2 but deny packets from R3, and record the traffic volume of each.

(1) Configure the ACL (permit R2, deny R3, each entry with the log keyword):

    r1(config)# access-list 100 permit ip host 10.1.1.2 any log
    r1(config)# access-list 100 deny ip host 10.1.1.3 any log

(2) Apply the ACL:

    r1(config)# int f0/0
    r1(config-if)# ip access-group 100 in

(3) Verify: ping R4 from R2 and from R3, then check the log on R1:

    Oct 1 14:15:26: %SEC-6-IPACCESSLOGDP: list 100 permitted icmp 10.1.1.2 -> 14.1.1.4 (0/0), 5 packets
    Oct 1 14:16:46: %SEC-6-IPACCESSLOGDP: list 100 denied icmp 10.1.1.3 -> 14.1.1.4 (0/0), 5 packets

The log output on R1 shows that packets from R2 to R4 were permitted while packets from R3 to R4 were dropped.
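If many such log lines accumulate, they are easy to tally programmatically. The following is a hypothetical helper (not a Cisco tool) that parses %SEC-6-IPACCESSLOGDP lines of the shape shown above; the regex reflects only the sample lines, not every variant the IOS message can take.

```python
# Hypothetical parser for %SEC-6-IPACCESSLOGDP syslog lines.
import re

LOG_RE = re.compile(
    r"%SEC-6-IPACCESSLOGDP: list (?P<acl>\S+) (?P<action>permitted|denied) "
    r"(?P<proto>\S+) (?P<src>\S+) -> (?P<dst>\S+) \(\d+/\d+\), (?P<pkts>\d+) packets?"
)

def parse(line):
    """Return a dict of fields from one log line, or None if no match."""
    m = LOG_RE.search(line)
    return m.groupdict() if m else None

entry = parse("Oct 1 14:15:26: %SEC-6-IPACCESSLOGDP: "
              "list 100 permitted icmp 10.1.1.2 -> 14.1.1.4 (0/0), 5 packets")
print(entry["action"], entry["src"], entry["pkts"])
```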
Notes on RFC 2889

RFC 2889 Testing: Other Tests

Address learning rate test: measures how fast the DUT learns MAC addresses.
Notes: during the test, do not exceed the address table capacity or the MAC aging time. At least three ports are used: a test port, a learning port, and a monitor port. The method is the same as for the address caching capacity test, except that N is now the learning rate.
Backpressure test

The tester sends traffic above line rate, e.g. with an 88-bit interframe gap; if the receiving port sees traffic with less than a 96-bit interframe gap, the backpressure mechanism is enabled.

Maximum forwarding rate test

Measures the DUT's maximum forwarding capability: the tester offers traffic at rates between the measured throughput and line rate. For each frame size, the tester starts at the DUT's throughput rate and raises the rate in minimum increments until the maximum forwarding rate is found or backpressure appears.

Parameters for forwarding tests

Forwarding tests cover throughput, frame loss rate, and forwarding rate.
- Frame size: per RFC 2544 section 9, the recommended sizes are 64, 128, 256, 512, 1024, 1280, and 1518 bytes.
- Interframe gap (IFG): the gap between frames within a burst must be the minimum specified by the standard for the medium under test.
- Duplex mode: half duplex or full duplex.
- Intended load (ILoad): the per-port intended load, expressed as a percentage of the medium's maximum theoretical load, regardless of traffic direction or duplex mode. In half duplex, an intended load above 50% will overload the DUT.
- Addresses per port: the number of addresses to be tested on each port; it should be a power of two (1, 2, 4, 8, 16, 32, 64, 128, 256, ...).
- Test duration: 30 seconds is recommended; the duration should be adjustable between 1 and 300 seconds.
In the congestion test, if more than 100% load is offered to a port and no loss occurs, the DUT has enabled backpressure; the DUT may be unable to handle a 100% load on the input port, in which case frames may be lost on uncongested ports. If no congestion control is detected, the expected frame loss on a congested port under 150% overload is 33%.

Address caching capacity test: measures the maximum MAC address capacity per port, per module, and per device for which the DUT still forwards correctly, without flooding or loss. The address learning frame rate may be lowered to 50 frames per second or below, and the aging time set to its maximum. At least three ports are used: a learning port, a test port, and a monitor port. First the test port sends broadcast frames; next the learning port replies with frames from N distinct source addresses for the DUT to learn; then the test port sends traffic addressed to those N MAC addresses. If the monitor port receives any of that traffic, or the learning port does not receive all of it, the MAC table has overflowed; reduce N and continue by binary search to find the maximum capacity.

Errored frame filtering test: determines the DUT's behavior when given errored or abnormal frames, i.e. whether it filters them out or simply propagates them toward the destination. Every legal Ethernet frame must be detected and forwarded correctly; oversized frames (more than 1518 bytes) should be filtered out.
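The binary search used in the address caching capacity test can be sketched as follows. The DUT is simulated here by a fixed table size; in a real test the predicate would be "did the monitor port stay silent and the learning port receive everything" for a given N.

```python
# Illustrative sketch of the capacity binary search: find the largest N
# for which the (simulated) DUT forwards all N learned addresses
# without flooding. `table_size` stands in for the real MAC table limit.

def forwards_all(n, table_size=4096):
    """Simulated check: does the DUT hold n addresses without overflow?"""
    return n <= table_size

def max_capacity(lo=1, hi=1 << 20):
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if forwards_all(mid):
            lo = mid          # all mid addresses were forwarded: go up
        else:
            hi = mid - 1      # table overflowed: shrink N
    return lo

print(max_capacity())
```

Each probe halves the search interval, so even a table of a million entries is pinned down in about 20 trials.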
IPsec Kernel Parameters

The main IPsec tuning parameters (as found in Openswan/Libreswan-style configuration) include the following. Note that only some of them are yes/no switches; the others take numeric values:

1. disable_port_floating (yes/no, default no): disables NAT-T port floating, i.e. setting it to yes turns NAT traversal off.
2. force_keepalive (yes/no, default no): forces the sending of NAT-T keepalives.
3. keep_alive (a time in seconds, not yes/no): the interval between NAT-T keepalive packets.
4. oe (yes/no, default no): enables Opportunistic Encryption.
5. nhelpers (a number, not yes/no): how many pluto helper processes (threads) perform cryptographic operations.
6. crlcheckinterval (a time in seconds, not yes/no): the interval between CRL checks.
7. forwardcontrol: obsolete; this option is no longer used.

These parameters mainly affect IPsec performance and security.
Their exact meaning and syntax may differ between operating systems and IPsec implementations, so consult the documentation of your particular implementation before changing them.
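As a sketch, an Openswan/Libreswan-style `config setup` section using these parameters might look like the following. The values are illustrative only, and keyword support varies by implementation and version, so verify each one against your implementation's manual:

```
config setup
    # NAT traversal
    disable_port_floating=no   # keep NAT-T port floating enabled
    force_keepalive=yes        # always send NAT-T keepalives
    keep_alive=20              # keepalive interval, seconds
    # pluto tuning
    nhelpers=2                 # crypto helper processes
    crlcheckinterval=600       # CRL check interval, seconds
```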
2G-to-4G Interworking Parameters Explained

GPRIO: the GSM-side priority; nominally 1, but we currently set it to 3.
WPRIO: not used for now.
PSTHR: ALWAYS, meaning measure all the time; since GSM has the lowest priority, the higher-priority LTE layer is measured continuously.
LPTHR: 4. Has no effect here; it only applies when LTE is the lower-priority layer.
HRPIO: 5.
TIMEH: 5. These two likewise have no effect; they also apply only when LTE is the lower-priority layer.
UTTR, UQRL: no effect; they apply to UTRAN networks.
INDEX: the index of this GSM-plus-LTE neighbor frequency entry, counting from 1.
MCC, MNC, TAC: identify the LTE frequency. The TAC is currently set to the GSM cell's own LAC; its exact effect is unclear and still needs testing.
FREQ: the LTE frequency being added; once added, reselection to LTE cells on this frequency becomes possible.
LTEMB: LTE neighbor bandwidth, i.e. how much LTE bandwidth GSM should measure; LTE here is a 20 MHz carrier, and 100 is the number of resource blocks.
LTECP: the priority of cells on this LTE frequency; set to 7, the highest value.
LTERUT: 6, i.e. 2 * 6 = 12, and -128 + 12 = -116.
When LTE is the higher-priority layer, the GSM-to-LTE reselection threshold is met once the LTE cell's level is above -116 dBm.
LTERLT: the reselection threshold used when the LTE frequency has lower priority than GSM; not used at present.
LTERXM: -128, the minimum access level for reselection to LTE.
The last four parameters relate to LTE cell barring; they are unused for now and not yet fully understood.
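The LTERUT arithmetic above can be checked in one line. The formula below simply restates the note's own calculation (threshold = LTERXM + 2 * LTERUT); the function name is illustrative, not from any vendor specification.

```python
# Quick check of the reselection-threshold arithmetic from the notes:
# LTERUT is in 2 dB steps, added to the LTERXM floor of -128 dBm.

def lte_reselect_threshold(lterxm_dbm=-128, lterut=6):
    return lterxm_dbm + 2 * lterut

thr = lte_reselect_threshold()
print(thr)   # reselect to LTE once its level exceeds this many dBm
```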
Commands for adding 2G-to-4G neighbors: see the attachment 添加2-4G邻区指令.xlsx.
Typical ICAP MS Tuning Parameters

ICAP (Internet Content Adaptation Protocol) is a protocol for adapting and modifying content carried over HTTP.
With ICAP, an ICAP server can inspect, modify, or filter content in transit according to specific requirements and return the processed content to the client.
ICAP also gives other network applications a way to interact with an HTTP proxy server, enabling functions such as virus scanning, content filtering, and ad blocking.
ICAP tuning parameters are the settings that need to be adjusted and configured when the protocol is deployed.
Set according to actual needs, they determine the performance and effectiveness achieved.
Several typical tuning parameters of an ICAP MS (ICAP Management Server) follow.
1. ICAP server address and port: the basis of all communication between client and ICAP server.
The ICAP client must be configured with the ICAP MS address and port so that it can reach the server.
These are normally fixed when the ICAP server is deployed and supplied to clients by the system administrator.
2. ICAP request timeout: the longest time to wait for a response after sending an ICAP request.
If the server does not answer within the timeout, the client may treat it as unavailable and react accordingly.
The timeout should be tuned to the network latency and the ICAP server's load so that requests are still answered promptly.
3. ICAP connection pool size: the number of concurrent connections the client maintains to the ICAP server.
A larger pool increases concurrency and speeds up processing.
An oversized pool, however, ties up system resources and can degrade performance.
The pool size must therefore balance available resources against the required concurrency.
4. ICAP request retries: the number of times the client re-sends an ICAP request after a failure.
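The three knobs described above typically surface in client code as in the sketch below. This is not a real ICAP library; the server endpoint, constant names, and retry helper are all illustrative.

```python
# Illustrative sketch (not a real ICAP library) of the tuning knobs:
# request timeout, connection pool size, and retry count.
import queue
import socket

ICAP_HOST, ICAP_PORT = "icap.example.net", 1344   # placeholder endpoint
REQUEST_TIMEOUT = 5.0      # seconds to wait for an ICAP response
POOL_SIZE = 4              # concurrent connections kept to the server
MAX_RETRIES = 2            # re-sends after a failed request

# Idle-connection pool (created but left empty in this sketch).
pool = queue.Queue(maxsize=POOL_SIZE)

def send_with_retries(send_once):
    """Run `send_once()` up to 1 + MAX_RETRIES times, re-raising at the end."""
    last_err = None
    for _attempt in range(1 + MAX_RETRIES):
        try:
            return send_once()
        except (socket.timeout, OSError) as err:
            last_err = err
    raise last_err

# Example: a request that always times out is retried, then gives up.
def failing_request():
    raise socket.timeout("no ICAP response within %.1fs" % REQUEST_TIMEOUT)

try:
    send_with_retries(failing_request)
except socket.timeout:
    print("gave up after", 1 + MAX_RETRIES, "attempts")
```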
Cisco Catalyst 4500 Series Core Switches: Physical Parameters

The Cisco Catalyst 4500 series offers resiliency, giving strong control over converged networks.

Overview. The Cisco® Catalyst® 4500 series delivers non-blocking Layer 2-4 switching with integrated resiliency, further strengthening control of converged networks. Highly available, integrated voice, video, and data networking provides business continuity for enterprises and metro Ethernet customers deploying Internet-based business applications.

As the next generation of the platform, the Cisco Catalyst 4500 series comprises four new chassis: the Catalyst 4510R (ten slots), Catalyst 4507R (seven slots), Catalyst 4506 (six slots), and Catalyst 4503 (three slots).
Integrated resiliency enhancements in the series include 1+1 supervisor engine redundancy (Catalyst 4507R/4510R only), integrated IEEE 802.3af-compliant Power over Ethernet, software-based fault tolerance, and 1+1 power supply redundancy.
Resiliency integrated in both hardware and software minimizes network downtime, improving productivity, profitability, and customer success.
As a key part of Cisco AVVID (Architecture for Voice, Video and Integrated Data), the Catalyst 4500 extends control to the network edge through intelligent network services, including advanced quality of service (QoS), predictable performance, advanced security, comprehensive management, and integrated resiliency.
Because the Catalyst 4500 series is compatible with Catalyst 4000 series line cards and supervisor engines, it extends the deployment life of the Catalyst 4000 in converged networks.
By minimizing recurring operating expenses and improving return on investment (ROI), this approach lowers total cost of ownership.

Figure 1: Cisco Catalyst 4500 series chassis (Catalyst 4503, Catalyst 4507R, Catalyst 4510R, Catalyst 4506). In all, the series offers four chassis and three supervisor engines.
Charge-Sensitive TCP and Rate Control in the Internet

Richard J. La and Venkat Anantharam
Department of Electrical Engineering & Computer Sciences
University of California at Berkeley
hyongla@, ananth@

Abstract—We investigate the fundamental problem of achieving the system optimal rates in a distributed environment, which maximize the total user utility, using only the information available at the end hosts. This is done by decomposing the system problem into two subproblems – network and user problems – and introducing an incentive-compatible pricing scheme, while maintaining proportional fairness. We demonstrate that when users update their parameters by solving their own optimization problem, at an equilibrium the system optimum is achieved. Furthermore, this algorithm does not require any explicit feedback from the network and can be deployed over the Internet with modifications only on the end hosts. In the second part of the paper we model as a noncooperative game the case where the choice of each user's action has a nonnegligible effect on the price per unit flow at the resources and investigate the Nash equilibria of the game. We show, in the simple case of a single bottleneck, that there exists a unique Nash equilibrium of the game. Further, as the number of users increases, the unique Nash equilibrium approaches the system optimum.

I. INTRODUCTION

As the Internet explodes in size and in the number of users, one of the challenging questions network designers face is how to provide a fair and efficient allocation of the available bandwidth. To this end researchers have proposed many different rate allocation mechanisms. In the current Internet, many connections use the Transmission Control Protocol (TCP), which is a window-based congestion control mechanism, so as to control the transmission rate. TCP however does not necessarily lead to an efficient or fair allocation of bandwidth among the connections [2], [4]. The fact that the Internet is in a public domain and potentially noncooperative
environment has stim-ulated much work on pricing mechanisms.Researchers have proposed different schemes based on time and volume measure-ments[8]or on a per-packet charge[13].Furthermore in[3]it is shown thatflat rate charging leads to inefficiency and a large number of low-usage users end up subsidizing a small number of high-usage users.This argues that usage-based pricing is de-sirable.Recently Kelly[6]has suggested that the problem of achiev-ing the system optimum can be decomposed into two subprob-lems–network and user problems–and a system optimum is achieved when users’choices of charges and the network’s choice of allocated rates are in equilibrium.Kelly et al.[7]have proposed two classes of algorithm which can be used to imple-ment proportionally fair pricing and which solve a relaxation of the system optimum problem.The algorithm in[7]requires explicit feedback from the switches inside network.Due to the size of the Internet,an algo-rithm that requires an extensive modification inside the network may not be suitable for deployment.Further,many connections in the Internet use TCP to control their transmission rates.In this paper we investigate the fundamental problem of achieving the system optimum that maximizes the aggregate utilities of the users,using only the information available at the end hosts.We introduce an algorithm that may be deployed over the Internet without significant modification within the network.This algo-rithm can be thought of as a natural extension of an algorithm in-troduced in[11].The algorithm effectively decomposes the sys-tem problem into user and network problems.While the users solve the user optimization problems on a larger time scale,the underlying window-based transmission rate control mechanism solves the network problem on a smaller time scale.The unique fixed point of the mapping defined from the algorithm is proven to result in the system optimal rates.The rest of the paper is organized as follows.Wefirst moti-vate 
the problem and describe the model used in the analysis and the methodology adopted by the algorithm,followed by some related work.Section IV explains the algorithm,which is fol-lowed by a numerical example.Section VI models the problem as a noncooperative game and discusses the Nash equilibrium points of the game.II.M OTIVATION,B ACKGROUND&M ODELIn this section wefirst motivate our problem by describing some of the currently most popular congestion control mecha-nism used in the Internet.We discuss some of the problems with the current congestion control mechanisms.We then formulate the problem of maximizing the total users’utility and describe the model we will use to present our congestion control pro-posal.A.MotivationFrom its advent the Internet has been decentralized,relying on disciplined behavior from the users.Without a centralized authority,the network users have a great deal of freedom as to how they share the available bandwidth in the network.The in-creasing complexity and size of the Internet renders centralized rate allocation control impractical.In view of these constraints,researchers have proposed many rate allocation algorithms to be implemented at the end hosts in a decentralized manner.These algorithms can be roughly cat-egorized into two classes:rate-based algorithms and window-based algorithms.A rate-based algorithm directly controls the transmission rate of the connection,based on either feedback from the network[7]or on measurements taken at the end host.A window-based algorithm adjusts the congestion window size, which is the maximum number of outstanding packets,in order2to control the transmission rate and backlog inside the network associated to the connection.The most widely deployedflow/congestion control mecha-nism in current Internet is the Transmission Control Protocol (TCP),which is a window-based congestion control mechanism. 
TCP however does not necessarily lead to a fair or efficient rate allocation of the available bandwidth.It is well known that TCP as currently implemented suffers from a high packet loss rate and a delay bias.The high packet loss rate is a conse-quence of periodic oscillation of the window sizes,while the delay bias is the result of a discrepancy in window update rate among different connections.In order to address these issues Brakmo et al.[1]have introduced another version of TCP,named TCP Vegas,with a fundamentally different bandwidth estima-tion scheme.They have demonstrated that TCP Vegas results in a lower packet loss,which in turn leads to higher efficiency, and have suggested that TCP Vegas does not suffer from a delay bias,which leads to a more fair rate allocation of the band-width.This was later proved by Mo et al.[10]in the case that there is a single bottleneck in the network.Mo and Walrand[11]in another paper,have proposed and studied another fair window-based end-to-end congestion con-trol mechanism,which is similar to TCP Vegas but has a more sophisticated window updating rule.They have shown that pro-portional fairness can be achieved by their(,1)-proportionally fair algorithm and that max-min fairness can be achieved as a limit of-proportionally fair algorithms as goes to [11].Clearly fairness is a desirable property for a rate allocation mechanism.There is,however,another aspect of a rate alloca-tion problem that has not been addressed by the previous mecha-nisms.Due to the various requirements of different applications it is likely that the users will have different utility functions.This suggests that although fairness is a desirable property,fairness alone may not be a good objective.We suggest that a good rate allocation mechanism should not only be fair,but should also allocate the available bandwidth in such a way that the overall utility of the users is maximized.In this paper we describe a pricing mechanism that achieves these goals 
without requiring knowledge of the users’utility functions and without requiring any explicit feedback from the network.B.Model and BackgroundWe consider a network with a set of resources or links and a set of users.Let denote thefinite capacity of link. Each user has afixed route,which is a non-empty subset of.We define a zero-one matrix,where1if link is in user’s route and0otherwise.When thethroughput of user is,user receives utility.We as-sume that the utility is an increasing,strictly concave and continuously differentiable function of over the range0.Furthermore,we assume that the utilities are additive so that the aggregate utility of rate allocation is The most popular versions of TCP in the Internet are TCP Tahoe and Reno.Fairness refers to max-min fairness that is discussed later.Proportional fairness is introduced by Kelly in[6],and is discussed in the next subsection.Such a user is said to have elastic traffic..Let and. In this paper we study the feasibility of achieving the maximum total utility of the users in a distributed environment,using only the information available at the end hosts.Under our model this problem can be formulated as follows.SYSTEM(U,A,C):(1)Thefirst constraint in the problem says that the total rate through a resource cannot be larger than the capacity of the resource. Given that the system knows the utility functions of the users, this optimization problem may be mathematically tractable. However,in practice not only is the system not likely to know ,but also it is impractical for a centralized system to compute and allocate the user rates,due to the large scale of the network. 
Hence,Kelly in[6]has proposed to consider the following two simpler problems.Suppose that given the price per unitflow,user selects an amount to pay per unit time,,and receives aflow(2)The network,on the other hand,given,attempts to maximize the sum of weighted log functions.Then the network’s optimization problem can be written as follows[6].NETWORK(A,C;p):(3)Note that the network does not require the true utility func-tions,and pretends that user’s utility function is to carry out the computation.It is shown in[6]that one can alwaysfind vectors, and such that solves for all ,solves,and for all.Further,the rate allocation is also the unique solu-tion to.This suggests that the problem of solving can be achieved by an algorithm that solves for a given at time3on a smaller time scale,and drives to on a larger time scale.Another important aspect of a rate allocation mechanism is fairness.There are many different definitions for fairness,the most commonly accepted one being max-min fairness.A rate allocation is max-min fair if a user’s rate cannot be increased without decreasing the rate of another user who is already re-ceiving smaller rate.In this sense,max-min fairness gives an absolute priority to the users with smaller rates.However,often-times in order to achieve a max-min fair allocation the efficiency of the network needs to be sacrificed.In order to handle this tradeoff between fairness and efficiency Kelly[6]has proposed another definition of fairness.A vector of ratesis said to be weighted proportionally fair with weight vector if is feasible,i.e.,0and,and for any other feasible vector,the following holds:(4) whereThese equations can be motivated as follows.Each userfirst sets a price per unit time it is willing to pay.Then,every user adjusts its rate based on the feedback provided by the resources in the network in such a way that at an equilibrium the price it This is called rates per unit charge are proportionally fair in[6]is willing to pay equals its 
aggregate cost.The feedback from a resource can be thought of as a congestion indicator,re-quiring a reduction in theflow rates going through the resource. Kelly et al.show that,forfixed,under some conditions on ,the above system of differential equations con-verge to a point that maximizes the following expressionFurther,if the users update their willingness to pay ac-cording towhile evolves according to(4),then converges to a unique stable point that maximizes the following revised expres-sion(5)Note that thefirst term in(5)is the objective function in our SYS-TEM(U,A,C)problem.Thus,the algorithm proposed by Kelly et al.solves a relaxation of the problem.B.TCP and a Fair End-to-end Congestion Control Algorithm Unlike a connection with a rate-based congestion control al-gorithm,a TCP connection adjusts its rate by updating its win-dow size,based on the estimated congestion state of the net-work.There are several different versions of TCP,which can be categorized into two classes,based on their bandwidth esti-mation schemes.TCP Tahoe and Reno use packet losses as an indication of congestion in the network,while TCP Vegas[1] and the algorithm proposed by Mo and Walrand[11]use the estimated queue size to adjust the congestion window size. 
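For the single-bottleneck case, the NETWORK(A, C; p) problem discussed above has a closed-form solution that makes the proportional-fairness interpretation concrete: maximizing the sum of p_r log x_r subject to the capacity constraint gives each user a rate proportional to its willingness to pay. The sketch below is a minimal illustration of that one-link case only (the paper treats general topologies), with hypothetical numbers.

```python
# Single-link solution of NETWORK(A, C; p): maximize sum_r p_r * log(x_r)
# subject to sum_r x_r <= C. The KKT conditions give x_r = p_r / mu with
# the link "price" mu = sum(p) / C, i.e. rates proportional to payments.

def proportional_rates(p, capacity):
    mu = sum(p) / capacity          # Lagrange multiplier for the link
    return [p_r / mu for p_r in p]

rates = proportional_rates(p=[1.0, 2.0, 5.0], capacity=16.0)
print(rates)                        # rates split 1:2:5, filling the link
```

Note that the link is exactly filled at the optimum, and doubling one user's payment doubles its share relative to the others.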
Over the years researchers have observed that the most widely used versions of TCP,which are TCP Tahoe and Reno,exhibit several undesirable characteristics such as a delay bias and a high packet loss rate.In order to deal with these issues Mo and Walrand[11]have investigated the existence of a fair window-based end-to-end congestion control algorithm that updates the congestion window size more intelligently.Wefirst present thefluid model of the network that describes the relationship between the window sizes,rates,and queue sizes.Throughout the paper we assume that the switches ex-ercise thefirst-in-first-out(FIFO)service discipline.This model can be represented by the following equations:(6)(7)(8)(9) whereWe call these loss-based and queue-based TCP,respectively.4is the congestion window size of user,is thefixed prop-agation delay of route,and denotes the backlog at link buffer.For simplicity of analysis we assume that the buffer sizes are infinite.Thefirst condition in(6)represents the capacity constraint.The second constraint says that there is backlogged fluid at a resource only if the total rate through it equals the ca-pacity.The third constraint follows from that the window size of connection should equal the sum of the amount offluid in transit and the backloggedfluid in the buffers,i.e.,where denotes connection’s total backlog in the buffers. Let,where is connection’s maximum congestion window size announced by the receiver,and. It has been shown in[11]that the rate vector is a well defined function of the window sizes,and we can define a function that maps a window size vector to a rate vector. 
Under the(,1)-proportionally fair algorithm by Mo and Wal-rand[11]each connection has a target queue size0and attempts to keep packets at the switch buffers similarly as in enhanced TCP Vegas.Let,and denote the round trip delay,the congestion window size,and the rate of connection at time,respectively.Suppose that each connec-tion has afixed target queue size.Connection updates its window size according to the following differential equa-tionwhereThis algorithm is called a-proportionally fair algorithm. They prove that the above algorithm converges to a unique sta-ble point of the system for allfixed and the max-min fair al-location is achieved as a limit as.However,since their We use to denote when there is no confusion.work is motivated by the fundamental question of the existence of a fair end-to-end congestion control mechanism,they have not considered the problem of maximizing the aggregate utility of the users,while maintaining fairness.IV.C HARGE-S ENSITIVE TCPThe algorithms suggested in[7],[5]are based on the assump-tions that the network can provide the necessary feedback to the users,and users adjust their rates based on the feedback infor-mation.However,in the current Internet,many,if not most, connections use TCP,which is a window-based congestion con-trol mechanism,to control their rates.Thus,users control the rates only through the window sizes.There are several arguments for a window-based congestion control algorithm in the Internet over a rate-based algorithm. 
One of the arguments is the stability of the Internet.Suppose that users use a rate-based congestion control algorithm.If they have incorrect estimates of the available bandwidth and release packets into the network at a rate that is much higher than they should,the network could temporarily experience an extremely high packet loss rate due to buffer overflows and may take a long time to recover from it.A window-based congestion control algorithm not only controls the transmission rate,but also limits the maximum number of outstanding packets according to the congestion window size.This helps alleviate the effect of the inaccurate estimation of the available bandwidth by the users. This is an important advantage of a window-based algorithm.In an open system such as the Internet,it is important to control the number of packets the connections can keep within the network for stability and to ensure that no users can arbitrarily penalize other users by increasing their rates during congestion periods due to incorrect estimates.In this section we propose an algorithm that can be imple-mented over the current Internet without extensive modifica-tions inside the network.The algorithm requires simple mod-ifications of the already existing TCP at the end hosts.We demonstrate,using afluid model,that at an equilibrium of the algorithm,the resulting rates are the optimal rates that solve.A.Pricing SchemeSection III-B tells us that,given,the users can reach a solution to NETWORK(A,C;p)using a window-based congestion control mechanism,namely the-proportionally fair algorithm of Mo and Walrand.Throughout the rest of this paper,we refer to this-proportionally fair algorithm when we say TCP.The challenge now is to design a mechanism that drives the users to the right where the resulting rate alloca-tion solves.In this subsection we describe a simple pricing mechanism that can achieve this.We show that the price a user pays can be directly computed by the user with-out any feedback 
from the network,by using the same mecha-nism already built into TCP.When the total rate through a link is strictly smaller than its capacity,there is no congestion,as each user receives its de-sired rate without contention from other users.However,when the total rate reaches the link capacity any increase in a conges-tion window size by a user in an attempt to increase its own5rate results in increased backlog at the resource.This leads to higher queueing delay at the resource.If the users are delay sensitive this increase in queueing delay represents an increase in the implicit cost the users pay through larger delay.From the perspective of the network,this increase in queueing delay may be interpreted as an increase in undesirable delay cost for the system.Suppose that the network attempts to recover the increase in the system cost due to queueing delay through a pricing mecha-nism as follows.Let,denote the backlog at resource .When the total rate through the resource is strictly smallerthan its capacity,we assume that there is no backlog from(7). When resource is congested,i.e.,the total rate through it equals its capacity,the resource charges a price,where the price per unitflow is the queueing delay at the resource,i.e.,(10)We have,however,claimed that no information is explicitly fed back to the end hosts from the network.Hence,the switches are not allowed to send any information regarding the current price per unitflow back to the end users.In order to maximize the objective function in(10)given the current prices per unitflow,a user only needs to know the total price per unitflow of its route,but not the price at each resource. 
We now show that users using TCP can compute their prices per unitflow without any help from the network.Suppose that each connection knows the propagation delay of its route.In practice this is done by keeping track of the minimum round trip time of the packets[1],[10].Suppose that the target queue sizes of the users are given by,and the users window sizes converge to the stable point of TCP,where the resulting rates are weighted proportionally fair with weight vector.Then,the price per unit time user pays,,can be computed as follows:(11)where is connection’s queue size at resource.Here we have implicitly assumed that the queue size of each connection at a congested resource is proportional to its rate,which is a consequence of the assumption that the switches exercise FIFO service discipline.Therefore,at the stable point of TCP for a fixed the price of a user equals its target queue size,and the user can infer its own price from its target queue size.One important aspect of a pricing scheme is fairness.The price a connection pays for using resource should be proportional to its rate.In other words,the price per unitflow for each connection at a resource,should be the same.This is obviously the case with the above pricing scheme.Moreover,it is easy to see that when a new user comes into the network,the We say the cost is implicit because the users do not necessarily have to pay in monetary form.price the new user pays is exactly the increase in the total system cost,i.e.,the increase in the total queueing delay experienced by the packets.This can be seen from that the price user pays per unit time equals its target queue size and the total system cost per unit time is given by.Suppose that is some small positive constant and users up-date their willingness to pay or price,at time accord-ing to(12)where is the price per unitflow along user’s route at time.We assume that the price updates take place in a much larger time scale,while users allow their window sizes to 
con-verge to a point close to the unique stable point of TCP forfixed ().This is a natural assumption since the window sizes typically take only seconds to converge and users are not likely to keep changing their parameters before estimating the current throughput.The intuition behind the updating rule is as follows.At time based on the price per unitflow at time user computes its optimal price that maximizes its net utility. If the current price per unitflow is too high,user prefers to wait till the price per unitflow is lower.In such a case user needs to probe the network for the available bandwidth and price per unit flow along its route.In order to measure the residual bandwidth not used by the other users,if there is any,and the price per unit flow,user needs to set its window size large enough so that it utilizes all residual capacity not taken by the other users.Hence, we assume that user sets the target queue size,which is its wil-ingness to pay in our model,to some small positive constant that is arbitrarily small,so that each user can estimate its well defined price per unitflow,which is strictly positive,along the route.We now let0and consider the limit case as0.In the limit case as0(12)can be written as6 .The updating rule in(13)can be mo-tivated as follows.If the price per unitflow is larger thanor equal to,then user receives a negative net utilityfrom any positive.Thus,user should wait till the price perunitflow is smaller than.If the price per unitflow isstrictly smaller than,then there exists a unique solutionto problem in(2).This solution is the uniquesuch that(14) where,and is the price per unitflow vector at the unique stable point of TCP with price vector.Afixed pointof the mapping is a vector in such that. 
Theorem1:There exists a uniquefixed point of the map-ping,and the resulting rate allocation from is the optimal rate allocation that solves.Proof:The proof is given in Appendix A.whereand that the initial price vector is such that0.This ensures that0for all 1.Suppose that the utility functions satisfy the following addi-tional assumption.(A2)There exists0such that(i)for all(ii)for allwhere and.p (p )tFig.2.Assumption(A2).Consider the following update scheme.Wefirst assume that all users are synchronized and model the user updates with a discrete-time model.At each period1,every user updates its price based on its rate and price as follows:and7 scheme,i.e.,Proof:The proof is given in Appendix B.where0.We assume that the sets,are infinite and if is a sequence of elements in that tends to infinity,thenThe update scheme described here is called a totally asyn-chronous update scheme.Theorem3:In a single bottleneck case,under assumptions (A1)and(A2),the user prices con-verge to as under the totally asyn-chronous update scheme.The proof of Theorem3is omitted in this paper.Refer to[9]for the proof.Let us give a few examples of user utility functions that satisfy the assumptions(A1)-(A2).Suppose that user’s utility function has a form1.,where0and012.,where0and01 Then,one can easily show that with appropriate budget con-straints on the users,both assumptions(A1)and(A2)hold.A numerical example of the convergence of user prices to the uniquefixed point of the mapping with the second type of utility function is given in the next section.In this section for the purpose of analysis we have assumed that users are delay insensitive.However,if they are sensitive to delay and the cost due to queueing delay is given by in(11), then the algorithm does not require any pricing mechanism.Re-call that the purpose of the pricing mechanism is to impose the system cost due to queueing delay on the users in an incentive-compatible way.In this case when users maximize their utility functions 
with explicit delay cost,the resulting rate allocation at an equilibrium are the optimal rates.V.N UMERICAL E XAMPLEIn this section we give a numerical example of a simple net-work and demonstrate the convergence of users’parameters through simulation.Although the convergence results have been proved only for single bottleneck cases,the simulation results indicate that the user rates converge to the system optimal rates even in general networks with the utility functions satisfying the6 : e2-B7-B6-e6S89 : e2-B2-B8-B4-e48 : e3-B3-B4-e47 : e1-B1-B2-e210 : e2-B2-B7-B5-e511 : e1-B1-B2-B8-B4-e41 : e2-B2-B8-B4-E42 : e2-B2-B7-B5-e53 : e1-B1-B2-B8-B4-e44 : e1-B1-B7-B5-e55 : e3-B3-B8-B7-B6-e6Fig.3.Topology of the simulated network and the users’routes.0.5840.2830.5210.7241.0760.4240.4591.5001.1750.4920.5218Fig.4.Convergence of user prices and rates.VI.N ASH E QUILIBRIUMIn the previous sections a user is assumed not to anticipate the effect of its own action on the prices.In other words,a user, given an opportunity to update its price,solves(13)to compute its best price as a function offixed.However,the price per unitflow of user is a function of,i.e.,,where.This as-sumption may be reasonable in the Internet due to its size and the number of users.In some special cases,however,as a user occupies more and more of a resource in the network,eventu-ally the user will have a nonnegligible impact on prices and an incentive to estimate its impact on the prices[7],i.e.,user may find it beneficial to solve,where andis user’s budget constraint.If an agent on a computer is acting on behalf of the user,the user may specify the maximum amount In this case users are said to have market power.it is willing to pay per unit time as done in[6].In this case the price per unitflow depends only on,and a user with positive can easily estimate and.Hence,the user can compute its own rate as it changes its parameter,and it is reasonable to assume that users can easily anticipate the effect of its 
parameter on the price per unitflow.Since the users are likely to be interested only in maximizing their own net benefit,i.e.,users are selfish,it is natural to model the problem as a noncooperative game,where the action space for user,is.Let usfirst define a Nash equilibrium point(NEP)of this game.A price or target queue size vectoris an NEP if,for all,the following holds:(17) where denotes()andThe following theorem states that in the simple case of single bottleneck link there exists a unique NEP of the game. Theorem4:In the case of simple network where there is a single bottleneck link withfinite capacity and afinite number of users,there exists a unique Nash equilibrium of the game.Proof:The proof is given in Appendix C.。