Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start[1] and a congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet.[2][3][4] Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in the protocol stacks of operating systems of computers that connect to the Internet.
To avoid congestive collapse, TCP uses a multi-faceted congestion-control strategy. For each connection, TCP maintains a CWND, limiting the total number of unacknowledged packets that may be in transit end-to-end. This is somewhat analogous to TCP's sliding window used for flow control.
Additive increase/multiplicative decrease
The additive increase/multiplicative decrease (AIMD) algorithm is a closed-loop control algorithm. AIMD combines linear growth of the congestion window with an exponential reduction when congestion occurs. Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link.[5]
This is the algorithm that is described in RFC 5681 for the "congestion avoidance" state.[6]
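As a rough sketch, the per-round-trip AIMD rule can be written in Python as follows; the 1460-byte segment size, the one-segment additive step, and the halving factor are typical illustrative choices rather than a definitive implementation:

```python
MSS = 1460  # bytes; a typical maximum segment size, used here only for illustration

def aimd_update(cwnd: float, loss_detected: bool) -> float:
    """Congestion window after one round trip under the AIMD rule."""
    if loss_detected:
        return max(cwnd / 2, 2 * MSS)   # multiplicative decrease on congestion
    return cwnd + MSS                   # additive increase of one segment per RTT
```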
Congestion window
In TCP, the congestion window (CWND) is one of the factors that determines the number of bytes that can be sent out at any time. The congestion window is maintained by the sender and is a means of preventing a link between the sender and the receiver from becoming overloaded with too much traffic. This should not be confused with the sliding window maintained by the sender, which exists to prevent the receiver from becoming overloaded. The congestion window is calculated by estimating how much congestion there is on the link.
When a connection is set up, the congestion window, a value maintained independently at each host, is set to a small multiple of the maximum segment size (MSS) allowed on that connection. Further variance in the congestion window is dictated by an additive increase/multiplicative decrease (AIMD) approach. This means that if all segments are received and the acknowledgments reach the sender on time, some constant is added to the window size; the exact increase depends on the algorithm in use.
A system administrator may adjust the maximum window size limit, or adjust the constant added during additive increase, as part of TCP tuning.
The flow of data over a TCP connection is also controlled by the receive window advertised by the receiver. A sender may send no more data than the smaller of its own congestion window and the receive window allows.
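A minimal sketch of how these two limits combine at the sender; the function and variable names are illustrative, not taken from any particular implementation:

```python
def usable_window(cwnd: int, rwnd: int, bytes_in_flight: int) -> int:
    """Bytes the sender may still put on the wire right now.

    cwnd is the congestion window, rwnd the receiver's advertised window,
    and bytes_in_flight the data already sent but not yet acknowledged.
    """
    return max(0, min(cwnd, rwnd) - bytes_in_flight)
```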
Slow start
Slow start, defined by RFC 5681,[7] is part of the congestion control strategy used by TCP in conjunction with other algorithms to avoid sending more data than the network is capable of forwarding, that is, to avoid causing network congestion.
Slow start begins initially with a congestion window size (CWND) of 1, 2, 4 or 10 MSS.[8][3]: 1 The value for the congestion window size can be increased by 1 MSS with each acknowledgement (ACK) received, effectively doubling the window size each RTT.[a]
The transmission rate is increased by the slow-start algorithm until either a packet loss is detected, the receiver's advertised window (rwnd) becomes the limiting factor, or the slow start threshold (ssthresh) is reached. ssthresh determines whether the slow start or the congestion avoidance algorithm is used, and is a value set to limit slow start.
If the CWND reaches ssthresh, TCP switches to the congestion avoidance algorithm, in which the window should be increased by at most 1 MSS for each RTT. A common formula is that each new ACK increases the CWND by MSS * MSS / CWND. This grows the window almost linearly and provides an acceptable approximation.
If a loss event occurs, TCP assumes that it is due to network congestion and takes steps to reduce the offered load on the network. These measures depend on the exact TCP congestion avoidance algorithm used.
When a TCP sender detects segment loss using the retransmission timer and the given segment has not yet been resent, the value of ssthresh must be set to no more than half of the amount of data that has been sent but not yet cumulatively acknowledged or 2 * MSS, whichever value is greater.
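A minimal Python sketch of the growth and timeout rules described above, assuming a 1460-byte MSS; the names are illustrative and real implementations track considerably more state:

```python
MSS = 1460  # bytes; illustrative segment size

def on_ack(cwnd: float, ssthresh: float) -> float:
    """Window growth on each new ACK, per the rules described above."""
    if cwnd < ssthresh:
        return cwnd + MSS               # slow start: +1 MSS per ACK (doubling per RTT)
    return cwnd + MSS * MSS / cwnd      # congestion avoidance: roughly +1 MSS per RTT

def on_rto(flight_size: float) -> tuple[float, float]:
    """Reaction to a retransmission timeout: returns (new cwnd, new ssthresh)."""
    ssthresh = max(flight_size / 2, 2 * MSS)
    return MSS, ssthresh                # restart slow start from one segment
```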
- TCP Tahoe
- When a loss occurs, a retransmission is sent, half of the current CWND is saved as ssthresh, and slow start begins again from its initial CWND.
- TCP Reno
- A fast retransmit is sent, half of the current CWND is saved as ssthresh and as the new CWND, thus skipping slow start and going directly to the congestion avoidance algorithm. The overall algorithm here is called fast recovery.
Slow start assumes that unacknowledged segments are due to network congestion. While this is an acceptable assumption for many networks, segments may be lost for other reasons, such as poor data link layer transmission quality. Thus, slow start can perform poorly in situations with poor reception, such as wireless networks.
The slow start protocol also performs badly for short-lived connections. Older web browsers would create many consecutive short-lived connections to the web server, and would open and close the connection for each file requested. This kept most connections in the slow start mode, which resulted in poor response time. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server. Connections, however, cannot be reused for the multiple third-party servers used by web sites to implement web advertising, sharing features of social networking services,[9] and counter scripts of web analytics.
Fast retransmit
Fast retransmit is an enhancement to TCP that reduces the time a sender waits before retransmitting a lost segment. A TCP sender normally uses a simple timer to recognize lost segments. If an acknowledgment is not received for a particular segment within a specified time (a function of the estimated round-trip delay time), the sender will assume the segment was lost in the network and will retransmit the segment.
Duplicate acknowledgment is the basis for the fast retransmit mechanism. After receiving a packet, an acknowledgement is sent for the last in-order byte of data received. For an in-order packet, this is effectively the last packet's sequence number plus the current packet's payload length. If the next packet in the sequence is lost but a third packet in the sequence is received, then the receiver can only acknowledge the last in-order byte of data, which is the same value as was acknowledged for the first packet. The second packet is lost and the third packet is not in order, so the last in-order byte of data remains the same as before. Thus a duplicate acknowledgment occurs. The sender continues to send packets, and a fourth and fifth packet are received by the receiver. Again, the second packet is missing from the sequence, so the last in-order byte has not changed. Duplicate acknowledgments are sent for both of these packets.
When a sender receives three duplicate acknowledgments, it can be reasonably confident that the segment carrying the data that followed the last in-order byte specified in the acknowledgment was lost. A sender with fast retransmit will then retransmit this packet immediately without waiting for its timeout. On receipt of the retransmitted segment, the receiver can acknowledge the last in-order byte of data received. In the above example, this would acknowledge to the end of the payload of the fifth packet. There is no need to acknowledge intermediate packets since TCP uses cumulative acknowledgments by default.
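A minimal sketch of the sender-side trigger described above, in which three duplicate ACKs for the same byte prompt an immediate retransmission; the class, names, and threshold constant are illustrative:

```python
DUP_ACK_THRESHOLD = 3  # duplicate ACKs that trigger a fast retransmit

class FastRetransmitDetector:
    """Tracks duplicate ACKs and signals when to retransmit early."""

    def __init__(self) -> None:
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no: int) -> bool:
        """Return True when the segment starting at ack_no should be resent now."""
        if ack_no == self.last_ack:
            self.dup_count += 1
            return self.dup_count == DUP_ACK_THRESHOLD
        self.last_ack = ack_no
        self.dup_count = 0
        return False
```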
Algorithms
The naming convention for congestion control algorithms (CCAs) may have originated in a 1996 paper by Kevin Fall and Sally Floyd.[10][failed verification]
The following is one possible classification according to the following properties:
- the type and amount of feedback received from the network
- incremental deployability on the current Internet
- the aspect of performance it aims to improve: high bandwidth-delay product networks (B); lossy links (L); fairness (F); advantage to short flows (S); variable-rate links (V); speed of convergence (C)
- the fairness criterion it uses
Some well-known congestion avoidance mechanisms are classified by this scheme as follows:
| Variant | Feedback | Required changes | Benefits | Fairness |
|---|---|---|---|---|
| (New) Reno | Loss | — | — | Delay |
| Vegas | Delay | Sender | Less loss | Proportional |
| High Speed | Loss | Sender | High bandwidth | |
| BIC | Loss | Sender | High bandwidth | |
| CUBIC | Loss | Sender | High bandwidth | |
| C2TCP[11][12] | Loss/Delay | Sender | Ultra-low latency and high bandwidth | |
| NATCP[13] | Multi-bit signal | Sender | Near-optimal performance | |
| Elastic-TCP | Loss/Delay | Sender | High bandwidth/short & long-distance | |
| Agile-TCP | Loss | Sender | High bandwidth/short-distance | |
| H-TCP | Loss | Sender | High bandwidth | |
| FAST | Delay | Sender | High bandwidth | Proportional |
| Compound TCP | Loss/Delay | Sender | High bandwidth | Proportional |
| Westwood | Loss/Delay | Sender | Lossy links | |
| Jersey | Loss/Delay | Sender | Lossy links | |
| BBR[14] | Delay | Sender | BLVC, Bufferbloat | |
| CLAMP | Multi-bit signal | Receiver, Router | Variable-rate links | Max-min |
| TFRC | Loss | Sender, Receiver | No retransmission | Minimum delay |
| XCP | Multi-bit signal | Sender, Receiver, Router | BLFC | Max-min |
| VCP | 2-bit signal | Sender, Receiver, Router | BLF | Proportional |
| MaxNet | Multi-bit signal | Sender, Receiver, Router | BLFSC | Max-min |
| JetMax | Multi-bit signal | Sender, Receiver, Router | High bandwidth | Max-min |
| RED | Loss | Router | Reduced delay | |
| ECN | Single-bit signal | Sender, Receiver, Router | Reduced loss | |
TCP Tahoe and Reno
TCP Tahoe and Reno algorithms were retrospectively named after the versions or flavours of the 4.3BSD operating system in which each first appeared (which were themselves named after Lake Tahoe and the nearby city of Reno, Nevada). The Tahoe algorithm first appeared in 4.3BSD-Tahoe (which was made to support the CCI Power 6/32 "Tahoe" minicomputer), and was later made available to non-AT&T licensees as part of the 4.3BSD Networking Release 1; this ensured its wide distribution and implementation. Improvements were made in 4.3BSD-Reno and subsequently released to the public as Networking Release 2 and later 4.4BSD-Lite.
While both consider retransmission timeout (RTO) and duplicate ACKs as packet loss events, the behavior of Tahoe and Reno differs primarily in how they react to duplicate ACKs:
- Tahoe: if three duplicate ACKs are received (i.e. four ACKs acknowledging the same packet, which are not piggybacked on data and do not change the receiver's advertised window), Tahoe performs a fast retransmit, sets the slow start threshold to half of the current congestion window, reduces the congestion window to 1 MSS, and resets to slow start state.[15]
- Reno: if three duplicate ACKs are received, Reno performs a fast retransmit and skips the slow start phase by instead halving the congestion window (rather than setting it to 1 MSS like Tahoe), setting ssthresh equal to the new congestion window, and entering a phase called fast recovery.[16]
In both Tahoe and Reno, if an ACK times out (RTO timeout), slow start is used, and both algorithms reduce the congestion window to 1 MSS.[citation needed]
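The contrast can be sketched as follows; each function returns the new congestion window and ssthresh after three duplicate ACKs, and the names and segment size are illustrative:

```python
MSS = 1460  # bytes; illustrative segment size

def tahoe_on_triple_dup_ack(cwnd: float) -> tuple[float, float]:
    """Tahoe: fast retransmit, then fall back to slow start from 1 MSS."""
    ssthresh = cwnd / 2
    return MSS, ssthresh                # (new cwnd, new ssthresh)

def reno_on_triple_dup_ack(cwnd: float) -> tuple[float, float]:
    """Reno: fast retransmit, halve the window, and enter fast recovery."""
    ssthresh = cwnd / 2
    return ssthresh, ssthresh           # (new cwnd, new ssthresh)
```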
TCP New Reno
TCP New Reno, defined by RFC 6582 (which obsoletes the previous definitions in RFC 3782 and RFC 2582), improves retransmission during the fast-recovery phase of TCP Reno.
During fast recovery, to keep the transmit window full, for every duplicate ACK that is returned, a new unsent packet from the end of the congestion window is sent.
The difference from Reno is that New Reno does not halve ssthresh immediately, which may reduce the window too much when multiple packet losses occur. It does not exit fast recovery and reset ssthresh until all of the outstanding data has been acknowledged.
After a retransmission, newly acknowledged data falls into one of two cases:
- Full acknowledgments: the ACK acknowledges all of the intermediate segments sent; ssthresh is left unchanged and cwnd can be set to ssthresh.
- Partial acknowledgments: the ACK does not acknowledge all of the outstanding data, which means another loss may have occurred; the first unacknowledged segment is retransmitted if permitted.
It uses a variable called "recover" to record how much data needs to be recovered. After a retransmit timeout, it records the highest sequence number transmitted in the recover variable and exits the fast recovery procedure. If this sequence number is acknowledged, TCP returns to the congestion avoidance state.
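A simplified sketch of the full/partial ACK distinction described above; `recover` is the highest sequence number sent when fast recovery began, and the helper and names are illustrative rather than the RFC 6582 pseudocode:

```python
def retransmit_segment(seq_no: int) -> None:
    """Placeholder for resending the segment that starts at seq_no."""

def on_ack_in_fast_recovery(ack_no: int, recover: int,
                            cwnd: float, ssthresh: float) -> tuple[float, bool]:
    """Return (new cwnd, still in fast recovery) for an ACK during recovery."""
    if ack_no > recover:
        # Full acknowledgment: everything outstanding when recovery began is
        # now acknowledged; deflate the window and leave fast recovery.
        return ssthresh, False
    # Partial acknowledgment: another segment was probably lost; retransmit
    # the first unacknowledged segment and stay in fast recovery.
    retransmit_segment(ack_no)
    return cwnd, True
```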
A problem occurs with New Reno when there are no packet losses but instead, packets are reordered by more than 3 packet sequence numbers. In this case, New Reno mistakenly enters fast recovery. When the reordered packet is delivered, duplicate and needless retransmissions are immediately sent.
New Reno performs as well as SACK at low packet error rates and substantially outperforms Reno at high error rates.[17]
TCP Vegas
Until the mid-1990s, all of TCP's set timeouts and measured round-trip delays were based upon only the last transmitted packet in the transmit buffer. University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas, in which timeouts were set and round-trip delays were measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases in the congestion window. In a comparison study of various TCP CCAs, TCP Vegas appeared to be the smoothest, followed by TCP CUBIC.[18]
TCP Vegas was not widely deployed outside Peterson's laboratory but was selected as the default congestion control method for DD-WRT firmware v24 SP2.[19]
TCP Hybla
TCP Hybla[20][21] aims to eliminate penalties to TCP connections that use high-latency terrestrial or satellite radio links. Hybla improvements are based on analytical evaluation of the congestion window dynamics.[22]
TCP BIC
Binary Increase Congestion control (BIC) is a TCP implementation with an optimized CCA for high-speed networks with high latency, known as long fat networks (LFNs).[23] BIC is used by default in Linux kernels 2.6.8 through 2.6.18.[citation needed]
TCP CUBIC
CUBIC is a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event. CUBIC is used by default in Linux kernels since version 2.6.19.
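The cubic window curve can be sketched as follows, using the constants suggested in RFC 8312 (C = 0.4 and a multiplicative decrease factor of 0.7); the window is expressed in units of MSS, time in seconds, and the names are illustrative:

```python
C = 0.4      # scaling constant suggested in RFC 8312
BETA = 0.7   # multiplicative decrease factor suggested in RFC 8312

def cubic_window(t: float, w_max: float) -> float:
    """Window (in MSS) t seconds after the last congestion event.

    w_max is the window just before that event; k is the time at which the
    cubic curve returns to w_max, i.e. the curve's inflection point.
    """
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max
```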
Agile-SD TCP
Agile-SD is a Linux-based CCA designed for the real Linux kernel. It is a receiver-side algorithm that employs a loss-based approach using a novel mechanism called the agility factor (AF) to increase bandwidth utilization over high-speed and short-distance networks (low bandwidth-delay product networks) such as local area networks or fiber-optic networks, especially when the applied buffer size is small.[24] It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows) and CUBIC (the default in Linux) using the NS-2 simulator. It improves total performance by up to 55% in terms of average throughput.
TCP Westwood+
Westwood+ is a sender-only modification of TCP Reno that optimizes the performance of TCP congestion control over both wired and wireless networks. TCP Westwood+ is based on end-to-end bandwidth estimation to set the congestion window and slow-start threshold after a congestion episode, that is, after three duplicate acknowledgments or a timeout. The bandwidth is estimated by averaging the rate of returning acknowledgment packets. In contrast with TCP Reno, which blindly halves the congestion window after three duplicate ACKs, TCP Westwood+ adaptively sets a slow-start threshold and a congestion window that takes into account an estimate of bandwidth available at the time congestion is experienced. Compared to Reno and New Reno, Westwood+ significantly increases throughput over wireless links and improves fairness in wired networks.[citation needed]
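A rough sketch of the idea, assuming the bandwidth estimate is expressed in bits per second; the names and units are illustrative, not the Westwood+ implementation itself:

```python
MSS = 1460  # bytes; illustrative segment size

def westwood_ssthresh(bw_estimate_bps: float, rtt_min_s: float) -> float:
    """Slow-start threshold (in bytes) after a congestion episode.

    bw_estimate_bps is the filtered rate of returning ACKs in bits per second;
    rtt_min_s is the smallest round-trip time observed on the connection.
    """
    bdp_bytes = (bw_estimate_bps / 8) * rtt_min_s   # estimated bandwidth-delay product
    return max(2 * MSS, bdp_bytes)
```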
Compound TCP
Compound TCP is a Microsoft implementation of TCP which maintains two different congestion windows simultaneously, with the goal of achieving good performance on LFNs while not impairing fairness. It has been widely deployed in Windows versions since Microsoft Windows Vista and Windows Server 2008 and has been ported to older Microsoft Windows versions as well as Linux.
TCP Proportional Rate Reduction
TCP Proportional Rate Reduction (PRR)[25] is an algorithm designed to improve the accuracy of data sent during recovery. The algorithm ensures that the window size after recovery is as close as possible to the slow start threshold. In tests performed by Google, PRR resulted in a 3–10% reduction in average latency, and recovery timeouts were reduced by 5%.[26] PRR is available in Linux kernels since version 3.2.[27]
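A rough, simplified sketch of the proportional reduction step from RFC 6937; the variable names are illustrative and the RFC's pseudocode handles additional cases:

```python
import math

def prr_sndcnt(prr_delivered: int, prr_out: int, pipe: int, ssthresh: int,
               recover_fs: int, delivered_now: int, mss: int = 1460) -> int:
    """Bytes the sender may transmit in response to the current ACK.

    prr_delivered / prr_out: data delivered to the receiver / sent by the
    sender since recovery started; pipe: estimated bytes still in flight;
    recover_fs: flight size when recovery started.
    """
    if pipe > ssthresh:
        # Reduce the window gradually, in proportion to the data delivered.
        return max(0, math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out)
    # Once below ssthresh, grow back toward it (slow-start reduction bound).
    limit = max(prr_delivered - prr_out, delivered_now) + mss
    return max(0, min(ssthresh - pipe, limit))
```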
TCP BBR
Bottleneck Bandwidth and Round-trip propagation time (BBR) is a CCA developed at Google in 2016.[28] While most CCAs are loss-based, in that they rely on packet loss to detect congestion and lower rates of transmission, BBR, like TCP Vegas, is model-based. The algorithm uses the maximum bandwidth and round-trip time at which the network delivered the most recent flight of outbound data packets to build a model of the network. Each cumulative or selective acknowledgment of packet delivery produces a rate sample that records the amount of data delivered over the time interval between the transmission of a data packet and the acknowledgment of that packet.[29]
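A minimal sketch of such a rate sample; the names are illustrative and the actual BBR implementation tracks considerably more state:

```python
def delivery_rate_sample(delivered_bytes: int,
                         send_time_s: float, ack_time_s: float) -> float:
    """Delivered bytes per second over the send-to-acknowledgment interval."""
    interval = ack_time_s - send_time_s
    return delivered_bytes / interval if interval > 0 else 0.0
```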
When implemented at YouTube, BBRv1 yielded an average of 4% higher network throughput and up to 14% in some countries.[30] BBR has been available for Linux TCP since Linux 4.9.[31] It is also available for QUIC.[32]
The fairness of BBR version 1 (BBRv1) to non-BBR streams is disputed. While Google's presentation shows BBRv1 co-existing well with CUBIC,[28] researchers like Geoff Huston and Hock, Bless and Zitterbart found it unfair to other streams and not scalable.[33] Hock et al. also found "some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss" in the BBR implementation of Linux 4.9.[34] Soheil Abbasloo et al. (authors of C2TCP) show that BBRv1 does not perform well in dynamic environments such as cellular networks.[11][12] They have also shown that BBR has an unfairness issue. For instance, when a CUBIC flow (which is the default TCP implementation in Linux, Android, and macOS) coexists with a BBR flow in the network, the BBR flow can dominate the CUBIC flow and get the whole link bandwidth from it (see figure 16 in [11]).
Version 2 attempts to deal with the issue of unfairness when operating alongside loss-based congestion management such as CUBIC.[35] In BBRv2 the model used by BBRv1 is augmented to include information about packet loss and information from Explicit Congestion Notification (ECN).[36] Whilst BBRv2 may at times have lower throughput than BBRv1, it is generally considered to have better goodput.[citation needed]
Version 3 (BBRv3) fixes two bugs in BBRv2 (premature end of bandwidth probing, bandwidth convergence) and performs some performance tuning. There is also a variant, termed BBR.Swift, optimized for datacenter-internal links: it uses network_RTT (excluding receiver delay) as the main congestion control signal.[36]
C2TCP
Cellular Controlled Delay TCP (C2TCP)[11][12] was motivated by the lack of a flexible end-to-end TCP approach that can satisfy various QoS requirements for different applications without requiring any changes in the network devices. C2TCP aims to satisfy the ultra-low latency and high-bandwidth requirements of applications such as virtual reality, video conferencing, online gaming, vehicular communication systems, etc. in a highly dynamic environment such as current LTE and future 5G cellular networks. C2TCP works as an add-on on top of loss-based TCP (e.g. Reno, NewReno, CUBIC, BIC, ...); it only needs to be installed on the server side and bounds the average delay of packets to the desired delays set by the applications.
Researchers at NYU[37] showed that C2TCP outperforms the delay and delay-variation performance of various state-of-the-art TCP schemes. For instance, they showed that compared to BBR, CUBIC, and Westwood on average, C2TCP decreases the average delay of packets by about 250%, 900%, and 700% respectively on various cellular network environments.[11]
Elastic-TCP
Elastic-TCP was proposed in February 2019 to increase bandwidth utilization over high-BDP networks in support of cloud computing. It is a Linux-based CCA that is designed for the Linux kernel. It is a receiver-side algorithm that employs a loss-delay-based approach using a novel mechanism called a window-correlated weighting function (WWF). It has a high level of elasticity to deal with different network characteristics without the need for human tuning. It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows), CUBIC (the default for Linux) and TCP-BBR (the default of Linux 4.9 used by Google) using the NS-2 simulator and testbed. Elastic-TCP significantly improves the total performance in terms of average throughput, loss ratio, and delay.[38]
NATCP
Soheil Abbasloo et al. proposed NATCP (Network-Assisted TCP),[13] a controversial[according to whom?] TCP design targeting multi-access edge computing (MEC). The key idea of NATCP is that if the characteristics of the network were known beforehand, TCP would have been designed differently. Therefore, NATCP employs the available features and properties in the current MEC-based cellular architectures to push the performance of TCP close to the optimal performance. NATCP uses out-of-band feedback from the network to the servers located nearby. The feedback from the network, which includes the capacity of the cellular access link and the minimum RTT of the network, guides the servers to adjust their sending rates. As preliminary results show, NATCP outperforms the state-of-the-art TCP schemes.[13][39]
Other TCP congestion avoidance algorithms
- FAST TCP
- Generalized FAST TCP[40]
- H-TCP
- Data Center TCP
- High Speed TCP
- HSTCP-LP[41]
- TCP-Illinois
- TCP-LP[41]
- TCP SACK
- Scalable TCP
- TCP Veno[42]
- Westwood
- XCP[43]
- YeAH-TCP[44]
- TCP-FIT[45]
- Congestion Avoidance with Normalized Interval of Time (CANIT)[46]
- Non-linear neural network congestion control based on genetic algorithm for TCP/IP networks[47]
- D-TCP[48]
- NexGen D-TCP[49]
- Copa[50]
TCP New Reno was the most commonly implemented algorithm;[citation needed] SACK support is very common[citation needed] and is an extension to Reno/New Reno. Most others are competing proposals that still need evaluation. Starting with 2.6.8, the Linux kernel switched the default implementation from New Reno to BIC. The default implementation was again changed to CUBIC in version 2.6.19. FreeBSD from version 14.X onwards also uses CUBIC as the default algorithm;[51] previous versions used New Reno. However, FreeBSD supports a number of other choices.[52]
When the per-flow product of bandwidth and latency increases, regardless of the queuing scheme, TCP becomes inefficient and prone to instability. This becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links.
TCP Interactive (iTCP)[53] allows applications to subscribe to TCP events and respond accordingly, enabling various functional extensions to TCP from outside the TCP layer. Most TCP congestion schemes work internally. iTCP additionally enables advanced applications to directly participate in congestion control, such as controlling the source generation rate.
Zeta-TCP detects congestion from both latency and loss-rate measures. To maximize goodput, Zeta-TCP applies different congestion window backoff strategies based on the likelihood of congestion. It also has other improvements to accurately detect packet losses, avoiding retransmission-timeout retransmissions, and to accelerate and control the inbound (download) traffic.[54]
Classification by network awareness
CCAs may be classified in relation to network awareness, meaning the extent to which these algorithms are aware of the state of the network. This classification consists of three primary categories: black box, grey box, and green box.[55]
Black box algorithms offer blind methods of congestion control. They operate only on the binary feedback received upon congestion and do not assume any knowledge concerning the state of the networks which they manage.
Grey box algorithms use time-based measurement, such as RTT variation and rate of packet arrival, in order to obtain measurements and estimations of bandwidth, flow contention, and other knowledge of network conditions.
Green box algorithms offer bimodal methods of congestion control which measure the fair share of total bandwidth that should be allocated to each flow, at any point, during the system's execution.
Black box
- Highspeed-TCP[56]
- BIC TCP (Binary Increase Congestion Control Protocol) uses a concave increase of the source's rate after each congestion event until the window is equal to that before the event, in order to maximize the time that the network is fully utilized. After that, it probes aggressively.
- CUBIC TCP – a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event.
- AIMD-FC (additive increase multiplicative decrease with fast convergence), an improvement of AIMD.[57]
- Binomial Mechanisms
- SIMD Protocol
- GAIMD
Grey box
- TCP Vegas – estimates the queuing delay, and linearly increases or decreases the window so that a constant number of packets per flow are queued in the network. Vegas implements proportional fairness. (A minimal sketch of this adjustment appears after this list.)
- FAST TCP – achieves the same equilibrium as Vegas, but uses proportional control instead of linear increase, and intentionally scales the gain down as the bandwidth increases with the aim of ensuring stability.
- TCP BBR – estimates the queuing delay but uses exponential increase. Intentionally slows down periodically for fairness and decreased delay.
- TCP-Westwood (TCPW) – a loss causes the window to be reset to the sender's estimate of the bandwidth-delay product (the smallest measured RTT multiplied by the observed rate of receiving ACKs).[58]
- C2TCP[12][11]
- TFRC[59]
- TCP-Real
- TCP-Jersey
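As referenced in the TCP Vegas entry above, a minimal sketch of the Vegas-style adjustment, keeping the estimated number of queued packets between two thresholds; the threshold values and names are illustrative:

```python
ALPHA = 2   # lower bound on estimated queued packets (illustrative)
BETA = 4    # upper bound on estimated queued packets (illustrative)

def vegas_adjust(cwnd: float, base_rtt: float, current_rtt: float) -> float:
    """Linearly adjust the window to keep a constant number of packets queued."""
    expected = cwnd / base_rtt                 # rate if nothing were queued
    actual = cwnd / current_rtt                # rate actually achieved
    queued = (expected - actual) * base_rtt    # estimated packets sitting in queues
    if queued < ALPHA:
        return cwnd + 1                        # link underused: increase linearly
    if queued > BETA:
        return cwnd - 1                        # queue building: decrease linearly
    return cwnd
```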
Green box
- Bimodal Mechanism – Bimodal Congestion Avoidance and Control mechanism.
- Signalling methods implemented by routers
- Random Early Detection (RED) randomly drops packets in proportion to the router's queue size, triggering multiplicative decrease in some flows (a minimal sketch of the drop decision follows this list).
- Explicit Congestion Notification (ECN)
- Network-Assisted Congestion Control
- NATCP[13] – Network-Assisted TCP uses out-of-band explicit feedback indicating the minimum RTT of the network and the capacity of the cellular access link.
- The variable-structure congestion control protocol (VCP) uses two ECN bits to explicitly feed back the network state of congestion. It includes an end-host-side algorithm as well.[citation needed]
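As referenced in the RED entry above, a minimal sketch of the RED drop decision; the thresholds, maximum probability, and names are illustrative:

```python
import random

MIN_TH = 5    # packets: below this average queue size, never drop (illustrative)
MAX_TH = 15   # packets: at or above this average queue size, always drop
MAX_P = 0.1   # drop probability as the average queue approaches MAX_TH

def red_should_drop(avg_queue: float) -> bool:
    """Probabilistic drop decision based on the averaged queue size."""
    if avg_queue < MIN_TH:
        return False
    if avg_queue >= MAX_TH:
        return True
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```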
The following algorithms require custom fields to be added to the TCP packet structure:
- Explicit Control Protocol (XCP) – XCP packets carry a congestion header with a feedback field, indicating the increase or decrease of the sender's congestion window. XCP routers set the feedback value explicitly for efficiency and fairness.[60]
- MaxNet – uses a single header field, which carries the maximum congestion level of any router on a flow's path. The rate is set as a function of this maximum congestion, resulting in max-min fairness.[61]
- JetMax – like MaxNet, responds only to the maximum congestion signal, but also carries other overhead fields.
Linux usage
- BIC is used by default in Linux kernels 2.6.8 through 2.6.18. (August 2004 – September 2006)
- CUBIC is used by default in Linux kernels since version 2.6.19. (November 2006)[62]
- PRR is incorporated in Linux kernels to improve loss recovery since version 3.2. (January 2012)
- BBRv1 is incorporated in Linux kernels to enable model-based congestion control since version 4.9. (December 2016)
See also
- Transmission Control Protocol §§ Congestion control and Development
- Network congestion § Mitigation
- Low Extra Delay Background Transport (LEDBAT)
Notes
References
- ^Jacobson & Karels 1988.
- ^abW. Stevens (January 1997).TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms.doi:10.17487/RFC2001.RFC2001.
- ^abM. Allman; S. Floyd; C. Partridge (October 2002).Increasing TCP's Initial Window.doi:10.17487/RFC3390.RFC3390.
- ^"TCP Congestion Avoidance Explained via a Sequence Diagram"(PDF).eventhelix.
- ^Chiu, Dah-Ming; Raj Jain (1989). "Analysis of increase and decrease algorithms for congestion avoidance in computer networks".Computer Networks and ISDN Systems.17:1–14.CiteSeerX10.1.1.136.8108.doi:10.1016/0169-7552(89)90019-6.
- ^Allman, M.; Paxson, V. (September 2009).TCP Congestion Control.Internet Engineering Task Force.sec. 3.1.doi:10.17487/RFC5681.RFC5681.Retrieved4 March2021.
- ^Blanton, Ethan; Paxson, Vern; Allman, Mark (September 2009)."TCP Congestion Control".Internet Engineering Task Force.
- ^Corbet, Jonathan. "Increasing the TCP initial congestion window". LWN. Retrieved 10 October 2012.
- ^Nick O'Neill. "What's Making Your Site Go Slow? Could Be The Like Button".AllFacebook,10 November 2010. Retrieved on 12 September 2012.
- ^Fall, Kevin; Sally Floyd (July 1996)."Simulation-based Comparisons of Tahoe, Reno and SACK TCP"(PDF).Computer Communications Review.26(3): 5–21.CiteSeerX10.1.1.586.2403.doi:10.1145/235160.235162.S2CID7459148.
- ^abcdefAbbasloo, S.; Xu, Y.; Chao, H. J. (2019). "C2TCP: A Flexible Cellular TCP to Meet Stringent Delay Requirements".IEEE Journal on Selected Areas in Communications.37(4): 918–932.arXiv:1810.13241.doi:10.1109/JSAC.2019.2898758.ISSN0733-8716.S2CID53107038.
- ^abcdAbbasloo, S.; Li, T.; Xu, Y.; Chao, H. J. (May 2018). "Cellular Controlled Delay TCP (C2TCP)".2018 IFIP Networking Conference (IFIP Networking) and Workshops.pp. 118–126.arXiv:1807.02689.Bibcode:2018arXiv180702689A.doi:10.23919/IFIPNetworking.2018.8696844.ISBN978-3-903176-08-9.S2CID49650788.
- ^abcdAbbasloo et al. 2019.
- ^Cardwell, Neal; Cheng, Yuchung; Gunn, C. Stephen; Yeganeh, Soheil Hassas; Jacobson, Van (2016)."BBR: Congestion-Based Congestion Control".Queue.14(5): 20–53.doi:10.1145/3012426.3022184.
- ^Kurose & Ross 2008,p. 284.
- ^Kurose & Ross 2012,p. 277.
- ^VasanthiN., V.; SinghM., Ajith; Kumar, Romen; Hemalatha, M. (2011). "Evaluation of Protocols and Algorithms for Improving the Performance of TCP over Wireless/Wired Network". In Das, Vinu V; Thankachan, Nessy (eds.).Computational Intelligence and Information Technology.Communications in Computer and Information Science. Vol. 250. Springer. pp. 693–697.doi:10.1007/978-3-642-25734-6_120.ISBN978-3-642-25733-9.
- ^"Performance Analysis of TCP Congestion Control Algorithms"(PDF).Retrieved26 March2012.
- ^"DD-WRT changelog".Retrieved2 January2012.
- ^"Hybla home page".Archived fromthe originalon 11 October 2007.Retrieved4 March2007.
- ^Caini, Carlo; Firrincieli, Rosario (2004)."TCP Hybla: a TCP enhancement for heterogeneous networks".International Journal of Satellite Communications and Networking.22(5): 547–566.doi:10.1002/sat.799.ISSN1542-0973.S2CID2360535.
- ^Caini, C.; Firrincieli, R.; Lacamera, D. (2009)."Comparative Performance Evaluation of TCP Variants on Satellite Environments".2009 IEEE International Conference on Communications.pp. 1–5.doi:10.1109/ICC.2009.5198834.S2CID8352762.
- ^V., Jacobson; R.T., Braden.TCP extensions for long-delay paths.doi:10.17487/RFC1072.RFC1072.
- ^Alrshah, M.A.; Othman, M.; Ali, B.; Hanapi, Z.M. (September 2015). "Agile-SD: A Linux-based TCP congestion control algorithm for supporting high-speed and short-distance networks".Journal of Network and Computer Applications.55:181–190.arXiv:1601.05908.doi:10.1016/j.jnca.2015.05.011.S2CID2645016.
- ^Mathis, M.; Dukkipati, N.; Cheng, Y. (2013).Proportional Rate Reduction for TCP.doi:10.17487/RFC6937.RFC6937.
- ^Corbet, Jonathan."LPC: Making the net go faster".Retrieved6 June2014.
- ^"Linux 3.2 – Linux Kernel Newbies".Retrieved6 June2014.
- ^ab"BBR: Congestion-Based Congestion Control".Retrieved25 August2017.
- ^Cheng, Yuchung; Cardwell, Neal; Yeganeh, Soheil Hassas; Jacobson, Van (3 July 2017)."Delivery Rate Estimation".Internet Engineering Task Force.Retrieved25 August2017.
- ^"TCP BBR congestion control comes to GCP – your Internet just got faster". Retrieved 25 August 2017.
- ^"BBR congestion control [LWN.net]".lwn.net.
- ^"BBR update".Internet Engineering Task Force.
- ^"TCP and BBR"(PDF).Retrieved27 May2018.
- ^"Experimental Evaluation of BBR Congestion Control"(PDF).Retrieved27 May2018.
- ^"A Performance Evaluation of TCP BBRv2".Retrieved12 January2021.
- ^abGoogle TCP BBR team; Google QUIC BBR team (26 July 2023).BBRv3: Algorithm Bug Fixes and Public Internet Deployment.IETF 117: San Francisco.
- ^"Cellular Controlled Delay TCP (C2TCP)".wp.nyu.edu.Retrieved27 April2019.
- ^Alrshah, M.A.; Al-Maqri, M.A.; Othman, M. (June 2019)."Elastic-TCP: Flexible Congestion Control Algorithm to Adapt for High-BDP Networks".IEEE Systems Journal.13(2): 1336–1346.arXiv:1904.13105.Bibcode:2019ISysJ..13.1336A.doi:10.1109/JSYST.2019.2896195.
- ^Abbasloo, Soheil (3 June 2019),GitHub – Soheil-ab/natcp,retrieved5 August2019
- ^Yuan, Cao; Tan, Liansheng; Andrew, Lachlan L. H.; Zhang, Wei; Zukerman, Moshe (6 June 2008)."A generalized FAST TCP scheme".Computer Communications.31(14): 3242–3249.doi:10.1016/j.comcom.2008.05.028.hdl:1959.3/44051.S2CID17988768.
- ^ab"Rice Networks Group".
- ^"TCP Veno: TCP Enhancement for Transmission over Wireless Access Networks"(PDF).IEEE Journal on Selected Areas in Communication.
- ^"XCP @ ISI".
- ^"High speed TPC"(PDF).csc.lsu.edu.
- ^"TCP-Fit".Archived fromthe originalon 3 April 2011.Retrieved5 March2011.
- ^Benaboud, H.; Berqia, A.; Mikou, N. (2002). "An analytical study of CANIT algorithm in TCP protocol".ACM SIGMETRICS Performance Evaluation Review.30(3): 20.doi:10.1145/605521.605530.S2CID6637174.
- ^Rouhani, Modjtaba (2010). "Nonlinear Neural Network Congestion Control Based on Genetic Algorithm for TCP/IP Networks".2010 2nd International Conference on Computational Intelligence, Communication Systems and Networks.pp. 1–6.doi:10.1109/CICSyN.2010.21.ISBN978-1-4244-7837-8.S2CID15126416.
- ^Kanagarathinam, Madhan Raj; Singh, Sukhdeep; Sandeep, Irlanki; Roy, Abhishek; Saxena, Navrati (January 2018)."D-TCP: Dynamic TCP congestion control algorithm for next generation mobile networks".2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC).pp. 1–6.doi:10.1109/CCNC.2018.8319185.ISBN978-1-5386-4790-5.S2CID3991163.
- ^Kanagarathinam, Madhan Raj; Singh, Sukhdeep; Sandeep, Irlanki; Kim, Hanseok; Maheshwari, Mukesh Kumar; Hwang, Jaehyun; Roy, Abhishek; Saxena, Navrati (2020)."NexGen D-TCP: Next Generation Dynamic TCP Congestion Control Algorithm".IEEE Access.8:164482–164496.Bibcode:2020IEEEA...8p4482K.doi:10.1109/ACCESS.2020.3022284.ISSN2169-3536.S2CID221846931.
- ^Arun, Venkat; Balakrishnan, Hari (2018)."Copa: Practical Delay-Based Congestion Control for the Internet".15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18):329–342.ISBN978-1-939133-01-4.
- ^"tcp: make CUBIC the default congestion control mechanism".13 September 2022.
- ^"Summary of Five New TCP Congestion Control Algorithms Project".8 March 2011.
- ^"iTCP – Interactive Transport Protocol – Medianet Lab, Kent State University".
- ^"Whitepaper: Zeta-TCP – Intelligent, Adaptive, Asymmetric TCP Acceleration"(PDF).Retrieved6 December2019.
- ^Lefteris Mamatas; Tobias Harks; Vassilis Tsaoussidis (January 2007)."Approaches to Congestion Control in Packet Networks"(PDF).Journal of Internet Engineering.1(1). Archived fromthe original(PDF)on 21 February 2014.
- ^"HighSpeed TCP".icir.org.
- ^"AIMD-FC Homepage".neu.edu.Archived fromthe originalon 13 January 2009.Retrieved13 March2016.
- ^"Welcome to Network Research Lab".cs.ucla.edu.
- ^"Equation-Based Congestion Control for Unicast Applications".icir.org.
- ^Katabi, Dina; Handley, Mark; Rohrs, Charlie (2002). "Congestion control for high bandwidth-delay product networks".Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications.New York, New York, USA: ACM Press. p. 89.doi:10.1145/633025.633035.ISBN1-58113-570-X.
- ^"MaxNet -- Max-Min Fair, Stable Explicit Signalling Congestion Control".netlab.caltech.edu.
- ^"TCP Congestion Control: A Systems Approach. Peterson, Brakmo and Davie".Retrieved17 May2024.
Sources
- Kurose, James; Ross, Keith (2008). Computer Networking: A Top-Down Approach (4th ed.). Addison Wesley. ISBN 978-0-13-607967-5.
- Kurose, James;Ross, Keith (2012).Computer Networking: A Top-Down Approach(6th ed.). Pearson.ISBN978-0-13-285620-1.
- Abbasloo, Soheil; Xu, Yang; Chao, H. Jonathon; Shi, Hang; Kozat, Ulas C.; Ye, Yinghua (2019)."Toward Optimal Performance with Network Assisted TCP at Mobile Edge".2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19).Renton, WA: USENIX Association.
- Afanasyev, A.; N. Tilley; P. Reiher; L. Kleinrock (2010)."Host-to-host congestion control for TCP"(PDF).IEEE Communications Surveys and Tutorials.12(3): 304–342.CiteSeerX10.1.1.228.3080.doi:10.1109/SURV.2010.042710.00114.S2CID8638824.
- Jacobson, Van;Karels, Michael J.(November 1988)."Congestion avoidance and control"(PDF).ACM SIGCOMM Computer Communication Review.18(4): 314–329.doi:10.1145/52325.52356.
External links
- Approaches to Congestion Control in Packet Networks
- Papers in Congestion Control
- Allman, Mark; Paxson, Vern; Stevens, W. Richard (April 1999)."Fast Retransmit/Fast Recovery".TCP Congestion Control.Internet Engineering Task Force.sec. 3.2.doi:10.17487/RFC2581.RFC2581.Retrieved1 May2010.
- TCP Congestion Handling and Congestion Avoidance Algorithms– The TCP/IP Guide