
    Modeling TCP Incast Issue in Data Center Networks and an Adaptive Application-Layer Solution


    Jin-Tang Luo, Jie Xu, and Jian Sun

1. Introduction

In data centers, an application often requests data from numerous servers simultaneously. This results in a many-to-one traffic pattern in which multiple servers concurrently send data fragments to a single client via a bottleneck switch. For instance, in web search, a query is partitioned and assigned to many workers, and the workers' responses are then aggregated to generate the final result. As the number of concurrent senders increases, the bottleneck switch buffer, which is usually shallow[1], can easily overflow, leading to massive packet losses and subsequent transmission control protocol (TCP) timeouts. As the minimum retransmission timeout (minRTO) is much greater than the round-trip delay in data centers, even one timeout remarkably prolongs the overall data transmission time. Hence, the client's goodput drops below the link capacity by one or even two orders of magnitude. Such catastrophic TCP goodput collapse for applications with a many-to-one traffic pattern is referred to as TCP incast[1].

Many solutions to the TCP incast issue have been proposed at different layers. For instance, at the Ethernet layer, fair quantized congestion notification (FQCN)[2] uses explicit network feedback to control congestion among switches; at the Internet protocol (IP) layer, [3] explores the effectiveness of tuning explicit congestion notification (ECN) to mitigate incast, and cutting payload (CP)[4] drops only the packet payload instead of the entire packet upon congestion to reduce the timeout possibility; at the transport layer, data center TCP (DCTCP)[5], PRIN[6], and pFabric[7] decrease the timeout possibility through the cooperation of end-hosts and the network; by adding a "shim layer" to the data receiver, incast congestion control for TCP (ICTCP)[8], proactive ACK control (PAC)[9], and deadline and incast aware TCP (DIATCP)[10] proactively adjust the in-flight data size to reduce packet loss; and at the application layer, [1] and [11]-[13] restrict the number of concurrent connections to avert incast. Among these proposals, the application-layer solutions are the most practical for their low deployment barrier and minimal impact on ordinary one-to-one applications. Indeed, Ethernet-layer solutions require hardware revisions not supported by current commodity switches, and transport-layer and shim-layer solutions may cause fairness issues for ordinary applications running on regular TCP. By contrast, application-layer solutions merely regulate the data transfer of applications with the many-to-one traffic pattern, so they are easy to deploy and pose no side effect on ordinary applications.

Despite the numerous application-layer solutions in the literature, few works analytically study how the application's regulation of data transfer affects incast. Currently, there are several analytical models for the TCP incast problem[1],[6],[11],[14]. However, most of them either ignore the possible existence of background TCP traffic[1],[11], or oversimplify the application layer as a dumb data source/sink without the ability to control data transfers[6],[14]. Therefore, the existing models provide few useful insights into addressing incast from the application layer, which explains why the current application-layer solutions can only avert incast in known and predefined network environments, e.g., where the bottleneck link is the last-hop link to the receiver[12],[13], or where the available bottleneck bandwidth is known by the receiver[1],[11]. But in real data centers, many network parameters may change drastically and cannot be known in advance. For instance, the bottleneck link often varies due to load balancing, and the available bandwidth may fluctuate drastically in the presence of background TCP traffic. In these varying environments, the current solutions often fail to effectively prevent TCP incast, as demonstrated in Section 4.

In this paper, we intend to understand and solve the TCP incast problem from the viewpoint of applications. Toward this goal, we first develop an analytical model to comprehensively investigate the impact of the application layer on incast. Then, guided by the model, we propose an adaptive application-layer control mechanism to eliminate incast.

Since incast is essentially caused by TCP timeout, we focus on modeling the relationship between the occurrence of timeout and various application-related factors, including the network environment and connection variables. Compared with the existing models[1],[6],[11],[14], our model is based on more general assumptions about TCP behaviors, and it considers the impact of background TCP connections on incurring timeout. In addition, combined with optimization theory, the model provides insightful guidelines for dynamically tuning connection variables to minimize the incast probability in a wide range of network environments.

According to the theoretical results, the crux of avoiding timeout is to adaptively adjust the number of concurrent connections and to allocate the sending rate equally among connections. Following this idea, we design an application-layer incast probability minimization mechanism (AIPM), which only modifies receiver-side applications to avert timeout. With AIPM, a client (receiver) concurrently sets up a small number of connections and assigns an equal TCP advertised window (awnd) to each connection. After a connection finishes transferring data, a new connection can be started. The regulation of awnd is based on the network settings and the number of concurrent connections. The number of concurrent connections is decided by a sliding connection-window mechanism similar to TCP's congestion control: the connection-window size grows gradually upon a successful data transmission and shrinks when a connection is "lost". A connection is considered lost when three new connections have finished since the last time it transmitted new data. A lost connection is terminated at once and re-established when the connection window allows.

Simulations show that AIPM effectively avoids incast and consistently achieves high performance in various scenarios. Particularly, in a leaf-spine-topology network with dynamic background TCP traffic, AIPM's goodput is above 68% of the bottleneck capacity, while the proposals in [1] and [11] both suffer from rather low goodput (< 30%) due to incast.

The major contributions of this paper are twofold:

● We build an analytical model to disclose the influence of the network environment and connection variables on the occurrence of incast. The model provides insightful guidelines for tuning connection variables to minimize the incast probability.

● We design an adaptive application-layer solution to the TCP incast problem. To the best of our knowledge, this is the first application-layer solution that can efficiently control incast in network environments with dynamic background TCP traffic and multiple bottleneck links.

    The rest of the paper is organized as follows. Section 2 describes our model for incast probability and provides some insightful guidelines for taming incast. Section 3 proposes an adaptive application-layer solution to incast, namely, AIPM. In Section 4 we evaluate AIPM in various scenarios using NS2 simulations. Section 5 concludes the paper.

2. Modeling and Minimizing TCP Incast Probability

As the data requester, the receiver-side application (i.e., the client) can implicitly manage data transmission by adjusting connection variables, including the number of concurrent connections and the sending window size of each connection. This fact motivates us to model the TCP incast probability as a function of the connection variables. Based on this model, we derive the conditions under which the incast risk is minimal.

    2.1 Notations and Assumptions

First of all, we introduce an important concept related to the data transfers of concurrently living connections, called a round. The first round starts at the beginning of data transmission from an endpoint and lasts one round-trip time (RTT). Each subsequent round starts at the end of the previous round and lasts one RTT.

Then consider the TCP incast scenario in Fig. 1. Let R be the number of rounds for transmitting the data. During the kth round, there are n(k) servers sending data fragments, formally termed server request units (SRUs), to a client via a common bottleneck link. The ith server's sending rate is x_i(k), and its sending window is w_i(k), for 1≤i≤n(k). The bottleneck link is also shared by m(k) background connections from other applications, whose sending rates are y_j(k) for 1≤j≤m(k). The bottleneck link's capacity is C packets per second, its buffer size is B packets, and the propagation RTT is D. The remaining notations are summarized in Table 1.

We also make some assumptions to facilitate the modeling. First, we assume that the queuing policy of the bottleneck link is drop-tail. Second, we assume the spare bottleneck buffer is negligible compared with the sum of the sending windows. This assumption is reasonable due to the "buffer pressure" phenomenon and the fact that switch buffers are usually very shallow in data centers[1]. Third, we assume that a connection sees timeout only if its entire window of packets is lost. Most studies[5],[14] have shown that full-window loss is the dominant kind of timeout causing TCP incast, so other kinds of timeout are negligible when modeling incast. Finally, we assume that minRTO = 200 ms by default, which is significant relative to the overall transmission time of the requested data block, so even one timeout leads to TCP incast with drastic goodput degradation.

Fig. 1. General scenario of TCP incast, where multiple servers concurrently transmit data fragments (SRUs) to a single client.

    Table 1: Meanings of the commonly used parameters

    2.2 Probability of TCP Incast

Now we model the probability of TCP incast as a function of the current network condition (i.e., B, C, D, and y_j(k)) and the connection variables (i.e., n(k) and x_i(k)).

First, recall that the client suffers from incast if it sees at least one timeout during the R rounds, so the incast probability can be expressed as

$$P_{\mathrm{incast}} = 1 - \prod_{k=1}^{R}\prod_{i=1}^{n(k)}\left[1 - P_i(k)\right], \tag{1}$$

where P_i(k) is the timeout probability of the ith connection in the kth round.

The timeout probability P_i(k) of a connection is determined jointly by the sending window w_i(k) and the packet loss rate p_i(k) of that connection. Specifically, since we consider full-window loss as the only cause of timeout, P_i(k) equals

$$P_i(k) = \left[p_i(k)\right]^{w_i(k)}. \tag{2}$$

The next task is to derive the packet drop rate p_i(k). As assumed previously, the spare bottleneck buffer is negligible compared with the sum of the servers' sending windows, which means that the bottleneck link starts dropping packets as soon as the servers start transmitting data. We denote $X(k)=\sum_{i=1}^{n(k)} x_i(k)$ as the sum of the servers' sending rates, and $Y(k)=\sum_{j=1}^{m(k)} y_j(k)$ as the sum of the background connections' sending rates. During the kth round, packets are injected into the network at the total rate of X(k)+Y(k) and are drained by the bottleneck link at the rate of C. Hence, the drop rate p_i(k) is

$$p_i(k) = p(k) = \frac{X(k)+Y(k)-C}{X(k)+Y(k)}. \tag{3}$$

Moreover, a connection's sending window w_i(k) is related to its sending rate x_i(k) and the RTT D as

$$w_i(k) = x_i(k)\,D. \tag{4}$$

Eventually, by substituting (2) to (4) into (1), we obtain the TCP incast probability as an analytical function of the network condition (i.e., B, C, D, and Y(k)) and the connection variables (i.e., n(k) and x_i(k)):

$$P_{\mathrm{incast}} = 1 - \prod_{k=1}^{R}\prod_{i=1}^{n(k)}\left\{1 - \left[\frac{X(k)+Y(k)-C}{X(k)+Y(k)}\right]^{x_i(k)D}\right\}. \tag{5}$$
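To make the model concrete, the following Python sketch evaluates (2) to (5) for a single round. The function name and the parameter values are our own illustrative choices, not code from the paper.

```python
# Minimal numeric sketch of the one-round incast model in (2)-(5).
# All names here (incast_probability, rates, Y, C, D) are illustrative.
from math import prod

def incast_probability(rates, Y, C, D):
    """Probability that at least one connection times out in a round.

    rates -- per-connection sending rates x_i (packets/s)
    Y     -- total background sending rate (packets/s)
    C     -- bottleneck capacity (packets/s)
    D     -- propagation RTT (s)
    """
    X = sum(rates)
    if X + Y <= C:
        return 0.0                        # link not overloaded: no drops
    p = (X + Y - C) / (X + Y)             # drop rate, eq. (3)
    # Full-window loss per connection: P_i = p ** w_i with w_i = x_i * D
    # (eqs. (2) and (4)); the round is timeout-free only if no connection
    # loses an entire window.
    timeout_free = prod(1.0 - p ** (x * D) for x in rates)
    return 1.0 - timeout_free             # eq. (5), for a single round

# Example: 10 synchronized servers overloading a 125000 packet/s (~1 Gbps
# with 1 kB packets) link with a 100 us RTT.
print(incast_probability([20_000] * 10, Y=0, C=125_000, D=100e-6))
```

Multiplying the per-round timeout-free probabilities over all R rounds recovers (1).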

    2.3 Minimizing Incast Probability

As (5) indicates, to minimize the incast probability P_incast, we must minimize the timeout probability in every round. This is equivalent to maximizing the timeout-free probability that no connection incurs timeout in round k:

$$f = \prod_{i=1}^{n(k)}\left\{1 - \left[p(k)\right]^{x_i(k)D}\right\}. \tag{6}$$

Here we explore how the client can maximize the timeout-free probability (6) by adjusting the sending rates x_i(k) and the number of connections n(k). To reveal the individual impact of each parameter on timeout, we analyze it while keeping the other parameters unchanged. Because the analysis below focuses on maximizing (6) within a single round, we omit the round index k from the notations, e.g., writing n instead of n(k).

1) Adjusting the sending rates, x: We fix the other parameters. Then maximizing the timeout-free probability in (6) amounts to solving the optimization problem

$$\max_{\mathbf{x}\ge 0}\ \ln f(\mathbf{x})=\sum_{i=1}^{n}\ln\left(1-p^{x_i D}\right)\quad \mathrm{s.t.}\quad \sum_{i=1}^{n}x_i=X. \tag{7}$$

It is straightforward to check that the Hessian matrix of −ln f(x) is positive semi-definite over the region x ≥ 0, which means that ln f(x) is a concave function. This greatly simplifies the analysis. Specifically, the method of Lagrange multipliers states that ln f(x) is globally maximized by the sending rate allocation x* if and only if

$$\frac{\partial \ln f(\mathbf{x}^{*})}{\partial x_1} = \frac{\partial \ln f(\mathbf{x}^{*})}{\partial x_2} = \cdots = \frac{\partial \ln f(\mathbf{x}^{*})}{\partial x_n}. \tag{8}$$

The unique solution of (8) that maximizes ln f(x) is

$$x_i^{*} = \frac{X}{n},\quad 1\le i\le n, \tag{9}$$

which also maximizes the timeout-free probability (6) and thus minimizes the incast probability (5).

Remark 1: To minimize the incast risk, the connections should always be assigned the same sending rate. This operation is feasible at the client application: the client knows the number of concurrent connections n, and it can implicitly control each connection's sending rate x_i by modifying the TCP advertised window field in acknowledgement (ACK) packets.

Remark 2: While the sum of sending rates X is assumed constant when deriving (9), the client can actually change X by tuning the sending rates. But as proved in the Appendix, the optimal X that minimizes the incast probability depends on the background traffic Y. Since the client does not know Y, it cannot properly set X to prevent incast.

2) Adjusting the number of concurrent connections, n: We fix the other parameters. According to (6) and (9), we optimally set the sending rates to x_i = X/n for 1≤i≤n and rewrite the timeout-free probability as

$$f(n) = \left(1 - p^{XD/n}\right)^{n}. \tag{10}$$

Differentiating the logarithm of (10) with respect to n gives

$$\frac{d\ln f(n)}{dn} = \ln\left(1-p^{XD/n}\right) + \frac{XD}{n}\cdot\frac{p^{XD/n}\ln p}{1-p^{XD/n}}, \tag{11}$$

which is always negative since p lies in (0, 1). This suggests that the timeout-free probability (10) decreases with the number of concurrent connections n. Hence, the incast probability is an increasing function of n and is minimized at n = 1.
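A quick numeric check of these two conclusions is below; it reuses the one-round model sketched earlier, with all names again illustrative rather than taken from the paper.

```python
# Sanity check of (9)-(11): equal rates maximize the timeout-free
# probability, and fewer connections are safer (for a fixed total rate X).
from math import prod

def timeout_free(rates, Y, C, D):
    X = sum(rates)
    p = max(0.0, (X + Y - C) / (X + Y))   # drop rate, eq. (3)
    return prod(1.0 - p ** (x * D) for x in rates)

C, D, Y, X = 125_000, 100e-6, 50_000, 150_000

# Equal split vs. a skewed split of the same total rate X (eq. (9)):
print(timeout_free([X / 4] * 4, Y, C, D))                            # ~0.90
print(timeout_free([0.7 * X, 0.1 * X, 0.1 * X, 0.1 * X], Y, C, D))   # ~0.46

# With equal rates, the timeout-free probability falls as n grows (eq. (11)):
for n in (1, 2, 4, 8, 16):
    print(n, timeout_free([X / n] * n, Y, C, D))
```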

Remark: The client should lower the number of concurrent connections n to reduce the incast risk. On the other hand, an excessively small n may waste bandwidth when each connection has so little data (SRU) to send that it finishes before fully utilizing the bandwidth. How to properly set n is discussed in the next section.

3. Minimizing Incast Probability at the Application Layer

Based on the analyses of (9) and (11), we propose the AIPM scheme. AIPM is implemented in the client application, and it minimizes the incast probability by equally allocating the advertised windows of concurrent connections and dynamically adjusting the number of concurrent connections.

    3.1 Allocate Equal Advertised Window to Connections

As (9) indicates, the risk of TCP incast is minimal if the existing connections have equal sending rates. However, AIPM is essentially part of the client application, which means it cannot directly control each connection's sending rate. Therefore, AIPM emulates the equal sending rate allocation by setting awnd at the client (e.g., via the setsockopt() application programming interface (API) in Berkeley software distribution (BSD) systems) as follows.

First, according to (4) and (9), the ideal sending rate allocation is equivalent to the following sending window allocation:

$$w_i = \frac{XD}{n},\quad 1\le i\le n, \tag{12}$$

where X is the total sending rate of AIPM's n connections, and D is the RTT without queuing.

Next, AIPM should let X equal the bottleneck capacity C, so that it fully utilizes the bottleneck link without self-induced drops. From X = C and (12), we derive

$$w_i = \frac{CD}{n},\quad 1\le i\le n. \tag{13}$$

Finally, since TCP's sending window size is upper-bounded by the advertised window size (awnd), AIPM can emulate the equal sending window assignment in (13) by assigning an identical awnd to its connections:

$$\mathrm{awnd}_i = \frac{CD}{n},\quad 1\le i\le n, \tag{14}$$

where awnd_i is the awnd of connection i. Although such emulation is suboptimal, it avoids a polarized allocation of sending window sizes and thus decreases the timeout probability.
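As an illustration of this step, the sketch below caps each connection's advertised window by shrinking its receive buffer before connecting. SO_RCVBUF is a standard socket option, but treating it as an exact awnd knob is a simplification (the kernel may round or rescale the value it is given), and all constants here are our assumptions, not values from the paper's implementation.

```python
# Sketch: emulate eq. (14) by capping the receive buffer, which upper-bounds
# the window a TCP receiver advertises. Constants are illustrative.
import socket

MSS = 1000                   # packet size used in the paper's simulations (bytes)
C_BPS = 1_000_000_000        # bottleneck capacity, assumed known (Section 3.4)
D = 100e-6                   # propagation RTT, assumed known (Section 3.4)

def open_connection(server_addr, n):
    """Connect to one server with awnd ~= C*D/n, per eq. (14)."""
    bdp_bytes = int(C_BPS / 8 * D)       # bandwidth-delay product, in bytes
    awnd = max(MSS, bdp_bytes // n)      # equal share, at least one packet
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect() so the cap applies from the handshake onward.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, awnd)
    s.connect(server_addr)
    return s
```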

Of course, even if AIPM strictly follows (14) and allocates awnd equally among its connections, timeout may still happen due to background traffic. Therefore, AIPM must further decrease the timeout risk by adaptively tuning the number of connections. Besides, it must quickly recover data transmission from timed-out connections. These two demands are met by the following two mechanisms, respectively.

3.2 Determine the Proper Number of Concurrent Connections with a Sliding-Window Mechanism

To reduce the timeout probability while keeping goodput high, AIPM must connect to only a subset of the servers at a time. The question is which servers AIPM should connect to. To answer it, AIPM employs a sliding-window-like mechanism to maintain a window of the concurrently existing connections, which we term the connection window, or con_wnd for short. When the existing connections are fewer than the con_wnd size, AIPM establishes a new connection and admits it to con_wnd. When a connection finishes, AIPM removes it from con_wnd.

AIPM uses an additive increase multiplicative decrease (AIMD) policy to decide the con_wnd size n. The initial value is n = 1. Whenever a connection in con_wnd completes, AIPM infers that the bottleneck link is not jammed, so it gradually increases n as

$$n \leftarrow n + 1. \tag{15}$$

The growth of the number of concurrent connections inevitably leads to timeout. Based on the fact that minRTO (200 ms) is much greater than a connection's ordinary lifetime (mostly less than 1 ms), AIPM deduces that a connection has timed out if three new connections have been admitted and finished since the last time the connection transmitted any data.

After detecting a timeout, AIPM realizes that the current timeout probability is too high. According to (11), AIPM can reduce the timeout probability by lowering the number of concurrent connections n while fixing the other parameters. Hence, AIPM halves con_wnd as follows:

$$n \leftarrow \frac{n}{2}. \tag{16}$$

Because AIPM does not reduce the total awnd when performing (16), the live connections that have not timed out can quickly occupy the spare bandwidth and maintain relatively high link utilization.

    3.3 Fast Reconnection and Slow Withdrawal

1) Fast reconnection: For a timeout-broken connection, AIPM terminates the connection (by sending a finish (FIN) segment to the connection's server) and removes it from con_wnd. AIPM then reconnects to that data server as soon as con_wnd allows. Since the SRU is small (typically dozens of kB), the server can retransmit it within a negligible period of time. This scheme, termed fast reconnection, enables AIPM to quickly recover data transmission from timeout-broken servers rather than being slowed down by TCP's sluggish timeout retransmission mechanism.

2) Slow withdrawal: When timeout occurs on some connections, if AIPM naively followed (16) and instantly halved the number of concurrent connections, it might have to close some live connections that have not timed out. However, these live connections have already cut their sending windows after seeing packet losses and are unlikely to cause further timeouts. Therefore, closing these live connections at once would cause unnecessary data retransmissions and might even lead to link under-utilization.

To avoid these two issues, AIPM adopts the slow withdrawal scheme in the presence of timeout. Specifically, upon detecting a timeout, AIPM records the current con_wnd size n, and then slowly decreases con_wnd by one each time a live connection finishes. This means that AIPM never closes a live connection that is still transmitting data. Moreover, it gives the live connections sufficient time to grow their congestion windows and fully utilize the spare bandwidth left by the closed or timed-out connections (since the delays in data centers are small, the live connections can grow their congestion windows very fast). Once con_wnd shrinks to n/2, AIPM ends slow withdrawal and resumes the normal additive-increase operation (15).
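Putting Sections 3.2 and 3.3 together, a minimal sketch of the connection-window logic might look as follows. The class name, the counters, and the callback structure are ours, not the authors' implementation.

```python
# Sketch of con_wnd management: additive increase (15), loss detection by the
# "three completions without progress" rule, one-shot halving (16), and slow
# withdrawal. Structure and names are illustrative.
class ConnectionWindow:
    def __init__(self):
        self.n = 1                    # con_wnd size, initially 1
        self.withdraw_target = None   # n/2 goal while in slow withdrawal
        self.idle = {}                # conn -> completions since its last data

    def can_open(self):
        """True if a new connection may be admitted to con_wnd."""
        return len(self.idle) < self.n

    def on_open(self, conn):
        self.idle[conn] = 0

    def on_data(self, conn):
        self.idle[conn] = 0           # progress: reset the staleness counter

    def on_complete(self, conn):
        self.idle.pop(conn, None)
        if self.withdraw_target is None:
            self.n += 1               # additive increase, eq. (15)
        elif self.n > self.withdraw_target:
            self.n -= 1               # slow withdrawal: shrink one per finish
            if self.n <= self.withdraw_target:
                self.withdraw_target = None   # resume normal AI
        # Every remaining live connection has now seen one more completion.
        for c in self.idle:
            self.idle[c] += 1
        for c in [c for c in self.idle if self.idle[c] >= 3]:
            self.on_timeout(c)        # deemed lost: fast reconnection follows

    def on_timeout(self, conn):
        del self.idle[conn]           # close now; reconnect when can_open()
        if self.withdraw_target is None:
            self.withdraw_target = max(1, self.n // 2)   # halve once, eq. (16)
```

One design point worth noting: because on_timeout sets the withdrawal target only when none is active, a burst of detected losses still halves con_wnd only once, matching the discussion in Section 3.4.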

3.4 Discussion of Design Issues

1) How can AIPM know the bottleneck link's capacity C? Modern data centers employ mechanisms such as uniform high capacity between racks and load balancing so that congestion happens only at the edge top-of-rack (TOR) switches[8]. This feature allows AIPM to conveniently set C to the link capacity of the TOR switch.

2) How can AIPM know the round-trip propagation delay D? A data center's network settings (i.e., hardware, framework, topology, etc.) are usually stable over a relatively long period. Thereby, the network administrator can measure D offline and feed the value to AIPM whenever the network settings change.

3) How should AIPM react if it observes several timed-out connections at once? To avoid link under-utilization, after detecting timeout-broken connections, AIPM starts slow withdrawal normally and halves the window only once. AIPM will not further halve con_wnd even if it detects more timeout-broken connections during the slow withdrawal state.

4. Empirical Validation of AIPM

    4.1 Simulation Settings

We evaluate the performance of AIPM with NS2 simulations in two network scenarios. The first is a single 1 Gbps bottleneck link with a 64 kB buffer and 100 μs RTT; it represents static network environments. The second uses the leaf-spine topology in Fig. 2, which is commonly used in data centers[7]. The network connects 144 end-hosts through 9 leaf (i.e., top-of-rack) switches, which are connected to 4 spine switches in a full mesh. Each leaf switch has 16 downlinks of 1 Gbps to the hosts and 4 uplinks of 4 Gbps to the spine. The RTT between two hosts connected to different leaf switches is 100 μs. Background TCP flows arrive following a Poisson process, where the source and destination of each flow are chosen randomly, and each flow's size follows the distribution observed in real-world data mining workloads[7], as shown in Fig. 3. The background flow arrival rate is set to produce a load of 0.5. This scenario represents realistic data center network environments that are complex and dynamic. Throughout the simulations, the client requests a data block that is scattered over N servers, and it requests the next data block after all N servers finish sending the current one. The data packet size is 1000 bytes, and the acknowledgement (ACK) size is 40 bytes.

    Fig. 2. Leaf-spine network topology used in the second simulation scenario.

    Fig. 3. Flow size distribution in the second simulation scenario is based on real-world measurements of data mining workloads[7].

We compare AIPM with the Naïve method (i.e., the client concurrently connects to all N servers), as well as two state-of-the-art application-layer solutions, namely OSDT[1] and OSM[11]. Similar to AIPM, these two solutions also try to mitigate incast by restricting the number of concurrent TCP connections and their sending rates. The major difference is that AIPM tunes its own parameters according to the network scenario, whereas OSDT and OSM are specified only for predefined environments with a single bottleneck link and no background TCP traffic. Due to this difference, AIPM remarkably outperforms OSDT and OSM in changing network environments, as we will see below.

    4.2 Fixed SRU Size

In this subsection, we fix the SRU size of each server to 256 kB and investigate the goodput of AIPM in the two aforementioned network scenarios.

Fig. 4 shows that AIPM achieves the best goodput (0.68 Gbps to 0.88 Gbps) in both cases. The reason is that AIPM dynamically adjusts each connection's advertised window size (awnd) and the connection window size (con_wnd) based on its estimation of the network state (i.e., whether some connections incur timeout). Such adjustment enables AIPM to adaptively minimize the incast risk and rapidly recover from timeout even in network environments with multiple bottleneck links and varying background traffic (Fig. 4 (b)).

Conversely, OSDT and OSM both restrict the number of concurrent connections to predefined values. Although such fixed values are well suited to static conditions, they can hardly avoid incast in dynamic network environments where both the bottleneck links and the available bandwidth can change drastically. This is why OSDT and OSM achieve goodput below 0.3 Gbps in Fig. 4 (b).

    Fig. 4. Goodput with SRU=256 kB for (a) a single bottleneck link without background traffic and (b) leaf-spine topology with background TCP load of 0.5.

    4.3 Varying SRU Size

Next, we fix the overall data block size to 2 MB and set the SRU size to 2 MB/N, where N is the total number of servers. We evaluate AIPM's performance in terms of goodput and request completion time.

    Fig. 5. Goodput with block size=2 MB for (a) a single bottleneck without background traffic and (b) leaf-spine topology with background TCP load of 0.5.

As Fig. 5 illustrates, AIPM achieves higher goodput than the alternative solutions in both network scenarios, which again demonstrates that AIPM can effectively address the incast issue even in highly dynamic environments. Observe that the goodput of AIPM, OSDT, and OSM all decline as N grows, because a larger N reduces each server's SRU and thus decreases the average sending window size of the concurrent connections. However, AIPM's goodput decreases more slowly than the other two's thanks to its adaptive adjustment of the number of concurrent connections. Indeed, AIPM adapts to small SRU values by allowing more connections to send data concurrently (i.e., a larger con_wnd), so that it can fully utilize the bottleneck link and keep high goodput regardless of the SRU value.

Fig. 6 compares AIPM's request completion time (RCT) with the other three schemes. As we can see, AIPM's RCT remains the smallest in both scenarios. Particularly, in the leaf-spine scenario, AIPM's RCT is less than 20% of OSM's or OSDT's RCT, and less than 3% of Naïve's RCT. This result clearly demonstrates that AIPM effectively avoids incast by triggering no TCP retransmission timeouts. With such small RCTs, AIPM makes the client application respond more promptly to the upper-level user, and hence improves the user experience.

Fig. 6. Request completion time (RCT) with block size=2 MB for (a) a single bottleneck without background traffic and (b) leaf-spine topology with background TCP load of 0.5.

    4.4 Higher Bottleneck Capacity

Finally, we explore the scalability of AIPM in higher-speed data centers. In the single-bottleneck scenario, we increase the bottleneck capacity from 1 Gbps to 10 Gbps. In the leaf-spine scenario, we increase the edge link capacity from 1 Gbps to 10 Gbps and the core link capacity from 4 Gbps to 40 Gbps. Other settings remain unchanged, i.e., SRU = 256 kB, buffer size = 64 kB, and RTT = 100 μs.

As Fig. 7 shows, the goodput of AIPM is generally higher than that of the other methods. In particular, AIPM maintains goodput up to 91% of the bottleneck capacity in the network with no background TCP traffic (Fig. 7 (a)), and it achieves nearly twice the goodput of the alternative solutions when coexisting with background TCP traffic (Fig. 7 (b)). Such performance shows that AIPM scales readily to future higher-speed data centers.

    Fig. 7. Goodput with the bottleneck capacity C=10 Gbps for (a) a single bottleneck link without background traffic and (b) leaf-spine topology with background TCP load of 0.5.

5. Conclusions

We built an analytical model to reveal how TCP incast is affected by various application-related factors. From this model, we derived two guidelines for minimizing the incast risk: equally allocating the sending rate among connections and restricting the number of concurrent connections.

Based on the analytical results, we designed an adaptive application-layer solution to incast, which allocates an equal advertised window to connections and uses a sliding connection-window mechanism to manage concurrent connections. Simulation results indicate that our solution effectively eliminates incast and achieves high goodput in various network scenarios.

    Appendix

We adjust the sending rate sum X to maximize the timeout-free probability in (6) while fixing the other parameters. According to (9), we let the sending rates be x_i = X/n for 1≤i≤n, and express the timeout-free probability (6) as

$$f(X) = \left[1 - \left(\frac{X+Y-C}{X+Y}\right)^{XD/n}\right]^{n}.$$

If the background traffic Y is much smaller than the sum of the connections' sending rates X, the timeout-free probability reduces to $\left[1-\left(\frac{X-C}{X}\right)^{XD/n}\right]^{n}$, which is a decreasing function of X. Conversely, if Y is much greater than X, the drop rate is nearly independent of X, and f(X) becomes $\left[1-\left(\frac{Y-C}{Y}\right)^{XD/n}\right]^{n}$, which is an increasing function of X. As a result, the optimal X that maximizes f(X) depends on the background traffic Y.
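The two limiting behaviors can be checked numerically with the one-round model used earlier; the parameter values below (C = 125000 packets/s, D = 100 μs, n = 4) are our own arbitrary choices for illustration.

```python
# f(X) = [1 - p(X)^(X*D/n)]^n with p(X) = max(0, (X+Y-C)/(X+Y)).
C, D, n = 125_000, 100e-6, 4

def f(X, Y):
    p = max(0.0, (X + Y - C) / (X + Y))
    return (1.0 - p ** (X * D / n)) ** n

for X in (150_000, 300_000, 600_000):
    # Y << X: f falls as X grows.   Y >> X: f rises as X grows.
    print(X, f(X, Y=1_000), f(X, Y=5_000_000))
```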

References

[1] S. Zhang, Y. Zhang, Y. Qin, Y. Han, Z. Zhao, and S. Ci, "OSDT: A scalable application-level scheduling scheme for TCP incast problem," in Proc. of IEEE Intl. Conf. on Communications, 2015, pp. 325-331.

[2] Y. Zhang and N. Ansari, "On mitigating TCP incast in data center networks," in Proc. of IEEE Conf. on Computer Communications, 2011, pp. 51-55.

[3] H. Wu, J. Ju, G. Lu, C. Guo, Y. Xiong, and Y. Zhang, "Tuning ECN for data center networks," in Proc. of the 8th ACM Intl. Conf. on Emerging Networking Experiments and Technologies, 2012, pp. 25-36.

[4] P. Cheng, F. Ren, R. Shu, and C. Lin, "Catch the whole lot in an action: Rapid precise packet loss notification in data centers," in Proc. of the 11th USENIX Symposium on Networked Systems Design and Implementation, 2014, pp. 17-28.

[5] M. Alizadeh, A. Greenberg, D. A. Maltz, et al., "Data center TCP (DCTCP)," ACM SIGCOMM Computer Communication Review, vol. 40, no. 4, pp. 63-74, Oct. 2010.

[6] J. Zhang, F. Ren, L. Tang, and C. Lin, "Modeling and solving TCP incast problem in data center networks," IEEE Trans. on Parallel and Distributed Systems, vol. 26, no. 2, pp. 478-491, Feb. 2015.

[7] M. Alizadeh, S. Yang, M. Sharif, et al., "pFabric: Minimal near-optimal datacenter transport," ACM SIGCOMM Computer Communication Review, vol. 43, no. 4, pp. 435-446, 2013.

[8] H. Wu, Z. Feng, C. Guo, and Y. Zhang, "ICTCP: Incast congestion control for TCP in data-center networks," IEEE/ACM Trans. on Networking, vol. 21, no. 2, pp. 345-358, 2013.

[9] W. Bai, K. Chen, H. Wu, W. Lan, and Y. Zhao, "PAC: Taming TCP incast congestion using proactive ACK control," in Proc. of the 22nd IEEE Intl. Conf. on Network Protocols, 2014, pp. 385-396.

[10] J. Hwang, J. Yoo, and N. Choi, "Deadline and incast aware TCP for cloud data center networks," Computer Networks, vol. 68, pp. 20-34, Feb. 2014.

[11] K. Kajita, S. Osada, Y. Fukushima, and T. Yokohira, "Improvement of a TCP incast avoidance method for data center networks," in Proc. of IEEE Intl. Conf. on ICT Convergence, 2013, pp. 459-464.

[12] H. Zheng and C. Qiao, "An effective approach to preventing TCP incast throughput collapse for data center networks," in Proc. of IEEE Global Telecommunications Conf., 2011, pp. 1-6.

[13] Y. Yang, H. Abe, K. Baba, and S. Shimojo, "A scalable approach to avoid incast problem from application layer," in Proc. of the 37th IEEE Annual Computer Software and Applications Conf. Workshops, 2013, pp. 713-718.

[14] W. Chen, F. Ren, J. Xie, C. Lin, K. Yin, and F. Baker, "Comprehensive understanding of TCP incast problem," in Proc. of IEEE Conf. on Computer Communications, 2015, pp. 1688-1696.
