
    GFCache: A Greedy Failure Cache Considering Failure Recency and Failure Frequency for an Erasure-Coded Storage System

    Computers, Materials & Continua, 2019, Issue 1

    Mingzhu Deng, Fang Liu, Ming Zhao, Zhiguang Chen and Nong Xiao

    Abstract: In the big data era, data unavailability, either temporary or permanent, has become a normal occurrence on a daily basis. Unlike a permanent data failure, which is fixed through a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly revived data is discarded after serving the request, under the assumption that data experiencing a temporary failure may come back alive later. Such disposal of failed data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g. caused by either recurring unavailability of a data block or multiple data failures in one stripe), thereby straining system performance. To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs an opportunistic greedy caching approach that promotes not only the failed data, but also sequential failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and failure frequency to accommodate data corruption with a good hit ratio. The data stored in GFCache also supports fast reads for normal data access. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding scheme and parameters. Evaluations show that GFCache achieves a good hit ratio with our caching algorithm and significantly boosts system performance by keeping vulnerable data in the cache and thus reducing unnecessary data recoveries.

    Keywords: Failure cache, greedy recovery, erasure coding, failure recency, failure frequency.

    1 Introduction

    In recent years, unstoppable data explosion [Wu, Wu, Liu et al. (2018); Sun, Cai, Li et al. (2018)] generated by wireless sensors [Yu, Liu, Liu et al. (2018)] and terminals [Liu and Li (2018); Guo, Liu, Cai et al. (2018)] keeps driving up the demand for larger space in various big data storage systems. Due to the ever-growing data volume and the ensuing concern over space overhead, erasure coding, blessed with its capability to provide higher levels of reliability at a much lower storage cost, is gaining popularity [Wang, Pei, Ma et al. (2017)]. For instance, Facebook clusters employ an RS (10, 4) code to save money [Rashmi, Shah, Gu et al. (2015); Rashmi, Chowdhury, Kosaian et al. (2016)], while Microsoft invented and deploys its own LRC code in Azure [Huang, Simitci, Xu et al. (2012)]. Conversely, countless commercial components of storage systems are inherently unreliable and susceptible to failures [Zhang, Cai, Liu et al. (2018)]. Moreover, as the system aggressively scales up to accommodate the influx of data [Liu, Zhang, Xiong et al. (2018)], data corruptions, either temporary or permanent, become a normal daily occurrence. In an erasure-coded storage system, a reconstruction operation is invoked to recover failed data blocks with the help of parity blocks. Unlike a permanent data failure, which is fixed through a background job, temporarily unavailable data is recovered on the fly in order to serve the ongoing read request. This is because, while permanent failures are under system surveillance by monitoring mechanisms like heartbeat, a temporary data failure is unknown until the data is accessed. In a big data storage system like Hadoop, an I/O exception will occur upon accessing unavailable data. If the exception persists after several repeated attempts, a degraded read results. Although a data recovery process is triggered immediately to fulfill a degraded read request, system performance is still degraded. This is due to the disproportionate amounts of I/O and network bandwidth [Cai, Wang, Zheng et al. (2013)] consumed by each recovery process. For example, given a (6, 4) RS code and a block size of 16 MB, reconstructing one corrupted data block in a stripe requires a 16 MB block to be read and then downloaded from each of six other healthy nodes. In general, given a (k, m) MDS erasure code, k times of overhead is incurred to reconstruct one block.
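
    To make the overhead concrete, here is a back-of-the-envelope sketch in Python (the function name and its use are illustrative, not from the paper):

```python
# For a (k, m) MDS code, reconstructing one block reads one block
# from each of k surviving nodes, i.e. k times the block size in traffic.
def recovery_traffic_mb(k, block_mb):
    return k * block_mb

print(recovery_traffic_mb(6, 16))  # RS (6, 4) with 16 MB blocks -> 96 MB per degraded read
```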

    However, the newly revived data is either discarded immediately or not tracked after serving the request. Such disposal is somewhat reasonable, under the assumption that data experiencing a temporary failure could later come back alive. On the other hand, such a design overlooks the importance of keeping recovered data. Due to the uncertainty of causal factors, like hardware glitches, neither which data will become unavailable nor when this occurs conforms to any particular distribution. This makes failure patterns difficult to find and to follow amid the various failure statistics. In other words, repeated temporary data unavailability is likely to occur for reasons such as persisting system hot spots, recurring software upgrades and so forth. Therefore, the existing disposal of recovered data inevitably leads to repeated recoveries due to recurring unavailability, thereby straining system resources and performance. Furthermore, statistics [Subedi, Huang, Liu et al. (2016)] indicate that multiple-failure scenarios occur, and multiple blocks of a stripe can be unavailable simultaneously or incrementally. Given the current data recovery practice of one reconstruction operation per degraded read, multiple data corruptions on a stripe inevitably require multiple data recovery processes. However, all of those failed data blocks could be produced by a single recovery process. Such redundant recoveries on a single stripe therefore lead to wasted system resources [Li, Cai and Xu (2018)] and degraded performance. Additionally, big data storage systems like Azure often support access by multiple clients. Without a central medium that stores recovered data and enables the sharing of failure information among multiple clients, the repeated and redundant data reconstruction operations happening to one client would unnecessarily reoccur among different clients. Therefore, we argue that buffering and sharing failure information is instrumental in avoiding unnecessary data reconstruction and will improve system performance.

    To this end, this paper considers a typical distributed setting of an erasure-coded storage system and proposes GFCache, a cache of corrupted data that serves those purposes. GFCache employs an opportunistic greedy caching approach that promotes not only the failed data, but also sequential failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and failure frequency to accommodate data corruption with a good hit ratio. The data stored in GFCache also supports fast reads for normal data access. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any coding scheme and parameters. Evaluations show that GFCache achieves a good hit ratio with our caching algorithm and manages to significantly boost system performance by reducing unnecessary data recoveries with vulnerable data in the cache. For instance, compared to the current system without a failure cache, GFCache reduces the average latency of a request to 12.19% of the original. Also, GFCache achieves a 24.62% hit ratio versus 22.74% for CoARC [Subedi, Huang, Liu et al. (2016)] under workload h3.

    The rest of the paper is organized as follows: Section 2 presents the background information. Section 3 illustrates the design of GFCache, Section 4 presents the experimental evaluation, Section 5 reviews the related work, and Section 6 concludes the paper.

    2 Background and motivation

    2.1 Erasure coding

    In comparison to replication, erasure coding essentially employs mathematical computation in the production of redundancy to provide data protection [Plank, Simmerman and Schuman (2008)]. In general, a stripe consisting of k + m partitions is used as the smallest independent unit that preserves the capability to reconstruct itself. Such capability is formed through an encoding process, where m parity partitions are produced from k data partitions through a matrix multiplication. In practice, a generator matrix G is used to form a mathematical relationship between the original data partitions D and the generated parity partitions P, which can be expressed as G × D = [D, P]^T, where G is a (k + m) × k matrix whose top k rows form the identity, so that every one of the k data partitions is protected. Furthermore, if the matrix formed by any k rows of the generator matrix G is invertible, any combination of no more than m partition failures can be restored through a similar inverse matrix multiplication with k surviving partitions. Such a desirable property is called the MDS (maximum distance separable) property [Plank (2013)], and the process of regenerating corrupted data partitions is called data recovery. So far the most widely used MDS code is the RS code. In comparison, non-MDS codes feature non-uniform numbers of participants in the encoding and decoding processes. For example, in LRC codes [Huang, Simitci, Xu et al. (2012); Sathiamoorthy, Asteris, Papailiopoulos et al. (2013)], fewer data blocks are needed to generate local parity blocks than global parity blocks.
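
    As a minimal illustration (an assumed toy code with k = 2, m = 2 over a field such as GF(2^8); the concrete coefficients are ours, not from the paper):

$$
G = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 2 \end{bmatrix}, \qquad
G \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} =
\begin{bmatrix} d_1 \\ d_2 \\ d_1 + d_2 \\ d_1 + 2d_2 \end{bmatrix} .
$$

    Any two of the four rows of G form an invertible 2 × 2 matrix, so any two surviving partitions, data or parity, suffice to solve for (d_1, d_2) and rebuild the stripe, which is exactly the MDS property described above.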

    2.2 Failure scenarios

    Data can be corrupted for various reasons, ranging from software glitches and hardware wear-out to possible human mistakes. Although a large school of studies focuses on gathering failure statistics, the time and place of a data corruption are near impossible to predict beforehand. Therefore, before actual data loss occurs, the process of somehow restoring the corrupted data is of great importance. In order to prioritize data recovery in different failure scenarios, a clear failure classification divides failures into permanent and temporary, according to the duration of an event. Permanent failures refer to permanently lost data, as with a node breakdown or a malfunctioning disk. Since a permanent failure often involves a large amount of data and may cause heavy damage, it is always under system surveillance so that alarms are raised in a timely manner. For example, in Hadoop, each node comprising the cluster reports its health to the metadata server through a periodic heartbeat mechanism. Once a permanent failure is confirmed, repair work is scheduled at less busy hours to revive the data on a replacement node. This is because such repair work requires a great deal of cooperation from other nodes and takes a long time (e.g. days) to complete [Rashmi, Shah, Gu et al. (2015)].

    Conversely, temporary failures indicate a brief unavailability of the data caused by nondestructive reasons, such as a persisting system hot spot or a software upgrade. In essence, a temporary failure is transient and the data may revive later by itself. Instead of causing the damaging impact of data loss, a temporary failure often slows down the current access request. This is because a temporary failure cannot be detected until the data is accessed. Due to this nature, a temporary failure needs to be dealt with right away to finish serving the ongoing request. In systems like Hadoop, a healthy node is randomly chosen to initiate a corresponding recovery by reading data from other surviving nodes on the fly.

    A large school of studies gathers failure statistics and shows that temporary failures are more common and account for the majority of failures in distributed storage systems. Moreover, among all levels of failures, single failures account for the vast majority (99.75%), yet multiple failures are not impossible [Pinheiro (2007); Ma (2015)].

    3 Design

    This section details the design of the proposed GFCache for an erasure-coded storage system.

    3.1 Architecture

    Although big data storage systems feature good scalability with a distributed data storage cluster, clients still need to contact the centralized Metadata Server for metadata queries. For example, in Hadoop, an access request from the client is sent to the master NameNode before the client contacts the corresponding DataNode(s) for actual data access. This paper argues that providing a failure cache installed in the Metadata Server will not greatly impact the traditional flow of data access, but can significantly improve performance by avoiding unnecessary data recovery processes. Fig. 1 demonstrates the proposed architecture, with our GFCache installed upon a distributed storage system. In the system, two Metadata Servers manage the metadata and supervise a cluster of DataNodes. GFCache is implemented on top of an NVM storage device, such as an inexpensive, small-sized solid-state drive (SSD). A failure cache management module controls both the promotion and the eviction of the data. GFCache can be directly plugged into the Metadata Server, such as the NameNode in Hadoop, and can be shared by all clients. GFCache can also be installed as a standalone node, independent of the Metadata Server. Upon receiving queries from a client, the Metadata Server is able to check GFCache for fast access. Otherwise, the client continues to contact the corresponding Data Servers to fetch the requested data. In terms of data recovery, recently failed data is stored in GFCache after being recovered by a designated Data Server as a background job with lower priority. In return, before performing a data recovery, the designated node (a Data Server) can check GFCache to gather the participating data needed for reconstruction. As there is already a module for erasure coding that maintains information about erasure-coded data (e.g. the RaidNode in Hadoop), the cache management module can communicate and coordinate with it for failure data eviction, such as the placement of the evicted data and the update of the stripe metadata.

    Figure 1: Architecture of adding GFCache

    3.2 Greedy caching

    Storing the failed data in a cache enables the sharing of failure information, which avoids repeatedly regenerating the same data. However, simply caching currently unavailable data does not reduce the redundant recovery processes caused by multiple unavailable data blocks in one stripe. This is because, unlike a node failure, which can be monitored via the heartbeat mechanism, a temporary data failure is unknown until an attempt to access it is made. Each failed data block will incur one degraded read, which must recover the data before it can be promoted to the failure cache, as shown in subplot (a) of Fig. 2. In this figure, the unavailable data blocks B and C in stripe 1 each experience a degraded read and are cached after their recovery.

    Figure 2: (a) The architecture and flow of a traditional big data storage system. (b) Our proposed architecture installs an NVM cache as a plug-in within the Metadata Server, which captures newly recovered data and supports fast data access

    Therefore, aside from recurrences of data unavailability, GFCache also strives to diminish the redundant data recovery processes of multiple data blocks in a stripe. Given the uncertain status of data before access, GFCache employs greedy caching. In essence, this is a means of opportunistically prefetching additional data for a better hit ratio. Such greedy practice is based upon the key insight that the whole information of a stripe can be recovered by a single reconstruction process, given any combination of no more than m failures under a (k, m) MDS erasure code. In other words, the data produced by many recovery processes due to multiple failures in one stripe can actually be produced by one single recovery, leading to considerable savings. Subplot (b) of Fig. 2 illustrates our greedy caching upon each degraded read recovery. We can see that upon the recovery of data block B, all the blocks of stripe 1 can be produced. GFCache greedily caches block B and block C, thus resulting in a failure cache hit when the unavailability of data block C is encountered upon its access. In this way, the redundant recovery of block C is avoided.

    Fundamentally, greedy caching is opportunistic, and how and how much to be greedy actually matters. In plain words, over-greed may lead to abusive use of cache space and the eviction of more important failure data, while under-greed does not serve the purpose of reducing redundant recoveries. To this end, GFCache adopts three adjustments concerning which data to prefetch with greed, how much of it, and how to manipulate such data in the cache. Firstly, GFCache caches the sequential data after the current vulnerable block in the same stripe, e.g. data block C behind data block B. This is due to the assumption that access locality may lead to failure locality. Additionally, GFCache maintains a greedy window of size m to limit the abusive use of cache space, m being the upper bound of the failure tolerance of a (k, m) MDS erasure code. The window size can be adjusted when using a non-MDS erasure code, like LRC codes. Last but not least, except for the actually corrupted data (e.g. block B), data promoted to GFCache with greed (e.g. block C) is assumed to have an unconfirmed possibility of failure in the near future. Therefore, GFCache treats such data as failure-likely and accordingly keeps it at the place closest to eviction, so that it will not occupy cache space for long if the prediction does not result in a hit. Pseudocode for greedy caching is included in the replacement algorithm; a simplified sketch is given below.
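
    A minimal Python sketch of the greedy promotion step, under stated assumptions: blocks are keyed by (stripe, index), recover_stripe() stands in for one full reconstruction, and the cache interface is hypothetical:

```python
def on_degraded_read(cache, stripe_id, failed_idx, k, m, recover_stripe):
    # One reconstruction yields every block of the stripe (MDS property).
    blocks = recover_stripe(stripe_id)
    # Promote the actually failed block as confirmed-corrupt data.
    cache.put((stripe_id, failed_idx), blocks[failed_idx], confirmed=True)
    # Greedily promote up to m sequential blocks after the failed one;
    # they are only failure-likely, so they sit closest to eviction.
    for offset in range(1, m + 1):
        idx = failed_idx + offset
        if idx < k:  # stay within the data blocks of the stripe
            cache.put((stripe_id, idx), blocks[idx], confirmed=False)
    return blocks[failed_idx]
```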

    3.3 Failure caching replacement algorithm

    Essentially, the significance of a cache depends on how data is managed within the device. At the core of caching, various innovative and powerful cache replacement algorithms have been proposed for incoming normal workload accesses. GFCache distinguishes itself by caching newly recovered data that has recently undergone temporary unavailability. In other words, instead of interacting with the normal access of healthy data, GFCache only functions when a failure happens; for normal access it behaves as static read-only storage without dynamic adjustments.

    Since data failure statistics are quite random and do not follow a certain distribution, GFCache considers a comprehensive failure caching replacement algorithm with respect to both failure recency and failure frequency to aim for a higher hit ratio. In other words, this paper assumes that recently failed data is likely to fail again in the near future, and that data which fails often is prone to fail again. By keeping more recently and more frequently failed data longer in GFCache, more time is allowed for such temporarily unavailable data to come back alive. The general idea behind this combined consideration can be expressed as C = W × R + (1 - W) × F, in which R stands for failure recency and F for failure frequency. F is incremented by one when a data block is promoted into the cache due to actual corruption. For failure-likely data cached by greed, R is set to zero and F is not incremented, since its corruption is not confirmed yet. The data with the smallest C is evicted when the cache is full. Since the relative importance of failure recency and failure frequency changes dynamically, an adaptive update of W is vital to maintaining a good hit ratio. Algorithm 3.1 provides the details of our comprehensive caching algorithm with dynamic tuning, which draws inspiration from the ARC algorithm [Megiddo and Modha (2003)].

    In Algorithm 3.1, different treatments are applied to corrupted data and data cached by greed. If data is cached due to its own corruption, the Failure ARC (FARC) algorithm is used for adjustment. In comparison, data cached by greed is directly put into the place of GFCache nearest to eviction if it is a miss. For any evicted data, if its original copy has come back alive, it can be discarded. Otherwise, it is written back to its original node or a designated node, after which the metadata information of the corresponding stripe is updated accordingly. Note that if the original residing node of the evicted data happens to be undergoing a permanent node failure, the data can be written directly to the replacement node, saving some reconstruction resources.
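
    Algorithm 3.1 itself is not reproduced here; the following Python sketch only illustrates the scoring and eviction idea described above. The entry fields, the recency normalization, and the fixed starting weight are assumptions, and the paper's ARC-style adaptive tuning of W is elided:

```python
import time

class Entry:
    def __init__(self, data, confirmed):
        self.data = data
        # Greedily cached (unconfirmed) data gets R = 0 and F = 0,
        # which places it closest to eviction.
        self.last_failure = time.time() if confirmed else 0.0
        self.failures = 1 if confirmed else 0

class FailureCache:
    def __init__(self, capacity, w=0.5):
        self.capacity, self.w, self.entries = capacity, w, {}

    def score(self, e, now):
        recency = 1.0 / (1.0 + now - e.last_failure)            # more recent -> larger R
        return self.w * recency + (1.0 - self.w) * e.failures   # C = W*R + (1-W)*F

    def put(self, key, data, confirmed):
        if key not in self.entries and len(self.entries) >= self.capacity:
            now = time.time()
            victim = min(self.entries, key=lambda k: self.score(self.entries[k], now))
            del self.entries[victim]  # written back or discarded, per the text above
        self.entries[key] = Entry(data, confirmed)

    def hit(self, key):
        e = self.entries.get(key)
        if e is not None:  # a hit confirms the failure: bump recency and frequency
            e.last_failure, e.failures = time.time(), e.failures + 1
        return e
```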

    4 Experiments

    This section uses real-world traces to compare GFCache with other approaches: (1) no failure cache (No Cache), which is common in current big data storage systems, as shown in Fig. 1; (2) FARC (Failure ARC), which represents a straightforward adoption of the classic ARC [Megiddo and Modha (2003)]; (3) CoARC from related work [Subedi, Huang, Liu et al. (2016)]. Note that GFCache differs from FARC by our proposed greedy caching. As opposed to CoARC, GFCache features greedy caching and the consideration of both failure recency and failure frequency.

    4.1 Environments and workloads

    Simulator: This paper adopts a trace-driven simulation method for evaluation purposes. Our simulator simulates a distributed storage system with a cluster of storage nodes and is based on PFSsim [Liu, Figueiredo, Xu et al. (2013)], which is widely used in various research works [Liu, Cope, Carns et al. (2012); Li, Dong, Xiao et al. (2012a, 2012b); Li, Xiao, Ke et al. (2014)]. Our simulator runs on node 19 of the computing cluster of the VISA lab at ASU. The node has two Intel Xeon E5-2630 2.40 GHz processors, 62 GiB of RAM, and two 1 TB Seagate 7200 RPM disks (model ST1000NM0033-9ZM). The operating system is Ubuntu 14.04.1 with Linux kernel 3.16.0-30. By default, all simulations emulate a cluster of 12 storage nodes with an RS (8, 4) code.

    Datasets: The real-world traces we adopt come from Chen et al. [Chen, Luo and Zhang (2011)] and are widely used in academic studies and prototype implementations. In detail, the CAFTL traces come from three representative categories, ranging from typical office workloads (Desktop) and big data queries (Hadoop) to transaction processing on PostgreSQL (Transaction). More details on the collection of the CAFTL traces can be found in Chen et al. [Chen, Luo and Zhang (2011)].

    Failure Creation: Since there are no existing failure traces for the CAFTL workloads, randomization is used to generate data corruption and emulate degraded reads. We set the total failure rate to be around 1% of the whole working data set. In detail, we corrupt a random amount of data in a random stripe to cause unavailability. Results are averaged over 20 runs, in which the total failure rate varies slightly but conforms to the expected normal distribution.
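
    A hedged sketch of this injection step; the function, its parameters, and the block-granularity choice are our assumptions about how such randomization could be realized:

```python
import random

def inject_failures(num_stripes, blocks_per_stripe, failure_rate=0.01, seed=None):
    # Mark ~failure_rate of all blocks unavailable, choosing a random
    # block within a random stripe until the target count is reached.
    rng = random.Random(seed)
    target = max(1, int(num_stripes * blocks_per_stripe * failure_rate))
    failed = set()
    while len(failed) < target:
        failed.add((rng.randrange(num_stripes), rng.randrange(blocks_per_stripe)))
    return failed  # the simulator treats these (stripe, block) pairs as corrupted
```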

    Metrics: This paper adopts the latency and the hit ratio of the failure cache as the metrics for comparing the different approaches. Note that latency is averaged across all requests and then normalized by that of GFCache to rule out differences in units.
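
    Concretely, for an approach A the reported value is assumed to be

$$ \text{normalized latency}(A) = \frac{\overline{L}_A}{\overline{L}_{\text{GFCache}}}, $$

    where $\overline{L}$ denotes the mean latency over all requests, so GFCache itself always reports 1.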

    4.2 Effectiveness of a failure cache

    Fig. 3 shows the significant difference in the average latency of a request between having and not having a failure cache. We can clearly see that with a failure cache, the average latency of a request is reduced drastically. For instance, an 88.56% latency reduction is seen under workload d1, and the gap grows to almost 94% under workload h1. Although the performance gap varies, system performance is significantly improved as a whole. The reason behind the boost is that with failed data cached, fast data access becomes possible through cache hits in GFCache in the following two situations. (1) Normal data access is accelerated by a shortcut that checks GFCache during the communication with the Metadata Servers in the first place; a degraded read can thus be bypassed with a cache hit. (2) During a degraded read, helper data participating in the data recovery can be quickly fetched from GFCache instead of from the corresponding node. In other words, the fact that buffering corrupted data in a cache makes such a contribution to system performance justifies the installation of a failure cache, considering the decreasing cost of storage devices.

    Figure 3: Effectiveness of a failure cache

    4.3 Effectiveness of greedy caching

    Although it is straightforward to add a failure cache to an existing system, doing so fails to reduce the redundant data reconstruction operations occurring within a stripe without our proposed greedy caching technique. In order to measure the difference made by greedy caching, we implement a baseline failure cache called FARC (Failure ARC), which does not use the greedy caching technique.

    Figure 4: Effectiveness of the greedy caching

    Fig. 4 compares the average latency of a request between FARC and GFCache. In general, a request experiences a shorter latency under GFCache across all the workloads. The reduction ranges from around 13% to 58% depending on the workload. For example, GFCache outperforms FARC by 13.06% under workload h1. This is because, with greedy caching, failure-likely data is aggressively cached in GFCache. If this opportunistic gamble produces a cache hit in the near future, performance is boosted without suffering the redundant repairs of an otherwise degraded read. If data cached with greed turns out to be a cache miss, little overhead is incurred due to its early eviction from GFCache.

    4.4 Comparison with CoARC

    The work most related to our GFCache is CoARC from Subedi et al. [Subedi, Huang, Liu et al. (2016)], which features an LRF (least-recently-failed) failure caching algorithm and an aggressive recovery of all other temporarily unavailable blocks in the same stripe. Fig. 5 and Fig. 6 compare our GFCache with CoARC in latency and hit ratio, respectively. In Fig. 5, a close performance in latency is witnessed under some workloads (e.g. h6), whereas GFCache contributes a larger latency reduction than CoARC in other cases. For instance, GFCache experiences a 5.74% smaller latency under d1 than CoARC. Under h1, CoARC is 8.46% slower than GFCache.

    Figure 5: Latency comparison to CoARC

    Figure 6: Hit ratio comparison to CoARC

    In Fig. 6, unlike for a normal cache, the hit ratio of a failure cache is generally low (no more than 30%) regardless of the specific replacement algorithm. This is largely due to two reasons. Firstly, as an input source, data failures are far less numerous than normal data accesses. Secondly, the failure pattern of corrupted data is hard to capture, even with caching algorithms that prove effective in a normal cache.

    Regarding the contrast between GFCache and CoARC, the gap between them is not very big in general. For example, CoARC and GFCache exhibit similar hit ratios of 28.14% and 28.92%, respectively, under h6. However, GFCache surpasses CoARC in other cases. Under d1, GFCache achieves a hit ratio of 16.59%, as opposed to 14.51% for CoARC.

    We argue these gaps result from two aspects. One is that GFCache considers both failure recency and failure frequency to manage the cached data, while CoARC's LRF only focuses on failure recency, thus leading to higher hit ratios for GFCache in some cases. The other is that the aggressive approach in CoARC of recovering all the failed data in a stripe needs to wait for the identification of the last failed block to complete, thus introducing idle wait time. In contrast, the greedy caching adopted in GFCache causes no idle wait. However, greedy caching does incur cache misses due to its speculative opportunism.

    5 Related works

    This paper studies data recovery in an erasure-coded storage system with a failure cache. Therefore, we review the related work in the following order.

    Data Recovery of Erasure Coding: A large school of works has conducted excellent research on enhancing the data recovery of erasure coding, which can be classified into three categories: (1) designing new classes of coding algorithms to essentially reduce the data needed per recovery process [Dimakis, Godfrey, Wu et al. (2010); Huang, Simitci, Xu et al. (2012)]; (2) searching for a more efficient recovery sequence with fewer data reads [Khan, Burns, Plank et al. (2012)]; (3) proposing optimizations for different system and network settings [Fu, Shu and Luo (2014); Shen, Shu, Lee et al. (2017)]. All these works focus on facilitating each recovery process for either single or multiple failures. In comparison, this paper treats the data recovery process as a black box and differentiates itself by reducing repeated and redundant recoveries of failed data through buffering the failed data. Therefore, this paper is orthogonal and complementary to the above work.

    Normal Caching: Caching is one of the oldest and most fundamental techniques in modern computing, and it has been ubiquitously employed throughout the entire computing stack. Although various caching policies [Mattson, Gecsei, Slutz et al. (1970); Megiddo and Modha (2003)] have been proposed with different trade-offs, the common purpose of such caches is to accommodate incoming normal data accesses, rather than failed data. Therefore, this paper contrasts itself with conventional caches by buffering a completely different data source: temporarily unavailable data.

    Failure Caching: In terms of failure caching in the setting of an erasure-coded storage system, very little research pays attention to the recurring data recovery problem. Subedi et al. [Subedi, Huang, Liu et al. (2016)] treat each recovery process as a black box and first proposed CoARC, which essentially features a least-recently-failed (LRF) cache for buffering newly recovered data in order to eliminate repeated recoveries of the same data. This paper falls into the same track as Subedi et al. [Subedi, Huang, Liu et al. (2016)]. However, this paper distinguishes itself in the following two aspects: (1) our GFCache employs a greedy caching policy to buffer all the data in a stripe upon its first recovery, while CoARC waits to confirm all the unavailable blocks in the failed stripe before starting recovery; (2) our GFCache features a more sophisticated eviction policy considering both failure recency and failure frequency, while CoARC employs a simple LRF (least-recently-failed) algorithm on failed data. Besides, our GFCache is self-adaptive and scan-resistant while CoARC is not. Therefore, GFCache is able to achieve a higher hit ratio in general.

    6 Conclusion

    This paper proposes GFCache to address repeated and redundant data recoveries with the classic caching idea of buffering failed data. GFCache features greedy caching upon each data recovery process and designs an innovative, self-adaptive cache replacement algorithm with a combined consideration of failure recency and failure frequency. Last but not least, the cached data in GFCache provides fast read access for normal access workloads. Evaluations prove that GFCache achieves a good hit ratio and manages to significantly boost system performance.

    Acknowledgement: We greatly appreciate the anonymous reviewers for their insightful comments. This work is supported by the National Key Research and Development Program of China (2016YFB1000302) and the National Natural Science Foundation of China (61433019, U1435217).
