
    A method to extend available main memory for computer systems*


    HAO Xiaoran, CHEN Lan, NI Mao, PAN Lei

(EDA Center, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China)(Received 26 September 2019; Revised 19 November 2019)

Hao X R, Chen L, Ni M, et al. A method to extend available main memory for computer systems[J]. Journal of University of Chinese Academy of Sciences, 2021, 38(3): 423-432.

Abstract In operating systems, the swapping mechanism provides extended main memory space for memory-intensive applications. However, page-granularity swapping causes extra system I/O when objects are smaller than a page. This paper uses a NAND flash-based SSD to extend DRAM main memory and proposes an object-granularity hybrid main memory management scheme that reduces extra system I/O by caching as many hot objects as possible in DRAM. Compared with the Linux swap system, the proposed memory management scheme improves system performance by up to 47.5% for microbenchmarks and 73.3% for real-world memory-intensive workloads.

    Keywords memory extension; flash-based SSD; memory-intensive application; hybrid memory

For memory-intensive applications, computer systems must cache huge amounts of user data in main memory to achieve high performance, because of the performance gap between main memory and its extensions, such as the hard disk. However, the current main memory technology, dynamic random access memory (DRAM), faces the constraints of high power consumption and high price per bit. Moreover, the physical resources of a computer restrict the expansion of memory capacity. Hence, a large DRAM capacity makes computers neither cost-effective nor energy-efficient. When the available DRAM is nearly exhausted, the operating system extends main memory capacity using part of the storage, such as the hard disk.

Many new non-volatile memory (NVM) devices have emerged recently, such as phase change memory (PCM), magnetic random-access memory (MRAM), resistive random-access memory (RRAM), and NAND flash-based SSD. Some researchers [1-3] introduce PCM into the computer memory system, which reduces memory power consumption and increases memory capacity. However, PCM technology is not yet mature enough to be widely used in computer systems because of its high cost and limited capacity, as seen with Intel Optane memory. Among the various emerging NVM devices, the NAND flash-based SSD has been widely used in enterprise data storage systems. Compared with the hard disk, NAND flash memory offers low access latency, which makes it more suitable for main memory extension.

There has already been much research on using flash as a DRAM extension. The works in Refs. [4-6] propose integrating the SSD as a system swap device. The OS swapping mechanism works at page granularity, which causes unnecessary data transfer and wastes I/O bandwidth when objects are smaller than a page. Other research [7-8] maps SSD files into memory through POSIX mmap(). The performance of this method relies on smart data access patterns, and it achieves only a fraction of the SSD's performance. A few studies [9-10] modify the memory management of the OS and applications, which incurs large overhead. Yoshida et al. [11] implement the flash translation layer (FTL) as software on the host side, so that small but frequently accessed data such as hash tables are stored in DRAM, while the key/value data occupying most of the capacity are stored in a memory space composed of DRAM and flash. However, this application-optimized architecture requires redesigning the software FTL and the hardware acceleration module for each application, which leads to poor portability. Other approaches [12-13] implement hybrid memory by adding hardware extensions or modifications. Some efforts have adopted the SSD as a main memory extension by implementing a runtime library. SSDAlloc [14] boosts performance by managing data at object granularity, but it incurs extra page faults that add latency even when fetching data from the DRAM object cache. Chameleon [15] reduces the extra page faults by giving applications access to the entire physical memory, which wastes physical main memory.

In summary, using the traditional Linux mechanisms, such as the swap system and mmap(), to manage extended SSD memory cannot make full use of the performance advantages of the SSD, while hardware-based improvements and application-oriented solutions suffer from poor portability. Designing a dedicated extended-memory management method, as SSDAlloc and Chameleon do, is therefore an effective approach.

In this paper, we propose a hybrid main memory architecture named HMM_alloc (hybrid main memory allocator) to augment DRAM with SSD. HMM_alloc implements a library that provides several application interfaces for allocating and releasing memory in user space. Data access is managed at object granularity by a custom page fault handler mechanism, similar in spirit to that of Liu et al. [16]. When the object size is smaller than 4 KB, object-granularity management can reduce extra system I/O by caching as many hot objects as possible in DRAM. Our key contributions are as follows:

    1)HMM_alloc: A hybrid main memory architecture using SSD as the extension of main memory and using DRAM as the cache of SSD. To transparently manage the movement of data between DRAM and SSD, HMM_alloc offers interfaces for applications like malloc by implementing a runtime library.

    2)Memory management based on object granularity: In comparison with using SSD as system swap device which manages data at a page granularity, HMM_alloc works at an application object granularity to boost the efficiency of data accessing on SSD. To reduce the waste of physical memory caused by object granularity, HMM_alloc provides a flexible memory partition strategy and a multi-mapping strategy to manage the physical memory by micro-pages.

    3)Evaluation: The HMM_alloc architecture has been evaluated on a DRAM-restricted server with a number of memory-intensive workloads including microbenchmarks and some representative real-world applications.

    1 Design

    This section explains the details of several strategies to implement the HMM_alloc system.

    1.1 HMM system architecture

The proposed DRAM/SSD hybrid main memory architecture treats the NAND flash-based SSD as an extension of system main memory. It can efficiently handle allocation/de-allocation requests in user space and random read/write operations on a large number of various-sized objects. A runtime library takes charge of both virtual memory address space allocation and memory page exceptions. Access to data that is not in DRAM is handled by a custom page fault handler, as shown in Fig.1. Instead of page-granularity swapping, HMM_alloc works at application object granularity to boost system performance and make efficient use of the SSD. To this end, DRAM memory is partitioned into micro-pages of several sizes and offsets, both smaller than one page. When an object is newly accessed or fetched from the SSD, an appropriate micro-page is materialized dynamically to fill its page table entry.

    Fig.1 The data accessing of the hybrid memory

In the Linux swap system, as shown in Fig.2(a), data exchange between memory and storage is managed at page granularity. If an evicted page includes hot data, that page will be fetched back soon, which causes extra system I/O. In the proposed HMM_alloc system, as shown in Fig.2(b), memory is managed at object granularity (e.g., object size = 1 KB), so only cold objects are evicted to the SSD, which reduces the probability of hot data being swapped out of memory. Furthermore, the SSD is managed as a log-structured object store partitioned at sector granularity. Only objects, instead of whole pages, are transferred when data fetching happens. Hence, packing objects and writing them to a log not only reduces the write volume but also increases SSD lifetime.

    Fig.2 Comparison between Linux swap system and HMM_alloc system

    1.2 Virtual memory management

Virtual memory address allocation is managed in an object-per-page manner so that data access can be performed at object granularity instead of the page granularity of the traditional swap policy. That is, only one object is allocated on each virtual page, aligned at a varying offset. To reduce the DRAM waste caused by object granularity and to provide transparent tiering to applications, a compact caching method and a multi-mapping strategy are implemented: the offset of each object within its virtual memory page is aligned with its corresponding physical memory space. This is done by a kernel module that fills the page table entries. Thus, multiple virtual pages can be mapped onto one physical memory page, as shown in Fig.3.

    Fig.3 The mapping between virtual and physical memory

    1.3 Physical memory management

A large pool of physical memory is reserved for HMM_alloc when the system starts. Physical memory is divided into memory slabs of 1 MB. For each memory slab, physical page frames are partitioned into several micro-pages according to the fixed-size pattern of the current request. When a micro-page is required for a memory mapping, a unique slab is picked to satisfy the allocation of the requested size. All the pages of the slab are then divided into micro-pages with the predetermined size of the first object, and these micro-pages are grouped into different patterns according to their offsets.

Following our preliminary work [17], the system keeps a group of FIFO lists for the micro-pages with the same size and offset, using a two-FIFO clock algorithm. One FIFO list tracks active objects, whereas the other tracks inactive ones, which will be chosen as victim objects for swapping out. Micro-pages of a newly partitioned slab are first assembled in each free list, waiting for physical memory allocation. Once a micro-page is selected for an object loading, it is added to the head of the active FIFO list. On the next FIFO update, the access bit of the page at the active FIFO head is set to 0 to record its access characteristics. When all the physical slabs are used up and the free list of the required pattern is empty, some objects must be evicted to reclaim space for incoming requests. Eviction starts by checking the access bit of the page at the tail of the active FIFO. If the access bit is 1, the page was accessed recently and the object is moved back to the head of the active FIFO. If the access bit is already 0, the page is a good replacement candidate and is moved to the head of the inactive FIFO. The inactive FIFO list is managed in a similar way for the objects evicted from the active FIFO list.

    1.4 SSD management

The flash-based SSD is managed as a log-structured object store and holds, in its log, the objects flushed from DRAM. To reduce read/write latency and make better use of DRAM and I/O bandwidth, enough metadata is kept in DRAM to fetch objects instead of pages. A hierarchical page table indexed by virtual memory page address stores the size, offset, and SSD location of each object.

To match the read/write characteristics of the NAND flash-based SSD, reading is performed at page granularity and writing back is executed at block granularity. A clean log block is prepared for dirty-object flushing, and logs are operated in rounds. Dirty objects collected from DRAM are packed into a fresh log block, and their virtual page addresses are stored in the header of each block as reverse pointers. The SSD locations of these objects are then recorded in the hierarchical object table according to the reverse pointers.

SSDs do not support in-place updates, because a block must be erased before it can be written again. This requires each dirty object to be written to a new place, which turns its old location into garbage, so a garbage collection strategy is needed. Garbage collection runs as a background task of the runtime system when the free blocks on the SSD are insufficient for dirty-object eviction. The system maintains a GC table for the whole SSD that records the garbage volume of each block. When the hierarchical object table is updated and a former location exists, an invalid object of a certain size is created on that block, and the corresponding element in the GC table is increased. An SSD block is selected for recycling when its garbage volume exceeds a pre-defined threshold. The garbage block can be reused as a free block after all of its valid data objects have been gathered into a new SSD block.

    2 Implementation

    2.1 API

A runtime library is implemented by the proposed hybrid main memory architecture to replace the system standard library. It provides application interfaces analogous to those of the standard library, such as malloc and free. The interfaces are stated below.

1)void sysInit(), void sysQuit(): sysInit initializes the metadata of the hybrid main memory, and sysQuit releases all the resources before the application exits.

2)void *nv_malloc(SIZE_t userReqSize): allocates userReqSize bytes of memory for one object using the proposed memory allocator.

3)void *nv_calloc(SIZE_t nmemb, SIZE_t userReqSize): allocates userReqSize bytes of memory for each object of an array with nmemb objects.

4)void *nv_realloc(void *vaddr, SIZE_t userReqSize): adjusts the size of the memory space pointed to by vaddr to userReqSize bytes.

5)void nv_free(void *vaddr): deallocates the object with virtual address vaddr. The physical memory of the object is reclaimed to the corresponding free list according to its metadata stored in the object table.

    2.2 The kernel module

Applications call the proposed library to allocate the hybrid memory transparently. To track data accesses after allocating read/write-protected virtual pages to the applications, the proposal replaces the handler of the segmentation fault signal and registers a new function to deal with this page fault. When the protected virtual pages are accessed, they trigger the page fault processing function. Appropriate physical micro-pages are then allocated to the objects, and a kernel module is called to modify the PTEs so that the physical addresses of the micro-pages are mapped to multiple virtual pages. This enables the proposed architecture to provide thread-safe operations for concurrent applications. The same kernel module can also check and modify the access and dirty bits in the page table entries, which are used to manage the two FIFO lists and dirty-object write-back, respectively. To reduce the communication overhead between user and kernel space, a message buffer packs a number of PTE access requests into one message for transfer.

    2.3 Support for multithreading

To respond to concurrent requests from applications, multithreading is designed carefully to reduce conflicts among tasks such as the page fault handler, SSD writing, and garbage collection. SSD logical pages are locked while data objects are fetched from the SSD bulk store, whereas SSD logical blocks are locked while dirty objects are combined into a whole block ready to write back. All writing operations are performed in the background.

    2.4 Applicable scene

HMM_alloc applies to memory-intensive applications, such as in-memory databases. To speed up data access, user data are stored in main memory rather than on the storage device, which requires the computer system to provide enough physical memory. However, due to the constraints of DRAM and the computer system, main memory capacity cannot be expanded without limit. When physical memory is nearly exhausted, data exchange between DRAM and the swap device begins. Under these circumstances, HMM_alloc can effectively reduce the adverse effects of this data exchange on system performance by eliminating unnecessary transfers.

    3 Performance evaluation

    3.1 Experimental setup

All the experiments are conducted on a server with an eight-core Intel i7-4790 3.6 GHz processor and 6 GB of DRAM. A 64-bit Linux 2.6.32 kernel is installed, and a 120 GB Kingston SSD serves both as the SSD swap device and as the proposed main memory extension. If main memory is sufficient for the applications, there is no need to build the hybrid main memory, because its performance is then the same as that of the standard library. Therefore, data-access performance is evaluated only when the total main memory capacity is insufficient for a large-scale data set. To simulate out-of-core DRAM access, the DRAM capacity is restricted to 64 MB so that only a handful of objects can reside in DRAM. The specific parameters of the evaluation are given in Table 1.

    Table 1 The parameters of the evaluation

    3.2 Microbenchmarks

Microbenchmarks test the basic performance of random access with fixed object sizes ranging from 256 bytes to 4 KB. An array with a total size of 10 GB is loaded onto the SSD for all-read, all-write, and 80∶20 read-write workloads.

Figure 4 shows the average latency of HMM_alloc and SSD-swap at different object sizes. Accessing objects allocated by the HMM_alloc runtime achieves much higher performance than in the SSD-swap system, beating SSD-swap by a factor of 1.23-1.78 depending on object size and read/write ratio. As the object size increases, the benefit of the hybrid memory decreases, because less invalid data is accessed per read or write. When the object size reaches one page, the performance improvement drops to 23%, which comes from the log-structured SSD store and background dirty-data flushing.

    Fig.4 Performance of microbenchmarks with fixed object size

Allocation requests with random object sizes are also used to test the proposed hybrid memory. As shown in Fig.5, three sets of object sizes in different ranges are tested. Even though random sizes can waste memory space, HMM_alloc still outperforms the SSD-swap system by up to 47.5% in average access time.

    Fig.5 Performance of microbenchmarks for random size allocations

The hybrid main memory consists of a DRAM device and a NAND flash-based SSD device: DRAM caches hot objects and provides high-performance access to in-core data, while the SSD extends capacity and reduces energy consumption. To test the benefit of the DRAM cache, the reserved DRAM size is varied from 32 MB to 768 MB for a fixed-size dataset. When 1 GB of 512-byte objects is accessed randomly from a 10 GB dataset, a larger DRAM improves performance by caching as many hot objects as possible and reducing I/O bandwidth to the SSD. Figure 6 indicates a linear relationship between DRAM size and latency, so the DRAM capacity of a system should be chosen as a tradeoff between cost and performance according to application requirements.

    Fig.6 Performance of microbenchmarks for various DRAM size

    3.3 Real-world memory-intensive applications

To demonstrate the physical micro-page management of HMM_alloc, a sorting workload with different fixed object sizes is tested. Initially, 10 GB worth of object records is spread across the whole SSD device. A small number of these records are then selected randomly and sorted in place using a quick-sort algorithm. Figure 7 presents the results at different object sizes. HMM_alloc outperforms the SSD-swap system by up to 1.3 times, because its micro-page policy makes more effective use of physical memory and caches more hot objects when the object size is smaller than 4 KB. In contrast, SSD-swap can only fetch and flush data at page granularity, which in the worst case caches only 1/16 of the hot objects for 256-byte objects. As the object size increases, the performance gap between HMM_alloc and the SSD-swap system shrinks until the average access latency approaches that of SSD-swap.

    Fig. 7 Performance comparison of quick-sorting workload

To verify performance on write-heavy workloads, a bloomfilter is adopted. It is used to test whether an element is in a set, and it consists of a long bit vector and a series of random mapping functions; each element is mapped to several locations by the mapping functions. The total size of the bloomfilter is 80 billion bits (10 GB), and most of the data resides on the flash-based SSD because of the restricted DRAM capacity. Checking and insertion of random keys are then performed using 8 hash functions. Figure 8 shows the results: HMM_alloc outperforms the SSD-swap system and reduces average access latency by up to 42%. With SSD-swap, checking one element in the huge bitmap requires fetching a whole page from the swap device, and modifying a single bit dirties far more data than the object itself, which is much smaller than a page. Thus, SSD-swap produces far more dirty physical memory pages and causes more write-back operations to the SSD. In contrast, HMM_alloc reduces latency by evicting dirty data at object granularity and coalescing multiple dirty objects into one block.

Fig.8 Performance comparison of bloomfilter insertions

To evaluate the performance benefit for existing in-memory database servers, memcached is chosen as the representative. Memcached is a high-performance, distributed memory object caching system widely deployed in data-center servers. For ease of use, the object allocator inside memcached is replaced by the proposed memory allocator. The experiments cover two situations, as shown in Fig.9: one tests random object sizes with sequential access, and the other runs random access with fixed object sizes. Compared with SSD-swap, HMM_alloc improves performance by 43%-61% in the SET phase and 37%-58% in the GET phase in Fig.9(b), because memcached caches key/value objects and the object-granularity management policy of HMM_alloc serves it well. Thus, extending memory capacity in the proposed way improves the ability of memcached to cache more database requests and reduces the number of database accesses from clients.

    Fig.9 Performance comparison of running memcached

We also use YCSB (Yahoo! Cloud Serving Benchmark) to evaluate the performance benefit for the memcached database server equipped with HMM_alloc. Workloads A and B, both with a Zipf distribution, have different read-write ratios: workload A has 50% read and 50% update operations, and workload B has 95% read and 5% update operations. Ten million records are first inserted into the database, and then 1 million database operations (read/update) are run. Compared with SSD-swap, HMM_alloc improves performance by 61.9%-73.3% for insert operations, 25.0%-29.1% for read operations, and 26.9%-32.6% for update operations in Fig.10.

    Fig.10 Performance comparison of running memcached (YCSB)

    3.4 Metadata overhead

    The metadata overhead of HMM_alloc is mainly composed of two parts.

1)HMM_alloc manages each micro-page using a 24-bit structure. When user data are all smaller than the minimum micro-page size C_min, all physical pages are divided at the minimum micro-page granularity and the number of micro-pages reaches its maximum. Under this circumstance, the metadata overhead of managing physical memory also reaches its maximum, P_meta_max:

P_meta_max = (M / C_min) × 24 bit,

(1)

where M is the capacity of physical memory.

2)HMM_alloc uses a hierarchical page table indexed by virtual memory page address to store the size, offset, and SSD location of each object. Each table entry is 8 bytes. As the number of objects N_obj increases, the total size of the page table T_meta grows linearly:

T_meta = N_obj × 8 B.

(2)

    Compared with the original memory management system, the extra metadata overhead of HMM_alloc mainly comes from the hierarchical page table. Although HMM_alloc increases metadata overhead, it improves system performance significantly through decreasing the unnecessary data exchange.

    4 Conclusions

Many applications try to keep huge amounts of data in main memory to speed up data access. However, computer systems fail to provide large main memory capacity because of the hardware cost and power consumption constraints of DRAM. Fortunately, NAND flash-based memory, with its high capacity, low price, and low power, offers an opportunity to alleviate these drawbacks. Therefore, a hybrid main memory architecture, HMM_alloc, that uses the SSD to augment DRAM is proposed. It provides interfaces for applications to access data transparently through a byte-addressable approach. DRAM is used as a hot-data cache and works at object granularity, whereas the SSD is organized as a log-structured sequence and serves as secondary storage for user data. Comprehensive experiments demonstrate that the proposed hybrid main memory system HMM_alloc significantly improves the performance of memory-intensive applications compared with a system using the SSD as a swap device.
