
    Feature mapping space and sample determination for person re-identification①

    High Technology Letters, 2022, Issue 3 (published 2022-10-22)

    HOU Wei(侯 巍), HU Zhentao, LIU Xianxing, SHI Changsen

    (School of Artificial Intelligence, Henan University, Zhengzhou 450046, P.R.China)

    Abstract Person re-identification (Re-ID) is integral to intelligent monitoring systems. However, variations in viewpoint and illumination easily cause visual ambiguities, which degrade re-identification accuracy. An approach for person re-identification based on feature mapping space and sample determination is proposed. First, a weight fusion model combining the mean and maximum values of the horizontal occurrence of local features is introduced into the mapping space to optimize the local features. Then, a Gaussian distribution model with hierarchical mean and covariance of pixel features is introduced to enhance the feature expression. Finally, considering the influence of sample size on metric learning performance, an appropriate metric learning method is selected by the sample determination method to further improve re-identification performance. Experimental results on the VIPeR, PRID450S and CUHK01 datasets demonstrate that the proposed method outperforms traditional methods.

    Key words: person re-identification (Re-ID), mapping space, feature optimization, sample determination

    0 Introduction

    The purpose of person re-identification (Re-ID) is to match the same person across different camera views[1]. Person Re-ID is a key component of video surveillance and is of great significance in security monitoring, person search and criminal investigation. Although great progress has been made in person Re-ID, many problems remain to be solved due to the existence of visual ambiguities.

    The visual ambiguities brought by changes in viewpoint and illumination manifest in person images as large changes in the scale and background of the same person, which can significantly degrade the performance of a person Re-ID system. To overcome this limitation, studies have tried to exploit local information and information discrimination[2-3]. Properly utilizing the information in person images and discriminating it better can effectively improve Re-ID performance. The related studies in person Re-ID can generally be classified into two types: feature extraction and metric learning.

    Some researchers construct features of person images based on color, texture and other appearance attributes[4-5]. The basic idea is to divide the person image into multiple overlapping or non-overlapping local image blocks and then extract color or texture features from each block separately, thereby adding spatial region information to the person image features. When calculating the similarity of two person images, the features within corresponding image blocks are compared separately, and the comparison results of the blocks are fused into the final recognition result. Nevertheless, the features constructed in this way are weak, and their representational power for person Re-ID is limited.

    On the other hand, many works use a given set of training samples to obtain a metric matrix that effectively reflects the similarity between data samples, increasing the distance between dissimilar samples while reducing the distance between similar ones[6]. However, these methods do not consider the effect of sample size on metric learning performance, making the person Re-ID results less reliable.

    Color features are robust to pose and viewpoint changes but are susceptible to illumination and occlusion. Because different persons often dress similarly, it is difficult to effectively distinguish large-scale person images using color features alone. Clothing often contains texture information, and texture features, which compare neighboring pixels, are robust to illumination, so making full use of both color and texture features is very effective for person Re-ID. However, traditional methods apply single color and texture features to the Re-ID task, which is insufficient to handle the differences between person images. In addition, the completeness and richness of feature representations also affect the results of similarity metrics, and traditional methods do not fully exploit the richness of samples in such metrics, resulting in lower overall performance.

    To address the above problems, this paper proposes a person Re-ID method based on feature mapping space and sample determination metric learning. The method combines an improved weighted local maximal occurrence (wLOMO) feature, which modifies the original LOMO[7] feature, with the Gaussian of Gaussian (GOG)[8] feature, and uses a sample determination method to select a suitable metric learning method for ranking the similarity of person images. Simulation experiments are performed on three typical datasets and compared with other methods. The main contributions are summarized as follows.

    (1) A fused feature mapping space is proposed to enhance person image features. The mean information in the horizontal direction of the person image is introduced into the LOMO feature, and the weighted mean and maximum are fused to obtain the proposed wLOMO feature. To enhance the feature expression of each person image, the wLOMO feature is combined with the GOG feature. On this basis, to simplify the feature extraction model, the feature transformation processes of wLOMO and GOG are integrated into one feature mapping space.

    (2) A sample determination method is proposed to accommodate different sample sizes. The method selects the appropriate metric learning to accomplish the similarity ranking of person images according to the sample size of the dataset. In addition, the sample size threshold is dynamically tuned according to the matching rates output by the different metric learning methods.

    (3) Extensive experiments on three publicly available datasets are designed to evaluate the proposed method against comparison methods and to demonstrate its effectiveness and applicability in person Re-ID.

    1 Related work

    The research on person Re-ID can be divided into two groups: feature extraction and metric learning. Person Re-ID based on feature extraction is usually built on basic color, texture and other appearance attributes. Ref.[2] proposed the symmetry-driven accumulation of local features (SDALF) based on the symmetric and asymmetric characteristics of the person body structure, which fused three kinds of color features in the person image to discriminate person images. Ref.[4] proposed an ensemble of localized features (ELF) method, which adopted the AdaBoost algorithm to select an appropriate feature combination from a group of color and texture features and improved the experimental accuracy. Refs[5,9,10] introduced biologically inspired features (BIF) in person images: by calculating BIF characteristics on adjacent scales, a feature called BiCov was proposed, and on this basis Gabor filters and covariance features were introduced to deal with the problems caused by illumination change and background transformation in person images. Ref.[11] proposed a feature transformation method based on zero-padding augmentation, which could align the features distributed across disjoint person images to improve the performance of the matching model. Ref.[12] constructed a feature fusion network (FFN) by combining manually extracted features and deep learning features, realizing their fusion by constantly adjusting the parameters of the deep neural network. Ref.[13] proposed a deep convolution model that highlights discriminative parts by giving the features of each body part a different weight to realize the person Re-ID task. Person Re-ID methods based on deep learning need a large number of labeled samples to train a complex model, and the training process is very time-consuming.

    Person Re-ID methods based on metric learning minimize the distance between images of the same person by learning an appropriate similarity. Ref.[3] introduced the concept of a large margin into the Mahalanobis distance and proposed a metric learning method called large margin nearest neighbor (LMNN). LMNN assumed that sample features of the same class are adjacent, so there is a large gap between feature samples of different classes; thus, when calculating distances, features of same-class samples are pulled together and samples of different classes are pushed apart. Ref.[6] proposed a local Fisher discriminant analysis (LFDA) method, which introduced a matrix based on subspace learning, allocated different scale factors to same-class and different-class pairs, and used the local invariance principle to calculate distances. Ref.[14] proposed a Mahalanobis distance metric called keep it simple and straightforward metric (KISSME) by calculating the difference between the intra-class and inter-class covariance matrices of sample features; the method does not need to compute the metric matrix through a complex iterative algorithm, so it is more efficient. Ref.[15] used a new multi-scale metric learning method based on strip descriptors, which can effectively extract the internal structure of different person images and improve the recognition rate. However, due to the non-linearity of person images across fields of view, the linear transformations produced by general metric learning methods are often of limited effect. Therefore, kernel-based metric learning methods were introduced to solve the nonlinear problem in person Re-ID[16-17]. However, the above-mentioned methods adopt a single strategy regardless of changes in sample size, without considering the impact on the accuracy of the method itself.

    2 Problem description

    The general process of person re-identification is to extract features first and then rank them by metric learning. The performance of a method depends strongly on the expressive ability of the features and the metric learning, and the existence of visual ambiguities inevitably affects this ability. To solve this problem, a new method is proposed to improve the matching rate of person re-identification.

    The framework of the proposed method is divided into three parts, as shown in Fig.1. The first part is the extraction of basic color, texture and spatial features; the second part is the mapping process of the basic features; and the third part is the metric learning method based on sample determination.

    Fig.1 The person re-identification framework

    3 Methodology

    Based on the wLOMO feature in subsection 3.1 and the sample determination method in subsection 3.2, the flowchart of the proposed method is shown in Fig.2.

    3.1 Feature mapping space

    When designing the feature mapping space, two state-of-the-art feature transformation processes are merged into one feature mapping space by cascading, which simplifies feature extraction.

    3.1.1 LOMO

    When extracting LOMO features, a 10 × 10 sliding subwindow is used to represent the local area of a person image, and an 8 × 8 × 8-bin joint color histogram in the hue, saturation, value (HSV) space together with two-scale scale-invariant local ternary pattern (SILTP) texture histograms F_SILTP are extracted from each subwindow. Then the maximum occurrence of the pixel features over all subwindows at the same horizontal position is calculated as

    where ρ(·) is the pixel feature occurrence in all subwindows.
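The horizontal max-pooling step described above can be sketched as follows; the array layout (`rows × cols × bins`, one histogram per subwindow position) and the function name are illustrative assumptions, not the paper's code:

```python
import numpy as np

def horizontal_max_occurrence(patch_hists):
    """Max-pool local histograms along each horizontal strip.

    patch_hists: array of shape (rows, cols, bins) -- one histogram per
    sliding-subwindow position (hypothetical layout for illustration).
    Returns shape (rows, bins): for every horizontal row of subwindows,
    the element-wise maximum occurrence of the histogram bins.
    """
    return patch_hists.max(axis=1)

# toy example: 2 rows x 3 columns of 4-bin histograms
h = np.arange(24, dtype=float).reshape(2, 3, 4)
pooled = horizontal_max_occurrence(h)   # shape (2, 4)
```

Taking the maximum across the horizontal axis makes the descriptor invariant to where a pattern appears along a strip, which is why LOMO is robust to viewpoint changes.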

    3.1.2 The proposed wLOMO

    Fig.2 Flowchart of the proposed method

    Considering that taking the maximum of pixel features loses some person features, and that the clothes worn by a person often consist of a small number of colors in each part, mean information can enhance the feature expression of person images when the person background changes little. Therefore, the mean information of the pixel feature distribution is introduced into the feature expression, expressed as
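The fusion equation itself is not reproduced in this extraction. As an illustrative sketch only, one plausible reading of "weighted mean and max are fused" is a weighted sum of the two occurrence statistics; the function name and exact form are assumptions:

```python
import numpy as np

def wlomo_fuse(hists, a=0.15):
    """Hypothetical weighted mean/max fusion for one horizontal strip.

    hists: (cols, bins) histograms of the subwindows in the strip.
    a: fusion weight; the paper's experiments report optimal values of
       roughly 0.1-0.4 depending on the dataset.
    The paper's actual equation may differ; this is one plausible form.
    """
    return a * hists.mean(axis=0) + (1.0 - a) * hists.max(axis=0)

strip = np.array([[1.0, 0.0], [3.0, 2.0]])
f = wlomo_fuse(strip, a=0.5)   # 0.5*[2,1] + 0.5*[3,2] = [2.5, 1.5]
```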

    3.1.3 GOG

    Considering that color features are more sensitive to illumination changes in cross-view person images, and that the loss of spatial information affects person Re-ID, this paper further extracts GOG features from the same person image to enhance the feature expression. First, the pixel-level feature f is extracted as

    f = [y, F_Mθ, F_RGB, F_HSV, F_LAB, F_RG]^T    (6)

    where F_RGB, F_HSV, F_LAB and F_RG are the color features, F_Mθ is the texture feature, and y is the spatial feature. The color features are the channel values of the person image; Mθ consists of the pixel intensity gradients in the four standard directions of the two-dimensional coordinate system; and y is the position of the pixel in the vertical direction of the image. After that, block-level features are extracted. Each person image is divided into G partially overlapping horizontal regions, and each region is divided into k × k local blocks. The pixel features in each local block s are represented by a Gaussian distribution to form a Gaussian block z_i

    where μ_s is the mean vector and Σ_s is the covariance matrix of block s.

    Then, the Gaussian block z_i is mapped to a symmetric positive definite matrix to complete block-level feature extraction. Finally, the region-level features are extracted: the Gaussian blocks are modeled as a Gaussian region by a Gaussian distribution, and the Gaussian region is in turn embedded into a symmetric positive definite matrix. These vectors are finally aggregated to form the GOG feature F_GOG of a person image.

    where z_G is the G-th horizontal region feature of a person image.
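The block-level step (a block's pixel features summarized by mean and covariance, then embedded as a symmetric positive definite matrix) can be sketched as below, assuming the standard Gaussian-to-SPD embedding [[Σ + μμᵀ, μ], [μᵀ, 1]] used in the GOG literature; the regularization constant is an assumption:

```python
import numpy as np

def gaussian_block_embedding(pixel_feats, eps=1e-3):
    """Embed a block's pixel features as an SPD matrix (GOG-style sketch).

    pixel_feats: (n_pixels, d) stack of per-pixel feature vectors f.
    Returns a (d+1, d+1) symmetric positive definite matrix built from
    the block's mean vector and regularized covariance.
    """
    m = pixel_feats.mean(axis=0)
    S = np.cov(pixel_feats, rowvar=False) + eps * np.eye(pixel_feats.shape[1])
    d = m.size
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = S + np.outer(m, m)   # covariance shifted by the mean outer product
    P[:d, d] = m
    P[d, :d] = m
    P[d, d] = 1.0
    return P

feats = np.random.default_rng(0).normal(size=(50, 3))
P = gaussian_block_embedding(feats)   # symmetric positive definite (4, 4)
```

The full GOG pipeline additionally flattens these matrices (via a matrix logarithm and half-vectorization) and repeats the same Gaussian modeling at the region level, which is omitted here.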

    3.1.4 Feature mapping space

    The proposed wLOMO describes only the maximum and mean occurrences of pixel features, whereas GOG can additionally provide covariance information.

    To comprehensively consider the maximum occurrence, mean occurrence and covariance information of pixel features, Eq.(5) and Eq.(8) are combined. That is, the wLOMO feature and GOG feature are aligned according to the person's identity, and their feature mapping processes are simplified into one feature mapping space by cascading.

    where F is the output feature of the mapping space.
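The cascading step amounts to concatenating the two descriptors into one vector. A minimal sketch follows; the per-descriptor L2 normalization is an assumption (the paper does not spell out a normalization), added so that neither descriptor dominates the metric:

```python
import numpy as np

def mapping_space(f_wlomo, f_gog):
    """Cascade the wLOMO and GOG descriptors into one feature vector."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    # normalize each descriptor separately, then concatenate
    return np.concatenate([l2(f_wlomo), l2(f_gog)])

F = mapping_space(np.ones(4), np.array([3.0, 4.0]))
# len(F) == 6; each constituent part has unit norm
```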

    3.2 Sample determination

    Cross-view quadratic discriminant analysis (XQDA)[7] and kernel cross-view quadratic discriminant analysis (k-XQDA)[18] are state-of-the-art methods whose computational costs depend on the feature dimension and the sample size, respectively. Based on these two methods, a sample determination method is proposed to combine their advantages.

    3.2.1 XQDA

    Before summarizing the XQDA method, a brief introduction is given to the distance measurement of person Re-ID. A dataset X contains C classes of persons c_i (1 ≤ i ≤ C) ∈ R^n. The classical Mahalanobis distance metric learns the distance d(x_i, z_j) between person x_i = [x_i1, x_i2, …, x_in] in camera a and person z_j = [z_j1, z_j2, …, z_jm] in camera b.
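The Mahalanobis distance underlying XQDA has the standard quadratic form d²(x, z) = (x − z)ᵀ M (x − z); a minimal sketch (the subspace projection that XQDA learns before applying M is omitted):

```python
import numpy as np

def mahalanobis_sq(x, z, M):
    """Squared Mahalanobis distance (x - z)^T M (x - z).

    M is a learned positive semi-definite metric matrix; with M = I
    this reduces to the squared Euclidean distance.
    """
    diff = x - z
    return float(diff @ M @ diff)

x = np.array([1.0, 2.0])
z = np.array([0.0, 0.0])
d = mahalanobis_sq(x, z, np.eye(2))   # squared Euclidean: 1 + 4 = 5.0
```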

    3.2.2 k-XQDA

    The XQDA metric learning method is trained directly in the original linear feature space, where the similarities and differences among samples are not well expressed. k-XQDA uses a kernel function to map the original samples into an easily distinguishable nonlinear space and then distinguishes the differences of samples there. The derivation of the k-XQDA method mainly involves the distance metric function d(x_i, z_j) in XQDA and the kernelization of the cost function J(w_k).

    In the kernel space, two kinds of expansion coefficients α and β, corresponding to persons in cameras a and b respectively, are used. The mapping matrix w_k can be expressed as
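The expression for w_k is not reproduced in this extraction. The key mechanism, however, is that every inner product in the original space is replaced by a kernel evaluation, so all quantities k-XQDA needs are built from kernel matrices whose size depends only on the sample count. A sketch with an RBF kernel (the choice of kernel and the gamma parameter are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||X_i - Y_j||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.zeros((2, 3))
K = rbf_kernel(X, X)   # identical points -> all-ones 2x2 kernel matrix
```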

    3.2.3 Sample determination

    All the intrinsic matrix dimensions of the k-XQDA method depend on the sample size, which greatly reduces the amount of calculation compared with the XQDA method, whose cost depends on the feature dimension.

    On the basis of subsections 3.2.1 and 3.2.2, and considering the different focuses of the two metric learning methods, this paper proposes a sample determination method to integrate the advantages of both and better match the actual person re-identification task: when the size of the training set S satisfies Eq.(18), the corresponding metric learning method gives a better effect on the corresponding dataset.

    where S is the sample size to be determined and s is the current sample size.
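Since Eq.(18) itself is not reproduced in this extraction, the decision rule can only be sketched: the experiments in Section 4 suggest a dataset-dependent threshold on the training-set size (about 532 on VIPeR and 436 on PRID450S), below which XQDA is preferred and above which k-XQDA is. The threshold parameter here stands in for the paper's rule:

```python
def select_metric(train_size, threshold):
    """Pick the metric-learning method from the training-set size.

    threshold is a dataset-dependent crossover point standing in for
    the paper's Eq.(18); the exact rule is not reproduced here.
    """
    return "k-XQDA" if train_size >= threshold else "XQDA"

choice = select_metric(316, 532)   # small split -> "XQDA"
```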

    4 Experiments

    To evaluate the performance of the methods fairly, all comparison methods run in the same environment. The hardware environment is an Intel Core i7-9700F CPU @ 3.00 GHz with 8 GB RAM. The operating system is Windows 10 64-bit, and the software environment is Matlab 2019b.

    4.1 Datasets and evaluation protocol

    The effectiveness of the proposed method is demonstrated on three publicly available datasets: VIPeR[19], PRID450S[20] and CUHK01[21]. The VIPeR dataset contains 632 persons with different identities; each person has two images captured from two disjoint camera views, with variations in background and illumination. The PRID450S dataset contains 450 persons with different identities; each person has two images captured by two non-overlapping cameras against a simple background. The CUHK01 dataset consists of 971 persons with a total of 3884 images captured by two non-overlapping cameras, with an average of two images per person in each camera view, and the person poses vary greatly.

    To evaluate the results of the features under different metric learning methods, the cumulative match characteristic (CMC) curve is used as the evaluation protocol.
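For reference, the CMC curve reports, for each rank k, the fraction of probe images whose true match appears among the top k ranked gallery candidates. A minimal sketch for the single-shot setting (probe i's true match assumed to be gallery i):

```python
import numpy as np

def cmc(dist_matrix):
    """Cumulative match characteristic from a probe x gallery distance matrix.

    Assumes probe i's true match is gallery i (single-shot protocol).
    Returns cmc[k] = fraction of probes matched within the top (k+1).
    """
    n = dist_matrix.shape[0]
    ranks = np.argsort(dist_matrix, axis=1)                    # best-first gallery order
    match_rank = np.argmax(ranks == np.arange(n)[:, None], axis=1)
    hist = np.bincount(match_rank, minlength=n)
    return np.cumsum(hist) / n

D = np.array([[0.1, 0.9, 0.8],
              [0.7, 0.6, 0.2],
              [0.5, 0.4, 0.3]])
curve = cmc(D)   # probes 0 and 2 match at rank 1, probe 1 at rank 2
```

Here Rank-1 is `curve[0]`, i.e. two of the three probes are matched at the top position.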

    4.2 Comparison with state-of-the-art

    All images are normalized to the same size of 128 × 48 pixels. The VIPeR, PRID450S and CUHK01 datasets are randomly divided into two equal parts, one half for training and the other for testing. The numbers of images in the training sets of the three datasets are 632, 450 and 972, respectively. To eliminate the performance difference caused by randomly dividing the training and testing sets, the process is repeated 10 times, and the average cumulative matching accuracies at ranks 1, 5, 10 and 20 over the 10 runs are reported. In addition, the corresponding CMC curves are shown.

    4.2.1 Evaluation of the mapping space

    To analyze the effectiveness of the proposed mapping space, the output features of the mapping space are fed to the XQDA metric learning method to verify the performance. Since the weight is searched iteratively, different weights are evaluated on each dataset and the one with the highest performance is retained. The Rank-1 values corresponding to the various weights also indicate that the optimal weight is not constant and changes between datasets. This paper selects three different datasets and compares the results with state-of-the-art approaches.

    VIPeR dataset: to analyze the influence of the weight a on the performance of wLOMO, the Rank-1 values under different weights on the VIPeR dataset are shown in Fig.3. It can be seen that introducing mean information has a certain impact on the performance. When a is in the range 0.1-0.2, the performance is optimal; as a increases further, the performance declines.

    The compared methods and their matching rates on VIPeR are shown in Table 1 and Fig.4. As reported in Table 1, the Rank-1 results of LOMO, LSSCDL, DNS and GOG are better, all exceeding 40%. The proposed approach achieves 50.63% at Rank-1, which is 2.37% better than GOG.

    Fig.3 Rank-1 matching rates

    Table 1 Comparison of Rank results with other methods on VIPeR dataset

    Fig.4 CMC curves

    PRID450S dataset: Fig.5 shows the performance comparison of wLOMO under different weight values. When the weight value is 0.3-0.4, the performance is optimal.

    The comparison methods and their matching rates on the PRID450S dataset are shown in Table 2 and Fig.6. Unlike the person images in the VIPeR and CUHK01 datasets, the background of person images in PRID450S is relatively simple, so the background interference for all methods is small and the final matching results are generally better. For the proposed method with mean information, the Rank-1 matching rate is 71.42%, outperforming the second-best method, GOG, by 3.6%.

    Fig.5 Rank-1 matching rates

    Table 2 Comparison of Rank results with other methods on PRID450S dataset

    Fig.6 CMC curves

    CUHK01 dataset: the performance of wLOMO declines continuously as a increases, because the person background is more complex than in the first two datasets (Fig.7), and introducing mean information leads to performance degradation. Thus, the combination with GOG can strengthen the feature expression and weaken the error caused by the mean information.

    Fig.7 Rank-1 matching rates

    The compared methods and their matching rates on the CUHK01 dataset are shown in Table 3 and Fig.8. Each person in the CUHK01 dataset has four images: the first two contain a front/back view, the last two contain a side view, and the overall difference between them is small. Therefore, in the experiment, one image is randomly selected from the front/back-view images of each person, and one from the side-view images. The training sets contain 486 pairs of person images, and the test sets contain 485 pairs. As listed in Table 3, the performance of the proposed method is better than the other methods, outperforming the second-best method by 5.65%.

    Table 3 Comparison of Rank results with other methods on CUHK01 dataset

    Fig.8 CMC curves

    4.2.2 Evaluation of the sample determination

    The proposed method achieved state-of-the-art performance in the above experiment, where the output features of the mapping space were input to XQDA. Then, to verify the effectiveness of the proposed sample determination method, the output features of the mapping space are sent to XQDA and k-XQDA respectively to compare their performance. The experimental results are shown in Table 4, Table 5 and Table 6, in which the size of samples is the number of training samples.

    VIPeR dataset: as shown in Table 4, as the size of the training set gradually increases, the Rank-1 of both metric learning methods also increases on the VIPeR dataset. According to the Rank-1 results, the matching rate of XQDA remains greater than that of k-XQDA even as the training set grows. However, the increase for XQDA is 6.87% and 15.3%, while the increase for k-XQDA is 7.97% and 16.93%; the growth of k-XQDA is greater than that of XQDA. Thus, when the training set grows to a certain size, k-XQDA can show better accuracy than XQDA.

    Table 4 Ranks matching rates versus different size of samples on VIPeR dataset

    Table 5 Ranks matching rates versus different size of samples on PRID450S dataset

    Table 6 Ranks matching rates versus different size of samples on CUHK01 dataset

    PRID450S dataset: when the sample size of the training set increases from 225 to 300 and 436, the Rank-1 of XQDA is better than that of k-XQDA, as reported in Table 5. In terms of the extent of the Rank-1 increase, XQDA increases by 6.38% and 16.32%, while k-XQDA increases by 8.06% and 20.94%. According to the experimental results on PRID450S, when the training set grows to a certain size, the Rank-1 of k-XQDA can exceed that of XQDA.

    CUHK01 dataset: the output features of the mapping space are evaluated with XQDA and k-XQDA respectively on the CUHK01 dataset. When the training set size is 486, the Rank-1 of k-XQDA exceeds that of XQDA by 1.8%, as reported in Table 6.

    In summary, on the VIPeR dataset, when the training set size reaches about 532 samples, k-XQDA performs better than XQDA (Table 4) and obtains better results; when the training set size is less than 532, XQDA performs better. On the PRID450S dataset, when the training set size is larger than 436, k-XQDA performs better than XQDA and gives better results; when the training set size is less than 436, XQDA performs better (Table 5). According to the results in Table 6, when person Re-ID is conducted on the CUHK01 dataset with a training set size of about 486, k-XQDA obtains good results.

    5 Conclusion

    Based on multi-feature extraction, an effective feature mapping space and a sample determination method are proposed to solve the problem of visual ambiguities in person re-identification. The feature mapping space simplifies the complex feature extraction process: it takes the basic features of person images as input and outputs the mapped features. The mapped features are discriminated by the selected metric learning method to complete the similarity ranking. Compared with existing related methods, the proposed method improves the matching rate effectively. In the future, we plan to further study the determination method for metric learning and optimize the performance of the algorithm.
