
Feature mapping space and sample determination for person re-identification

High Technology Letters, 2022, No. 3

    HOU Wei(侯 巍), HU Zhentao, LIU Xianxing, SHI Changsen

    (School of Artificial Intelligence, Henan University, Zhengzhou 450046, P.R.China)

Abstract Person re-identification (Re-ID) is integral to intelligent monitoring systems. However, variability in viewing angle and illumination easily causes visual ambiguities, which degrade re-identification accuracy. An approach for person re-identification based on feature mapping space and sample determination is proposed. First, a weight fusion model, combining the mean and maximum values of the horizontal occurrence of local features, is introduced into the mapping space to optimize the local features. Then, a Gaussian distribution model with hierarchical mean and covariance of pixel features is introduced to enhance the feature expression. Finally, considering the influence of sample size on metric learning performance, an appropriate metric learning method is selected by the sample determination method to further improve re-identification performance. Experimental results on the VIPeR, PRID450S and CUHK01 datasets demonstrate that the proposed method outperforms traditional methods.

Key words: person re-identification (Re-ID), mapping space, feature optimization, sample determination

    0 Introduction

The purpose of person re-identification (Re-ID) is to match the same person across different camera views[1]. Person Re-ID is a key component of video surveillance and is of great significance in security monitoring, person search and criminal investigation. Although great progress has been made in person Re-ID, many problems remain to be solved due to the existence of visual ambiguities.

The visual ambiguities brought by changes in viewpoint and illumination are manifested in person images as large changes in the scale and background of the same person, which can significantly degrade the performance of a person Re-ID system. To overcome this limitation, studies have tried to use local information and information discrimination[2-3]. Properly utilizing the information in person images and discriminating it better can effectively improve the performance of person Re-ID. The related studies in person Re-ID can be generally classified into two types: feature extraction and metric learning.

Some researchers construct features of person images based on color, texture and other appearance attributes[4-5]. The basic idea is that the person image is divided into multiple overlapping or non-overlapping local image blocks, and color or texture features are extracted from each of them separately, thus adding spatial region information into the person image features. When calculating the similarity of two person images, the features within the corresponding image blocks are compared separately, and the comparison results of the image blocks are then fused as the final recognition result. Nevertheless, the features constructed in this way are weak, which limits the feature representation available for person Re-ID.

On the other hand, many works use a given set of training samples to obtain a metric matrix that effectively reflects the similarity between data samples, increasing the distance between dissimilar samples while reducing the distance between similar samples[6]. However, these methods do not consider the effect of sample size on metric learning performance, making the person Re-ID results less reliable.

Color features are robust to pose and viewpoint changes, but are susceptible to illumination and occlusion. It is difficult to effectively distinguish large-scale person images using only color features because of similar dressing. Clothing often contains texture information, and texture features involve comparisons of neighboring pixels and are robust to illumination, so making full use of both color and texture features is very effective for person Re-ID. However, traditional methods apply single color and texture features to the person Re-ID task, which is insufficient to handle the differences between different person images. In addition, the completeness and richness of feature representations also affect the results of similarity metrics, and traditional methods do not fully utilize the richness of the samples in such metrics, resulting in lower overall performance.

To address the above problems, this paper proposes a person Re-ID method based on feature mapping space and sample determination metric learning. The method combines an improved weighted local maximal occurrence (wLOMO) feature, which modifies the original LOMO[7] feature, with the Gaussian of Gaussian (GOG)[8] feature, and uses a sample determination method to select a suitable metric learning method to rank the similarity of person images. The method is evaluated through simulation experiments on three typical datasets and compared with other methods. The main contributions are summarized as follows.

(1) A fused feature mapping space is proposed to enhance person image features. The mean information along the horizontal direction of the person image is introduced into the LOMO feature, and the weighted mean and max are fused to obtain the proposed wLOMO feature. To enhance the feature expression of each person image, the wLOMO feature is combined with the GOG feature. On this basis, in order to simplify the feature extraction model, the feature transformation processes of wLOMO and GOG are integrated into one feature mapping space.

(2) A sample determination method is proposed to accommodate different sample sizes. On a given dataset, the sample determination method selects the appropriate metric learning method to accomplish the similarity ranking of person images according to the sample size. In addition, the selected sample size is dynamically tuned according to the matching rates output by the different metric learning methods.

    (3) Extended experiments on three publicly available datasets are designed to evaluate the performance of the proposed method and the comparison method, and to demonstrate the effectiveness and applicability of the proposed method in person Re-ID.

    1 Related work

The research on person Re-ID can be divided into two groups: feature extraction and metric learning. Person Re-ID based on feature extraction usually builds on basic color, texture and other appearance attributes. Ref.[2] proposed the symmetry-driven accumulation of local features (SDALF) based on the symmetric and asymmetric characteristics of the person body structure, which fused three kinds of color feature in the person image to complete the discrimination of person images. Ref.[4] proposed an ensemble of localized features (ELF) method, which adopted the AdaBoost algorithm to select an appropriate feature combination from a group of color and texture features and improved the experimental accuracy. Refs[5,9,10] introduced biologically inspired features (BIF) into person images. By calculating the characteristics of BIF on adjacent scales, a feature called BiCov was proposed; on this basis, Gabor filters and covariance features were introduced to deal with the problems caused by illumination change and background transformation in person images. Ref.[11] proposed a feature transformation method based on zero-padding augmentation, which could align the features distributed across disjoint person images to improve the performance of the matching model. Ref.[12] constructed the feature fusion network (FFN) by combining manually extracted features and deep learning features, realizing their fusion by constantly adjusting the parameters of the deep neural network. Ref.[13] proposed a deep convolution model, which highlights the discriminative parts by giving the features of each body part a different weight to realize the person Re-ID task. Person Re-ID methods based on deep learning need a large number of labeled samples to train a complex model, and the training process is very time-consuming.

Person Re-ID methods based on metric learning minimize the distance between similar persons by learning an appropriate similarity. Ref.[3] introduced the concept of a large margin into the Mahalanobis distance and proposed a metric learning method called large margin nearest neighbor (LMNN). LMNN assumed that sample features of the same class were adjacent, so there was a big gap between feature samples of different classes; thus, when calculating the distance, features of the same class were pulled together and samples of different classes were pushed apart. Ref.[6] proposed a local Fisher discriminant analysis (LFDA) method, which introduced a matrix based on subspace learning, allocated different scale factors to same-class and different-class pairs, and used the local invariance principle to calculate the distance. Ref.[14] proposed a Mahalanobis distance metric called keep it simple and straightforward metric (KISSME) by calculating the difference between the intra-class and inter-class covariance matrices of sample features; the method did not need to compute the metric matrix through a complex iterative algorithm, so it was more efficient. Ref.[15] used a new multi-scale metric learning method based on strip descriptors for person Re-ID, which can effectively extract the internal structure of different person images and improve the recognition rate. However, due to the non-linearity of person images across fields of view, the linear transformations produced by general metric learning methods are often of limited effect. Therefore, kernel-based metric learning methods were introduced to solve the nonlinear problem in person Re-ID[16-17]. However, the above-mentioned methods adopt a single strategy to deal with changes of sample size, without considering the impact on the accuracy of the method itself.

    2 Problem description

The general process of person re-identification is to extract features first and then rank them by metric learning. The performance of a method depends strongly on the expressive ability of the features and the metric learning, and the existence of visual ambiguities inevitably affects this ability. To solve this problem, a new method is proposed to improve the matching rate of person re-identification.

The framework of the proposed method, shown in Fig.1, is divided into three parts. The first part is the extraction of basic color, texture and spatial features, the second part is the mapping process of the basic features, and the third part is the metric learning method based on sample determination.

    Fig.1 The person re-identification framework

    3 Methodology

    Based on the wLOMO in subsection 3.1 and the proposed sample determination in subsection 3.2, the proposed method flowchart is shown in Fig.2.

    3.1 Feature mapping space

When designing the feature mapping space, two state-of-the-art feature transformation processes are merged into one feature mapping space by cascading, which simplifies the feature extraction.

    3.1.1 LOMO

When extracting LOMO features, a 10 × 10 sliding subwindow is used to represent the local area of a person image, and an 8 × 8 × 8-bin joint color histogram in the hue, saturation, value (HSV) space and the scale invariant local ternary pattern (SILTP) texture histogram F_SILTP at two scales are extracted from each subwindow. Then the maximal occurrence of pixel features over all subwindows at the same horizontal position is calculated as

F_max = max(ρ(·))

where ρ(·) is the pixel feature occurrence in all subwindows at the same horizontal position.
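The maximal-occurrence step can be sketched as follows; this is not the authors' code, just a minimal NumPy illustration assuming the sub-window histograms arrive as a row-major array (the window size and histogram contents are as described above):

```python
import numpy as np

def lomo_max_pool(hists, rows, cols):
    """Max-pool sub-window histograms along each horizontal position.

    hists: (rows*cols, d) array, one d-dimensional histogram (HSV bins
    plus SILTP) per sliding sub-window, ordered row-major.
    Returns a (rows, d) array: the maximal occurrence rho(.) over all
    sub-windows sharing the same horizontal position.
    """
    h = np.asarray(hists, dtype=float).reshape(rows, cols, -1)
    return h.max(axis=1)  # max over the sub-windows in each row
```

Max-pooling along the horizontal direction gives the feature some invariance to viewpoint changes, since a person may shift sideways between camera views.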

    3.1.2 The proposed wLOMO

    Fig.2 Flowchart of the proposed method

Considering that maximizing pixel features loses some person features, and that the clothes worn on each part of a person are often composed of a small number of colors, mean information can enhance the feature expression of person images when the background changes little. Therefore, the mean of the pixel feature distribution is introduced into the feature expression, expressed as

F_mean = mean(ρ(·))

and the wLOMO feature is obtained by weighted fusion of the two:

F_wLOMO = a · F_mean + (1 - a) · F_max (5)

where a is the fusion weight.
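Since Eq.(5) is not reproduced in this text, the fusion can only be sketched; the linear blend and the default weight below are assumptions (a in roughly 0.1 - 0.4 is the range the experiments in Section 4 find effective):

```python
import numpy as np

def wlomo(hists, rows, cols, a=0.15):
    """Hedged sketch of the wLOMO descriptor: blend the mean and the
    maximal occurrence of sub-window histograms per horizontal
    position with weight a (an assumed linear fusion)."""
    h = np.asarray(hists, dtype=float).reshape(rows, cols, -1)
    return a * h.mean(axis=1) + (1.0 - a) * h.max(axis=1)
```

With a = 0, the sketch degenerates to the original LOMO max pooling; increasing a mixes in more mean information.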

    3.1.3 GOG

Considering that color features are sensitive to illumination changes in cross-view person images, and the impact of spatial information loss on person Re-ID, this paper further extracts GOG features from the same person image to enhance the feature expression. Firstly, the pixel-level feature f is extracted as

f = [y, F_Mθ, F_RGB, F_HSV, F_LAB, F_RG]^T (6)

where F_RGB, F_HSV, F_LAB, F_RG are the color features, F_Mθ is the texture feature, and y is the spatial feature. The color features are the channel values of the person image; Mθ consists of the pixel intensity gradients along the four standard directions of the two-dimensional coordinate system; y is the vertical position of the pixel in the image. After that, block-level features are extracted. Each person image is divided into G partially overlapped horizontal regions, and each region is divided into k × k local blocks. The pixel features in each local block s are represented by a Gaussian distribution to form a Gaussian block z_i

where μ_s is the mean vector and Σ_s is the covariance matrix of block s.
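A Gaussian block is just the sample mean and covariance of the pixel features inside a local block; a minimal sketch (the SPD-matrix embedding that follows in the text is omitted here):

```python
import numpy as np

def gaussian_block(pixel_features):
    """Summarize the pixel features of one local block by a Gaussian:
    returns the mean vector mu_s and covariance matrix Sigma_s."""
    X = np.asarray(pixel_features, dtype=float)  # (n_pixels, d)
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)              # (d, d), unbiased estimate
    return mu, sigma
```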

Then, the Gaussian block z_i is mapped to a symmetric positive definite matrix to complete the block-level feature extraction. Finally, the region-level features are extracted: the Gaussian blocks in a region are modeled as a Gaussian region by a Gaussian distribution, and the Gaussian region is in turn embedded into a symmetric positive definite matrix. These vectors are finally aggregated to form the GOG feature F_GOG of a person image.

where z_G is the G-th horizontal region feature of a person image.

    3.1.4 Feature mapping space

The proposed wLOMO describes only the maximum and mean occurrence of pixel features; GOG additionally provides covariance information.

    To comprehensively consider the maximum occurrence, mean occurrence and covariance information of pixel features, Eq.(5) and Eq.(8) are combined. It means that wLOMO feature and GOG feature are aligned according to the person’s identity, and their feature mapping process is simplified to one feature mapping space by cascading.

where F is the feature output by the mapping space.
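The cascading itself reduces to concatenating the two descriptors into one vector F; the per-descriptor L2 normalization below is an assumption added so that neither feature dominates the subsequent metric:

```python
import numpy as np

def mapping_space(f_wlomo, f_gog):
    """Cascade wLOMO and GOG into one feature F (hedged sketch):
    normalize each descriptor, then concatenate, so max/mean occurrence
    and covariance information end up in a single vector."""
    def l2(v):
        v = np.asarray(v, dtype=float).ravel()
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(f_wlomo), l2(f_gog)])
```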

    3.2 Sample determination

Cross-view quadratic discriminant analysis (XQDA)[7] and kernel cross-view quadratic discriminant analysis (k-XQDA)[18] are state-of-the-art methods whose costs depend on the feature dimension and on the sample size, respectively. Based on these two methods, a sample determination method is proposed to combine their advantages.

    3.2.1 XQDA

Before summarizing the XQDA method, a brief introduction is given to the distance measurement of person Re-ID. A dataset X contains C classes of persons c_i (1 ≤ i ≤ C) ∈ R^n. The classical Mahalanobis distance metric learns the distance d(x_i, z_j) between person x_i = [x_i1, x_i2, …, x_in] in camera a and person z_j = [z_j1, z_j2, …, z_jm] in camera b.
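The Mahalanobis distance referred to here has the standard form d(x, z) = sqrt((x - z)^T M (x - z)) for a learned positive semi-definite matrix M; a small sketch:

```python
import numpy as np

def mahalanobis(x, z, M):
    """Mahalanobis distance with metric matrix M; M = I recovers the
    Euclidean distance, and metric learning replaces M with a matrix
    that pulls same-identity samples together."""
    d = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return float(np.sqrt(d @ M @ d))
```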

    3.2.2 k-XQDA

The XQDA metric learning method is trained directly in the original linear feature space, where the similarity and difference among samples are not well expressed. k-XQDA uses a kernel function to map the original samples into an easily distinguishable nonlinear space, and then distinguishes the differences of the samples there. The derivation of the k-XQDA method mainly involves the distance metric function d(x_i, z_j) in XQDA and the kernelization of the cost function J(w_k).
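The kernelization replaces inner products with kernel evaluations; the excerpt does not specify k-XQDA's kernel, so the RBF kernel below (with an assumed bandwidth gamma) is just one common instance of such a nonlinear mapping:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2): an implicit
    nonlinear mapping of the samples, as used by kernelized metric
    learning (gamma is an assumed free parameter)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)
```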

In the kernel space, two kinds of expansion coefficients α and β, corresponding to the persons in cameras a and b respectively, are used. The mapping matrix w_k can be expressed as

    3.2.3 Sample determination

All the intrinsic matrix dimensions of the k-XQDA method depend on the sample size, which greatly reduces the amount of calculation compared with the XQDA method, whose matrices depend on the feature dimension.

On the basis of subsections 3.2.1 and 3.2.2, and considering the different focuses of the two metric learning methods, a sample determination method is proposed to integrate their advantages and better match the actual person re-identification task: when the size of the training set S satisfies Eq.(18), the corresponding metric learning method yields better results on the corresponding dataset.

where S is the sample size to be determined and s is the current sample size.
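Since Eq.(18) itself is not reproduced here, the rule can only be sketched: compare the current training-set size against a dataset-specific threshold and pick the metric learning method accordingly (the thresholds below come from the experiments in Section 4):

```python
def choose_metric(train_size, threshold):
    """Hedged sketch of sample determination: below the threshold the
    feature-dimension-bound XQDA wins; at or above it, the sample-size-
    bound k-XQDA catches up and overtakes it."""
    return "XQDA" if train_size < threshold else "k-XQDA"

# Thresholds observed experimentally in Section 4:
# VIPeR ~532 samples, PRID450S ~436 samples.
```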

    4 Experiments

    To evaluate the performance of the method fairly,all the comparison methods run in the same environment. The hardware environment is Intel Core i7-9700F CPU@3.00 GHz, 8 GB RAM. The operating system is Windows 10 64 bit, and the software environment is Matlab 2019b.

    4.1 Datasets and evaluation protocol

The effectiveness of the proposed method is demonstrated on three publicly available datasets: VIPeR[19], PRID450S[20] and CUHK01[21]. The VIPeR dataset contains 632 persons with different identities; each person involves two images captured from two disjoint camera views, including variations in background and illumination. The PRID450S dataset contains 450 persons with different identities; each person covers two images captured by two non-overlapping cameras with a single background. The CUHK01 dataset consists of 971 persons with a total of 3884 shots captured by two non-overlapping cameras, with two images per person in each camera view, and the person poses vary greatly.

To evaluate the results of the features under different metric learning methods, the cumulative match characteristic (CMC) curve is used as the evaluation protocol.
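A CMC curve reports, for each rank k, the fraction of probe images whose true match appears among the k nearest gallery images; a minimal sketch assuming a square distance matrix whose diagonal holds the true matches:

```python
import numpy as np

def cmc(dist, top_k=20):
    """Cumulative match characteristic. dist[i, j] compares probe i with
    gallery j; the true match of probe i is gallery i. Returns the
    matching rate at ranks 1..top_k."""
    dist = np.asarray(dist, dtype=float)
    n = len(dist)
    true_d = dist[np.arange(n), np.arange(n)][:, None]
    ranks = (dist < true_d).sum(axis=1)  # galleries beating the true match
    return np.array([(ranks < k).mean() for k in range(1, top_k + 1)])
```

Rank-1 in the tables below is the first entry of this curve.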

    4.2 Comparison with state-of-the-art

All images are normalized to the same size of 128 × 48 pixels. The VIPeR, PRID450S and CUHK01 datasets are each randomly divided into two equal parts, one half for training and the other for testing; the resulting training sets contain 632, 450 and 972 images, respectively. To eliminate the performance difference caused by randomly dividing the training and testing sets, the process is repeated 10 times, and the average cumulative matching accuracies at rank 1, 5, 10 and 20 over the 10 runs are reported. In addition, the corresponding CMC curves are shown.

    4.2.1 Evaluation of the mapping space

To analyze the effectiveness of the proposed mapping space, the output features of the mapping space are sent to the XQDA metric learning method to verify the performance. Since the method is iterative, different weights are tried in a loop on each dataset and the one with the highest performance is retained. The Rank-1 values corresponding to the various weights show that the optimal weight is not constant and changes between datasets. Three different datasets are selected and the results are compared with state-of-the-art approaches.

VIPeR dataset: to analyze the influence of the weight a on the performance of wLOMO, the Rank-1 rates under different weights on the VIPeR dataset are shown in Fig.3. It can be seen that the introduction of mean information has a certain impact on the performance. When a is in the range of 0.1 - 0.2, the performance of the method is optimal; as a increases further, the performance declines.

The compared methods and their matching rates on VIPeR are shown in Table 1 and Fig.4. As reported in Table 1, the Rank-1 rates of LOMO, LSSCDL, DNS and GOG are better, all exceeding 40%. The proposed approach achieves 50.63% at Rank-1, which is 2.37% better than GOG.

    Fig.3 Rank-1 matching rates

    Table 1 Comparison of Rank results with other methods on VIPeR dataset

    Fig.4 CMC curves

PRID450S dataset: Fig.5 shows the performance comparison of wLOMO under different weight values. When the weight is in the range of 0.3 - 0.4, the performance of the method is optimal.

The comparison methods and their matching rate results on the PRID450S dataset are shown in Table 2 and Fig.6. Different from the person images in the VIPeR and CUHK01 datasets, the background of person images in the PRID450S dataset is relatively simple; the background interference to all methods is small, so the final matching results are generally better. For the proposed method with mean information, the matching rate at Rank-1 is 71.42%, outperforming the second-best method GOG by 3.6%.

    Fig.5 Rank-1 matching rates

    Table 2 Comparison of Rank results with other methods on PRID450S dataset

    Fig.6 CMC curves

CUHK01 dataset: the performance of wLOMO keeps declining as a increases, because the person background information is more complex than in the first two datasets (Fig.7), and the introduction of mean information leads to performance degradation. Thus, the combination with GOG can strengthen the feature expression and weaken the error caused by the mean information.

    Fig.7 Rank-1 matching rates

The compared methods and their matching rates on the CUHK01 dataset are shown in Table 3 and Fig.8. Each person in the CUHK01 dataset contains four images: the first two contain one front/back view, the last two contain one side view, and the overall difference within each pair is small. Therefore, in the experiment, one image is randomly selected from the front/back-view images of each person and one from the side-view images. The training sets contain 486 pairs of person images, and the test sets contain 485 pairs. As listed in Table 3, the performance of the proposed method is better than that of the other methods, outperforming the second-best method by 5.65%.

    Table 3 Comparison of Rank results with other methods on CUHK01 dataset

    Fig.8 CMC curves

    4.2.2 Evaluation of the sample determination

The proposed method achieved state-of-the-art performance in the above experiments, in which the output features of the mapping space were input into XQDA. Then, to verify the effectiveness of the proposed sample determination method, the output features of the mapping space are sent to XQDA and k-XQDA respectively to compare the performance of the two. The experimental results are shown in Table 4, Table 5 and Table 6, in which the size of samples is the number of training samples.

VIPeR dataset: in Table 4, as the size of the training set is gradually increased, the Rank-1 of both metric learning methods also increases in the experiment on the VIPeR dataset. According to Rank-1, the matching rate of XQDA remains greater than that of k-XQDA even as the training set grows. However, the increases of XQDA are 6.87% and 15.3%, while those of k-XQDA are 7.97% and 16.93%; the increase of k-XQDA is greater than that of XQDA. Thus, when the training set grows to a certain size, k-XQDA can show better accuracy than XQDA.

    Table 4 Ranks matching rates versus different size of samples on VIPeR dataset

    Table 5 Ranks matching rates versus different size of samples on PRID450S dataset

    Table 6 Ranks matching rates versus different size of samples on CUHK01 dataset

PRID450S dataset: when the number of samples in the training set increases from 225 to 300 and 436, the Rank-1 of XQDA is better than that of k-XQDA, as reported in Table 5. In terms of the extent of the Rank-1 increase, XQDA increases by 6.38% and 16.32%, while k-XQDA increases by 8.06% and 20.94%. According to the experimental results on the PRID450S dataset, when the training set grows to a certain size, the Rank-1 of k-XQDA can exceed that of XQDA.

CUHK01 dataset: the output features of the mapping space are evaluated with XQDA and k-XQDA respectively on the CUHK01 dataset. When the size of the training set samples is 486, the Rank-1 of k-XQDA exceeds that of XQDA by 1.8%, as reported in Table 6.

In summary, on the VIPeR dataset (Table 4), when the training set size reaches about 532, k-XQDA performs better than XQDA and obtains the better results; when the training set is smaller than 532, XQDA performs better. On the PRID450S dataset (Table 5), when the training set size exceeds 436, k-XQDA performs better and should be used; below 436, XQDA performs better. According to the results in Table 6, when person Re-ID is conducted on the CUHK01 dataset with a training set size of about 486, k-XQDA obtains good results.

    5 Conclusion

Based on multi-feature extraction, an effective feature mapping space and a sample determination method are proposed to solve the problem of visual ambiguities in person re-identification. The feature mapping space simplifies the process of complex feature extraction: it takes the basic features of person images as input and outputs the mapped features. The mapped features are discriminated by the selected metric learning method to complete the similarity ranking. Compared with existing related methods, the proposed method improves the matching rate effectively. In the future, the determination method for metric learning will be studied further and the performance of the algorithm optimized.
