
    DeepFake Videos Detection Based on Texture Features

Computers, Materials & Continua, 2021, Issue 7

Bozhi Xu, Jiarui Liu, Jifan Liang, Wei Lu,* and Yue Zhang

1 School of Computer Science and Engineering, Guangdong Province Key Laboratory of Information Security Technology, Ministry of Education Key Laboratory of Machine Intelligence and Advanced Computing, Sun Yat-sen University, Guangzhou, 510006, China

2 Department of Computer Science, University of Massachusetts Lowell, Lowell, MA 01854, USA

Abstract: In recent years, with the rapid development of deep learning technologies, some neural network models have been applied to generate fake media. DeepFakes, a deep learning based forgery technology, can easily tamper with faces and generate fake videos that are difficult for human eyes to distinguish. The spread of face manipulation videos can easily disseminate false information. Therefore, it is important to develop effective detection methods to verify the authenticity of videos. Because current forgery technologies still struggle to generate all facial details, and because blending operations are used in the forgery process, the texture details of fake faces are insufficient. Therefore, in this paper, a new method is proposed to detect DeepFake videos. Firstly, texture features are constructed based on the gradient domain, standard deviation, gray level co-occurrence matrix and wavelet transform of the face region. Then, the features are processed by a feature selection method to form a discriminative feature vector, which is finally fed to an SVM for classification at the frame level. Experimental results on mainstream DeepFake datasets demonstrate that the proposed method achieves ideal performance, proving its effectiveness for DeepFake video detection.

    Keywords: DeepFake; video tampering; tampering detection; texture feature

    1 Introduction

In recent years, with the rapid development and widespread application of deep learning [1,2], face manipulation technologies have made great progress. As a representative video manipulation technology, DeepFake generates fake facial videos based on deep neural networks such as auto-encoders and Generative Adversarial Networks (GANs). It is easy to replace a target face with a fake face and maliciously tamper with video contents using face manipulation methods [3,4]. Tampering with faces violates personal portrait rights and may cause social disputes. In addition, unlike images, videos can spread a larger amount of information, which brings more fake information after forgery. Therefore, it is of great importance to develop detection methods for DeepFake videos.

In the past two decades, many digital image forensics methods have been developed [5–7]. At present, the forensics of face manipulation videos has attracted a lot of research interest, and many methods have been proposed for DeepFake video detection. Li et al. [8] review the existing generation technologies of DeepFake videos and several independent open-source implementations. They also introduce the deep learning based generation process as well as the corresponding detection technologies, together with the mainstream DeepFake video datasets. At present, deep learning is widely used to detect DeepFake videos, constructing deep neural networks that examine the frame sequence after framing. Based on the observation that some inconsistent choices, such as illuminants, exist between scenes in fake frames, a detection method using a Recurrent Neural Network (RNN) is proposed in [9]. Firstly, the video is divided into frames and a Convolutional Neural Network (CNN) is used to extract features of the face. Then the features are sent into a Long Short-Term Memory (LSTM) network to detect the time-series relationship between frames for discrimination. Based on the detection of eye blinking in videos, a set of features over the eye areas is extracted by a CNN and fed into an LSTM network to identify fake videos [10]. However, when people's eyes are closed, this method cannot handle the situation well. In addition, when enough eye blinking images are deliberately added to the training set, the method may be misled. Li et al. [11] use two classical deep neural networks, VGG and Residual Network (ResNet), to capture the artifacts caused by the affine transformation, which can efficiently detect fake videos. Instead of using existing neural networks, a new network structure, the Meso-4 network, is constructed in [12] to detect fake videos. At the same time, the inception module is used to construct the MesoInception-4 network to extract features at the mesoscopic level. To simultaneously solve the problems of tampered video detection and tampered area localization, a multi-task learning method built on a convolutional neural network is proposed in [13]. Different from deep learning based detection, many methods [14–16] use handcrafted features for classification. Yang et al. [14] estimate the 3D head pose based on the inconsistency of head posture in fake videos and use a Support Vector Machine (SVM) for classification. However, the method cannot achieve excellent performance. Agarwal et al. [15] also track facial and head movements, but extract facial action units as features to detect DeepFake videos. Jung et al. [16] analyze changes in the pattern of human eye blinking. Based on the period, repetition number and elapsed eye blink time, features are extracted to determine whether a video is real or fake. In addition, the idea of combining hand-crafted features with deep learning has been adopted [17,18]. In [17], the deep learning branch uses the GoogLeNet network, while some steganalysis features are extracted as hand-crafted features and sent to an SVM. Finally, the classification probabilities obtained by the CNN and the SVM are combined into a score for judgment. In [18], based on the observation that some simple visual artifacts, such as differences in the color of the left and right eyes, exist in fake videos, relevant features are extracted, and then a multilayer feedforward neural network and a logistic regression model are used as the classifiers.

In order to improve the interpretability of the model and address the problem that training samples may be insufficient, and based on the observation that some DeepFake videos lack facial texture details, a new method using traditional machine learning technologies is proposed to detect DeepFake videos. Firstly, texture features are extracted from the face region of every frame using the image gradient, standard deviation, gray-level co-occurrence matrix, and wavelet transform, which represent facial texture details. Secondly, based on the texture features, an SVM is employed to detect DeepFake videos.

The remaining parts of this paper are organized as follows. Section 2 mainly introduces the generation technologies of DeepFake videos and the corresponding defects of fake videos. In Section 3, the proposed method for DeepFake video detection is discussed, including the texture feature extraction method and the feature selection method. Section 4 presents the experimental results and analysis, and finally the conclusion is given in Section 5.

    2 DeepFake Videos Generation Technology

The generation technologies of DeepFake videos use neural networks to tamper with and replace the face in each frame of the video, and then recompress the frames to generate a fake video. A generation technology based on the auto-encoder is introduced in [8,12]. The model consists of an encoder network and a decoder network. The encoder network takes a facial frame as input, captures the facial features and converts them into a vector. The decoder network reconstructs the vector as a fake face, which is finally fused into the background to construct a fake frame. In the training phase, two sets of face frames from different persons are used to train two pairs of encoder and decoder networks, in which the two encoder networks share weights while the two decoder networks are trained separately. After the parallel training is completed, when one person's frame is fed into the auto-encoding model of another person, the encoder captures the facial structure, lighting and other shared features of the face, and the decoder reconstructs the face details and some identity-specific attributes to generate a fake frame. After performing the same processing on each frame, a fake video is generated.
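The shared-encoder, dual-decoder training described above can be made concrete with a short sketch. The following is a minimal PyTorch illustration of that scheme under stated assumptions: the 64 × 64 crop size, layer shapes and latent dimension are illustrative choices, not the architecture of any particular DeepFake implementation.

```python
# Minimal PyTorch sketch of the shared-encoder / dual-decoder DeepFake
# generation scheme. Layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 RGB face crop to a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the shared latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity (A and B).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training: each decoder learns to reconstruct its own identity.
face_a = torch.rand(4, 3, 64, 64)        # batch of identity-A crops
recon_a = decoder_a(encoder(face_a))

# Face swap at inference: encode identity A, decode with B's decoder.
fake_b_from_a = decoder_b(encoder(face_a))
```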

At present, some generation models of DeepFake videos cannot generate all the textures of the face, so some fake faces are relatively rough. For example, subtle wrinkles of the face cannot be generated perfectly. At the same time, in the final step of generating the fake frame, the generated face is fused into the background. In order to reduce the boundary inconsistency caused by this process, some smoothing operations are usually applied [19], which also cause the loss of facial texture details. For example, Fig. 1a is a frame selected from the real video dataset VidTIMIT, and Fig. 1b is a frame selected from the face manipulation video dataset DeepFake-TIMIT [19]. From the figure alone, it is difficult for human eyes to tell which frame is real. However, the real frame has more detailed texture, such as the double eyelids and wrinkle details around the eyes, while such texture details are lacking in the fake frame.

Figure 1: Example of real and fake frames. (a) is a real frame, (b) is a fake frame

Some current works have focused on facial texture to distinguish real videos from fake videos [20–23]. Aiming at the insufficient texture details of DeepFake videos mentioned above, relevant features are extracted to capture the texture characteristics of the face region of the frame, and these features are then used to identify the authenticity of the frame.

    3 DeepFake Videos Detection Method Based on Texture Features

The proposed method for the detection of facial manipulation videos based on texture features is described in this section. The extraction method of the texture features is described in Section 3.1. In Section 3.2, the feature selection method is presented. Finally, the overall pipeline of the proposed method is given in Section 3.3.

    3.1 Texture Feature Extraction Method

Texture features are complex visual features that can characterize the roughness and regularity of images. The analysis of image texture is a classical research direction in the fields of image processing and computer vision. A series of theories have been constructed for texture feature extraction, which are used to analyze the texture of images, and great progress has been achieved in this field [24–28]. Tuceryan et al. [24] divide texture feature extraction methods into four categories: statistical methods, geometrical methods, model methods and signal processing methods. The statistical methods extract the statistical characteristics of pixel values and their neighborhoods as texture features. The geometrical methods analyze image textures through the geometric properties of "texture elements" or primitives. The model methods are implemented by constructing models such as random field models. The signal processing methods extract texture features from the transform domain. In addition, a series of classical texture feature extraction algorithms have been proposed, such as the gray-level co-occurrence matrix [25], local binary patterns [26], Markov random field models [27], the wavelet transform method [28] and so on.

Based on the defect of insufficient texture details in DeepFake videos described in Section 2, four texture feature extraction methods, based on the gradient domain, standard deviation, gray level co-occurrence matrix and wavelet transform respectively, are used to extract the corresponding texture features of the face region, which can effectively separate real videos from fake videos.

    3.1.1 Texture Feature Based on the Gradient Domain and Standard Deviation

The image gradient characterizes the change of the gray scale of each pixel within its neighborhood, which can represent the texture level of the image. In areas with rich texture details, such as edges, the gray level changes greatly and the gradient value is large, while in smooth areas the gray level changes little and the gradient value is small. Images with different texture details have different statistical characteristics of gradients. Therefore, in this paper, the statistical features of the gradient map are used as texture features.

Usually, differences are used to obtain the vertical and horizontal gradients of the image. Combining the gradient information in the horizontal and vertical directions of the image, the gradient magnitude $M$ is calculated as follows:

$$M = \sqrt{g_x^2 + g_y^2}$$

where $g_x$ and $g_y$ represent the gradients of the image in the horizontal and vertical directions, respectively.

Based on the gradient magnitude, its mean, variance, skewness and kurtosis are extracted as texture features, which reflect the statistical characteristics of the data distribution.

At the same time, the standard deviation is calculated for the gray image. The standard deviation reflects the dispersion of the image pixels around the overall level of the image. The larger the standard deviation is, the more each pixel value varies and the richer the image texture details are. Therefore, the standard deviation of the grayscale image is also calculated as a feature to characterize the texture of the image.
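The features of this subsection can be sketched in a few lines of Python. The use of np.gradient for the finite differences is an assumption; the paper only says differences are used.

```python
# Sketch of the gradient-domain and standard-deviation features of
# Section 3.1.1, assuming a grayscale face block as input.
import numpy as np
from scipy.stats import skew, kurtosis

def gradient_and_std_features(gray_block):
    """Return [mean, variance, skewness, kurtosis] of the gradient
    magnitude plus the grayscale standard deviation (5 features)."""
    gray = gray_block.astype(np.float64)
    gy, gx = np.gradient(gray)          # vertical / horizontal differences
    m = np.sqrt(gx ** 2 + gy ** 2)      # gradient magnitude M
    m_flat = m.ravel()
    return np.array([
        m_flat.mean(),
        m_flat.var(),
        skew(m_flat),
        kurtosis(m_flat),
        gray.std(),                     # dispersion of the gray image
    ])
```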

    3.1.2 Texture Feature Based on Gray Level Co-Occurrence Matrix

The gray level co-occurrence matrix describes texture through statistical analysis of the spatial distribution of pixels in the image. Given a direction and a distance, the probability that two gray levels appear at pixel pairs with that specific spatial relationship can be calculated. The probabilities calculated over all gray level pairs constitute a gray level co-occurrence matrix. From the gray level co-occurrence matrix, 14 texture features can be calculated [25]. In this paper, five texture features are used: the contrast, energy, homogeneity, entropy, and correlation. These five features are introduced as follows.

Contrast reflects the richness of the texture details and the depth of the textures. The more pixel pairs with a large gray-scale difference there are, the greater the contrast value is [29]. The contrast is calculated as follows:

$$\mathrm{Con} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i - j)^2 P_{i,j}$$

Energy, also called the angular second moment, reflects the uniformity of the gray level distribution of the image [29]. The energy is calculated as follows:

$$\mathrm{Asm} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j}^{2}$$

Homogeneity reflects the intensity of local texture changes. The value of homogeneity is larger where the local texture changes more uniformly. The homogeneity is calculated as follows:

$$\mathrm{Hom} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{P_{i,j}}{1 + (i - j)^2}$$

Entropy measures the amount of information in the local area. If the image has more texture information, the probability values of the gray-level co-occurrence matrix are uniformly distributed and the entropy value is large [29]. The entropy is calculated as follows:

$$\mathrm{Ent} = -\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j} \log P_{i,j}$$

Correlation measures the degree of correlation between the elements of the gray level co-occurrence matrix [29]. The correlation is calculated as follows:

$$\mathrm{Cor} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{(i - \mu_i)(j - \mu_j) P_{i,j}}{\sigma_i \sigma_j}$$

where

$$\mu_i = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} i\, P_{i,j}, \quad \mu_j = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} j\, P_{i,j}, \quad \sigma_i^2 = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i - \mu_i)^2 P_{i,j}, \quad \sigma_j^2 = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (j - \mu_j)^2 P_{i,j},$$

$N$ is the size of the gray-level co-occurrence matrix, and $P_{i,j}$ is the value in the i-th row and j-th column of the gray-level co-occurrence matrix.

In order to measure the gray level changes in various directions and extract the texture details in each direction as much as possible, we calculate the gray level co-occurrence matrix with a distance of 1 in the 0°, 45°, 90° and 135° directions. Considering the amount of computation and the fineness of the texture details reflected by the gray-level co-occurrence matrix, the number of gray levels is set to 64. The contrast, correlation, energy, homogeneity and entropy are calculated for the four gray-level co-occurrence matrices. Finally, the features calculated from the four matrices are averaged respectively to obtain the extracted texture features.
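As an illustration of this procedure, the sketch below computes the five averaged GLCM features with scikit-image (graycomatrix/graycoprops, the 0.19+ spellings). The quantization scheme and the symmetric/normed matrix settings are assumptions; entropy is computed directly from the normalized matrices here.

```python
# Sketch of the GLCM features of Section 3.1.2 with scikit-image:
# distance 1, four directions, 64 gray levels, averaged per feature.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_block, levels=64):
    """Contrast, energy, homogeneity, correlation and entropy, each
    averaged over the 0/45/90/135 degree matrices at distance 1."""
    # Quantize the 8-bit block to `levels` gray levels (assumed scheme).
    q = (gray_block.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).mean()
             for prop in ("contrast", "energy", "homogeneity", "correlation")]
    # Entropy: -sum P log P per direction, then averaged over directions.
    p = glcm[:, :, 0, :]                      # shape (levels, levels, 4)
    feats.append((-p * np.log(p + 1e-12)).sum(axis=(0, 1)).mean())
    return np.array(feats)
```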

    3.1.3 Texture Feature Based on Wavelet Transform

The wavelet transform is widely used in many fields such as image processing and signal processing [28,30]. Using the wavelet transform to decompose the image in both the horizontal and vertical directions yields a low-frequency sub-band, a horizontal high-frequency sub-band, a vertical high-frequency sub-band and a diagonal high-frequency sub-band. The high-frequency sub-bands contain most of the texture information of the image. Statistical analysis of these sub-bands reveals the texture level of the image, which can be used to classify images with different texture richness.

Therefore, wavelet decomposition is performed on the image to obtain the three coefficient matrices of the horizontal, vertical and diagonal high-frequency sub-bands. The mean, standard deviation and energy of the three coefficient matrices are calculated as texture features. The energy is calculated as follows:

$$E = \sum_{i=1}^{M} \sum_{j=1}^{N} x_{i,j}^{2}$$

where $M$ and $N$ are the size of the coefficient matrix, and $x_{i,j}$ is the coefficient in the i-th row and j-th column of the coefficient matrix.
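A compact sketch of this step with PyWavelets is shown below. The Haar basis and the single-level decomposition are assumptions, since the paper does not name the wavelet used.

```python
# Sketch of the wavelet features of Section 3.1.3 using PyWavelets.
import numpy as np
import pywt

def wavelet_features(gray_block, wavelet="haar"):
    """Mean, standard deviation and energy of the horizontal, vertical
    and diagonal high-frequency sub-bands (9 features)."""
    _, (cH, cV, cD) = pywt.dwt2(gray_block.astype(np.float64), wavelet)
    feats = []
    for band in (cH, cV, cD):
        feats.extend([band.mean(),
                      band.std(),
                      np.sum(band ** 2)])   # energy E = sum of x_{i,j}^2
    return np.array(feats)
```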

    3.2 Feature Selection Method

The texture features introduced in Section 3.1 can indicate which frames contain more texture details. However, some features are not discriminative enough to distinguish whether a frame is real or fake. On the one hand, the extracted features all describe the frame texture, which may cause feature redundancy. On the other hand, the features are extracted from blocks of the face region, and some blocks may not be facial areas, so the features extracted from these blocks may be invalid. Therefore, a feature selection method is introduced to screen the features and improve the classification performance.

The feature selection method is shown in Algorithm 1. Firstly, the first half of the texture features is selected to initialize the feature subset. Then the remaining features are added to the feature subset one by one: if the performance criterion J can be improved, the feature is kept; otherwise it is discarded. After all of the second half has been examined, the first-half features are removed and put back into the feature subset one by one to decide whether each of them should be kept. After all features have been examined, the feature subset is the final feature set used to distinguish whether a frame is real or fake.
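A sketch of this procedure is given below. Taking J to be cross-validated SVM accuracy is an assumption; the text does not pin down how J is measured at this point.

```python
# Sketch of the bidirectional selection procedure of Algorithm 1.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def criterion_J(X, y, subset):
    """Performance criterion J: cross-validated SVM accuracy (assumed)."""
    return cross_val_score(SVC(), X[:, sorted(subset)], y, cv=3).mean()

def select_features(X, y):
    d = X.shape[1]
    first, second = list(range(d // 2)), list(range(d // 2, d))
    subset = set(first)                  # initialize with the first half
    best = criterion_J(X, y, subset)
    # Pass 1: add each remaining feature; keep it only if J improves.
    for f in second:
        score = criterion_J(X, y, subset | {f})
        if score > best:
            subset.add(f)
            best = score
    # Pass 2: re-examine each first-half feature, keeping it only if
    # putting it back improves J over leaving it out.
    for f in first:
        if len(subset) <= 1:
            break
        subset.discard(f)
        if criterion_J(X, y, subset | {f}) > criterion_J(X, y, subset):
            subset.add(f)
    return sorted(subset)
```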

    3.3 The Overall Pipeline

The flowchart of the proposed DeepFake video detection method is shown in Fig. 2. The steps of the proposed method are described as follows:

(1) The proposed method performs DeepFake video detection at the frame level, so the videos are first decoded into frames. To evaluate the texture details of the face area in every frame, the facial feature points are extracted using DLIB [31], and the face area is located and cropped according to the extracted feature points.

(2) As a classical method, the Wiener filter is widely used to remove noise, and many Wiener filter based denoising methods have been proposed [32,33]. In order to reduce the influence of sensor noise on the texture features while preserving the texture details of the frame as much as possible, Wiener filtering is used to denoise the face area.

(3) Because the tampered area is unknown, the cropped face area is only a rough region, which may include untampered parts. In order to reduce the impact of inaccurate cropping of the tampered area, and to handle the case where only part of the face has been tampered with, such as only the mouth area, we divide the cropped face area evenly into 9 blocks and extract texture features from each sub-block. This ensures that the forged area is contained in some sub-block, so that the extracted texture features can effectively characterize the richness of texture details in the face area.

(4) Then the texture feature extraction methods introduced in Section 3.1 are applied. For each sub-block, the mean, standard deviation, skewness and kurtosis of the image gradient, the standard deviation of the grayscale image, the contrast, homogeneity (inverse difference moment), correlation, energy and entropy of the gray level co-occurrence matrix, and the mean, energy and standard deviation of the horizontal, vertical and diagonal high-frequency coefficient matrices obtained by the wavelet transform are calculated, giving 19 features per block. The features of the 9 blocks are concatenated into a 171-dimensional feature vector.

(5) Taking the texture feature vector as input, the feature selection method introduced in Section 3.2 is used to remove redundant features, and the retained features are used as the final discriminative features.

(6) After normalization of the extracted texture features, an SVM classifier is used to train on and classify each frame of the videos. An end-to-end sketch of steps (1)–(6) is given below.
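The following hedged sketch chains steps (1)–(6) together, reusing the feature functions sketched in Section 3.1 and the selection routine of Section 3.2. The dlib frontal face detector (rather than a landmark-based crop), the 3 × 3 Wiener window and the label convention are assumptions made for illustration.

```python
# End-to-end sketch of steps (1)-(6): face cropping, Wiener denoising,
# 3x3 blocking, per-block texture features, then SVM training.
import dlib
import numpy as np
from scipy.signal import wiener
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

detector = dlib.get_frontal_face_detector()

def frame_features(gray_frame):
    """171-D texture vector for one frame, or None if no face is found."""
    faces = detector(gray_frame)                 # step (1): locate the face
    if not faces:
        return None
    r = faces[0]
    face = gray_frame[max(r.top(), 0):r.bottom(), max(r.left(), 0):r.right()]
    face = wiener(face.astype(np.float64), (3, 3))   # step (2): denoise
    h, w = face.shape
    feats = []
    for i in range(3):                               # step (3): 3x3 blocks
        for j in range(3):
            block = face[i * h // 3:(i + 1) * h // 3,
                         j * w // 3:(j + 1) * w // 3]
            u8 = np.clip(block, 0, 255).astype(np.uint8)
            feats.extend(gradient_and_std_features(block))  # 5 features
            feats.extend(glcm_features(u8))                 # 5 features
            feats.extend(wavelet_features(block))           # 9 features
    return np.array(feats)               # step (4): 9 x 19 = 171 dims

# Steps (5)-(6): feature selection, normalization, SVM training.
# X: stacked frame vectors; y: 0 = real, 1 = fake (assumed convention).
# kept = select_features(X, y)
# clf = SVC().fit(StandardScaler().fit_transform(X[:, kept]), y)
```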

Figure 2: Flow diagram of the DeepFake video detection method based on texture features

    4 Experiment

    4.1 Dataset and Implementation Details

The datasets used in the experiments are the DeepFake-TIMIT dataset [19], the FaceForensics++ dataset [34], the Celeb-DF dataset [8] and the DeepFake Detection Challenge (DFDC) Preview dataset [35], which are the mainstream datasets for detecting face manipulation videos.

The DeepFake-TIMIT dataset [19], which is based on the VidTIMIT dataset, is generated by a GAN-based face-swapping algorithm. The dataset is divided into two types of videos: low quality videos (TIMIT-LQ), in which the resolution of the fake face is 64 × 64, and high quality videos (TIMIT-HQ), in which the resolution of the fake face is 128 × 128. Each type contains 32 different subjects, each subject has about 10 videos in which the actor speaks to the camera, and each video lasts at least 4 seconds. To construct the TIMIT dataset for the experiments, the fake videos are selected from the DeepFake-TIMIT dataset and the real videos from the corresponding VidTIMIT dataset.

The FaceForensics++ dataset [34] is a large-scale face manipulation dataset. The real videos are collected from the Internet, mostly downloaded from YouTube; they consist of 1000 videos containing 509,914 frames in total. Four face manipulation technologies are used to generate the tampered videos: FaceSwap, DeepFake, Face2Face and NeuralTextures. Because the proposed method targets DeepFake videos, only the fake videos generated by the DeepFake method in the FaceForensics++ dataset (FF-DF) are used as the fake video set. The dataset provides three video qualities: raw videos (C0), lightly compressed videos (C23) and low quality videos (C40). Each type of tampered video contains 1,000 fake videos.

The Celeb-DF dataset [8] is a large-scale DeepFake video dataset containing 590 real videos and 5,639 DeepFake videos, with over two million frames in total. The real videos are collected from YouTube. The DeepFake videos are generated using an improved DeepFake synthesis algorithm that addresses problems such as the low resolution of synthesized faces.

The DFDC Preview dataset [35] is a preview of the DFDC dataset and contains around 5,000 videos. The real videos are shot by many actors and include varied lighting conditions, head poses and visually diverse backgrounds. Two methods are used to generate the fake videos, producing swaps of different qualities.

All videos in the four datasets are first decoded into frames. The numbers of frames in the DeepFake-TIMIT dataset [19], FaceForensics++ dataset [34], Celeb-DF dataset [8] and DFDC Preview dataset [35] are about 70,000, 500,000, 2,000,000 and 1,000,000, respectively. Then the face areas are located and cropped using DLIB [31].

Accuracy (Acc) and the area under the receiver operating characteristic curve (AUC) are used as evaluation metrics. Detection is performed at the frame level, that is, at the image level. The higher the Acc and AUC values, the better the performance of the method.
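For concreteness, the two metrics can be computed at the frame level with scikit-learn as in the sketch below; using the SVM decision values as the ranking scores for AUC is an assumption.

```python
# Frame-level evaluation sketch: accuracy and AUC via scikit-learn.
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(y_true, y_pred, scores):
    """y_pred: hard 0/1 predictions; scores: continuous values such as
    clf.decision_function(X), which AUC needs for ranking."""
    return {"Acc": accuracy_score(y_true, y_pred),
            "AUC": roc_auc_score(y_true, scores)}
```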

    4.2 Experimental Results and Analysis

    4.2.1 Impact of Feature Selection Method

To verify the effectiveness of the feature selection method, a comparative experiment with and without the feature selection method is conducted on the different quality videos of the DeepFake-TIMIT dataset [19]. The performance is shown in Tab. 1.

Table 1: Accuracy (%) with and without the feature selection method on the DeepFake-TIMIT dataset

From the experimental results, it is obvious that after using the feature selection method, the detection accuracy on low-quality videos increases by approximately 0.1%, and the accuracy on high-quality videos increases by approximately 8.3%. The feature selection method thus helps to improve the detection accuracy, proving its effectiveness.

    4.2.2 Performance on Mainstream DeepFake Datasets

The proposed method is evaluated on four mainstream DeepFake datasets: the DeepFake-TIMIT dataset [19], FaceForensics++ dataset [34], Celeb-DF dataset [8] and DFDC Preview dataset [35]. The performance of the proposed method is shown in Tab. 2.

From the experimental results, it can be seen that the proposed method achieves ideal performance on the DeepFake-TIMIT dataset [19] and the FaceForensics++ dataset [34]: the accuracy and AUC score on these two datasets are higher than 85% and 94%, respectively. On the Celeb-DF dataset [8] and the DFDC Preview dataset [35], the accuracy and AUC score are higher than 75% and 79%, respectively. Because these two datasets are generated by improved DeepFake synthesis algorithms, the lack of facial texture details has been alleviated. Therefore, compared with the DeepFake-TIMIT dataset [19] and the FaceForensics++ dataset [34], the performance of the proposed method degrades on the other two datasets. In general, the proposed method achieves ideal performance, but detection on the Celeb-DF dataset [8] and the DFDC Preview dataset [35] remains challenging.

Table 2: Performance of the proposed method on the DeepFake-TIMIT, FaceForensics++, Celeb-DF and DFDC Preview datasets

    4.2.3 Cross-Data Evaluation

In order to evaluate the performance of the proposed method on different quality videos and across qualities, we use the three quality levels of the FaceForensics++ dataset [34] as the training and testing sets, respectively. For example, the C0 quality videos are used for training, and then the C0, C23 and C40 quality videos are each used for testing. The experimental results are shown in Tab. 3.

Table 3: Accuracy (%) of training and testing on the three different quality videos of the FaceForensics++ dataset

From the experimental results, it can be seen that the accuracy of the proposed method on the C0, C23 and C40 quality videos is 87.3%, 86.3% and 91.2%, respectively. When training on one quality and testing on the others, the accuracy stays essentially the same, especially when training on the C0 and C23 quality videos. In general, the experimental results show that the proposed method achieves ideal performance on videos of different quality and is robust to videos with different compression rates.

    4.2.4 Comparison with Other Methods

Four mainstream DeepFake video detection algorithms are used for comparison: FWA [11], MesoNet [12], XceptionNet [34] and the texture method [23]. ResNet is used for detection in FWA [11], MesoNet is the neural network proposed in [12], and XceptionNet is the baseline network used by the authors of the FaceForensics++ dataset [34]. Two texture features, LBP and HOG, are used in [23]. All methods perform detection at the frame level. The comparison with the deep learning methods on the DeepFake-TIMIT dataset [19] and the FaceForensics++ dataset [34] is shown in Tabs. 4 and 5. The comparison with the texture method [23] on the FaceForensics++ dataset [34] is shown in Tab. 6.

Table 4: Accuracy (%) of the proposed method and deep learning methods on the DeepFake-TIMIT and FaceForensics++ datasets

Table 5: AUC (%) of the proposed method and deep learning methods on the DeepFake-TIMIT and FaceForensics++ datasets

Table 6: Accuracy (%) of the proposed method and the texture method on the three different quality videos of the FaceForensics++ dataset

Compared with the deep learning methods [11,12,34], the proposed method achieves ideal performance. From the results on the DeepFake-TIMIT dataset, the accuracy of the proposed method on high quality and low quality videos reaches 94.4% and 92.6%, respectively, which is better than FWA [11], MesoNet [12] and XceptionNet [34] on both quality levels. The AUC score on high quality and low quality videos reaches 98.2% and 99.5%, respectively, which is better than MesoNet [12] and XceptionNet [34] on both quality levels and only 0.4 lower than FWA [11]. From the results on the FaceForensics++ dataset, the accuracy and AUC score of the proposed method reach 87.3% and 94.3%, respectively, which is also better than FWA [11] and MesoNet [12], but there is a gap compared with XceptionNet [34]. Compared with the texture method [23], the proposed method clearly outperforms LBP and HOG [23] on the C23 and C40 quality videos, while on the C0 quality videos the proposed method is about 4% lower than LBP.

In general, on the DeepFake-TIMIT [19] and FaceForensics++ [34] datasets, the proposed method achieves ideal performance, proving its effectiveness for DeepFake video detection. However, further work is needed to improve the performance of the proposed method, especially on the FaceForensics++ dataset [34].

    5 Conclusion

In order to combat the increasingly serious threat of DeepFake videos, a new method is proposed to detect them. Based on the defect that the texture details of some DeepFake videos are insufficient, texture features are extracted using the gradient, standard deviation, gray level co-occurrence matrix and wavelet transform. These features are then selected and fed to an SVM to detect DeepFake videos. The experimental results show that the proposed method can effectively detect DeepFake videos. In the future, more research will be devoted to mainstream face manipulation technologies to find the defects of fake videos. In addition, more effective texture features will be extracted to characterize the texture details of the face region and further improve the detection performance.

Funding Statement: This work is supported by the National Natural Science Foundation of China (Nos. U2001202, 62072480, U1736118), the National Key R&D Program of China (Nos. 2019QY2202, 2019QY(Y)0207), the Key Areas R&D Program of Guangdong (No. 2019B010136002), and the Key Scientific Research Program of Guangzhou (No. 201804020068).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
