
    Image Steganalysis Based on Deep Content Features Clustering

    Computers Materials & Continua, 2023, Issue 9

    Chengyu Mo, Fenlin Liu, Ma Zhu, Gengcong Yan, Baojun Qi and Chunfang Yang

    1Henan Provincial Key Laboratory of Cyberspace Situational Awareness, Zhengzhou, 450001, China

    2Zhengzhou Science and Technology Institute, Zhengzhou, 450001, China

    3School of Science, Aalto University, Espoo, 02150, Finland

    ABSTRACT Training images whose contents differ obviously from the detected images make the steganalysis model perform poorly in deep steganalysis. Existing methods try to reduce this effect by discarding some features related to image contents, which inevitably loses much helpful information and causes low detection accuracy. This paper proposes an image steganalysis method based on deep content features clustering to solve this problem. Firstly, the wavelet transform is used to remove the high-frequency noise of the image, and a deep convolutional neural network is used to extract content features from the low-frequency information of the image. Then, the extracted features are clustered to obtain the corresponding class labels and achieve sample pre-classification. Finally, the steganalysis network is trained separately on the samples in each subclass to achieve more reliable steganalysis. We experimented on publicly available combined datasets of Bossbase 1.01, Bows2, and ALASKA#2 with a quality factor of 75. The proposed pre-classification scheme can improve the detection accuracy by 4.84% for Joint Photographic Experts Group UNIversal WAvelet Relative Distortion (J-UNIWARD) at a payload of 0.4 bits per non-zero alternating current discrete cosine transform coefficient (bpnzAC). Furthermore, at a payload of 0.2 bpnzAC, the improvement is minimal but still reaches 1.39%. Compared with previous steganalysis based on deep learning, this method considers the differences between the training contents and selects the proper detector for the image to be detected. Experimental results show that the pre-classification scheme can effectively obtain image subclasses with certain similarities and better ensure the consistency of training and testing images. These measures reduce the impact of sample content inconsistency on the steganalysis network and improve the accuracy of steganalysis.

    KEYWORDS Steganalysis; deep learning; pre-classification

    1 Introduction

    Digital steganography is a technique that embeds secret information in the redundancy of multimedia data such as digital images, video, audio, and text to achieve covert communication. Over the past more than 20 years, researchers have proposed a series of image steganography algorithms, including early classical techniques such as Least Significant Bit Replacement (LSBR) [1], Least Significant Bit Matching (LSBM) [2], JSteg [3], and F5 [4], and adaptive steganography algorithms that comply with the architecture of "distortion cost function + syndrome-trellis codes (STCs)," such as Highly Undetectable steGO (HUGO) [5], Spatial UNIversal WAvelet Relative Distortion (S-UNIWARD) [6], J-UNIWARD [6], Tong's method [7], and Correlational Multivariate Gaussian (CMG) [8]. Accordingly, researchers have proposed many steganalysis approaches, such as the early classical Chi-square Attack [9], block effect detection [10], and histogram estimation detection [11]. There are also subsequent high-dimensional feature detection methods such as the Spatial Rich Model (SRM) [12], Discrete Cosine Transform Residual (DCTR) [13], Gabor Filter Residual (GFR) [14], and a features combination method [15]. In recent years, inspired by the excellent performance of deep learning in image classification, researchers have also introduced deep learning into steganalysis and proposed many excellent approaches.

    A well-designed preprocessing layer can be essential in related research [16]. Most existing steganalysis methods based on deep learning first preprocess the images to obtain high-frequency signals rich in steganographic noise, and then perform operations such as convolution, regularization, and activation to extract features and detect the stego images. According to whether the kernels of the preprocessing layer are learnable, the existing methods can be divided into deep steganalysis based on deterministic preprocessing and deep steganalysis based on learnable preprocessing.

    The kernels of the preprocessing layer in deep steganalysis based on deterministic preprocessing are fixed, and their parameters no longer participate in the backpropagation of training after initialization. Qian et al. [17] proposed the Gaussian-Neuron Convolutional Neural Network (GNCNN), which uses a fixed 5×5 high-pass filter kernel in the preprocessing layer to eliminate image content interference and enhance the steganographic signal. Then a Convolutional Neural Network (CNN) equipped with a Gaussian activation function detects the images with hidden information. Xu et al. [18] also used the high-pass filter kernel of the GNCNN network for preprocessing in their method. Inspired by SRM features, they take absolute activation (ABS) layers, batch-normalization (BN) layers, and the TanH activation function to effectively capture the sign symmetry of residual information and limit the range of feature values. Zeng et al. [19] utilized 25 fixed Discrete Cosine Transform (DCT) kernels for preprocessing and applied multiple subnetworks to the information formed after quantization and truncation operations for steganalysis. Subsequently, Zeng et al. [20] improved the model in [19] by taking three parallel subnetworks to realize the steganalysis of large-scale Joint Photographic Experts Group (JPEG) images. Li et al. [21] proposed the ReLU Sigmoid and TanH Network (ReStNet), which uses the linear filter, the nonlinear filters in SRM, and the Gabor filter to preprocess the image, then respectively feeds the three preprocessing results to three subnetworks with different activation functions for steganalysis. The Joint Photographic Experts Group Convolutional Neural Network (JPEGCNN) proposed by Gan et al. [22] uses a preprocessing layer with a kernel size of 3×3 to better capture neighborhood pixel correlation, then extracts features by stacking convolution-activation-pooling operations and adopts dropout to improve the performance and generalization ability.
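    As an illustration of such a fixed preprocessing kernel, the commonly published 5×5 "KV" high-pass filter from the SRM family can be applied with a plain "valid" 2-D correlation. The exact coefficients below are the ones usually quoted in the literature and are treated here as an assumption about the filter used by GNCNN:

```python
import numpy as np

# 5x5 "KV" high-pass kernel widely used to suppress image content and
# amplify stego noise (the commonly published coefficients; an assumption
# about the exact filter used in the cited work).
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def highpass_residual(img, kernel=KV):
    """'Valid' 2-D correlation of a grayscale image with a fixed high-pass kernel."""
    img = np.asarray(img, dtype=float)
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

    Because the kernel coefficients sum to zero, flat image regions yield a zero residual, which is exactly why such filters suppress content and leave mostly noise.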

    The kernels of the preprocessing layer in deep steganalysis based on learnable preprocessing can be automatically updated and optimized by backpropagation in subsequent training after initialization. In 2014, Tan et al. [23] proposed TanNet, which extracts feature maps with a three-stage stacked convolutional autoencoder, then takes a fully connected neural network to detect the stego images. This method initializes the first layer with the SRM filter kernel. Ye et al. [24] directly initialized the preprocessing layer using the 30 residual filter kernels in the SRM model. Then, the truncated linear unit (TLU) activation function in the first layer was used to learn the distribution of steganographic information. Additionally, they introduced channel selection information [25] to detect stego images. In the Steganalysis Residual Network (SRNet) [26], an end-to-end steganalysis network proposed by Boroumand et al., all filter kernels are randomly initialized and continuously updated in subsequent training, and a shortcut structure [27] is also adopted to increase the diversity of steganalysis features. Zeng et al. proposed the Wider SEparate-then-Reunion Network (WISERNet) [28], which uses the SRM high-pass filter kernel to initialize one convolution layer per channel, then updates the convolution kernels to learn the features from the different channels of color images. The Element Wise Network (EWNet) proposed by Su et al. [29] randomly initializes the preprocessing layer, which continuously learns and optimizes during network training. Moreover, its fully convolutional structure avoids excessive loss of steganographic information and realizes the steganalysis of JPEG images with different sizes.

    The above methods mainly focus on preprocessing input images and designing convolutional neural network structures. Some of them have surpassed the steganalysis methods based on rich models. Those methods indicate that deep learning has become the mainstream of current steganography detection research. However, deep learning steganalysis methods driven by large amounts of labeled data depend on consistency between the training and testing data. Current methods do not differ in processing noisy images with complex textures and smooth images with simple content. When designing the above deep steganalysis methods, the consistency of the training images and the images to be detected was not considered. Previous research results show that the performance of steganalysis based on deep learning tends to deteriorate when the texture complexity, statistical distribution, and subject content of test images and training images are inconsistent [30–32]. For this problem, Pibre et al. [30] found that steganalysis based on a convolutional neural network in the clairvoyant scenario has a certain generalization ability across different datasets. However, the particularity of this scenario limits its use in real-world applications. Zhang et al. [31] and Zhang et al. [32] tweaked the feature extraction network to extract features less affected by image content. Although their methods can reduce the high false alarm rate caused by apparent differences in image content, they also discard many features related to image content, which can characterize the differences caused by steganography. Therefore, the detection accuracy is negatively affected. The technology proposed by Abukhodair et al. [33] selects optimal features and effectively classifies big data; it effectively reduces computational time and increases classification accuracy. In the traditional field of steganalysis, Amirkhani et al. [34] proposed a steganalysis framework based on image content pre-classification, which pre-classifies training samples based on non-zero DCT coefficient ratios. A classifier is trained specifically for each class to improve the performance of steganography detection based on low-dimensional detection features. Li et al. [35] proposed a "clustering and classification" JPEG steganalysis method, which classifies training and test samples based on the horizontal and vertical intra-block co-occurrence matrices of the absolute values of the DCT coefficients and improves the detection ability. Lu et al. [36] proposed a steganalysis framework based on pre-classification and feature selection, which utilizes the relationship between adjacent image data to pre-classify samples. The improvement in detection performance is verified in steganalysis based on high-dimensional steganalysis features.

    Inspired by the pre-classification of image samples in traditional steganalysis [34–36], this paper proposes a deep-learning steganalysis model that clusters samples by image content information to solve the above-mentioned problem. This method improves the extraction of classification features by directly using convolutional neural networks to extract content classification features, avoiding the domain knowledge required for manually designed features. First, wavelet decomposition is performed on the images to obtain low-frequency information. Then a convolutional neural network model extracts features that describe the image content. Next, we cluster the features to obtain a pre-classification of the samples based on image content. Finally, the steganalysis model is trained individually with each sub-class of data. The proposed method maintains sample consistency between the training and testing phases and improves the reliability of steganalysis.

    2 Problem Description

    Compared to traditional steganography, adaptive steganography algorithms change pixels or coefficients in regions that are difficult to model and detect. The current typical adaptive steganography algorithms follow the architecture "distortion cost function + STCs." First, a distortion cost function ρ measures the detectability of changing a pixel or coefficient. Then, the secret information is encoded into a stego sequence with minimum overall distortion, viz.

    D(C, S) = Σ_{i=1}^{N} ρ(c_i, s_i) → min,

    where C = {c_1, c_2, ..., c_N} denotes a sequence of cover pixels or coefficients, S = {s_1, s_2, ..., s_N} denotes a sequence of stego pixels or coefficients, and ρ(c_i, s_i) denotes the distortion caused by changing the cover pixel or coefficient c_i to s_i. Due to the vast storage and time overhead required to search for the minimum-distortion sequence among all possible stego sequences, STCs were first used by Pevny et al. [5] in 2010 to encode the secret message into an approximately minimum-distortion sequence of stego pixels or coefficients. Since then, STCs have remained the dominant method adopted by adaptive steganography, although some improved coding methods have been proposed one after another. Currently, steganography researchers mainly focus on the design of better distortion cost functions.
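    As a concrete illustration of the overall distortion measure (a toy sketch only, not the actual STC coding; the cost function `rho` below is hypothetical and much simpler than real costs such as J-UNIWARD's):

```python
import numpy as np

def total_distortion(cover, stego, rho):
    """Sum the per-element costs rho(c_i, s_i) over all changed positions.

    rho is a callable returning the cost of changing cover element c to s;
    unchanged elements contribute zero distortion.
    """
    return sum(rho(c, s) for c, s in zip(cover, stego) if c != s)

# Toy cost depending only on the cover value (hypothetical; real cost
# functions penalize changes in smooth regions based on local texture).
cover = np.array([10, 10, 11, 50, 90, 40], dtype=float)
rho = lambda c, s: 1.0 / (1.0 + abs(c - 50))
stego = cover.copy()
stego[3] += 1  # a single +1 change
print(total_distortion(cover, stego, rho))
```

    An STC encoder would search, among all stego sequences that carry the message, for one that approximately minimizes this quantity.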

    After more than a decade of effort, researchers have successively designed many distortion cost functions with excellent performance. Although each distortion cost function has its own characteristics, almost all share the common feature that the distortion is usually smaller when changing pixels or coefficients in regions with more obvious color changes or more complex textures. Therefore, most of the pixels or coefficients changed during steganography cluster in these regions. Since the colors at the edges of different content objects in an image vary significantly, and the texture of areas containing many small objects is complex, the distribution of pixels or coefficients changed during steganography is closely related to the image content. Taking the three images shown in row 1 of Fig. 1, selected from Bossbase 1.01, as an example, we used J-UNIWARD to embed random information into them with an embedding ratio of 0.4 bpnzAC. Comparing the change position maps given in row 2 of Fig. 1, it can be found that the modifications in the first two images mainly concentrate on the edges of the flowers, and the distributions of the changed positions are relatively similar, though there are differences in the embedding areas of each image. In the third image, the modifications mainly concentrate on the edges of buildings and people, and on the texture-complex areas containing many small windows and doors. The distribution of the changed positions is significantly different from the previous two images.

    Figure 1: The positions of the coefficients that were changed when J-UNIWARD was used

    Four datasets, Face Recognition Technology (FERET) [37], Oxford 102 Flowers [38], Kaggle flower (the dataset can be downloaded from https://www.kaggle.com/datasets/alxmamaev/flowers-recognition), and Stanford Dogs [39], were used to test the performance of existing steganalysis methods based on deep learning when the contents of the training data and the test data do not match. The FERET dataset consists of face images. The Oxford 102 Flowers and Kaggle flower datasets both consist of flower images and were merged into the dataset Flowers. The Stanford Dogs dataset consists of images of dogs. First, 10,000 images were randomly selected from each of the FERET, Flowers, and Stanford Dogs datasets. Then the images were cropped into square images starting from the top-left corner according to the shortest-edge cropping principle. Each cropped image was saved as a grayscale JPEG image with a size of 256×256 and a quality factor (QF) of 75 using the resize operation in the Python Pillow library. The cover training set, validation set, and testing set were randomly selected from each set of 10,000 grayscale images in the ratio 4:1:5, respectively. Then, the J-UNIWARD algorithm was used to embed random information into each image at a payload of 0.4 bpnzAC to generate the corresponding stego image. Finally, EWNet [29] was trained with the selected training and validation sets. The models trained on the three datasets, FERET, Flowers, and Stanford Dogs, are abbreviated as EWNet_FERET, EWNet_Flower, and EWNet_Dogs, respectively. Table 1 gives the detection accuracy of the three models on the different classes of testing sets. For the FERET testing set, the accuracy of EWNet_FERET is 11.68% higher than that of EWNet_Flower and 5.39% higher than that of EWNet_Dogs. Similar results can be found for the Flowers and Stanford Dogs testing sets. So the models trained on datasets consistent with the detected objects significantly outperform those trained on inconsistent datasets.
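    A minimal sketch of the image preparation step described above, assuming Pillow is available. The top-left square crop along the shortest edge, grayscale conversion, 256×256 size, and JPEG quality factor 75 follow the text; the file paths and function name are placeholders:

```python
from PIL import Image

def prepare_cover(in_path, out_path, size=256, quality=75):
    """Crop a square from the top-left corner along the shortest edge,
    convert to grayscale, resize to size x size, and save as JPEG."""
    img = Image.open(in_path)
    side = min(img.size)                # shortest-edge cropping principle
    img = img.crop((0, 0, side, side))  # square from the top-left corner
    img = img.convert("L")              # grayscale
    img = img.resize((size, size))
    img.save(out_path, "JPEG", quality=quality)
```

    Applying this uniformly to all source libraries ensures every cover image enters the pipeline with identical size, color mode, and compression settings.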

    Table 1: The detection accuracy of EWNet in the case of cover source mismatch

    In summary, the inconsistency between the contents of the training samples and the object to be detected significantly impacts the performance of steganalysis based on deep learning. A deep steganalysis model trained with samples inconsistent with the object to be detected has significant shortcomings in performance.

    3 Method

    It is universally acknowledged that deep neural networks show excellent performance in image content classification. We propose a steganalysis method based on deep clustering of image content to address the degradation of steganalysis performance caused by the content inconsistency between the training samples and the object to be detected. The basic idea of the method is to cluster the training images into sub-classes based on deep features expressing the image contents and to train a dedicated steganalysis network for each sub-class. In this way, consistency between the training samples and the object to be detected is achieved, and the performance of steganalysis is improved.

    As shown in Fig. 2, in the training phase, the image content is first separated from the noise to reduce the impact of noise, including steganographic noise, on image content feature extraction. Secondly, a deep convolutional neural network with excellent performance in image content classification is used to extract deep features from the denoised training images for distinguishing images by content. Then, the training samples are clustered according to the extracted features, i.e., training images with similar content are divided into the same sub-class. Moreover, a corresponding deep steganalysis network is trained for each sub-class of samples. In the detecting stage, the image content is first separated from the noise, and the deep convolutional neural network is used to extract the deep features. Then, according to the clustering results of the training stage, the input images are classified to determine the appropriate deep steganalysis network. Finally, the determined deep steganalysis network is used to detect whether the image contains hidden information.
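    The two phases above can be sketched as a small orchestration skeleton. The callable arguments (`denoise`, `extract`, `cluster`, `train_detector`) are placeholders for the components described in the text (wavelet denoising, VGG16 feature extraction, feature clustering, and per-sub-class EWNet training); their names are assumptions for this sketch:

```python
import numpy as np

def train_pipeline(train_imgs, denoise, extract, cluster, train_detector, k):
    """Training phase: denoise, extract deep content features, cluster them
    into k sub-classes, and train one detector per sub-class."""
    feats = np.stack([extract(denoise(x)) for x in train_imgs])
    centers, labels = cluster(feats, k)
    detectors = [train_detector([x for x, l in zip(train_imgs, labels) if l == j])
                 for j in range(k)]
    return centers, detectors

def detect(img, centers, detectors, denoise, extract):
    """Detecting phase: route the image to the nearest sub-class center,
    then apply that sub-class's steganalysis network."""
    f = extract(denoise(img))
    j = int(np.linalg.norm(centers - f, axis=1).argmin())
    return detectors[j](img)
```

    The design keeps routing (nearest cluster center) independent of the detectors themselves, so each sub-class detector only ever sees images whose content resembles its training data.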

    The key to the proposed method lies in noise interference separation and image clustering based on deep features. Therefore, these two parts are described in detail below.

    Figure 2: Overall framework of the proposed steganalysis method

    3.1 Noise Interference Separation

    When the image is transformed into a frequency-domain representation, the content information of the image is mainly concentrated in the low-frequency components. Correspondingly, the color change information of the edges and textures of image content objects is mainly reflected in the high-frequency components. Adaptive steganography embeds information by changing the pixels or coefficients at the edges of image content objects and in areas with more complex textures. That means the steganographic noise is mainly added to the high-frequency components of the image, while adaptive steganography has minimal impact on the image content. The results of existing adversarial-example research show that even the slightest perturbation applied to an image may lead to misjudgment in image content classification [40]. To avoid, as much as possible, the inconsistency between the image to be detected and the training samples caused by such misjudgment, Daubechies wavelets, which perform excellently in noise removal, are used to separate the image noise.

    Firstly, a first-level Daubechies wavelet decomposition is performed on the image to obtain its low-frequency component LL, horizontal component LH, vertical component HL, and diagonal component HH. Fig. 3 shows the low-frequency components obtained by first-level Daubechies wavelet decompositions with different vanishing moments on an image, with coefficients mapped to [0, 255]. It can be seen that, as the vanishing moment grows, the energy after decomposition is more concentrated and the image content presented by the low-frequency component becomes clearer. Nevertheless, a larger vanishing moment results in a greater computational cost of the wavelet decomposition. Therefore, according to the value of the vanishing moment usually set in steganalysis, Daubechies wavelets with a vanishing moment of 8 were selected to decompose the image. The obtained low-frequency components are used as input for the next step of deep image clustering or classification.
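    A single-level 2-D wavelet decomposition into the four subbands can be sketched as follows. The paper selects db8 (in practice obtained from a wavelet library such as PyWavelets); the Haar filter used here is the simplest Daubechies wavelet and is an assumption made only to keep the sketch dependency-free:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet decomposition.

    Returns the four subbands (LL, LH, HL, HH): low-frequency content,
    horizontal detail, vertical detail, and diagonal detail. Assumes even
    image dimensions.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0  # low-frequency component (image content)
    LH = (a - b + c - d) / 2.0  # horizontal detail
    HL = (a + b - c - d) / 2.0  # vertical detail
    HH = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH
```

    Only the LL subband is passed on to the feature extractor, so the high-frequency subbands, and with them most of the steganographic noise, are discarded before content features are computed.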

    Figure 3: First-level Daubechies wavelet low-frequency components at different vanishing moments

    3.2 Image Clustering Based on Deep Content Features

    To solve the problem of steganalysis performance degradation caused by the content inconsistency between training images and images to be detected, we propose dividing the training images into multiple sub-classes according to the image content, and training a corresponding steganalysis network on each subclass of training images. The complexity of image contents makes it difficult to manually determine in advance which images are close enough to fit into the same class. Therefore, the images are often clustered according to some features, and those with similar features are clustered into the same class. However, traditional image clustering methods often use differences between hand-designed image features to measure whether images are similar. Existing research shows that features extracted by deep neural networks perform better in image content recognition than traditional hand-designed features.

    Motivated by the above view, we cluster the training images based on deep features, as shown in Fig. 4. In the clustering method, the fully connected layers and Softmax are removed from the classical deep neural network VGG16 (see Fig. 5) used in current image content recognition. The remaining backbone is used as an image content feature extractor. The deep image content features are extracted from the low-frequency component LL obtained by performing a first-level Daubechies wavelet decomposition on each training image. Then, a classical clustering algorithm is used to cluster the deep content features extracted from the low-frequency components of the training images into k training image subsets C1, C2, ..., Ck. Each subset of cover images obtained by clustering, together with its corresponding stego images, is then used to train the corresponding steganalysis network. In the detecting phase, the sub-class of the input image is determined by the distance between the deep content features of the input image and the deep content feature center of each sub-class of training images, so that the most appropriate steganography detector is selected.
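    The clustering and the distance-based routing can be sketched with a plain k-means over the deep features. The features are assumed to have already been extracted (e.g. by a VGG16 backbone without its fully connected layers); in practice a library implementation such as scikit-learn's KMeans would typically be used instead of this minimal version:

```python
import numpy as np

def kmeans(feats, k, iters=50, seed=0):
    """Minimal k-means over deep content feature vectors.

    Returns (centers, labels): the k class centers kept for routing at
    detection time, and the sub-class label of each training image.
    """
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest center, then update centers.
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return centers, labels

def route_to_submodel(feat, centers):
    """Pick the steganalysis submodel whose training sub-class center is
    nearest to the test image's content feature."""
    return int(np.linalg.norm(centers - feat, axis=1).argmin())
```

    Keeping only the class centers (rather than all training features) makes detection-time routing a single nearest-center lookup per image.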

    Figure 4: Clustering method of training images

    Figure 5: VGG16 convolutional neural network structure; the star-marked output represents the extracted image content features

    4 Experimental Results and Analysis

    4.1 Datasets and Experimental Environment

    Datasets: The images used in the experiments were generated from three publicly available image libraries: Bossbase 1.01, Bows2, and ALASKA#2. In the following text, unless otherwise specified, Bossbase refers to Bossbase 1.01, Bows refers to Bows2, and ALASKA refers to ALASKA#2. Each of the Bossbase and Bows datasets contains 10,000 grayscale images with a size of 512×512, and the ALASKA dataset contains 80,005 JPEG images with QF 75. Images from the three datasets were stored as grayscale JPEG images with a size of 256×256 and a quality factor of 75. Then, the J-UNIWARD steganography algorithm was used to embed pseudo-random information at payloads of 0.1, 0.2, 0.3, and 0.4 bpnzAC into all three cover image datasets, and 12 sets of corresponding stego images were obtained.

    Model training: Considering the size of the dataset and the convergence of network training, we set the number of image cluster sub-classes to 2 and 4, respectively. After clustering, for each sub-class of training images, EWNet was trained for 90,000 iterations with an initial learning rate of 0.001, adjusted to 0.0001 after the first 50,000 iterations. The GPU used in model training was an NVIDIA GeForce GTX 1080Ti.

    Hyperparameter optimization: The network was trained with the mini-batch stochastic gradient descent (SGD) optimizer Adamax with β1 = 0.91, β2 = 0.999, and ε = 1×10^-8. The batch size was set to 32 (16 cover-stego pairs). The convolutional layers were initialized with a normal-distribution initializer with a standard deviation of 0.01, and 2×10^-4 L2 regularization was used to alleviate overfitting. The convolutional layers were set with no bias. The parameters of the batch normalization layers were learnable with a decay rate of 0.9. The ReLU activation function was used for nonlinear processing.

    4.2 Detection Performance of Submodels

    We tested the performance of the steganalysis submodel trained on each sub-class of images after clustering. First, the 10,000 cover images in Bossbase were equally divided into one group of training cover images and one group of testing cover images. Following the principle of one-to-one correspondence with the cover images, the stego images were also divided into one group of training stego images and one group of testing stego images. Then, the training cover images were clustered into two sub-classes, Bossbase_C0 and Bossbase_C1, by the method proposed in this paper. The number of images in each class is shown in Table 2. Each sub-class of training cover images and its corresponding stego images were used to train the corresponding deep steganalysis models EWNet_BC0 and EWNet_BC1, respectively. Finally, the class centers obtained by clustering were used to classify the test images to determine the proper detection model for steganalysis.

    Table 2: The number of cover images in each sub-class after clustering the 5000 training cover images in the Bossbase dataset

    To compare the steganalysis performance before and after clustering, we randomly selected 3196 cover images from the training cover images to form an image group Bossbase_R0. For each payload, the selected cover images in Bossbase_R0 and the corresponding stego images were used to train the corresponding detection model, which detected the 3142 pairs of test images classified into sub-class Bossbase_C0. Then, 1804 cover images were randomly selected from the training cover images to form an image group Bossbase_R1. For each payload, the selected cover images in Bossbase_R1 and the corresponding stego images were used to train the corresponding detection model, which was used to detect the 1858 pairs of test images classified into sub-class Bossbase_C1. The specific training and test dataset partitioning scheme is shown in Fig. 6.

    Table 3 shows the detection accuracy of the submodel trained on each sub-class of training cover images and the corresponding stego images at each embedding ratio. From the experimental results, the detection accuracy of each steganalysis submodel trained on clustered images is higher than that of the steganalysis model trained on randomly selected images. The improvements are largest at 0.4 bpnzAC. In particular, the accuracy of the steganalysis submodel EWNet_BC1 trained on the cover images in Bossbase_C1 is more than 3% higher than that of the model trained on the cover images in Bossbase_R1. However, the accuracies of the steganalysis submodels trained on the cover images in Bossbase_C0 exceed those of the submodels trained on the cover images in Bossbase_R0 by only a small margin. The reason may be that the images in Bossbase_C0 account for 63.8% of the total training cover images, so there is a lot of overlap between Bossbase_C0 and Bossbase_R0.

    Figure 6: Bossbase dataset experimental data division scheme

    Table 3: The detection accuracy of the submodel trained by each sub-class of images after clustering

    To eliminate the performance difference caused by different numbers of training images, the three cover image datasets were merged into one cover image dataset, referred to as the cover BBA dataset (Bossbase_Bows_ALASKA); the corresponding stego image dataset is referred to as the stego BBA dataset. 90,005 images were randomly selected from BBA as training cover images, and the remaining 10,000 images were used as test cover images. The 90,005 training cover images were first clustered into four sub-classes: BBA_C0, BBA_C1, BBA_C2, and BBA_C3. The number of images in each sub-class is shown in Table 4. 5000 cover images were randomly selected from each subclass, yielding four cover image sets: BBA_C0_5K, BBA_C1_5K, BBA_C2_5K, and BBA_C3_5K. For each payload, the stego images corresponding to the 5000 cover images were selected to form a group of training stego images. After training for 90 epochs, the corresponding steganalysis submodel was obtained. During steganalysis, 5000 test cover images were randomly selected from the 10,000 test cover images. The class centers obtained during clustering were used to classify the selected test cover images and the corresponding test stego images, and the corresponding steganalysis submodels were then determined to detect them. The number of test image pairs classified into each sub-class is shown in Fig. 7.

    To test the performance of the steganalysis submodels before and after clustering, 5000 training cover images were randomly selected from the 90,005 training cover images to form an image set BBA_R_5K. For each payload, the cover images in BBA_R_5K and their corresponding stego images were used to train the corresponding steganalysis submodel, which was used to detect the above 5000 test cover images and their corresponding stego images.

    Table 4: The amount of pre-classified training data in the BBA dataset

    Figure 7: Experimental data partitioning scheme for the same number of training subsets

    Table 5 shows the detection accuracy of the different submodels on the test images at different payloads. The detection accuracy of each steganalysis submodel trained on clustered images is significantly higher than that of the submodel trained on randomly selected images in most cases when the numbers of training images are equal. In particular, when the payload is 0.4, the accuracy of the steganalysis submodel trained with the BBA_C1_5K image set improves by about 7.5%. The accuracy of the steganalysis submodel trained with the BBA_C2_5K image set improves by the smallest margin, nearly 1%. The experimental results show that, with the same number of training samples, this scheme can also overcome the interference caused by differences in image content to a certain extent, especially at high embedding rates. Comparative analysis of the images shows that the contents of the images in BBA_C1_5K are more similar, while those in BBA_C2_5K are more complex, so the improvement for BBA_C1_5K is the most obvious.

    4.3 Detection Performance of the Ensemble Model

    To test the overall steganalysis performance of the proposed method, we trained steganalysis submodels using the four classes of training cover images obtained by clustering in the previous section (BBA_C0, BBA_C1, BBA_C2, and BBA_C3) and their corresponding stego images, and combined the submodels into an ensemble detector, EWNet_Cluster. The 10,000 test cover images and their corresponding test stego images were classified using the class centers obtained by clustering to determine the corresponding detection submodel, which then detected them. To compare steganalysis performance before and after clustering, all training cover images and their corresponding stego images were used directly to train a single steganalysis model, EWNet_All, which was then used to detect the 10,000 test cover images and their corresponding test stego images. We also used the method proposed in [36] to pre-classify the samples: with K likewise set to 4, each submodel was trained to obtain the ensemble model EWNet_Lu. The training and test dataset partitioning scheme is shown in Fig. 8.
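    The ensemble detector's routing logic can be sketched as follows. This is a simplified illustration, not the paper's implementation: the class name and the stand-in callable submodels are hypothetical, and in practice each submodel would be a trained convolutional steganalysis network.

```python
import numpy as np

class EnsembleDetector:
    """Minimal sketch of the clustered ensemble (names are hypothetical).

    submodels: list of k trained detectors, one per subclass; here each is
               any callable mapping a feature vector to a prediction.
    centers:   (k, d) class centers from training-time clustering.
    """
    def __init__(self, submodels, centers):
        assert len(submodels) == len(centers)
        self.submodels = submodels
        self.centers = np.asarray(centers)

    def predict(self, feature):
        # Route the image to the submodel of its nearest class center.
        d2 = ((self.centers - feature) ** 2).sum(axis=1)
        return self.submodels[int(d2.argmin())](feature)

# Toy usage: two stand-in "submodels" that just report which one was chosen
ens = EnsembleDetector(
    submodels=[lambda f: "model-0", lambda f: "model-1"],
    centers=[[0.0, 0.0], [5.0, 5.0]],
)
print(ens.predict(np.array([4.8, 5.1])))  # -> model-1
```

    The design choice here is that routing uses only the content features, so the same clustering model serves both training-set partitioning and test-time dispatch.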

    Table 5:The detection accuracy of the submodels trained by the same number of images selected from the BBA dataset

    Figure 8:Experimental scheme under the same total number of training sets

    Table 6 shows the detection accuracy of the ensemble models at different payloads. The detection accuracy of the ensemble model EWNet_Cluster trained with clustered images significantly exceeds that of both EWNet_All, trained on all data, and EWNet_Lu. In particular, at a payload of 0.4, the detection accuracy of EWNet_Cluster is about 4.84% higher than that of EWNet_All and about 2.56% higher than that of EWNet_Lu. Compared to EWNet_All, the smallest improvement occurs at a payload of 0.2, but it still reaches 1.39%; at the same embedding rate, the accuracy of our scheme is 3.16% higher than that of EWNet_Lu. These results indicate that the ensemble scheme appropriately partitions samples with different contents, allowing each submodel to focus more on the steganographic signal than on the content information of the image.

    To examine the detection accuracy of each submodel in detail, Table 7 reports their accuracies separately. At all embedding rates, the submodels trained with BBA_C0, BBA_C1, and BBA_C3 images achieve higher accuracy than the model EWNet_All trained with all training images. In particular, the submodel trained with BBA_C1 shows the most significant improvement, reaching 8.84% at a payload of 0.4. The submodel trained with only BBA_C2 images has slightly lower detection accuracy than EWNet_All at payloads not larger than 0.3, which can be attributed to its smaller number of training images and more complex image content. The experimental results show that the detection accuracy of the model can be significantly improved at different embedding rates; however, as the embedding rate decreases, the accuracy on BBA_C2 slightly decreases because the textures of this subclass are more complex and therefore more challenging to detect. The scheme remains valid: EWNet_All measures the average accuracy over all classes, and some difficult-to-detect classes will inevitably fall below that average.

    Table 6:The detection accuracy of EWNet_Cluster,EWNet_All and EWNet_Lu trained by BBA dataset

    The above experimental results show that, on the same dataset, the ensemble detector can effectively improve the accuracy of steganalysis. Clustering based on deep content features can, to a certain extent, overcome the poor detection performance caused by the content mismatch between the training images and the detected images, and thus improve detection accuracy.

    5 Conclusion

    In this paper, an image steganalysis method based on deep content features clustering is proposed. The powerful learning ability of convolutional neural networks is used to extract image content features, which are then used to cluster the training images. A dedicated deep steganalysis submodel is thus obtained for each class of training images, and an ensemble detection model is formed by combining all submodels. During detection, the most suitable steganalysis submodel is selected based on the deep content features extracted from the input image, minimizing the data difference between the training images and the input image. Experimental results show that, compared with the model trained on all training images, the proposed method can significantly improve the accuracy of steganalysis based on convolutional neural networks. Compared to training on randomly extracted subsets, in the submodel comparison experiment, performance improves by more than 3% at most on a single subclass of Bossbase, and by up to 7.5% with the same number of training images on the BBA dataset. The overall steganalysis performance improves by up to 4.84% at a payload of 0.4.

    This method is only a first attempt to apply image content clustering based on deep learning features to steganalysis. How to cluster the training images should ultimately be determined by the impact of clustering on steganalysis performance, which is one of the directions to be explored further.

    Acknowledgement:The authors are grateful to the anonymous reviewers for their constructive comments and suggestions.

    Funding Statement:This work is supported by the National Natural Science Foundation of China(Nos.61872448,62172435,62072057),the Science and Technology Research Project of Henan Province in China(No.222102210075).

    Author Contributions: Study conception and design: Chengyu Mo, Fenlin Liu; data collection: Ma Zhu, Gengcong Yan; analysis and interpretation of results: Chengyu Mo, Ma Zhu, Fenlin Liu; draft manuscript preparation: Chengyu Mo, Baojun Qi, Chunfang Yang. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:In the study,we used the Bossbase1.01,Bows2,and ALASKA#2 datasets,which are publicly available and can be accessed via the citation links in the paper.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
