
    Research on extraction and reproduction of deformation camouflage spot based on generative adversarial network model

Defence Technology, 2020, Issue 3

Xin Yang, Wei-dong Xu, Qi Ji, Ling Li, Wan-nian Zhu, Jia-yao Tian, Hao Xu

a National Key Laboratory of Lightning Protection and Electromagnetic Camouflage, Army Engineering University, Nanjing, Jiangsu, 210007, China

    b Teaching and Research Office of Camouflage in Training Center, Army Engineering University, Xuzhou, Jiangsu, 221004, China

Keywords: Deformation camouflage; Generative adversarial network; Spot feature; Shape description

ABSTRACT The method of describing deformation camouflage spots based on feature space has shortcomings such as inaccurate description and difficult reproduction. Relying on the strong fitting ability of the generative adversarial network model, the distribution of deformation camouflage spot patterns can be fitted directly, which simplifies the process of spot extraction and reproduction. The requirements of background spot extraction are analyzed theoretically, the calculation formula limiting the range of image spot pixels is given, and two spot data sets, forestland and snowfield, are established. The spot feature is decomposed into shape, size and color features, and a GAN (Generative Adversarial Network) framework is established. The effects of different loss functions on network training are analyzed in the experiments. In addition, when the input dimension of the generator network is 128, a balance between sample diversity and quality is achieved. The quality of the generated samples is investigated in two respects. Subjectively, the probability of the generated spots being distinguished in the background is counted; the results are all less than 20% and mostly close to zero. Objectively, the shape features of the spots are calculated, and the independent-sample T-test verifies that the features come from the same distribution, with all P-values much higher than 0.05. Both the subjective and objective evaluations show that the spots generated by this method are similar to the background spots. The proposed method can directly generate the desired camouflage pattern spots, which provides a new technical approach for deformation camouflage pattern design and camouflage effect evaluation.

    1. Introduction

Deformation camouflage technology is an important military protection method, which usually coats a designed camouflage pattern on the surface of a moving target in order to reduce the saliency of the target or hide it in the background. The technology is now widely used for military personnel and weapons in various countries. The design of deformation camouflage mainly includes three aspects: the selection of main colors, the design of spot shapes and the configuration between spots. Among them, the spot shapes are required to be consistent with, but not identical to, the spot shapes of the relevant color patches in the background area. Around this principle, scholars have carried out a great deal of research, mainly focusing on two aspects.

On the one hand, feature descriptions are established to extract the distribution of spot features in the background, or conditional distributions are established to evaluate the camouflage effect when the joint distribution is difficult to obtain. The Octopus [1,2] method is an early method for describing spot shape: after determining the center of gravity of the shape, a line segment is extended every 45° until it contacts the shape contour, and the lengths of these segments are combined into a feature vector. The accuracy of this method is too poor to measure the difference between shapes effectively. The chain code [3] method is similar to the Octopus method; although its accuracy is noticeably better, it still cannot measure the difference between shapes. Moment invariants [4] and Fourier descriptors [5] are invariant to rotation, scaling and translation, whereas only translation and rotation invariance are needed to describe the shape features of camouflage spots. Size is one of the most important features of camouflage spots; if the size feature is treated as a separate dimension, consistency is lost in the distance measurement. In addition, color features are actually closely related to spot shape features. The methods described above implicitly assume that color features and spot shape features are independent of each other, which is not consistent with reality. In short, the size of a spot area is also related to its color.

On the other hand, the background spot distribution is established according to the proposed feature description method, and then spot shapes are generated from the distribution function, or existing spot shapes are evaluated against the requirements. Because of the defects in these feature descriptions to varying degrees, it is very difficult to estimate the distribution of high-dimensional features even when the features are given, so literature on spot reproduction is scarce. Ref. [6] gives a template method of digital camouflage to generate pattern spots; in essence, it assumes that the proposed templates already conform to the distribution of background spot shapes.

    This paper aims to solve the problem of extracting and reproducing the features of camouflage spots by means of the strong distribution fitting ability of GAN network. The construction,training and optimization of GAN network are studied for specific problems. This provides a fast and effective way to solve the problem of camouflage spot design.

    2. Related work

    2.1. GAN

Generative Adversarial Network (GAN) was proposed by Goodfellow [7] in 2014 and has developed vigorously since then. In recent years, many scholars have conducted extensive research on the training, evaluation and structural design of the network [8-10]. Different from traditional neural network models, GAN is an unsupervised model consisting of two networks, a generator and a discriminator. The generator is a generative model used to fit the distribution of the data; the discriminator is a discriminative model used to judge the fidelity of the data produced by the generator against the real data. During training, the generator and discriminator are trained alternately to enhance each other's abilities. Practice has shown that GAN is well suited to modeling complex distributions.

GAN adopts a clever training method to realize the estimation of the likelihood function. This training method avoids the repeated application of Markov chain learning mechanisms to the calculation of the partition function, and it requires neither a variational lower bound nor approximate inference, thus greatly improving application efficiency [11]. Let the sample data be x with x ~ Pdata(x), and the random noise be z with z ~ Pnoise(z). The generator network G implements a mapping from z to x. According to relevant research, z drawn from a Gaussian distribution performs better than z drawn from a uniform distribution. The discriminator network D is used to distinguish real samples x from generated samples G(z). During training, on the one hand, D should constantly improve its discrimination ability, that is, maximize the expected value Ex~Pdata(x)(D(x)) and minimize Ez~Pnoise(z)(D(G(z))); on the other hand, D should help train G with its discrimination ability, that is, with D fixed, G is updated to maximize Ez~Pnoise(z)(D(G(z))).
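For reference, the original log-based minimax objective of Goodfellow et al. [7] (the expectation-based description above corresponds to the Wasserstein-style variant discussed in Section 3.2) can be written as

$$\min_G \max_D V(D,G)=\mathbb{E}_{x\sim P_{data}(x)}[\log D(x)]+\mathbb{E}_{z\sim P_{noise}(z)}[\log(1-D(G(z)))].$$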

Because GAN training requires the two networks to play a game against each other, it needs to reach a Nash equilibrium [12]. However, this is difficult to achieve due to various random factors in practice, and problems such as mode collapse, gradient vanishing and gradient explosion often occur. Some scholars have improved GAN, mainly in terms of network structure, loss function, optimization algorithm and training procedure. Arjovsky et al. [13] proposed WGAN, which uses the Wasserstein distance instead of the JS divergence (Jensen-Shannon divergence) [14] as a loss function with better smoothness. Besides, WGAN employs a Lipschitz continuity constraint and adjusts network details such as the activation functions. In theory, these adjustments largely resolve the problem of unstable training. Gulrajani et al. [15] then proposed using a gradient penalty instead of parameter clipping to achieve faster convergence. Alongside these training improvements, variants with different network structures, such as DCGAN, cGAN, BiGAN and InfoGAN, have emerged for specific problem areas [9,16,17]. These networks optimize their structures and improve the loss functions to make training faster, more stable and more robust. In view of the complexity of the camouflage spot description and reproduction problem, the structures and training techniques mentioned above are used for reference in the construction of the network in this paper, and the different parameters are optimized through verification experiments.

    2.2. GAN + camouflage

Due to the advantages of the GAN model, camouflage research combined with GAN models has gradually increased, and scholars have studied various aspects such as texture migration, camouflage design and camouflage detection. Alfimtsev et al. [18] designed a camouflage pattern generation system based on the characteristics of deep neural network recognition systems and human observers; the generated camouflage patterns were tested on the Faster-RCNN Inception V2 and Faster-RCNN ResNet101 recognition systems and achieved good results. Zheng Y.F. et al. [19] designed the DDCN-4C model based on a Dense Deconvolution Network to accurately detect hidden camouflaged people; the method they use is a discriminative model. Zhu J.Y. et al. [20] designed Cycle-GAN, which successfully transferred the texture style of the zebra to the ordinary horse; this work is very instructive for the modeling of camouflage patterns. Yeh et al. [21] studied semantic image inpainting based on the GAN model, and this result can be transferred directly to imitation camouflage design for fixed targets. In fact, for imitation camouflage, models incorporating deep neural networks have matured considerably. For the deformation camouflage applied to moving targets, however, it is very difficult to fit the large-scale background directly with a GAN model due to the complexity of the problem; the network structure and the amount of training required would be very large.

The structure of a neural network usually requires fixed-size image input, while the size of a camouflage spot is one of its important features. Therefore, the camouflage spot is first decomposed into three features: shape s, size m, and color c. In other words, the network G is applied to fit the joint distribution (s, m, c) ~ Pdata(s, m, c). The spot shape is then scaled to a uniform size, which solves the problem of fixed-size network input. Therefore, the whole structure can be optimized alternately with the objective function shown in Eq. (1). It should be noted that this decomposition of camouflage spots is independent when background factors are not considered, but it is not independent given a specific background area; this conjecture is verified in the experiments in Section 4. This decomposition of spot features enables calculations to be implemented quickly and efficiently, and hands the problems of describing the spot shape and establishing the joint distribution over to the generator.
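As an illustration of this decomposition, the following sketch (the function name and the 28 x 28 canvas are assumptions based on Section 4.1, and nearest-neighbour resampling is used because the paper does not specify a method) extracts the shape mask, pixel size and color label of a single connected spot from a clustered label image:

```python
import numpy as np

def decompose_spot(label_img: np.ndarray, spot_mask: np.ndarray, canvas: int = 28):
    """Split one connected spot into (shape s, size m, color c).

    label_img : clustered image; each pixel holds its color-cluster index
    spot_mask : boolean mask of the spot's pixels in label_img
    canvas    : fixed side length the shape is normalised to (28 in Sec. 4.1)
    """
    ys, xs = np.nonzero(spot_mask)
    crop = spot_mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # size feature m: height and width of the spot in pixels
    m = crop.shape  # (h, w)

    # color feature c: dominant cluster index inside the spot
    c = int(np.bincount(label_img[spot_mask]).argmax())

    # shape feature s: binary mask resampled to the fixed canvas (nearest neighbour)
    rows = (np.arange(canvas) * crop.shape[0] / canvas).astype(int)
    cols = (np.arange(canvas) * crop.shape[1] / canvas).astype(int)
    s = crop[np.ix_(rows, cols)].astype(np.float32)
    return s, m, c
```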

    3. Model framework

An overview flow chart of the proposed method for extracting and reproducing deformation camouflage spots is shown in Fig. 1. The spot patterns in the background are obtained by clustering the background image and applying morphological opening operations. Then, according to the imaging parameters at shooting time, the spot patterns are screened by size to establish a data set. Finally, a GAN model is designed to fit the spot features in the data set. The trained model can be used directly to generate deformation camouflage spots.

    3.1. Spot extraction

Currently, the main methods for extracting spot features include color clustering and region growing. The performance of spot shape extraction algorithms is not discussed in detail in this paper. The AFK-MC² algorithm proposed by Olivier Bachem et al. [22] in 2016 is used to extract the background spot shapes. The algorithm improves the proposal distribution to optimize the selection of initial points, as shown in Fig. 2. Here, c1 is the initial sampling point and d(x, c1) is the distance between a point x and c1; this paper uses the Euclidean distance in RGB color space. X represents all the points to be clustered and |X| is the number of points. The first initial cluster center is drawn from the uniform distribution. With the proposal distribution q(x|c1) and Metropolis-Hastings sampling, a convergent sequence is obtained, and the last element of the sequence is taken as the next initial clustering point. Repeating this process yields all k initial cluster centers. The algorithm greatly improves the speed of clustering: in the experiments, when the background area is large, the number of pixels can reach tens of millions, and the algorithm effectively accelerates clustering.
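A minimal sketch of this MCMC-based seeding is given below, assuming the assumption-free proposal distribution and fixed chain length of Bachem et al. [22]; the chain length and helper names are illustrative choices, not values from the paper.

```python
import numpy as np

def afkmc2_seeding(X, k, chain_length=200, seed=None):
    """Sketch of AFK-MC^2 seeding: k initial centers via Metropolis-Hastings.

    X : (n, d) array of points to cluster (here, RGB pixel values)
    """
    rng = np.random.default_rng(seed)
    n = len(X)

    # the first center c1 is drawn uniformly from X
    centers = [X[rng.integers(n)]]

    # proposal q(x | c1): mixture of a d(x, c1)^2 term and a uniform term
    d2_c1 = ((X - centers[0]) ** 2).sum(axis=1)
    q = 0.5 * d2_c1 / d2_c1.sum() + 0.5 / n

    def d2_to_centers(i):
        # squared distance from point i to its nearest current center
        return (((X[i] - np.asarray(centers)) ** 2).sum(axis=1)).min()

    for _ in range(k - 1):
        # run a short Markov chain; its final state becomes the next center
        x = rng.choice(n, p=q)
        dx = d2_to_centers(x)
        for _ in range(chain_length - 1):
            y = rng.choice(n, p=q)
            dy = d2_to_centers(y)
            if rng.random() < dy * q[x] / max(dx * q[y], 1e-12):
                x, dx = y, dy
        centers.append(X[x])
    return np.asarray(centers)
```

The k centers returned here would then initialize a standard k-means pass over the pixel data.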

In the process of spot extraction, spots that are too large or too small need to be discarded to ensure the accuracy of the camouflage design. Since the resolution of the imaging device depends on many factors, the ground resolution L is determined from the angle of view θ of the imaging device, the number of pixels M in a single row of the CCD, and the observation distance R, as shown in Eq. (3):

At present, the analysis of camouflage spot size is usually based on the visual angle threshold under human-eye observation. According to this principle, Yi [23] considers that the design observation distance for ground-equipment deformation camouflage is 800-3000 m, and the corresponding spot size is 0.72-2.70 m. In fact, satellite reconnaissance is the most important means of reconnaissance in current military confrontations, and 10 cm is generally taken in the industry as the reconnaissance resolution of satellites when calculating the relevant parameters. Meanwhile, if the number of pixels is too small, different spot shapes cannot be distinguished, which violates the design principle of camouflage spots; therefore the spot size should be taken as a multiple of the resolution. This paper recalculates it using a factor of 3: at a limiting resolution of 10 cm, the minimum spot size should be 0.3 m. Therefore, according to the proportional conversion of the data given by Yi [23], the spot size should be 0.3-1.13 m. The calculation of the pixel size Ps of a spot in the image is shown in Eq. (4). According to this formula, camouflage spot patterns that are too large or too small for the background can be identified and discarded.
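Since Eqs. (3) and (4) are not reproduced here, the following sketch uses the standard imaging-geometry relations as an assumption: the ground resolution is taken as L = 2R·tan(θ/2)/M metres per pixel, and the pixel size of a spot as its physical size divided by L. With the UAV parameters of Section 4.1 (θ = 94°, M = 5472, R = 100 m), this reproduces the 8-30 pixel range quoted there.

```python
import math

def ground_resolution(theta_deg: float, M: int, R: float) -> float:
    """Ground resolution in metres per pixel (assumed form of Eq. (3)):
    the ground swath 2*R*tan(theta/2) imaged onto M pixels of one CCD row."""
    return 2.0 * R * math.tan(math.radians(theta_deg) / 2.0) / M

def spot_pixel_size(spot_size_m: float, theta_deg: float, M: int, R: float) -> float:
    """Pixel extent P_s of a spot of physical size spot_size_m (assumed form of Eq. (4))."""
    return spot_size_m / ground_resolution(theta_deg, M, R)

# UAV parameters from Section 4.1: 94 deg lens, 5472-pixel rows, 100 m altitude
L = ground_resolution(94, 5472, 100)           # ~0.039 m per pixel
p_min = spot_pixel_size(0.30, 94, 5472, 100)   # ~8 pixels
p_max = spot_pixel_size(1.13, 94, 5472, 100)   # ~29 pixels
print(f"L = {L:.3f} m/pixel, spot pixel range = [{p_min:.0f}, {p_max:.0f}]")
```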

    3.2. GAN architecture

The overall architecture of the GAN is shown in Fig. 2, which includes a generator network and a discriminator network. The input of the generator network is a Gaussian random sample z, and the outputs are the spot shape s, size m and color c. The input of the discriminator network is the output of the generator network, and its output represents the credibility that a sample comes from the real distribution. All the neurons in the network form multi-layer perceptrons. Although convolutional neural networks are generally superior to multi-layer perceptrons in image processing, a fully connected network is adopted here for two reasons. On the one hand, the network has three outputs, and the color and size features cannot be expressed effectively in a convolutional network; on the other hand, after decomposing the spot features, the difficulty of learning the distribution is greatly reduced. Experiments show that the fully connected network fits the data well. The numbers following the word Dense in the block diagram of Fig. 3 represent the numbers of neurons. For the spot shape s, the binary spot image is stretched row by row into a vector. Since the size feature refers to image pixels and the color feature has already been clustered into k classes, both are encoded with the one-hot method [24,25]. The leaky ReLU activation function [24,26] is used in all activation layers of the network; it is an improvement of the ReLU activation function that effectively prevents neurons from "dying" and dropping out of gradient propagation, and its slope parameter is set to 0.2. The last activation of the generator network is the tanh function, which produces regularized data in the [-1, 1] interval. The last layer of the discriminator network is a direct linear function; Arjovsky et al. [13] have shown that this setting accelerates convergence and propagates the gradient rapidly. A batch normalization layer is placed in each block of the generator network, which is beneficial to convergence. In total there are 6 blocks in the generator and 3 blocks in the discriminator, with nearly 38 million parameters to optimize, which is a large training load. However, no prominent problems such as gradient vanishing or gradient explosion appear in actual training, so good convergence can be achieved without using residual modules such as those of ResNet [27].
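The exact layer widths of Fig. 3 are not reproduced in the text, so the PyTorch sketch below uses hypothetical widths; it only mirrors the stated design choices: fully connected generator blocks of Dense, batch normalization and LeakyReLU(0.2), a tanh output over the 855-dimensional feature vector, and a discriminator that ends in a plain linear layer.

```python
import torch
import torch.nn as nn

Z_DIM, OUT_DIM = 128, 784 + 66 + 5    # shape + size + color dimensions (Sec. 4.1)

def g_block(n_in: int, n_out: int) -> nn.Sequential:
    # one generator block: Dense -> BatchNorm -> LeakyReLU(0.2)
    return nn.Sequential(nn.Linear(n_in, n_out),
                         nn.BatchNorm1d(n_out),
                         nn.LeakyReLU(0.2))

# hypothetical widths; the paper only states 6 generator blocks and 3 discriminator blocks
generator = nn.Sequential(
    g_block(Z_DIM, 256), g_block(256, 512), g_block(512, 1024),
    g_block(1024, 2048), g_block(2048, 2048), g_block(2048, 1024),
    nn.Linear(1024, OUT_DIM), nn.Tanh())          # tanh keeps outputs in [-1, 1]

discriminator = nn.Sequential(
    nn.Linear(OUT_DIM, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1))                            # linear output, no sigmoid

z = torch.randn(128, Z_DIM)        # batch of Gaussian noise
fake = generator(z)                # (128, 855) generated feature vectors
score = discriminator(fake)        # (128, 1) credibility scores
```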

    Fig.1. Overview of deformed camouflage spot extraction and reproduction model.

    Fig. 2. Network model structural diagram.

    Fig. 3. Proposed model architecture.

The loss functions are defined according to the Wasserstein distance and the mean-square-error distance, and their performance is compared in the experiments. The corresponding optimization equations are shown in Eq. (5) and Eq. (6), respectively. Among them, the loss function L1, based on the Wasserstein distance, needs to satisfy the Lipschitz continuity constraint; in practice, this condition is satisfied by limiting the parameter values to wi ∈ [-0.01, 0.01].
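Since Eqs. (5) and (6) are not reproduced here, the following sketch shows the two variants in their standard forms as an assumption: a Wasserstein critic loss with weight clipping to [-0.01, 0.01], and a least-squares (mean-square-error) loss against real/fake targets.

```python
import torch

CLIP = 0.01   # Lipschitz constraint via weight clipping, w_i in [-0.01, 0.01]

def wasserstein_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # critic separates real and generated scores (assumed form of Eq. (5))
    return d_fake.mean() - d_real.mean()

def wasserstein_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    return -d_fake.mean()

def mse_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # mean-square-error variant (assumed form of Eq. (6)): real -> 1, fake -> 0
    return ((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean()

def mse_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    return ((d_fake - 1.0) ** 2).mean()

def clip_weights(discriminator: torch.nn.Module) -> None:
    # enforce the parameter range after every discriminator update
    with torch.no_grad():
        for p in discriminator.parameters():
            p.clamp_(-CLIP, CLIP)
```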

The RMSprop [28] algorithm is used to optimize the parameters of the model in the experiments. The number of iterations is set to 10,000, the batch of training data randomly selected from the samples at each step is 128, and, to keep the optimization stable, the learning rate is set to 0.00005. During training, the discriminator is first optimized once; its parameters are then fixed, the generator and discriminator are combined into one network and optimized once, and this alternation is repeated until the network converges.
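A minimal sketch of this alternating schedule with the stated hyper-parameters (RMSprop, learning rate 0.00005, batch size 128, 10,000 iterations); the generator, discriminator and loss functions are the hypothetical ones from the earlier sketches, and sample_batch is a placeholder data loader.

```python
import torch

opt_d = torch.optim.RMSprop(discriminator.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

for step in range(10_000):
    real = sample_batch(dataset, batch_size=128)      # placeholder: 128 real 855-dim vectors
    z = torch.randn(128, Z_DIM)

    # 1) one discriminator (critic) update with the generator fixed
    d_loss = wasserstein_d_loss(discriminator(real),
                                discriminator(generator(z).detach()))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    clip_weights(discriminator)                       # keep w_i in [-0.01, 0.01]

    # 2) one generator update through the fixed discriminator
    g_loss = wasserstein_g_loss(discriminator(generator(torch.randn(128, Z_DIM))))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```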

    3.3. Evaluation method

At present, methods for evaluating the performance of GAN results include average log-likelihood, kernel density estimation and sample fidelity [29]. However, because GAN is a generative model, the existing methods cannot be applied uniformly to all kinds of models. This paper evaluates the stability, generalization performance and image quality of the model from quantitative and qualitative aspects. On one hand, the generated spot patterns are placed in the background and a professional observer judges whether they are "prominent", giving an overall quality evaluation. On the other hand, five features of the background spot shapes, rectangularity, dispersion, eccentricity, roundness and second-order moment [30], are sampled in feature space, and the independent-sample T-test is used to test for a significant difference between the real and generated data. The stability of the model can be judged from the difference between the generator and discriminator losses during training; in fact, little information can be obtained from the absolute loss values of a GAN, but the comparison between the two modules basically reflects the convergence trend of the model.
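A sketch of the objective check is given below; the feature definitions follow common conventions and skimage's regionprops rather than necessarily the exact formulas of Ref. [30], and only three of the five features are shown.

```python
import numpy as np
from scipy.stats import ttest_ind
from skimage.measure import label, regionprops

def shape_features(mask: np.ndarray) -> np.ndarray:
    """Rectangularity, eccentricity and roundness of one binary spot mask."""
    props = regionprops(label(mask.astype(int)))[0]
    rectangularity = props.extent                              # area / bounding-box area
    eccentricity = props.eccentricity
    roundness = 4 * np.pi * props.area / max(props.perimeter, 1e-6) ** 2
    return np.array([rectangularity, eccentricity, roundness])

def compare_distributions(real_masks, generated_masks):
    """Independent-sample T-test per feature; p >> 0.05 suggests the same distribution."""
    real = np.array([shape_features(m) for m in real_masks])
    fake = np.array([shape_features(m) for m in generated_masks])
    for i, name in enumerate(["rectangularity", "eccentricity", "roundness"]):
        t, p = ttest_ind(real[:, i], fake[:, i])
        print(f"{name}: t = {t:.3f}, p = {p:.3f}")
```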

    4. Experiments and discussion

    4.1. Data set establishment

Currently, there are few open data sets related to camouflage spots, so relevant data sets have to be established. A wide range of background image data can be obtained by imaging the background area with a UAV (Unmanned Aerial Vehicle). The UAV's flight altitude is 100 m and the lens angle is 94°. The data are processed according to the calculation method in Section 3.1. Firstly, the image pixels are clustered into five categories. Since some noisy points remain in the clustered images, a morphological opening operation is applied to each class in turn. According to Eq. (4), the pixel size of a spot shape should be roughly between 8 and 30. After screening out the qualified spots, the size and color features are extracted and encoded. The size data are one-hot encoded [24,25] by length and width, giving 66 dimensions in total; there are five color classes, giving a five-dimensional vector; and the flattened shape data give 28 x 28 = 784 dimensions. The total size of the data feature space is therefore 855 dimensions.
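A sketch of how one spot could be packed into this 855-dimensional vector (the split of the 66 size dimensions into two 33-way one-hot codes for height and width is an assumption; the text only gives the total):

```python
import numpy as np

N_SIZE_BINS = 33   # assumed: 33 bins each for height and width -> 66 dimensions
N_COLORS = 5       # five color clusters (Sec. 4.1)

def one_hot(index: int, length: int) -> np.ndarray:
    v = np.zeros(length, dtype=np.float32)
    v[index] = 1.0
    return v

def encode_spot(shape_28x28: np.ndarray, height: int, width: int, color: int) -> np.ndarray:
    """Pack shape (784), size (66) and color (5) into one 855-dim training vector."""
    s = shape_28x28.reshape(-1).astype(np.float32) * 2.0 - 1.0   # binary mask scaled to [-1, 1]
    m = np.concatenate([one_hot(height, N_SIZE_BINS),
                        one_hot(width, N_SIZE_BINS)])
    c = one_hot(color, N_COLORS)
    return np.concatenate([s, m, c])                             # 784 + 66 + 5 = 855
```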

    Fig. 4. Original images and clustered images in the process of data set establishment; (a) Original image of forestland background; (b) Clustered image of forestland; (c) Original image of snowfield background; (d) Clustered image of snowfield.

    Fig. 5. Original spot feature extraction to get shape, color and size features. (a) Spot from a forestland background. (b) Spot from a snowfield background.

Two databases, forestland background and snowfield background, were photographed and established. A randomly selected original image and its clustered image are shown in Fig. 4, and Fig. 5 shows the shape, color and size characteristics of spots extracted from the clustered images. To shorten the calculation time, only background data covering approximately 1 square kilometre are used for the demonstration. A total of 22 images were acquired, each with a resolution of 5472 pixel x 3078 pixel, i.e. approximately 16 million pixels per image, and an actual ground area of 0.048 km² per image. After spot clustering and filtering, the forestland data set contains 10,234 samples and the snowfield data set contains 12,541 samples.

    Fig. 6. Comparison of training conditions of two different loss functions; (a) Loss function of mean square error; (b) Loss function of Wasserstein function.

    Fig. 7. Training results and processing; (a) Data set and training results of forestland; (b) Data set and training results of snowfield.

    4.2. Training details and parameter optimization

    Table 1 shows the computer software and hardware environment used in the model training process.

In order to give full play to the advantages of the generative model and improve its generalization ability, the effects of the loss functions defined by the Wasserstein distance and the mean-square-error distance on the stability of training were investigated on the forestland data set. As shown in Fig. 6, the horizontal axis is the number of training iterations and the vertical axis is the value of the loss function. Fig. 6(a) shows that, under the mean-square-error loss, G Loss is higher than D Loss and the network almost converges when the number of iterations approaches 2500. However, as training continues, G Loss gradually increases while D Loss gradually decreases, the network becomes unbalanced and the oscillation amplitude increases slightly; in terms of the quality of the generated images, the patterns become more and more monotonous. Fig. 6(b) shows that, under the Wasserstein loss, the initial oscillation amplitudes of both networks are very large but stabilize quickly, and the network is basically stable after about 2000 iterations. After stabilization, the loss values show no further trend change with subsequent training, and the oscillation amplitude of both networks' losses decreases throughout; the generated images have good diversity and high definition. This shows that the Wasserstein-distance loss is better overall than the mean-square-error loss.

Fig. 7 compares the training results on the two data sets with the data in the original training sets. Since the quality of the color and size outputs cannot be judged visually, the results are discussed by observing the shape characteristics. The images in Fig. 7 represent the spot shape features; m and c below each image are the spot size feature (the number of pixels in length and width) and the color feature (the color code after clustering). The first column of Fig. 7 is the generated data, whose spot images are continuous values produced by the tanh function, so their edges are somewhat blurred. In practice, however, camouflage spots are observed from a long distance, and slight edge blurring has little effect on the overall appearance; in applications, the image can be binarized by threshold segmentation and simple morphological processing. The size and color features of the generated data are obtained by taking the index of the maximum neuron output. The second column in Fig. 7 is the result of simple morphological processing of the data in the first column, and the third column is the restored spot (the image is scaled proportionally for a more intuitive display). Overall, the algorithm works well: the generated camouflage spots are highly similar to the spot patterns in the background, and the trained network can generate spot patterns quickly.
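A sketch of this post-processing of one generated 855-dimensional vector (the threshold value and the structuring element are assumptions):

```python
import numpy as np
from scipy.ndimage import binary_opening

def decode_spot(g_out: np.ndarray):
    """Split a generated 855-dim vector back into a binary shape, size and color."""
    s, m, c = g_out[:784], g_out[784:850], g_out[850:]

    # tanh output lies in [-1, 1]: threshold at 0, then clean the edges by opening
    shape = s.reshape(28, 28) > 0.0
    shape = binary_opening(shape, structure=np.ones((2, 2)))

    # size and color are read off as the arg-max of their one-hot segments
    height, width = int(np.argmax(m[:33])), int(np.argmax(m[33:]))
    color = int(np.argmax(c))
    return shape, (height, width), color
```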

Fig. 8. Effect images of generated spots in the background; the spots indicated by the red circles are generated by the model. (a) Effect image of generated spots for the forestland data set; (b) Effect image of generated spots for the snowfield data set.

    Table 1 Experimental environment.

The effect of the dimension of the random input to the generator on the generated results is investigated. Experimental results show that the lower the dimension, the worse the diversity of the generated data and the closer the model is to mode collapse, although the image quality improves slightly. As the dimension increases, the diversity of the data increases but the quality decreases significantly. A dimension of 128 finally gives the best trade-off.

    4.3. Analysis of subjective and objective effects

Observation experiments were designed and the overall effect of the generated spots was obtained statistically. The generated spot patterns were restored and then placed randomly, at equal scale, in the clustered background, and the subjective impressions of 30 observers were counted, as shown in Fig. 8; the spots indicated by the red circles in the figure are generated by the model. Observers could freely zoom in and out of the background image with unlimited observation time and were asked to point out possible spots. The number of times each spot was found and the number of times it was shown were recorded. The statistics show that the discovery probability of all spots is less than 20%; the higher detection probabilities arise when the randomly chosen placement location is more conspicuous, and for most spots the probability of being discovered is close to zero. The result on the snowfield data set is worse than on the forestland data set, with a generally higher probability of discovery. This is due to the poorer clustering of the snowfield data, whose spots are generally small and mostly removed during pre-processing. Generally speaking, the generated spots are subjectively consistent with the background spots and can meet the camouflage effect requirements.

In order to analyze more objectively whether the data generated by the generator network and the real samples come from the same distribution, samples with a capacity of 50 are randomly selected and the five features of spot shape are calculated: rectangularity, dispersion, eccentricity, roundness and second-order moment. The independent-sample T-test in the SPSS analysis software is used to test whether the two data sets come from the same distribution. The test results for the forestland and snowfield data sets are shown in Table 2 and Table 3, respectively. Both data sets show strong homogeneity of variance between the generated data and the sample data in feature space. The results of the independent-sample T-test indicate that the probability that the two come from the same distribution is very high, with P-values much higher than 0.05. It is worth pointing out that the standard deviation of most of the generated data is slightly lower than that of the sample data, which indicates to some extent that the generated data are slightly more concentrated and their diversity is slightly worse than that of the original data.

    5. Conclusions

In this paper, the extraction of deformation camouflage spots and the problem of spot reproduction based on the GAN model are studied. Firstly, the problems of existing methods for describing and reproducing camouflage spots are analyzed. Subsequently, the method of extracting background camouflage spot features is proposed, and the formula for determining the spot size range from the imaging parameters is given. The spot feature is decomposed into shape, color and size features, and the forestland and snowfield spot feature data sets are created. The GAN framework is established, and training and experiments are carried out. Firstly, the effects of the mean-square-error loss and the Wasserstein-distance loss on network convergence are analyzed; the loss curves and generated data show that the Wasserstein-distance loss is better overall. Secondly, the effect of the generator input dimension on the generated images is studied, and 128 is determined to be the best input dimension. Then the subjective and objective quality of the spots is investigated. Subjectively, the generated spots are restored and placed in the background, and the probability that 30 observers can detect them is counted; the results show that the probability is lower than 20% and close to 0 for most spots. Objectively, samples with a capacity of 50 are randomly extracted and five features of spot shape, rectangularity, dispersion, eccentricity, roundness and second-order moment, are calculated; the independent-sample T-test is used to test whether the two data sets come from the same distribution. The results show that the probability of sampling from the same distribution in feature space is very high, with P-values much higher than 0.05.

    Table 2 Independent sample T-test results of spot shape features in forestland data set.

    Table 3 Independent sample T-test results of spot shape features in snowfield data set.

The method proposed in this paper overcomes the previous need to describe spots in a feature space and can fit the distribution of spot patterns directly. It can quickly and effectively extract and generate camouflage spots, which provides a new solution for the design and effect evaluation of camouflage patterns. However, some detailed problems remain, such as the limited accuracy of the generated spots and the poor clustering of the snowfield background. The trained discriminator network can be employed in evaluation work, so how to strengthen the training of the discriminator is also a focus of future work.

    Acknowledgments

    This research was funded by Natural Science Foundation of Jiangsu Province, grant number BK20180579.
