
    Enhancing the Robustness of Visual Object Tracking via Style Transfer

    2022-11-09 08:15:14
    Computers Materials & Continua, 2022, Issue 1

    Abdollah Amirkhani, Amir Hossein Barshooi and Amir Ebrahimi

    1 School of Automotive Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran

    2 School of Electrical Engineering and Computing, The University of Newcastle, Callaghan, 2308, Australia

    Abstract: The performance and accuracy of computer vision systems are affected by noise in different forms. Although numerous solutions and algorithms have been presented for dealing with every type of noise, a comprehensive technique that can cover all the diverse noises and mitigate their damaging effects on the performance and precision of various systems is still missing. In this paper, we have focused on the stability and robustness of one computer vision branch (i.e., visual object tracking). We have demonstrated that, without imposing a heavy computational load on a model or changing its algorithms, the drop in the performance and accuracy of a system exposed to an unseen noise-laden test dataset can be prevented by simply applying the style transfer technique to the training dataset and training the model with a combination of the stylized and the original data. To verify our proposed approach, it is applied to a generic object tracker using regression networks. This method's validity is confirmed by testing it on an exclusive benchmark comprising 50 image sequences, with each sequence containing 15 types of noise at five different intensity levels. The OPE curves obtained show a 40% increase in the robustness of the proposed object tracker against noise, compared to the other trackers considered.

    Keywords: Style transfer; visual object tracking; robustness; corruption

    1 Introduction

    Visual object tracking (VOT), which is a subset of computer vision systems, refers to the process of examining a region of an image in order to detect one/several targets and to estimate its/their positions in subsequent frames [1]. Computer vision includes other sub-branches such as object detection [2], classification [3], optical-flow computation [4], and segmentation [5]. Because of its greater challenges and more versatile applications, further attention has been paid to the subject of VOT, and it has become one of the main branches of computer vision, especially in the last two decades [6].

    The applications of VOT in the real world can be classified into several categories, including surveillance and security [7,8], autonomous vehicles [9], human-computer interaction [10], robotics [11], traffic monitoring [12], video indexing [13], and vehicle navigation [14,15].

    The VOT procedure is implemented in four steps: i) target initialization, ii) appearance model, iii) motion prediction, and iv) target positioning [15]. In the target initialization step, the object/objects we intend to track is/are usually specified by one/several bounding boxes in the first frame. The appearance model itself comprises the two steps of visual representation (used in the construction of robust object descriptors with the help of various visual features) and statistical modeling (employed in the construction of mathematical models by means of statistical learning techniques) for the detection of objects in image frames [16,17]. The target positions in other frames are estimated in the motion prediction step. The ultimate position of a target is determined in the final step by different search methods such as greedy search [18] or by maximum posterior prediction techniques [19].
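    As a toy illustration of these four steps (every helper below is a simplified stand-in, not the tracker used in this paper), the following sketch follows a bright patch through a synthetic sequence:

```python
import numpy as np

# Toy illustration of the four VOT steps on a synthetic sequence in which
# a bright 3x3 patch moves one pixel to the right per frame. Every helper
# here is a simplified stand-in, not the tracker used in this paper.

def make_frame(cx, cy, size=20):
    frame = np.zeros((size, size))
    frame[cy - 1:cy + 2, cx - 1:cx + 2] = 1.0   # the "object": a bright patch
    return frame

def track(frames, init_center, radius=2):
    center = init_center                 # i) target initialization (first frame)
    template = np.ones((3, 3))           # ii) appearance model (fixed patch descriptor)
    path = [center]
    for frame in frames[1:]:
        cx, cy = center                  # iii) motion prediction: search only
        best, best_score = center, -1.0  #      near the previous position
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                x, y = cx + dx, cy + dy
                patch = frame[y - 1:y + 2, x - 1:x + 2]
                score = float((patch * template).sum())
                if score > best_score:
                    best, best_score = (x, y), score
        center = best                    # iv) target positioning
        path.append(center)
    return path

frames = [make_frame(5 + t, 10) for t in range(5)]
print(track(frames, (5, 10)))            # the path follows the moving patch
```

    A real tracker replaces the fixed template and exhaustive local search with learned descriptors and a motion model, but the control flow is the same.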

    In any computer vision application, a correct and precise object tracking operation can be achieved by feeding clean data and images to a system; image corruptions in any form can lead to a drop in system performance and robustness. For example, the presence of atmospheric haze can diminish the performance and accuracy of autonomous vehicles and surveillance systems. Mehra et al. [20] showed that the presence of haze or any type of suspended particles in the atmosphere has an adverse, snow-like noise effect on an image, degrading its brightness, contrast and texture features. Also, these suspended particles may sometimes alter the foreground and background of images and cause the failure of any type of computer vision task (e.g., VOT). In another research work, the retrieval of lost information in LIDAR images acquired by autonomous vehicles in snowy and rainy conditions has been investigated. Principal component analysis has been used to improve the obtained images [21].

    Other factors also influence the VOT robustness, such as the quality of camera sensors, requirements for real-time processing, noise, loss of information during the transfer from 3D to 2D space, and environmental changes. Several factors could cause the environmental fluctuations themselves, e.g., the presence of occlusions, illumination problems, deformations, camera rotation, and other external disturbances [15]. In VOT, the occlusions can occur in three forms: self-occlusion, inter-object occlusion, and occlusion by the background; and for each of these occlusions, four different intensity levels are considered: non-occlusion, partial occlusion, full occlusion, and long-term full occlusion [22].

    Modeling an object's motion by means of linear and nonlinear dynamic models is one way of dealing with occlusion in object tracking. Such models can be used to predict the motion of an object from the moment of its occlusion to its reemergence. Other methods such as silhouette projections, color histograms, and optical flow techniques have also been employed for removing the occlusions and boosting the robustness of object trackers [22]. Liu et al. [23] presented a robust technique for detecting traffic signs. They claimed that all the traffic signs with occlusion of less than 50% could be identified by their proposed method. In another study [24], the occlusion problem was solved by using particle swarm optimization as a tracker and combining it with a Kalman filter.

    In this paper, we have proposed a new method for increasing the robustness and preventing the performance drop of object trackers under different ambient conditions. The presented method can be applied to various types of trackers and detectors, and it does not impose a heavy computational load on a system. To substantiate our claim, we have implemented our approach on a visual tracker known as generic object tracking using regression networks. The main challenge we confronted was the lack of a specific benchmark for evaluating the proposed model and comparing it with other existing algorithms. To deal with this deficiency, we tried to create a benchmark that included most of the existing noises. The main contributions of this work can be summarized as follows:

    · Building new training data from previous data through style transfer and combining them.

    · Modeling and classifying 15 different types of noises in four groups with five different intensity levels and applying them to the benchmark.

    · Applying the proposed method on one of the existing object trackers and comparing the obtained results with those of the other trackers.

    It should be mentioned that the presented technique can be applied to multi-object trackers as well. In the rest of this paper, Section 2 reviews the research activities conducted to improve image quality and suppress the adverse effects of noise on visual tracker performance. The proposed methodology is fully explained in Section 3. In Section 4, the obtained results are given and compared with those of other techniques. Finally, the conclusions and future work are covered in Section 5.

    2 Common Methods of Maintaining Robustness in Object Trackers

    Image enhancement and image restoration are usually known as image denoising, deblocking and deblurring [25]. Yu et al. [25] have defined the image enhancement and restoration process as follows:

    “A procedure that attempts to improve the image quality by removing the degradation while preserving the underlying image characteristics.”

    The works conducted on the subject of robustness in object trackers can generally be divided into two categories: i) denoising techniques and ii) using deep networks. The denoising techniques inflict a high computational cost. Meanwhile, the low speed of deep networks in updating the weights has become a serious hurdle in the extensive use of these networks in visual tracking [26]. Each of these methods is explained in the following subsections.

    2.1 Denoising Techniques

    The first and simplest method for improving a system's accuracy and performance against noisy data is to use a denoising or image restoration technique. In this approach, before feeding the data to the system, different filters and algorithms are used to remove the noise from a corrupted image and to keep the edges and other details of the image intact as much as possible. Some of the more famous of these techniques in the last decade are the Markov random field [27], block-matching and 3D filtering (BM3D) [28], decision-based median filter (DBMF) [29], incremental multiple principal component analysis [30], histogram of oriented gradients [31], local binary pattern human detector [32], co-tracking with the help of support vector machine (SVM) [33], and the nonlocal self-similarity [34] methods [14]. For example, the BM3D filtering technique has been employed in [28] for image denoising, using the redundant information of images.
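    A minimal sketch of such a denoising step (the image, noise rate, and window size below are illustrative choices, not tied to any cited method): a median filter suppressing impulse ("salt and pepper") noise.

```python
import numpy as np
from scipy.ndimage import median_filter

# A minimal denoising sketch: a 3x3 median filter suppressing impulse
# ("salt and pepper") noise. The image and noise rate are illustrative.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05                 # corrupt ~5% of pixels
noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))

denoised = median_filter(noisy, size=3)               # sliding-window median

mae_noisy = float(np.abs(noisy - clean).mean())
mae_denoised = float(np.abs(denoised - clean).mean())
print(mae_noisy, mae_denoised)                        # the filter lowers the error
```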

    The standard image processing filters have many problems. For example, the median filter acts on all image pixels and restores them without paying attention to the presence or absence of noise. To deal with this drawback, fuzzy smart filters have been developed. These filters have been designed to act more intensely on the noisy regions of images and overlook the regions with no noise. Fuzzy logic was used in [35] for the first time to improve the quality of color images, remove the impulsive noises, and preserve the image details and edges. Earlier, Yang et al. [36] had employed heuristic fuzzy rules to enhance the performance of multilevel median filters. Despite the mentioned advantages of the fuzzy smart filters, they have two fundamental flaws:

    · New image corruptions: The mentioned techniques cause new corruptions in the processed images in proportion to the noise intensity levels. For example, in applying the median filter, the edges in the improved images are displaced in proportion to the window size. As another example, in image denoising with the help of a diffusion filter, the image details, especially in images with high noise intensities, fade considerably.

    · Application-based: The mentioned filters cannot be applied to any type of noise. For example, it was demonstrated in [37] that the Wiener filter performs better on speckle, Poisson, and Gaussian noises than the mean and the median filters.
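    The noise-dependence noted in [37] can be sketched as follows; the image, noise level, and window sizes are illustrative assumptions, and the snippet only shows that both filters reduce additive Gaussian noise, not a definitive ranking between them.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter

# Additive Gaussian noise on a smooth image, filtered by an adaptive Wiener
# filter and by a median filter. Image, noise level, and window sizes are
# illustrative assumptions.
rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth gradient
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

wiener_out = wiener(noisy, mysize=5)
median_out = median_filter(noisy, size=5)

def mse(img):
    return float(((img - clean) ** 2).mean())

print(mse(noisy), mse(wiener_out), mse(median_out))   # both filters reduce the MSE
```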

    The denoising techniques improved very little during the last decade. The denoising algorithms were believed to have reached their optimal performance, which could not be further improved [38]. It was about this time that the emergence of machine learning techniques opened a new door to image quality improvement and denoising.

    2.2 Learning-Based Methods

    The first convolutional neural network (CNN), called LeNet, was presented by LeCun et al. [39] to deal with large data sets and complex inference-based operations. Later on, and since the development of AlexNet, CNNs have turned into one of the most common and successful deep learning networks for image processing. Jain et al. [40] have claimed that using CNNs to denoise natural images is more effective than using other image processing techniques such as the Markov random field. For face recognition in noisy images, Meng et al. [41] have proposed a deep CNN consisting of denoising and recognition sub-networks. Contrary to the classic methods, in which the two mentioned sub-networks are trained independently, these two have been trained as a sequence in the above-mentioned work.

    Using a CNN and training it without a pre-learned pattern requires a large training dataset. Moreover, even if such data are available, it would take a long time (tens of minutes) to train a network and reach the desired accuracy [26]. Considering this matter, a CNN-based object tracker consisting of four layers (two convolutional layers and two fully-connected layers) was presented in [26]. This tracker has been proposed by adding a robust sampling mechanism in mini-batches and modifying the stochastic gradient descent to update the parameters, significantly boosting the execution speed and robustness during training.

    The learning-based methods require a prior image. Despite the simplicity of these techniques, they have two drawbacks:

    · Extra computational cost: imposed on a system due to the optimization of these techniques.

    · Manual adjustment: required because of the non-convexity of these techniques and the need to enhance their performance.

    To deal with these two issues, discriminative learning methods were proposed. Using a discriminative learning approach, Bhat et al. [42] presented an offline object tracking architecture based on the target model prediction network, predicting a model in just a few optimization steps. Despite all the approaches presented so far, the problem of dependency on prior data still remains. Some researchers have tackled this problem with the help of correlation filters. Using correlation filters in object tracking techniques to improve performance and accuracy is common; this has led to two classes of object trackers: correlation filter-based trackers (CFTs) and non-correlation filter-based trackers (NCFTs). A novel method of vehicle detection and tracking based on the Yolov3 architecture has been presented in [43]. The researchers have used a vision and image quality improvement technique in this work, which includes three steps: illumination enhancement, reflection component enhancement, and linear weighted fusion. In another study, and based on a sparse collaborative model, Zhong et al. [44] presented a robust object tracking algorithm that simultaneously exploits holistic templates and local representations to analyze severe appearance changes.

    Another method for preventing accuracy loss when using corrupted data is to import these data directly into a training set. Zhao et al. [45] focused on blurred images as a particular case of data corruption. They showed that a low accuracy is obtained by evaluating the final model on blurred images, even by using deeper or wider networks. To rectify this problem, they tried to fine-tune the model on a combination of clear and blurred images in order to improve its performance. A review of the different techniques used for enhancing the robustness of object trackers has been presented in Fig. 1.

    Figure 1: A review of the different techniques presented for boosting the robustness of object trackers

    3 Methodology

    In this section, we will describe the proposed procedure in full detail. At first, we need to introduce the object tracker that will be used in this work. After selecting a tracker type, the process will be divided into several subsections, which will then be applied in sequence to the model considered.

    Our methodology comprises three basic steps. In the first step, we train our network model with a set of initial data, evaluate it on an OTB benchmark, and compare it with other trackers. In the second step, we apply the modeled noises to the benchmark and again evaluate the model on the noisy benchmark. In the third step, we obtain the style transfer of every single training dataset, train the model with a combination of clean and stylized data, apply the trained model to the benchmark of the preceding step, and report the results.

    3.1 Selecting an Object Tracker

    In the early part of 2016, Held et al. [46] demonstrated that generic object trackers could be trained in real-time by observing objects' motion in offline videos. In this regard, they presented their proposed model known as generic object tracking using regression networks (GOTURN). They also claimed this tracker to be the first neural network tracker able to complete the learning process at a speed of about 100 frames per second (100 fps). Thus, we decided to implement our method on this tracker and compare the results before and after applying the changes. It should be mentioned that the method presented in this paper can be applied to all the object trackers and detectors that might be affected by various noises. Fig. 2 shows the performances of two of the most common object trackers (the GOTURN and the SiamMask [47]) in the presence of snow noise. Here, we applied the said noise at five different intensity levels on a dataset consisting of 70 image frames and evaluated these two trackers' performances on the noisy images. The figure includes only 18 sample frames (frame numbers 0, 1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49, 53, 60, 65 and 69, starting from top left). As is observed in the figure, the GOTURN tracker fails in frame 30, at the noise intensity level of 3, and the SiamMask tracker fails in frame 52, at the noise intensity of 4. Although the SiamMask tracker shows more robustness than the GOTURN tracker, the tracking operation in both trackers is hampered at different noise intensity levels.

    Figure 2: The performances of the GOTURN and the SiamMask trackers on a noisy dataset

    3.2 Training/Testing with Clean Data

    In this paper, we trained our network with a combination of common images and films. Also, to minimize the error between the predicted bounding box and the ground-truth bounding box, we used the L1 loss function.
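    The L1 loss named above, written out for (x1, y1, x2, y2) bounding boxes; averaging over the four coordinates is an assumption here, and a summed variant differs only by a constant factor.

```python
import numpy as np

# L1 loss between a predicted bounding box and the ground truth,
# both given as (x1, y1, x2, y2). Mean over coordinates is assumed.
def l1_loss(pred_box, gt_box):
    return float(np.abs(np.asarray(pred_box, float) - np.asarray(gt_box, float)).mean())

print(l1_loss([10, 20, 50, 60], [12, 18, 50, 64]))  # (2 + 2 + 0 + 4) / 4 = 2.0
```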

    The film set: This set contains 314 video sequences, each of which has been extracted from the ALOV300++ dataset [48]. On average, every 5th frame of each video sequence was labeled according to the position of the object to be tracked, and an annotation file was produced for these frames. The film set was then split into two portions: 20% as the test data and 80% as the training data.
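    The 80/20 split can be sketched as follows; the seeded shuffle is an assumption, since the paper does not state its exact split procedure.

```python
import random

# Splitting the 314 ALOV300++ sequences into 80% train / 20% test.
# The shuffle and the seed are illustrative assumptions.
sequences = [f"seq_{i:03d}" for i in range(314)]   # 314 ALOV300++ sequences
rng = random.Random(42)
rng.shuffle(sequences)
cut = int(0.8 * len(sequences))                    # 251 train / 63 test
train_set, test_set = sequences[:cut], sequences[cut:]
print(len(train_set), len(test_set))
```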

    The image set: The first set of images has been taken from the ImageNet detection challenge set, which contains 478,807 objects with labeled bounding boxes. The second set of images has been adopted from the common objects in context (COCO) set [49]. This dataset includes 330,000 images in 81 different object categories. More than 200,000 of these images have been annotated, and they cover almost 2 million instances.

    3.3 Model Evaluation with Corrupted Data

    Most of the benchmarks presented in the literature include either clean data or only specific noises such as the Gaussian noise, while in the real world, our vision is affected by noises of different types and intensities. We needed a noisy benchmark for this work, so we decided to build our own custom benchmark. Note that the mentioned benchmark will only be employed to evaluate the system robustness against different types of noises, and it will never be used to train the proposed object tracker.

    In 2019, Hendrycks et al. [50] introduced a set of 15 image corruptions with five different intensities (a total of 75 corruptions). They used it to evaluate the robustness of the ImageNet model in dealing with object detection. The names and details of these corruptions have been displayed in Fig. 3. From our viewpoint, we have divided these 15 visual corruptions into the following four categories and interpreted each one by a model of real-world events:

    · Brightness: We consider the amount of image brightness equivalent to noise and model it with three types of common noises, i.e., the Gaussian noise, the Poisson noise (which is also known as the shot noise), and the impulse noise. For example, the authors in [50] have claimed that the Gaussian noise appears in images under low-lighting conditions.

    · Blur: Image blurriness is often a camera-related phenomenon, and it can occur via different mechanisms such as the sudden jerking of the camera, improper focusing, insufficient depth-of-field, camera shaking, shutter speed, etc. We modeled these factors' effects on images with four types of blurriness: defocus blur, frosted glass blur, zoom blur, and motion blur.

    · Weather: One of the most important parameters affecting computer vision systems' quality and reducing their accuracy is the weather condition. We considered a corresponding image corruption for each of the four types of common weather conditions (rainy, snowy, foggy/hazy, and sunny). The snow noise simulates the snowy weather, the frost noise reflects the rainy conditions, the fog noise indicates all the different situations in which a target object is shrouded, and finally, the brightness noise models the sunny conditions and the direct emission of light on camera sensors and lenses.

    · Digital accuracy: Any type of change in the quality of an image during its saving, compression, sampling, etc., can be considered noise. In this section, such noises will be modeled by the changes of contrast, elastic transforms [51], saving in the JPEG format, and pixelation.

    This paper's basic benchmark (the OTB50) includes 50 different sequences such as basketball, box, vehicle, dog, doll, etc. [52]. We apply all the above noises at five different intensity levels (from 1 for the lowest to 5 for the highest intensity) on each of these sequences and build our own custom benchmark. In selecting a benchmark, we must ensure that the data and images in the different sequences of the benchmark don't have any commonality and overlap with the training data; otherwise, the obtained results will be inaccurate and biased and cannot be generalized to other models. For example, the VOT2015 benchmark cannot be used in this paper because of its overlap with the training data.
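    One such corruption at the five intensity levels can be sketched as follows; the sigma values are illustrative assumptions, not the exact constants of Hendrycks et al. [50].

```python
import numpy as np

# A sketch of one "brightness" corruption: additive Gaussian noise at five
# intensity levels. The sigma values are illustrative assumptions, not the
# exact constants of [50].
SIGMAS = [0.04, 0.06, 0.08, 0.09, 0.10]            # levels 1..5 (assumed)

def gaussian_noise(image, level, seed=0):
    rng = np.random.default_rng(seed)
    sigma = SIGMAS[level - 1]
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

img = np.full((32, 32), 0.5)
errors = [float(np.abs(gaussian_noise(img, lv) - img).mean()) for lv in range(1, 6)]
print(errors)                                       # corruption grows with the level
```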

    Figure 3: Illustrating 15 types of data corruptions with noise intensity levels of 1 to 5

    3.4 Model Training/Testing with Combined Data

    One of the applications of deep learning in the arts is the style transfer technique, closely resembling the Deep Dream [53]. This technique was first presented by Gatys et al. [54] in 2016. In this transfer process, two images are used as inputs: the content image and the style reference image. Then, with the help of a neural network, these two images are combined to yield the output image. This network aims to construct a completely new image whose content is provided by the content image and whose style is adopted from the style reference image. This new image preserves the content of the original image in the style of another image.

    We employ this technique here and get the style transfer of each of our datasets (with hyperparameter α = 1) by means of the adaptive instance normalization (AdaIN) method [55]. Again, as before, an annotation file is created for the new dataset. Finally, we train our object tracker model with a combination of the initial (standard) dataset and the stylized dataset. An example of this transfer and the proposed methodology has been illustrated in Fig. 4. (The style transfer method used for training the proposed model has been taken from https://github.com/bethgelab/stylize-datasets).
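    The normalization at the core of AdaIN [55] can be sketched as follows; the full method wraps this step between a VGG encoder and a trained decoder, and the hyperparameter α blends the result with the content features (α = 1 gives the pure AdaIN output used here). The feature shapes below are illustrative.

```python
import numpy as np

# The AdaIN operation: re-normalize each channel of the content features to
# the per-channel mean/std of the style features. Only the normalization
# step of [55]; the encoder/decoder around it is omitted.
def adain(content, style, eps=1e-5):
    # content, style: (C, H, W) feature maps
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True)
    return s_sd * (content - c_mu) / (c_sd + eps) + s_mu

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (4, 8, 8))
style = rng.normal(3.0, 2.0, (4, 8, 8))
out = adain(content, style)
print(out.mean(axis=(1, 2)))   # matches the per-channel means of the style
```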

    Figure 4: The proposed methodology, along with some samples of style transfer

    4 Experimental Results

    In order to evaluate the performance of the proposed method and the results achieved by applying it to our custom benchmark, we need to define a specific measure. The most common method used for evaluating the object tracker algorithms is the one-pass evaluation (OPE) approach. In this approach, for each algorithm, the ground-truth of a target object is initialized in the first image frame. Then, the average accuracy or the success rate is reported for the rest of the frames [52]. Considering the many types of noises (15 noise models) and the intensity levels for each noise (5 intensities), a total of 75 graphs will be obtained. Plotting all these graphs and comparing them with one another would not be logical or practical and would confuse the reader. Thus, we decided to adopt a criterion appropriate to our approach. In this criterion, the abscissa of each diagram is partitioned into many intervals. The number of these partitions and their length are indicated with n and Δx, respectively, so that

    Δx = (b − a) / n,

    where a and b represent the lower and the upper bounds of the abscissa and have values of 0 and 1, respectively.

    The closer the partitions are, the higher the obtained accuracy. Therefore, we bring n closer to infinity in order to reduce the distance between the partitions. Next, the average value is computed for each of the four noise models (brightness, blur, weather, and digital) and different types of trackers in the OPE diagrams. Thus, we have

    f̄(x) = (1/N) Σ_{i=1}^{N} f_i(x),

    where x is the overlap threshold and f_i is the success rate of the i-th subset. Also, N indicates the number of subsets in each of the four noise models, and its values are 3, 4, 4 and 4 for the brightness, blur, weather, and digital noise types, respectively.

    Similar to the Riemann sum theory [56], the above function will converge either to the lower bound (called underestimation in the literature) or the upper bound (called overestimation in the literature), depending on the chosen values of the function in the partitioned intervals. This notion can also be described by the lower and upper Darboux sum theory. Therefore,

    Lemma (1): Assuming a large number of partitioned intervals, the underestimated and overestimated values will be equal to each other, and it follows that the above function is integrable over the [a, b] interval. Thus,

    lim_{n→∞} Σ_{i=1}^{n} f(x_i) Δx = ∫_a^b f(x) dx.

    Lemma (2): Using the Riemann sums, the value of a definite integral in the following form can be easily approximated for continuous and non-negative functions:

    ∫_a^b f(x) dx ≈ Σ_{i=1}^{n} f(x_i) Δx.

    Hypothesis (1): A bounded function is Riemann integrable over a compact interval if, and only if, it is continuous almost everywhere. This means that the set of discontinuity points has Lebesgue measure zero. This characteristic is sometimes called "Lebesgue's integrability condition" or "Lebesgue's criterion for Riemann integrability."

    By considering Lemma (2) and Hypothesis (1) and assuming equal lengths for the partitioned intervals, the above equation can be rewritten as

    ∫_a^b f(x) dx ≈ ((b − a) / n) Σ_{i=1}^{n} f(x_i).
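    Numerically, the criterion amounts to a left Riemann sum over the success-rate curve; a small sketch with hypothetical per-frame IoU values:

```python
import numpy as np

# f(x) is the fraction of frames whose IoU with the ground truth exceeds the
# overlap threshold x; its integral over [a, b] = [0, 1] is approximated by a
# left Riemann sum with n intervals. The IoU values below are hypothetical.
def success_rate(ious, x):
    return float((np.asarray(ious) > x).mean())

def auc(ious, n=1000):
    a, b = 0.0, 1.0
    dx = (b - a) / n                       # partition width, dx = (b - a) / n
    xs = a + dx * np.arange(n)             # left endpoints of the n intervals
    return sum(success_rate(ious, x) for x in xs) * dx

ious = [0.9, 0.8, 0.5, 0.2]
print(auc(ious))                           # approaches the mean IoU as n grows
```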

    Figure 5: Comparing the performances of different object trackers on the OTB50 benchmark using the proposed criterion

    By substituting Eq. (6) into Eq. (8), and since the summation and the integral can be interchanged, we will have

    (1/N) Σ_{i=1}^{N} ∫_a^b f_i(x) dx ≈ ((b − a) / (nN)) Σ_{i=1}^{N} Σ_{j=1}^{n} f_i(x_j).

    The simulation results obtained based on the defined criterion have been displayed in Fig. 5. As is observed in this figure, without altering the structure of a model, the proposed approach has been able to significantly enhance the robustness of the model against different types of noises.

    In conclusion, by using the results of Fig. 5, we have calculated the average area under the curve (AUC) of each tracker and also calculated the amount of their AUC drop after applying noise at five different levels according to the following equations. The results are reported in Tab. 1.

    where M is equal to the number of noise categories modeled, L0 is the value of the AUC without noise, and s represents the noise level, where s ∈ {1, 2, 3, 4, 5}.
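    The AUC drop reported in Tab. 1 can be computed as the relative loss with respect to the clean value L0; the equation itself is missing from the extracted text, so the exact form below is an assumption consistent with the surrounding description, and the AUC values are hypothetical.

```python
# AUC-drop measure: relative loss at noise level s with respect to the clean
# value L0. The exact form is an assumption; the AUC values are hypothetical.
def auc_drop(l0, l_s):
    return 100.0 * (l0 - l_s) / l0

clean_auc = 0.60                                   # hypothetical clean AUC (L0)
noisy_aucs = {1: 0.55, 2: 0.50, 3: 0.45}           # hypothetical per-level AUCs
drops = {s: auc_drop(clean_auc, l) for s, l in noisy_aucs.items()}
print(drops)
```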

    Although our work has reached its aims, it has potential limitations. First, due to the combination of the clean data and their style transfer, the size of the final data set will be more than doubled, which will increase the network learning time. Second, selecting the proper content layer, style layer, and optimization techniques (e.g., the Chimp optimization algorithm [57]) might, to some extent, affect the obtained result and the performance of the tracker in the presence of noise.

    According to the results, at the noise level of 1, all trackers showed relatively good robustness, and their AUC drop was less than 18%. At the noise level of 2, a small number of trackers experienced an AUC drop of more than 24%, and the rest of the trackers had a maximum AUC drop of 20%. From the noise level of 3 onwards, there is a significant drop in the trackers' robustness, in which the upper limit of the AUC drop among the trackers at the noise levels of 3, 4 and 5 was about 25%, 30% and 40%, respectively. However, the GOTURN tracker trained according to the approach proposed in this paper showed excellent robustness at all five noise levels, and its maximum AUC drop in all five levels did not exceed 5%.

    Table 1: Average AUC of each tracker in the evaluation process on the OTB50 benchmark at five different noise levels

    5 Conclusion and Future Work

    Visual noises in images are unwanted and undesirable aberrations, which we always try to get rid of or reduce. In digital images, noises appear as random spots on a bright surface, and they can substantially reduce the quality of these images. Image noises can occur in different ways and by various mechanisms such as overexposure, sudden jerking or shaking of the camera, changes of brightness, magnetic fields, improper focusing, and environmental conditions like fog, rain, snow, dust, etc. Noises have negative effects on the performance and precision of computer vision systems such as object trackers. Separately dealing with each of these challenges is an easy task, but it is much more difficult to manage them collectively, which is practically more important. In this paper, a novel method was presented for preserving the performance and accuracy of object trackers against noisy data. In this technique, the tracker model is simply trained with a combination of the standard training data and their style transfer. To validate the presented approach, an object tracker was chosen from the commonly used trackers available, and the proposed technique was applied to it. This tracker was tested on a customized benchmark containing 15 types of noises at five different noise intensity levels. The obtained results show an increase in the proposed model's accuracy and robustness against different noises compared to the other considered object trackers. In future work, we intend to apply the Deep Dream technique to our custom training set and train the object tracker with the combination of this dataset and its style transfer. We also intend to test it on both single-object and multi-object trackers. It is worth mentioning that this method can be used as a kind of preprocessing block for maintaining robustness in any object detection or computer vision task.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

色哟哟哟哟哟哟| av视频在线观看入口| 国产在线男女| 一区二区三区高清视频在线| 午夜免费激情av| 两性午夜刺激爽爽歪歪视频在线观看| 男女边吃奶边做爰视频| 亚洲美女搞黄在线观看 | 美女黄网站色视频| 亚洲国产精品成人久久小说 | 亚洲,欧美,日韩| a级毛片免费高清观看在线播放| 99热这里只有精品一区| 99国产极品粉嫩在线观看| 欧美在线一区亚洲| 亚洲人成网站在线观看播放| 精品国产三级普通话版| 毛片一级片免费看久久久久| 亚洲av电影不卡..在线观看| 三级经典国产精品| 精品一区二区免费观看| 少妇熟女欧美另类| 精品不卡国产一区二区三区| 听说在线观看完整版免费高清| 久久久久九九精品影院| 中文字幕av在线有码专区| 日日撸夜夜添| 免费看av在线观看网站| 免费观看在线日韩| 一级毛片久久久久久久久女| 欧美高清性xxxxhd video| 国产女主播在线喷水免费视频网站 | 99热网站在线观看| 性色avwww在线观看| 美女被艹到高潮喷水动态| 黄片wwwwww| 国产精华一区二区三区| 国产精品亚洲美女久久久| 99热只有精品国产| 国产中年淑女户外野战色| 日韩精品中文字幕看吧| 亚洲欧美成人综合另类久久久 | av在线亚洲专区| 男女啪啪激烈高潮av片| 九九热线精品视视频播放| 男人狂女人下面高潮的视频| 日本熟妇午夜| 国产精品av视频在线免费观看| 99久国产av精品国产电影| 日本熟妇午夜| 亚洲国产欧洲综合997久久,| 韩国av在线不卡| 久久久久国产精品人妻aⅴ院| 热99在线观看视频| 99riav亚洲国产免费| 日韩成人伦理影院| 精品一区二区免费观看| 99riav亚洲国产免费| 成人综合一区亚洲| 日本五十路高清| 久久6这里有精品| 99热这里只有是精品50| 亚洲精品亚洲一区二区| 亚洲最大成人av| 国产又黄又爽又无遮挡在线| 欧美日韩在线观看h| 午夜福利在线观看吧| 久久久a久久爽久久v久久| 悠悠久久av| 亚洲七黄色美女视频| 国产aⅴ精品一区二区三区波| 婷婷亚洲欧美| 久久热精品热| 少妇的逼好多水| 国产成人a∨麻豆精品| 极品教师在线视频| 欧美+日韩+精品| 美女黄网站色视频| 精品无人区乱码1区二区| 亚洲专区国产一区二区| 国产伦精品一区二区三区视频9| 男人的好看免费观看在线视频| 国内精品久久久久精免费| 免费在线观看影片大全网站| АⅤ资源中文在线天堂| 欧美色视频一区免费| 国产精品野战在线观看| 亚洲中文字幕一区二区三区有码在线看| 97超视频在线观看视频| 亚洲人成网站在线播| 欧美日韩在线观看h| 99久国产av精品国产电影| 赤兔流量卡办理| 色噜噜av男人的天堂激情| 成人av在线播放网站| 久久亚洲国产成人精品v| 无遮挡黄片免费观看| 国产精品美女特级片免费视频播放器| 在线观看免费视频日本深夜| 欧美一区二区国产精品久久精品| 少妇人妻精品综合一区二区 | 免费无遮挡裸体视频| 看非洲黑人一级黄片| 欧美色视频一区免费| 91狼人影院| 久久久久久伊人网av| 天堂影院成人在线观看| 亚洲精品色激情综合| 亚洲三级黄色毛片| 久久久精品94久久精品| 看片在线看免费视频| 国产精品人妻久久久影院| 精品久久久久久久久久免费视频| 亚洲自偷自拍三级| 国产乱人偷精品视频| 久久精品国产亚洲av香蕉五月| 国产91av在线免费观看| 日产精品乱码卡一卡2卡三| 美女高潮的动态| 色在线成人网| 91精品国产九色| eeuss影院久久| 内地一区二区视频在线| 久久中文看片网| 成人欧美大片| 国产黄色小视频在线观看| 看十八女毛片水多多多| 国产成人影院久久av| 欧美区成人在线视频| 麻豆国产av国片精品| 亚洲中文字幕日韩| 欧美色视频一区免费| 寂寞人妻少妇视频99o| 给我免费播放毛片高清在线观看| 亚洲人成网站在线播放欧美日韩| 简卡轻食公司| 夜夜看夜夜爽夜夜摸|