
    Exploring Image Generation for UAV Change Detection

IEEE/CAA Journal of Automatica Sinica, June 2022

Xuan Li, Haibin Duan, Yonglin Tian, and Fei-Yue Wang

Abstract— Change detection (CD) is becoming indispensable for unmanned aerial vehicles (UAVs), especially in the domain of water landing, rescue, and search. However, even the most advanced models require large amounts of data for model training and testing. Therefore, sufficient labeled images with different imaging conditions are needed. Inspired by computer graphics, we present a cloning method to simulate an inland-water scene and collect an auto-labeled simulated dataset. The simulated dataset consists of six challenges to test the effects of dynamic background, weather, and noise on change detection models. Then, we propose an image translation framework that translates simulated images to synthetic images. This framework uses shared parameters (encoder and generator) and 22 × 22 receptive fields (discriminator) to generate realistic synthetic images as model training sets. The experimental results indicate that: 1) different imaging challenges affect the performance of change detection models; 2) compared with simulated images, synthetic images can effectively improve the accuracy of supervised models.

I. INTRODUCTION

CURRENTLY, unmanned aerial vehicles (UAVs) [1] are receiving more and more attention for their capabilities in water rescue [2], search, aerial observation [3], and surveillance [4]. Visual perception is an important part of how UAVs accomplish the above tasks, and change detection (CD) plays an essential role in visual perception. Image-based CD, also referred to as background subtraction, aims to predict the location of moving objects in an image, and it is becoming an increasingly active area of research in the computer vision community.

Accurate CD can filter redundant data, which is useful for object detection, object tracking, and recognition. There are two ways to improve the performance of CD. On the one hand, researchers design the structure of algorithms to enhance the UAV's adaptability in complex environments. On the other hand, data augmentation can comprehensively test and optimize the parameters of algorithms. During the past decades, much work has been done to improve CD model performance under complex conditions. For instance, traditional algorithms use different features to perceive pixel state, including color [5]-[7], edge [8], texture [9], etc. Subsequently, models [10], [11] based on deep convolutional neural networks (DCNN) became widely used to extract features automatically. Nonetheless, a DCNN model requires learning specific parameters from pixel-level labeled images. However, manual annotation of every pixel is time-consuming, cumbersome, and subjective. As described, there are two trends in CD research: 1) vision algorithms based on deep learning are widely discussed and studied; 2) vision data is helpful for algorithm training and testing, but various difficulties are encountered in image data collection and annotation. Many works [12], [13] have demonstrated the strength of the virtual world for computer vision. In addition, some researchers use generative adversarial networks (GANs) to generate realistic images for CD research. For instance, Lebedev et al. [14] proposed an image synthesis method that uses a GAN model to improve remote sensing CD performance. Niu et al. [15] presented a conditional GAN to generate homogeneous CD images, making direct comparison feasible. Besides, Li et al. [16] introduced a deep translation based change detection network (DTCDN) for optical and synthetic aperture radar images. In this paper, we use Unity3D to create a simulated scene that automatically generates labeled simulated images. Then, a novel generative adversarial network is proposed to translate simulated images into synthetic images. Finally, experiments are conducted to verify the effectiveness of simulated and synthetic images in CD. The overall framework is illustrated in Fig. 1.

    The main contributions of this work can be summarized as follows.

Fig. 1. The overall framework of the proposed method. In this paper, the simulated images denote images with accurate labeling and multiple challenges generated by a computer graphics platform, while the synthetic images denote images generated by the image translation model. Note: real, simulated, and synthetic datasets are used to train or validate different change detection models. All generated datasets are available at: https://github.com/lx7555/Exploring-Inland-Water-Scene-Generation-for-Change-Detection-Analysis, which may have a large potential to benefit the change detection community in the future.

First, we use a novel cloning method to construct a simulated inland-water scene. In the simulated scenario, imaging challenges can be flexibly configured to obtain multi-challenge sequences. Besides, all images are automatically labeled with accurate ground truth for CD.

Second, we propose an image translation framework consisting of a variational autoencoder (VAE) and generative adversarial networks (GANs). The VAE (based on deep convolutional networks) captures more global information, which is conducive to the style translation of synthetic images. Besides, the GANs can map images from the two domains to the same semantic space. This method uses a cycle-consistency constraint and a PatchGAN discriminator to achieve high-quality synthetic images.

Third, computational experiments are used to quantify the impact of multi-challenge sequences and synthetic datasets on CD. It is found that the simulated datasets' diversity enables quantitative analysis of a UAV vision model's performance. More importantly, the experimental results demonstrate that synthetic datasets can effectively improve deep learning-based detectors.

The rest of this paper is organized as follows. Related works are reviewed in Section II. Section III provides the details of designing simulated scenes and image translation networks. Experimental results on different datasets are presented in Section IV. We give the conclusion in Section V.

    II. RELATED WORK

    The models and datasets of CD have remained research hotspots for several years. In the remainder of this section, we review related works in detail.

    A. Change Detection Models

The simplest of CD models is the FrameDifference method [17], which compares the current frame with the previous frame (or a background frame) to identify the state of pixels. However, this model can only determine the boundary information of moving objects. In order to enhance the robustness of models, Stauffer and Grimson [18] proposed a Gaussian mixture model (GMM) to represent the state of each pixel. However, the GMM framework cannot adapt to dynamic backgrounds in different scenes (illumination variations, camera jitter) and runs slowly. Subsequently, the nonparametric kernel density estimation (KDE) method [19] avoids delicate parameter estimation and uses a smooth probability density function to fit a time window of images. The GMM and KDE methods and their variants [20], [21] are multimodal techniques, which still play an important role in practical engineering. In 2014, a classical background subtraction model (visual background extractor, ViBe) was proposed [22]. The ViBe model can meet real-time requirements and effectively deal with dynamic background, camera jitter, and ghosting. In the traffic surveillance field, it is possible that multiple challenges coexist in a single scene. As stated in [23], [24], that research proposed effective multi-view learning methods to detect foreground objects in complex traffic environments. After that, St-Charles and Bilodeau [25] presented a background modeling and subtraction algorithm based on adaptive methods, named "local binary similarity segmenter (LOBSTER)". This model integrates local binary similarity pattern (LBSP) feature components to deal with spatiotemporal variations for each pixel. In [26], the self-balanced sensitivity segmenter (SuBSENSE) model uses spatiotemporal information, color and texture features, as well as pixel-level feedback to characterize local representations in pixel-level models. SuBSENSE can adjust automatically to many different challenges (illumination variations, dynamic background motion).
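For readers unfamiliar with this baseline, the sketch below implements frame differencing in a few lines. It is a minimal illustration, assuming an OpenCV-readable video; the file name and threshold are placeholders rather than settings from this paper.

```python
import cv2

# Minimal frame-differencing change detector: compare each frame with
# the previous one and threshold the absolute difference. The video
# path and the threshold (30) are illustrative placeholders.
cap = cv2.VideoCapture("canoe.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    # Pixels that changed by more than 30 intensity levels become foreground.
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    prev_gray = gray
```

As the paragraph above notes, such a detector only responds where intensities change, so it mostly recovers object boundaries rather than full silhouettes.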

Recent deep learning-based CD methods have attracted much attention due to their impressive performance beyond classical methods. Braham and Van Droogenbroeck [10] used convolutional neural networks and a single grayscale background image to design background subtraction algorithms with spatial features. Furthermore, Babaee et al. [11] presented a novel background subtraction algorithm based on convolutional neural networks for automatic feature learning and post-processing. Lim and Keles [27], [28] proposed a robust encoder-decoder CD model that could extract multi-scale features using only a few training examples. Many studies have demonstrated that deep learning-based CD methods reproduce state-of-the-art performance.

    B. Change Detection Datasets and Synthetic Datasets

As mentioned above, CD algorithms have developed from sophisticated hand-designed background models to automatic feature learning approaches. However, even the simplest method relies on labeled video sequences for modeling, training, and testing. In particular, the recent revolutionary results [29] of deep learning-based CD underline the importance of image data. For instance, the PETS2001 dataset [30] allowed researchers to submit their detection results for evaluation against a set of applicable metrics. Li et al. [31] introduced a novel dataset with illumination changes and dynamic background, which consisted of 10 video sequences (image size 320 × 240 pixels). A good example is the CDnet2014 dataset [32], which consists of 11 video categories with 4 to 6 video sequences in each category. This benchmark dataset contains typical challenges such as dynamic background, night videos, and bad weather. A common disadvantage of the above datasets is that images are collected and labeled manually, which is costly and error-prone.

With the development of computer graphics and virtual reality, it is possible to extract key information about objects from simulated scenes. For instance, Virtual KITTI [12] and SYNTHIA [13] can be applied to different computer vision tasks (including object detection, tracking, semantic segmentation, etc.) in the field of autonomous driving. Besides, Tiburzi et al. [33] used a chroma studio to record foreground objects and automatically obtain pixel-level masks for 15 semi-synthetic video sequences. Brutzer et al. [34] introduced a simulated dataset with accurate ground truth annotations and shadow masks for driving scenes. This simulated data provided the major challenges of CD to assess detectors. However, due to the bias between real images and simulated images, it is not easy to get satisfactory results by learning from simulated images. In 2014, Goodfellow et al. [35] proposed generative adversarial networks (GANs), which consist of a generator and a discriminator. The generator captures the real data distribution and generates synthetic data, while the discriminator judges their authenticity. Given that fact, Shorten and Khoshgoftaar [36] believed that GANs are the most promising generative modeling technique for image generation. Frid-Adar et al. [37] used deep convolutional generative adversarial networks (DCGANs) to generate liver images and successfully improved observation accuracy. Zhu et al. [38] proposed a framework using CycleGANs to overcome class imbalance, which effectively improves emotion recognition. Gou et al. [39] presented a universal framework for generating synthetic images to help cascade regression models enhance the accuracy of pupil detection. However, image generation based on generative adversarial networks has mainly been used for digit and emotion recognition research.

    III. THE METHODS OF DATA GENERATION

Vision-based change detection in water scenes is a key functionality for UAVs. If properly annotated datasets are not available, current algorithms are not sufficient to produce reliable classifiers. Computer graphics and generative adversarial networks are applied in the computer vision field [40], [41], from which simulated images and synthetic images are derived. Due to the shortcomings of existing datasets, we first describe the generation of simulated images in simulated scenes (Section III-A). Then, we propose an image translation framework for generating photo-realistic synthetic images (Section III-B). Finally, we construct multi-challenge sequences and high-quality training sequences using simulated images and synthetic images (Sections III-A and III-B).

    A. Simulated Images Generated From Computer Graphics

1) Constructing the Simulated Scene: Computer graphics has developed into a viable approach to tackle the long-standing difficulty of visual perception and understanding in complex scenes. In this section, we construct a simulated scene based on real scenes. Specifically, CDnet2014 consists of 11 categories of video challenges. We use a cloning method to build a simulated inland-water scenario using the CDnet2014 datasets (dynamic background category, canoe sequence) as a reference. Considering that the simulated scenario is mainly used for UAV change detection, we divide this scenario into several modules: foreground objects, background models, and imaging challenges. To begin with, we use publicly available and self-designed models as foreground objects (boats, pedestrians), and configure their speed, angle, position, and posture as needed. Then, since background models are stationary or moving slowly, we manually set their position and state, making them similar in content to real scenes. Furthermore, different imaging challenges make it possible to quantitatively test models. In our simulated scene, the challenges considered include (but are not limited to): wave height (i.e., low (1 m), medium (2 m), and high (3 m)), noise (i.e., Gaussian, salt and pepper), and illumination variations (from 12:00 am to 12:00 pm). Note that foreground objects and background models are distinguished by labeling information, and imaging challenges are mainly used to generate multi-challenge video sequences.
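To make the configurable challenges concrete, the snippet below collects the parameters quoted in this subsection (and the camera settings given later in Section III-A) into one illustrative configuration. The field names are hypothetical and not part of the Unity3D scene itself.

```python
# Illustrative challenge configuration for the simulated scene. The keys
# are hypothetical, but the values follow the text: wave heights of
# 1/2/3 m, two noise families, an illumination sweep from 12:00 am to
# 12:00 pm, and a camera at 1.5 m height with a 90-degree FOV.
CHALLENGE_CONFIG = {
    "wave_height_m": {"low": 1.0, "medium": 2.0, "high": 3.0},
    "noise": ["gaussian", "salt_and_pepper"],
    "illumination": {"start": "00:00", "end": "12:00"},
    "camera": {"height_m": 1.5, "fov_deg": 90},
}
```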

2) Generating Ground Truth Labels: The above steps preliminarily complete the construction of simulated scenes. This part introduces the automatic labeling process in detail. Accurate evaluation of CD depends on pixel-level annotations. More importantly, deep learning-based models highlight the importance of labeled images. However, Kundu et al. [42] pointed out that it usually takes 30-60 minutes to label an image at the pixel level. What is more, it is difficult and error-prone to manually label pixels near foreground object boundaries. These facts suggest that we need to find a better way to replace the controversial manual approach. Therefore, we use Unity3D to solve this problem from the perspective of computer graphics. Computer graphics can easily obtain information about the components (vertices, edges, polygons) of 3D models and transform them into binary ground truth images. The original image and the binary ground truth image are rendered separately and output independently. In addition, three rules are employed empirically to ensure labeling quality (a mask-handling sketch follows the rules below):

i) There are four common classes (Static, Moving, Non-ROI (region of interest), and Unknown), which are assigned grayscale values of 0, 255, 85, and 170, respectively (as shown in Fig. 2).

Fig. 2. Examples of 4-class ground truth annotation automatically generated by Unity3D. Note that the Unknown border is represented by 16-neighborhood pixels. Best viewed with zooming.

ii) The Moving, Static, and Unknown classes are associated with pixels of obvious foreground objects, background models, and unclear object boundaries, respectively.

iii) The Non-ROI label is associated with unrelated areas, which helps some models complete evaluation and initialization.
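As referenced above, here is a small sketch of how the 4-class encoding can be consumed downstream. The grayscale codes follow rule i); the helper name and interface are hypothetical.

```python
import numpy as np

# Grayscale codes for the 4-class ground truth described in rule i).
STATIC, NON_ROI, UNKNOWN, MOVING = 0, 85, 170, 255

def evaluation_masks(gt):
    """Split a ground-truth image into regions relevant for scoring.

    gt: uint8 array of per-pixel class codes.
    Returns boolean masks for foreground, background, and pixels to be
    ignored during evaluation (Non-ROI areas and Unknown borders).
    """
    foreground = gt == MOVING
    background = gt == STATIC
    ignored = (gt == NON_ROI) | (gt == UNKNOWN)
    return foreground, background, ignored
```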

3) Designing Simulated Multi-Challenge Sequences: In previous research, Goyette et al. [32] defined canonical problems for CD, such as dynamic background, intermittent object motion, and night video. Due to the limitations of real sensors and the complexity of real scenarios, these classic problems cannot reflect all the challenges of UAV water missions. Based on Goyette's work, we construct the following multi-challenge sequences using different imaging challenges in the simulated scenario (as illustrated in Fig. 3). The height and FOV (field of view) parameters of the simulated cameras are 1.5 meters and 90 degrees, respectively. Note: the simulated multi-challenge sequences consist of the following six challenges, each containing 1189 frames (the first 100 frames are labeled as Non-ROI for background model initialization).

Fig. 3. Sample images of simulated multi-challenge sequences. Top: basic (left), dynamic background (middle), and illumination variations (right); Bottom: bad weather (left), noise (middle), and more moving objects (right).

Basic: This is a basic inland-water scenario that includes general foreground objects (boats) and background challenges (rivers and trees).

Dynamic background: This sequence contains videos with dynamic backgrounds, including varying waves (low, medium, and high) and moving tree branches.

    Illumination variations: This challenge changes illumination continuously over time until night. Therefore, the contrast between foreground and background is decreased. Here, we use directional parallel light to follow the movement of the sun from east to west.

Bad weather: This is a rain and fog sequence, with an increased gain level and low background/foreground contrast, resulting in more camouflage. The exponential decay method is used to generate simulated fog, while a particle system produces simulated rain.

Noise: The basic sequence adds sensor noise (Gaussian noise (mean = 0, variance = 0.002), salt and pepper noise (probability = 0.01), and mixed noise (Gaussian and salt and pepper)) to simulate real scenes, leading to uncertainty (see the noise sketch after this list).

Moving objects: We add 9 boats with different appearances (aluminum fishing boat, bass boat, bay boat, etc.), colors (red, yellow, black, etc.), and speeds (2-5 m/s) to the basic sequence, which can fully test CD models.
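The noise sketch referenced in the Noise item above: a hypothetical NumPy helper that applies the stated Gaussian (variance 0.002) and salt-and-pepper (probability 0.01) corruptions to a normalized image. The function name and interface are illustrative.

```python
import numpy as np

def add_sensor_noise(img, kind="mixed", var=0.002, p=0.01, rng=None):
    """Apply the noise models used in the Noise sequence.

    img: float image in [0, 1]; kind: "gaussian", "salt_pepper", or
    "mixed". var and p follow the parameters stated in the text
    (Gaussian variance 0.002, salt-and-pepper probability 0.01).
    """
    rng = rng or np.random.default_rng()
    out = img.copy()
    if kind in ("gaussian", "mixed"):
        out = out + rng.normal(0.0, np.sqrt(var), img.shape)
    if kind in ("salt_pepper", "mixed"):
        u = rng.random(img.shape[:2])
        out[u < p / 2] = 0.0                 # pepper
        out[(u >= p / 2) & (u < p)] = 1.0    # salt
    return np.clip(out, 0.0, 1.0)
```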

    B. Synthetic Images Generated From Generative Adversarial Networks

The main function of generative adversarial networks here is to narrow the distribution gap between different domains. In Section III-A we used a game engine to generate simulated images. However, building and editing simulated scenes is still challenging, and it is difficult to cover all gaps from real scenes. Here, we introduce an image generation network to turn the process of manual scene modeling into a model learning and inference problem. More importantly, synthetic images are expected to be used for model training. The overall architecture of the proposed method is shown in Fig. 4. Discriminators can be divided into "ImageGAN" and "PatchGAN" according to their receptive fields. The "ImageGAN" has a global receptive field and more weight parameters, which makes it harder to train and leads to poor image detail. An alternative approach is PatchGAN, which penalizes structure at the scale of image patches. In the following, we take this approach to focus on more local style statistics.
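The paper fixes the discriminator's receptive field at 22 × 22, but this paragraph does not list the layers; below is one PyTorch stack whose receptive field works out to exactly 22 × 22, with channel widths and layer count as assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

# One convolutional stack whose receptive field is 22 x 22, matching
# the "middle patch" discriminator chosen above. Layer settings are
# assumptions. Receptive field, working backwards from the 1x1 output:
#   r = 1 -> k4/s1: r = 4 -> k4/s2: r = 10 -> k4/s2: r = 22.
class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3, nf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, nf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(nf, nf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(nf * 2, nf * 4, 4, stride=1, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(nf * 4, 1, 1),  # per-patch real/fake score map
        )

    def forward(self, x):
        # Each output score judges one 22 x 22 patch of the input.
        return self.net(x)
```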

Fig. 4. The overall architecture of the proposed method. The real and simulated images are concurrently input into the two variational autoencoder models, part of whose parameters are shared with each other. For the discriminators, the middle patch (22 × 22) provides a good balance between quality and flexibility. Therefore, this model can generate high-quality synthetic images. In the proposed method, the layers from Conv1_1 to Conv5_3, Resblk3_3 to Deconv1_1, and Conv4_4 to Conv1_1 fit the function of encoders E1 and E2, generators G1 and G2, and discriminators D1 and D2, respectively.
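A minimal sketch of the cycle-consistency constraint mentioned earlier in this section, assuming encoder/generator modules E1, G1 (simulated domain) and E2, G2 (real domain) as labeled in Fig. 4; loss weights and the adversarial terms are omitted, so this is illustrative only.

```python
import torch.nn.functional as F

def cycle_consistency_loss(x_sim, x_real, E1, E2, G1, G2):
    """Illustrative cycle-consistency term for the two-domain setup.

    E1/G1 operate on the simulated domain and E2/G2 on the real domain;
    in the proposed framework, part of their parameters are shared.
    """
    # simulated -> real -> simulated
    fake_real = G2(E1(x_sim))
    cyc_sim = G1(E2(fake_real))
    # real -> simulated -> real
    fake_sim = G1(E2(x_real))
    cyc_real = G2(E1(fake_sim))
    # Reconstructions should match the original inputs in each domain.
    return F.l1_loss(cyc_sim, x_sim) + F.l1_loss(cyc_real, x_real)
```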

TABLE I THE IMAGE-GENERATION NETWORK CONFIGURATIONS. THE PARAMETERS S AND FM DENOTE STRIDE AND FEATURE MAPS, RESPECTIVELY

3) Constructing a High-Quality Synthetic Sequence: In this paper, the image translation network is used to generate high-quality synthetic sequences. Section III-B (Table I) lists all the variants of the network (Model 1, Model 2, and Model 3), which are distinguished by the size of their receptive fields (patch sizes 10 × 10, 22 × 22, and 40 × 40, respectively). To be specific, we used 792 real images from the CDnet2014 canoe category and 792 images from the simulated multi-challenge sequences (basic) to train the above three models. In the experiment, the three test datasets contain the same number of images. Then, the three models are tested on 1189 simulated samples to generate synthetic images. Fig. 5 illustrates the synthetic results for all variants of the image translation model. The results show that a smaller patch (10 × 10) increases flexibility, while a larger patch (40 × 40) preserves structure better. However, Model 1 and Model 3 have problems with boat generation (e.g., transparency and unclear details). By contrast, the middle patch (22 × 22) provides a good balance between quality and flexibility. Therefore, we use Model 2 as the reference to generate a high-quality synthetic sequence (from simulated images (Basic sequence) to synthetic images).

    IV. EXPERIMENTAL RESULTS

In Section III, we designed simulated scenes and an image translation model to generate a series of simulated and synthetic images for UAV change detection. In this section, real, simulated, and synthetic datasets are used to train or validate different CD models. The rest of this section is organized as follows. First and foremost, we introduce the basic setup, including the datasets and measure criteria used in the experiments. Then, different simulated multi-challenge sequences are used to quantitatively analyze their impact on model performance. Furthermore, supervised CD models are trained and validated on simulated and synthetic datasets. Last but not least, we briefly summarize the experimental results. The purpose of the experiments is to verify the impact of different conditions on vision algorithms, and they also illustrate the importance of image generation for CD.

    A. Experimental Setup

1) Datasets: To verify the utility of our simulated and synthetic datasets for CD, we select the publicly available CDnet2014 datasets (Dynamic Background category, canoe sequence) as the real dataset benchmark. In order to quantitatively analyze CD models, we construct simulated multi-challenge sequences containing six challenges: Basic, Dynamic background, Illumination variations, Bad weather, Noise, and More moving objects. In addition, we use an image translation model (Model 2) to generate synthetic sequences, which achieves a good balance between quality and structure. Therefore, the corresponding synthetic images are mainly used for the DCNN-based CD task. Examples of the datasets used in the experiments are shown in Fig. 6.

Fig. 5. Effect of different patch sizes for sim-to-real translations. Column 1: the input simulated image. Column 2: the output of Model 1 (with a patch size of 10 × 10). Column 3: the output of Model 2 (with a patch size of 22 × 22). Column 4: the output of Model 3 (with a patch size of 40 × 40).

Fig. 6. Examples of datasets used in experiments. Top: simulated image (left), real image (middle), and synthetic image (right); Bottom: simulated annotation (left), real annotation (middle), and synthetic annotation (right).

2) Change Detection Models and Measure Criteria: In this work, four widely used (unsupervised) CD models, including FrameDifference [17], ViBe [22], LOBSTER [25], and SuBSENSE [26], are used to verify the effectiveness of the simulated sequences. In addition, we also investigate the effects of synthetic images on supervised CD models (FgSegNet-v2 [28] and MU-Net1 [43]). Before the experiments, we introduce three metrics for model evaluation: Recall, Precision, and F-measure.
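The tables that follow report Recall, Precision, and F-measure; their standard pixel-level definitions are sketched below, assuming the mask-splitting convention from Section III-A (ignored pixels excluded beforehand). The helper is illustrative, not the paper's evaluation code.

```python
import numpy as np

def cd_metrics(pred, foreground, background):
    """Recall, Precision, and F-measure over boolean masks.

    pred: predicted-foreground mask; foreground/background: ground-truth
    masks with Non-ROI and Unknown pixels already excluded.
    """
    tp = np.sum(pred & foreground)   # correctly detected foreground
    fp = np.sum(pred & background)   # background flagged as foreground
    fn = np.sum(~pred & foreground)  # missed foreground
    recall = tp / (tp + fn + 1e-12)
    precision = tp / (tp + fp + 1e-12)
    f_measure = 2 * precision * recall / (precision + recall + 1e-12)
    return recall, precision, f_measure
```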

B. Experiments for Change Detection Models Tested on Real, Simulated and Synthetic Datasets

Based on computer graphics and generative adversarial networks, prior work has shown promising results in generating simulated and synthetic images. In this section, we use real, simulated, and synthetic datasets to conduct the relevant experiments. To be specific, four classical (unsupervised) CD models are used to verify the reliability and validity of the simulated and synthetic datasets. These models are tested on the real, synthetic, and simulated datasets (Basic sequence), respectively. Table II lists the experimental results of CD. Compared with the real dataset (mean), the simulated results are similar in F-measure and Recall. However, there are great differences in the Precision metric, because the simulated data has more background disturbance (water waves, shadows, and moving tree branches). These challenges are irregular and camouflaged, resulting in poor model performance. In addition, the synthetic results have the same F-measure as the simulated results, which benefits from the structural similarities between the two datasets. Since the objects in the synthetic data are partially camouflaged and have an increased gain level, the Precision metric is lower than for the other two datasets. Examples of results on real, simulated, and synthetic images using the LOBSTER and SuBSENSE models are illustrated in Fig. 7.

    C. Experiments for Change Detection Model Tested on Simulated Multi-Challenge Sequences

The above experiments provide sufficient evidence that simulated data can be substituted for real data when evaluating CD models. In addition, the imaging diversity of simulated scenarios allows us to quantitatively analyze the performance of detectors under different challenges. Therefore, we use the simulated multi-challenge sequences to test different CD models comprehensively. The simulated multi-challenge sequences include: Basic, Dynamic background, Noise, Bad weather, Illumination variations, and More moving objects. It is worth noting that each specific sequence is also affected by other challenges, but one challenge is dominant. Quantitative results of different unsupervised CD models are shown in Table III. Besides, we visualize the experimental results on the simulated multi-challenge sequences in Fig. 8. Next, we analyze the experimental results of each sequence in detail.

TABLE II THE EXPERIMENTAL RESULTS ON REAL, SIMULATED, AND SYNTHETIC DATASETS FOR CHANGE DETECTION (NOTATION: THE BOLD NUMBERS REPRESENT THE MAXIMUM METRIC IN DIFFERENT MODELS) (%)

Fig. 7. Examples of results on real, simulated, and synthetic images using the LOBSTER and SuBSENSE models. Row 1: real input image (left), simulated input image (middle), and synthetic input image (right); Row 2: real annotation (left), simulated annotation (middle), and synthetic annotation (right); Row 3: results by the LOBSTER model; Row 4: results by the SuBSENSE model.

1) Video Basic (V_B): The basic sequence provides a first impression for evaluating CD performance in a typical inland-water scenario. It is difficult for FrameDifference and ViBe to obtain satisfactory evaluation metrics in the basic sequence. Also remarkable is that LOBSTER has difficulty maintaining the tradeoff between the Recall and Precision metrics, resulting in a low F-measure value. The SuBSENSE model keeps all metrics at a high level simultaneously in the basic sequence.

2) Video Dynamic Background (V_DB): The varying waves and moving tree branches determine the difficulty of the dynamic background sequence. In this section, we mainly adjust the wave height (low (l), medium (m), and high (h)) for the experiment. As shown in Table III, the higher the wave, the more adverse factors each model faces, leading to a declining trend in performance metrics.

3) Video Noise (V_VN): Transmission and image compression can add noise to real images. In this comparative experiment, we introduce three kinds of noise into the simulated images: Gaussian noise (G), salt and pepper noise (P), and mixed noise (M). Compared with the basic sequence results, most detection models do not provide suitable anti-noise mechanisms. The SuBSENSE model is severely disturbed by the challenge of mixed noise. In contrast, the LOBSTER model exhibits good performance because the local binary pattern improves detection performance and stability.

4) Video Bad Weather (V_BW): This challenge is the basic sequence under rainy, low-visibility conditions. Without much surprise, the results of this experiment show quite low performance for all evaluated approaches. The results also show that LOBSTER has better adaptability in rainy conditions, while SuBSENSE performs better in the basic sequence (sunny). This is because the former uses an adaptive approach that accurately captures foreground objects in bad weather.

5) Video Illumination Variations (V_IV): None of the tested methods can satisfactorily handle the challenge of illumination variations. There are two reasons for this phenomenon. On the one hand, the contrast between background and foreground objects is reduced; on the other hand, the foreground objects are easily camouflaged at night. Therefore, all detection models failed in this experiment.

6) Video More Moving Objects (V_MMO): The real scene consists of only one red canoe. It is well known that sample diversity is a key factor in model testing. Therefore, we take advantage of the simulated scenario by adding various foreground models to the basic sequence. As shown in Table III, the metrics of the simple models (FrameDifference and ViBe) are improved, while the performance of the more advanced models (SuBSENSE and LOBSTER) is slightly disturbed. The results indicate that different detectors can be comprehensively investigated with the moving objects sequence.

TABLE III THE QUANTITATIVE RESULTS OF DIFFERENT UNSUPERVISED CHANGE DETECTION MODELS ON THE SIMULATED MULTI-CHALLENGE SEQUENCES (NOTATION: THE BOLD NUMBERS REPRESENT THE MAXIMUM METRIC IN THE SAME CHALLENGE SEQUENCE) (%)

Method           Testing    Recall  Precision  F-measure
FrameDifference  V_B (m)    41.2    6.9        11.8
ViBe             V_B (m)    30.6    35.3       32.8
SuBSENSE         V_B (m)    95.9    80.2       87.3
LOBSTER          V_B (m)    92.2    69.2       79.1
FrameDifference  V_DB (l)   40.5    11.3       17.6
ViBe             V_DB (l)   29.5    41.0       34.3
SuBSENSE         V_DB (l)   95.8    95.4       95.6
LOBSTER          V_DB (l)   93.2    93.6       93.4
FrameDifference  V_DB (h)   41.1    5.5        9.7
ViBe             V_DB (h)   29.6    31.7       30.6
SuBSENSE         V_DB (h)   95.8    80.6       87.5
LOBSTER          V_DB (h)   92.0    63.4       75.0
FrameDifference  V_VN (G)   53.9    1.2        2.4
ViBe             V_VN (G)   32.1    32.5       32.3
SuBSENSE         V_VN (G)   78.9    94.4       85.9
LOBSTER          V_VN (G)   75.8    86.2       80.7
FrameDifference  V_VN (P)   49.9    1.5        2.9
ViBe             V_VN (P)   31.9    10.2       15.4
SuBSENSE         V_VN (P)   45.5    97.6       62.1
LOBSTER          V_VN (P)   94.0    57.3       71.2
FrameDifference  V_VN (M)   59.4    0.1        1.9
ViBe             V_VN (M)   32.3    9.2        14.4
SuBSENSE         V_VN (M)   28.6    99.1       44.4
LOBSTER          V_VN (M)   80.2    79.9       80.0
FrameDifference  V_BW       43.1    3.3        6.2
ViBe             V_BW       18.7    12.7       15.2
SuBSENSE         V_BW       43.1    90.3       58.4
LOBSTER          V_BW       61.2    67.9       64.4
FrameDifference  V_IV       21.6    0.8        1.6
ViBe             V_IV       1.8     0.2        0.2
SuBSENSE         V_IV       24.8    4.5        7.6
LOBSTER          V_IV       70.2    2.5        4.8
FrameDifference  V_MMO      24.8    26.4       25.6
ViBe             V_MMO      29.8    74.3       42.5
SuBSENSE         V_MMO      77.6    94.4       85.2
LOBSTER          V_MMO      69.5    90.2       78.5

Fig. 8. Examples of change detection results on simulated multi-challenge sequences using different models. Column 1: input images; Column 2: ground truth; Column 3: results by the FrameDifference model; Column 4: results by the ViBe model; Column 5: results by the SuBSENSE model; Column 6: results by the LOBSTER model. Note that an unknown border is added around the foreground objects in the ground truth image, which protects the evaluation metrics from being corrupted by motion blur. Best viewed with zooming.

    D. Experiments for Change Detection Model Trained on Simulated and Synthetic Datasets

The above experiments used simulated data to test different unsupervised CD models. Moreover, this paper proposes an image translation network to generate synthetic images. Next, we investigate the influence of training images from different domains on CD performance. Specifically, 50 simulated or 50 synthetic images are randomly selected as benchmarks to train the supervised CD models (FgSegNet-v2 [28] and MU-Net1 [43]). All models are tested on the real CDnet2014 dataset (dynamic background category, canoe sequence, 1189 images). The verification results of the supervised CD models are listed in Table IV. It can be found that the models trained on synthetic images obtain better detection performance. For instance, the MU-Net1 model trained on synthetic images increased the F-measure metric by 48.5%. This demonstrates that our well-designed generative model can effectively reduce the gap between real and simulated images. Moreover, it also shows the practicality of synthetic data in CD research. However, the experimental results also show that the precision of the FgSegNet-v2 model trained on synthetic images is lower than that trained on simulated images. There are two reasons for this phenomenon. 1) Precision and recall are often in tension; that is, improving precision typically reduces recall and vice versa. 2) Due to the domain shift issue, simulated images provide limited features (limited foreground objects are detected), so the trained model cannot maintain recall while obtaining a high precision metric.

    TABLE IV VERIFICATION RESULTS OF SUPERVISED CHANGE DETECTION MODELS TRAINED ON SIMULATED AND SYNTHETIC DATASETS (%)

E. Experiments for a Change Detection Model Trained on Real, Simulated and Synthetic Datasets

In fact, the influence of our proposed image generation approach in the real world is an important topic. Next, we investigate the effectiveness of different training images on the performance of a CD model (MU-Net1). We randomly select 50 real images from the canoe sequence as the training set and use the remaining 1139 images as the testing set. In addition, we generate corresponding simulated images and synthetic images (based on the 50 real images) as training sets. In this work, we utilize real, simulated, and synthetic datasets to conduct experiments. The training datasets include "real images", "real images + simulated images", and "real images + synthetic images". It should be noted that the test results for real images are used as the baseline. The experimental results are shown in Table V. The results illustrate that the models trained on real images plus the newly generated images (simulated or synthetic) have better detection performance. In particular, the MU-Net1 model trained on "real + synthetic" images increased the F-measure metric by 6.5%. This demonstrates that our image generation approach can greatly improve the performance of CD detectors, which benefits from the small gap between the real and synthetic datasets.

TABLE V PERFORMANCE OF THE MU-NET1 MODEL TRAINED ON REAL, SIMULATED AND SYNTHETIC DATASETS (%)

    V. CONCLUSION

Based on the results detailed above, we use real, simulated, and synthetic datasets to train and evaluate different CD models. First, the four unsupervised CD models are tested on real and simulated datasets, respectively. Then, the simulated multi-challenge sequences are used to test the different methods, which makes it possible to quantitatively analyze models under diversified imaging challenges. The experimental results show that the FrameDifference method can only detect the edge information of moving objects. The ViBe and LOBSTER models cannot speed up the absorption of ghost effects. By contrast, SuBSENSE monitors both local model fidelity and segmentation noise. This feature allows it to quickly respond to intermittent dynamic object motion, so it can be used in complex surveillance scenarios. However, there is dynamic interference and foreground object camouflage in the simulated data, which leads to a decrease in detection accuracy. For example, all evaluated approaches are affected by bad weather, resulting in reduced metrics. Finally, it can be found that the supervised models trained on synthetic images obtain better detection performance, which indicates the potential of image generation research.

This paper simulates a typical inland-water scenario and generates simulated multi-challenge sequences for testing the visual intelligence of UAVs. The simulated dataset contains six challenge sequences that enable effective, quantitative analysis of different CD models. Furthermore, we propose an image translation network, which consists of encoders and generators with shared parameters and discriminators with adjustable receptive fields. This method can narrow the gap between real and simulated images and synthesize photo-realistic images. The experimental results prove that training with synthetic images can improve the performance of (supervised) models. However, none of these models provides good metrics under illumination variations and bad weather. Therefore, more work [44], [45] should be done to solve this key problem.
