
    No-reference synthetic image quality assessment with convolutional neural network and local image saliency

Computational Visual Media, 2019, Issue 2

    Xiaochuan Wang, Xiaohui Liang (), Bailin Yang, and Frederick W. B. Li

Abstract Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems incorporate various distortions, particularly geometric distortions induced by object dis-occlusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images as they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Due to the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. Results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.

Keywords image quality assessment; synthetic image; depth-image-based rendering (DIBR); convolutional neural network; local image saliency

    1 Introduction

With the development of mobile devices and wireless network technology, depth-image-based rendering (DIBR) has become a mainstream technology for supporting remote interactive 3D graphics. Example uses include 3DTV [1], free-viewpoint video [2], stereo-view video [3], and 3D interactive graphics systems [4]. In these DIBR-based systems, a virtual view is synthesized from various known reference views as input, which comprise texture and depth information. 3D warping [5] and hole filling [1] are typically applied to generate the required virtual views. However, the process of virtual view synthesis is prone to distortions, degrading the visual quality of the synthetic images. Having a proper quality metric for synthetic images is fundamental to ensuring quality of service (QoS) of DIBR-based systems. Specifically, feedback from synthetic image assessment can be used to govern the optimization of reference view compression and transmission.

    Fig. 1 Geometric distortions in DIBR synthetic images. In each pair, left: undistorted image, right: synthetic image. (a)-(d) exhibit holes, cracks, ghost artifacts, and stretching, respectively.

As illustrated in Fig. 1, geometric distortions, such as holes, cracks, ghost artifacts, and stretching, are the dominant distortions in a DIBR synthetic image. They mainly result from object dis-occlusion, and rounding errors from 3D warping and hole filling processes. Compared to traditional DCT-based image distortions such as noise, blurring, blocking, and ringing artifacts, which are distributed rather uniformly over an image, geometric distortions appear in a non-uniform way and are distributed locally around occlusion regions [6]. Existing 2D image quality assessment (IQA) algorithms focus on structural distortions, and are incapable of properly assessing the visual quality of DIBR synthetic images. So far, only a few works have aimed to evaluate DIBR synthetic images. Most are extensions of existing 2D IQA methods, assuming that DIBR synthetic images follow the same natural scene statistics (NSS) as traditional 2D images [6-9]. Their improvements mainly rely on carefully designed handcrafted features.

In contrast to existing DIBR-related metrics, which heavily rely on handcrafted features, we propose a no-reference (NR) DIBR synthetic image quality assessment method using convolutional neural networks (CNNs) and local image saliency based weighting. Specifically, we exploit the power of CNNs for synthetic image feature extraction, while utilizing the sensitivity of local image saliency to geometric distortions to refine the predicted scores. To overcome the lack of existing training data, we constructed a large DIBR synthetic image dataset with subjective score annotations.

    Our main contributions are as follows:

· To our knowledge, we are the first to propose a CNN-based NR-IQA for DIBR synthetic images. In particular, the integration of local image saliency boosts prediction performance.

· We have constructed a new DIBR synthetic image dataset with subjective scores. The capacity and diversity of our proposed dataset are superior to any existing public DIBR image dataset, boosting training quality and avoiding training bias.

    · We have validated the proposed metric on both the public benchmark IRCCyN/IVC DIBR image dataset [10] and our own dataset. Experimental results demonstrate that our method outperforms conventional 2D image metrics and state-of-the-art DIBR-related metrics.

The rest of the paper is organized as follows. Related work is described in Section 2. Section 3 presents our NR-IQA approach, and Section 4 evaluates our proposed algorithm. Application of the proposed metric is demonstrated in Section 5. Finally, Section 6 concludes the paper.

    2 Related work

    2.1 Image quality assessment

Depending on their need for a priori knowledge of the undistorted image, IQA methods may be broadly categorized as full-reference (FR), reduced-reference (RR), and no-reference (NR). In FR-IQA, algorithms typically have full knowledge of the ground truth image, and evaluate image distortion according to pixel error measurements, e.g., SSIM [11]. In contrast, RR-IQA only uses partial information of a reference image for quality evaluation [12]. NR-IQA is the most challenging task, in which algorithms estimate the quality of a distorted image without any information about the ground truth. However, NR-IQA is most suitable for DIBR system usage, since the undistorted image corresponding to a virtual view is typically unavailable. We hence only discuss NR-IQA algorithms in the following.

Most NR-IQA methods are based on NSS priors. Mittal et al. [13] proposed the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), which extracts point-wise statistics from local normalized luminance signals, measuring image naturalness by deviations from a natural image model. They also proposed another no-reference metric, the Natural Image Quality Evaluator (NIQE) [14], which does not require human subjective scores for distorted images.

Recently, deep learning methods, especially CNNs, have attracted great attention for their powerful image feature extraction capability. Kang et al. [15] first introduced CNNs into image quality assessment. In their work, training images are divided into small patches, each assigned the subjective score of its source image as a label; the small patches are then trained to fit human subjective scores using CNNs. Bosse et al. [16] and Bare et al. [17] improved prediction performance by weighting the predicted patch scores with image saliency. Bare et al. [17] adopted a more complex network architecture which clusters each minibatch of training patches. In Ref. [18], a pretrained CNN model is utilized to provide multi-level features for image quality assessment. GANs have also been introduced into NR-IQA [19], where a plausible reference image is generated to assist training. Beyond image quality assessment, deep learning has also been applied to aesthetic evaluation [20]. CNN-based NR-IQA methods have achieved state-of-the-art performance on public 2D image databases, such as LIVE [21], TID2008 [22], and TID2013 [23]. However, no work has been reported on assessing DIBR synthetic images. This is mainly due to the training bias of traditional 2D image datasets: the features of traditional 2D images and synthetic images differ because the natures of their distortions differ.

    2.2 DIBR-related image quality assessment

Previous IQA methods for 2D images are inappropriate for assessing DIBR synthetic images, since the dominant distortions in synthetic images are geometric distortions, as mentioned before. Specifically, holes are mainly induced by object dis-occlusions in a virtual view. Cracks are induced by rounding errors from 3D warping. Ghost artifacts are mainly induced by inaccurate depths, and stretching is due to improper hole filling algorithms. These distortions are quite different from traditional image distortions, such as noise, blurring, blocking, and ringing artifacts induced by DCT-transform based coding and lossy transmission.

Conze et al. [24] aggregated texture, gradient orientation, and contrast information as weighting maps for assessing DIBR synthetic image distortions. Battisti et al. [7] presented an FR synthetic image quality metric, which evaluates a synthetic image by comparing the Kolmogorov-Smirnov distance between blocks of the synthetic image and the undistorted image. Sandić-Stanković et al. proposed a Morphological Wavelet Peak Signal-to-Noise Ratio (MW-PSNR) metric [25] and a Morphological Pyramids Peak Signal-to-Noise Ratio (MP-PSNR) metric [26]. Both MW-PSNR and MP-PSNR transform a synthetic image into the wavelet domain, and measure the spectral difference between the synthetic image and the undistorted one. Zhou et al. [6] proposed an FR metric for DIBR synthetic images with dis-occluded region discovery. It first detects the dis-occluded regions by comparing the absolute difference between the synthetic image and the undistorted image, and then weights the predicted quality using the detected dis-occluded regions. Gu et al. [8] proposed an NR method for DIBR synthetic images using local image description, measuring geometric distortions with an auto-regression based NSS model. Tian et al. [9] proposed another NR-IQA method for measuring synthetic image distortions, in which four kinds of features, including morphological differences, edges, gradients, and hole ratio, are separately measured and finally aggregated. These DIBR-related metrics achieve significant improvement over conventional IQA metrics, yet heavily rely on handcrafted features.

    3 Our approach

We now present the details of our method. As mentioned above, current DIBR-related IQA methods rely heavily on handcrafted features, while CNN-based methods suffer from training bias. We hence propose a novel NR-IQA method for synthetic images based on CNNs and local image saliency based weighting. We also address the lack of training data by constructing a new DIBR synthetic image database with sufficient samples.

    3.1 Overview

Motivated by previous work, we apply CNNs to train a regression model between predicted image quality scores and human subjective scores. Specifically, the CNN model is assumed to learn the feature subspace of DIBR synthetic images, which differs from that of natural images.

The main bottleneck of CNN-based synthetic image quality prediction is the lack of sufficient training data. Notably, existing CNN-based IQA methods achieve successful results as they are typically trained on very large image databases, e.g., LIVE, CSIQ, TID2008, and TID2013, which contain thousands of images. In contrast, existing public DIBR synthetic image datasets, in particular the IRCCyN/IVC DIBR image dataset, contain only 96 images (including the undistorted images). Our new synthetic image dataset was developed to address the lack of training data.

A CNN model is proposed and trained on our dataset. In particular, we utilize local image saliency to weight the predicted scores, appropriately emphasising the contribution of geometric distortions. The architecture of the proposed method is illustrated in Fig. 2. With our trained model, we can predict the quality score for test images without knowledge of their undistorted versions.

    Fig. 2 Architecture of our no-reference synthetic image quality metric. The inputs are small (32×32) patches. The predicted patch scores are weighted by local image saliency.

    3.2 Local image saliency based weighting

Previous work assigns the subjective score of an image to small image patches uniformly [15-17]. This implicitly assumes that the small image patches contribute equally to image quality. In fact, the visual quality of each small image patch is quite different from the whole image quality [27], especially for synthetic images. Suppose a small image patch is exactly covered by a dis-occluded region, and holes dominate the entire patch. As illustrated in Fig. 3, such a patch may be perceived as having better visual quality than the whole image: without knowledge of the geometric distortions, a user may simply think that the patch contains a smooth region. Therefore, the strategy of assigning a uniform predicted score to all image patches cannot properly represent the contributions of geometric distortions.

Fig. 3 Visual appearance of image patches containing geometric distortions. Patch A has partial holes, while patch B is dominated by holes. Compared to patch A, patch B is generally perceived as a higher quality image patch if the geometric distortions in the whole image are not known.

As performing subjective tests on small image patches is expensive and time-consuming (e.g., a total of 768 subjective tests would be required to cover the small image patches of each image), a light-weight method of assigning predicted patch scores is highly desirable. In Ref. [16], the predicted patch score is weighted by image saliency, i.e., salient regions are assigned larger weights. This fits the assumption that observers are generally more sensitive to salient regions, such as the person and chair in Fig. 4(a); the distortions in such salient regions have more influence on the quality of the whole image. However, this only holds for traditional distortions, such as blurring, white noise, and blocking artifacts, that are distributed uniformly across the whole image. It is inapplicable to DIBR synthetic images, as geometric distortions in such images are non-uniform and locally distributed.

Consider Fig. 4. Figure 4(b) shows the saliency map for Fig. 4(a) generated by Ref. [28]. Note that the most salient regions (depicted brighter) are not the regions containing geometric distortions in the synthetic image. For instance, the most salient region in Fig. 4 is the blurred red book, but it is not perceived by humans as distorted. Directly applying image saliency based weighting as proposed in Ref. [27] to the synthetic image thereby overstates the contribution of such regions, while weakening the contribution of local patches containing geometric distortions.

    Fig. 4 Saliency maps for a synthetic image and its local patches. (a) Synthetic image. (b) Associated saliency map, with brighter intensity indicating stronger saliency. (c) Six chosen small patches extracted from the synthetic image, the corresponding patch saliency maps using the same saliency model, and the corresponding region extracted from the image saliency map. Note that geometric distortions appear differently in the patch saliency map and the image saliency map.

We observe that it makes sense to exploit the difference between the saliency map of a local patch and the corresponding region of the saliency map of the whole image to better represent geometric distortions. As seen in Fig. 4(c), the cracks on the wall are dark (indicating weak saliency) in the whole image but bright (indicating strong saliency) in the small patch. In reality, human perception is most sensitive to such cracks; we should hence assign a large weight to the corresponding patches. In contrast, the holes appearing at the right side of the lion statue are dark (indicating weak saliency) in both the image saliency map and the patch saliency map. This fits the observation that, unlike the cracks in the white wall, the holes around the lion statue are barely perceived. This is partly supported by theories that in the human visual system, texture contrast masking and luminance adaptation conceal distortions to some extent [29]. We can thus give the corresponding patch a small weight. On the other hand, patches containing no geometric distortion show similar saliency at the patch level and in the corresponding region of the whole image. For instance, the aforementioned red book with motion blurring appears salient both in the patch and in the corresponding region of the whole image; however, human perception does not consider motion blurring to be a distortion, so the contribution of the predicted patch score should be low. The background floor is salient neither at the patch level nor at the whole-image level, and should also be considered unimportant, as shown in Fig. 4(c).

    Based on the above observations, we exploit the ratio between the local patch saliency and the corresponding regional saliency in the whole image to represent the contribution of patch scores toward geometric distortions. We define this as local image saliency, formulated as follows:

$$ c_x = \frac{\sum_{p \in \Omega_x} S(p)}{\sum_{p \in \Omega_x} S'(p)} \tag{1} $$

where Ω_x indicates the region of a small patch, and S(·) and S′(·) denote the per-pixel values of the patch saliency and of the corresponding saliency in the whole image, respectively. The proposed local image saliency is then used to weight the predicted patch scores. For example, a patch with high local image saliency implies clearly visible geometric distortions, so its contribution to the predicted score should be increased, and vice versa.
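
For concreteness, the following is a minimal NumPy sketch of Eq. (1). The function name and the small eps guard are our own additions; both saliency maps are assumed to be given as 2D arrays restricted to the patch region Ω_x.

```python
import numpy as np

def local_image_saliency(patch_saliency, image_saliency, eps=1e-8):
    """Eq. (1): ratio of the saliency computed on the patch alone to the
    saliency of the same region taken from the whole-image saliency map.

    patch_saliency : 2D array, saliency map computed on the patch itself.
    image_saliency : 2D array, the corresponding crop of the whole-image
                     saliency map.
    """
    # A patch whose own saliency map lights up while the whole-image map
    # stays dark (e.g., cracks on a flat wall) receives a large weight.
    return patch_saliency.sum() / (image_saliency.sum() + eps)
```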

    3.3 Network architecture

Our network is mainly inspired by Ref. [15], but differs in how DIBR synthetic images are preprocessed and in its use of local image saliency based weighting.

    3.3.1 Preprocessing

Before training, we divide each synthetic image into small patches of size 32 × 32 pixels. As depicted in Fig. 5, geometric distortions are visible in the RGB channels. However, such distortions are concealed after gray-scale transformation and local contrast normalization. Consequently, we abandon gray-scale transformation and local contrast normalization, even though they have been widely used in existing CNN-based NR-IQA methods [15, 17]. As a result, important distortion information is better preserved.
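
A minimal sketch of this preprocessing step, assuming non-overlapping patches (the stride is our assumption; the text does not state the patch overlap):

```python
import numpy as np

def extract_patches(image, size=32, stride=32):
    """Split an RGB image (H x W x 3) into size x size patches.

    No gray-scale transformation or local contrast normalization is
    applied, so geometric distortions remain visible in the patches.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(image[y:y + size, x:x + size])
    return np.stack(patches)
```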

    3.3.2 Layers

We use 9 convolutional layers to extract local patch features. Each convolutional layer is followed by a ReLU activation function, so that local information is progressively carried into deeper layers. The convolutional layer can be formulated as

$$ C_j = \max\left(0,\; W_j * C_{j-1} + B_j\right) \tag{2} $$

    Fig. 5 Visual perception of synthetic images. (a) Two synthetic images. (b) Corresponding gray-scale maps. (c) Visualization of the local normalized maps [15, 17]. Note that holes in regions with high intensity contrast and complex textures are lost after gray-scale transformation and local contrast normalization.

where C_j is the feature map of the j-th layer, and W_j and B_j are its weights and biases, respectively. Details of the layer configurations as well as kernels are depicted in Fig. 2. Note that we use a zero-padding strategy, so as to preserve the information at image borders. After three convolutional layers, we apply a max-pooling layer with a 2×2 kernel to enlarge the receptive field. We also apply the dropout strategy after the first fully connected layer. The network depth is chosen with the assumption that shallow network architectures capture low-level features while deep network architectures capture semantic features. The effect of network depth is discussed in Section 4.
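
The following PyTorch sketch illustrates a network of this shape. The channel widths, 3×3 kernels, pooling after every three convolutional layers, and fully-connected width are our assumptions standing in for the configuration given in Fig. 2, which is not reproduced here.

```python
import torch.nn as nn

class PatchQualityNet(nn.Module):
    """Sketch of a 9-convolutional-layer patch quality regressor."""
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3                  # raw RGB patches, no gray-scale
        widths = [32, 32, 32, 64, 64, 64, 128, 128, 128]   # assumed widths
        for i, out_ch in enumerate(widths):
            # zero padding preserves information at patch borders
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU()]
            if (i + 1) % 3 == 0:               # 2x2 max-pooling (assumed schedule)
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers) # 32x32 input -> 4x4 feature maps
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 512), nn.ReLU(),
            nn.Dropout(0.5),                   # dropout after the first FC layer
            nn.Linear(512, 1),                 # scalar patch quality score
        )

    def forward(self, x):
        return self.regressor(self.features(x)).squeeze(-1)
```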

    3.3.3 Optimization

    By aggregating the local image saliency based weighting, the loss function is formulated as follows:

$$ L(W, B) = \frac{1}{N} \sum_{x} c_x \left| f(x; W, B) - q_x \right| \tag{3} $$

where c_x is the local image saliency defined in Eq. (1), x and q_x denote the input small image patch and its assigned subjective quality score, respectively, N is the number of training patches, f(·) outputs the predicted quality score, and W and B indicate the trainable weights and biases, respectively. The effectiveness of our proposed local image saliency based weighting is discussed in Section 4. We use the ADAM optimizer to minimize this loss.
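
A minimal PyTorch training step under Eq. (3), assuming the L1 form of the per-patch error shown above; the function and variable names are ours.

```python
import torch

def train_step(model, optimizer, patches, scores, weights):
    """One optimization step of the saliency-weighted loss, Eq. (3).

    weights holds the local image saliency c_x of each patch (Eq. (1));
    scores holds the subjective score q_x assigned to each patch.
    """
    optimizer.zero_grad()
    pred = model(patches)                          # f(x; W, B)
    loss = (weights * (pred - scores).abs()).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

# The learning rate below follows Section 4.1:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```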

    3.4 Construction of training database

    3.4.1 Our DIBR synthetic image database

Until recently, available synthetic image databases with subjective scores were insufficient for training. For instance, the IRCCyN/IVC DIBR image dataset [35] contains only 12 undistorted images and 84 synthetic images. Moreover, these images cover only three scenes: Book Arrival, Newspaper, and Lovebird. All have humans in the center of the scene, which may lead to training bias. The MCL 3D database [36] contains 693 stereoscopic image pairs, which is sufficient for training; however, it lacks subjective scores for each synthetic image. In order to improve training performance, we constructed a new DIBR synthetic image dataset.

    A total of 18 reference images were chosen. These reference images ranged from 960×640 to 1920×1080 pixels in size. Twelve reference images were randomly sampled from 3D-HEVC testing video sequences or other typical RGBD databases. Note that the sampled reference images are quite different from those in the IRCCyN/IVC DIBR image dataset. The remaining six reference images were picked from the Middlebury Stereo dataset [34], which only contains indoor objects without people. We specifically chose these reference images to avoid training bias. The reference images are shown in Fig. 6.

Figure 7 shows scatter plots of spatial information (SI) vs. colorfulness information (CI) for our chosen reference images and for the IRCCyN/IVC DIBR image dataset, as suggested by Ref. [37]. They show that the SI and CI of our chosen reference images span a larger range than those of the IRCCyN/IVC DIBR image dataset, indicating that the contents of our dataset are more diverse and more likely to avoid training bias.

Fig. 6 Reference images from the Nagoya free-viewpoint video dataset [30], Microsoft 3D Video database [31], Poznan multiview video test sequences [32], Freiburg stereo dataset [33], and Middlebury Stereo dataset [34].

For each reference image, we set four camera baselines between the reference view and the virtual view. For instance, if the camera position of the Balloons reference image is denoted by 0, we select four virtual cameras along the horizontal line of the reference camera, with baselines between the virtual cameras and the reference camera set to −2d, −d, +d, and +2d, respectively, where d is a preset unit distance. After 3D warping, we apply 7 hole-filling algorithms to the synthetic images. Finally, we obtain 504 synthetic images. Note that the hole-filling algorithms are the same as those used for the IRCCyN/IVC DIBR image dataset; details of the hole-filling algorithms are given in Ref. [7]. Compared to the IRCCyN/IVC DIBR image dataset, our new database has over 5 times as many images. Further comparisons are listed in Table 1.
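
The dataset size follows directly from this construction: 18 reference images × 4 baselines × 7 hole-filling algorithms = 504 synthetic images. A trivial enumeration sketch with hypothetical identifiers:

```python
from itertools import product

references = [f"ref_{i:02d}" for i in range(18)]   # 18 reference images
baselines = ["-2d", "-d", "+d", "+2d"]             # 4 camera baselines
fillers = [f"hole_fill_{k}" for k in range(7)]     # 7 hole-filling algorithms

synthetic_images = list(product(references, baselines, fillers))
assert len(synthetic_images) == 504                # 18 * 4 * 7
```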

Fig. 7 Spatial information versus colorfulness scatter plots for (a) the IRCCyN/IVC DIBR image dataset and (b) our proposed augmented synthesized image dataset. Red lines show the convex hull of the points in each scatter plot, indicating the range of scene diversity.

    3.4.2 Subjective testing

Since the number of synthetic images was prohibitively large for a double stimulus setup, we instead chose a single stimulus absolute category rating procedure with hidden reference (ACR-HR), as suggested by ITU-T Recommendation P.910 [38]. Each synthetic image was evaluated by 15 observers. Subjective testing was divided into three sub-sessions of 25 min each, with a break of five minutes in between to reduce visual fatigue and eye strain. Each testing image was displayed for 15 s, followed by a gray image for 5 s. To ensure the robustness of subjective opinion, twelve testing images were randomly selected for repeated display. The 15 subjects who participated in the test were graduate or undergraduate students aged from 21 to 31. Two of them had knowledge of IQA; the remainder had no experience of IQA.

    Table 1 Details of our proposed DIBR synthetic image dataset

Before testing started, the study procedure was explained to each subject. We also obtained verbal confirmation that the subjects had normal or corrected-to-normal vision. For each sub-session, five images were shown as a warm-up; these had different contents but the same types of distortions as the testing images.

A 24 inch Lenovo X23 LG 0.2 monitor was used as the display. It had a 16:9 aspect ratio, 0.30 m height, 200 cd·m−2 peak luminance, and 1920×1080 display resolution. The testing room was dark with weak ambient lighting. Subjects viewed images from 2.1 m, as suggested in ITU-T Recommendation P.910 [38]. At the end of the image display duration, the test number of the image was displayed on the screen, informing subjects to write down one of five rankings on their subjective scoring sheets: 5-Excellent, 4-Good, 3-Fair, 2-Poor, 1-Bad.

    3.4.3 Processing of raw subjective scores

The subject rejection procedure outlined in ITU-R BT.500 [39] was used to discard scores from unreliable subjects. The kurtosis of the scores (MOS scores) was first used to determine whether the scores assigned by a subject followed a normal distribution. For normally distributed scores, a subject was rejected whenever more than 5% of the scores assigned by the subject fell outside the range of two standard deviations from the mean scores; otherwise, the subject was rejected whenever more than 5% of the scores fell outside the range of 4.47 standard deviations from the mean scores. All 15 subjects passed the outlier rejection. We further analyzed the scores for the 12 redundant images, finding that most subjects assigned the same scores to these repeated images. This further validated the effectiveness of our subjective testing. Finally, the scores of the 15 subjects were averaged.
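
A sketch of this screening procedure. The kurtosis range [2, 4] for the normality test follows ITU-R BT.500; beyond the thresholds stated above, details such as the exact per-image statistics are our assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def subject_is_rejected(subject_scores, mean_scores, std_scores):
    """BT.500-style subject screening.

    subject_scores : one subject's scores over all test images.
    mean_scores, std_scores : per-image mean and standard deviation of
        the scores over all subjects.
    """
    beta = kurtosis(subject_scores, fisher=False)  # non-excess kurtosis
    k = 2.0 if 2 <= beta <= 4 else 4.47            # normal vs. non-normal case
    outliers = np.abs(subject_scores - mean_scores) > k * std_scores
    return outliers.mean() > 0.05                  # reject if > 5% are outliers
```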

    4 Experimental results

We now provide the details of our experimental settings and give a performance comparison for our proposed DIBR synthetic image quality metric on the benchmark IRCCyN/IVC DIBR image dataset and our own dataset. We also briefly discuss the effects of the proposed strategies, including preprocessing, local image saliency based weighting, and network depth.

    4.1 Settings

    4.1.1 Training implementation

Two datasets were used in our experiments: the IRCCyN/IVC DIBR image dataset and our DIBR synthetic image database. We trained the CNN model on our DIBR synthetic image database; the synthetic images were divided into training, validation, and testing sets according to reference image, following a 60%/20%/20% split. Thus, 10 reference images with their associated distorted images were chosen as the training set, while the validation set and testing set each contained 4 reference images and their distorted images. Only the training set and validation set were used during training; the testing set was held out until performance evaluation.

In experiments, we set the ADAM optimizer learning rate λ = 0.0001, performing stochastic gradient descent (SGD) for 20 epochs in training, and saving the models with the top five Pearson linear correlation coefficient (PLCC) performance on the validation set. For each epoch, the training and validation sets were shuffled. We calculated local image saliency weights for the whole image and patches using the saliency model in Ref. [28]. During the testing stage, the predicted scores from the five restored models were averaged.

    4.1.2 Evaluation methodology

Three indicators were used to evaluate the performance of our proposed metric: the Pearson linear correlation coefficient (PLCC), root mean square error (RMSE), and Spearman rank order correlation coefficient (SROCC). These indicators measure the consistency, accuracy, and monotonicity between the predicted quality scores and subjective scores. PLCC and SROCC range from 0 to 1, higher values indicating better performance. RMSE ranges from 0 to +∞, smaller values indicating better performance.
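
These indicators can be computed directly with NumPy and SciPy; a minimal sketch:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(predicted, subjective):
    """Compute PLCC (consistency), SROCC (monotonicity), and RMSE (accuracy)."""
    predicted, subjective = np.asarray(predicted), np.asarray(subjective)
    plcc, _ = pearsonr(predicted, subjective)
    srocc, _ = spearmanr(predicted, subjective)
    rmse = np.sqrt(np.mean((predicted - subjective) ** 2))
    return plcc, srocc, rmse
```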

A total of 13 IQA algorithms were selected for comparison. These methods can be divided into two categories: traditional 2D IQA metrics and DIBR-related IQA metrics. For 2D image quality assessment, we chose four FR-IQA methods, namely PSNR, SSIM [11], VSNR [40], and FSIM [41], as well as three NR-IQA methods, namely BRISQUE [13], NIQE [14], and SSEQ [42]. For DIBR-related methods, four FR-IQA methods, namely 3DSwIM [7], MW-PSNR [25], MP-PSNR [26], and SDRD [6], as well as two recently published NR-IQA methods, APT [8] and NIQSV+ [9], were chosen.

For fairness of performance comparison, the predicted scores of the compared metrics were scaled to the subjective scores (i.e., MOS values) via third-order polynomial fitting. The polynomial fitting, as suggested by ITU-R BT.500 [39], is conducted as follows:

$$ \hat{s} = a s^3 + b s^2 + c s + d \tag{4} $$

where s is the predicted score, ŝ is the fitted score, and a, b, c, d are the coefficients of the polynomial fitting function, determined from the predicted results and the associated subjective scores. Note that our predicted scores are directly trained to fit the subjective scores, so they do not require scaling.
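
A minimal NumPy sketch of this fitting step (Eq. (4)); the variable names are ours.

```python
import numpy as np

def fit_to_mos(predicted, mos):
    """Map a metric's raw scores onto the MOS scale via Eq. (4)."""
    coeffs = np.polyfit(predicted, mos, deg=3)  # [a, b, c, d]
    return np.polyval(coeffs, predicted)        # a*s^3 + b*s^2 + c*s + d
```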

    The parameters (if any) in the compared FR-IQA methods were trained on the training dataset, while the predicted scores were fitted using non-linear logistic regression to minimize the errors between the predicted scores and the corresponding subjective scores, as suggested by Ref. [8]. After parameter training, we evaluated each method's performance on the testing dataset. The compared NR-IQA methods were directly evaluated on the testing dataset.

    4.2 Performance on the IRCCyN/IVC DIBR image dataset

We now compare the performance of the proposed algorithm on the IRCCyN/IVC DIBR image dataset with state-of-the-art methods. As mentioned before, we trained the CNN model on the training data of our DIBR image database, where the models with the top five PLCC results on the validation dataset were saved. The RMSE, PLCC, and SROCC for our metric on the IRCCyN/IVC DIBR image dataset are listed in Table 2. Our proposed algorithm achieves values of 0.3820, 0.8112, and 0.7520, respectively, which are better than those of competing methods.

    From Table 2, we are able to derive two important conclusions.

Firstly, existing IQA algorithms that were designed for traditional 2D images do not perform effectively. The FR-IQA metrics are better than the NR-IQA metrics: FSIM [41] achieves 0.5887, 0.4671, and 0.3286 for RMSE, PLCC, and SROCC, respectively. Note that NR-IQA metrics are not able to predict DIBR synthetic image scores at all well; e.g., NIQE [14] achieves 0.1152 and 0.1181 for PLCC and SROCC, respectively. This is mainly due to their dependency on natural image distortion priors. In particular, NIQE predicts image quality by evaluating the effect of distortions in terms of the NSS distribution. As mentioned before, geometric distortions are different from traditional image distortions, so the learned model is inadequate for assessing DIBR synthetic images.

Secondly, although DIBR-related IQA algorithms perform better than those designed for traditional 2D images, prior methods are still insufficient. The best DIBR-related IQA metric is SDRD [6], which achieves 0.3901, 0.8104, and 0.7610 for RMSE, PLCC, and SROCC, respectively. State-of-the-art NR-IQA metrics, such as APT [8] and NIQSV+ [9], achieve similar performance. Our metric outperforms these two relatively new NR-IQA metrics for DIBR synthetic images, and indeed achieves performance competitive with that of the state-of-the-art FR-IQA metric, SDRD. Note, however, that SDRD is a full-reference method while ours is independent of reference images.

    4.3 Cross validation

Table 2 RMSE, PLCC, and SROCC on IRCCyN/IVC DIBR image dataset

    Table 3 RMSE, PLCC, and SROCC on testing dataset of our DIBR synthetic image database

To avoid training bias in our CNN model, we conducted cross validation on our own database. In particular, we evaluated the RMSE, PLCC, and SROCC of our metric and the DIBR-related metrics on the testing set of our database. The results are listed in Table 3.

    Our metric achieves the best performance on our DIBR synthetic image database in comparison with other DIBR-related metrics. Note that SDRD [6] is inferior to our method on the new database.

The performance of most existing DIBR-related metrics decreases when tested on our database. This implies that the lack of diversity in the IRCCyN/IVC DIBR image dataset has caused training bias. The variation in RMSE across these two databases is shown in Table 4, which shows that RMSE is lower when testing on our database. Note that the RMSE variation of 3DSwIM is the most significant. This is perhaps due to the weighting of face features in 3DSwIM, leading to training bias.

4.4 Ablation study

Several strategies are involved in our method. The most important issues concerning prediction performance are preprocessing, local image saliency based weighting, and network depth. We therefore conduct an ablation study to demonstrate the effects of these strategies.

    Table 4 RMSE on IRCCyN/IVC DIBR image dataset and testing dataset of our DIBR synthetic image database

    4.4.1 Preprocessing

    We first evaluated preprocessing. While our preprocessing strategy uses raw images directly, we also implemented gray-scale transformation and local contrast normalization of the training images for comparison; the network architecture remained the same. The RMSE, PLCC, and SROCC values are listed in Table 5.

We can see from Table 5 that our preprocessing strategy achieves better performance on the testing set of our DIBR synthetic image database. This implies that gray-scale transformation and local contrast normalization may discard useful information.

    4.4.2 Local image saliency based weighting

To demonstrate the effectiveness of local image saliency based weighting, we separately trained the CNN model with different modalities: the CNN network without weighting, the same model with image saliency based weighting as deployed in Ref. [17], and our proposed model with local image saliency based weighting. In the first case, the predicted patch scores are averaged to fit the subjective score. In the second case, the predicted patch scores are weighted by image saliency, formulated as

$$ w_x = \sum_{p \in \Omega_x} S'(p) \tag{5} $$

Note the difference between image saliency based weighting in Eq. (5) and local image saliency based weighting in Eq. (1): image saliency considers saliency alone, while local image saliency considers the saliency variation between the local region and the whole image. The RMSE, PLCC, and SROCC for the testing dataset of our DIBR synthetic image database are listed in Table 6, which shows that the performance of the unweighted CNN model is greatly improved by image saliency based weighting, as in Ref. [17]. However, our proposed local image saliency based weighting further improves the indicators on the testing dataset. This implies that local image saliency based weighting is better suited to assessing DIBR synthetic images.

A visualization of local image saliency based weighting is given in Fig. 8. Figure 8(a) shows the saliency map of the entire image, while Fig. 8(b) shows the saliency maps of small patches, merged into an entire image-sized map. Figure 8(c) visualizes the actually used local image saliency based weights, as calculated by Eq. (1). Clearly, the weights from the saliency map and from local image saliency are quite different. The red box in Figs. 8(a) and 8(c) shows cracks in the wall assigned a low weight by the saliency map but a high weight by our proposed local image saliency: local image saliency based weighting provides a better representation of the contributions of patch scores.

    4.4.3 Network depth

A deeper network architecture has been suggested [16] to achieve better prediction performance on traditional 2D image databases. We validated this assumption on our augmented DIBR synthetic image dataset. Figure 9 shows how RMSE varies with network depth, i.e., the number of convolutional layers. We observe that RMSE decreases on both the training dataset and the validation dataset with increasing network depth, agreeing with the assumption that greater network depth benefits prediction performance. However, the performance gain decreases significantly when the network depth exceeds nine. Also, deeper convolutional stacks may lead to overfitting on the validation dataset unless care is taken. In practice, we use a network architecture with nine convolutional layers.

    Table 5 RMSE, PLCC, and SROCC for the testing set of our DIBR synthetic image database with different preprocessing strategies

    Table 6 RMSE, PLCC, and SROCC for the testing dataset of our DIBR synthetic image database with different network modalities

    Fig. 8 Visualization of local image saliency based weighting. (a) Saliency map of the entire distorted image. (b) Merged saliency maps of the associated small image patches. All saliency maps were produced by Ref. [28]. (c) Local image saliency based weights, brighter blocks indicating higher weights.

    Fig. 9 Performance of CNN models with different network depths(numbers of convolutional layers).

    5 Application

The quality of synthetic images is key to the success of DIBR-based systems. For instance, a quality measure can be used to guide the coding of reference texture images and depth maps. It can also be used to evaluate hole-filling algorithms. Here we use the proposed synthetic image quality metric to optimize the prediction of reference viewpoints. We first describe the baseline model of reference viewpoint prediction, and then introduce a novel model using our proposed metric.

    5.1 Baseline model of reference viewpoint prediction

Suppose a user navigates within a virtual environment. Reference viewpoints are predicted according to user movement, and for each, an associated reference texture image and depth map are transmitted to the user end for virtual view synthesis. Ideally, reference viewpoint prediction is frequent, to reduce errors. However, the bottleneck of reference viewpoint transmission is bandwidth: the reference data which can be transmitted are limited. Previous work [43, 44] adopts a strategy that predicts reference viewpoints at a constant frequency. Shi et al. [45] adopt another mechanism that predicts a new reference viewpoint when the MSE between the synthetic image and the undistorted image exceeds a preset threshold. We choose these two models as baselines to demonstrate the effectiveness of our proposed metric. Following Ref. [45], we predict reference viewpoints by assessing the quality of the synthetic images; however, our metric requires no reference, and can directly assess the synthetic images without needing the undistorted images.
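
A sketch of this quality-driven selection loop; `synthesize`, `quality`, and `threshold` are hypothetical stand-ins for the DIBR renderer, the proposed NR metric, and the preset acceptable score.

```python
def select_reference_viewpoints(viewpoints, synthesize, quality, threshold):
    """Walk along the viewpoint path, synthesizing each view from the most
    recent reference; promote a viewpoint to a new reference whenever the
    predicted no-reference quality drops below the threshold.
    """
    references = [viewpoints[0]]
    for v in viewpoints[1:]:
        view = synthesize(references[-1], v)   # DIBR from current reference
        if quality(view) < threshold:          # quality degraded too far
            references.append(v)               # request/transmit new reference
    return references
```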

    5.2 Performance

Suppose the user navigates the virtual environment along a horizontal path. The path is equally sampled, and each sample indicates a virtual viewpoint. The positions of these virtual viewpoints can then be denoted as (…, v−1, v0, v1, …), where v0 denotes the initial viewpoint. Figure 10 shows the undistorted image and the synthetic images for v0. Note that the two synthetic images use different reference viewpoints, predicted by MSE and by our proposed metric respectively.

We can see from Fig. 10 that the two synthetic images can hardly be distinguished. However, the predicted reference viewpoints are v4 using MSE and v7 using the proposed metric, respectively. We choose the predicted reference viewpoint as the new initial viewpoint, repeating the reference viewpoint prediction until the virtual viewpoint reaches v100. A total of 25 reference viewpoints are suggested by MSE, while only 17 reference viewpoints are suggested by our proposed metric. In this way, the amount of transmitted reference data is reduced while visual quality is maintained.

    Fig. 10 Visual quality of synthetic images with different predicted reference viewpoints. (a) Undistorted image of v0. (b) Synthetic image of v0 using the reference view of v4, as suggested by MSE. (c) Synthetic image of v0 using the reference view of v7, as suggested by our metric.

We also simulated virtual environment navigation on a Nexus 5 device. The reference data was transmitted to the client when the quality of the synthetic image fell below a preset threshold. We measured the bandwidth required by MSE-based reference viewpoint selection and by ours. As Table 7 shows, our metric saves 29% bandwidth on average in comparison to the metric in Ref. [45].

    Table 7 Transmission frequency and average bandwidth cost of different reference viewpoint selection models

    6 Conclusions

Compared to existing DIBR-related IQA methods, our work has several highlights. Firstly, it is the first CNN-based NR-IQA method for DIBR synthetic images, achieving significant performance improvements over state-of-the-art 2D and DIBR-related IQA methods; our proposed local image saliency based weighting further benefits prediction performance. Secondly, we have designed a diverse DIBR synthetic image dataset, which helps to reduce training bias in our CNN model. Although we have achieved competitive performance on DIBR synthetic images, there is still room for improvement. For instance, the assignment of patch scores needs further consideration to better fit human perception. In the future, we hope to improve the proposed metric by integrating local image saliency into an end-to-end framework.

    Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments. They would also like to thank Kai Wang and Jialei Li for their assistance in dataset construction and public release. The work was sponsored by the National Key R&D Program of China (No. 2017YFB1002702) and the National Natural Science Foundation of China (Nos. 61572058, 61472363).
