
    Learning adaptive receptive fields for deep image parsing networks

    2018-10-17 07:04:10
    Computational Visual Media, Issue 3, 2018

    Zhen Wei, Yao Sun, Junyu Lin, and Si Liu

    Abstract  In this paper, we introduce a novel approach to automatically regulate receptive fields in deep image parsing networks. Unlike previous work, which placed much importance on obtaining better receptive fields using manually selected dilated convolutional kernels, our approach uses two affine transformation layers in the network's backbone and operates on feature maps. Feature maps are inflated or shrunk by the new layer, thereby changing the receptive fields in the following layers. By use of end-to-end training, the whole framework is data-driven, without laborious manual intervention. The proposed method is generic across datasets and different tasks. We have conducted extensive experiments on both general image parsing tasks and face parsing tasks as concrete examples, to demonstrate the method's superior regulation ability compared to manual designs.

    Keywords: semantic segmentation; receptive field; data-driven; face parsing

    1 Introduction

    In deep neural networks, the notion of a receptive field refers to the data that are path-connected to a neuron [1]. After the introduction of fully convolutional networks (FCN) [2], receptive fields have become especially important for deep image parsing networks; they can significantly affect the network's performance. As discussed in Ref. [3], a small receptive field may lead to inconsistent parsing results for large objects, while a large receptive field may ignore small objects and classify them as background. Even if such extreme problems do not arise, unsuitable receptive fields can still impair performance.

    Recent works such as Refs. [4, 5] have already discussed adapting network structures to use different receptive fields. Dilated convolutional kernels are often used for this purpose: the kernels' receptive field size can be controlled by an appropriate choice of dilation values (typically integers). However, this approach has several drawbacks. Firstly, the dilation values are treated as hyper-parameters in network design. Their selection is based on the designer's observations or the results of a series of trials on a certain dataset, which is laborious and time-consuming. Secondly, such choices are not generic across different image parsing tasks, or even across datasets for the same task; during network transfer, the selection procedure must be performed again. Thirdly, dilated convolutional kernels only allow discrete receptive field sizes. When a dilation value is incremented, the corresponding receptive field (e.g., of the fc6 layer in VGG [6]) may expand by tens or even hundreds of pixels, making it hard to control the receptive field accurately.
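The discreteness problem can be made concrete with a quick calculation. The sketch below is illustrative only: it assumes a 7×7 fc6 kernel on features with overall stride 8, and the 212-pixel pool5 receptive field given later in Section 3.1. Each unit increment of the dilation value then jumps the fc6 receptive field by 8×(7−1) = 48 pixels, with no way to reach the sizes in between.

```python
def effective_kernel(k, d):
    """Spatial extent of a k x k convolution kernel with dilation d."""
    return d * (k - 1) + 1

def fc6_receptive_field(k, d, pool5_rf=212, stride=8):
    """Receptive field of fc6: pool5's field plus `stride` input pixels
    for every extra tap the dilated kernel spans."""
    return pool5_rf + stride * (effective_kernel(k, d) - 1)

# Incrementing the dilation value jumps the receptive field in coarse,
# fixed-size steps of 8 * (7 - 1) = 48 pixels.
sizes = [fc6_receptive_field(7, d) for d in (1, 2, 3, 4)]
print(sizes)
```

Any receptive field between two of these steps is simply unreachable with integer dilations, which is the gap the real-valued inflation factor introduced below is meant to fill.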

    The contribution of this paper is a learning-based, data-driven method for automatically regulating receptive fields in deep image parsing networks. The main idea is to introduce a novel affine transformation layer, the inflation layer, before the convolutional layer whose receptive field is to be regulated. This inflation layer uses interpolation algorithms to enlarge or shrink feature maps. The following layers perform inference on these inflated features, thus changing the receptive fields after the inflation layer. Then, inference results (before softmax normalization) are resized to a fixed size by an interpolation layer. During training, the inflation factor, f, embedded in both the inflation layer and the interpolation layer has computable derivatives, and is trained end-to-end together with the network backbone. As f may be a real number, the inflation layer can produce a more fine-grained receptive field, and is trained only once.

    To corroborate the method's effectiveness, we have conducted experiments on both general image parsing tasks and face parsing. With proper initialization, the proposed method can achieve comparable, or even superior, results compared to the best manually selected dilated convolutions. In particular, due to the strong regulation ability brought by our method, the improved model achieves state-of-the-art face parsing accuracy on the Helen dataset [7, 8].

    The rest of this paper is organized as follows. In Section 2, we review related work on image parsing, especially focusing on issues relevant to receptive fields. Section 3 provides details of the new affine transformation layer and the derivatives of the inflation factor f. Section 4 describes the experimental settings, while Section 5 discusses our experimental results. Section 6 concludes the paper.

    This paper extends our former conference publication [9]. Additional content here mainly includes: (i) a more elaborate discussion of several issues during optimization (in Section 5), (ii) detailed network settings used in experiments (in Table 1), and (iii) further qualitative and quantitative results (in Tables 2–5 and Figs. 2 and 3).

    2 Related work

    This section provides a brief review and discussion of related work.

    2.1 FCNs and dilated convolution

    The introduction of FCNs [2] has emphasised the importance of receptive fields. The forward process of FCNs to generate dense classification results is equivalent to a series of inferences using sliding windows on the input image. Using fixed-stride sliding, inference at the pixel level is solely based on data inside the window; the window is, in this case, the receptive field of the network. In Ref. [2], the authors discuss dilated convolution, but do not make use of it in their network. DeepLab [4] uses dilated convolutions to reduce pooling strides while expanding receptive fields and reducing the number of parameters in the fc6 layer. In Ref. [5], the authors append a series of dilated convolutional layers after an FCN backbone (the frontend) to expand the receptive field. Recently, in DeepLab v2 [10], the authors manually designed four different dilated convolutions which are used in parallel to achieve multi-scale parsing.

    However, these dilation designs are all based on trials or the designers' observations of the dataset. This is not conceptually difficult, but it is laborious and time-consuming. This paper offers the first way to replace such a process with an automatic method.

    2.2 Regulating receptive fields with input variance

    Adding input variability can also be used to provide dynamic receptive fields for a network. Zoomout [11] uses 4 inputs at different scales during inference to capture both contextual and local information. DeconvNet [3] applies prepared detection bounding boxes and crops out object instances; inference is conducted on both these sub-images and the whole image.

    Such approaches require complex pre- and post-processing. Furthermore, they are computationally expensive, as tens or even hundreds of forward propagations may be needed for each input image.

    2.3 Affine transformation in deep networks

    Affine transformations are commonly seen in deep networks. The spatial transformer network (STN) [12] for character recognition uses a side branch to regress a set of affine parameters and applies the corresponding transformation to feature maps. In Ref. [13], the network predicts facial landmarks in both the original image and the transformed sub-images; affine transformation parameters are then obtained by projection between these two sets of landmarks.

    Our method intrinsically differs from such related work. Taking STN as an example:

    · Affine transformation is only a tool used by STN to solve various problems; in particular, STN uses affine transformation to correct spatial variability of input data for recognition. Our method regulates the receptive field in the parsing network.

    · The different aims result in different network structures. Affine parameters used in STN are data-dependent, as each input is different. The parameter f in our method is embedded, and knowledge-dependent (obtained by training): the receptive field should be stable during inference. Our work focuses on replacing the manual receptive field selection process; studies on the use of dynamic receptive fields are not considered here.

    · As the receptive field depends only on size, rotation functionality is discarded in this work, unlike in other work.

    In Ref. [14], deformable convolutions are used to reformulate the sampling process in convolutions in a learning-based approach. Deformable convolutions can also be regarded as a way of reallocating convolutional weights: if nearby weights in lower layers are increased, the receptive fields of the corresponding weights in higher layers become smaller, and vice versa.

    3 Approach

    In this section, we provide details of our method, including the modified network structure, the implementation of the inflation and interpolation layers, and loss guidance for our multi-path network. These allow us to realize multi-scale inference with our data-driven method.

    We use both single-path and multi-path structures. Almost all state-of-the-art deep image parsing networks are either single-path [2, 4, 5, 15] or multi-path [10], so we use these two structures to show that our method is effective and compatible with such state-of-the-art methods.

    3.1 Framework

    Figure 1 presents the details of our framework. The specific settings for the network backbone are provided in Table 1. Using dilated convolutions, pooling strides in pool4 and pool5 are removed. The extent of the receptive field for the fc6 layer is 212×212. Note that we still use dilated convolutions in the fc6 layer to generate different initial receptive fields.

    In the single-path network, the inflation layer and the interpolation layer are inserted before layer fc6 and after layer fc8 respectively. The receptive field is regulated by inflating the pool5 features. To reduce feature variability and increase robustness during optimization, we add a batch normalization (BN) [16] layer before the inflation layer.

    Fig. 1 Framework. (a) Modified single-path network. New layers are inserted before layer fc6 and after layer fc8. (b) Modified multi-path network, in which all branches have the same structure and initialization. Weighted gradient layers are used to break symmetry during training. The specific settings for the single-path network are given in Table 1.

    Table 1 Network structures used,including network backbone,single-path baseline model,and single-path modified model

    In the multi-path version, the layers from BN to the interpolation layer are duplicated, and followed by a summation operation for feature fusion. Each duplicate is initialized in the same way. In order to break this symmetry and achieve discriminative multi-scale inference, a loss guidance layer is added to force each duplicate to focus on a different scale. These issues are explained in detail in the following subsections.

    3.2 Affine transformation layers

    The affine transformation layers comprise the inflation layer and the interpolation layer.

    The inflation layer learns a parameter f, the inflation factor. The feature map is enlarged by the factor f before the following convolution operations. Unlike other deep networks with affine operations [12, 13], regulating receptive fields does not require cropping or rotation, so only one parameter is needed in the inflation layer.

    There are two steps in the inflation operation: coordinate transformation and sampling. To formulate the first process, let (x^s, y^s) and (x^t, y^t) be coordinates in the source feature map (input) and target feature map (output) respectively. The inflation process performs element-wise coordinate projection using:

    (x^t, y^t) = f (x^s, y^s)    (1)

    The size of the feature map changes accordingly:

    (H^t, W^t) = f (H^s, W^s)    (2)

    where H and W are the height and width of the feature maps, and superscripts s and t mean "source" and "target" respectively.

    In the second step, we use a sampling kernel k(·) to assign pixel values in the target feature maps. Let V_i^c denote a pixel value in a target feature map, where i is the pixel index and c is the channel index, and let U_{nm}^c be a pixel value in a source feature map. Then we have

    V_i^c = Σ_{n=1}^{H^s} Σ_{m=1}^{W^s} U_{nm}^c k(x_i^t, f, m) k(y_i^t, f, n)    (3)

    This operation is identical for each input channel. The sampling kernel k(·) could be any differentiable image interpolation kernel. Here we use the bilinear kernel k(x, f, m) = max(0, 1 − |x/f − m|), giving

    V_i^c = Σ_{n=1}^{H^s} Σ_{m=1}^{W^s} U_{nm}^c max(0, 1 − |x_i^t/f − m|) max(0, 1 − |y_i^t/f − n|)    (4)

    The derivative with respect to f is

    ∂V_i^c/∂f = Σ_{n=1}^{H^s} Σ_{m=1}^{W^s} U_{nm}^c [ (∂k(x_i^t, f, m)/∂f) k(y_i^t, f, n) + k(x_i^t, f, m) (∂k(y_i^t, f, n)/∂f) ]    (5)

    where

    ∂k(x, f, m)/∂f = 0 if |x/f − m| ≥ 1, and (x/f^2) sign(x/f − m) otherwise    (6)

    Using the chain rule, the gradient from the inflation layer, G_inf, is

    G_inf = ∂Loss/∂f = Σ_i Σ_c (∂Loss/∂V_i^c)(∂V_i^c/∂f)    (7)

    Additionally, we normalize G_inf by dividing by H^t W^t, the number of pixels in a target feature map.
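As a concrete reference, the two steps above can be sketched as a naive NumPy routine. This is an illustrative implementation of the forward pass only, using the bilinear kernel k(x, f, m) = max(0, 1 − |x/f − m|); a practical network would use a framework's optimized, differentiable resize instead.

```python
import numpy as np

def inflate(U, f):
    """Inflate a feature map U of shape (C, H_s, W_s) by a real factor f.

    Each target pixel (x_t, y_t) is projected back to source coordinates
    (x_t / f, y_t / f) and sampled with the bilinear kernel."""
    C, Hs, Ws = U.shape
    Ht, Wt = int(round(f * Hs)), int(round(f * Ws))
    V = np.zeros((C, Ht, Wt), dtype=U.dtype)
    for y in range(Ht):
        for x in range(Wt):
            sy, sx = y / f, x / f  # invert the coordinate projection
            # Only the (up to four) source pixels nearest (sx, sy) get
            # non-zero bilinear weights.
            for n in range(max(0, int(sy)), min(Hs, int(sy) + 2)):
                for m in range(max(0, int(sx)), min(Ws, int(sx) + 2)):
                    w = max(0.0, 1 - abs(sx - m)) * max(0.0, 1 - abs(sy - n))
                    V[:, y, x] += w * U[:, n, m]
    return V
```

With f = 1 the routine reproduces its input exactly; with a fractional f it produces the intermediate feature sizes that integer dilation values cannot reach.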

    The interpolation layer has almost the opposite functionality. In this layer, feature maps are resized back to a fixed size. The resizing factor f′ used in interpolation layers is

    f′ = F / f    (8)

    where F is a constant determined by the desired output size. In our implementation F is 8.11, to resize the final result to be as large as the label map or input image.

    The interpolation layer provides a further contribution to the inflation factor's gradient:

    G_itp = (∂Loss/∂f′)(∂f′/∂f) = −(F/f^2) ∂Loss/∂f′    (9)

    where ∂Loss/∂f′ has exactly the same form as in Eq. (7). In practice, we simply add these two gradients together to update the inflation factor f:

    ∂Loss/∂f = G_inf + G_itp    (10)

    When considering specific layers in our network, we obtain the corresponding concrete forms of these gradients, where C is the number of channels in the BN layer, and subscripts bn and img refer to the BN layer and the input image respectively.

    In this way, it is possible to learn the inflation factor during end-to-end training.
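Putting the gradient terms together, a single update of f can be sketched as below. This is a schematic only, assuming the two gradient terms have already been computed by backpropagation; the clipping range [0.25, 4] anticipates the safeguard described in Section 4.2.

```python
def update_inflation_factor(f, g_inf, g_loss_fprime, F, lr,
                            f_min=0.25, f_max=4.0):
    """One gradient-descent step for the inflation factor f.

    g_inf         : gradient from the inflation layer (Eq. (7))
    g_loss_fprime : dLoss/df' from the interpolation layer
    F             : constant in f' = F / f, so df'/df = -F / f**2
    """
    g_itp = g_loss_fprime * (-F / f ** 2)   # chain rule through Eq. (8)
    g_total = g_inf + g_itp                 # sum the two contributions
    f_new = f - lr * g_total
    return min(f_max, max(f_min, f_new))    # keep f in a safe range
```

Clamping f bounds both the memory cost of inflated feature maps and the information loss from extreme shrinkage.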

    3.3 New receptive field

    To calculate the extents of the new receptive fields, we can transform the question into one of obtaining an equivalent kernel size for the fc6 layer while leaving the feature maps unchanged. Denoting the original kernel size by k, Eq. (2) gives the new equivalent size k′ = (k − 1)/f + 1. Thus the extent of the new receptive field is 212 + 8×(k′ − 1), where 212 is the receptive field in the pool5 layer, and 8 is the overall stride from the conv1_1 layer to the pool5 layer in the network backbone.
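To make the granularity argument concrete, the calculation above can be written out as follows. This is a sketch assuming the equivalent-kernel relation k′ = (k − 1)/f + 1 for an fc6 kernel of size k; a real-valued f then yields receptive field sizes that no integer dilation value can produce.

```python
def new_receptive_field(k, f, pool5_rf=212, stride=8):
    """Receptive field of fc6 after inflating pool5 features by factor f."""
    k_eq = (k - 1) / f + 1          # equivalent kernel size on the original map
    return pool5_rf + stride * (k_eq - 1)

# f < 1 shrinks the features and so enlarges the receptive field; f > 1
# does the opposite, and fractional f interpolates between dilation steps.
print(new_receptive_field(7, 0.5))
```

Because f varies continuously, so does the receptive field, unlike the 48-pixel jumps produced by incrementing a dilation value.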

    3.4 Loss guidance for multi-path networks

    Deep networks with multi-scale receptive fields have brought performance improvements in image parsing tasks [10]. Such networks usually use several slightly different parallel paths to achieve multiple receptive fields. Our method can also be used in similar structures to realize further improvements, taking the place of hand-crafted dilated convolutional kernels.

    To achieve this, as shown in Fig. 1(b), layers fc6, fc7, and fc8 are first copied in parallel. The outputs of the fc8 layers are fused by summation. Then, inflation and interpolation layers are inserted before each fc6 layer and after each fc8 layer. A shared BN layer is appended after pool5.

    However, this framework is symmetric and is unsuited to learning discriminative features. To break this symmetry, a weighted gradient layer is added behind each interpolation layer during training. Following the class-rebalancing strategy in Ref. [17] and the use of weighted loss in Ref. [18], the weighted gradient layer weights the gradient values g_i^c if the ground-truth label l_i of the corresponding pixel (the ith pixel in the cth channel) is in a given label set S. The weight w is usually greater than 1. Thus

    g′_i^c = w g_i^c if l_i ∈ S, and g′_i^c = g_i^c otherwise

    4 Experiments

    We conducted experiments to show the superiority of our method in its ability to select a finer receptive field. The experiments consisted of three parts:

    ·We first reproduced the receptive field search process by using dilated convolutional kernels and found the optimal receptive field manually.

    · Leaving the network backbone intact, the single-path network was modified by inserting the new affine transformation layers. The inflation factor was learned with different initial dilation values.

    · We used the best two and three receptive field settings according to the results of the first experiment to build a bi-path network and a tri-path network as baseline models. For the modified models, parallel paths were constructed with the same structure. By deploying loss guidance, each parallel path learned a discriminative inflation factor and features.

    The results demonstrate the effectiveness of the proposed method in learning and obtaining better receptive fields with little manual intervention.

    4.1 Dataset and data preprocessing

    The Helen dataset [7, 8] was used in the face parsing task. It contains 2330 facial images with 11 manually labelled facial components, including eyes, eyebrows, nose, lips, and mouth. The hair region is only coarsely annotated; it is thus not accurate enough for comparison. We adopted the same dataset division as in Refs. [18, 19], using 100 images for testing.

    All images were aligned using similar steps to those in Ref. [19]. We used Ref. [20] to generate facial landmarks and align each image to a canonical position. After alignment, each image was cropped or padded and then resized to 500×500 pixels.

    The augmented PASCAL VOC 2012 segmentation dataset was used in the general image parsing task. It is based on the PASCAL VOC 2012 segmentation benchmark [21] with extra annotations provided by Ref. [22]. It has 12,031 images for training and 1499 images for validation, covering 20 foreground object classes and one background class.

    4.2 Implementation details

    Structures of models modified by our method are shown in Table 1 and Fig.1.

    In the face parsing task, we trained each model with mini-batch gradient descent. The momentum, weight decay, and batch size were set to 0.9, 0.0005, and 2 respectively. The base learning rate was 1e−7, while the softmax loss was normalized by batch size. A total of 55,000 iterations were used; training stopped after 50,000 iterations.

    The batch normalization layer used default settings. Inflation factors were initialized to 1, and their learning rates were the base learning rate multiplied by a weight ranging from 3×10^4 to 9×10^4. No weight decay was applied to inflation factors during training. Inflation factors were restricted to the range [0.25, 4] in order to avoid numerical problems or excessive memory usage.

    In the general image parsing task, we realized the single-path version with a batch size of 20 and a learning rate multiplier for f of 3×10^5. A total of 9600 iterations were used with 3 learning rate steps. The great data variability in the VOC dataset, together with the data shuffling and random cropping strategies, posed significant obstacles to optimizing f. To increase robustness, the following strategies were used during training: (a) clipping exceptional ∂Loss/∂f values; (b) when updating f, masking gradients from background areas by multiplying them by a weight less than 1, preventing them from becoming dominant; (c) replacing the original learning rate step with γ value 0.1 by two smaller steps 200 iterations apart with γ values of 0.32.
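Strategies (a) and (b) can be sketched as a small helper. This is illustrative only: the clip threshold and background weight below are hypothetical placeholders, since the text states only that exceptional values are clipped and that the background weight is less than 1.

```python
import numpy as np

def stabilized_f_gradient(per_pixel_grads, is_background,
                          clip=1.0, bg_weight=0.3):
    """Aggregate per-pixel contributions to dLoss/df robustly.

    (a) clip exceptional values; (b) down-weight background pixels so
    their gradients cannot dominate the update of f."""
    g = np.clip(per_pixel_grads, -clip, clip)      # strategy (a)
    g = np.where(is_background, bg_weight * g, g)  # strategy (b)
    return float(g.sum())
```

Without such damping, the large, mostly-background VOC images would pull f around with every mini-batch, which matches the fluctuations discussed in Section 5.2.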

    4.3 Comparison with manual selection

    4.3.1 Single-path models

    In the face parsing task, we quantitatively evaluated and compared our model with baseline models using F-measures: see Table 2. First, we manually determined the best receptive field using dilated convolutional kernels based on the baseline models, by trying a series of dilation values for each model and selecting the one providing the highest F-score as the optimal manually designed model.

    Next, the other, unselected networks were modified using our proposed method, and their receptive fields were used for initialization. The results in Table 2 show that almost all modified models (except for a dilation value of 2, which is further discussed in Section 5.1) see an improvement, providing results comparable to those from the optimal manually designed model. The new receptive fields, e.g., of size 292, are more fine-grained and cannot be obtained using the dilation algorithm. Their results equal, or even surpass, those of the best manually designed models.

    Table 2 Quantitative evaluation of baseline models and modified models using the Helen dataset. Key: dilation: dilation values in the fc6 layer. rf-fc6: the extent of the receptive field in the fc6 layer. ?: the inflation factor starts being updated after 10,000 iterations of training

    Qualitative comparisons for the face parsing task are provided in Fig. 2. Results in Figs. 2(d) and 2(e) show the improvements brought by our method. Smaller semantic areas are parsed better, especially the eyebrows and nose. Face boundaries are smoother and more accurate. Results in Figs. 2(c) and 2(d) show that the proposed models provide comparable results to manually designed models: our method can replace previous manual receptive field selection methods.

    For the general image parsing task, a similar process was used. Evaluation was conducted on the VOC validation set using the mean IoU metric (average Jaccard index).

    Table 3 provides quantitative results. Modified models with initial dilation values of 16, 18, and 20 show noticeably improved results that are comparable with those of the best manually designed models, with receptive fields adjusted to an optimal range. Note that with the current network backbone, dilated convolutional kernels cannot generate a receptive field of size 396, showing that the proposed method can generate receptive fields with a finer granularity.

    Choosing different dilation values when initializing the modified models helps to evaluate the potential of the proposed method. Modified models with small initial dilation values have improved parsing accuracy but still perform worse than the best manually designed one, mainly due to the shrinkage of features and the resulting information loss. On the other hand, models with large initial dilation values perform better than the optimal baseline model. The reasons may vary, but one possibility is that the modified models learn from data with dynamic sizes while f is changing, with effects similar to those of data augmentation methods. These phenomena are further discussed in Section 5.1.

    Qualitative comparisons for the general image parsing task are provided in Fig. 3. Results in (d) and (e), and in (f) and (g), show the improvements brought by our method. With finer receptive fields, results from modified models are generally more consistent. Results in (d) have clearer shapes and boundaries than results in (e). Results in (c), (f), and (g) show that if an unsuitable initial receptive field is used, modified models are improved but still not comparable to the best manually designed models. Results in (c) and (d) show that, if the initial receptive field is appropriately set, our models provide results very close to those of manually designed models: our method can replace previous manual methods of receptive field design.

    Table 3 Quantitative evaluation of baseline models and modified models using the PASCAL VOC 2012 validation set

    Fig. 2 Face parsing results for the Helen dataset. (a) Original images. (b) Ground truth. (c) Baseline model with dilation value of 4 (the best manually selected receptive field). (d) Modified model with initial dilation value of 12. (e) Baseline model with dilation value of 12. (d) and (e) show the improvements brought by our method. Smaller semantic areas have better parsing results, especially the eyebrows and nose. Face boundaries are smoother and more accurate. (c) and (d) show that our models have very similar ability to manually designed models: our method can replace manual receptive field design processes.

    These results demonstrate that,with proper initial settings,the proposed method is able to help deep image parsing networks find better receptive fields automatically,providing results that are equivalent to,or better than,the best manually designed one.

    4.3.2 Multi-path models

    A bi-path network and a tri-path network were built for use in a face parsing experiment. As baseline models, the dilated convolutional kernels with best accuracy were selected: kernels with dilation values of 4 (best overall results, with highest eye F-score) and 6 (highest nose and mouth F-scores) for the bi-path network, and dilation values of 4, 6, and 8 (providing the highest face F-score) for the tri-path network.

    Fig. 3 General image parsing results on the PASCAL VOC 2012 validation set. (a) Original images. (b) Ground truth. (c) Baseline model with dilation value of 12 (the best manually selected receptive field). (d) Modified model with initial dilation value of 20. (e) Baseline model with dilation value of 20. (f) Modified model with initial dilation value of 4. (g) Baseline model with dilation value of 4. Results in (d) and (e), and in (f) and (g), show the improvements brought by our method. With finer receptive fields, results from the modified model are generally more consistent. Results in (d) have clearer shapes and boundaries than results in (e). Results in (c), (f), and (g) show that with poor initial receptive fields, modified models are still improved but not as good as the best manually designed models. Results in (c) and (d) show that, if the initial receptive field is properly set, our model has comparable performance to the manually designed model: our method can replace previous receptive field design processes.

    As a comparison, the parallel paths in both the modified bi-path and tri-path networks were symmetric, using an initial dilation value of 8. The weight w used in the weighted gradient layer was 1.2.

    Results in Table 4 show that the proposed method is able to obtain better receptive fields for each parallel path, providing superior results to the manually designed network. We observe that the loss guidance manages to break the symmetry in the network structure and learn discriminative features.

    Table 4 Quantitative evaluation of multi-path versions of baseline models and modified models using the Helen dataset [7, 8]. Each parallel path in the modified network was initialized with a dilation value of 8

    4.3.3 Comparison with previous face parsing methods

    Table 5 shows a quantitative comparison for face parsing between our method and other state-of-the-art methods. We use reported results from Refs. [8, 19, 23]. Our method used a single-path network with an initial dilation value of 8. Even without CRF or RNN post-processing, our method still achieves the highest accuracy.

    5 Discussion

    5.1 Choosing proper initial receptive fields

    Although our method has a strong ability to regulate receptive fields, suitable initial dilation values must be chosen to get the best results. Figures 4 and 5 show typical fluctuations in f during training for both tasks.

    With initial receptive fields much smaller than the desired one, f is hard to optimize, as the network attempts to keep it larger than 1 (see "dilation 2" in Fig. 4). The shrinkage of features results in information loss, impairing parsing performance. In the face parsing task, even with some strategies, e.g., beginning to update f after 10k iterations (see "dilation 2 after 10k" in Fig. 4), f goes down but does not reach the expected value. Consequently, modified models with small initial receptive fields provide improved results, but they are still not comparable to those from the best manually designed models. In the general image parsing task, models with small initial dilation values are sometimes trapped in local minima where f fluctuates around a value larger than 1 (see Fig. 6). On the other hand, using extremely large initial dilations requires more extensive learning of f, leading to unaffordable memory loads and time costs, as the feature maps become correspondingly much larger. In summary, our suggestion is to use moderately large dilation values for initialization, but not too large.

    5.2 Optimization for the general dataset

    Unlike the face parsing task, in which images are coarsely aligned and semantic constituents from different images are of similar size (e.g., eyes, lips), object sizes in general datasets have much greater variability, making optimizing f rather more difficult. Even with proper initialization and identical network settings, while f stays in a certain range, it does not converge to a specific value (see Fig. 7). The results shown in Table 3 are typical examples.

    6 Conclusions

    In this paper, we have introduced a new automatic regulation method for receptive fields in deep image parsing networks. This data-driven approach is able to replace existing hand-crafted receptive field selection methods. It enables deep image parsing networks to obtain better receptive fields with finer granularity in a single training process. Experimental results using the Helen and PASCAL VOC 2012 datasets demonstrate the effectiveness of our method in comparison to existing methods.

    Table 5 Quantitative comparison of our method and other face parsing models on the face parsing task.Our method performs best

    Fig. 5 Typical fluctuations in f during training for the general image parsing task. Modified models used initial dilation values of: (a) 4, (b) 6, (c) 16, (d) 18, (e) 20. Unlike the training process for the face parsing task, f shows more noticeable fluctuation due to the high data variability in the VOC dataset.

    Fig. 6 Fluctuation of f during training in the general image parsing task, using the same initial network settings. Only changes in the first 2500 iterations are plotted here. The initial dilation value was 4, much smaller than the optimal value. In this case, f may sometimes become trapped in local minima and stay near 1. Small initial dilation values are to be avoided.

    Fig. 7 Fluctuation of f during training for the general image parsing task, with identical initial network settings. Only changes in the first 3000 iterations are plotted here. The initial dilation value was 18. Due to the great variability during optimization, f reaches a range of values instead of stopping at a specific number.

    Acknowledgements

    This work was supported by the National Natural Science Foundation of China (Nos. U1536203, 61572493), the Cutting Edge Technology Research Program of the Institute of Information Engineering, CAS (No. Y7Z0241102), the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of the Ministry of Education (No. Y6Z0021102), and Nanjing University of Science and Technology (No. JYB201702).

国产精品99久久久久久久久| 九色成人免费人妻av| 99久久成人亚洲精品观看| 亚洲自拍偷在线| 午夜激情福利司机影院| 日本免费在线观看一区| 69人妻影院| 97超碰精品成人国产| 国产成人精品婷婷| 国产91av在线免费观看| 欧美日韩在线观看h| 成人av在线播放网站| 欧美区成人在线视频| 视频中文字幕在线观看| 秋霞伦理黄片| 日韩欧美三级三区| 久久草成人影院| 男人舔奶头视频| 在现免费观看毛片| 在线天堂最新版资源| 毛片一级片免费看久久久久| 中文资源天堂在线| 久久人人爽人人爽人人片va| 老师上课跳d突然被开到最大视频| 婷婷色综合大香蕉| 桃色一区二区三区在线观看| 日韩欧美 国产精品| 欧美成人一区二区免费高清观看| 日韩国内少妇激情av| 视频中文字幕在线观看| 免费av观看视频| 成人特级av手机在线观看| 亚洲av成人精品一区久久| 久久久久精品久久久久真实原创| 99热这里只有是精品在线观看| 1000部很黄的大片| 一区二区三区乱码不卡18| 亚洲欧美日韩东京热| 一级二级三级毛片免费看| 日韩一区二区视频免费看| 国产精品人妻久久久影院| 国产乱人偷精品视频| 亚洲精品成人久久久久久| 成年免费大片在线观看| 永久网站在线| 69av精品久久久久久| 国产免费男女视频| or卡值多少钱| 美女被艹到高潮喷水动态| 亚洲国产精品专区欧美| 国产91av在线免费观看| 亚洲成人精品中文字幕电影| 免费黄色在线免费观看| 大话2 男鬼变身卡| 最后的刺客免费高清国语| 白带黄色成豆腐渣| 久久综合国产亚洲精品| 久久久欧美国产精品| 国产av不卡久久| 日韩视频在线欧美| 成人三级黄色视频| 人妻系列 视频| 国产爱豆传媒在线观看| 国产人妻一区二区三区在| 国产免费福利视频在线观看| 男人和女人高潮做爰伦理| 永久网站在线| 久久午夜福利片| www.av在线官网国产| 一夜夜www| 亚洲欧美日韩东京热| 免费看美女性在线毛片视频| 免费av观看视频| 中文字幕熟女人妻在线| 成人午夜精彩视频在线观看| 国产国拍精品亚洲av在线观看| 久久久a久久爽久久v久久| 亚州av有码| 亚洲av成人精品一区久久| 免费不卡的大黄色大毛片视频在线观看 | 久久欧美精品欧美久久欧美| 成人av在线播放网站| 全区人妻精品视频| 国产亚洲精品久久久com| 大话2 男鬼变身卡| 简卡轻食公司| 成人毛片a级毛片在线播放| 美女cb高潮喷水在线观看| 美女脱内裤让男人舔精品视频| 日韩高清综合在线| 国产av码专区亚洲av| 狠狠狠狠99中文字幕| 麻豆一二三区av精品| 亚洲18禁久久av| 尤物成人国产欧美一区二区三区| 草草在线视频免费看| 亚洲国产精品久久男人天堂| 国产三级中文精品| 91精品伊人久久大香线蕉| 日本熟妇午夜| 免费看美女性在线毛片视频| 一级爰片在线观看| 精品国产三级普通话版| 极品教师在线视频| 日韩欧美精品v在线| 欧美成人精品欧美一级黄| 欧美日韩在线观看h| 午夜福利成人在线免费观看| 日韩中字成人| 中文字幕免费在线视频6| 日韩精品有码人妻一区| 国产在线一区二区三区精 | 国产男人的电影天堂91| 国产成人a区在线观看| 亚洲国产欧美人成| 亚洲无线观看免费| 日日摸夜夜添夜夜爱| 国产精品1区2区在线观看.| 日本欧美国产在线视频| 国产精品综合久久久久久久免费| 麻豆乱淫一区二区| eeuss影院久久| 国产综合懂色| 九色成人免费人妻av| 亚洲国产日韩欧美精品在线观看| 国产精品精品国产色婷婷| 色综合色国产| 禁无遮挡网站| 欧美又色又爽又黄视频| 免费看日本二区|