
    Neighborhood fusion-based hierarchical parallel feature pyramid network for object detection

2020-10-15 02:04:16

    Mo Lingfei Hu Shuming

    (School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China)

Abstract: In order to improve the detection accuracy of small objects, a neighborhood fusion-based hierarchical parallel feature pyramid network (NFPN) is proposed. Unlike the layer-by-layer structure adopted in the feature pyramid network (FPN) and the deconvolutional single shot detector (DSSD), where the bottom layer of the feature pyramid relies on the top layer, NFPN builds the feature pyramid with no connections between the upper and lower layers; that is, it only fuses shallow features on similar scales. NFPN is highly portable and can be embedded in many models to further boost performance. Extensive experiments on the PASCAL VOC 2007, 2012, and COCO datasets demonstrate that the NFPN-based SSD without intricate tricks can exceed the DSSD model in terms of detection accuracy and inference speed, especially for small objects, e.g., 4% to 5% higher mAP (mean average precision) than SSD, and 2% to 3% higher mAP than DSSD. On the VOC 2007 test set, the NFPN-based SSD with 300×300 input reaches 79.4% mAP at 34.6 frame/s, and the mAP can rise to 82.9% after using the multi-scale testing strategy.

Key words: computer vision; deep convolutional neural network; object detection; hierarchical parallel; feature pyramid network; multi-scale feature fusion

    Considering the problem of small object detection, a neighborhood fusion-based hierarchical parallel feature pyramid network is proposed. This network extracts features with rich context information and local details by fusing the shallow features of similar scales, that is, the constructed feature pyramid network has no connection between the upper and lower layers. This network is integrated into the SSD framework to achieve object detection.

Object detection is an important computer vision task that uses image processing algorithms and models to detect instances of specific classes in digital images, providing fundamental information for further image understanding. Objects can appear anywhere in an image in a variety of shapes and sizes. Besides, detection is also affected by perspective, illumination conditions, and occlusion, which makes it a challenging problem. Early object detectors are closely related to hand-engineered features applied to dense image grids to locate objects in the sliding-window paradigm, such as the Viola-Jones (VJ) face detector[1-2], the histogram of oriented gradients (HOG) pedestrian detector[3], and deformable part models (DPMs)[4]. Recently, with the emergence of convolutional neural networks (CNNs) and the rapid development of deep learning, object detection has also made great strides. Researchers have proposed many excellent object detectors, which can be roughly classified into one-stage models and two-stage models.

The two-stage model divides the detection task into two phases and has become the dominant paradigm of object detection. It first generates a sparse set of candidate regions of interest (ROIs) (via selective search[5] or a region proposal network[6]), and then classifies these ROIs into particular categories and refines their bounding boxes. R-CNN[7] and SPPNet (spatial pyramid pooling convolutional network)[8] are classic works that implement this idea. After years of research, the superior performance of two-stage detectors on several challenging datasets (e.g., PASCAL VOC[9] and COCO[10]) has been demonstrated by many methods[6,11-16].

In contrast, the one-stage model directly predicts the category and location of objects simultaneously, thus abandoning the region proposal stage. Its straightforward structure grants the one-stage model higher detection efficiency in exchange for a slight performance degradation. SSD[17] and YOLO[18] achieve ultra-real-time detection with tolerable performance, renewing interest in one-stage methods. Many extensions of these two models have been proposed[19-25].

Current state-of-the-art object detectors have achieved excellent performance on several challenging benchmarks. However, there are still many open problems in the field of object detection: 1) Feature fusion and reuse. Features rich in high-level semantic information are beneficial for object detection. Lin et al.[15,19-20,26] obtained more representative features through multiple feature fusion. 2) The trade-off between performance and speed. For the sake of practical applications, a delicate balance must be struck between performance and detection speed.

This paper rethinks feature fusion and reuse while taking the inference speed into account. As an efficient feature extraction method, FPN[15] has been adopted by many state-of-the-art detectors[14,19,21,27]. Heuristically, this paper reconstructs the feature pyramid network in a hierarchical parallel manner (the neighborhood fusion-based hierarchical parallel feature pyramid network, NFPN), which is then integrated into the SSD framework to verify its effectiveness. The main contributions are summarized as follows: 1) proposing a simple and efficient method for constructing a context-rich feature pyramid network; 2) integrating the hierarchical parallel feature pyramid network into the SSD framework and showing its performance improvement on standard object detection benchmarks compared with several FPN-based models.

    1 Feature Pyramid Network

Many experiments have confirmed that it is profitable to use multi-scale features for detection. For example, SSD[17] uses multiple spatial resolution features of the VGG nets[28]. These multi-scale features taken directly from the backbone network can be consolidated into a primary feature pyramid, in which the top layer has rich semantic information but a lower resolution, while the bottom layer has less semantic information but a higher resolution. Fig.1 summarizes some typical feature pyramid networks. In this figure, the circle outline represents the feature map, a larger size represents a higher resolution, and a darker color represents stronger semantic information. FPN[15] performs a top-down layer-by-layer fusion of the primary feature pyramid with additional lateral connections for building high-level semantic features on all scales. On the basis of FPN, PANet (path aggregation network)[29] adds a bottom-up route to enhance the network structure, which shortens the information propagation path while using low-level features to locate objects. NAS-FPN[30] can be regarded as the pioneering application of neural architecture search (NAS) to object detection. It automatically learns the structure of the feature pyramid network by designing an appropriate search space, but it requires thousands of GPU hours during searching, and the resulting network is irregular and difficult to interpret or modify.

Fig.1 Typical feature pyramid networks

Fig.2 shows several similar methods for constructing the feature pyramid network based on the SSD framework. In this figure, the rectangular outline represents the feature map, its width represents the number of channels, and its height represents the resolution (e.g., 512×38×38). For brevity, some similar connections are omitted. DSSD[19] inherits from SSD and FPN, building a more representative feature pyramid by layer-by-layer fusion. In this case, low-resolution features are continually up-sampled to mix with high-resolution features. RSSD (rainbow single shot detector)[20] constructs the feature pyramid in a fully connected manner; that is, the feature map of each resolution in the feature pyramid is obtained by fusing all inputs. In the structures of Fig.1(b), Fig.2(b), and Fig.2(c), the input features inevitably undergo multiple consecutive resampling operations when constructing the feature pyramid, which causes additional sampling noise and information loss.

To alleviate this potential contradiction while taking the training and inference speed into account, this paper manually limits the number of resampling operations applied to the input features, ensuring that each feature map undergoes up-sampling and down-sampling at most once. The model structure is shown in Fig.1(d) and Fig.2(d), which only fuses shallow features on similar scales, abandoning the multiple consecutive resampling used in DSSD and RSSD. Subsequent experimental results demonstrate the effectiveness of this approach. On the VOC 2007 test set, the NFPN-based SSD with a 300×300 input size achieves 79.4% mAP at 34.6 frame/s, DSSD[19] with a 321×321 input achieves 78.6% mAP at 9.5 frame/s, and RSSD[20] with a 300×300 input achieves 78.5% mAP at 35.0 frame/s. After using the multi-scale testing strategy, the NFPN-based SSD can achieve 82.9% mAP.

    2 Network Architecture

The neighborhood fusion-based hierarchical parallel feature pyramid network (NFPN) is shown in Fig.1(d) and Fig.2(d), introducing a neighborhood fusion-based hierarchical parallel architecture for constructing a context-rich feature pyramid. The basic unit of NFPN is the multi-scale feature fusion module (MF module), which is used to aggregate multi-scale features. This section introduces the MF module first, and then discusses how to integrate MF modules to construct NFPN and embed it into the SSD[17] framework.

Fig.2 Feature pyramid construction methods based on the SSD framework

    2.1 MF module

In order to aggregate the multi-scale features obtained from the backbone network, this paper introduces a simple multi-scale feature fusion module called the MF module, as shown in Fig.3. The MF module takes features at three different scales as inputs (i.e., 4×, 2×, 1×) and resamples them to the same resolution (2×) through a down-sampling branch (4×→2×) and an up-sampling branch (1×→2×). The subsequent 3×3 convolution layer is designed to refine features and limit the noise caused by resampling. Instead of the element-wise operation used in FPN[15] and DSSD[19], the MF module adopts concatenation to combine multiple features, where the relative status of different features is characterized by their numbers of channels. The number of output channels of the basic branch (2× resolution) is set to 256 to preserve more of its features, while that of the down-sampling and up-sampling branches is set to 128 to complement the context information. By convention, each convolutional layer is followed by a batch normalization (BN) layer and a ReLU (rectified linear unit) activation layer.

    Fig.3 Multi-scale feature fusion module
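Below is a minimal PyTorch-style sketch of the MF module as described above (concatenation fusion, 3×3 refinement convolutions, BN + ReLU, 256 output channels for the basic branch and 128 for each resampled branch). The original experiments are Caffe-based, so the concrete layer choices here (stride-2 convolution for down-sampling, deconvolution for up-sampling, nearest-neighbor alignment of mismatched sizes) are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, kernel, stride=1, padding=0):
    """Convolution followed by BN and ReLU, with no bias term."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride=stride, padding=padding, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MFModule(nn.Module):
    """Fuses three neighboring scales (4x, 2x, 1x) at the middle (2x) resolution."""

    def __init__(self, ch_fine, ch_mid, ch_coarse):
        super().__init__()
        # Down-sampling branch (4x -> 2x): stride-2 3x3 conv halves the resolution.
        self.down = conv_bn_relu(ch_fine, 128, 3, stride=2, padding=1)
        # Basic branch (2x): 3x3 conv refines features at the native resolution.
        self.base = conv_bn_relu(ch_mid, 256, 3, padding=1)
        # Up-sampling branch (1x -> 2x): deconvolution roughly doubles the resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch_coarse, 128, 4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
        )

    def forward(self, c_fine, c_mid, c_coarse):
        base = self.base(c_mid)
        down = self.down(c_fine)
        up = self.up(c_coarse)
        # SSD feature maps are not exact 2x multiples of each other (e.g., 19 vs. 10),
        # so align the up-sampled branch to the basic branch (alignment strategy assumed).
        if up.shape[-2:] != base.shape[-2:]:
            up = F.interpolate(up, size=base.shape[-2:], mode="nearest")
        # Concatenation (not element-wise addition) combines the branches; the relative
        # "status" of each branch is encoded by its channel count (128 / 256 / 128).
        return torch.cat([down, base, up], dim=1)  # 512 output channels
```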

    2.2 Stacking of MF module

Referring to the structure shown in Fig.2(d), NFPN can be implemented by stacking MF modules in parallel among the layers of the primary feature pyramid. SSD[17] performs detection using six scale features of VGG-16[28] (denoted as C4, C5, …, C9, respectively). Fu et al.[19] proposed the deconvolution module and the prediction module to build a context-rich feature pyramid in a layer-by-layer fusion manner, while Jeong et al.[20] proposed the rainbow concatenation, which is similar to a fully connected structure. These structures can be briefly formulated as

P_i = predict(deconv(P_{i+1}, C_i))    (1)

P_i = rainbow(C_4, C_5, …, C_9)    (2)

where deconv and predict are the deconvolution module and the prediction module in DSSD[19], respectively, and rainbow is the rainbow concatenation in RSSD[20]. Finally, {Pi} combines into a context-rich feature pyramid for multi-scale detection. In both DSSD and RSSD, the input features of the feature pyramid network inevitably undergo multiple consecutive resampling operations, which aggregate more contextual information but also introduce additional resampling noise.

To mitigate this contradiction, this paper manually limits the number of resampling operations applied to each input feature, ensuring that each input is up-sampled and down-sampled at most once. This method is a hierarchical parallel stacking of MF modules and can be formulated as

P_i = MF(C_{i-1}, C_i, C_{i+1})    (3)

where MF represents the MF module shown in Fig.3. Intuitively, NFPN reduces the number of resampling operations required for each input feature and only fuses shallow features with similar resolutions. Furthermore, the hierarchical parallel structure does not increase the depth of the computational graph as much as the layer-by-layer structure does and is better suited to parallel computing.

This paper verifies the performance of NFPN within the SSD framework. In detail, the NFPN-based SSD additionally introduces “conv3_3” (C3 in Fig.4) as the input of the down-sampling branch of P4 and deletes the up-sampling branch of P9 while increasing the number of channels of its down-sampling branch from 128 to 256. In summary, the NFPN-based SSD contains six parallel stacked MF modules, as shown in Fig.4.

    Fig.4 Stacking of MF module
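Continuing the sketch above, the hierarchical parallel stacking of Eq.(3) can be written as follows. The channel list and the handling of the two boundary levels (C3 feeding the down-sampling branch of P4; P9 built without an up-sampling branch and with a 256-channel down-sampling branch) follow the description in the text, while the concrete layer parameters remain assumptions; MFModule and conv_bn_relu refer to the sketch in Section 2.1.

```python
import torch
import torch.nn as nn


class NFPN(nn.Module):
    """Builds P4..P9 from backbone features C3..C9 in a hierarchical parallel manner."""

    def __init__(self, chans):
        # chans: channel counts of C3..C9, e.g. [256, 512, 1024, 512, 256, 256, 256]
        # for the usual VGG-16 SSD300 configuration (assumed here).
        super().__init__()
        # P4..P8: P_i = MF(C_{i-1}, C_i, C_{i+1}); the modules are independent of
        # one another, so they can all run in parallel.
        self.mf = nn.ModuleList(
            MFModule(chans[i - 1], chans[i], chans[i + 1]) for i in range(1, 6)
        )
        # P9: a two-branch MF module with no up-sampling branch; its down-sampling
        # branch is widened to 256 channels. A 3x3 conv without padding maps the
        # 3x3 map of C8 to the 1x1 resolution of C9.
        self.p9_down = conv_bn_relu(chans[5], 256, 3)
        self.p9_base = conv_bn_relu(chans[6], 256, 3, padding=1)

    def forward(self, feats):
        # feats: [C3, C4, ..., C9] taken directly from the backbone.
        pyramid = [self.mf[i - 1](feats[i - 1], feats[i], feats[i + 1])
                   for i in range(1, 6)]
        p9 = torch.cat([self.p9_down(feats[5]), self.p9_base(feats[6])], dim=1)
        pyramid.append(p9)
        return pyramid  # [P4, ..., P9], fed to the SSD detection heads
```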

    3 Experiments

    Experiments were conducted on three widely used object detection datasets: PASCAL VOC 2007, 2012[9], and MS COCO[10], which have 20, 20, and 80 categories, respectively. The results on the VOC 2012 test set and COCO test-dev set were obtained from the evaluation server. The experimental code is based on Caffe[31].

    3.1 Training strategies

For the sake of fairness, almost all the training policies are consistent with SSD[17], including the ground-truth box matching strategy, training objective, anchor set, hard negative mining, and data augmentation. The model loss is a weighted sum of the localization loss (Smooth L1) and the classification loss (Softmax). NFPN takes seven scale features selected directly from the backbone network as inputs, whose resolutions are 75², 38², 19², 10², 5², 3², and 1², respectively. The convolution and deconvolution layers in NFPN do not use bias parameters, and their weights are initialized from a Gaussian distribution with a mean of 0 and a standard deviation of 0.01. For the BN layers, the moving average fraction is set to 0.999, while the weight and bias are initialized to 1 and 0, respectively. All the models are trained with the SGD solver on 4 GTX 1080 GPUs with CUDA 9.0 and cuDNN v7 and an Intel Xeon E5-2620 v4 @ 2.10 GHz CPU. Considering the limitation of GPU memory, this paper uses VGG-16[28] as the backbone network with a batch size of 32 and only trains the model with an input size of 300×300.
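As an illustration, the stated initialization can be written as follows for the PyTorch sketches above (the original experiments use Caffe; mapping the moving average fraction of 0.999 to a PyTorch BN momentum of 0.001 is the only translation made here).

```python
import torch.nn as nn


def init_nfpn_weights(module):
    """Initialization for NFPN layers; apply with model.apply(init_nfpn_weights)."""
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d)):
        # Conv/deconv weights: zero-mean Gaussian with a standard deviation of 0.01;
        # bias parameters are not used (bias=False in the sketches above).
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
    elif isinstance(module, nn.BatchNorm2d):
        # BN weight (scale) initialized to 1, bias (shift) to 0; a moving average
        # fraction of 0.999 corresponds to momentum = 0.001 in PyTorch's convention.
        nn.init.ones_(module.weight)
        nn.init.zeros_(module.bias)
        module.momentum = 0.001
```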

    3.2 Testing strategies

Inspired by RefineDet[25], this paper performs both single-scale testing and multi-scale testing. The spatial resolutions of the output features of MF modules P4 to P9 are 38², 19², 10², 5², 3², and 1², respectively. To perform multi-scale testing, P8, P9, and their associated layers are directly removed from the trained model with no other modifications. This shrinks the mAP on the VOC 2007 test set from 79.4% to 75.5% under single-scale testing. Multi-scale testing feeds inputs of different resolutions to the model, aggregates all the detection results, and then applies NMS with a threshold of 0.45 to obtain the final result on the PASCAL VOC dataset, while Soft-NMS is used on the COCO dataset. The default input resolutions are S_I ∈ {176², 240², 304², 304×176, 304×432, 368², 432², 496², 560², 624², 688²}. Additionally, horizontal image flipping is also used.
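A rough sketch of this multi-scale testing procedure is given below. Only the scale set, the flip augmentation, and the NMS threshold of 0.45 come from the text; detect, flip_image, and unflip_boxes are hypothetical helpers standing in for the detector forward pass and the flip handling, the orientation of the two rectangular sizes is assumed, and torchvision's batched_nms stands in for the class-wise NMS.

```python
import torch
from torchvision.ops import batched_nms

# Default testing resolutions; (height, width) ordering is assumed.
SCALES = [(176, 176), (240, 240), (304, 304), (304, 176), (304, 432),
          (368, 368), (432, 432), (496, 496), (560, 560), (624, 624), (688, 688)]


def multi_scale_test(model, image, iou_threshold=0.45):
    boxes, scores, labels = [], [], []
    for size in SCALES:
        for flipped in (False, True):                      # horizontal flip augmentation
            img = flip_image(image) if flipped else image  # hypothetical helper
            b, s, l = detect(model, img, size)             # hypothetical helper: one forward pass
            if flipped:
                b = unflip_boxes(b, image)                 # hypothetical helper: map boxes back
            boxes.append(b); scores.append(s); labels.append(l)
    boxes, scores, labels = torch.cat(boxes), torch.cat(scores), torch.cat(labels)
    # Merge the pooled detections class-wise with NMS at 0.45 (Soft-NMS on COCO).
    keep = batched_nms(boxes, scores, labels, iou_threshold)
    return boxes[keep], scores[keep], labels[keep]
```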

    3.3 PASCAL VOC 2007

For the PASCAL VOC 2007 dataset, all the models are trained on the union of the VOC 2007 and VOC 2012 trainval sets (16 551 images) and tested on the VOC 2007 test set (4 952 images). This paper adopts the fully convolutional VGG-16 net used in ParseNet[32] as the pre-trained model and fine-tunes it using SGD with a momentum of 0.9 and a weight decay of 2×10⁻⁴. The initial learning rate is set to 0.001 and then reduced by a factor of 10 at iterations 6×10⁴ and 10⁵, respectively. The training cycle is 1.2×10⁵ iterations.
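For reference, the stated optimizer and learning-rate schedule look as follows when expressed with PyTorch's SGD and MultiStepLR (the original uses Caffe's SGD solver; model and train_one_iteration are hypothetical placeholders for the NFPN-based SSD and one forward/backward pass on a batch).

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,        # initial learning rate 0.001
                            momentum=0.9, weight_decay=2e-4)
scheduler = MultiStepLR(optimizer, milestones=[60_000, 100_000], gamma=0.1)

for iteration in range(120_000):        # training cycle: 1.2e5 iterations
    optimizer.zero_grad()
    loss = train_one_iteration(model)   # hypothetical helper: forward + backward on one batch
    optimizer.step()
    scheduler.step()                    # lr drops by 10x at iterations 6e4 and 1e5
```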

Tab.1 shows the results for each category on the VOC 2007 test set. Taking the GPU memory constraint into account, this paper only trains the model with an input resolution of 300×300 (i.e., NFPN-SSD300). Compared with models based on the feature pyramid network with a similar input resolution (e.g., RSSD300, DSSD321, and FSSD300), the NFPN-based SSD without intricate tricks shows performance improvements in most categories, yielding mAP gains of 0.9%, 0.8%, and 0.6% over them, respectively. Although its accuracy is inferior to some two-stage models, it guarantees real-time detection. After using the multi-scale testing strategy, the NFPN-based SSD can achieve 82.9% mAP, which is much better than single-scale testing (79.4% mAP).

Tab.1 Detection results on the VOC 2007 test set (unit: %)

Tab.2 shows the inference speed and average precision (AP) of some state-of-the-art methods. NFPN-SSD300 takes 28.9 ms to process an image (i.e., 34.6 frame/s at 79.4% mAP) without batch processing on a GTX 1080 GPU, which exceeds DSSD321 in both inference speed and detection precision but is slightly inferior to RefineDet320. Compared with methods using larger input sizes, NFPN-SSD300 only retains its superiority in inference speed.

In summary, the NFPN-based SSD exhibits a significant improvement in inference speed and detection precision compared with the structure-oriented modified models RSSD[20] and DSSD[19]. Referring to Fig.5, with the SSD300 model as the baseline, the mAP of NFPN-SSD300 increases by 2% to 6% for classes with specific backgrounds (e.g., airplane, boat, and cow), and by 6% for the bottle class, whose instances are usually small. This demonstrates that NFPN can extract richer contextual information, which is beneficial for the detection of small objects and of classes with a distinctive context. Due to the portability of this structure, it can also be easily embedded in other detectors to further boost their performance.

    3.4 Ablation study on PASCAL VOC 2007

To demonstrate the effectiveness of the structure shown in Fig.2(d), this paper designs four variants and evaluates them on the PASCAL VOC 2007 dataset. The training strategies are inherited from Section 3.3. In particular, the batch size is set to 12, which is the largest batch size that a single GTX 1080 GPU can accommodate. The basic module of NFPN is shown in Fig.3 and has three input branches, i.e., the down-sampling, up-sampling, and basic branches. By cutting out different branches, three variants are obtained: 1) down-sampling + basic; 2) basic + up-sampling; 3) down-sampling + basic + up-sampling. Furthermore, the fourth variant is a cascade of the feature pyramid network and can be formulated as

P'_i = MF(P_{i-1}, P_i, P_{i+1})    (4)

Fig.5 Comparison of improved average precision (%) of specific classes on the VOC 2007 test set

    Tab.2 Inference speed vs. detection accuracy on PASCAL VOC dataset

Tab.3 records the evaluation results. It is observed that the variant with three input branches shows better detection performance than the variants with two input branches (78.6% mAP vs. 78.1% and 78.2%). Cascading the feature pyramid network can further contribute to the detection performance (i.e., 0.5% higher), but the corresponding memory consumption also increases. Accordingly, this paper selects the variant with three input branches in the other experiments. Attempts were also made to integrate the FPN structure into the SSD framework, but it failed to converge to a comparable detection performance.

    Tab.3 Ablation study on the VOC 2007 test set
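For clarity, the branch ablations above can be expressed as an MF module with optional branches, as sketched below. The flag names are illustrative only and are not part of the module described in Section 2.1; conv_bn_relu is reused from the sketch there.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFVariant(nn.Module):
    """MF module with optional branches, mimicking the ablation variants in Tab.3."""

    def __init__(self, ch_fine, ch_mid, ch_coarse, use_down=True, use_up=True):
        super().__init__()
        self.down = conv_bn_relu(ch_fine, 128, 3, stride=2, padding=1) if use_down else None
        self.base = conv_bn_relu(ch_mid, 256, 3, padding=1)
        self.up = (nn.Sequential(nn.ConvTranspose2d(ch_coarse, 128, 4, 2, 1, bias=False),
                                 nn.BatchNorm2d(128), nn.ReLU(inplace=True))
                   if use_up else None)

    def forward(self, c_fine, c_mid, c_coarse):
        base = self.base(c_mid)
        outs = [base]
        if self.down is not None:
            outs.insert(0, self.down(c_fine))
        if self.up is not None:
            up = self.up(c_coarse)
            if up.shape[-2:] != base.shape[-2:]:
                up = F.interpolate(up, size=base.shape[-2:], mode="nearest")
            outs.append(up)
        return torch.cat(outs, dim=1)

# Variant 1: down-sampling + basic        -> MFVariant(..., use_up=False)
# Variant 2: basic + up-sampling          -> MFVariant(..., use_down=False)
# Variant 3: down-sampling + basic + up   -> MFVariant(...)  (the default NFPN module)
# Variant 4 cascades a second stage of MF modules on the outputs {P_i}, as in Eq.(4).
```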

    3.5 PASCAL VOC 2012

For the PASCAL VOC 2012 dataset, all the models are trained on the union of the VOC 2007 and VOC 2012 trainval sets plus the VOC 2007 test set (21 503 images) and tested on the VOC 2012 test set (10 991 images). The training set is an augmentation of the training set used in Section 3.3, adding about 5 000 images. Therefore, the NFPN-based SSD model trained for 6×10⁴ iterations in Section 3.3 is used as the pre-trained model to shorten the training cycle. Other training strategies are consistent with those discussed in Section 3.3. The evaluation results are recorded in Tab.2 and Tab.4. Considering similar input sizes (i.e., 300² and 321²), the NFPN-based SSD still shows a performance improvement compared with SSD, RSSD, and DSSD. After using the multi-scale testing strategy, the mAP of NFPN-SSD300 can also catch up with that of RefineDet320.

    3.6 MS COCO

MS COCO[10] is a large-scale object detection, segmentation, and captioning dataset. For the object detection task, the COCO train, validation, and test sets contain more than 2×10⁵ images and 80 object categories. Object detection performance metrics include AP and average recall (AR).

Tab.4 Detection results on the VOC 2012 test set (unit: %)

By convention[35], this paper uses a union of 8×10⁴ images from the COCO train set and a random 3.5×10⁴ images from the COCO validation set for training (the trainval35k split), and uses the test-dev evaluation server to evaluate the results. The pre-trained model and optimizer settings are the same as in Section 3.3, while the training cycle is set to 4×10⁵ iterations. The initial learning rate is 10⁻³ and then decays to 10⁻⁴ and 10⁻⁵ at 2.8×10⁵ and 3.6×10⁵ iterations, respectively. The experimental results are shown in Tab.5, and some qualitative test results are shown in Fig.8 and Fig.9.

Similarly, to verify the validity of the feature pyramid constructed in the NFPN-based SSD, the detection results are first compared with DSSD, which integrates FPN and ResNet-101 into the SSD framework with no other tricks. As can be seen from Fig.6, the NFPN-based SSD improves on SSD in various detection evaluation metrics (e.g., AP and AR). Additionally, compared with DSSD, the NFPN-based SSD is only inferior in the detection of large objects; that is, the NFPN-based SSD achieves larger performance improvements for small objects. Fig.7 visualizes the first 8 channels of the feature map used to detect small objects. For some images that contain only small instances, the feature map used for detection in SSD300 has only a few activated neurons, which makes it difficult to locate objects. The combination of NFPN and SSD gives the model a stronger feature extraction capability, so the feature map used for detection retains more useful information, as shown in the fourth row of Fig.7. Comparing the feature maps in DSSD and NFPN-SSD, it is not difficult to find that NFPN-SSD extracts more detailed information, which is especially beneficial for the detection of small objects.

    Fig.6 Comparison of improved AP (%) and AR (%) on MS COCO test-dev set

Considering additional FPN-based detectors (e.g., RetinaNet400 and NAS-FPN), the NFPN-based SSD is still inferior in detection accuracy but superior in inference speed (34.6 vs. 15.6 and 17.8 frame/s). Due to the portability of NFPN, it can also complement these methods to further improve their performance.

    Tab.5 Detection results on MS COCO test-dev set


    4 Conclusions

    1) The hierarchical parallel structure of NFPN eliminates the successive resampling of features and does not increase the depth of the computational graph as much as the layer-by-layer structure. Additionally, the gradient can be passed back to the shallow layer along a shorter path, which is beneficial to the optimization of the model.

    2) The hierarchical parallel feature pyramid network is more conducive to the parallel acceleration of GPU.

3) NFPN is highly portable and can be embedded in many methods to further boost their performance. Its effectiveness is demonstrated by integrating NFPN into the SSD framework, and extensive experimental results show that NFPN is particularly effective for detecting small objects and classes with specific backgrounds.

