
    Improved target detection algorithm based on Faster-RCNN


BAI Chenshuai, WU Kaijun, WANG Dicong1,2, HUANG Tao, TAO Xiaomiao

(1. School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China; 2. College of Intelligence and Computing, Tianjin University, Tianjin 300350, China)

Abstract: An asymmetric convolution block network is introduced into the Faster-RCNN network model, and the result is defined as an improved target detection algorithm based on Faster-RCNN. In this algorithm, the 3×3 convolution kernels in the network model are replaced by asymmetric convolution blocks of 1×3+3×1+3×3. Firstly, the residual network ResNet is used as the backbone of the algorithm to extract the feature map of the image. The feature map passes through the 1×3+3×1+3×3 convolution block and then through two 1×1 convolution kernels. Secondly, the region proposal network (RPN) is used to obtain suggestion boxes on the shared feature layer, the suggestion boxes are mapped onto the last convolutional feature map, and anchor boxes of different sizes are unified by region of interest (RoI) pooling. Finally, the detection classification probability (softmax loss) and the detection bounding-box regression (smooth L1 loss) are used for training. The PASCAL_VOC data set is used. The mean average precision (mAP) results show that the mAP value is increased by 0.38% compared with the original Faster-RCNN algorithm, by 2.68% compared with the RetinaNet algorithm, and by 3.41% compared with the YOLOv4 algorithm.

Key words: Faster-RCNN; target detection algorithm; asymmetric convolution block; region proposal network; region pooling layer

    0 Introduction

As one of the basic tasks in fields such as unmanned driving, video monitoring and early-warning security, target detection plays an important role in many research areas. Especially in densely populated places such as railway stations, high-speed railway stations and airports, target detection technology is closely related to unmanned driving, video monitoring and security inspection, and it is one of the most important research directions in 5G and artificial intelligence. With the rapid development of artificial intelligence and 5G technology, the use of deep learning for target detection has attracted the interest of researchers, which has pushed deep learning further in the direction of target detection.

Traditional target detection methods fall into two kinds. The first is the sliding-window method, which needs to consider the aspect ratio of the object when designing the window. This increases the design complexity, and hand-designed features have poor robustness and efficiency. The second is based on selective search, which uses an image-segmentation method to merge the two most similar regions at each step (depending on the overlap of color, texture, size and shape) and uses the search box to locate the target in each iteration. In short, the biggest disadvantage of the sliding window is the redundancy of selection boxes, while selective search can effectively remove redundant candidate boxes and greatly reduce the amount of computation. At present, there are two types of object detection methods based on deep learning. The first is the two-stage target detection method represented by R-CNN, Fast-RCNN, Faster-RCNN and Mask-RCNN, in which Faster-RCNN first generates a series of candidate boxes and then uses a convolutional neural network to classify the samples. The second is the one-stage, regression-based target detection method represented by SSD, YOLOv3 and YOLOv4, which directly transforms target-box localization into a regression problem instead of generating candidate boxes. Because of the difference between the two approaches, their performance also differs: the two-stage method has advantages in detection accuracy and localization accuracy, while the one-stage method has advantages in speed.

Aiming at the field of driverless vehicles, pedestrians, bicycles, battery cars, pets, traffic signals, road signs and obstacles are studied. Considering the demand for high precision in unmanned driving, the two-stage target detection method is more suitable, so the original target detection method is improved and optimized in this paper. Liu et al.[1] proposed a method to accurately learn and extract the characteristics of the rotating region and locate the rotating target. R-RCNN has three important new components, including the rotating RoI pooling layer, the rotation regression model and the non-maximum suppression (NMS) multitasking method among different classes. Girshick[2] proposed the fast region-based convolutional neural network (Fast-RCNN) for target detection, which can use a neural network to classify objects effectively. Ren et al.[3] proposed Faster-RCNN, which uses a region proposal network based on candidate regions that shares full-image features with the detection network[4-10]. RPN is a region proposal network that predicts the target boundary and target score at each location simultaneously. After end-to-end training, RPN generates high-quality region suggestion boxes, which are used in the Faster-RCNN target detection model. Jeremiah et al.[11] proposed the Mask-RCNN target detection algorithm, which was the latest target detection algorithm for natural-image target detection, localization and instance segmentation. Lu et al.[12] proposed the new Grid-RCNN target detection algorithm, whose framework achieves accurate target detection through a grid-guided localization mechanism. Unlike traditional regression-based target detection methods, the Grid-RCNN algorithm can capture spatial information explicitly and has the position-sensitive property of a fully convolutional structure. Liu et al.[13] proposed SSD, which detects targets in an image with a single convolutional neural network; the output space of bounding boxes is discretized into a set of default boxes with different aspect ratios and scales at each feature-map location. Joseph et al.[14] proposed predicting the objectness score of each bounding box by logistic regression. Alexey et al.[15] combined several excellent techniques with new functions to achieve relatively good results.

In this paper, an improved target detection algorithm based on Faster-RCNN is proposed. A structure-neutral asymmetric convolution block[16] is used as the building block of the convolution kernel, and one-dimensional asymmetric convolutions are used to enhance the square convolution kernel. The 3×3 convolution kernels in the basic network structure of the Faster-RCNN algorithm are modified into (3×3+1×3+3×1) asymmetric convolution blocks, and the anchor parameters are optimized to improve the coincidence degree between the prior boxes and the data set. Although the speed of the algorithm is slowed down, its target detection accuracy is improved.

    1 Target detection of ACBNet+Faster-RCNN

    1.1 ResNet network

He et al.[17] proposed the residual network ResNet to solve the degradation problem. The basic idea is to let the stacked layers fit a residual mapping with respect to the input of the block rather than fit the desired mapping directly[18-19]. The residual formulation is considered easier to optimize: in the extreme case where an identity mapping is optimal, it is easier to push the residual toward zero than to fit an identity mapping with a stack of non-linear layers. A simple identity shortcut is added to the output of the stacked layers, as shown in Fig.1. The shortcut connection introduces no additional parameters or computational complexity, and the whole network is still trained end to end with SGD through back-propagation.

    Fig.1 Building block of residual learning

A reference value x is established for the input of each layer, and a residual function is formed, which is easier to optimize and allows the network to be made much deeper. The residual block above has two layers, as shown in Eq.(1), where W1, W2 and Wi represent weights and σ represents the rectified linear unit (ReLU). Then the output y is obtained through a shortcut and a ReLU, as shown in Eq.(2).

F = W2σ(W1x),  (1)

y = F(x, {Wi}) + x.  (2)

When the dimensions of the input and output need to be changed (such as changing the number of channels), a linear transformation Ws of x can be performed in the shortcut, as shown in Eq.(3).

y = F(x, {Wi}) + Wsx.  (3)
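As a concrete illustration of Eqs.(1)-(3), the following is a minimal residual-block sketch in PyTorch. The channel sizes and the use of a 1×1 projection as Ws are assumptions for illustration, not the exact ResNet configuration.

```python
# Minimal sketch of a residual block implementing y = F(x, {Wi}) + Ws*x (Eqs.(1)-(3)).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # F(x, {Wi}) = W2 * ReLU(W1 * x), Eq.(1)
        self.w1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.w2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)
        # Ws in Eq.(3): a 1x1 projection, used only when the dimensions change
        self.ws = (nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
                   if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        f = self.w2(self.relu(self.w1(x)))   # Eq.(1)
        return self.relu(f + self.ws(x))     # Eq.(2)/(3): y = F(x) + Ws*x

y = ResidualBlock(64, 128)(torch.randn(1, 64, 56, 56))  # -> shape (1, 128, 56, 56)
```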

Fig.2 shows the network structure of ResNet50, which is divided into five stages. In practical application, considering the computational cost, the residual blocks are optimized, that is, the two 3×3 convolution kernels are replaced by a 1×1+3×3+1×1 convolution block.

    Fig.2 ResNet network architecture

In the new structure, the first 1×1 convolution layer reduces the number of channels so that the middle 3×3 convolution layer operates on a smaller tensor, and the second 1×1 convolution layer restores the original number of channels. The accuracy is maintained while the computational complexity is reduced. Specifically, the first 1×1 convolution kernel compresses the 256 channels to 64, which are then restored to 256 by the last 1×1 convolution.
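The bottleneck structure described above can be sketched as follows; this is an assumed illustration in PyTorch, with the 256/64 channel counts taken from the text and everything else chosen for brevity.

```python
# Minimal sketch of the 1x1 -> 3x3 -> 1x1 bottleneck with an identity shortcut.
import torch
import torch.nn as nn

bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1, bias=False),            # compress 256 -> 64 channels
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),  # 3x3 convolution on the reduced tensor
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 256, kernel_size=1, bias=False),            # restore 64 -> 256 channels
)

x = torch.randn(1, 256, 38, 38)
y = torch.relu(bottleneck(x) + x)  # identity shortcut around the bottleneck, as in Eq.(2)
```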


    1.2 Region proposal network

After the feature map is input into the RPN, a series of convolutions and ReLU operations produce a 39×39×256-dimensional feature map that is used to obtain the anchor points and then to select proposals[20-23]. Anchors of fixed sizes are generated: each point of the feature map is mapped back to the center of its receptive field in the original image as a reference point, and K anchors of different sizes and aspect ratios are placed around each reference point. As shown in Fig.3, K = 9 anchors are generated at each sliding position by using 3 scales and 3 aspect ratios, so multiple region proposals can be predicted from each feature point on the feature map. For example, a 39×39 feature map generates 39×39×9 candidate boxes.
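A minimal sketch of this anchor generation is given below. The stride, scales and aspect ratios are assumed illustrative values rather than the paper's exact settings; the point is that K = 3×3 = 9 anchors are produced per feature-map cell.

```python
# Minimal sketch of anchor generation: 3 scales x 3 aspect ratios = 9 anchors per cell.
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride  # map the cell back to the image
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)    # one convention for aspect ratio
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.asarray(anchors)

print(generate_anchors(39, 39).shape)  # (13689, 4) = 39 x 39 x 9 candidate boxes
```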

As shown in Fig.3, nine candidate boxes are generated for each pixel position in the feature map. The feature map input to the RPN has 256 channels, and a 3×3 sliding window is convolved with the pixels of every channel at the same position; the resulting convolution values over the channels are combined into a new 256-dimensional feature vector. This 256-dimensional vector feeds two branches. One branch classifies each anchor as target or background: through a 1×1 convolution kernel it outputs 2K = 18 scores for the K = 9 candidate boxes. If a candidate box belongs to the target area, its position needs to be determined, so the other branch uses a 1×1 convolution kernel to output 4K = 36 coordinates; each box contains four coordinates (x, y, w, h), which give its specific position. If a candidate box is not in the target area, it is directly discarded without estimating the subsequent position information.

Classification branch: all anchors of each image in the training set are divided into positive and negative samples with respect to the manually calibrated regions.

1) For each calibrated area, the anchor with the largest overlap ratio is recorded as a positive sample, which ensures that each calibrated area corresponds to at least one positive anchor.

2) For the remaining anchors, if the overlap ratio with a calibrated area exceeds 0.7, the anchor is recorded as a positive sample (each calibrated area can correspond to multiple positive anchors). If the overlap ratio with every calibrated area is less than 0.3, the anchor is recorded as a negative sample. A minimal sketch of this labeling rule is given below.
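The following sketch applies the two rules above with the 0.7/0.3 thresholds from the text. Boxes are assumed to be (x1, y1, x2, y2) NumPy arrays; the helper names are illustrative.

```python
# Minimal sketch of IoU-based anchor labeling (1 = positive, 0 = negative, -1 = ignored).
import numpy as np

def iou_matrix(anchors, gt):
    x1 = np.maximum(anchors[:, None, 0], gt[None, :, 0])
    y1 = np.maximum(anchors[:, None, 1], gt[None, :, 1])
    x2 = np.minimum(anchors[:, None, 2], gt[None, :, 2])
    y2 = np.minimum(anchors[:, None, 3], gt[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def label_anchors(anchors, gt, pos_thr=0.7, neg_thr=0.3):
    iou = iou_matrix(anchors, gt)               # shape (num_anchors, num_gt)
    labels = -np.ones(len(anchors), dtype=int)
    labels[iou.max(axis=1) < neg_thr] = 0       # negative: IoU < 0.3 with every calibrated area
    labels[iou.max(axis=1) >= pos_thr] = 1      # rule 2: IoU > 0.7 positives
    labels[iou.argmax(axis=0)] = 1              # rule 1: best anchor for each calibrated area
    return labels
```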

    Regression branch is shown in Eqs.(4)-(7).

tx = (x - xa)/wa,  ty = (y - ya)/ha,  (4)

tw = log(w/wa),  th = log(h/ha),  (5)

tx* = (x* - xa)/wa,  ty* = (y* - ya)/ha,  (6)

tw* = log(w*/wa),  th* = log(h*/ha),  (7)

where x, y, w, h represent the center coordinates and the width and height of a box; x, xa and x* (and likewise y, ya, y*; w, wa, w*; h, ha, h*) refer to the predicted box, the anchor box and the ground-truth box, respectively; t represents the offset of the predicted box relative to the anchor box, and t* represents the offset of the ground-truth box relative to the anchor box. The learning goal is to make the former close to the latter.
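A minimal sketch of computing these offsets is shown below. It assumes boxes are given in center-size form (x, y, w, h) as NumPy arrays; the same function gives t when fed the predicted box and t* when fed the ground-truth box.

```python
# Minimal sketch of the offset encoding in Eqs.(4)-(7).
import numpy as np

def encode(box, anchor):
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha,
                     np.log(w / wa), np.log(h / ha)])  # (t_x, t_y, t_w, t_h)

t_star = encode(box=(120., 80., 60., 40.), anchor=(100., 90., 50., 50.))
```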

In the middle of the RPN, the "cls" and "reg" branches perform their respective calculations on these anchors, as shown in Eqs.(8)-(11). At the end of the RPN, the results of the two branches are combined: the anchors are first screened (out-of-boundary anchors are removed, and duplicates are removed by the non-maximum suppression algorithm based on the classification scores) and then initially offset according to the regression results. At this point, the output boxes become proposals.

    The offset formulas are shown as Eqs.(8)-(11).

x = tx wa + xa,  (8)

y = ty ha + ya,  (9)

w = wa exp(tw),  (10)

h = ha exp(th).  (11)
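The decoding of Eqs.(8)-(11), i.e. applying a predicted offset t to an anchor to obtain a proposal, can be sketched as follows (center-size box layout assumed, mirroring the encoding sketch above).

```python
# Minimal sketch of the offset decoding in Eqs.(8)-(11).
import numpy as np

def decode(t, anchor):
    tx, ty, tw, th = t
    xa, ya, wa, ha = anchor
    return np.array([tx * wa + xa, ty * ha + ya,
                     wa * np.exp(tw), ha * np.exp(th)])  # proposal (x, y, w, h)
```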

Because anchors usually overlap, proposals for the same object will also overlap. To solve the problem of overlapping proposals, the NMS algorithm is adopted: if the intersection over union (IoU) between two proposals is greater than a preset threshold, the proposal with the lower score is discarded.

If the IoU threshold is too small, some objects may be lost; if it is too large, many redundant proposals may remain. A typical IoU threshold is 0.6. After NMS, the proposals are sorted by score and the top n are kept.
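A minimal NMS sketch matching this description is given below; the 0.6 threshold is the typical value quoted in the text, while the top_n limit is an assumed placeholder for the unspecified n.

```python
# Minimal sketch of greedy non-maximum suppression over (x1, y1, x2, y2) boxes.
import numpy as np

def nms(boxes, scores, iou_thr=0.6, top_n=300):
    order = np.argsort(scores)[::-1]           # highest score first
    keep = []
    while order.size > 0 and len(keep) < top_n:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]           # drop lower-scored boxes that overlap too much
    return keep
```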

    1.3 Asymmetric convolution block

In the asymmetric convolution network (ACNet), three parallel kernels are used to replace the original square kernel, as shown in Fig.4.

    Fig.4 Overview of ACNet[16]

Given a network, each square convolution kernel is replaced by an ACB module and the network is trained to convergence. Then the weights of the asymmetric kernels in each ACB are added to the corresponding positions of the square kernel, and ACNet is transformed into a structure equivalent to the original network. ACNet can improve the performance of the benchmark model and shows clear advantages on the PASCAL_VOC 2007 data. In addition, ACNet introduces no extra parameters, can be combined with different CNN structures without careful parameter tuning, and is easy to implement on mainstream CNN frameworks without additional inference-time overhead.
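A minimal sketch of an asymmetric convolution block is given below (an assumed illustration, not the authors' or the ACNet reference implementation, which also involves batch normalization). Three parallel convolutions are summed during training; the 1×3 and 3×1 weights can later be folded into the 3×3 kernel so that inference uses a single square convolution.

```python
# Minimal sketch of an ACB: 3x3 + 1x3 + 3x1 parallel branches, with kernel fusion.
import torch
import torch.nn as nn

class ACB(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.square = nn.Conv2d(in_ch, out_ch, (3, 3), padding=(1, 1), bias=False)
        self.hor    = nn.Conv2d(in_ch, out_ch, (1, 3), padding=(0, 1), bias=False)
        self.ver    = nn.Conv2d(in_ch, out_ch, (3, 1), padding=(1, 0), bias=False)

    def forward(self, x):
        return self.square(x) + self.hor(x) + self.ver(x)   # training-time branches

    def fuse(self):
        # Fold the asymmetric kernels into the square kernel for deployment.
        fused = nn.Conv2d(self.square.in_channels, self.square.out_channels,
                          (3, 3), padding=(1, 1), bias=False)
        w = self.square.weight.data.clone()
        w[:, :, 1:2, :] += self.hor.weight.data   # add the 1x3 kernel to the middle row
        w[:, :, :, 1:2] += self.ver.weight.data   # add the 3x1 kernel to the middle column
        fused.weight.data = w
        return fused

x = torch.randn(1, 64, 38, 38)
block = ACB(64, 64)
assert torch.allclose(block(x), block.fuse()(x), atol=1e-5)  # equivalent after fusion
```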

    1.4 Improved Faster-RCNN algorithm

An improved target detection algorithm based on Faster-RCNN is proposed, which proceeds in the following steps.

Step 1: Use the backbone network ResNet to extract features and obtain a shared feature map.

Step 2: Pass the shared feature map through an asymmetric convolution block, and then through two 1×1 convolution kernels.

Step 3: Use the RPN to generate a set of anchor boxes, clip and filter them, and then use softmax to determine whether each anchor is foreground or background.

Step 4: Map the proposal windows onto the convolutional feature map of the last layer, and generate a fixed-size feature map for each RoI through the RoI pooling layer (a minimal illustration is given after Step 5).

Step 5: Use the softmax loss function and the smooth L1 loss function for classification and regression, respectively.
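The RoI pooling of Step 4 can be illustrated with torchvision's roi_pool operator, used here as an assumed stand-in for the paper's RoI pooling layer; the 7×7 output size and the 1/16 feature stride are illustrative assumptions.

```python
# Minimal illustration of Step 4: proposals of different sizes are pooled to a fixed 7x7 grid.
import torch
from torchvision.ops import roi_pool

feature_map = torch.randn(1, 256, 38, 38)                  # shared feature map from the backbone
# proposals in image coordinates: (batch_index, x1, y1, x2, y2)
proposals = torch.tensor([[0, 10.0, 20.0, 200.0, 180.0],
                          [0, 50.0, 60.0, 300.0, 260.0]])
pooled = roi_pool(feature_map, proposals, output_size=(7, 7), spatial_scale=1 / 16)
print(pooled.shape)                                        # torch.Size([2, 256, 7, 7])
```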

    2 Experiment

    2.1 Experimental environment and datasets

The experimental platform built in this paper consists of a computer with an i5-8250 CPU, 8 GB RAM and a 64-bit Windows 10 operating system, and a server equipped with four GeForce RTX 2080 GPUs. The algorithm is implemented on the basis of Faster-RCNN. The data come from PASCAL_VOC 2007, and 5 011 photos taken at different times, places and lighting conditions are selected. The LabelImg software is used to label the targets in the images, and XML files in VOC format are obtained as the labels of the target detection dataset.

    2.2 ACBNet+Faster-RCNN target detection algorithm

The model training process is divided into two iterations. In the first iteration, Batch_Size is set to 2, the initial learning rate to 0.000 1 and the number of epochs to 50. In the second iteration, Batch_Size is set to 2, the initial learning rate to 0.000 01 and the number of epochs to 50.
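A minimal sketch of this two-stage schedule is given below. The model, data loader, optimizer choice and the assumption that the model returns a combined loss are all placeholders, not the paper's actual training code; only the batch size, learning rates and epoch counts come from the text.

```python
# Minimal sketch of the two-iteration training schedule (lr 1e-4 then 1e-5, 50 epochs each).
import torch

def train(model, loader, lr, epochs):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in loader:       # loader yields batches of size 2 (assumed)
            loss = model(images, targets)    # assumed: model returns the combined detection loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

# First iteration:  train(model, loader, lr=1e-4, epochs=50)
# Second iteration: train(model, loader, lr=1e-5, epochs=50)
```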

    2.2.1 Mean average precision value

For a deep learning target detection algorithm, detection accuracy is very important, so the mean average precision (mAP) is selected as the evaluation index. AP is the area under the curve drawn from combinations of precision and recall points: different confidence thresholds give different precision and recall values, and with a dense enough set of confidence thresholds, many precision-recall pairs are obtained. mAP is the average of the AP values of all classes. The experimental results of the mAP value are shown in Fig.5.
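The idea can be sketched as follows: integrate precision over recall for one class, then average over classes. This uses simple trapezoidal integration over given (recall, precision) points and is an assumed simplification, not the official PASCAL VOC evaluation protocol.

```python
# Minimal sketch of AP (area under the precision-recall curve) and mAP (mean over classes).
import numpy as np

def average_precision(recall, precision):
    r = np.asarray(recall, dtype=float)
    p = np.asarray(precision, dtype=float)
    order = np.argsort(r)
    r, p = r[order], p[order]
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2.0))  # trapezoid rule

def mean_average_precision(per_class_pr):
    return float(np.mean([average_precision(r, p) for r, p in per_class_pr]))

print(average_precision([0.0, 0.5, 1.0], [1.0, 0.8, 0.6]))  # 0.8 for this toy curve
```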

Fig.5 Experimental results of mAP value for each algorithm ((a) Faster-RCNN algorithm)

As shown in Fig.5, the mAP results obtained by all methods are presented. The abscissa in Fig.5 is the AP value of a single class, and the ordinate lists all the classes in this target detection task; 20 classes are tested in this experiment. The top of each sub-graph gives the mAP value of the corresponding algorithm. From these mAP values, it can be seen that the proposed method obtains a better mAP than the other three algorithms. In particular, the mAP value is increased by 0.38% over the original Faster-RCNN algorithm, by 3.02% over the RetinaNet algorithm and by 3.75% over the YOLOv4 algorithm. This further shows that the proposed algorithm performs well in the target detection process.

    2.2.2 Log average miss rate (LAMR)

Deep learning target detection algorithms are also generally evaluated by the relationship curve between the miss rate (MR) and the average number of false positives per image (FPPI). In this paper, the logarithmic mean of the MR over FPPI values in the interval [0.01, 100] is used as the evaluation standard, which is called the log average miss rate (LAMR) for short. The experimental results of the LAMR value are shown in Fig.6.
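The computation can be sketched as below: sample the miss rate at FPPI reference points log-spaced over [0.01, 100], average the logarithms and map back with the exponential. The number of reference points and the interpolation scheme are assumptions, not the paper's exact protocol.

```python
# Minimal sketch of the log average miss rate (LAMR) over FPPI in [0.01, 100].
import numpy as np

def lamr(fppi, miss_rate, num_points=9):
    refs = np.logspace(-2, 2, num_points)           # FPPI reference points in [0.01, 100]
    mr_at_refs = np.interp(refs, fppi, miss_rate)   # miss rate at each reference FPPI
    return float(np.exp(np.mean(np.log(np.maximum(mr_at_refs, 1e-10)))))

print(lamr(fppi=[0.01, 0.1, 1.0, 10.0, 100.0],
           miss_rate=[0.9, 0.6, 0.3, 0.15, 0.1]))
```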

Fig.6 Experimental results of LAMR value for each algorithm ((a) Faster-RCNN algorithm)

As shown in Fig.6, the LAMR results obtained by all methods are presented. The abscissa in Fig.6 is the MR value of a single class, and the ordinate lists all the classes in this target detection task; 20 classes are tested in this experiment. LAMR is the log average miss rate, so the smaller the result for each class, the better the algorithm's performance. Among the 20 detected classes, the LAMR values shown in Fig.6(a) are smaller than those of the comparison algorithms in Fig.6(b) and 6(c), and the proposed algorithm further improves on the original algorithm: for the 20 experimental classes, the miss rate shown in Fig.6(d) is lower than that of the original algorithm in Fig.6(a), which shows once again that the proposed algorithm achieves good results in the target detection process.

    3 Conclusions

An asymmetric convolution block is proposed and combined with the Faster-RCNN algorithm, so that the 3×3 convolution kernels are replaced by (1×3+3×1+3×3) convolution kernels. Without adding any model parameters, and compared with the original algorithm, the improved algorithm increases the mAP value of target detection, reduces the LAMR value, improves the detection rate and enhances the stability of the algorithm.

Although relatively good results have been achieved on the VOC2007 dataset, the algorithm still has two shortcomings when applied to target detection. Firstly, the algorithm is modified on the basis of the Faster-RCNN algorithm, which increases the complexity of the model. Secondly, the applicability of the algorithm is limited: if the model is applied to remote sensing images, railway images or scenes with high real-time requirements, its effect is not good. In the next step, the model can be optimized by analyzing the application scenarios of the target detection algorithm, eliminating redundancy in the model, adjusting the training parameters and improving the performance of the target detection algorithm, so as to further improve deep-learning-based object detection.
