
    Improved Lightweight Deep Learning Algorithm in 3D Reconstruction

    2022-11-11 10:47:28
    Computers Materials & Continua, 2022, Issue 9

    Tao Zhang and Yi Cao

    1 School of Mechanical Engineering, North China University of Water Conservancy and Hydroelectric Power, Zhengzhou, 450045, China

    2 Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, N9B 3P4, Canada

    Abstract: Three-dimensional (3D) reconstruction technology based on structured light has been widely used in industrial measurement owing to its many advantages. Aiming at the high mismatch rate and poor real-time performance caused by factors such as system jitter and noise, a lightweight stripe image feature extraction algorithm based on the You Only Look Once v4 (YOLOv4) network is proposed. First, MobileNetV3 is used as the backbone network to extract features effectively; then the Mish activation function and the Complete Intersection over Union (CIoU) loss function are used to compute an improved bounding-box regression loss, which effectively improves the accuracy and real-time performance of feature detection. Simulation results show that the model size of the improved algorithm is only 52 MB, the mean average precision (mAP) of fringe image data reconstruction reaches 82.11%, and the 3D point cloud restoration rate reaches 90.1%. Compared with existing models, it has clear advantages and can satisfy the accuracy and real-time requirements of reconstruction tasks on resource-constrained devices.

    Keywords: 3D reconstruction; feature extraction; deep learning; lightweight; YOLOv4

    1 Preface

    Optical three-dimensional (3D) measurement technology [1] is one of the most important research fields and directions in optical measurement. As an important 3D measurement method, fringe structured light technology can quickly and accurately obtain 3D point cloud data of the surface of a measured object, and is widely used in quality inspection, cultural relic protection, human-computer interaction, biomedicine and other fields [2,3]. The basic measurement procedure is as follows: project one or a group of structured fringes onto the surface of the object; the camera captures the fringe image modulated by the height of the object; a phase-retrieval algorithm computes the phase information carried in the fringes; and the final 3D information is obtained from the mapping relationships among phase, height, world coordinates and image pixel coordinates. Techniques such as fringe analysis, phase extraction and phase unwrapping have an important influence on the accuracy of 3D measurement. How to obtain high-precision depth information from the fringe image of the measured object remains the focus and difficulty of fringe projection 3D measurement technology.

    Algorithms for obtaining depth information (or the unwrapped phase) from fringe images usually require two main steps: phase extraction, represented by phase shifting and Fourier transform methods [4], and phase unwrapping, represented by spatial and temporal phase unwrapping [5,6]. With the successful application of deep learning in the field of 3D measurement, 3D reconstruction technology based on Convolutional Neural Networks (CNN) [7-9] has developed continuously. A typical representative is the Region-Convolutional Neural Network (R-CNN) series of algorithms based on region selection, but these methods take a long time to detect and cannot achieve real-time detection; the Single Shot MultiBox Detector (SSD) [10], which fuses multi-scale detection, improves speed, but its performance on small targets is insufficient. The YOLO [11-14] series is among the most widely used algorithms in the field of deep learning. YOLOv1 has a fixed input size and detects objects that occupy a relatively small area poorly; YOLOv2 removes the fully connected layer and improves detection speed; YOLOv3 obtains better detection performance and can effectively detect small targets without a significant loss in speed; YOLOv4, the fourth version of the series, significantly improves both accuracy and speed. However, as neural network models and their parameter counts keep growing, they consume large amounts of computing and storage resources, making them difficult to deploy on resource-limited mobile terminals such as mobile phones and tablet computers.

    Parameter compression of the constructed YOLOv4 model can resolve the contradiction between a huge network model and limited storage space. Widely used model parameter compression methods include weight parameter quantization [15], the Singular Value Decomposition (SVD) method [13], and so on.

    Weight parameter quantization reduces resource consumption by reducing the precision of the weights. For example, in common development frameworks [16-18], the activations and weights of neural networks are usually represented as floating-point data. Using low-bit fixed-point data, or even a small set of trained values, in place of floating-point data helps reduce the bandwidth and storage requirements of the neural network processing system. The disadvantage is that the reduced numerical precision lowers classification accuracy, and the compression ratio is difficult to improve further. Peng et al. [19] greatly reduced model parameters and resource occupation by adding the Ghost module and the Shuffle Conv module, but accuracy dropped by 0.2% compared with the original network. The SVD decomposition method reduces resource consumption by reducing the number of weights. Literature [20] proposed a global average pooling algorithm to replace the fully connected layer; GoogLeNet uses this algorithm to reduce the scale of network training, and removing the fully connected layer does not affect image recognition accuracy, with the algorithm reaching 93.3% recognition accuracy on ImageNet. At the same time, literature [21] proposed the 1×1 convolution kernel, which was successfully applied in GoogLeNet and ResNet and reduces the number of parameters.
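As a concrete illustration of the SVD idea, a rank-r factorization of a fully connected weight matrix replaces its m×n weights with r(m+n) weights. The following NumPy sketch uses hypothetical layer sizes, not values from the paper:

```python
import numpy as np

# Hypothetical fully connected weight matrix (1024 x 512 is illustrative).
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))

def svd_compress(W, rank):
    """Rank-r approximation W ~ U_r @ Vt_r via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    Vt_r = Vt[:rank, :]
    return U_r, Vt_r

U_r, Vt_r = svd_compress(W, rank=64)
original_params = W.size                  # 1024 * 512 = 524288 weights
compressed_params = U_r.size + Vt_r.size  # 64 * (1024 + 512) = 98304 weights
print(original_params, compressed_params)
```

At inference time the layer is applied as two smaller matrix multiplies, x @ U_r @ Vt_r, trading a small approximation error for roughly a 5x reduction in weights at this rank.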

    This paper uses the YOLOv4 network model to extract features from fringe structured light images. Considering that fringe image features are not obvious under the influence of illumination and noise, the feature extraction network is improved. The algorithm first replaces the Cross-Stage Partial Darknet53 (CSPDarknet53) backbone of YOLOv4 with the MobileNetV3 structure to reduce the number of backbone parameters, and then introduces the Mish activation function and the CIoU loss function to compute an improved bounding-box regression loss, which effectively improves the generalization of feature extraction.

    2 3D Reconstruction Algorithm

    2.1 Stripe Structured Light 3D Reconstruction Algorithm

    The principle of the fringe structured light 3D reconstruction algorithm is shown in Fig. 1. Assume that a light beam projected by the projection system intersects the reference plane at point B, which is imaged at point C on the camera image plane. When the object is placed, suppose that another light beam intersects the object at point D, which is also imaged at point C in the camera image plane. Point C in the phase plane therefore has two phase values, before and after the object is placed, and the height h of point D can be derived from the phase difference.

    Figure 1: 3D measurement principle of structured light

    The phase shift method is one of the most commonly used fringe structured light 3D reconstruction techniques. A series of fringe images In(x,y) with phase shifts of 2πn/N is projected onto the reconstruction target; the wrapped phase of the standard N-step phase shift method is:

    The wrapped phase is discontinuous, with values in [-π, π]. The unwrapped phase Φ(x,y) required by the subsequent 3D reconstruction is obtained by phase unwrapping, which recovers the continuous phase from φ(x,y) by adding an appropriate integer multiple k(x,y) of 2π, thereby eliminating phase jumps and reconstructing the physically continuous phase change. The relationship between the unwrapped phase and the wrapped phase is therefore:

    Φ(x,y) = φ(x,y) + 2π·k(x,y)
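The two steps above, phase extraction and phase unwrapping, can be sketched numerically. The NumPy example below assumes the common four-step convention In = A + B·cos(φ + 2πn/4), for which the wrapped phase reduces to arctan2(I3 − I1, I0 − I2); the carrier phase and modulation values are synthetic, not the paper's data:

```python
import numpy as np

# Synthetic ground truth: a linear carrier phase across one image row.
phi_true = np.linspace(0, 6 * np.pi, 500)
A, B = 0.5, 0.4  # background intensity and fringe modulation

# Four fringe images with phase shifts of 2*pi*n/4 (four-step method).
I = [A + B * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]

# Wrapped phase from the standard four-step formula, in (-pi, pi].
phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])

# Spatial phase unwrapping: add the multiple of 2*pi that removes jumps.
phi_unwrapped = np.unwrap(phi_wrapped)

# The unwrapped phase matches the truth up to a constant 2*pi offset.
offset = phi_unwrapped[0] - phi_true[0]
print(float(np.max(np.abs(phi_unwrapped - offset - phi_true))))
```

On this noise-free signal the residual is at floating-point level; with real camera noise, temporal unwrapping or multi-frequency fringes are typically needed instead of the simple spatial unwrap shown here.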

    Finally, the mapping expression between the unwrapped phase and height is determined and its coefficients are calibrated, so that the phase data of the measured object can be converted into depth data and the 3D topography of the object surface obtained.

    2.2 YOLOv4 Network

    YOLOv4 is mainly composed of a Backbone, Neck and Head, as shown in Fig. 2. The Backbone of YOLOv4 uses the CSPDarknet53 network, which builds on the Darknet53 network of YOLOv3 with ideas drawn from CSPNet [22]. The Neck is composed of the Spatial Pyramid Pooling Network (SPPNet) structure and the Path Aggregation Network (PANet): SPPNet increases the receptive field of the network, and PANet fuses the deep and shallow features of the Backbone. In the Head detection part, the YOLOv4 algorithm uses the YOLOv3 detection head, performing two convolution operations of size 3×3 and 1×1 to complete the detection.

    Figure 2: Structure of YOLOv4

    2.3 Network Model Compression

    The YOLOv4 network model is improved in two aspects: the MobileNetV3 structure replaces the backbone feature extraction network of YOLOv4, greatly reducing the number of backbone parameters through the depthwise separable convolutions in MobileNetV3; and the Mish activation function and CIoU loss function are introduced to compute an improved bounding-box regression loss, effectively improving the generalization of feature extraction.

    The YOLOv4 algorithm uses the CSPDarknet53 network as its feature extraction network, which contains 5 residual blocks stacked from 1, 2, 8, 8 and 4 residual units respectively. The algorithm has 104 network layers in total, including 72 convolutional layers, and uses a large number of standard 3×3 convolution operations. This consumes substantial computing resources, making real-time performance difficult to achieve. Moreover, as features pass through more convolutional layers, the ability to extract locally refined features gradually decreases, which degrades detection of small features. It is therefore necessary to improve the YOLOv4 feature extraction network to meet small-target detection and real-time requirements.

    The MobileNet network uses depthwise separable convolution, converting traditional convolution into a depthwise convolution followed by a 1×1 pointwise convolution, and introduces a width multiplier and a resolution multiplier to control the number of model parameters. MobileNetV3 is the third generation of the MobileNet family: it combines the depthwise separable convolution of MobileNetV1 with the inverted residuals and linear bottleneck of MobileNetV2 and a Squeeze-and-Excitation (SE) attention mechanism. MobileNetV3 uses neural architecture search (NAS) to find the network configuration and parameters, and improves the swish activation function into the cheaper h-swish, achieving less computation and higher accuracy. The network first convolves each channel of the input feature map with its own 3×3 kernel, obtaining a feature map whose channel count equals the input's, and then convolves this feature map with N 1×1 kernels to obtain a new N-channel feature map. Compared with the CSPDarknet53 network, it maintains a relatively powerful feature extraction capability while greatly reducing model size, making deployment on mobile terminals in industrial settings more convenient. It is also shallower than CSPDarknet53, which helps extract locally refined features and improves detection of small targets.
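The parameter saving of depthwise separable convolution can be checked by counting weights: a standard k×k convolution needs k²·Cin·Cout weights, while the depthwise + 1×1 pointwise pair needs k²·Cin + Cin·Cout, a reduction by a factor of 1/Cout + 1/k². A quick sketch with illustrative layer sizes (biases ignored):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one kernel per channel) + 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Example layer: 256 -> 256 channels, 3x3 kernel.
std = conv_params(256, 256, 3)                 # 589824
dws = depthwise_separable_params(256, 256, 3)  # 2304 + 65536 = 67840
print(std, dws, dws / std)                     # ratio = 1/9 + 1/256
```

For 3×3 kernels the saving approaches 9x as the channel count grows, which is the main source of MobileNet's size reduction relative to CSPDarknet53.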

    The model is trained with the self-regularized, non-monotonic Mish activation function, which ensures effective backpropagation of the training loss and obtains better generalization and higher accuracy while preserving convergence speed. The calculation formula is:

    f(x) = x · tanh(ln(1 + e^x))

    where x is the input of the activation layer and f(x) is its output.
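A minimal reference implementation of the Mish formula above; it is written for clarity only (the naive exp overflows for very large inputs, so production code would use a numerically stable softplus):

```python
import math

def mish(x):
    """Mish activation: f(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))."""
    return x * math.tanh(math.log1p(math.exp(x)))

print(mish(0.0))  # 0.0
```

Unlike ReLU, Mish is smooth and lets small negative values pass through attenuated rather than zeroed, which is what "non-monotonic" refers to.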

    In order to detect targets more accurately, the training loss is composed of the weighted sum of the bounding-box regression loss, the confidence loss and the classification loss, from which the backpropagated gradient is calculated. The calculation formula is:

    where L represents the total training loss; L_box the bounding-box regression loss; L_obj the target confidence loss; L_cls the category classification loss; λ_iou the bounding-box regression loss weight coefficient; S the number of grids; B the number of anchor candidate boxes generated by each grid; l_ij,obj indicates that a target is present; and L_CIoU the boundary loss measured by CIoU.
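For reference, CIoU augments plain IoU with a normalized center-distance penalty and an aspect-ratio consistency term, and the regression loss is 1 − CIoU. A minimal sketch for axis-aligned boxes; the (x1, y1, x2, y2) coordinate convention and the small epsilon are assumptions for this illustration, not details taken from the paper:

```python
import math

def ciou_loss(box1, box2):
    """CIoU loss between two valid boxes given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box1
    gx1, gy1, gx2, gy2 = box2

    # Intersection over union.
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union

    # Squared center distance over squared enclosing-box diagonal.
    rho2 = ((x1 + x2 - gx1 - gx2) ** 2 + (y1 + y2 - gy1 - gy2) ** 2) / 4
    cw = max(x2, gx2) - min(x1, gx1)
    ch = max(y2, gy2) - min(y1, gy1)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term v and its trade-off weight alpha.
    v = (4 / math.pi ** 2) * (
        math.atan((gx2 - gx1) / (gy2 - gy1)) - math.atan((x2 - x1) / (y2 - y1))
    ) ** 2
    alpha = v / (1 - iou + v + 1e-9)  # epsilon guards division by zero

    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 for identical boxes
```

Unlike IoU loss, this stays informative for non-overlapping boxes: the center-distance term still provides a gradient pulling the prediction toward the target.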

    λ_iou controls the proportion of the bounding-box regression loss in the overall training loss, which can improve detection accuracy. The confidence loss is calculated as follows:

    where λ_cls is the confidence loss weight coefficient; l_ij,noobj indicates that no target is present; λ_C is the loss weight coefficient of each category of target; C_i is the confidence of the i-th grid; and Ĉ_i is the target confidence.

    By changing λ_cls, the weight of the confidence loss within the overall training loss can be adjusted; by changing λ_C, the weight of samples of different categories in the training loss can be set, accommodating categories with fewer training samples in complex problems.

    3 Experiment and Result Analysis

    In order to verify the reliability of the algorithm and its effect in actual measurement, a grating 3D projection measurement system composed of a projector and a camera was built, as shown in Fig. 3. The resolution of the camera (Hikvision MV-CA060-10GC) is 3072×2048 and that of the projector (BenQ es6299) is 1920×1200; the high-speed vision processor has a CPU i9-10900X (3.7 GHz, 4.5 GHz Turbo), 64 GB DDR4 memory and a 32-bit Windows operating system.

    Figure 3: Experimental system

    The experimental steps are as follows:

    (1) Generate sine grating fringes; a four-step phase shift fringe pattern is used.

    (2) Project the sine grating fringe pattern onto the homogeneous whiteboard, and collect the grating fringes modulated by the surface of the object.

    (3) Use the training data to train the YOLOv4 network model to obtain the mapping between the fringe image and the depth image.

    (4) Use the trained network to obtain the depth data of the fringe image.
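Step (1) can be sketched as follows: four sinusoidal gratings with phase shifts of 2πn/4, generated at the projector resolution stated above. The fringe period of 32 pixels is an illustrative choice, not the paper's setting:

```python
import numpy as np

def four_step_fringes(width=1920, height=1200, period=32):
    """Four sinusoidal gratings with phase shifts of 2*pi*n/4, as 8-bit images."""
    x = np.arange(width)
    patterns = []
    for n in range(4):
        # Intensity in [0, 1], shifted by 2*pi*n/4, then scaled to 8-bit.
        fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / period + 2 * np.pi * n / 4)
        patterns.append(np.tile((255 * fringe).astype(np.uint8), (height, 1)))
    return patterns

patterns = four_step_fringes()
print(len(patterns), patterns[0].shape)  # 4 (1200, 1920)
```

Each pattern is constant along the vertical axis, so the phase encodes only the horizontal projector coordinate, which is what the later phase-to-height mapping relies on.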

    For deep learning network training, the number of training epochs is uniformly set to 100, the batch size to 16, the initial learning rate to 1e-3, and the initial weights are all set to 1. The training set contains 5012 photos in total; in each training round, 90% of the photos are used for training and the other 10% for real-time validation of the training effect. The experiment selects the set of weight files with the lowest loss in each round and compares mAP, model size, and real-time detection Frames Per Second (FPS).

    As shown in Tab. 1, the model size of standard YOLOv4 is about 220 MB, with an FPS of 6.33. After replacing CSPDarknet53 with MobileNetV3, the model size decreases to only 50 MB and FPS rises to 14.35, but mAP drops to 77.48%. It can be concluded that although MobileNetV3 greatly simplifies the network structure, mAP is also greatly reduced. With the improved model of this paper, mAP increases to 82.11%, the model size becomes 52 MB, and the FPS is 13.67. Although the improvements make the model slightly larger and the FPS slightly lower, they ensure a higher mAP.

    Table 1: Comparison of deep learning models

    Using the experimental system and the trained deep learning models, 3D reconstruction is performed on an object of simple shape and an object of complex shape. The experiment trains on the high-speed vision processor, using pre-trained weights for both the original YOLOv4 network and the improved YOLOv4 model of this paper, and the results of the three models are compared. Fig. 4 shows the simple-shaped test input, a small spoon, captured as four fringe images with different phases; Fig. 5 shows the final optimal depth image; Tab. 2 compares the 3D reconstruction results of the three models.

    Figure 4: Test fringe image

    Figure 5: Depth image

    Table 2: Comparison of 3D reconstruction results

    Fig. 6 shows the complex-shaped test input, a human face, captured as four fringe images with different phases; Fig. 7 shows the final optimal depth image; Tab. 3 compares the 3D reconstruction results of the three models.

    Figure 6: Test fringe image

    Figure 7: Depth image

    Table 3: Comparison of 3D reconstruction results

    Simulating the 3D reconstruction of the two different objects shows that, compared with the simple object of the first example, the object of the second example is more complex and has richer fringe features, making the phase change easier to obtain, so its reconstruction accuracy and speed are better than in the first example. The simulation results of the three algorithms also show that the lightweight YOLOv4 model of this paper is superior to the other two models in average phase error, point cloud restoration rate and running time, although detail reconstruction at the sub-pixel level still needs further research.

    4 Conclusions

    Based on the 3D model of fringe structured light reconstruction, this paper proposes a stripe image feature extraction algorithm based on lightweight YOLOv4. The advantage of this model is that it replaces the CSPDarknet backbone network of YOLOv4 with the lightweight MobileNet network, which simplifies the network structure and improves real-time detection performance, and uses the Mish activation function and the CIoU loss function to compute an improved bounding-box regression loss, which effectively improves feature detection accuracy and real-time performance. The experimental results show that, compared with existing 3D reconstruction methods, the depth information calculated by the proposed method is more accurate, improving the accuracy of 3D measurement results from fringe images. It can therefore be used effectively in the field of fringe projection 3D measurement and better meets the needs of 3D shape measurement of objects in scientific research and practical applications. Future work will study the effectiveness of the proposed method in more experimental scenarios, such as the effectiveness and accuracy of fringe image depth estimation for colored objects, highly reflective objects, and out-of-focus projection. On the other hand, the generalization ability of the model is a common problem in deep learning, and it is also a key issue to be addressed in improving the proposed method.

    Acknowledgement: The authors thank Dr. Jinxing Niu for his suggestions. The authors thank the anonymous reviewers and the editor for the instructive suggestions that significantly improved the quality of this paper.

    Funding Statement: This work is funded by the Training Plan for Young Backbone Teachers in Colleges and Universities in Henan Province under Grant No. 2021GGJS077.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
