
    Research on Multi-View Image Reconstruction Technology Based on Auto-Encoding Learning

    2022-11-11 10:45:16  Tao Zhang, Shaokui Gu, Jinxing Niu and Yi Cao
    Computers, Materials & Continua, 2022, Issue 9

    Tao Zhang, Shaokui Gu, Jinxing Niu,* and Yi Cao

    1 School of Mechanical Engineering, North China University of Water Conservancy and Hydroelectric Power, Zhengzhou, 450045, China

    2 Department of Electrical and Computer Engineering, University of Windsor, Windsor, N9B 3P4, ON, Canada

    Abstract: The traditional three-dimensional (3D) image reconstruction method is highly dependent on the environment, reconstructs poorly, and is prone to mismatches and poor real-time performance. The accuracy of feature extraction from multiple images affects both the reliability and the real-time performance of 3D reconstruction technology. To solve this problem, a multi-view image 3D reconstruction algorithm based on a self-encoding convolutional neural network is proposed in this paper. The algorithm first extracts the feature information of multiple two-dimensional (2D) images based on the scale and rotation invariance of the scale-invariant feature transform (SIFT) operator. Secondly, a self-encoding learning neural network is introduced into the feature refinement process to take full advantage of its feature extraction ability. Then, Fish-Net is used to replace the U-Net structure inside the self-encoding network to improve gradient propagation between U-Net structures, and a Generative Adversarial Network (GAN) loss function replaces the mean square error (MSE) to better express image features, discarding useless features to obtain effective ones. Finally, an incremental structure-from-motion (SFM) algorithm is performed to calculate the rotation matrix and translation vector of the camera, the feature points are triangulated to obtain a sparse spatial point cloud, and MeshLab software is used to display the results. Simulation experiments show that, compared with the traditional method, the image feature extraction method proposed in this paper significantly improves the rendering effect of the 3D point cloud, with an accuracy rate of 92.5% and a reconstruction completeness rate of 83.6%.

    Keywords: Multi-view; image reconstruction; self-encoding; feature extraction

    1 Preface

    The emergence of 3D digital technology has greatly promoted the accurate modeling of 3D objects [1-3]. Traditional 3D digitization touches and disturbs the reconstructed target while acquiring data, and the acquisition time is relatively long. The development and application of non-contact 3D object reconstruction technology has therefore become an urgent problem in the field of accurate digitization of object surfaces.

    Researchers usually use two 3D reconstruction approaches. The first directly obtains a 3D point cloud of the object through sensors [4-6]. Such methods can obtain high-precision surface point clouds, but suffer from complicated and expensive equipment, large data volumes, low reconstruction efficiency, and strong restrictions on environmental lighting. The second recovers a 3D point cloud of the object surface from 2D images [6]. This type of method is simple to operate and low in cost, but the generated point cloud model often contains a lot of noise and is greatly affected by the camera's calibration accuracy and the ambient light. The SFM reconstruction method, which acquires 2D image sequences and can automatically perform camera calibration, has great advantages; however, for target objects with complex structure, the sparse point cloud obtained by SFM contains little 3D information. In this pipeline, operators such as SIFT [7], speeded-up robust features (SURF) [8], and Oriented FAST and Rotated BRIEF (ORB) [9] are first used to extract and match image features, with the goal of obtaining high-quality effective features. The pose parameters of the multi-view camera and the sparse point cloud of the scene are then obtained through the structure-from-motion algorithm, the scene is densely reconstructed through a multi-view stereo algorithm, and finally the 3D model is obtained by post-processing the dense point cloud.

    At present, some scholars use deep learning for 3D reconstruction, but so far no network can take only multi-view images as input and directly output 3D point clouds or other 3D structures. This is because deep learning has difficulty handling spatial geometry problems, while the whole 3D reconstruction process relies heavily on spatial geometry theory. The current mainstream approach is to use deep learning to replace certain steps in the overall 3D reconstruction framework. For example, the learned invariant feature transform (LIFT) [10] framework uses deep learning to replace the feature extraction module, and networks such as Bundle Adjustment Networks (BA-Net) [11,12] use deep learning to estimate the global parameters of the camera. The emergence of deep learning has steadily raised the accuracy of 3D reconstruction. Its most widespread use at present is image depth estimation, including monocular, binocular, and multi-view depth estimation; 3D reconstruction is completed by mapping the estimated depth images into 3D space to obtain a dense point cloud and fusing the information. Deep learning can also add semantic information to 3D reconstruction, such as semantic segmentation of point clouds and 3D semantic reconstruction of scenes. The image features of some objects are similar to background features and are difficult to detect with traditional machine vision methods. The autoencoder can learn effective features from a large amount of unlabeled data, avoiding the supervised-learning requirement for large quantities of high-quality labeled training data. Chen et al. [13] proposed the U-Net network, which fuses the feature maps in encoding with the feature maps of corresponding size in decoding through shortcut connections, so as to recover more spatial information during upsampling. Liu et al. [14] designed a feature compression-activation module based on squeeze-and-excitation networks (SENet) and constructed the SE-Unet model to strengthen the network's learning of image features. Ma et al. [15] proposed a new decoder structure that introduces a self-attention mechanism to decode cascaded deep and shallow features, reducing accuracy loss during upsampling. Cheng et al. [16] proposed a new semantic segmentation algorithm that adopts a dense layer structure, uses grouped convolution to speed up computation, and introduces an attention mechanism to improve the segmentation effect. Current methods focus on local features in 2D images and do not consider the connection and correlation of features across multi-view images.

    To solve the above problems, a multi-view 3D reconstruction algorithm based on a self-encoding network is proposed, which extracts the features of 2D images through the self-encoding network. Firstly, multiple images from different perspectives are collected, and their shape features are extracted by a convolutional neural network and its extended self-encoding network. Secondly, Fish-Net is used to replace U-Net to solve the end-to-end communication problem and reduce feature loss. Finally, a GAN loss function is used to reduce the loss of image edges and achieve fine extraction of image features. Experiments show that features extracted with the proposed method are more discriminative than handcrafted features.

    2 Multi-View 3D Reconstruction Algorithm

    The multi-view 3D reconstruction algorithm is shown in Fig. 1. The basic principle is: extract image features and perform feature matching, calculate the essential matrix from the matched points, then run an incremental or global SFM algorithm to calculate the rotation matrix and translation vector of the camera, and triangulate the feature points to get a sparse spatial point cloud.

    Figure 1:Multi-view 3D reconstruction algorithm

    3 Image Feature Extraction Algorithm Based on Auto-Encoding Network

    Feature extraction is one of the key technologies of multi-view image reconstruction; its purpose is to obtain high-quality effective features in preparation for image matching.

    3.1 Auto-Encoding Algorithm

    The autoencoder (AE) [17,18] is an unsupervised artificial neural network model widely used in data dimensionality reduction, noise reduction, and sample reconstruction for data visualization, as shown in Fig. 2. The autoencoder builds a U-Net network structure on the idea of sparse coding: it generates low-dimensional features by encoding high-dimensional sample data, and uses the low-dimensional, high-order coding features to reconstruct the original samples. The encoding network performs a non-linear mapping of the input data and outputs a feature map, which is then convolved and down-sampled to obtain multiple layers of hidden feature information. The decoding network uses the feature map to reconstruct the input data.
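A minimal convolutional autoencoder with this encode/decode shape can be sketched in PyTorch; the channel counts and layer depths here are illustrative choices, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder: the encoder compresses the
    input into a low-resolution feature map, the decoder reconstructs
    the input from it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Training such a model to reproduce its input forces the bottleneck feature map to carry the most informative structure of the image.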

    Since surface defects of objects are local anomalies in uniform textures, defects and background textures have different characteristic representations. The auto-encoding network is used to learn the representation of defect data and find the commonality of object surface defects. The problem of detecting surface defects therefore becomes a segmentation problem, and the encoder-decoder architecture converts the input defect image into a pixel-wise prediction mask.

    Figure 2:Structure of autoencoder

    In convolutional autoencoder networks, the mean square error (MSE) [19] is often used as the loss function. It evaluates the pixel-level difference between two images, and in an image reconstruction network it measures the pixel-level difference between the reconstructed image and the input image. This function focuses on the global difference of the image and does not consider local texture features; therefore an inpainting model trained with MSE performs better on regular texture samples than on irregular textures.

    3.2 Improvements to Auto-Encoding Deep Learning Networks

    The AE network is an end-to-end, simple, lightweight model, but its expressiveness still does not achieve the desired effect. Therefore, Fish-Net is used instead of U-Net. In addition, 2D feature point information alone does not provide the best guidance, so more information should be added to guide the AE. Finally, the AE uses a global loss function, but pixels inside the object should be weighted less than pixels near edges and surfaces.

    3.2.1 Choice of Reconstruct Network Frames

    In the AE, two U-Nets in series are used to train a network that outputs image features end-to-end. U-Net uses an "up/down sampling + skip connection" structure, and the resulting network has the advantages of easy convergence and light weight: the deep layers can quickly obtain the gradient of the shallow layers and retain pixel position information. However, when multiple U-Nets act together on the same model, adjacent U-Nets cooperate poorly. Therefore, many improved models based on U-Net have been proposed, such as Fish-Net.

    Fish-Net is an improvement on U-Net, as shown in Fig. 3. It consists of three parts: tail, body, and head. The tail uses existing network structures to obtain deep low-resolution features from the input image. The body contains upsampling and refinement blocks and obtains high-resolution features carrying high-level semantic information. The head contains down-sampling and refinement blocks that preserve and refine the features obtained from the three parts; the refined features of the head's last convolutional layer are used for the final task decision. When multiple U-Nets are connected in series, skip connections exist between corresponding upsampling and downsampling layers within a single U-Net, but not between the upsampling and downsampling layers of two adjacent U-Nets, so the paths between U-Nets may become a bottleneck for gradient propagation.

    Figure 3:Structure of Fish-Net

    Therefore, in addition to connecting its own corresponding downsampling and upsampling layers, Fish-Net also adds skip connections between each U-Net's upsampling layers and the adjacent U-Net's downsampling layers, so that a later U-Net can easily feel the gradient of the previous one. Fish-Net has two convolutional blocks for upsampling and downsampling, namely the upsampling-refinement block (UR-block) and the downsampling-refinement block (DR-block). For downsampling with stride 2, Fish-Net sets the convolution kernel size to 2×2, which avoids overlap between pixels. To avoid the weighting introduced by deconvolution, Fish-Net chooses nearest-neighbor interpolation for upsampling. Since the upsampling operation dilutes the input features at a lower resolution, Fish-Net also applies dilated convolutions to refine them.
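The UR-/DR-block ideas above can be sketched in PyTorch as follows; the channel counts and dilation rate are illustrative assumptions, and the cross-U-Net skip concatenation is omitted for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class URBlock(nn.Module):
    """Upsampling-refinement sketch in the spirit of Fish-Net's UR-block:
    parameter-free nearest-neighbour upsampling (no learned deconvolution)
    followed by a dilated convolution to refine the diluted features."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.refine(F.interpolate(x, scale_factor=2, mode="nearest"))

class DRBlock(nn.Module):
    """Downsampling-refinement sketch: a 2x2 convolution with stride 2,
    so receptive fields tile the input without overlapping pixels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 2, stride=2),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.down(x)
```

In a full model, features from one U-Net's upsampling path would additionally be concatenated into the next U-Net's downsampling path before refinement.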

    3.2.2 Improvement of Loss Function

    The loss function guides model training and plays a key role in the training effect. The GAN loss function used in this article is a weighted sum of content loss and adversarial loss:

    L_total = L_cont + λ·L_adv

    where L_total is the total loss, L_cont the content loss, L_adv the adversarial loss, and λ the weight of the adversarial loss. The adversarial loss, which makes the generated point cloud closer to the actual point cloud by continuously optimizing the generator and discriminator D, takes the standard GAN form

    L_adv = E_{B∼p_actual(B)}[log D(B)] + E_{S∼p_gen(S)}[log(1 − D(S))]

    where p_gen(S) is the distribution of the generated point cloud and p_actual(B) is the distribution of the actual point cloud. This loss function makes the generated point cloud more realistic in visual terms. The content loss in this article is a multi-stage loss function. Considering the different edge features between the blurred image and the clear image in Stage 1, an L1 loss function is used there. This function, which uses L1 gradient regularization [20-22] to constrain low-frequency feature details and retain more image edge information and structural detail, is expressed as

    L_S1 = ‖B − S1‖ + β‖∇B − ∇S1‖

    where ‖B − S1‖ is the L1 loss, ∇ is the gradient operator, and β is the weight of the L1 gradient regularization. An L2 loss function is used in Stage 2 and Stage 3; it helps compensate for the lack of high-frequency feature information during image generation. Compared with L1 loss, images generated with L2 loss training conform better to the overall distribution of natural images. The function is expressed as

    L_Si = (1 / (c_i · w_i · h_i)) · ‖L_i − S_i‖²

    where L_i is the output of the generator model at the i-th stage, S_i is the clear image of the i-th stage, and c_i, w_i, and h_i are the number of channels, the width, and the height of the i-th stage.
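Under the definitions above, the loss terms can be sketched in PyTorch; the weights beta and lam are illustrative placeholders, not values reported in the paper:

```python
import torch
import torch.nn.functional as F

def image_gradients(img):
    """Forward-difference image gradients along width and height."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def stage1_content_loss(pred, target, beta=0.1):
    """Stage-1 content loss: L1 difference plus L1 gradient
    regularization, i.e. |B - S1| + beta * |grad B - grad S1|."""
    dxp, dyp = image_gradients(pred)
    dxt, dyt = image_gradients(target)
    return (F.l1_loss(pred, target)
            + beta * (F.l1_loss(dxp, dxt) + F.l1_loss(dyp, dyt)))

def stage23_content_loss(pred, target):
    """Stage-2/3 content loss: mean squared (L2) difference, which
    F.mse_loss already normalizes by channels * width * height."""
    return F.mse_loss(pred, target)

def total_loss(content, adversarial, lam=0.01):
    """L_total = L_cont + lambda * L_adv."""
    return content + lam * adversarial
```

The adversarial term itself would come from the discriminator's output on generated samples, as in any standard GAN training loop.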

    4 Experiment and Result Analysis

    4.1 Experimental Environment

    To evaluate the effectiveness of the proposed algorithm, extensive experiments are conducted on a published benchmark dataset [23]. Experimental platform: high-speed visual processor (CPU i9-10900X, 3.7 GHz, 4.5 GHz Turbo, 64 GB DDR4 memory, 32-bit Windows operating system). The 3D reconstruction program is written and debugged in MATLAB, and MeshLab is used to display the final reconstruction results.

    4.2 Network Training

    The multi-view image feature extraction model comprises: an encoding network that performs feature extraction and dimension compression on the input signal, a decoding network responsible for reconstructing the original signal, and a classifier network that uses the features extracted by the encoding network for classification tasks.

    The structure of each part of auto-encoding network is as follows.

    (1)Input layer.

    Input data is 500×600 two-dimensional data.

    (2)Coding network.

    The coding network consists of 3 convolutional layers alternating with 2 max-pooling layers. It extracts features from the input data through the convolutional layers, then uses max pooling to compress the extracted features and achieve dimension reduction.

    (3)Decoding network.

    The decoding network consists of 2 convolutional layers and 2 deconvolutional layers, alternating. It decodes the features extracted by the coding network, then uses the deconvolutional layers to expand the feature map size and reconstruct the input signal.

    (4)Activation function.

    The activation function Rectified Linear Unit 6 (ReLU6) is used after each convolutional and deconvolutional layer. ReLU6 alleviates the problem that weights cannot be updated when part of the input data falls into a hard saturation region during training.

    (5)Global average pooling layer.

    To reduce the feature dimension of the coding network output, a "convolutional layer + global average pooling layer" structure follows the coding network and averages each feature map it outputs. Each feature map thus corresponds to one feature value, and these values are combined into a feature vector. Therefore, for input signals of different sizes, the feature dimension extracted by the network is fixed.
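The "convolutional layer + global average pooling layer" head can be sketched as follows; the channel counts are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class GAPFeatureHead(nn.Module):
    """Conv + global average pooling head: each output channel's feature
    map is averaged to a single value, so the feature vector length is
    fixed regardless of the input's spatial size."""
    def __init__(self, in_ch=32, feat_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, feat_dim, 3, padding=1)
        self.gap = nn.AdaptiveAvgPool2d(1)   # -> one value per channel

    def forward(self, x):
        return self.gap(self.conv(x)).flatten(1)
```

Because the pooling output is always 1×1 per channel, inputs of different sizes yield feature vectors of identical length, which is what allows one classifier to serve all input resolutions.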

    Experiments on the proposed deep learning model are conducted with the PyTorch framework. The optimizer uses a periodic learning schedule of 10, an initial learning rate of 10, a learning rate decay of 0.000001, and an L2 regularization weight decay of 0.0001. Each batch randomly selects 200 images for training. The autoencoder model is first trained in two stages on the image dataset. The first stage reconstructs the original signal and saves the network parameters. In the second stage, a convolutional layer and a global average pooling layer follow the trained encoding network and connect to the classification network for training. This stage does not change the parameters of the encoding network; only the parameters of the new convolutional layer and the classification network are updated. The classifier is removed after training is complete, so the output of the global average pooling layer is the feature extracted by the auto-encoding network and can be supplied to any classifier. Comparative experiments on the epoch and batch-size settings finally determined epochs = 10 and batch_size = 1024.
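The second training stage (frozen encoder, trainable head) can be sketched like this; all layer shapes, the class count, and the optimizer settings are illustrative placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage-2 sketch: keep the stage-1 encoder weights fixed and train only
# the newly attached "conv + global average pooling + classifier" head.
encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
head = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10))

for p in encoder.parameters():
    p.requires_grad = False            # freeze the trained encoder

# Only head parameters are handed to the optimizer, mirroring the rule
# that stage 2 never updates the encoding network.
opt = torch.optim.Adam(head.parameters(), lr=1e-3, weight_decay=1e-4)

x = torch.rand(4, 1, 32, 32)           # dummy batch of images
y = torch.randint(0, 10, (4,))         # dummy class labels
frozen = [p.clone() for p in encoder.parameters()]
loss = F.cross_entropy(head(encoder(x)), y)
opt.zero_grad()
loss.backward()
opt.step()                             # encoder weights remain unchanged
```

After this stage, the classifier layers are discarded and the pooled features are reused downstream.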

    4.3 Multi-View Image Reconstruction Experiment

    Take one viewing angle of the image dataset as an example, as shown in Fig. 4. Figs. 5 and 6 show the two reference views used to extract image feature points.

    Figure 4:Image data

    The corresponding feature points found between the two images are counted and shown in Fig. 7.

    The method in this article is used to generate a point cloud of the 3D object in a MATLAB environment, as shown in Fig. 8.

    Figure 5:Reference image 1

    Figure 6:Reference image 2

    Figure 7:Matching of different image feature points

    The running time is about 15 s, and the result is good. It can be seen that the more complex the color and shape features of the object and the smaller the areas of uniform color, the better the reconstruction effect.

    Problems encountered during matching operation:

    (1) The network algorithm is computationally heavy, and a small CPU easily crashes. When collecting images, it is necessary to control the image size or reduce the number of images while ensuring quality.

    (2) The angle between two matched images should not be too small; otherwise the camera has effectively not moved and the matching cannot be processed. It is therefore necessary to ensure a certain rotational change between images.

    Figure 8:Generate point cloud diagram

    The point cloud data generated in MATLAB is saved in PLY format and imported into MeshLab for rendering and display; the final result is shown in Fig. 9. The basic outline of the scene can be shown, but the simulation result has some problems. Because the background has relatively few features, the background part is poorly reconstructed. Strong external light causes reflections on the object surface that mask its color, so the matching effect there is poor.

    Figure 9:Meshlab displays the results

    4.4 Algorithm Evaluation Index

    To evaluate the effectiveness of the proposed algorithm, it is compared with the SIFT algorithm and a convolutional neural network (CNN). Two indicators are used: reconstruction accuracy and reconstruction completeness. The accuracy of a reconstructed surface R measures how close R is to the true surface G, and the completeness of R measures the extent to which G is covered by R. The comparison results are shown in Tab. 1; the proposed algorithm outperforms the other two. The traditional SIFT algorithm is highly sensitive to noise and easily produces mismatches. The CNN adopts a U-Net structure that, during upsampling, directly concatenates the features of the previous layer with the feature information from the corresponding downsampling, leading to feature loss. With the improved network proposed in this paper, both indicators improve markedly.
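A simple distance-threshold version of these two indicators over point clouds can be sketched in NumPy; the threshold tau is an illustrative parameter, not the paper's evaluation setting:

```python
import numpy as np

def accuracy_completeness(recon, truth, tau=0.05):
    """Point-cloud evaluation sketch: accuracy is the fraction of
    reconstructed points within distance tau of the ground-truth cloud;
    completeness is the fraction of ground-truth points within tau of
    the reconstruction. Both clouds are (N, 3) arrays."""
    def frac_within(src, dst):
        # nearest-neighbour distance from each src point to dst
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        return float((d.min(axis=1) <= tau).mean())
    return frac_within(recon, truth), frac_within(truth, recon)
```

For large clouds a k-d tree would replace the brute-force pairwise distances, but the definition of the two indicators is the same.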

    Table 1: Comparison of phase recovery of various methods

    5 Conclusions

    Aiming at the problems of inaccurate feature extraction and poor real-time performance in multi-view 3D reconstruction, an image feature extraction method based on an auto-encoding network is proposed in this paper. The algorithm redesigns the auto-encoding network in two respects: network structure and loss function. First, Fish-Net replaces U-Net, and the skip connections between network layers are redesigned to improve gradient transmission through the network. Secondly, a GAN loss function is used to preserve more edge information and structural detail of the image. Experimental results show that the reconstruction accuracy of the algorithm reaches 92.5% and the reconstruction completeness 83.6%, better than the traditional U-Net network structure. However, under strong reflective interference and weak features, the accuracy and real-time performance of this algorithm have not yet reached the best level, so the model needs further optimization and improvement in future research:

    (1) The network in this paper initializes its parameters before training. Using a pre-trained model as the encoder could generally improve the network's feature extraction ability.

    (2) The Fish-Net in this paper follows the network depth and width of U-Net, but this structure is not necessarily optimal. The next step is to explore the influence of network depth and width on the accuracy of target feature extraction.

    (3) A more appropriate attention mechanism should be introduced to improve post-reconstruction optimization results.

    (4) The simulation in this paper is carried out only on labeled public datasets. The application range of the network can be further expanded toward general feature extraction.

    Acknowledgement: The authors thank Dr. Jinxing Niu for his suggestions. The authors thank the anonymous reviewers and the editor for the instructive suggestions that significantly improved the quality of this paper.

    Funding Statement:This work is funded by Key Scientific Research Projects of Colleges and Universities in Henan Province under Grant 22A460022,and Training Plan for Young Backbone Teachers in Colleges and Universities in Henan Province under Grant 2021GGJS077.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
