
Soft Tissue Feature Tracking Based on Deep Matching Network


Siyu Lu, Shan Liu, Pengfei Hou, Bo Yang, Mingzhe Liu, Lirong Yin and Wenfeng Zheng*

1School of Automation, University of Electronic Science and Technology of China, Chengdu, 610054, China

2College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu, 610059, China

3Department of Geography and Anthropology, Louisiana State University, Baton Rouge, LA, 70803, USA

ABSTRACT Research on medical images is an important part of enabling medical robots to operate on human organs. Medical robotics lies at the intersection of multiple research fields, in which medical imaging is an important direction that has achieved fruitful results. In this paper, a method for soft tissue surface feature tracking based on a deep matching network is proposed. The method builds on a triangular matching algorithm. First, we construct a self-made sample set for training the deep matching network from the first N frames of speckle matching data obtained by the triangular matching algorithm. The deep matching network is pre-trained on the ORL face data set and then trained on the self-made training set. After training, speckle matching is carried out in the subsequent frames to obtain the speckle matching matrix between each subsequent frame and the first frame. From this matrix, the inter-frame feature matching results can be obtained, completing inter-frame speckle tracking. On this basis, the results of this method are compared with matching results based on a convolutional neural network. The experimental results show that the proposed method has higher matching accuracy; in particular, accuracy on the MNIST handwritten data set reaches more than 90%.

KEYWORDS Soft tissue; feature tracking; deep matching network

    1 Introduction

In recent years, surgical robots have been used frequently in minimally invasive surgery to reduce patient pain, reduce the surgeon's workload, improve the accuracy of surgical operations, and reduce their difficulty [1-4]. Such operations mainly monitor and treat diseases in various parts of the human body through an endoscope, which enters the body through a small channel (a natural channel or a channel opened by the doctor). Compared with traditional surgery, the position of intraoperative instruments relative to the soft tissue surface must be perceived with high accuracy, and the relatively narrow intraoperative field of view causes many difficulties [5]. Therefore, many computer-assisted techniques have been proposed to assist the operation process [6,7], and many advanced robot-assisted surgical techniques place extremely high requirements on tracking the soft tissue surface characteristics of the operated organs, for example in abnormal-brain detection from magnetic resonance images and tuberculosis detection from chest CT images [8,9]. Research on tracking the soft tissue surface makes it possible to exploit the high precision and high flexibility of surgical robots to perform precise operations on different organs and tissues of the human body [10-12]. It aids the recovery and reconstruction of operated organs and tissues, greatly reduces the danger caused by hand tremor during an operation, enhances the doctor's confidence, reduces the surgeon's fatigue, and improves the safety and effectiveness of the operation. In addition, tracking soft tissue surface features in endoscopic image sequences has very important applications in postoperative effect analysis, surgical training and teaching, and virtual-reality soft tissue 3D modeling [11,13].

The tracking problem in the medical field is a hot issue [14], and most technical routes take features as the objects to be tracked. However, problems such as low matching accuracy and slow matching of feature points in endoscopic images remain.

Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. Schoob et al. proposed non-rigid tracking using surgical stereo imaging [15]. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended by a mesh refinement step and considers texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, the computational load is reduced by concurrent processing and affine-invariant fusion of the tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman-filter-based upsampling, considering a motion model in disparity space.

The surface features of the soft tissue image are used as the tracking objects to realize tracking of the soft tissue surface [16]. The key step of the soft tissue surface feature tracking process is feature matching. The feature matching method is also applied across the different views of the same frame: at a given moment, a three-dimensional space point is mapped to two different perspective images in the left and right views, so the two image points actually correspond to the same space point. Feature matching between the left and right views therefore yields the parallax (disparity) between the two viewing angles. Combining this disparity with the intrinsic and extrinsic parameters of the stereo camera, including the focal length, gives the three-dimensional coordinates of the space point.
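To make the preceding paragraph concrete, the following is a minimal sketch of depth recovery from a rectified stereo pair; the focal length f, baseline b and principal point (cx, cy) are hypothetical placeholder values, not parameters from the paper.

import numpy as np

def triangulate(u_left, v_left, u_right, f=500.0, b=0.05, cx=160.0, cy=144.0):
    """Recover a 3D point from a matched feature pair in a rectified stereo rig.

    u_left, v_left : pixel coordinates of the feature in the left image
    u_right        : column of the matched feature in the right image
    f, b, cx, cy   : focal length (pixels), baseline (meters), principal
                     point -- all hypothetical placeholder values
    """
    d = u_left - u_right            # disparity between the two views
    if d <= 0:
        raise ValueError("non-positive disparity: match is invalid")
    z = f * b / d                   # depth from similar triangles
    x = (u_left - cx) * z / f       # back-project to camera coordinates
    y = (v_left - cy) * z / f
    return np.array([x, y, z])

# Example: a feature at (200, 120) in the left view matched at column 180 on the right
print(triangulate(200.0, 120.0, 180.0))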

Robotic automation in surgery requires precise tracking of surgical tools and mapping of deformable tissue. Previous surgical perception frameworks required significant effort in developing features for surgical tool and tissue tracking. Lu et al. [17] overcame this challenge by exploiting deep learning methods for surgical perception. They integrated deep neural networks, capable of efficient feature extraction, into the tissue tracking and surgical tool tracking processes. By leveraging transfer learning, the deep-learning-based approach requires minimal training data and reduced feature engineering effort to fully perceive a surgical scene.

Verdie et al. [18] proposed a learning-based temporally invariant feature detector (TILDE), which can reliably detect key points under severe changes in external conditions such as illumination. They proposed an effective method to generate the training set for training regressors; of the three regressors learned, the piecewise-linear regressor performed best. The authors evaluated the regressors on a new outdoor benchmark data set and showed that their performance was clearly better than the best algorithms of the time. Savinov et al. proposed Quad-networks [19]. They were the first to propose learning a feature detector from scratch, training a neural network to rank key points and then taking the key points from the top/bottom of the ranking. The workflow of the whole method is to extract random patch pairs from two images; each patch obtains a response through the neural network, then the loss is computed through a ranking-consistency function over quadruples and optimized by gradient descent. Data-driven algorithms can learn not only feature detectors, as Quad-networks do, but also feature descriptors. With the development of machine learning [20-24], Simo-Serra et al. proposed DeepDesc [25] for key point descriptor learning. This method uses a convolutional neural network to learn a discriminative representation of image blocks (patches), trains a Siamese network with paired inputs, and processes a large number of patch pairs by combining random sampling of the training set with a mining strategy for hard-to-classify patch pairs. The L2 distance is used in training and testing, and the learned 128-d descriptor is used such that its Euclidean distance reflects the similarity of patch pairs. Feature learning methods map the pixel values of image patches to description vectors through nonlinear coding; the goal is to learn these description vectors, whose similarity metric is generally chosen in relation to the ground-truth label vector. The pipelines of references [26,27] included multiple parameterized modules such as gradient computation, spatial pooling, feature normalization and dimensionality reduction. Trzcinski et al. [28] used a "weak learning" booster parameterized over gradient orientations and spatial positions; to find the optimal parameters, different types of optimization algorithms, Powell minimization, boosting and convex optimization, were used respectively.

Feature-based 3D reconstruction is the last step of soft tissue surface tracking, mainly building a visual object model with three-dimensional spatial characteristics. The key to 3D reconstruction is feature matching. In binocular stereo matching, corresponding points in space are obtained from the two-dimensional feature matching results and the camera parameters, combined with the triangulation relations of epipolar geometry; multiple feature matches then yield a three-dimensional point cloud, and finally the three-dimensional shape of the soft tissue surface is restored through triangulation. The essence of soft tissue surface reconstruction is to accurately estimate the object's three-dimensional shape: a process of converting two-dimensional images into a three-dimensional model based on feature point matching data. In [29], the authors proposed an intraoperative surface reconstruction method based on stereo endoscope images, together with a new hybrid CPU-GPU algorithm that unifies the advantages of the CPU and GPU versions. An innovative simultaneous localization and mapping algorithm was proposed in [30], which used a series of images from a stereo endoscope to reconstruct the surface deformably. The authors introduced a deformation field based on embedded deformation nodes, which can restore the three-dimensional shape from consecutive pairs of stereo images.

In this paper, we use a feature matching algorithm based on deep learning, mainly soft tissue tracking based on the deep matching network. First, we used the triangular matching algorithm to obtain a self-made data set, then used the ORL face data set to pre-train the deep matching network, and then used the self-made training set to train it. After carrying out two-class feature tracking with the deep matching network, we carried out multi-class spot tracking based on a convolutional neural network. In the multi-class convolutional neural network part, the same network architecture is used with two different pre-training sets, the MNIST handwritten data set and the CIFAR-10 data set, to study the effect of the pre-training set on retraining. Finally, experiments are carried out on the proposed algorithm, and the influence of the network structure and training data set on the experimental results is analyzed and compared. The innovations are that we used three unrelated data sets to pre-train and retrain the matching network, constructed a training data set to prepare training samples for the network, improved the deep matching network based on the Siamese network, and finally achieved good matching results.

    2 Dataset

The initialization parameters of the neural network are obtained by training on the ORL face data set [31]. The ORL face data set contains a total of 400 images of 40 different people; each person has 10 different images. The lighting, facial expressions and details differ between images, and each is a 112*92 grayscale image. The data set is shown in Fig. 1.

Figure 1: ORL face dataset
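For readers reproducing the pre-training stage, the ORL (AT&T "Olivetti") faces can be fetched through scikit-learn; note that scikit-learn redistributes the images resized to 64*64 rather than the original 112*92, so this is only an approximate stand-in.

from sklearn.datasets import fetch_olivetti_faces

# The Olivetti faces are the ORL/AT&T database: 400 grayscale images,
# 40 subjects x 10 images each. scikit-learn serves them resized to
# 64x64 with pixel values already scaled to [0, 1].
faces = fetch_olivetti_faces()
images, labels = faces.images, faces.target      # (400, 64, 64), (400,)
print(images.shape, labels.min(), labels.max())  # (400, 64, 64) 0 39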

The two data sets used for pre-training in this article are the MNIST data set and the CIFAR-10 data set [32,33]. The data sets are shown in Figs. 2 and 3.

Figure 2: MNIST dataset

First, the MNIST data set of scanned handwritten digits is introduced. NIST stands for the National Institute of Standards and Technology, the organization that originally collected these data; "M" stands for "modified". To make the data easier to use with machine learning algorithms, it was preprocessed. The MNIST data set includes scans of handwritten digits and the associated labels (describing which digit from 0-9 each image contains). It includes 60,000 training images of 28*28 pixels and 10,000 test images of 28*28 pixels. As shown in Fig. 2, these handwritten digits are standardized in size and centered in the image, and the pixel values are normalized to 0-1.

Figure 3: CIFAR-10 dataset

The CIFAR-10 data set contains 10 categories: aircraft, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 60,000 RGB color images in total, including 50,000 training images and 10,000 test images.
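Since the experiments in Section 4 run on TensorFlow 2.1, both pre-training sets can be loaded directly through tf.keras; the following is a minimal sketch of the loading and 0-1 normalization described above.

import tensorflow as tf

# MNIST: 60,000 train / 10,000 test grayscale digits, 28x28.
(x_mn_tr, y_mn_tr), (x_mn_te, y_mn_te) = tf.keras.datasets.mnist.load_data()
x_mn_tr, x_mn_te = x_mn_tr / 255.0, x_mn_te / 255.0  # normalize pixels to 0-1

# CIFAR-10: 50,000 train / 10,000 test RGB images, 32x32x3.
(x_cf_tr, y_cf_tr), (x_cf_te, y_cf_te) = tf.keras.datasets.cifar10.load_data()
x_cf_tr, x_cf_te = x_cf_tr / 255.0, x_cf_te / 255.0

print(x_mn_tr.shape, x_cf_tr.shape)  # (60000, 28, 28) (50000, 32, 32, 3)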

The medical data set used in this paper is a set of actual three-dimensional images of heart soft tissue provided by the Hamlyn Centre at Imperial College London. They are available at: https://imperialcollegelondon.app.box.com/s/kits2r3uha3fn7zkoyuiikjm1gjnyle3.

    3 Method

    3.1 Triangular Matching Algorithm

The constructed matching data set is shown in Fig. 4. The first frame is the known frame; as shown in the figure, the 25th and 30th frames are matched by our triangular matching algorithm. Because the spot detection algorithm is affected by lighting and other factors, triangular matching cannot match every spot of the first frame one by one, and there is spot loss in the subsequent frames Fi (i ≥ 1), but this does not prevent us from cropping the spots to make the data set. Even if the 32*32 crop of a certain spot is missing in a certain frame, a crop of that spot will still appear in subsequent frames.


Figure 4: Screenshots and their spots: (a) first frame, (b) frame 25, (c) frame 30

3.2 Speckle Tracking Based on Deep Matching Network

The deep matching network is mainly composed of two parts: a feature extraction network and a metric network. The feature extraction part is composed of two convolutional neural networks [22,34] with shared weights. This idea comes from the Siamese (twin) network and is well suited to the binary classification task of image matching. Each image block (patch) is input to the feature extraction network to generate a fixed-dimension SIFT-like feature. This feature is a deep feature, but unlike SIFT, where the similarity between two feature description vectors is computed by Euclidean distance, in the deep matching network the metric network is used.

The metric network consists of three fully connected layers. The last layer uses the sigmoid function (i.e., Eq. (1)) to output a score giving the similarity probability of the image blocks:

σ(x) = 1 / (1 + e^(−x))   (1)

The feature extraction network includes five convolutional layers and two downsampling layers, plus an FC layer used to reduce the dimension of the extracted features. The function of the FC layer is to reduce the feature dimension and control overfitting of the network, because fully connected layers involve many parameters: if the feature dimension is too high, the number of parameters grows large and overfitting follows easily. The 256-dimensional output of the FC layer represents the high-level features of the input image block, and each image block corresponds to a spot detected in the frame. The FC layer therefore represents the spot's deep features as "integrated" by the feature extraction network.

Because our input images are small, the convolution kernels we use are relatively small, which also helps to capture the feature information in an image block (patch) comprehensively and carefully. Padding before convolution keeps the scale of the feature map unchanged, greatly increasing the nonlinear representation capacity without losing resolution. Each convolution kernel corresponds to one feature map after convolution; different convolution kernels (with different weights and biases) yield different feature maps and thereby extract different features.

We use the ReLU (rectified linear unit) function [35] as the activation function of the convolutional layers; likewise, the activation function of the fully connected layers is also ReLU. The activation function is introduced to make the layers nonlinear. Without it, each convolutional layer is equivalent to a matrix multiplication, and the output of each layer is only a linear transformation of its input; no matter how many layers the neural network has, the composition remains a linear transformation and is of little significance. Since we want to learn the nonlinear characteristics of image blocks, adding an activation function introduces nonlinearity into the neurons, so the neural network can approximate arbitrary nonlinear functions.

The details of the deep matching network, such as parameters, convolution kernel sizes and convolution strides, are listed in Tables 1 and 2.

    Table 1: Detailed information of feature extraction network

Table 2: Detailed information of the metric network
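The layer-by-layer hyperparameters live in Tables 1 and 2 and are not reproduced here, so the following Keras sketch only mirrors the topology described above: five convolutional layers, two downsampling layers and a 256-d FC layer shared between the two branches, followed by a three-layer metric network ending in a sigmoid. All filter counts and kernel sizes are placeholder assumptions, not the paper's values.

import tensorflow as tf
from tensorflow.keras import layers, Model

def feature_extractor(patch_size=32):
    """Shared-weight branch: 5 conv layers, 2 downsampling layers, FC-256.
    Filter counts and kernel sizes are placeholders, not Tables 1-2."""
    inp = layers.Input((patch_size, patch_size, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                    # downsampling layer 1
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                    # downsampling layer 2
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)      # FC layer -> 256-d feature
    return Model(inp, x, name="feature_extractor")

def matching_network(patch_size=32):
    """Two patches through one shared extractor, then a 3-FC metric network."""
    extractor = feature_extractor(patch_size)        # weights shared by both branches
    a = layers.Input((patch_size, patch_size, 1))
    b = layers.Input((patch_size, patch_size, 1))
    merged = layers.Concatenate()([extractor(a), extractor(b)])
    h = layers.Dense(256, activation="relu")(merged)
    h = layers.Dense(128, activation="relu")(h)
    score = layers.Dense(1, activation="sigmoid")(h)  # Eq. (1): match probability
    return Model([a, b], score, name="deep_matching_network")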

Having described the network layers in detail, we now explain the training method. We select positive and negative samples from the training sample set to build each batch of training samples; the numbers of positive and negative samples in each batch are equal.

The deep matching network trains its parameters by minimizing the cross-entropy loss. The cross-entropy function [36] is:

L = −(1/n) Σ_{i=1}^{n} [y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i)]   (2)

where y_i is the 0/1 label of the i-th input image pair (1 represents a match and 0 a non-match), ŷ_i is the actual output value of the matching network, and n is the number of image pairs in each batch, with batch size = 32. We update the network weights according to the cross-entropy, then input the next set of training samples, and repeat this process until the set number of epochs is completed. To ensure that optimization moves in the right direction, the number of positive samples in each batch equals the number of negative samples. The training process is shown in Fig. 5, where M = 32, B = 256, N2 = 9.

Figure 5: Schematic diagram of the deep matching network training process
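A minimal sketch of this balanced-batch training, reusing the matching_network sketch from Section 3.2. Here pos_pairs and neg_pairs are assumed to be arrays of shape (N, 2, 32, 32, 1) produced by the sample construction of Section 4; the Adam optimizer, learning rate 0.0001, batch size 32 and 100 epochs follow the settings reported in Section 4.

import numpy as np
import tensorflow as tf

def balanced_batches(pos_pairs, neg_pairs, batch_size=32):
    """Yield batches with an equal number of positive and negative pairs."""
    half = batch_size // 2
    while True:
        p = pos_pairs[np.random.choice(len(pos_pairs), half)]
        n = neg_pairs[np.random.choice(len(neg_pairs), half)]
        pairs = np.concatenate([p, n])                    # (32, 2, 32, 32, 1)
        labels = np.concatenate([np.ones(half), np.zeros(half)])
        yield [pairs[:, 0], pairs[:, 1]], labels

model = matching_network()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",                 # the cross-entropy of Eq. (2)
              metrics=["accuracy"])
model.fit(balanced_batches(pos_pairs, neg_pairs),
          steps_per_epoch=len(pos_pairs) // 16, epochs=100)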

We calculate the matching matrix between Fi (i > 1) and the spots in F1. The image blocks corresponding to the feature points in F1 and Fi (i > 1) are paired and input to the deep matching network, which computes the similarity of the spots; the matching matrix is filled in according to the correspondence between rows and columns. Each row of the matching matrix corresponds to a spot in F1, and each column corresponds to a spot detected in Fi (i > 1). From the matching matrix, for each row (a feature point in F1) we select the column with the highest score that also exceeds a set threshold (the corresponding feature point in Fi (i > 1)) as the matching feature point, completing spot matching (tracking) between frames. If the best matching score is below the threshold, no matching spot is detected in this frame.
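A minimal sketch of this row-wise selection; score_matrix stands for the matching matrix already output by the network (rows: spots in F1, columns: spots in Fi), and the 0.5 threshold is taken from the discussion in Section 5.

import numpy as np

def match_from_score_matrix(score_matrix, threshold=0.5):
    """For each spot in the first frame (row), pick the best-scoring spot in
    the current frame (column) if its similarity exceeds the threshold.
    Returns a dict {row_index: col_index, or None if the spot is lost}."""
    matches = {}
    for r, row in enumerate(score_matrix):
        c = int(np.argmax(row))
        matches[r] = c if row[c] > threshold else None
    return matches

# Example with a 9x29 matrix as in Section 4 (random scores for illustration):
scores = np.random.rand(9, 29)
print(match_from_score_matrix(scores))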

    4 Experiments and Results

Our experiments are based on TensorFlow 2.1, Python 3.7, an NVIDIA GeForce GTX 1060ti and other platform components. The program running environment is shown in Table 3.

    Table 3: Operating environment

When constructing the sample set, we first need to extend the bottom edge of the input image. The input picture is 288*320, and the coordinates of spots detected near the bottom edge are close to the maximum y value (288) in the picture coordinate system, whose origin is in the upper left corner. The y-value index of a spot coordinate therefore risks crossing the image boundary, so before cropping patches we pad the bottom of the picture. In this article, we choose boundary pixel extension [37], which helps the feature extraction network fully extract the pixel information around the spots. The edge filling result is shown in Fig. 6.

Figure 6: Bottom edge filling: (a) original image, (b) boundary pixel expansion
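Boundary pixel extension of this kind is available in OpenCV as BORDER_REPLICATE padding. In the sketch below, padding the bottom by half a patch (16 pixels for 32*32 patches) is an assumption, since the paper does not state the padding width, and the file name is hypothetical.

import cv2

img = cv2.imread("frame_001.png")    # hypothetical 288x320 endoscope frame
pad = 16                             # half of the 32x32 patch size (assumed)
# Replicate the boundary pixels along the bottom edge only, so patches
# centered on spots near y = 288 no longer index outside the image.
padded = cv2.copyMakeBorder(img, 0, pad, 0, 0, cv2.BORDER_REPLICATE)
print(img.shape, "->", padded.shape)  # (288, 320, 3) -> (304, 320, 3)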

According to the feature point matching results of the first 100 frames, image blocks of size M*M = 32*32 are cropped centered on the position coordinates of the matched feature points, and positive and negative samples are constructed by combining them. The spots in a positive sample are corresponding vertices of two matched triangles, labeled 1; the spots in a negative sample are corresponding vertices of two mismatched triangles, labeled 0. Finally, all the positive and negative samples constitute the training sample set. For the proposed deep matching network, pre-training is used to obtain the network's initialization parameters; the deep matching network is pre-trained on the ORL face data set. The pre-training results are shown in Fig. 7 and the retraining results in Fig. 8. As Figs. 7 and 8 show, although the pre-training stage already performs well, under retraining on the self-made training set the accuracy and loss curves are smoother and converge faster. Therefore, the weight parameters obtained by pre-training are effective and speed up training progress and convergence. (A sketch of the sample construction follows Fig. 8.)

Figure 7: Accuracy and loss curves in the pre-training stage

Figure 8: Accuracy and loss curves in the retraining stage
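A minimal sketch of the sample construction described above, assuming frames is the padded image sequence, matches is a list of (frame index, spot in F1, spot in Fi) correspondences from the triangular matching of the first 100 frames, and negative pairs are drawn by randomly pairing non-corresponding spots; all names are hypothetical.

import numpy as np

M = 32  # patch size

def crop(frame, center):
    """Cut an M*M patch centered on a matched spot (frame bottom already padded)."""
    x, y = center
    return frame[y - M // 2: y + M // 2, x - M // 2: x + M // 2]

def build_pairs(frames, matches):
    """Positive pairs: corresponding vertices of matched triangles (label 1).
    Negative pairs: vertices of non-matching triangles (label 0)."""
    pos, neg = [], []
    for i, p1, pi in matches:
        pos.append(np.stack([crop(frames[0], p1), crop(frames[i], pi)]))
        # pair the F1 spot with a randomly drawn non-corresponding spot
        j, _, qj = matches[np.random.randint(len(matches))]
        if (j, qj) != (i, pi):
            neg.append(np.stack([crop(frames[0], p1), crop(frames[j], qj)]))
    return np.array(pos)[..., None], np.array(neg)[..., None]  # add channel axis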

For the traditional convolutional neural network, 32*32*3 endoscopic soft tissue speckle maps are used as input, and the matching results of the first n = 100 frames are used as the training set. Due to spot loss, a total of 750 speckle maps are used as the training set, and 180 speckle images from the subsequent 20 frames are used as the test set to train and test the convolutional neural network.

The two data sets used for pre-training are the MNIST data set and the CIFAR-10 data set. We perform pre-training on the two data sets respectively and then retrain on our self-made training set. With the same network structure but different pre-training data sets, the influence of the pre-training data set on convolutional neural network training can be compared; after the retraining stage, we can see the impact of pre-training on retraining.

In the network settings, the learning rate is 0.0001, the optimizer is the Adam optimizer with mini-batch training, batch size = 32, and the maximum number of training epochs is 100. The training results are shown in Fig. 9.

Figure 9: Loss reduction curve based on the MNIST data set

When training the convolutional neural network on the CIFAR-10 data set, the input is changed from a single-channel gray image to a 3-channel RGB image. The learning rate, optimizer and other parameter settings remain unchanged, and the training results are shown in Fig. 10.

Figure 10: Accuracy curve based on the CIFAR-10 data set

We save the weights from pre-training and use the self-made training set for retraining. The results of retraining are shown in Fig. 11.

Figure 11: Retraining loss and accuracy curves based on the CIFAR-10 data set

As mentioned above, we use three different data sets to study inter-frame speckle matching on two neural networks. The differences between the neural networks and the comparison of data sets are shown in Table 4.

Table 4: Comparison of training effects of the neural networks and pre-training data sets

In the subsequent frames, to ensure the universality of the test, an arbitrary frame is selected. In the experiment, the frame selected by the program is F148; after spot detection, 29 spots are detected. Taking the detected spot coordinates as centers, 32*32 image blocks are cropped and combined into image block pairs with the 9 spots detected in the first frame. These are input to the trained deep matching network, which outputs the similarity of each image block pair, yielding a 9*29 matching score matrix. The matching result between the first frame and the F148 spots can then be obtained from the matching score matrix, as shown in Fig. 12, and the speckle matching diagram obtained from the matching score matrix is shown in Fig. 13.

Figure 12: Detected spot map in F148

Figure 13: Speckle matching diagram obtained from the matching score matrix

Spot 1 is detected most stably in the first frame, so we track spot 1 in subsequent frames, as shown in Fig. 14, which shows the tracking of the spot with serial number 1 detected in the first frame. The horizontal axis is the frame number and the vertical axis is the pixel coordinate of the spot; the vertical axis of the left figure is the X pixel coordinate and that of the right figure is the Y pixel coordinate. The origin of pixel coordinates is in the upper left corner of the image, and the X and Y pixel coordinates are exactly opposite to the row and column indices used to access the two-dimensional image matrix.

Figure 14: Pixel coordinate tracking results for spot 1 in subsequent frames
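Because this row/column versus X/Y swap is an easy place to introduce bugs, here is a two-line NumPy illustration of the convention just described:

import numpy as np

img = np.zeros((288, 320), dtype=np.uint8)  # shape is (rows, cols) = (height, width)
x, y = 200, 120                             # pixel coordinates, origin at top-left
img[y, x] = 255                             # NumPy indexing is [row, col], i.e., [y, x]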

    5 Discussion

From Figs. 9 and 10, the accuracies on the two data sets in the pre-training stage differ. The accuracy on the MNIST handwritten data set reaches more than 90%, so the purpose of pre-training can be considered achieved. However, the accuracy on the CIFAR-10 data set is only 50%, and the accuracy on its test set even tends to decline.

As can be seen from Fig. 11, retraining shows obvious divergence and overfitting. The reason is that pre-training on this data set did not go far enough; the pre-training curve itself shows a declining trend in accuracy. Only if the pre-training sample set were large, and pre-training continued for perhaps hundreds more rounds, would its result match that of retraining. For complex feature information, the pre-training data set has a considerable impact on the subsequent retraining.

It can be seen from Figs. 9-11 that the multi-classification effect of the network based on the CIFAR-10 data set is obviously inferior to that based on the MNIST handwritten data set. This shows that the pre-training data set has a significant impact on subsequent retraining.

It can be seen from Table 4 that, in the pre-training stage, the training results of the deep matching network on the face data set are excellent. When we continue with the self-made training set, convergence is faster, the accuracy curve is steeper, and its starting point is also relatively high. This shows that the weight parameters obtained by pre-training on the grayscale face data set are effective for the soft tissue speckle training set and shorten the retraining time. The convolutional neural network also performs well on MNIST handwritten grayscale images, but it performs poorly on the RGB CIFAR-10 data set, for two reasons. First, the characteristics of MNIST handwritten digits are as simple as those of the surface spots in soft tissue images, and the grayscale information is regional, whereas the image information of cars, animals and other categories in the CIFAR-10 data set is far more complex than the characteristics of surface spots in soft tissue images. Second, the structure of the convolutional neural network itself is relatively simple and can only deal with images with simple feature information; for complex feature information such as the images in the CIFAR-10 data set, it easily overfits, resulting in non-convergence or even divergence.

As shown in Fig. 12, the spots detected in F148 can be compared with those of the first frame. Spots 5 and 6, detected in the first frame, are not detected in F148; therefore, in the matching score matrix, every similarity probability in the rows corresponding to spots 5 and 6 is below 0.5, indicating that spots 5 and 6 are not detected and their tracking fails in F148. For each remaining spot, the similarity probability in some column of its row is above 0.5 and is the largest in that column, and that column identifies the spot in F148 corresponding to the first-frame spot.

In the experiment, spot 1 is tracked from frame 121. From the pixel coordinates, the heartbeat-induced motion range is still about 30 pixels, which shows that the soft tissue feature tracking algorithm based on the deep matching network is successful.

    6 Conclusions

This paper constructs a training data set to prepare training samples for the neural network. We then improve the deep matching network based on the Siamese network to adapt it to feature extraction and measurement on soft tissue surface images. First, we pre-train on the ORL face data set to obtain good initial results and then retrain on our own data set to obtain smoother and steeper loss and accuracy curves, achieving the purpose of retraining. Furthermore, we compared spot-matching algorithms based on classification convolutional neural networks. For the convolutional neural networks, LeNet was used as the basic structure with slight modifications, pre-trained on the MNIST handwritten data set and the CIFAR-10 data set respectively, and then retrained with the self-made training set. Pre-training based on the MNIST data set performed well in the retraining stage. However, pre-training accuracy based on the CIFAR-10 data set reached only 50%, the loss in the retraining stage diverged, and the accuracy decreased significantly. It can be seen that the data set has a significant impact on the results of pre-training and retraining. Therefore, in follow-up research, we can select more data sets to train the deep matching network for better performance.

Acknowledgement: We thank all the authors for their contributions to the paper.

Funding Statement: This work was jointly supported by the Sichuan Science and Technology Program (Grant: 2021YFQ0003; acquired by Wenfeng Zheng).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
