
    Deep Learning in DXA Image Segmentation

    Computers, Materials & Continua, 2021, Issue 3

    Dildar Hussain, Rizwan Ali Naqvi, Woong-Kee Loh and Jooyoung Lee*

    1 School of Computational Science, Korea Institute for Advanced Study (KIAS), 85 Hoegiro, Dongdaemun-gu, Seoul 02455, South Korea

    2 Department of Unmanned Vehicle Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, South Korea

    3 Department of Software, Gachon University, Seongnam 13120, South Korea

    Abstract: Many existing techniques to acquire dual-energy X-ray absorptiometry (DXA) images are unable to accurately distinguish between bone and soft tissue. For the most part, this failure stems from bone shape variability, noise and low contrast in DXA images, inconsistent X-ray beam penetration producing shadowing effects, and person-to-person variations. This work explores the feasibility of using state-of-the-art deep learning semantic segmentation models, fully convolutional networks (FCNs), SegNet, and U-Net, to distinguish femur bone from soft tissue. We investigated the performance of the deep learning algorithms with reference to some of our previously applied conventional image segmentation techniques (i.e., a decision-tree-based method using a pixel label decision tree [PLDT] and another method using Otsu’s thresholding) for femur DXA images, and we measured accuracy based on the average Jaccard index, sensitivity, and specificity. Deep learning models using SegNet, U-Net, and an FCN achieved average segmentation accuracies of 95.8%, 95.1%, and 97.6%, respectively, compared to PLDT (91.4%) and Otsu’s thresholding (72.6%). Thus, we conclude that an FCN outperforms the other deep learning and conventional techniques when segmenting femur bone from soft tissue in DXA images. Accurate femur segmentation improves bone mineral density computation, which in turn enhances the diagnosis of osteoporosis.

    Keywords: Segmentation; deep learning; osteoporosis; dual-energy X-ray absorptiometry

    1 Introduction

    Osteoporosis, as a severe degrader of bone health and the leading cause of hip fracture in developed countries, places a grave burden on health budgets. The repercussions of this disease are the same for women and men, and it can affect them at any stage of life. More than 20% of patients die from fractures caused by osteoporosis [1]. The medical imaging technology known as dual-energy X-ray absorptiometry (DXA) can adequately diagnose this disease and is currently considered the gold standard for such diagnoses [2-4]. Quantitative computed tomography exists as an alternative method, but it requires a larger X-ray dose and is more expensive.

    Reliable osteoporosis analysis depends on accurate computation of bone mineral density (BMD). Precise segmentation of bone and soft tissue, in turn, underpins careful DXA image analysis and accurate BMD calculation. Conversely, imprecise bone and soft tissue segmentation severely affects BMD calculation and the subsequent analysis of DXA images [3,4]. Unfortunately, erroneous bone region identification is a widespread problem in femur DXA images for several reasons. First, DXA images use low-dose X-rays, which makes them prone to noise. Second, the overlap of the hip muscles and the pelvic bone over the femur head generates different intensities for the various regions and the femur in the image. Third, the irregular attenuation of X-rays through the human body creates a negative shadow, which produces dark regions in the images. Other factors influencing segmentation include luminous intensity, scanning orientation, person-to-person variations, and image resolution [5].

    Despite the numerous techniques applied so far, accurate automatic segmentation in DXA imaging remains a challenge [6-13]. Manual segmentation is time-consuming, requires the involvement of an expert, and is untenable when analyzing a fairly large population [7,9,10]. Edge-detection-based image segmentation methods are unreliable because of the difficulty of integrating tiny boundary fragments into broad object edges in an image [7]. The calibration procedure for specific DXA imaging devices and the diversity present in DXA data make it challenging to specify an acceptable threshold value in threshold-based segmentation techniques, again creating a less-than-ideal situation for segmentation in DXA imaging [7,8]. Active appearance models and active shape models are also often used in DXA image segmentation [11,13]. The convergence of landmark points with an active shape model determines the ultimate location of an object. However, the active shape model sometimes converges on inaccurate boundaries for an object owing to variations between patients. It is often challenging to describe the Mahalanobis distance in an active shape model with a sparse covariance matrix. Meanwhile, an active appearance model uses a Gaussian space to match an object’s shape through a statistical model, but assumptions made with Gaussian space matching often fail because of bone structure variations, particularly in patients with osteophytes [14]. The abovementioned techniques require initialization of the model close to the segmentation subject. Furthermore, the structural ambiguity produced by the partial-volume effect makes other segmentation techniques, such as region growing, unsuitable for femur segmentation in DXA imaging [15]. Segmentation of X-ray images using a watershed algorithm habitually results in over-segmentation [16]. Because of these intrinsic constraints, the current methods are inappropriate for the segmentation of DXA images, and there remains a pressing need for an automatic DXA image segmentation approach with higher accuracy.

    In recent years, deep learning has been widely applied to the analysis of biological and biomedical images, for example in the detection of cancerous and mitotic cells, skin lesion classification, neural membrane segmentation, immune cell detection, the segmentation of masses in mammograms, and so on [17-26]. The development of deep convolutional neural networks (CNNs) and the recent maturation of their use have led to fully convolutional networks (FCNs) [27] being effectively applied to nodule detection and segmentation in biomedical imaging systems such as computed tomography and magnetic resonance imaging [28-32]. Despite its use for image inspection in other medical imaging systems, deep learning has not yet been applied to DXA imaging.

    In this work, our purpose is to investigate the feasibility of deep learning approaches for accurate segmentation in DXA imaging. We introduce and evaluate the performance of state-of-the-art deep learning approaches for multiclass pixel-wise semantic segmentation, namely SegNet, U-Net, and an FCN, for femur segmentation from DXA images. To the best of our knowledge, this is the first deep-learning-based effort on DXA image segmentation. We compare the results of this study to one of our previous works on femur segmentation from DXA images using a machine-learning-based approach called a pixel label decision tree, reported in [33] and published in the Journal of X-Ray Science and Technology.

    We conducted a detailed investigation to assess the capability of recent deep CNN approaches for femur segmentation from DXA images. The best results were achieved with a deep FCN-based architecture. This technique takes advantage of the fact that a DXA scan produces low-energy (LE) and high-energy (HE) images that are then merged to create high-contrast images. Various algorithms can be used with combinations of HE and LE images to generate high-contrast results [33-37]. These DXA images are saved as portable network graphics (PNG) files.

    The main intention of this study is to identify a highly accurate solution for femur segmentation from DXA images. Improved DXA image segmentation supports accurate BMD calculation, and better BMD calculation can strengthen the diagnostic capability of DXA machines. In addition, improved automatic DXA image segmentation will increase the use of DXA imaging devices. Through this study, we show how convolutional networks can be trained end to end, pixel to pixel, on a small clinical dataset using transfer learning for semantic segmentation. Our key insight is to introduce a convolutional network with efficient inference and learning on a small sample of DXA data.

    The rest of the manuscript is organized as follows. Section 2 covers the segmentation model. Section 3 presents the proposed model’s results and a discussion of the findings. Finally, Section 4 presents our concluding remarks and possibilities for future work.

    2 Methods

    Femur data are acquired from DXA scanning as LE and HE images. The X-ray images are captured by the receptor and maintained in digital records for further processing. These two images undergo a process that yields high-contrast electronic display images.

    2.1 Display Image Generation

    In medical image processing and computer vision, recent techniques based on deep neural networks have achieved state-of-the-art and impressive outcomes on large-scale datasets with millions of images belonging to different categories. Despite these impressive achievements, deep neural networks have been shown to be sensitive to image quality [38]. Thus, image quality is an important consideration in these approaches. Improving the visual quality of an image through measures such as contrast enhancement and noise reduction has had an impressive effect on image classification and segmentation [39,40]. We acquired DXA data in the form of LE and HE images and combined them to form high-contrast display images. In this study, we consider a high-contrast bone mineral density image (BMDI) generated from a DXA scan and its effect on deep learning segmentation results. These high-contrast BMDIs can be generated from DXA scans as follows:
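    Equations (1) and (2) were not preserved in this version of the text. As a sketch only, a standard dual-energy formulation consistent with the symbol definitions below would be

    $$\mathrm{BMDI}_i = C\left[\ln\!\left(\frac{LE_0}{LE_i}\right) - R_{st}\,\ln\!\left(\frac{HE_0}{HE_i}\right)\right] + B \qquad (1)$$

    $$R_{st} = \frac{u_l}{u_h} \qquad (2)$$

    but the exact form and constants used by the authors could not be recovered here.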

    where u_l and u_h are the constant attenuation values for the LE and HE X-rays, respectively. LE_0 and HE_0 are the incident energies from the X-ray source, and LE_i and HE_i are the detector counts at a particular scanning position (i.e., the image pixel). The BMD value of the soft tissue region is always lower than that of the bone region; therefore, Eq. (1) produces an image with brighter bone and darker soft tissue. The R_st value (as shown in Eq. (2)) is calculated first and then used to generate the BMDI in Eq. (1). C and B denote image contrast enhancement and brightness, respectively; we used constant values of C and B determined from our experiments. We normalized the image intensities to the range 0 to 255, and the final image was exported as a portable network graphics file to be used in the deep learning models.

    2.2 Data Augmentation and Transfer Learning

    A huge dataset is required to train deep learning networks appropriately. The limited availability of medical datasets is one of the most challenging problems for a deep learning approach. To meet the large data requirements of deep neural network training, a small dataset can be enlarged using data augmentation [41,42]. This is the most familiar and straightforward method for reducing overfitting in deep neural networks.

    Therefore, in this study, we applied the data augmentation process only to the training data (i.e., 80% of the 900 images) as follows. We randomly selected a set of femur images from the training dataset and applied image translations and horizontal and vertical reflections, together with the corresponding labeled ground truth images, to increase the data size. We extracted random 192 × 96 patches from the 384 × 192 images, applied horizontal and vertical flips, and then scaled the patches back to 384 × 192 using linear interpolation, as sketched below. The augmentation process increased the size of our available dataset up to 2,500 images. The translation process produced black pixel gaps in an image, which were filled with the air class. Thus, a total of 2,000 femur images with their corresponding ground truth labels were used to train our proposed segmentation methods.
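    As an illustration only (not the authors' code), the following Python sketch shows one way the described cropping, flipping, and rescaling could be implemented; the use of NumPy and OpenCV, the function name, and nearest-neighbor resizing of the label map are assumptions.

    import cv2
    import numpy as np

    def augment_pair(image, label, rng=np.random):
        # Random 192 x 96 patch from a 384 x 192 femur image and its label map.
        ph, pw = 192, 96
        y = rng.randint(0, image.shape[0] - ph + 1)
        x = rng.randint(0, image.shape[1] - pw + 1)
        img, lbl = image[y:y + ph, x:x + pw], label[y:y + ph, x:x + pw]
        # Random horizontal and vertical reflections, applied to image and label together.
        if rng.rand() < 0.5:
            img, lbl = img[:, ::-1], lbl[:, ::-1]
        if rng.rand() < 0.5:
            img, lbl = img[::-1, :], lbl[::-1, :]
        # Scale back to the 384 x 192 network input size (cv2.resize takes width, height).
        img = cv2.resize(img, (192, 384), interpolation=cv2.INTER_LINEAR)
        lbl = cv2.resize(lbl, (192, 384), interpolation=cv2.INTER_NEAREST)
        return img, lbl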

    Furthermore, we followed the transfer learning idea to improve the training of our proposed deep learning models. We employed the weights of the Visual Geometry Group 16 (VGG-16) network pre-trained on the large-scale ImageNet dataset [17]. We then fine-tuned our networks using the augmented training data. A separate validation dataset (i.e., 20% of the original training dataset) was used to optimize the proposed segmentation models. All femur images were resized to 384 × 192 pixels using bilinear interpolation [13].
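    A minimal sketch of this setup, assuming the keras.applications interface and OpenCV for resizing (the original implementation used Keras with a Theano backend, so the exact API may differ):

    import cv2
    from keras.applications import VGG16

    # Encoder weights transferred from VGG-16 pre-trained on ImageNet.
    vgg_encoder = VGG16(weights="imagenet", include_top=False,
                        input_shape=(384, 192, 3))

    def preprocess(img):
        # Resize a femur image to 384 x 192 pixels with bilinear interpolation.
        return cv2.resize(img, (192, 384), interpolation=cv2.INTER_LINEAR)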

    2.3 Segmentation Models

    An overview of DXA image analysis using deep learning methods is given in Fig. 1. In this study, we apply U-Net, SegNet, and FCN methods to segment the femur in a DXA image. An FCN estimates a dense output from input data of arbitrary size. In this manner, both “learning and inference are carried out on the entire image at a time by dense feedforward computation and backpropagation” [43]. CNN-based deep learning models usually end with fully connected layers; the name fully convolutional in the FCN model refers to the use of convolutional layers without any fully connected layers. We used a sigmoid activation function in the output layer of the proposed deep learning segmentation networks to classify each pixel in a femur image into one of three classes (i.e., bone, soft tissue, and air). More details about the U-Net, SegNet, and FCN techniques are given in [43-45]. The Adadelta optimization method was used to train all the segmentation models with a batch size of 25. The initial learning rate was set to 0.2 and was automatically reduced during training over 200 epochs. A sketch of this configuration is shown below.
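    A minimal, self-contained Keras sketch of this training configuration, using a simple placeholder upsampling head on a VGG-16 encoder rather than the exact U-Net, SegNet, or FCN architectures used in the paper; in older Keras versions the Adadelta argument is named lr rather than learning_rate, and the plain categorical cross-entropy shown here stands in for the weighted loss defined next.

    from keras.applications import VGG16
    from keras.layers import Conv2D, UpSampling2D
    from keras.models import Model
    from keras.optimizers import Adadelta
    from keras.callbacks import ReduceLROnPlateau

    # Placeholder network: VGG-16 encoder plus a minimal upsampling head that
    # predicts 3 classes (bone, soft tissue, air) for every pixel.
    encoder = VGG16(weights="imagenet", include_top=False, input_shape=(384, 192, 3))
    x = encoder.output
    for _ in range(5):                      # VGG-16 downsamples by a factor of 2**5
        x = UpSampling2D()(x)
        x = Conv2D(64, 3, padding="same", activation="relu")(x)
    output = Conv2D(3, 1, activation="sigmoid")(x)   # sigmoid output layer, as reported above
    model = Model(encoder.input, output)

    # Training settings reported above: Adadelta, initial learning rate 0.2,
    # batch size 25, 200 epochs, with the learning rate reduced automatically.
    model.compile(optimizer=Adadelta(learning_rate=0.2),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, validation_data=(x_val, y_val),
    #           batch_size=25, epochs=200,
    #           callbacks=[ReduceLROnPlateau(monitor="val_loss")])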

    Figure 1: Overview of dual-energy X-ray absorptiometry image analysis using deep learning

    We used a weighted cross-entropy loss, minimizing the overall loss H throughout the training stage as follows:
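    The loss equation itself is missing from this version of the text; a standard weighted cross-entropy consistent with the symbols defined below would be

    $$H = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} w_c\, y_{i,c} \log \hat{y}_{i,c}$$

    where the per-class weights w_c and the pixel index i (over N pixels) are assumed notation not defined in the surviving text.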

    where y denotes the ground truth labels, ŷ represents the predicted segmentation map, c indexes the classes, and M is the number of classes (bone, soft tissue, and air). This work was implemented in Python on the Ubuntu 18.04 operating system using the Keras library with the Theano backend.

    2.4 Post-Processing

    No post-processing was performed for the deep learning models. In contrast, conventional semantic segmentation techniques always require a boundary smoothing filter to smooth the femur bone boundaries labeled by the segmentation model and to remove imperfections. In our previous study, we used a binary smoothing filter to remove such imperfections from the segmented DXA images. “Binary smoothing removes small-scale noise in the shape while maintaining large-scale features” [33]. For more details about binary smoothing, see our previous work [33].

    2.5 Evaluation and Performance Analysis

    2.5.1 Evaluation Metrics

    The difference between the model-based predictions and the ground truth annotations was quantified by the numbers of TN (true negatives), TP (true positives), FN (false negatives), and FP (false positives), where n is the total number of observations such that n = TN + TP + FN + FP. The accuracy of each model was calculated according to the following procedures.

    The Jaccard index (JI), or intersection over union (IOU), estimates the agreement between the segmented object and the ground truth:
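    The equation is not preserved here; in terms of the confusion counts defined above, the standard form is

    $$JI = \frac{TP}{TP + FP + FN}$$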

    where “TP is the object area (correctly classified) common between segmented image and ground truth. FP and FN are the numbers of bone and soft tissue pixels wrongly classified between two classes (bone and soft tissue)” [33].

    Sensitivity, “also known as the positive prediction rate (TPR), measures the proportion of positive pixels identified accurately” [33]. “In a sensitivity test, the number of correctly classified bone tissue pixels in the femur DXA image is compared to the ground truth” [33]:
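    The formula is not preserved here; the standard form implied by the definition below is

    $$\mathrm{Sensitivity} = \frac{TP}{GT_b} = \frac{TP}{TP + FN}$$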

    where “TP is the total number of correctly classified pixels representing bone, and GT_b is the ground truth in bone pixels” [33].

    Specificity, “also known as the true-negative prediction rate (TNR), measures the proportion of negative pixels accurately identified” [33]. “In a specificity test, the number of correctly classified soft tissue pixels in a femur DXA image is compared to ground truth” [33]:
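    Again, the formula is not preserved; the standard form implied by the definition below is

    $$\mathrm{Specificity} = \frac{TN}{GT_t} = \frac{TN}{TN + FP}$$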

    where “TN is the total number of pixels correctly classified as soft tissue and GT_t is the ground truth in rejection of the bone pixels” [33].

    The false-positive prediction rate (FPR) is the measure of soft tissue pixels wrongly classified as bone:
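    Its standard form, consistent with the definition above, is

    $$FPR = \frac{FP}{FP + TN}$$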

    Meanwhile, the false-negative prediction rate (FNR) is the measure of bone pixels wrongly classified as soft tissue:
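    Likewise, its standard form is

    $$FNR = \frac{FN}{FN + TP}$$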

    2.5.2 Model Segmentation Accuracy

    “The test performance of each method per image was calculated by comparing the segmentation output of a femur object to the ground truth” [33]. “We used sensitivity, specificity, and IOU tests to measure the accuracy of an individual image” [33]. “A segmentation method was considered to have failed to segment a femur object in a test image correctly if IOU < 0.92, sensitivity < 95%, or specificity < 93%” [33]. “The final accuracy of the model was calculated by comparing the number of accurately segmented images out of the total number of test images” [33]:
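    The accuracy equation is not preserved here; a form consistent with the description and thresholds above would be

    $$\mathrm{Accuracy} = \frac{\#\{\text{test images with } JI \ge 0.92,\ \varphi \ge 95\%,\ \upsilon \ge 93\%\}}{\text{total number of test images}} \times 100\%$$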

    In the above equation, JI is the Jaccard index, φ is the image segmentation sensitivity, and υ is the image segmentation specificity. The 2,500 (original + augmented) DXA images were divided for the experiments as follows: 60% were used for training, 20% were used for validation to optimize the parameters of all the segmentation models, and the remaining 20% were used for independent testing.

    2.5.3 Comparison with Conventional Techniques

    Previously, we used conventional techniques (i.e., Otsu’s thresholding and a pixel label decision tree) with handcrafted features to segment the femur in DXA images. Thus, we compared the results of the current deep-learning-based segmentation to our previous work; for more details, see [33].

    3 Results and Discussion

    We used the same dataset as in our previous study, “Femur Segmentation in DXA Imaging Using a Machine Learning Decision Tree” [33], with some additional femur images acquired from a DXA scanner (OsteoPro MAX, YOZMA B.M.Tech Co., Ltd., Republic of Korea). Radiology experts manually segmented the femur images to serve as the ground truth. The manual annotations were extracted from the DXA system in the portable network graphics file format along with the high-contrast images to train and test our deep learning models. Every pixel in a femur image was annotated and assigned a class label (i.e., bone, soft tissue, or air).

    This section presents the performance of the U-Net, SegNet, and FCN approaches on the test data (i.e., 500 femur images). Tab. 1 shows the segmentation performance in terms of the average accuracy computed using the JI, sensitivity, and specificity over all the test images. Example outputs of segmented femur DXA images from the different segmentation methods are shown in Fig. 2, and predicted segmentation contours from the different models are shown in Fig. 3.

    Figure 2: Dual-energy X-ray absorptiometry image segmentation with different models: (a) a femur image, (b) a femur image segmented by SegNet, (c) a femur image segmented by U-Net, (d) a femur image segmented by a fully convolutional network, (e) a femur image segmented by a pixel label decision tree, and (f) a femur image segmented by Otsu’s thresholding

    Figure 3: Predicted femur boundaries with SegNet, U-Net, and a fully convolutional network. The red contours represent ground truths, the yellow contours represent SegNet, the blue contours represent U-Net, and the green contours represent the fully convolutional network

    The FCN segmented femur images with significantly higher accuracy (i.e., 98.5%) than the other segmentation models. Optimal results were produced with the FCN when it was fine-tuned on the DXA data. We segmented the 500 test femur images with the FCN and the other deep learning models. Each deep learning model substantially outperformed the conventional models in high-contrast femur sections (the femur head and shaft) as well as in the most challenging areas (e.g., the greater and lesser trochanters). The data were collected on multiple devices, and the models were robust to this diversity.

    We performed a BMD consistency check on model-segmented images in comparison with manually segmented ones. First, we randomly selected 100 femur images and asked three experts to manually segment the femur, select regions of interest, and calculate the BMD at three different regions, namely the femur neck, Ward’s triangle, and the greater trochanter. Second, the average of the three expert readings at each of the three femur regions was recorded for each image. The estimates were then compared with the model-based segmentations to check the consistency of each model. Finally, we carried out a statistical correlation study (by calculating the coefficient of determination, R²) of the BMD measurements between the different segmentation methods and manual segmentation, as sketched below. The FCN segmentation method showed the highest correlation, as shown in Tab. 2.
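    As an illustration only (not the authors' code), the coefficient of determination between manual and model-derived BMD values could be computed as follows; the function and variable names are assumptions.

    from scipy.stats import linregress

    def bmd_r_squared(manual_bmd, model_bmd):
        # R^2 between averaged expert BMD readings and BMD values derived from
        # one model's segmentations, for the same images and femur regions.
        slope, intercept, r_value, p_value, std_err = linregress(manual_bmd, model_bmd)
        return r_value ** 2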

    Table 2: Segmentation performance of different methods on the test dataset

    The results demonstrate that the FCN method yields higher sensitivity, specificity, and accuracy than the other models. The FCN model provides a practical and powerful technique for the segmentation problem in DXA imaging. Although a CNN model is considered an innovative segmentation method, it requires an extensive amount of training data. We therefore followed the transfer learning idea to raise the training capability of the deep learning models on a small number of femur DXA images, employing the weights of the Visual Geometry Group 16 (VGG-16) network pre-trained on the large-scale ImageNet dataset. Compared with our previous study on femur segmentation [33], the current deep learning models performed better than the previously applied models.

    A well-known problem in deep learning is that it is hard to train a CNN-based network with limited data without suitable optimization techniques and data augmentation. For this reason, appropriate optimization methods, data augmentation, and transfer learning can assist in training a reliable segmentation network. Transfer learning fine-tunes a deep network that has been pre-trained on medical or general images. Alongside data augmentation, transfer learning is a distinct additional solution for networks with many parameters. To address this issue, we successfully applied transfer learning to DXA images using a network pre-trained on another dataset (i.e., ImageNet).

    In the correlation analysis of femur BMD measurements, we found a significantly high correlation (R² = 0.962) between the measurements from FCN-segmented femur images and those from expert-segmented ones. Further investigations may identify the usefulness of the FCN and other deep learning models in the clinical diagnosis of osteoporosis and the prediction of fracture risk. All deep-learning-based models performed better than the previously applied techniques. This study has demonstrated that convolutional networks can be used effectively, with a high level of performance, on a small clinical dataset through transfer learning in semantic segmentation.

    4 Conclusion

    We presented a deep-learning-based technique for femur segmentation in DXA imaging, focusing on improving segmentation accuracy. The predictive performance and efficiency of the FCN on femur data were excellent. The practical use of the FCN method in DXA image segmentation has the potential to enhance the validity of BMD analysis and the clinical diagnosis of osteoporosis.

    Our results demonstrate that the FCN model can be used for DXA image segmentation, since it performs well on femur DXA images with proper tuning. One limitation of the deep learning approach is that these models suffer from poor generalization when the input data come from different DXA machines (due to differences in acquisition parameters, system models, system calibration, etc.). Our next research step will focus on this issue.

    Acknowledgment: Special thanks to YOZMA B.M.Tech Co., Ltd., Republic of Korea, and its employees for providing data for this research.

    Funding Statement: This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT [NRF-2017R1E1A1A01077717].

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
