
    Breast Lesions Detection and Classification via YOLO-Based Fusion Models

Computers, Materials & Continua, 2021, Issue 10

Asma Baccouche, Begonya Garcia-Zapirain, Cristian Castillo Olea and Adel S. Elmaghraby

1Department of Computer Science and Engineering, University of Louisville, Louisville, KY 40292, USA

2eVida Research Group, University of Deusto, Bilbao, 4800, Spain

Abstract: With recent breakthroughs in artificial intelligence, the use of deep learning models has achieved remarkable advances in computer vision, e-commerce, cybersecurity, and healthcare. In particular, numerous applications have provided efficient solutions to assist radiologists in medical imaging analysis. For instance, automatic lesion detection and classification in mammograms is still considered a crucial task that requires accurate diagnosis and precise analysis of abnormal lesions. In this paper, we propose an end-to-end system, based on the You-Only-Look-Once (YOLO) model, to simultaneously localize and classify suspicious breast lesions from entire mammograms. The proposed system first preprocesses the raw images, then recognizes abnormal regions as breast lesions and determines their pathology classification as either mass or calcification. We evaluated the model on two publicly available datasets, with 2907 mammograms from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and 235 mammograms from the INbreast database. We also used a privately collected dataset with 487 mammograms. Furthermore, we suggest a fusion models approach to report more precise detection and more accurate classification. Our best results reached detection accuracy rates of 95.7%, 98.1% and 98% for mass lesions and 74.4%, 71.8% and 73.2% for calcification lesions, respectively, on CBIS-DDSM, INbreast and the private dataset.

Keywords: Breast cancer; detection; classification; YOLO; deep learning; fusion

    1 Introduction

Breast cancer is the most common type of cancer affecting women worldwide. Over 279,000 cases were reported in the United States in 2020, with a 15% death rate relative to other types of cancer [1]. Early detection of breast cancer has become essential to reduce the high mortality rate among women; thus, diagnostic systems have been studied to assist radiologists with more precise analysis [2–5]. Mammography screening has been recognized as the most effective tool to reveal abnormalities in the breast tissue, where the most important findings are breast masses and calcifications that may indicate the presence of cancer [6]. To inspect for potential lesions, radiology experts have to read and evaluate the daily screening mammograms, which is very challenging due to the high cost and the errors that may occur because abnormalities vary in location, texture, shape and size [7].

Recently, deep learning technology has been widely adopted in the medical field to support physicians, given the huge number of patients and the urgent need to improve the accuracy of pathology diagnosis in breast lesions detection and classification [8,9]. Accordingly, many computer-aided diagnosis (CAD) systems and similar automatic processes have been developed using deep learning methodologies to provide fast and precise solutions in medical image detection and classification [10–12]. Conventional systems relied on extracting hand-crafted, low-level features to localize and classify potential regions using simple image processing and machine learning techniques [13–15]. Over time, these solutions proved inaccurate and resulted in a high false positive rate, and thus have been substituted with novel deep learning approaches [16,17].

With the increasing number of breast mammograms and the enhanced computational capacity of computers, different deep learning models have been widely implemented to offer a better alternative. They aim to automatically extract deep, high-level features directly from raw images without requiring domain knowledge [18]. This helped improve the results of automated systems and maintain a good tradeoff between the precision of lesion detection and the accuracy of distinguishing between different types of lesions in a simple mammogram [19–22]. Deep learning models have the ability to extract deep, multi-scale features and combine them to assist experts in making the final decision. Accordingly, their ability to adapt to different cases has been proved for object detection and classification tasks in many applications [23–26]. This resulted in many state-of-the-art models that achieved outstanding success on natural and medical images. These models evolved from simple Convolutional Neural Networks (CNNs) into variations such as R-CNN, Fast R-CNN and Faster R-CNN models [27–29]. These popular models have overcome many limitations of deep learning such as computational time, redundancy, overfitting and parameter size. However, training and implementing most of these models is often time-consuming and requires high computational memory. Therefore, another variation called You-Only-Look-Once (YOLO), characterized by low memory dependence, has been recognized as a fast object detection model suitable for CAD systems [30–36].

In this study, we propose an end-to-end system based on a YOLO model to simultaneously detect and classify breast lesions as mass tumors or calcifications. Our approach contributes a new feature: an end-to-end system that can recognize both types of suspicious lesions, whether only one type exists in an image or both appear simultaneously in the same image. Given the choice of the YOLO model stated earlier, this implementation will also serve as a base for future tasks toward a complete breast cancer diagnostic framework (e.g., lesion segmentation, malignancy prediction, etc.). The performance of this prerequisite step was proved on different mammography datasets using deep learning methodologies (i.e., data augmentation, early stopping, hyperparameters tuning and transfer learning). An additional contribution is presented in this paper to boost the lesions detection and classification performance, as follows. Since performance varies according to the input data of the model, single evaluation results were first reported over the variations of images; then different fusion models were developed to increase the final detection accuracy rate and join models with different settings. This helps keep the best detected bounding boxes and remove the bad predictions that could mislead future diagnostic tasks. The proposed methodology was performed on the two most widely used datasets, CBIS-DDSM and INbreast, and also on an independent private dataset. The outcome of this work justifies the performance of the YOLO-based model for deep learning lesion detection and classification on mammography. Furthermore, it serves as a comparative study of YOLO-based model performance using different mammograms.

The rest of the paper is organized as follows. First, the literature review of breast lesion detection and classification using deep learning is introduced in Section 2. In Section 3, details of our methodology are presented, including a description of the YOLO-based model architecture and the suggested fusion models approach, followed by details about the breast cancer datasets used and the preprocessing techniques. Then, in Section 4, we discuss the hyperparameters tuning applied for training the model, and present experimental results that are compared with other works. We conclude the paper in Section 5 with a discussion of our proposed methodology and future works.

    2 Literature Review

Since the development of machine learning technology, much attention has been given to adopting deep learning to solve complex problems, particularly in the fields of computer vision, image recognition, object detection [17–19] and segmentation [30–35]. Many studies showed that traditional techniques failed to provide highly accurate models due to the limitations of hand-crafted features extracted from raw images. Indeed, traditional CAD systems proposed for breast lesions detection and classification could not overcome the huge variations in lesion size and texture, compared to deep learning methods [36–38]. Therefore, numerous CAD systems were successfully developed using deep learning architectures to improve the detection and classification of organ lesions such as liver lesions, lung nodules and particularly breast lesions [39,40].

Researchers have demonstrated the feasibility of region-based models to build an end-to-end system for detecting and classifying malignant and benign tumors in the INbreast mammograms, achieving a detection rate of 89.4% [41]. The same idea was also presented in a recent work by Peng et al. [42], which introduced an automated mass detection approach integrating the Faster R-CNN model with a multiscale-feature pyramid network. The method yielded a true positive rate of 0.93 on CBIS-DDSM and 0.95 on the INbreast dataset.

Accordingly, Al-Antari et al. [43] employed the YOLO model for breast mass detection and reported a detection accuracy of 98.96%. That output then served for mass segmentation and recognition in order to provide a fully integrated CAD system for digital X-ray mammograms. Another work by Al-Antari et al. [44] in 2020 improved the results of breast lesions detection and classification by first adopting the YOLO model for detection and then comparing a feedforward CNN, ResNet-50, and InceptionResNet-V2 for classification. Similarly, Al-masni et al. [45] proposed a CAD system framework that first detected breast masses using the YOLO model with an overall accuracy of 99.7%, and then classified them into malignant and benign using fully connected neural networks (FC-NNs) with an overall accuracy of 97%.

Deep convolutional neural networks (DCNNs) were also suggested for mammographic mass detection using a transfer learning strategy from natural images [46]. In 2018, a work presented by Ribli et al. [47] proposed a CAD system based on the Faster R-CNN framework to detect and classify malignant and benign lesions, obtaining an AUC score of 0.95 on the INbreast dataset. Another work employed a fully convolutional network (FCN) with adversarial learning in an unsupervised fashion to align different domains while conducting mass detection in mammograms [48].

Since breast tumor detection is a crucial step that remains a challenge for CAD systems, many reliable models were used to support this automatic diagnosis. For example, Singh et al. relied on the Single Shot Detector (SSD) model to localize tumors in mammograms, and then extracted output boxes to apply segmentation and classification tasks [49]. This yielded a true positive rate of 0.97 on the INbreast dataset. Other recent studies proposed using the YOLO model to achieve a better performance in detecting bounding boxes surrounding breast tumors. For example, Al-masni et al. [50] presented a YOLO-based CAD system that achieved an overall accuracy of 85.52% on the DDSM dataset.

The tumor localization task was also conducted in a detection framework for cancer metastasis using a patch-based classification stage and a heatmap-based post-processing stage [51]. This achieved a score of 0.7051 and served for whole slide image classification. Breast tumor detection was also addressed in 2016 by Akselrod-Ballin et al. [52], where images were divided into overlapping patches and fed into a cascaded R-CNN model to first detect masses and then classify them as malignant or benign. In 2015, a work presented by Dhungel et al. [53] relied on a multi-scale Deep Belief Network (DBN) to first extract all suspicious regions from entire mammograms and then filter out the best regions using Random Forest (RF). This technique achieved a true positive rate of 96%. In 2017, a work presented by Akselrod-Ballin et al. [54] developed a three-stage cascade of the Faster R-CNN model to detect and classify abnormal regions in mammograms. Their overall detection and classification accuracy reached 72% and 77% on the INbreast dataset.

Most of these reviewed works and their diagnosis results show how artificial intelligence has successfully contributed to solving the challenge of breast cancer detection. However, practical implementation and system evaluation, along with the high complexity in memory and time, remain problems to investigate. The majority of these works tackled the problem of detecting only mass tumors in the entire breast and then classifying them into malignant and benign. Our approach was developed differently, to address the task of detection and classification of two types of breast lesions (i.e., mass and calcification). We expand our methodology by presenting a fusion models approach that combines predictions of different models to improve the final results.

    3 Methods and Materials

In this study, we present an end-to-end model for simultaneous detection and classification of breast lesions in mammograms. The process uses a deep learning YOLO-based model that generates suspicious regions from the entire input breast images and classifies the type of lesion as either mass or calcification. We also propose a fusion models approach to improve the model performance and to join different learnings.

    3.1 YOLO-Based Model

Object detection is a regression problem that maps image pixel coordinates to a bounding box surrounding a specific object. Popular region-based neural network models predict multiple bounding boxes and use regions to localize objects within images after they are fed into a CNN that generates a convolutional feature map. This approach applies a selective search that extracts the most adequate regions from images and then predicts the offset values for the final bounding boxes. In practice, this technique is slow and memory consuming; therefore, the YOLO deep learning network was proposed, where a single CNN simultaneously predicts bounding box locations and their class label probabilities from entire images. The low computational cost of YOLO comes from the fact that it does not require extracting features over sliding windows. In fact, it only uses features from the entire image to directly detect each bounding box and its class label probability.

The YOLO architecture, as explained in Fig. 1, is based on the fully convolutional neural network (FCNN) design. In particular, it splits each entire image into m × m grids, and for each grid, B bounding boxes are returned with a confidence score and C class probabilities.

Figure 1: Proposed YOLO-based architecture

The confidence score is computed by multiplying the probability that a class object exists in the cell by the intersection over union (IoU) score between the predicted box and its ground truth, as detailed in Eq. (1):

Confidence = Pr(Object) × IoU(pred, truth)    (1)

In addition, the detected object is classified as mass or calcification according to its class probability and its confidence score for that specific class label, as explained in Eq. (2):

Class confidence = Pr(Class_i | Object) × Pr(Object) × IoU(pred, truth)    (2)
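The confidence and class-probability scores described above can be sketched numerically; this is a minimal illustration (the function names are ours, not from the paper's code):

```python
def box_confidence(p_object, iou):
    """Eq. (1): confidence = Pr(Object) * IoU(pred, truth)."""
    return p_object * iou

def class_confidence(p_class_given_object, p_object, iou):
    """Eq. (2): class-specific confidence used to label a detected
    box as Mass or Calcification."""
    return p_class_given_object * box_confidence(p_object, iou)

# A box with objectness 0.9 and IoU 0.8, 95% likely a mass:
score = class_confidence(0.95, 0.9, 0.8)  # 0.95 * 0.9 * 0.8
```

A detection is kept only if these scores exceed the chosen thresholds, as discussed in the following sections.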

In this work, we adopted YOLO-V3, the third improved version of the YOLO networks, in order to detect objects at different scales through multi-scale feature extraction and detection. As shown in Fig. 1, the architecture first employs an extraction step based on the DarkNet backbone framework [55]. Inspired by the ResNet and VGG-16 architectures, it presents a new design of 53 layers, as illustrated in the lowest block in Fig. 1, with skip connections that prevent gradients from diminishing and vanishing while propagating through deep layers. The extracted features at different scales are then fed into the detection part, which presents three fully connected layers. It then applies the concept of anchor boxes, borrowed from the Faster R-CNN model. In fact, prior boxes are pre-determined by training a K-means algorithm on the entire set of images. The output matrices of multi-scale features are then defined as grid cells with anchor boxes. This helps determine the IoU percentage between the defined ground truth and the anchor boxes, and ensures selecting the boxes whose scores exceed a certain threshold. At the end, four offset values of the bounding box against each anchor box are predicted, together with a confidence score and a class label probability. Hence, detection considers as correct the bounding boxes that have both scores exceeding a certain threshold [56].
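The anchor-prior step mentioned above can be sketched as K-means over ground-truth box dimensions. The paper does not give its clustering details, so this sketch assumes the customary YOLO choice of 1 − IoU as the distance measure; it is an illustrative reimplementation, not the authors' code:

```python
import random

def iou_wh(box, anchor):
    """IoU of two boxes aligned at the origin (width/height only)."""
    w = min(box[0], anchor[0])
    h = min(box[1], anchor[1])
    inter = w * h
    return inter / (box[0] * box[1] + anchor[0] * anchor[1] - inter)

def kmeans_anchors(wh, k, iters=50, seed=0):
    """Cluster ground-truth (width, height) pairs to obtain k prior
    (anchor) boxes, using 1 - IoU as the distance."""
    random.seed(seed)
    anchors = random.sample(wh, k)
    for _ in range(iters):
        # Assign each box to the anchor it overlaps most.
        clusters = [[] for _ in range(k)]
        for box in wh:
            best = max(range(k), key=lambda i: iou_wh(box, anchors[i]))
            clusters[best].append(box)
        # Move each anchor to the mean of its cluster.
        anchors = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else anchors[i]
            for i, c in enumerate(clusters)
        ]
    return anchors
```

The resulting anchors are then attached to every grid cell of the multi-scale output maps.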

    3.2 Fusion Models Approach

According to the generalized YOLO-based model presented earlier in Fig. 1, bounding boxes that surround suspicious breast lesions are detected with a certain confidence score, as explained in the previous subsection. This score varies with the model settings, the input data fed to the model, and the internal classification step performed by YOLO to determine the class label probability score (i.e., Mass or Calcification). Based on this hypothesis, evaluation of such a model can be expanded to improve the final prediction results.

In this work, we first suggested selecting the best predicted bounding boxes within all augmented images (i.e., rotated, transformed, translated, etc.) according to their IoU score. This helped determine the best representative mammograms to correctly localize and classify breast lesions. Second, we suggested joining different predictions of the model's implementations in order to lower the error rate and combine the performance of differently configured models. These models were trained and configured differently to finally create a fusion-based model dedicated to best performance.

In fact, we note that Model 1, referred to as M1, is trained and configured separately for one class, targeting either Mass or Calcification. Therefore, the two models developed from M1 are referenced as M1(Mass) for the Mass class and M1(Calcification) for the Calcification class. Model 2, referred to as M2, is configured for multi-class training and identification, and is used in the fusion to improve the performance of the single-class models. The model M2 is identified as M2(Mass and Calcification) since it targets multiple classes.

After developing and testing each model Mi, our proposed fusion approach is to create a fusion model for the Mass class using M1(Mass) and for the Calcification class using M1(Calcification), while benefiting from M2(Mass and Calcification) to improve the performance of the M1 models.

We first report the Mass predictions1 using M1(Mass) with an IoU score above threshold1. Next, we select only images with Mass lesions and report their predictions using M2(Mass and Calcification) and another threshold2. After that, we filter out predicted images that are not within Mass predictions1 and save them as Mass predictions2. We finally combine the two prediction sets into the final Mass predictions, as shown in Fig. 2. We repeat the same logic for the Calcification predictions according to the flow in Fig. 2. In all our fusion models, we set threshold1 to 0.5 and threshold2 to 0.35, which yielded satisfying results.
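The three-step fusion flow for one lesion class can be sketched as set operations over per-image scores; the data layout here is our own simplification, not the paper's code:

```python
def fuse_predictions(m1_preds, m2_preds, t1=0.5, t2=0.35):
    """Sketch of the fusion flow for one lesion class (e.g., Mass).

    m1_preds / m2_preds map an image id to the IoU score of its best
    box from the single-class model M1 and the multi-class model M2
    (names and layout are illustrative assumptions).
    """
    # Step 1: keep M1 detections whose IoU exceeds threshold1.
    preds1 = {img for img, iou in m1_preds.items() if iou > t1}
    # Step 2: keep M2 detections above threshold2 that M1 missed.
    preds2 = {img for img, iou in m2_preds.items()
              if iou > t2 and img not in preds1}
    # Step 3: the fused prediction set is the union of both.
    return preds1 | preds2

fused = fuse_predictions({"a": 0.8, "b": 0.3}, {"b": 0.4, "c": 0.2})
# "a" passes M1's threshold, "b" is recovered by M2, "c" is rejected.
```

The same routine is run a second time with the Calcification models to produce the final Calcification predictions.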

Figure 2: Flow chart of the fusion models approach for final prediction (input mammography images include single-lesion and different-lesion cases from the CBIS-DDSM dataset)

    3.3 Datasets

In this study, the CBIS-DDSM and INbreast public datasets were used in our experiments to train and evaluate the proposed methodology. We also evaluated the performance on a small private dataset with different cases.

CBIS-DDSM [57] is an updated and standardized version of the Digital Database for Screening Mammography (DDSM) dataset, where images were converted from Lossless Joint Photographic Experts Group (LJPEG) to Digital Imaging and Communications in Medicine (DICOM) format. It was reviewed by radiologists after eliminating inaccurate cases and confirmed with the histopathology classification. It contains 2907 mammograms from 1555 patients and is organized into two pathology categories: Mass images (50.5%) and Calcification images (49.5%). Mammograms were collected with two different views of each breast (i.e., MLO and CC). Images have an average size of 3000 × 4800 pixels and are associated with pixel-level ground truth for suspicious region location and type.

INbreast [58] is a public dataset of images acquired using the MammoNovation Siemens full-field digital mammography (FFDM) system and stored in DICOM format. The database contains 410 mammograms, of which 235 cases include abnormalities in both MLO and CC views from 115 patients; normal mammograms were excluded. Images are also provided with their annotated ground truth and have an average size of 3328 × 4084 pixels. Mass lesions appear in 45.5% of the images and calcification lesions in 54.5%.

The private dataset was acquired from the National Institute of Cancerology (INCAN) in Mexico City. It contains 489 mammograms with only stage 3 and 4 breast cancer, where 487 cases include abnormal lesions from 208 patients; 80% of the images include Mass lesions and the rest include Calcifications. Images have an average size of 300 × 700 pixels, collected from CC, MLO, AT and ML views.

All mammograms may have one or multiple lesions with different sizes and locations. Besides, our experimental datasets have different resolutions and capture quality, which can be observed visually in Fig. 3; this is due to the different modalities used to acquire the mammograms. Consequently, performance results varied, as demonstrated using multiple test sets.

Figure 3: Examples from the public and private mammography datasets, where a green box indicates a mass and a yellow box indicates a calcification. (a) CBIS-DDSM mammogram example, an MLO view; (b) INbreast mammogram example, an MLO view; (c) Private mammogram example, a CC view

    3.4 Data Preparation

Mammograms were collected using the digital X-ray mammography scanning technique, which usually compresses the breast. This may generate deformed breast regions and degrade the quality of mammography images [59,60]. Therefore, some preprocessing steps should be applied to correct the data and remove additional noise [44,45]. In this work, we applied histogram equalization only on the CBIS-DDSM and the private dataset to enhance any compressed region and create a smooth pixel equalization that helps distinguish suspicious regions from normal regions. We did not enhance the INbreast dataset, as it was correctly acquired using full-field digital mammography (FFDM) and thus its quality is satisfying.

Furthermore, our suggested YOLO-based model requires mammograms and the coordinates of the regions of interest (ROI) that surround breast lesions. From the existing ground truth representing experts' annotations, we extracted the lesion coordinates, represented as x, y, width and height, together with the class (mass or calcification). Next, mammograms were resized using bi-cubic interpolation over a 4 × 4 neighborhood. For experimental reasons, we used an image size of 448 × 448, because the input size should be divisible by 32 according to the DarkNet backbone architecture of YOLO-V3, and this size should also fit in the GPU memory.

Training deep learning models requires a large amount of annotated data to maintain their generalization. For medical applications, most collected datasets have a small number of instances and often suffer from an imbalanced distribution, which remains a challenge for training deep learning models [61]. To overcome this problem, two solutions have recently been employed in many studies: data augmentation and transfer learning. Data augmentation offers a process of experimentally increasing the size of the dataset [2,8,10,12,18,39,43,45]. In this paper, and for the particular detection task, we augmented the original mammograms six-fold. First we rotated the original images with the angles Δθ = {0°, 90°, 180°, 270°}, and we transformed them using the Contrast Limited Adaptive Histogram Equalization (CLAHE) method [62] with two variations {tile grid size of (4, 4) and a contrast threshold of 40; tile grid size of (8, 8) and a contrast threshold of 30}. Thus, a total of 18,909, 1410, and 2922 mammograms were collected respectively for CBIS-DDSM, INbreast, and the private dataset to train and test the proposed model.

Deep learning models start by initializing the trainable parameters (i.e., weights, biases). There are two commonly adopted methods to do so: random initialization and transfer learning [2,10,19,43,45,49,63,64]. In our study, we relied only on the transfer learning technique, using the weights of a model pre-trained on a larger annotated dataset (i.e., ImageNet, MSCOCO, etc.) and then re-training and fine-tuning the new weights on our specific task and augmented dataset. This helped accelerate the convergence and avoid overfitting problems. Hence, we used the weights that were trained using the DarkNet backbone framework on the MSCOCO dataset. The pre-trained model architecture was originally based on the VGG-16 model.

    4 Experiments and Results

All experiments using the proposed deep learning model were conducted on a PC with the following specifications: Intel(R) Core(TM) i7-8700K processor with 32 GB RAM, 3.70 GHz frequency, and one NVIDIA GeForce GTX 1090 Ti GPU.

    4.1 Evaluation Metrics

In this study, we used only object detection and classification measures to evaluate the performance of our YOLO-based model. To verify the true detection of breast lesions in the mammograms, we first measured the intersection over union (IoU) score between each detected box and its ground truth, and then tested whether it exceeded a particular confidence score threshold, discussed later. Eq. (3) details the IoU score formula:

IoU = Area(detected box ∩ ground truth) / Area(detected box ∪ ground truth)    (3)
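The IoU score described above can be computed directly for axis-aligned (x, y, width, height) boxes; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over union (Eq. (3)) for axis-aligned boxes
    given as (x, y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width/height of the overlapping rectangle (0 if disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union
```

Identical boxes give an IoU of 1, disjoint boxes give 0, and partial overlaps fall in between.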

We also relied on another objective measure that considers the predicted class probability of the truly detected boxes. Inspired by the work in [65], we computed the number of truly detected masses and calcifications over the total number of mammograms, as defined in Eq. (4):

Detection accuracy rate = (number of truly detected masses and calcifications) / (total number of mammograms)    (4)

This means we excluded cases having a lower IoU score before computing the final detection accuracy rate. Indeed, only predicted boxes whose confidence probability scores were equal to or greater than the confidence score threshold were considered when computing the final detection accuracy rate. We measured the detection accuracy rate globally and for each independent class to evaluate the performance of the simultaneous detection and classification.
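The double filtering described above (drop low-IoU cases, then require the confidence threshold) can be sketched as a simplified per-lesion version of the detection accuracy rate; the data layout is our assumption, not the paper's implementation:

```python
def detection_accuracy(predictions, iou_threshold=0.5, conf_threshold=0.35):
    """Fraction of ground-truth lesions whose best predicted box passes
    both the IoU and the confidence thresholds.

    `predictions` is a list of (iou, confidence) pairs, one per
    ground-truth lesion; an undetected lesion contributes (0, 0).
    """
    hits = sum(1 for iou, conf in predictions
               if iou >= iou_threshold and conf >= conf_threshold)
    return hits / len(predictions)
```

A detection that overlaps well but is classified with low confidence, or vice versa, is counted as a miss.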

    4.2 Hyperparameters Tuning

The proposed YOLO-based model presents a list of hyperparameters that includes the learning rate, number of epochs, dropout rate, batch size, number of hidden units, confidence score threshold and so on. Considering their effect on the model performance, only three hyperparameters were selected for tuning. For all datasets, we randomly split the mammograms of each class into groups of 70%, 20%, and 10%, respectively, for the training, testing, and validation sets.
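The per-class 70/20/10 split can be sketched as a seeded shuffle; a minimal illustration (the seeding policy is our assumption):

```python
import random

def split_dataset(image_ids, seed=0):
    """Randomly split image ids into 70% training, 20% testing and
    10% validation, as applied per class above."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_test = int(0.7 * n), int(0.2 * n)
    return (ids[:n_train],
            ids[n_train:n_train + n_test],
            ids[n_train + n_test:])
```

Running the split once per class keeps the class proportions comparable across the three subsets.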

In each experiment, the trainable parameters were fixed and one hyperparameter was varied. For all experimental datasets, we used Adam as the optimizer, and all experiments were reported using the detection accuracy rate. First, we set the learning rate to 0.001, the number of epochs to 100 and the batch size to 64 according to the work in [45], and then trained the model with different confidence score thresholds until we found the value that provided satisfying detected objects for further tasks (i.e., segmentation and shape classification). As shown in Fig. 4a, the best confidence score value for all datasets is 0.35, accepting every detected object the model is more than 35% confident about. Next, we repeated the experiments while varying the learning rate to report the best detection accuracy rate for all datasets, as shown in Fig. 4b. In addition, an early stopping strategy was used for the second half of the iterations, reducing the learning rate by 10% if the loss function did not decrease every 10 epochs. Next, we selected the best learning rate, which is 0.001, and varied the batch size to report the best results for the three datasets, as illustrated in Fig. 4c. Finally, we set the learning rate to 0.001 and the batch size to 16, and varied the number of epochs until all datasets reported the best performance, at 100 epochs, as shown in Fig. 4d.
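The learning rate schedule described above (active only in the second half of training, cutting the rate by 10% when the loss stagnates for 10 epochs) can be sketched as follows; the function shape is our own, not the authors' training loop:

```python
def schedule_lr(lr, epoch, total_epochs, loss_history,
                patience=10, factor=0.9):
    """Reduce the learning rate by 10% if the loss has not decreased
    over the last `patience` epochs, but only during the second half
    of training (a sketch of the strategy described above)."""
    if epoch < total_epochs // 2 or len(loss_history) <= patience:
        return lr
    recent = loss_history[-(patience + 1):]
    # No value in the last `patience` epochs improved on the baseline.
    if min(recent[1:]) >= recent[0]:
        return lr * factor
    return lr
```

Called once per epoch, this leaves the rate untouched while the loss is still falling and decays it geometrically once training plateaus.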

Figure 4: Hyperparameters tuning; (a) confidence score; (b) learning rate; (c) batch size; and (d) number of epochs

    4.3 Results

Different experiments were conducted to assess the effect of varying the input image data and target classes (i.e., mass, calcification) of our suggested YOLO-based model. Furthermore, additional experiments were conducted for the fusion models approach to improve the results.

    4.3.1 Single Models Evaluation

The breast lesions detection and classification model was trained differently over the mammography datasets. We varied the input data fed to the model and configured the classification to use multiple classes with M2. The performance of the model is reported in Tab. 1.

The results show the advantage of data augmentation and resizing over the original mammography datasets. In fact, performance increased by 10% for the CBIS-DDSM dataset with almost half the inference time. Similarly, the model achieved a detection accuracy rate more than 6.5% higher with 40% less inference time on the INbreast dataset. The same improvement, by 29.6%, is noticed on the private dataset, with a 28% drop in inference time. Accordingly, using the augmented and resized datasets, we varied the prediction classes by training M1 independently on Mass and on Calcification, and M2 on both; the results are reported in Tab. 2 below.

Table 1: Model performance for different configurations

Table 2: Model performance for different prediction classes

The results show that the private dataset had the highest performance compared with the public datasets; this can be explained by the good resolution and the easy localization of most of the lesions in those mammograms. Moreover, the public datasets had more deteriorated lesions that are harder to simultaneously detect and classify.

Accordingly, the results in Tab. 2 show the clear ability of the YOLO-based model to detect and classify mass lesions from entire mammograms better than calcification lesions. This aligns with the difference between the two types of lesions in terms of shape, size and texture. In fact, calcifications are often small and randomly distributed in challenging positions within the breast [66]. As shown in Fig. 5, calcifications do not have a standard shape: they can be bilateral, thick linear, clustered, pleomorphic, vascular, etc. These varied shapes can limit the detection and classification of this type of lesion and yield more failed cases than for the other lesion type. Fig. 5 shows a case of a coarse-like calcification with crossed thick lines of irregular size (left image, taken from the CBIS-DDSM dataset), a case of pleomorphic calcifications that are randomly distributed (middle image, taken from the INbreast dataset), and an example of clustered calcifications located on the pectoral muscle, which presents a challenging case in mammography (right image, taken from the private dataset).

Figure 5: Examples of different calcification shapes and localizations (ground truth of calcification is marked in green, ground truth of mass is marked in red) for the CBIS-DDSM, INbreast and private datasets (from left to right)

Moreover, we notice that both models have the best results for mass lesions using the private dataset, and for calcification lesions using the INbreast dataset. This can be explained by the degraded quality of the digitized X-ray mammograms of the CBIS-DDSM dataset. Consequently, performance is affected by the image quality, and our study showed that detection and classification highly benefit from full-field digital mammography images, which involve direct conversion and preserve the shape and texture of breast lesions [67].

    Moreover, Tab. 2 demonstrates that training the model on both prediction classes slightly decreased the performance, which can be explained by the inability of the YOLO-based model to detect and distinguish some types of lesions that have similar shapes. However, we proved the robustness of our suggested model toward mass detection, with a maximum detection accuracy rate of 96.2% using the private dataset. All experiments had similar inference time, with a maximum of 0.58 seconds. Examples from each dataset are illustrated in Fig. 6, where each breast lesion is annotated with its confidence score. We clearly notice that multiple lesions were accurately detected in the same mammogram.

    Figure 6: Examples of breast lesion detection and classification results and their confidence scores toward different classes on the CBIS-DDSM, INbreast and private datasets (from left to right): mass (green boxes) and calcification (yellow boxes)

    4.3.2 Fusion Models Evaluation

    This study proposed an additional step to evaluate the simultaneous detection and classification model: an expanded evaluation that fuses models trained with different settings, as detailed in Section 3.2. Before presenting the fusion results, the single models M1 and M2 were first evaluated over the best-selected mammograms from the augmented datasets. That is, for every set of predicted mammograms comprising the original and its five augmented versions (i.e., rotated, transformed), we selected the image with the highest IoU score. Next, different models were fused into a new fusion model, as detailed in Tab. 3, and we measured the detection accuracy rate for every prediction class.
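    The best-variant selection described above can be illustrated with a minimal sketch (not the authors' code; the tuple layout and function names are assumptions for illustration): compute the IoU between each variant's predicted box and its ground truth, then keep the variant with the highest score.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_best_variant(predictions):
    """predictions: list of (variant_name, predicted_box, ground_truth_box)
    for an original mammogram and its augmented versions.
    Returns the variant whose prediction best overlaps its ground truth."""
    return max(predictions, key=lambda p: iou(p[1], p[2]))
```

    In this sketch the six variants (original plus five augmentations) of one mammogram would be passed together, and only the highest-IoU prediction is retained for the fusion step.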

    Table 3:Comparison performance using fusion models approach

    Indeed, the detection and classification performance of the fusion model increased for each type of breast lesion compared to the single models. For the CBIS-DDSM dataset, mass lesions reached a detection accuracy rate of 95.7%, up from 85.1%. Besides, we boosted the performance by 12.2% for calcification lesions. For the INbreast dataset, we achieved a final detection accuracy rate of 98.1% for mass lesions and 72% for calcification lesions, better than the single-model results reported in Tab. 2. Similarly, performance improved for the private dataset, with a 98% detection accuracy rate for mass lesions and 73.2% for calcification lesions.

    It is clearly observed that our suggested fusion models approach improved the detection and classification results on mammography images. Fusion strategies have previously been reviewed for medical image segmentation [68–70]; our approach is a new decision-level fusion strategy for object detection and classification that demonstrates the advantage of fusing the results of multiple models.

    Finally, a comparison of mass detection results with the latest studies and similar methods is listed in Tab. 4. Our implemented method using the fusion models approach is both fast and accurate. Comparing detection accuracy rate and inference time with the other works shows that we achieved better overall performance on the public datasets: CBIS-DDSM with a detection accuracy rate of 95.7% and INbreast with a detection accuracy rate of 98.1%.

    Note that the comparison with state-of-the-art methods relied on both detection accuracy rate and testing inference time; thus, even though the work by Al-Antari et al. [43] outperformed our detection results on INbreast, it was more expensive than our implementation in terms of inference time. Additionally, the experiments in each work were based on different preprocessing techniques, which can perform differently on the two standard datasets.

    Table 4:Comparison of mass detection with other works

    5 Discussion and Conclusion

    In this study, we have implemented a deep learning YOLO model to simultaneously detect and classify suspicious lesions in the breast. Similar works only addressed the problem of mass lesion detection and extracted the regions of interest for further diagnosis. In contrast, our study extends the ability of the YOLO-based model to conduct simultaneous detection and classification on mammograms [45], and consequently presents a method that predicts both the location and the type of two common findings in whole mammograms: mass and calcification. Results showed the capability of our proposed methodology to achieve state-of-the-art performance.

    Furthermore, this approach revealed the advantage of the YOLO model as a detector and classifier across different clinical mammographic images (i.e., digitized X-rays, full-field digital mammography, etc.). The quality of the predicted images also affirms the robustness of YOLO in successfully identifying breast lesions over the pectoral muscle, next to breast nipples, or above dense tissues, as shown in Fig. 6. Experimental results showed that training the YOLO-based deep learning model is overall fast and accurate: our results outperform the SSD method [35], the Faster R-CNN model [44], the CNN model [17] and other machine learning techniques [8,16], which reached a maximum detection accuracy rate of 98% on the INbreast dataset but with a significantly higher inference time. The comparison revealed that the YOLO model is a suitable choice for mass detection in mammography, as presented in other existing YOLO implementations [41,43,44] with a maximum detection accuracy rate of 97.27% on the INbreast dataset; our study enhanced the state-of-the-art result to 98.1%. However, a limitation of the proposed YOLO model lies in the training configuration, which depends on preparing the input data in the right format. Input images must be accompanied by the true locations and class labels of the lesions during training. This requires extracting the coordinates of the lesions from the ground truth, so the YOLO model has an input dependency.
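    To make the input-dependency concrete, the standard YOLO annotation format expects, for each image, one line per lesion containing the class id and the bounding box as a normalized center point plus width and height. A minimal conversion sketch (an illustration of the common Darknet-style format, not the authors' preprocessing code) from pixel-coordinate ground-truth boxes might look like:

```python
def to_yolo_annotation(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a ground-truth box in pixel coordinates into a YOLO label line:
    'class_id x_center y_center width height', all coordinates in [0, 1]."""
    x_c = (x_min + x_max) / 2.0 / img_w   # normalized box center, x
    y_c = (y_min + y_max) / 2.0 / img_h   # normalized box center, y
    w = (x_max - x_min) / img_w           # normalized box width
    h = (y_max - y_min) / img_h           # normalized box height
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"
```

    For example, a mass (class 0) spanning pixels (100, 200) to (300, 400) in a 1000x1000 mammogram becomes the label line `0 0.200000 0.300000 0.200000 0.200000`, written to a text file alongside the image.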

    In addition, this paper provided feasible and promising results using the proposed fusion models approach, which was designed to join different models and lower the misprediction error. Moreover, as breast lesion detection plays a critical role in CAD systems and fully-integrated breast cancer diagnosis [32,43,45], our methodology provided improved detection performance compared with recent deep learning models. This helps avoid propagating additional errors when conducting further diagnosis on the detected lesions.

    For a complete clinical application that can assist radiologists, future work aims at extracting the correctly detected masses and calcifications and conducting lesion segmentation, shape and type classification (malignant or benign), and malignancy degree prediction of breast tumors. This will provide an entire framework for breast cancer diagnosis that may also include clinical report analysis.

    Acknowledgement: The authors would especially like to express their gratitude to the National Institute of Cancerology (INCAN) in Mexico City for providing the private mammography dataset. Thanks also to the radiologists Dr. Kictzia Yigal Larios and Dr. Raquel Balbás at FUCAM A.C., and Dr. Guillermo Peralta and Dr. Néstor Piña at Cancer Center Tec100 by MRC International.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
