
    Recognizing Breast Cancer Using Edge-Weighted Texture Features of Histopathology Images

Computers, Materials & Continua, 2023, Issue 10

Arslan Akram, Javed Rashid, Fahima Hajjej, Sobia Yaqoob, Muhammad Hamid, Asma Arshad and Nadeem Sarwar

1 Department of Computer Science and Information Technology, Superior University, Lahore, 54000, Pakistan

2 MLC Lab, Maharban House, House #209, Zafar Colony, Okara, 56300, Pakistan

3 Information Technology Services, University of Okara, Okara, 56300, Pakistan

4 Department of CS&SE, International Islamic University, Islamabad, 44000, Pakistan

5 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia

6 Department of Computer Science, University of Okara, Okara, 56300, Pakistan

7 Department of Statistics and Computer Science, University of Veterinary and Animal Sciences, Lahore, Punjab, 54000, Pakistan

8 School of Biochemistry and Biotechnology, University of the Punjab, Lahore, 54000, Pakistan

9 Department of Computer Science, Bahria University, Lahore Campus, Lahore, 54600, Pakistan

ABSTRACT Around one in eight women will be diagnosed with breast cancer at some point in their lives. Improved patient outcomes require both early detection and an accurate diagnosis, and histological images are routinely used in diagnosing breast cancer. Methods proposed in recent research focus only on classifying breast cancer at specific magnification levels; no study has used a combined dataset spanning multiple magnification levels. This investigation provides a strategy for detecting breast cancer that applies the wavelet transform to histopathology image texture data. The proposed method comprises converting histopathological images from Red, Green, Blue (RGB) to Luminance, Chrominance Blue, Chrominance Red (YCBCR), extracting texture information with a wavelet transform, and classifying the images with Extreme Gradient Boosting (XGBOOST). Furthermore, because the dataset has imbalanced samples, SMOTE has been used for resampling. Evaluated with 10-fold cross-validation, the suggested method achieves an accuracy of 99.27% on the BreakHis 1.0 40X dataset, 98.95% on the BreakHis 1.0 100X dataset, 98.92% on the BreakHis 1.0 200X dataset, 98.78% on the BreakHis 1.0 400X dataset, and 98.80% on the combined dataset. These findings imply that combining wavelet transformation with textural signals to detect breast cancer in histopathology images can improve detection rates and patient outcomes.

KEYWORDS Benign and malignant; color conversion; wavelet domain; texture features; XGBOOST

    1 Introduction

Cancer incidence continues to rise, making it the leading cause of death worldwide. Breast cancer, the second most common disease in women, is an important global health concern. For 2020, the World Health Organization estimated 2.3 million new cases of breast cancer worldwide, and breast cancer ranks sixth among causes of female fatalities. Breast cancer mortality is not uniform: rates are higher in wealthy countries than in less developed ones because of differences in nutrition, exercise, and reproduction rates. With an estimated 284,200 new cases and 44,130 deaths in 2021, breast cancer is a leading cause of cancer death among American women [1]. Breast cancer mortality rates in developed nations have been falling over the past few decades as diagnosis and treatment have improved. Despite this, breast cancer remains a major health concern worldwide, particularly in underdeveloped countries with scarce diagnostic and therapeutic options. Because early identification increases survival rates, mammography and other imaging screening should begin at age 40 for women of average risk; more frequent testing may be necessary for women with a family history or other risk factors. Early breast cancer staging is essential to increase the chances of successful therapy and rapid recovery. Accurate diagnosis and staging permit prompt intervention with surgery, radiation therapy, chemotherapy, or any combination thereof, and are therefore central to improved treatment outcomes.

Technology has made cancer detection more sensitive and accurate. X-rays, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound, biopsy, and laboratory testing are used to identify cancer. A biopsy involves evaluating a small tissue sample from the suspected location under a microscope for cancer cells. Blood and tumor-marker testing can also detect cancer cells or cancer-related chemicals. Conventional cancer detection technologies have drawbacks: imaging may miss tiny tumors, and biopsy and laboratory testing may give erroneous positive or negative results. Liquid biopsy can detect cancer cells or DNA fragments in the blood; this non-invasive approach may diagnose cancer earlier and assess therapy response. Common machine learning methods, including Support Vector Machines (SVM), Random Forests, and K-Nearest Neighbors (KNN), have all been used for breast cancer classification [2,3]. These algorithms' statistical and mathematical foundations allow useful insights to be extracted from seemingly unconnected information. However, traditional machine learning approaches typically require feature engineering, the process of selecting and extracting pertinent qualities from the data, which can be lengthy when dealing with extensive data like histopathology images. Machine learning and deep learning make it easy to automate many domains, with applications ranging from image forgery recognition and smart city infrastructure to medical care and agricultural water distribution [4–7].

Classifying breast cancer histopathology images using deep learning models and other machine learning approaches is an active study area. Deep learning models such as Convolutional Neural Networks (CNNs) have recently been used to classify breast cancer [8–10]. For instance, the effectiveness of SVM, KNN, and CNN models was evaluated on breast cancer histopathological images, and the CNN model had the greatest accuracy (95.29%) of all the machine-learning methods. An interesting case in point is the classification of breast cancer using histopathology images, where different deep-learning models were compared and the best model achieved an accuracy of 97.3% [11–15]. Combining histopathology images with a cutting-edge deep learning model called Deep Attention Ensemble Network (DAEN) [14] further demonstrates the superiority of deep learning models over conventional machine learning algorithms for breast cancer classification. Several papers have investigated the feasibility of using machine learning on histology images to categorize breast cancer better. Classifying breast cancer histopathology images using a combination of a support vector machine and a random forest resulted in an accuracy of 84.23%. Several machine learning models, including SVMs, KNNs, random forests, and CNNs, were tested and compared for their ability to classify breast cancer histology; CNN again had the highest accuracy (96.8%) [15].

Classifying cancerous images using machine learning has been the subject of numerous studies. Compared with more conventional inspection techniques that use image processing and classification algorithms, however, these methods still require refinement. First, the data made public through competitions and other sources suffers from significant class imbalance. Furthermore, although most research has analyzed histopathology images at a single magnification or separately at several magnification levels, no study has focused on a combined dataset consisting of all magnification levels. Second, current breast cancer classification methods perform poorly even with the best classification algorithms because they rely on statistical and textural elements of an image to make their classifications.

This study combined wavelet transformation with Extreme Gradient Boosting (XGBOOST) [16] to develop a technique for distinguishing between benign and malignant cancers. It offers a scale-invariant strategy for labeling images as benign or malignant regardless of their size, shape, or resolution. The suggested method classifies cancer as benign or malignant using the BreakHis 1.0 [17] dataset, which comprises four magnification levels. Important stages include preprocessing, during which images of varying types, sizes, and dimensions are input and converted into YCBCR channels, followed by feature extraction and concatenation. The final step provides the features to XGBOOST for classification, yielding a model for use by diagnostic specialists.

    Some crucial findings from the study are as follows:

1. Even though there are many more benign images than malignant ones, the Synthetic Minority Oversampling Technique (SMOTE) has been utilized to balance the BreakHis 1.0 dataset so that more useful insights can be gleaned from it.

2. Texture features are extracted using the wavelet transformation, and the images are classified as benign or malignant using XGBOOST.

3. A method is said to be scale-invariant if it maintains its effectiveness regardless of image size. The scale invariance of the proposed method is therefore evaluated using images of varied sizes, shapes, and types.

The remainder of this article is organized as follows: Section 2 details pertinent studies on breast cancer detection techniques. Section 3 provides a high-level overview of the proposed methodology, including preprocessing, feature extraction, and classification, and discusses the experimental datasets. Section 4 presents experimental results and a discussion of the proposed design; results computed with the proposed architecture are tabulated and illustrated. Section 5 concludes with a discussion of the results and recommendations for future study.

    2 Literature Review

Breast cancer is a serious global public health issue that profoundly impacts patient outcomes and healthcare systems. Early identification and accurate diagnosis of breast cancer are crucial for a greater survival rate and lower medical costs. Recently, machine learning algorithms have shown enormous promise in identifying and classifying breast cancer using histopathology images. This literature review covers the most up-to-date findings on the capabilities and limitations of machine learning algorithms for breast cancer classification.

Wetstein et al. [18] created a deep learning method for breast cancer grading and tested it on whole-slide histopathology images. The algorithm outperformed human pathologists at identifying low and intermediate tumor grades, achieving an accuracy of 80% and a Cohen's Kappa of 0.59. The work highlighted the potential of deep learning-based models for automating breast cancer grading on whole-slide images, which matters because accurate and consistent grading improves patient outcomes. To determine the most common and productive training-testing ratios for histological image recognition, Wakili et al. [19] analyzed deep-learning-based models; a training-to-testing ratio of 80/20 was shown to yield the highest accuracy. The authors also created DenTnet, a new method built on transfer learning and DenseNet, to address the limitations of prior methods. DenTnet achieved up to 99.28% accuracy on the BreaKHis dataset, outperforming leading deep learning algorithms in computing performance and generalizability while using fewer computational resources and maintaining the same feature distribution. However, DenTnet was tested only on whole-slide images, not on different resolutions.

Kadhim et al. [20] used the Histogram of Gradients (HOG) feature extractor to quantify invasive ductal carcinoma histopathology images. Area Under Curve (AUC), F1 score, specificity, accuracy, sensitivity, and precision were used to evaluate the algorithms' performance; with more than 100 images, however, the algorithms struggled to keep up with the data, a limitation deep learning could help overcome. By reducing the scope for human error, machine learning (ML) can potentially improve breast cancer detection and survival rates. Zhang et al. [21] developed BDR-CNN-GCN to better detect breast cancer in mammograms. Performance improves when a graph convolutional network (GCN) and a CNN are combined with batch normalization (BN), dropout (DO), and rank-based stochastic pooling (RSP). Evaluated ten times on the breast miniMIAS dataset, the model has a sensitivity of 96.202%, a specificity of 96.002%, and an accuracy of 96.101%. Compared to 15 state-of-the-art breast cancer detection approaches and five neural network models, BDR-CNN-GCN achieves better results regarding data augmentation and identifying malignant breast masses.

Alqudah et al. [22] developed a sliding window method for extracting Local Binary Pattern (LBP) features. Overall, the proposed method achieves high accuracy, sensitivity, and specificity: a 91.12% rate of correct predictions, an 85.22% rate of correct positive predictions, and a 94.01% rate of correct negative predictions. These outcomes excel in comparison to other studies in the literature. The suggested method can extract more information and be compared with other machine-learning strategies, and it can potentially enhance breast cancer diagnosis and histological tissue localization. Clement et al.'s support vector machine classifier and four DCNN versions classified breast cancer histology images into eight categories [23]. A deep convolutional neural network (DCNN) analyzed images at many resolutions to produce a highly predictive multi-scale pooling image feature representation (MPIFR), which an SVM then used to classify the images. Because it offers a fresh approach to reliably identifying various breast cancer subtypes, the proposed MPIFR technology may greatly enhance patient outcomes and breast cancer screening. On the BreakHis histopathological breast cancer image dataset, it achieved a precision of 98.45%, a sensitivity of 97.48%, and an accuracy of 97.77%.

The MPIFR method can improve the precision of breast cancer diagnosis and patients' health. Seo et al. [24] created a deep convolutional neural network (DCNN) ensemble that performs exceptionally well in classifying breast cancer. On the BreakHis histopathology BC image dataset, the ensemble model achieved higher accuracy (97.77%), sensitivity (97.48%), and precision (98.45%) than the prior state-of-the-art and a full set of DCNN baseline models. To separate cells with and without nuclei, Saturi et al. [25] introduced an optimization-based superpixel-clustering strategy. The proposed method outperformed prior studies, yielding an 8%–9% increase in classification accuracy for identifying breast cancer. The improved segmentation stems from the method's advantages, which include searching for a global optimum and using parallel computing.

In [26], Hao et al. suggested a deep semantic and Grey Level Co-occurrence Matrix (GLCM)-based technique for image recognition in breast cancer histopathology. The suggested method outperforms the baseline models in Magnification Specific (MSB) and Magnification Independent (MIB) classification, with recognition accuracies of 96.75%, 95.21%, 96.57%, and 93.15% at magnifications of 40X, 100X, 200X, and 400X, respectively, and 96.33%, 95.26%, 96.09%, and 92.99% at the patient level. MIB classification accuracy was 95.56% at the patient level and 95.54% at the image level; the method's accuracy is comparable to current best practice in recognition. Rehman et al. [27] proposed a neural network-based framework combining reduced feature vectors and machine learning to distinguish between mitotic and non-mitotic cells. The method accurately captures cell texture, allowing the creation of efficiently reduced feature vectors for identifying malignant cells, and uses ensemble learning with weighted attributes to improve model performance. It outperforms state-of-the-art methods for recognizing mitotic cells on the MITOS-12, AMIDA-13, MITOS-14, and TUPAC16 datasets. Joseph et al. combined different feature extraction methods (Hu moments, Haralick textures, and color histograms) for successful multi-classification of breast cancer cases on the BreakHis dataset. The recommended multi-classification strategy, supported by histological images, outperformed the majority of other investigations: histopathological images at 40X, 100X, 200X, and 400X magnifications were classified with accuracies of 97.87%, 97.60%, 96.10%, and 96.84%, respectively [28].

Increasing patient survival rates and decreasing healthcare costs require early identification and accurate breast cancer diagnosis. Machine learning algorithms have shown potential in detecting and classifying breast cancer using histopathology images. Recent studies have investigated many approaches to grading breast cancer, including superpixel clustering algorithms, sliding window feature extraction methods, and deep learning-based models. These studies have shown the proposed methods to be superior to alternative procedures in accuracy, sensitivity, and specificity, all contributing to improved breast cancer detection. Such procedures have the potential to enhance patient outcomes while decreasing healthcare costs. Among the many limitations and challenges that must be surmounted are the interpretability of machine learning models and the requirement for additional labeled data.

    3 Material and Methods

The whole-slide classification machine learning pipeline has great potential for use in the detection and treatment of breast cancer. We analyze high-resolution images from databases such as BreakHis to classify slides as cancerous or benign. The images were converted to YCBCR for optimal texture feature extraction. After this initial image processing, texture features were retrieved from wavelet coefficients and passed to a binary classifier; any algorithm distinguishing between cancerous and noncancerous slides can serve as the classifier. Because the dataset is imbalanced, it must be resampled before classification begins; oversampling with SMOTE is used to rectify this. XGBOOST handles classification in this investigation. The pipeline then reports the classification results with metrics such as accuracy, precision, recall, and F1 score, which can be used to assess the pipeline's efficiency and adjust the various stages accordingly. The pipeline consists of four phases: preprocessing, feature extraction, classification, and result reporting (Fig. 1).

Figure 1: Workflow of the proposed breast cancer classification method

    3.1 Datasets

This section describes the data collection and preprocessing methods used to train and assess the models employed in the machine learning pipeline. Table 1 summarizes the features of BreakHis 1.0. The BreakHis 1.0 database contains images of breast cancer tissue samples, separated into two categories: benign and malignant. The magnifications used to capture these images range from 40X to 400X. The total number of images is 3,995, with 1,995 showing malignant growths and 2,000 showing noncancerous ones. Each image is a Portable Network Graphics (PNG) file of 700 × 460 pixels with 3 color channels. The BreakHis dataset's wide range of magnification levels makes it well suited for training recognition models that generalize across scales.

Table 1: Details of datasets used for experiments

The breast histopathology images come from the BreakHis 1.0 dataset, which includes 9,109 microscopic images of both benign and malignant breast tissue. These images were captured at four magnifications (40X, 100X, 200X, and 400X) and stained with hematoxylin and eosin. Studies have used the BreakHis 1.0 dataset to train and evaluate algorithms for breast cancer diagnosis and prognosis; deep learning models developed for automatically classifying breast histopathology images have greatly advanced CAD systems [29]. Fig. 2 displays a few examples of the experimental database's image content.

Figure 2: A breast cancer slide at four different magnifications: (a) 40X, (b) 100X, (c) 200X, and (d) 400X

The data needed to be rebalanced, and several approaches were considered. Under-sampling would decrease the number of normal slides to equal the number of cancer slides, but this would diminish the already limited data from the majority class and might eliminate beneficial features. Oversampling the minority class with the synthetic minority oversampling technique (SMOTE) [30] balances the output classes and gives the model access to more useful information, although it is more computationally expensive than class weighting, a simpler technique that gives more weight to the under-represented class when computing the loss function. Because class weighting does not add training data, whereas SMOTE can meaningfully extend the size of the training set, which is currently limited in BreakHis 1.0, SMOTE was adopted. Fig. 3 shows the results of resampling using SMOTE.
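As a concrete illustration, the resampling step can be expressed with the imbalanced-learn package. This is a minimal sketch, assuming the texture features have already been extracted into an array X with labels y (0 = benign, 1 = malignant); the function name balance_classes and the label encoding are illustrative, not from the paper:

```python
# Minimal sketch of SMOTE resampling, assuming imbalanced-learn is installed.
from collections import Counter

from imblearn.over_sampling import SMOTE


def balance_classes(X, y, random_state=42):
    """Oversample the minority class so both classes have equal counts."""
    smote = SMOTE(random_state=random_state)
    X_res, y_res = smote.fit_resample(X, y)
    # Report class counts before and after, as visualized in Fig. 3.
    print("before:", Counter(y), "after:", Counter(y_res))
    return X_res, y_res
```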

Figure 3: Bar chart showing the output class distribution between the benign and malignant classes within the training data. (a) Before balancing; (b) after balancing

All datasets used in this inquiry were partitioned into K-fold cross-validation parts with their corresponding ratios. With XGBOOST, training images are used to build a model, while testing images are used to evaluate the trained model.

    3.2 Preprocessing

Digital image processing yields subtly different outcomes when applied to images in different color modes. Converting an image from Red, Green, Blue (RGB) to luminance-chrominance (YCBCR) offers many benefits. YCBCR is a color space that separates luminance (brightness) from chrominance (color) for image and video compression, transmission, and processing. Converting an image from RGB to YCBCR reduces color redundancy, which improves image compression: the luminance channel carries most of the visual information, so reducing chrominance resolution shrinks file size without noticeably affecting image quality. YCBCR also handles discrepancies between human and device color perception; electronic devices weigh red, green, and blue equally, but humans are more sensitive to green. By segregating luminance and chrominance information, YCBCR accommodates these variances. The RGB image is therefore converted to YCBCR using the OpenCV library in Python, separating the three YCBCR components.
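A minimal sketch of this conversion step follows. Note that OpenCV reads images in BGR order and names the target space "YCrCb", so the channels returned by cv2.split arrive in (Y, Cr, Cb) order; the function name split_ycbcr is illustrative:

```python
# Minimal sketch of the RGB-to-YCBCR preprocessing step using OpenCV.
import cv2


def split_ycbcr(image_path):
    bgr = cv2.imread(image_path)                      # OpenCV loads PNGs as BGR
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)    # luminance + 2 chrominance planes
    y, cr, cb = cv2.split(ycrcb)                      # note OpenCV's Y, Cr, Cb order
    return y, cb, cr
```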

    3.3 Feature Extraction

Signal processing, data compression, and image analysis are just a few of the many applications of the wavelet transform, a mathematical technique that decomposes a signal into a family of wavelets, each a scaled and translated version of a mother wavelet. The wavelet transform can be applied to signals in either continuous or discrete time. Discrete wavelet transforms (DWT) are frequently used for feature extraction and compression in image processing. The DWT decomposes an image into coefficients representing various degrees of detail and approximation; these coefficients are obtained by convolving the image with a collection of filters known as the wavelet filters. The DWT can be expressed mathematically as follows:

$$W_{j,k}=\sum_{n} x_{n}\,\psi_{j,k,n}, \qquad V_{j+1,k}=\sum_{n} x_{n}\,\phi_{j,k,n}$$

where $\psi_{j,k,n}$ and $\phi_{j,k,n}$ are the wavelet and scaling functions, and $x_n$ is the original signal. At level $j$ and index $k$, the wavelet and scaling coefficients are denoted by $W_{j,k}$ and $V_{j+1,k}$.
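In practice a 2D DWT on an image channel yields one approximation and three detail sub-bands. A minimal sketch using the PyWavelets package follows; the Haar mother wavelet is an assumption, since the paper does not name the wavelet used:

```python
# Minimal sketch of a single-level 2D DWT on one image channel.
import numpy as np
import pywt


def dwt_subbands(channel):
    """Return the approximation (LL) and detail (LH, HL, HH) coefficients."""
    LL, (LH, HL, HH) = pywt.dwt2(channel.astype(np.float64), "haar")
    return LL, LH, HL, HH
```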

Image analysis software makes wide use of texture features and the grey-level co-occurrence matrix (GLCM), which lays out important details and useful statistical formulas [31]. The image's pixel intensities are tallied by counting the occurrences of each value. The mean of an image is calculated as

$$\mu=\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I(x,y)$$

where $I(x,y)$ is the intensity of the pixel at position $(x,y)$ in an $M \times N$ image.

The standard deviation measures inhomogeneity because it characterizes the spread of the observed intensity distribution [32]. Larger standard deviations reflect sharper boundaries and indicate images with higher intensity levels. It is determined by

$$\sigma=\sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl(I(x,y)-\mu\bigr)^{2}}$$

Skewness [32] quantifies the presence or absence of symmetry. For a probability distribution $X$, skewness, denoted $S_k(X)$, is defined as

$$S_k(X)=E\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right]$$

Kurtosis [33] characterizes the curvature of the probability distribution of a random variable. The kurtosis of a random variable $x$, denoted $\mathrm{Kurt}(x)$, is defined as

$$\mathrm{Kurt}(x)=E\!\left[\left(\frac{x-\mu}{\sigma}\right)^{4}\right]$$

Energy has been applied to the study of visual similarity; it quantifies the repetitiveness of pixel patterns in the image. Haralick defined energy over the GLCM, where it is also known as the angular second moment:

$$E=\sum_{i}\sum_{j}P(i,j)^{2}$$

where $P(i,j)$ is the $(i,j)$ entry of the normalized GLCM.

Contrast, the variation in intensity between a pixel and its neighbors, is a measurement used to assess local image structure:

$$C=\sum_{i}\sum_{j}(i-j)^{2}\,P(i,j)$$
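The statistics above can be computed per wavelet sub-band. A minimal sketch using NumPy, SciPy, and scikit-image follows; the GLCM parameters (distance 1, angle 0, 256 gray levels) are assumptions, as the paper does not specify them:

```python
# Minimal sketch of the six texture statistics defined above for one sub-band.
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import graycomatrix, graycoprops


def texture_features(band):
    x = band.ravel()
    # First-order statistics: mean, standard deviation, skewness, kurtosis.
    first_order = [x.mean(), x.std(), skew(x), kurtosis(x)]
    # Rescale the sub-band to 8-bit so it can index a 256-level GLCM.
    img = np.interp(band, (band.min(), band.max()), (0, 255)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    # GLCM statistics: energy (angular second moment, 'ASM') and contrast.
    second_order = [graycoprops(glcm, "ASM")[0, 0],
                    graycoprops(glcm, "contrast")[0, 0]]
    return first_order + second_order
```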

    3.4 Classification

We rely on earlier studies to guide our classification method because feature extraction is more central to this work than building a superior classifier. Our research confirmed the widespread use of nonlinear XGBOOST for image classification and its record of high-quality detection outcomes; for this reason, XGBOOST is our top pick, used with the DART booster. The categorization process includes training and testing steps, and Fig. 1 illustrates a functional breakdown of the system's workflow. The classifier draws heavily on the texture features of the image databases. After wavelet-based feature extraction, we train a classification model with XGBOOST; features were extracted from every image in the experimental datasets. The 10-fold cross-validation technique is employed for this purpose, splitting the data into k segments for analysis. The proposed model excels on every experimental dataset.

    3.5 Experimental Setup

Python was used to compute the texture attributes, and XGBOOST classified the histopathology images. Several machine-learning methods and extraction parameters were evaluated to enhance accuracy. Python 3.11 handled preprocessing and feature extraction. OpenCV and NumPy are popular image-reading and preprocessing libraries, widely used in robotics, autonomous cars, and computer vision. PyFeat extracts image features based on texture, shape, and color; these features help machine learning systems classify and recognize objects. XGBOOST and Scikit-learn offer classifiers such as decision trees, random forests, and support vector machines. SMOTE corrects class imbalance by generating artificial minority-class samples, balancing the dataset and improving classification accuracy. Matplotlib and Seaborn ease analysis and visualization. The DART booster's default settings use all training samples with a learning rate of 0.1, a maximum tree depth of 6, a subsample ratio of 1, a regularization term of 1, a gamma value of 0.0 (no minimum loss reduction required for splitting), a minimum child weight of 1, and no dropout. K-fold cross-validation evaluates the classification models; XGBOOST's cross-validated k-fold results were calculated using each fold's testing set. All tests ran in a Jupyter Notebook on a seventh-generation Dell i7 CPU with 16 GB of RAM and 1 TB of storage.
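The DART settings listed above can be expressed with the xgboost Python API. A minimal sketch, assuming the scikit-learn-compatible XGBClassifier wrapper; the variable names are illustrative:

```python
# Minimal sketch of the XGBOOST classifier with the DART booster settings above.
from xgboost import XGBClassifier

clf = XGBClassifier(
    booster="dart",
    learning_rate=0.1,
    max_depth=6,
    subsample=1.0,        # use all training samples
    reg_lambda=1.0,       # L2 regularization term of 1
    gamma=0.0,            # no minimum loss reduction required for splitting
    min_child_weight=1,
    rate_drop=0.0,        # no dropout of trees
    eval_metric="logloss",
)
# Typical usage: clf.fit(X_train, y_train); preds = clf.predict(X_test)
```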

    3.6 Evaluation Measures

Many distinct measures, such as testing accuracy, precision, recall, F1-score, and AUC, are used to evaluate the classification process. Accuracy is the assessment parameter used most often for the proposed method. In this study, the proposed approach is quantitatively evaluated using the following parameters:

$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}\times 100$$

$$\mathrm{Precision}=\frac{TP}{TP+FP} \qquad \mathrm{Recall}=\frac{TP}{TP+FN}$$

where Accuracy is the number of correct predictions divided by the total number of predictions, multiplied by 100 to get a percentage, and Recall (the true positive rate) is the percentage of actual positive samples that are correctly identified.

In this model, true positives (TP) are the number of diseases correctly recognized, false positives (FP) are the number of conditions misclassified as disease, and false negatives (FN) are the number of diseases that should have been discovered but were not. The F1 score is a popular measure combining precision and recall:

$$F1=2\times\frac{\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}$$
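These metrics can be computed directly with scikit-learn. A minimal sketch; the function name report and its arguments are illustrative, with y_score assumed to be the predicted probability of the malignant class:

```python
# Minimal sketch of the evaluation metrics defined above.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)


def report(y_true, y_pred, y_score):
    return {
        "accuracy_%": 100 * accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),  # probability of malignant class
    }
```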

Cross-validation (CV) is a resampling methodology used to assess machine learning models on a constrained dataset while safeguarding the prediction models against overfitting. K-fold CV splits the given dataset into K segments, or folds, each of which serves as the testing set at some point. In 10-fold cross-validation (K = 10), the dataset is separated into ten folds: in the first iteration, the first fold tests the model and the remaining folds train it; in the second iteration, the second fold serves as the testing set while the rest form the training set. This cyclic process repeats until each of the ten folds has been used as the testing set.
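A minimal sketch of this 10-fold protocol using scikit-learn follows. StratifiedKFold keeps the benign/malignant ratio similar in every fold; whether stratification was used in the paper is not stated, so that choice is an assumption:

```python
# Minimal sketch of the 10-fold cross-validation loop described above.
from sklearn.model_selection import StratifiedKFold


def run_cv(clf, X, y, k=10, seed=42):
    scores = []
    splitter = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, test_idx in splitter.split(X, y):
        clf.fit(X[train_idx], y[train_idx])          # train on k-1 folds
        scores.append(clf.score(X[test_idx], y[test_idx]))  # test on held-out fold
    return scores
```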

    4 Results and Analysis

The results of a large-scale experiment testing the proposed breast cancer categorization method are presented here. We trained and scored the models using the evaluation method described in Section 3.6 and compiled data from a wide range of performance assessment tools. The experiments focused on the following areas:


1. The effectiveness of the proposed framework is measured with XGBOOST for two-class classification on each magnification dataset individually available in BreakHis 1.0.

2. XGBOOST is used to evaluate the efficacy of the suggested framework for two-class classification on the combined dataset. Cross-validation with different assessment metrics rates the proposed framework on the combined benign and malignant dataset.

3. Analysis of how the proposed method compares with other, more advanced approaches.

4.1 Evaluation of Proposed Method on 40X, 100X, 200X, and 400X Images from BreakHis 1.0

Table 2 summarizes ten rounds of cross-validation testing of the breast cancer classification model on the 40X magnified dataset. Wavelet transformation and textural features of histopathological images distinguish benign from malignant instances. The table shows each fold's benign and malignant classification percentages, along with the AUC statistic and the number of images used in each iteration. All folds have good accuracy ratings of 96.35%–99.27%. The model correctly classifies benign and malignant cases with good precision, recall, and F1 score values. Wavelet transformation and textural aspects of histopathology images may thus improve breast cancer classification accuracy and patient outcomes.

Table 2: 10-fold cross-validation results on 40X magnified images of BreakHis 1.0

Cross-validation is frequently used when evaluating machine learning models: the dataset is partitioned into k folds, and the model is trained k times, with each fold serving once as the validation set. Cross-validation lets the model be tested on new data. Fold-wise confusion matrices display model performance across each cross-validation fold, showing for each category the proportion of correct classifications, incorrect classifications, and false negatives. Overfitting, class imbalance, and patterns in model performance can all be identified with this information. Based on the fold-wise confusion matrices presented in Fig. 4, the model achieves high performance for both benign and malignant classes. Performance may change from fold to fold due to differences in the number of images used per class. Blue boxes in the confusion matrix show correctly classified samples.
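One fold's confusion matrix, in the style of Fig. 4, can be produced with scikit-learn and matplotlib. A minimal sketch, assuming y_test and y_pred come from a single cross-validation fold:

```python
# Minimal sketch of a per-fold confusion matrix plot as in Fig. 4.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_test, y_pred, labels=[0, 1])   # rows: true, cols: predicted
ConfusionMatrixDisplay(cm, display_labels=["benign", "malignant"]).plot(cmap="Blues")
plt.show()
```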

Table 3 displays the outcomes of a 10-fold cross-validation on the BreakHis 1.0 dataset using the proposed approach at 100X magnification. The table lists the accuracy, precision, recall, and F1 score for each fold, separately for benign and cancerous images, along with area under the curve (AUC) values for each fold, which quantify the model's ability to differentiate between benign and cancerous images. The outcomes show that the automated approach is effective and accurate in spotting breast cancer. The high accuracy ratings (95.83%–98.95%) demonstrate that the system can successfully categorize various images, and the excellent precision, recall, and F1 scores show how well it distinguishes benign from cancerous images. The AUC values demonstrate that the algorithm can distinguish between normal and cancerous images. These findings provide promising evidence for the potential utility of the automated approach in detecting invasive breast cancer.

Table 3: 10-fold cross-validation results on 100X magnified images of BreakHis 1.0

Figure 4: Confusion matrices of testing results on 40X magnified images of BreakHis 1.0

The confusion matrices shown in Fig. 5 allow a fold-wise evaluation of the classification model. The model was trained and validated on several folds, and the confusion matrices show how well it does on each. The model's accuracy and AUC (area under the curve) on a fold constitute its overall performance there. True positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are tallied for each fold and displayed in the confusion matrix; metrics such as precision, recall, and F1 score can be computed from this data to shed light on the model's efficacy. The model performs well with few false positives and negatives, and the accuracy and AUC values are adequate for most folds. The model's advantages and disadvantages nevertheless merit further investigation.

Figure 5: Confusion matrices of testing results on 100X magnified images of BreakHis 1.0

Table 4 displays the outcomes of a 10-fold cross-validation study conducted on images from the BreakHis 1.0 dataset magnified by a factor of 200. Rows represent the cross-validation folds; values for accuracy, precision, recall, F1 score, and area under the curve (AUC) are displayed in separate columns for benign and malignant images. The fold-wise accuracy is quite high, between 97.12% and 98.92%. Precision values lie between 0.96 and 0.99 for both benign and cancerous images, recall values between 0.97 and 0.99, and F1 scores between 0.97 and 0.99; the AUCs are between 0.97 and 0.99. The results show that the model is highly accurate and performs well when identifying benign and malignant breast histopathology images.

Table 4: 10-fold cross-validation results on 200X magnified images of BreakHis 1.0

Ten iterations of cross-validation were run on the 200X magnified subset of the BreakHis 1.0 dataset, and the findings are displayed in the confusion matrices of Fig. 6. Each fold's accuracy, AUC, and confusion matrix are shown independently. Entries of a confusion matrix that fall on the diagonal reflect correctly diagnosed cases (benign and malignant), whereas off-diagonal entries represent misclassified cases. The model has an adequate area under the curve (AUC). The number of misclassified samples varies somewhat between folds; some folds show a handful of false positives (e.g., three benign cases misdiagnosed as malignant) and false negatives (e.g., five malignant cases misdiagnosed as benign). Large areas under the curve indicate successful data classification, although with a big discrepancy between the number of benign and malignant cases in this dataset, class imbalance may be troublesome even if the AUC remains unchanged.

Figure 6: Confusion matrices of testing results on 200X magnified images of BreakHis 1.0

This research used XGBOOST to correctly label benign and malignant breast cancer images in a dataset comprising both types. Table 5 displays the outcomes of a 10-fold cross-validation test conducted on the 400X magnified images of the BreakHis 1.0 dataset, showing each fold's accuracy, precision, recall, F1 score, and AUC. For most folds, the proposed method achieved good accuracy (between 94.31% and 98.78%). High precision and recall values show that the method accurately separates benign from malignant samples, and the high AUC scores, ranging from 0.94 to 0.99, further prove the proposed technique's success. Table 5 shows that the proposed approach is a potentially useful strategy for classifying breast cancer images, which can be implemented in clinical settings for early detection and diagnosis.

Table 5: 10-fold cross-validation results on 400X magnified images of BreakHis 1.0

Fold-wise confusion matrices for the classification model are displayed in Fig. 7. The accuracy and AUC (area under the curve) values are presented for each fold, representing the model's performance on a different portion of the data. Each confusion matrix is a 2×2 table whose rows correspond to the true classes and whose columns correspond to the predicted classes; correctly classified samples appear on the diagonal (top left and bottom right), while misclassified samples appear off the diagonal (top right and bottom left). The data suggest that the model performs well, with accuracy scores between 0.94 and 0.99 and AUC scores between 0.95 and 0.99 across the ten folds. Results may differ based on the dataset used, so additional investigation into the model's efficacy may be warranted.

Figure 7: Confusion matrices of testing results on 400X magnified images of BreakHis 1.0

Tables 2–5 and Figs. 4–7 show that the proposed method can successfully identify breast cancer in histological images. Wavelet transformation and textural features of histopathology images were used to distinguish between benign and malignant breast cancer. High accuracy, precision, recall, and F1 scores in cross-validation tests show that the models can correctly label a sizable fraction of images, and the AUC values demonstrate that the models can distinguish between normal and cancerous visuals. These results provide preliminary support for the automated invasive breast cancer detection technique, implying that it may improve patient outcomes.

    4.2 Performance Evaluation of Proposed Method on Combined Dataset

Table 6 summarizes the results of applying the XGBOOST algorithm to classify breast cancer on the combined BreakHis 1.0 dataset. Ten-fold cross-validation results show that the model is quite accurate, with a mean of 97.84%. The model's precision, recall, and F1 score were used to evaluate how well it distinguished between benign and malignant tumors: for the benign class, the F1 score, precision, and recall all stayed in the 0.96 to 0.99 range, and for the malignant class they were all between 0.97 and 0.99. These results show that the model can distinguish between benign and malignant tumors in breast cancer images. The area under the curve (AUC), also used to evaluate the model, lies between 0.97 and 0.99, indicating excellent discriminatory power. The results indicate that the proposed method is a practical strategy for breast cancer categorization based on histological images.

Table 6: 10-fold cross-validation results on combined images of BreakHis 1.0

Fig. 8 displays the 10-fold cross-validation results for the breast cancer XGBOOST model's classification accuracy. Several dataset folds generate independent training and validation sets, and after each cycle the AUC and accuracy are logged. The confusion matrix provides the percentages of correct and incorrect results for each fold. The model has a respectable accuracy of 0.94 to 0.99 across the ten folds.

Furthermore, the AUC values are satisfactory, between 0.95 and 0.99. These findings indicate that the model may be able to distinguish between benign and aggressive breast tumors. The confusion matrices demonstrate that the model correctly classifies most cases as benign or malignant; false positives and false negatives occur, although only rarely. A false positive (FP) occurs when the model wrongly labels a benign instance as malignant, and a false negative (FN) occurs when the model incorrectly labels a malignant instance as benign. Clinical situations are inherently high-risk, making it imperative to account for this type of error. The proposed method appears applicable to classifying breast cancer, but more research on larger datasets is required to verify its clinical feasibility.

4.3 Comparative Analysis with State-of-the-Art Methods

Section 2 covers the many methods used to diagnose breast cancer, several of which use machine learning and deep learning. Different models can be compared on the same data to see how well they perform, so our research compared our approach with others that produce comparable results. We compare the suggested method's accuracy to that of state-of-the-art methods. Table 7 compares the accuracy of various techniques for detecting breast cancer at varying magnification levels, including Sliding Window + SVM [13], ResNet50 + KWELM [28], Xception + SVM [29], and DenseNet201 + GLCM + SVM [17]. All measurements, including accuracy and area under the curve, suggest that the proposed strategy is superior. The proposed method obtains an accuracy of 99.27% at 40X magnification, 98.95% at 100X, 98.92% at 200X, and 98.78% at 400X. Among the compared baselines, Xception + SVM performs consistently across magnification levels, while ResNet50 + KWELM performs moderately well from 40X to 100X but much worse from 100X to 400X. The proposed method's higher performance shows its potential as a robust instrument for detecting breast cancer.

Table 7: Comparative analysis with state-of-the-art methods

    5 Conclusion

Recognizing malignant images is a vital research topic in the medical field. This research employs wavelet transformation and texture features in the diagnosis of breast cancer. Our method separates an image into its YCBCR channels before extracting texture information from them. The proposed method is resilient against transformations (rotation, scaling, and distortion) applied to the tumor region, and we trained and tested it on a large collection of images to increase its efficacy. Classification was performed using the XGBOOST classifier, with feature extraction parameters optimized for maximum accuracy. The suggested method reached a maximum accuracy of 99.27% on the 40X dataset, 98.95% on the 100X dataset, 98.92% on the 200X dataset, 98.78% on the 400X dataset, and 98.80% on the combined dataset. Our findings show that the wavelet transformation can be used successfully for cancer image recognition. Some restrictions remain, however. Our dataset does not reflect the real world exactly because of the biases introduced by SMOTE, and our approach might struggle with more advanced forms of image variation, such as sophisticated geometric transformations or higher-level semantic changes. In conclusion, our research has aided in advancing wavelet-based methods for recognizing cancer images in medical imagery. To make our method more accurate and stable, we intend to continue investigating this topic by increasing the size of our dataset and evaluating additional classification models, with the goal of a system that can accurately and efficiently categorize multi-class cancer images in real-world settings.

Acknowledgement: None.

Funding Statement: This work was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R236), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: A. Akram, M. Hamid, J. Rashid; data collection: A. Akram, F. Hajjej, N. Sarwar; analysis and interpretation of results: A. Arshad, J. Rashid, M. Hamid; draft manuscript preparation: A. Akram, J. Rashid, F. Hajjej, N. Sarwar, M. Hamid. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Data will be provided on request. It is also publicly available.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
