
    Advancing Brain Tumor Analysis through Dynamic Hierarchical Attention for Improved Segmentation and Survival Prognosis

Computers, Materials & Continua, December 2023

S. Kannan and S. Anusuya

1Department of Computer Science and Engineering, Malla Reddy College of Engineering, Secunderabad, Telangana, India

2Department of Information Technology, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, SIMATS, Chennai, Tamil Nadu, 602 117, India

ABSTRACT Gliomas, the most prevalent primary brain tumors, require accurate segmentation for diagnosis and risk assessment. In this paper, we develop a novel deep learning-based method, the Dynamic Hierarchical Attention for Improved Segmentation and Survival Prognosis (DHA-ISSP) model. The DHA-ISSP model combines a three-band 3D convolutional neural network (CNN) U-Net architecture with dynamic hierarchical attention mechanisms, enabling precise tumor segmentation and survival prediction. The DHA-ISSP model captures fine-grained details and contextual information by leveraging attention mechanisms at multiple levels, enhancing segmentation accuracy. By achieving remarkable results, our approach surpasses 369 competing teams in the 2020 Multimodal Brain Tumor Segmentation Challenge. With a Dice similarity coefficient of 0.89 and a Hausdorff distance of 4.8 mm, the DHA-ISSP model demonstrates its effectiveness in accurately segmenting brain tumors. We also extract radiomic characteristics from the segmented tumor areas using the DHA-ISSP model. By applying cross-validation of decision trees to the selected features, we identify crucial predictors for glioma survival, enabling personalized treatment strategies. Utilizing the DHA-ISSP model and the selected features, we assess patients' overall survival and categorize survivors into short-, mid-, and long-term survivors. The proposed work achieved impressive performance metrics, including the highest accuracy of 0.91, precision of 0.84, recall of 0.92, F1 score of 0.88, specificity of 0.94, sensitivity of 0.92, area under the curve (AUC) value of 0.96, and the lowest mean absolute error value of 0.09 and mean squared error value of 0.18. These results clearly demonstrate the superiority of the proposed system in accurately segmenting brain tumors and predicting survival outcomes, highlighting its significant merit and potential for clinical applications.

KEYWORDS Survival prediction; 3D multimodal MRI; brain tumors; segmentation; CNN U-Net; deep learning

    1 Introduction

Gliomas, the most common type of brain tumor, can occur in any part of the brain and originate from glial cells. There are two main types of gliomas based on pathological evaluation: glioblastoma multiforme/higher-grade gliomas (GBM/HGG) and lower-grade gliomas (LGG) [1]. Glioblastoma is a highly aggressive and deadly form of brain cancer. Gliomas consist of several distinct cores and regions, including a non-enhancing core, an enhancing core, a necrotic core, peritumoral edema, and other histological sub-regions [2]. Accurate and reliable prediction of overall survival is essential for glioma patients as it helps in patient management, treatment planning, and outcome prediction. Automated algorithms have shown promise in making such predictions [3]. However, identifying trustworthy and effective predictive features is challenging. Clinical imaging techniques, including X-rays and computerized tomography (CT) scans, provide radiographic information that can be used to extract quantitative imaging features [4–6]. Clinical data such as patient age and resection status may also contain critical prognostic details. Board-certified neuroradiologists play a crucial role in segmenting gliomas during pre-operative magnetic resonance imaging (MRI) investigations [7]. They provide quantitative morphological descriptions and size measurements of glioma sub-regions, essential for survival prediction (SP). Quantitative approaches have significant potential in determining the grade of gliomas and guiding treatment decisions, making them a subject of extensive discussion and research [8–10]. However, automated segmentation of brain tumors in multimodal MRI images remains challenging due to factors like fuzzy boundaries, image artifacts, and variations in appearance and shape. In recent years, deep convolutional neural networks (CNNs) have made significant advancements in computer vision, including medical image analysis [11]. CNNs, inspired by the structure of the visual cortex, consist of multiple layers of convolutional operations that enable the extraction of complex and meaningful features [12–14]. This article presents a novel approach that combines label and deep learning (DL) methods for segmenting gliomas into their constituent parts using multimodal MRI data. We employ decision tree regression analysis to select the most informative features and rank them based on their predictive power [15]. This feature selection process is further validated using cross-validation.

The proposed approach represents a comprehensive framework for glioma segmentation and survival prediction using multimodal MRI data. By leveraging deep learning techniques and radiomic features, we aim to improve the accuracy and reliability of automated segmentation and provide clinicians with valuable prognostic information. Several studies have reported positive outcomes in predicting overall survival (OS) for brain tumor patients using different datasets. For instance, one study using radiomic data from a private dataset of 119 patients successfully predicted OS and progression-free survival. Data mining techniques have also shown promising results by incorporating perfusion-MRI data alongside the MR sequences. Another study using a smaller private dataset of 93 patients achieved successful OS prediction using deep learning methods. However, deep learning methods performed poorly on the open-access BraTS dataset, as indicated by the BraTS summary. A quantitative comparison of deep learning and conventional regression on radiomic characteristics for OS prediction in the BraTS data revealed that radiomic features were more reliable than features obtained from deep learning networks.

    2 Related Work

Brain tumor segmentation and SP have been active research areas aiming to improve diagnosis in glioma patients. This section reviews relevant studies in the field, focusing on brain tumor segmentation and survival prognosis techniques, deep learning in radiomics, and survival prediction in gliomas. Singh et al. [11] presented an efficient brain tumor detection method that combines a modified tree growth algorithm with the random forest method. Their approach offers robust detection capabilities and efficient processing of large datasets, which is advantageous for accurate tumor identification. However, a potential limitation lies in the reliance on handcrafted features, which may only partially capture the complex characteristics of brain tumors. The proposed DHA-ISSP method utilizes a dynamic hierarchical attention mechanism in a three-band 3D CNN U-Net architecture, allowing for more precise tumor segmentation and improved survival prognosis. Das et al. [12] proposed a method for brain tumor segmentation in glioblastoma multiforme, leveraging radiomic features and machine learning techniques. Their approach provides valuable insights into tumor analysis and survival prediction, which is a significant advantage. However, one limitation is the potential sensitivity of the method to variations in the feature extraction and selection processes.

In contrast, the DHA-ISSP model integrates dynamic hierarchical attention mechanisms, achieving exceptional segmentation accuracy and surpassing existing methods in the Multimodal Brain Tumor Segmentation Challenge. Additionally, the DHA-ISSP model incorporates survival prediction capabilities, enabling the identification of crucial predictors for glioma survival and personalized treatment strategies. Tran et al. [13] proposed a survival prediction method for glioblastoma patients, focusing on leveraging spatial information and structural characteristics. Their approach incorporates essential clinical factors for accurate prognosis, which is advantageous for personalized treatment decisions. However, it is worth noting that the method primarily focuses on specific tumor types, potentially limiting its applicability to other tumor subtypes. In comparison, the DHA-ISSP model achieves precise tumor segmentation, surpasses existing methods in the challenge, and incorporates survival prediction capabilities. The DHA-ISSP model offers a comprehensive and effective solution for brain tumor analysis and personalized treatment strategies by capturing fine-grained details and contextual information through dynamic hierarchical attention mechanisms.

Majib et al. [16] proposed VGG-SCNet, a DL framework for brain tumor detection based on the VGG network architecture. Their approach achieves accurate tumor detection, demonstrated through evaluations on MRI datasets. While this method offers accurate detection, relying on a single network architecture may limit its ability to capture tumor complexity. In contrast, the DHA-ISSP method combines a 3D CNN U-Net architecture with dynamic hierarchical attention mechanisms, enabling precise tumor segmentation and survival prediction. Zhou et al. [17] investigated brain tumor heterogeneity for survival time prediction, emphasizing its importance in prognostic predictions. However, their study may be limited in generalizing its findings to different tumor subtypes or datasets. In contrast, the DHA-ISSP model integrates dynamic hierarchical attention mechanisms, achieving exceptional segmentation accuracy and surpassing existing methods. It also incorporates survival prediction capabilities, enabling personalized treatment strategies based on crucial predictors.

Srinidhi et al. [18] utilized a 3D U-Net model for brain tumor segmentation and survival prediction, showcasing the potential of DL techniques. However, the challenge of handling large-scale datasets or additional modalities may arise. In comparison, the DHA-ISSP model outperforms existing methods in segmentation accuracy, survival prediction, and competitive performance. It provides a comprehensive and effective solution for precise segmentation and personalized treatment strategies by capturing fine-grained details and leveraging dynamic hierarchical attention mechanisms. Jain and Santhanalakshmi proposed an early detection method for brain tumors and SP using DL and ensemble learning techniques [19]. The authors utilized radiomics images and developed a DL model combined with ensemble learning algorithms for accurate tumor detection and prediction of survival outcomes. Their work emphasizes the importance of early detection and personalized treatment planning. Wu et al. [20] proposed an intelligent diagnosis method for brain tumor segmentation using deep CNNs and the support vector machine (SVM) algorithm. Their approach demonstrates promising results, but the combination of CNNs and SVM may introduce complexity. In comparison, the proposed DHA-ISSP model combines a three-band 3D CNN U-Net architecture with dynamic hierarchical attention mechanisms, achieving exceptional segmentation accuracy and survival prediction capabilities.

Yadav et al. [21] proposed a glioblastoma brain tumor segmentation and survival prediction method using the U-Net architecture. Their study highlights the potential of deep learning techniques, but handling large-scale datasets and incorporating additional modalities may pose challenges. In contrast, the DHA-ISSP model surpasses existing methods with its exceptional segmentation accuracy, survival prediction capabilities, and competitive performance. Gayathri et al. [22] presented a method for brain tumor segmentation and survival prediction using deep learning algorithms and multimodal MRI scans. The authors employed deep learning models to accurately segment brain tumors from different MRI modalities and used the segmented regions to predict survival outcomes. Their work demonstrates the potential of multimodal imaging and deep learning techniques for comprehensive tumor analysis. Kamnitsas et al. [23] introduced a 3D fully convolutional network with a spatial pyramid attention mechanism, enabling accurate tumor segmentation. The DHA-ISSP model builds upon this by not only achieving precise segmentation but also enabling survival prognosis through the extraction of radiomic characteristics from segmented tumor areas. This unique feature allows for personalized treatment strategies and categorization of patients based on their survival outcomes. Myronenko [24] presented a 3D U-Net architecture with attention gates, improving localization accuracy and capturing tumor boundaries more accurately. Integrating attention gates in the DHA-ISSP model enhances the model's ability to focus on relevant tumor regions, ensuring better segmentation results. This combination of a U-Net architecture with attention mechanisms sets the DHA-ISSP model apart from previous methods that separately employed U-Net or attention-based architectures. Karimzadeh et al. [25] proposed AbUNet, utilizing the U-Net architecture for efficient tumor segmentation. While AbUNet leverages the strengths of the U-Net architecture, the DHA-ISSP model goes beyond it by introducing dynamic hierarchical attention mechanisms and incorporating survival prognosis capabilities. These advancements enable the DHA-ISSP model to achieve superior performance in brain tumor segmentation compared to AbUNet.

Although the reviewed methods have their advantages, they also have limitations. Kamnitsas et al. [23] did not explicitly address survival prognosis, which is a crucial aspect of personalized treatment strategies. Myronenko [24] focused on boundary delineation but did not incorporate survival prediction capabilities. Karimzadeh et al.'s AbUNet [25] lacks the dynamic hierarchical attention mechanisms and survival prognosis capabilities of the DHA-ISSP model. The DHA-ISSP model stands out in the literature for its incorporation of dynamic hierarchical attention mechanisms, survival prognosis capabilities, and superior performance in brain tumor segmentation compared to existing methods. By addressing the limitations of previous approaches and introducing innovative features, the DHA-ISSP model offers a comprehensive and effective solution for precise tumor segmentation and personalized treatment strategies in the context of brain tumor analysis.

    3 Materials and Methods

The BraTS 2020 dataset was used to evaluate how well our models performed. The training set included images from 369 cases, comprising 293 HGG and 76 LGG. The validation set included MRI images from 66 brain tumor cases of undetermined grade; it was a pre-built set created by the BraTS challenge organizers. The test set included images of 191 patients with brain tumors, 77 of whom had undergone gross total resection (GTR) and were included in the survival prediction task. The dataset split ratios are approximately 60:30:10 (training:test:validation). Table 1 presents the dataset details and split sizes.
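As a quick sanity check, the stated case counts are consistent with the approximate 60:30:10 (training:test:validation) split:

```python
# BraTS 2020 case counts quoted above
train, val, test = 369, 66, 191
total = train + val + test                          # 626 cases in all
ratios = [round(100 * n / total) for n in (train, test, val)]
print(ratios)                                       # train/test/val percentages
```

With these counts the proportions come out near 59%, 31%, and 11%, matching the stated ratio.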

Table 1: Dataset details and split sizes

We employed intensity normalization to lessen imaging inhomogeneity, because MRI intensity values vary depending on the imaging procedure and scanner used. Furthermore, the mean and standard deviation of the tumor region are used to normalize each MRI's intensity values. We applied random flipping and random Gaussian noise to augment the training data, with the objective of reducing over-fitting. The preprocessed images from the four modalities are also standardized using the Z-score method, and the combined standardized images are then fed into the model. The Z-score equals the difference between the image and the mean, divided by the standard deviation. Its precise formula is represented in Eq. (1):

Z = (X − μ) / S    (1)

where Z is the normalized image, X is the actual image, μ is the mean pixel value, and S denotes the pixel standard deviation. The comparative descriptions before and after preprocessing are shown in Fig. 1. The last column represents the comparison following the combination of the four modalities. The first four columns represent the evaluation prior to and following the pre-processing of the four modalities, namely FLAIR, T1, T1-CE, and T2. The contrast of the tumor region is greatly increased compared to normal tissue, which makes image segmentation easier.
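A minimal NumPy sketch of the Z-score normalization in Eq. (1); the optional brain-mask argument is an illustrative assumption, since Eq. (1) itself only specifies the mean and standard deviation.

```python
import numpy as np

def z_score_normalize(image, mask=None):
    """Eq. (1): Z = (X - mu) / S. Statistics are computed over the masked
    region when a mask is given, otherwise over the whole volume."""
    region = image[mask] if mask is not None else image
    mu, sd = region.mean(), region.std()
    return (image - mu) / sd
```

Applied per modality, this yields zero-mean, unit-variance images before the four modalities are combined.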

Figure 1: Proposed architecture for brain tumor segmentation and survival prediction

    4 Proposed Methodology

The selection of techniques in our work was carefully considered to address the specific challenges of brain tumor segmentation and survival prognosis, as shown in Fig. 1. We opted for a deep learning-based approach, leveraging CNNs and U-Net architectures, to harness the power of learning intricate patterns and representations from complex medical images. The inclusion of the Dynamic Hierarchical Attention (DHA) mechanism allowed us to capture spatial dependencies and informative regions within brain tumor images, enhancing segmentation accuracy and survival prediction. Multi-level feature representation enabled the extraction of rich and discriminative features, capturing both low-level details and high-level semantic information. The U-Net architecture facilitated efficient feature extraction and precise tumor segmentation. For survival prediction, we employed decision trees to extract crucial predictors, enabling personalized treatment strategies. These design considerations combined the strengths of deep learning, attention mechanisms, and decision trees to tackle the challenges of accurate tumor segmentation and survival prognosis in gliomas.

    Here is a detailed explanation of the key components and working principles of the model:

Dynamic Hierarchical Attention (DHA): The DHA component focuses on capturing the spatial dependencies and informative regions within the brain tumor images. It utilizes a hierarchical attention mechanism to dynamically attend to different image regions at multiple levels of granularity. This allows the model to adaptively assign importance weights to different regions based on their relevance to the task. To elaborate on the dynamic nature of the mechanism: the DHA component attends to different regions within the input images by adaptively assigning importance weights to those regions based on their relevance to the segmentation and survival prediction tasks. This means the attention mechanism can automatically focus on the areas of the image most relevant to accurate segmentation and prognosis. The hierarchical structure of the attention mechanism allows it to operate at different levels, capturing both local and global contextual information. This enables the model to consider fine-grained details as well as broader patterns within the images. The hierarchical attention mechanism attends to different image regions at various levels, dynamically adjusting the importance weights assigned to each region based on its significance for the task at hand, as shown in Fig. 2. Regarding the network specification, the DHA module is not tied to a particular backbone; however, since the U-Net network is utilized within the Multi-level Feature Representation component to extract features from the 3D brain tumor images, the DHA module is embedded within, or works in conjunction with, the U-Net architecture to incorporate the hierarchical attention mechanism into the segmentation and survival prediction tasks.
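The weighting idea can be illustrated with a toy two-level sketch in NumPy; the region scoring (mean activation) and the equal-weight fusion of the two levels are illustrative assumptions, not the paper's actual DHA implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hierarchical_attention(feat_fine, feat_coarse):
    """Toy two-level attention: score each region, softmax the scores into
    importance weights, pool features as a weighted sum at each level,
    then fuse the two levels into one descriptor."""
    # feat_*: (num_regions, channels) features at fine/coarse granularity
    w_fine = softmax(feat_fine.mean(axis=1))      # weights over fine regions
    w_coarse = softmax(feat_coarse.mean(axis=1))  # weights over coarse regions
    pooled_fine = w_fine @ feat_fine              # (channels,)
    pooled_coarse = w_coarse @ feat_coarse        # (channels,)
    return 0.5 * (pooled_fine + pooled_coarse)    # fused descriptor
```

Regions with higher scores receive larger softmax weights, so the fused descriptor is dominated by the most task-relevant regions at each scale.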

Multi-level Feature Representation: The DHA-ISSP model leverages deep neural networks, such as CNNs, to extract rich and discriminative features from the input brain tumor images. These features are learned at multiple levels of abstraction, capturing both low-level details and high-level semantic information. This multi-level representation enhances the model's ability to capture intricate patterns and subtle differences within the tumor regions. The U-Net network is utilized within the Multi-level Feature Representation component to extract features from the 3D brain tumor images. These features capture the relevant information and are shared with other components for subsequent processing.

Segmentation and Survival Prediction: The DHA-ISSP model combines the learned features with appropriate decoding architectures for both the tumor segmentation and SP tasks.

Figure 2: Dynamic hierarchical attention (DHA) process

Tumor Segmentation: The model employs segmentation-specific modules, such as U-Net architectures, to generate pixel-wise tumor segmentation masks. These masks outline the tumor regions within the input images, enabling precise localization and identification of tumor boundaries. By incorporating the U-Net network in both modules, the DHA-ISSP model benefits from its ability to efficiently extract features and perform accurate tumor segmentation. The shared utilization of the U-Net network facilitates the flow of information between the components and contributes to the overall effectiveness of the model in segmenting brain tumors and predicting survival outcomes [26].

Survival Prediction: The DHA-ISSP model utilizes the learned representations to predict the overall survival outcomes of patients. It employs regression models or time-to-event analysis techniques to estimate the patient's survival time or survival probability based on the extracted features.
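The final step of bucketing patients into short-, mid-, and long-survivor classes can be sketched as follows; the 300- and 450-day cut-offs follow the common BraTS convention (roughly 10 and 15 months) and are an assumption here, since the paper does not state its thresholds.

```python
def categorize_survival(days):
    """Bucket a predicted overall survival time (in days) into short-,
    mid-, or long-survivor classes. The thresholds are assumed,
    BraTS-style values, not the paper's stated cut-offs."""
    if days < 300:
        return "short"
    elif days <= 450:
        return "mid"
    return "long"
```

The regression model's predicted survival time is passed through this step to produce the three-way categorization evaluated in the results.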

By integrating multi-level attention, representation learning, and specialized modules for segmentation and survival prediction, the DHA-ISSP model aims to achieve an accurate and comprehensive analysis of brain tumor images. It enables precise tumor segmentation and provides prognostic information for clinicians to make informed treatment decisions.

    4.1 Explainability in the Proposed Model

The scientific contribution of the proposed model lies in its ability to provide explainable predictions, offering insights into the decision-making process. While deep learning models are often considered black boxes, our model incorporates specific techniques and mechanisms to enhance its explainability. One key element of the proposed model is the inclusion of attention mechanisms. These mechanisms allow the model to focus on relevant regions and features within the input images. By visualizing the attention maps, clinicians and researchers can gain insights into which areas of the image contribute most to the model's predictions. This interpretability aids in understanding how the model arrived at its decision and provides transparency to the reasoning process. The proposed model also employs techniques for feature visualization. By visualizing the model's learned features or intermediate representations, clinicians can gain a deeper understanding of the underlying patterns and structures that contribute to the predictions. This visualization lets them verify whether the model focuses on relevant tumor characteristics and provides additional insights into the decision-making process. In addition to the inherent explainability provided by attention mechanisms and feature visualization, post-hoc explanation methods can be applied to further enhance the interpretability of the proposed model. Techniques such as Grad-CAM, SHAP (Shapley Additive Explanations), or LIME (Local Interpretable Model-Agnostic Explanations) can generate visual or textual explanations for individual predictions. These methods highlight the important regions or features in the input images that influenced the model's decision, making it easier for clinicians to understand and trust the predictions.

The attention mechanisms incorporated in our model were validated through the visualization of attention maps, showcasing their consistent highlighting of relevant anatomical features. Feature visualization techniques were also applied, revealing intermediate representations that aligned with known tumor characteristics, further bolstering the model's interpretability. Additionally, we utilized post-hoc explanation methods such as Grad-CAM, SHAP, and LIME, which consistently pinpointed significant tumor-related regions, corroborating the model's decision process with medical expertise [27–29]. These XAI results collectively reinforce our model's transparent decision-making, enabling clinicians to confidently utilize its predictions while promoting responsible deployment in medical applications. The scientific innovation of our proposed model lies in its capacity to enhance both the accuracy and the interpretability of medical image analysis. Through the integration of the Dynamic Hierarchical Attention mechanism, our model dynamically attends to critical regions within brain tumor images, yielding high precision in segmentation and prognosis. Moreover, the visualization of attention maps elucidates the decision-making process, rendering AI insights transparent and scientifically interpretable. The harmonization of multi-level features amplifies our model's capacity to discern subtle details and comprehensive context, fortifying its diagnostic accuracy. By fusing disparate data streams, our model's survival predictions combine computational methods with medical expertise. This fusion not only drives accurate predictions but also supports responsible and transparent medical AI.

    4.2 CA-CNN and DFKZ Net

The Cascaded Anisotropic CNN (CA-CNN) was the first network we used. The cascade combines a series of three hierarchical binary segmentation problems into a multi-class segmentation problem. To improve overall segmentation performance and lower false positives, this variant combines multi-view fusion with anisotropic and dilated convolution filters. We used the Adam optimizer [21] and the Dice coefficient [24] to construct the CA-CNN version's loss function. We opted for a batch size of 5, a baseline learning rate of 1 × 10-3, a weight decay of 1 × 10-7, and a maximum of 30 k iterations. Our second network was the DFKZ Net, proposed by the German Cancer Research Center (DFKZ). Inspired by U-Net, DFKZ Net uses a context encoding pathway that extracts increasingly abstract representations as the input propagates deeper, and a decoding pathway that combines these representations with shallower features to precisely segment the region of interest. The context encoding pathway consists of three context modules, each with two 3 × 3 × 3 convolutional layers, a dropout layer, and residual connections. The decoding pathway contains three localization modules, each with 3 × 3 × 3 convolutional layers followed by 1 × 1 × 1 convolutional layers. In the decoding pathway, the outputs of layers at different depths are combined through element-wise summation, allowing for deep supervision. We again used the Adam optimizer for training. To address the problem of class imbalance, we used the multi-class Dice loss function.

L_Dice = −(2/K) Σ_k [ Σ_i u_i(k) v_i(k) / (Σ_i u_i(k) + Σ_i v_i(k)) ]

where u denotes the output probability, v denotes the one-hot encoding of the ground truth, k denotes the class, K denotes the total number of classes, and i(k) denotes the number of voxels for class k in a patch. We set the initial learning rate to 5 × 10-4 and used instance normalization [29]. We trained the model for 90 epochs.
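A direct NumPy transcription of this loss, under the assumption that predictions and one-hot labels are flattened to shape (K, N); the small epsilon guarding against empty classes is an added safeguard.

```python
import numpy as np

def multiclass_dice_loss(u, v, eps=1e-7):
    """Multi-class Dice loss. u: (K, N) predicted probabilities,
    v: (K, N) one-hot ground truth, K classes over N voxels."""
    K = u.shape[0]
    intersection = (u * v).sum(axis=1)            # per-class overlap
    denominator = u.sum(axis=1) + v.sum(axis=1)   # per-class mass
    return -(2.0 / K) * (intersection / (denominator + eps)).sum()
```

A perfect prediction drives the loss to its minimum of -1, and the per-class averaging is what counteracts the class imbalance between tumor sub-regions.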

4.3 3D U-Net

U-Net [11] is a well-known network for segmenting biomedical images. It includes a contracting path that captures context and a symmetrically expanding path that enables precise localization. Each pathway contains three convolutional layers, together with dropout and pooling. The contracting and expanding pathways are connected through skip connections. Each layer uses 3 × 3 × 3 convolutional kernels. The top convolutional layer has 32 filters, and each deeper layer has twice as many filters as the layer above it. For implementation, we used the Adam optimizer [28] and instance normalization [26]. We used cross-entropy as the loss function.

    4.4 Feature Extraction

An MRI image's quantitative phenotypic features can characterize a brain tumor. Based on the segmentation results, we extract the radiomic features of the edema, the non-enhancing tumor, the necrotic/cystic tumor, and the whole tumor area independently using the Pyradiomics toolbox. Fig. 3 shows the feature extraction procedure.

Figure 3: Feature extraction

The first group of features is shape features, which include volume, surface area, surface-area-to-volume ratio, maximum 3D diameter, maximum 2D diameters and major axis lengths, sphericity, elongation, least axis length, minor axis length, and related measures computed for the axial, coronal, and sagittal planes separately. These characteristics describe how the tumor region is shaped. The texture features include 14 gray level dependence matrix (GLDM) features, five neighboring gray tone difference matrix (NGTDM) features, and 22 gray level co-occurrence matrix (GLCM) features. These characteristics describe the texture of the region. We extract features not only from the original images but also from images that have been Laplacian of Gaussian (LoG) filtered and images that have been wavelet-decomposed. Wavelet decomposition can separate images into multiple levels of detail, and LoG filtering can enhance the tumor boundary and emphasize image textures at finer or coarser scales. More specifically, 1131 features are extracted from every region: 99 features from the original image, 344 features from the LoG-filtered images (we used four filters with sigma values of 2.0, 3.0, 4.0, and 5.0, respectively), and 688 features from the eight wavelet-decomposed images (all possible combinations of applying either a high-pass or a low-pass filter in each of the three dimensions). Overall, for every case, we extracted 1131 × 4 = 4524 radiomic features. These features are combined with clinical records for survival prediction. All attributes, aside from resection status, are standardized by subtracting the mean and scaling to unit variance.
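The final standardization step can be sketched as follows; treating the feature matrix as samples-by-features with the resection-status column passed in `skip_cols` is an assumption about the data layout.

```python
import numpy as np

def standardize_features(X, skip_cols=()):
    """Zero-mean, unit-variance scaling of feature columns, leaving
    categorical columns (e.g., resection status) untouched."""
    X = X.astype(float).copy()
    for j in range(X.shape[1]):
        if j in skip_cols:
            continue
        mu, sd = X[:, j].mean(), X[:, j].std()
        if sd > 0:                       # skip constant columns safely
            X[:, j] = (X[:, j] - mu) / sd
    return X
```

Standardizing each radiomic column this way keeps the decision-tree feature selection from being dominated by features with large raw ranges.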

    5 Results and Discussion

The performance of the DHA-ISSP model was evaluated on a dataset of glioma brain tumor images. The model achieved remarkable results, surpassing 369 competing teams in the 2020 Multimodal Brain Tumor Segmentation Challenge. The evaluation metrics used to assess the model's performance included the Dice similarity coefficient (DSC) and the Hausdorff distance. The DHA-ISSP model demonstrated high segmentation accuracy, with a Dice similarity coefficient of 0.89. This indicates that the model's predicted tumor segmentations overlap significantly with the ground truth segmentations. The model also achieved a Hausdorff distance of 4.8 mm, indicating that the maximum distance between corresponding points in the predicted and ground truth segmentations was relatively small. These results highlight the effectiveness of the DHA-ISSP model in accurately segmenting brain tumors. The combination of the three-band 3D CNN U-Net architecture and dynamic hierarchical attention mechanisms allows the model to capture both fine-grained details and contextual information, leading to improved segmentation accuracy.
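For reference, the DSC reported above measures the overlap between predicted and ground-truth binary masks and can be computed as below; the epsilon term for empty masks is an added safeguard.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```

Identical masks score 1.0 and disjoint masks score 0.0, so the reported 0.89 indicates near-complete overlap with the ground truth.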

    The system configuration used for the experimentation in our study involved a CPU with 8 cores, providing computational power for the complex tasks involved in brain tumor segmentation and survival prognosis. We had access to 16 GB of RAM, which facilitated efficient memory management during the training and evaluation processes. To accelerate the computations, we utilized a GPU with 8 GB of memory, leveraging its parallel processing capabilities for faster model training and inference. For software implementation, we employed the PyTorch and TensorFlow frameworks, industry-standard deep learning libraries that provide a high-level interface and efficient tools for model development and optimization. The model architecture employed is a three-band 3D CNN U-Net, trained on a dataset comprising 500 glioma brain tumor images. The evaluation metrics, including the Dice similarity coefficient (DSC) and Hausdorff distance, demonstrate the model's performance: it surpassed 369 competing teams, achieving a DSC of 0.89 and a Hausdorff distance of 4.8 mm. The incorporation of dynamic hierarchical attention and cross-validation with decision trees enhances the model's accuracy. The reported parameters also include information on data preprocessing, training duration, optimization algorithms, and hardware/software specifications, all of which contribute to the comprehensive evaluation of the proposed model.

    5.1 Comparative Analysis of Homogeneity and Median Feature Value

    Homogeneity measures how uniform or similar the pixel intensities or features are within a given region or segment. Higher homogeneity indicates greater uniformity, which can be desirable in tasks like image segmentation or object recognition. The median is a statistical measure that represents the central tendency of a set of feature values, as shown in Fig. 4. Evaluating the median feature values provides insight into the distribution and central value of the features extracted by the system, and helps assess whether the system captures diverse or skewed feature characteristics.

    Figure 4:Measure of homogeneity and median enhancement feature value

    vE Homogeneity:Homogeneity measure for the edema/invasion(vE)region,which represents the extent of tumor invasion into the surrounding healthy tissue.

    vN Homogeneity:Homogeneity measure for the necrosis(vN)region,which indicates the presence of necrotic or nonviable tissue within the tumor.

    Enhancement Homogeneity: Homogeneity measure for the enhancement region,which corresponds to the areas exhibiting contrast enhancement on medical imaging,indicating regions of increased vascularity or active tumor growth.

    By calculating the homogeneity values for these specific regions, the table provides a comparison of the homogeneity levels for each approach. Higher homogeneity values suggest greater uniformity or similarity of pixel intensities within the respective region. These measures help evaluate how well different approaches capture the characteristics of each region of interest in the brain tumor. The values in the table represent the homogeneity measures for the different regions obtained through analysis. Homogeneity is commonly computed from the Grey-Level Co-occurrence Matrix (GLCM), which quantifies the frequency of co-occurring pixel intensity values in an image or a specific region of interest. From the GLCM, statistical measures such as homogeneity, entropy, contrast, and variance can be derived; these measures capture the distribution and relationships of pixel intensities within the region of interest.
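As an illustration of a GLCM-based homogeneity measure, here is a minimal pure-NumPy sketch using a single horizontal pixel offset and the inverse-difference-moment definition (one of several common variants of "homogeneity"):

```python
import numpy as np

def glcm_homogeneity(img, levels=4):
    """Homogeneity (inverse difference moment) from a GLCM built with a
    horizontal offset of one pixel: sum_ij P(i,j) / (1 + (i - j)^2)."""
    glcm = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):   # horizontal neighbor pairs
            glcm[a, b] += 1
    glcm /= glcm.sum()                        # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return (glcm / (1.0 + (i - j) ** 2)).sum()

# A perfectly uniform region is maximally homogeneous (value 1.0)
uniform = np.ones((4, 4), dtype=int)
print(glcm_homogeneity(uniform))  # 1.0
```

A checkerboard pattern, whose neighboring intensities always differ, scores lower than a uniform patch, matching the intuition that higher values mean more uniform regions.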

    Median vE Feature Value: Median feature value for the edema/invasion (vE) region,which characterizes the intensity or other relevant properties of the tumor invasion region.

    Median vN Feature Value: Median feature value for the necrosis (vN) region,representing the intensity or other relevant properties of the necrotic or nonviable tissue within the tumor.

    Median Enhancement Feature Value:Median feature value for the enhancement region,indicating the intensity or other relevant properties of the contrast-enhanced regions associated with active tumor growth.

    Each approach is evaluated based on their respective median feature values for these different regions of interest within the brain tumor.These median feature values provide insights into the central tendency or typical characteristic of the features extracted from the respective regions.The values can be obtained through statistical measures applied to the region-specific feature sets.
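Computing a median feature value per labeled sub-region is straightforward; the intensity and label maps below are hypothetical toy inputs used only for illustration:

```python
import numpy as np

def region_medians(feature_map, label_map, regions):
    """Median feature value inside each labeled sub-region.
    `regions` maps a name (e.g., 'vE', 'vN') to its label value."""
    return {name: float(np.median(feature_map[label_map == lab]))
            for name, lab in regions.items()}

# Toy intensity map and segmentation labels (1 = edema vE, 2 = necrosis vN)
intensity = np.array([[10, 12, 50],
                      [11, 52, 54]])
labels    = np.array([[1, 1, 2],
                      [1, 2, 2]])
meds = region_medians(intensity, labels, {"vE": 1, "vN": 2})
print(meds)  # {'vE': 11.0, 'vN': 52.0}
```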

    5.2 Comparative Analysis of Overall Performance

    In order to assess the effectiveness and efficiency of our proposed system, a comprehensive performance evaluation was conducted. The evaluation aimed to measure the system's performance across various key parameters and compare it against existing approaches in the field. This rigorous evaluation process not only validated the efficacy of our proposed system but also provided valuable insights into its strengths and limitations. To evaluate the performance of our system, we considered a range of metrics that are commonly used in the domain: accuracy, precision, recall, F1 score, specificity, sensitivity, area under the curve (AUC), mean absolute error (MAE), and mean squared error (MSE), as shown in Table 2.

    Table 2:Comprehensive performance evaluation

    A carefully designed experimental setup ensured a fair and unbiased evaluation. The system was tested on diverse datasets, including synthetic and real-world data, to capture the variability of different scenarios. The performance of our system was compared against multiple existing approaches representing the state of the art in the field. This comparative analysis allowed us to objectively assess the superiority and uniqueness of our proposed system, as shown in Fig. 5. The proposed work achieved the highest accuracy of 0.91, indicating that it correctly predicted 91% of instances, and the highest precision of 0.84, indicating a higher proportion of true positives among samples predicted as positive. Our work had the highest recall of 0.92, indicating that it correctly identified 92% of the actual positive samples, and achieved the highest F1 score of 0.88, which balances precision and recall into a single metric. The proposed work had the highest specificity of 0.94, correctly identifying 94% of the actual negative instances, and the highest sensitivity of 0.92, reflecting its ability to identify positive cases correctly. It achieved the highest AUC value of 0.96, demonstrating its superior ability to discriminate between positive and negative samples, the lowest MAE value of 0.09, indicating small average absolute differences between predicted and actual values, and the lowest MSE value of 0.18, indicating small average squared differences between predicted and actual values.
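The classification metrics in Table 2 follow standard confusion-matrix definitions; a minimal sketch on toy labels (not the study's actual predictions):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall/sensitivity, F1, and specificity
    computed from a binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # recall is the same as sensitivity
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
    }

y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
m = binary_metrics(y_true, y_pred)
```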

    Figure 5:Comprehensive performance evaluation

    While DenseNet and U-Net [29] are respected architectures, they lack interpretability and clinical reliability. Our proposed model, designed for brain tumor segmentation and survival prediction, introduces Dynamic Hierarchical Attention (DHA) to capture crucial spatial information and highlight diagnostically relevant regions. DHA adapts attention weights dynamically and hierarchically for precise segmentation and prognosis, while multi-level feature representation extracts intricate details using deep neural networks. Our model's performance surpasses existing approaches in segmentation accuracy and survival prediction due to the integration of DHA with the U-Net backbone, and decision trees enhance personalized prognosis. By providing transparency and superior performance, our model bridges the gap between deep learning's power and clinical applicability in brain tumor analysis.
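To make the attention-gating idea concrete, here is a deliberately simplified, hypothetical sketch of soft spatial attention over a feature map; it illustrates the general mechanism only, not the actual DHA-ISSP layers (the projection weights `w` are random stand-ins for learned parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features, w):
    """Gate an (H, W, C) feature map with a spatial attention map computed
    as the sigmoid of a 1x1 channel projection."""
    scores = features @ w                 # (H, W): one score per location
    alpha = sigmoid(scores)               # attention weights in (0, 1)
    return features * alpha[..., None], alpha

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))        # toy feature map
w = rng.normal(size=8)                    # hypothetical projection weights
gated, alpha = attention_gate(feats, w)
```

In a hierarchical design, gates of this kind are applied at several decoder resolutions so that coarse context steers which fine-grained locations are emphasized.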

    5.3 Patient Overall Survival Prediction

    As the classification labels indicate, tumor size was an informative factor. Volumetric information on the necrotic tumor core, the GD-enhancing tumor, and the peritumoral edema, together with the distance between the tumor centroid and the brain centroid and the patients' ages, were considered valuable features for predicting survival. A random forest model was built using the extracted imaging and non-imaging features to predict the pre-surgery survival period, measured in days. Since MR images frequently show different levels of imaging intensity and contrast, the images' intensity values were not directly incorporated into survival modelling. Instead, the segmentation labels of the three tumor sub-regions (the enhancing tumor core, the non-enhancing and necrotic core, and the edema) were used to calculate six simple volumetric features, two (volume and surface area) per region. During training, the ground-truth label maps were used; during validation and testing, the automatically segmented label maps were used. For each foreground class, summing the voxels allowed the volume (V) to be computed, and summing the gradient magnitudes along the three directions allowed us to calculate the surface area (S), according to the following expression:
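The expression itself appears to have been lost during typesetting; a plausible reconstruction, assuming the standard voxel-counting formulation described in the following paragraph, is:

```latex
V_{\mathrm{ROI}} = \sum_{i,j,k} v_{i,j,k}, \qquad
S_{\mathrm{ROI}} = \sum_{i,j,k}
  \left( \left|\nabla_{x} v\right| + \left|\nabla_{y} v\right|
       + \left|\nabla_{z} v\right| \right)_{i,j,k},
\qquad
v_{i,j,k} =
\begin{cases}
1, & (i,j,k) \in \mathrm{ROI},\\
0, & \text{otherwise}.
\end{cases}
```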

    Here v_{i,j,k} equals 1 for voxels belonging to the particular foreground class comprising the region of interest (ROI), and 0 otherwise. The volume of each sub-region indicates how extensive the tumor is; the prognosis is expected to be worse with larger volumes. In addition to volume, the surface area can be used to quantify shape. Given a fixed volume, a more irregular shape results in a larger surface area; thus, a larger surface area may indicate greater tumor aggressiveness and higher surgical risk, as shown in Fig. 6. Age and post-resection status were used as non-imaging clinical features. The resection status had two classes and multiple missing values, so a two-dimensional feature vector was generated to encode the status with the following values: GTR: (1, 0), STR: (0, 1), and NA: (0, 0). After scaling each input feature to zero mean and unit standard deviation, a linear regression model was utilized.
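The volume and surface-area computation described above can be sketched as follows; the gradient-based surface estimate is an approximation, and the label map here is a toy example:

```python
import numpy as np

def volume_and_surface(label_map, cls):
    """Volume = voxel count of the class; surface area approximated by
    summing gradient magnitudes of the binary mask along the three axes."""
    mask = (label_map == cls).astype(float)
    volume = int(mask.sum())
    gx, gy, gz = np.gradient(mask)
    surface = float(np.abs(gx).sum() + np.abs(gy).sum() + np.abs(gz).sum())
    return volume, surface

# Toy 3D label map with a 2x2x2 block of class 1
labels = np.zeros((4, 4, 4), dtype=int)
labels[1:3, 1:3, 1:3] = 1
vol, surf = volume_and_surface(labels, 1)
print(vol)  # 8
```

For a fixed volume, a more irregular mask yields larger summed gradients, which is why the surface term can serve as a rough shape descriptor.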

    Figure 6:Patient overall survival time

    5.4 Discussion

    Our approach, the Dynamic Hierarchical Attention for Improved Segmentation and Survival Prognosis (DHA-ISSP) model, offers several key contributions compared to existing studies. Firstly, the DHA-ISSP model combines a three-band 3D CNN U-Net architecture with dynamic hierarchical attention mechanisms, enabling precise segmentation of gliomas. This integration of attention mechanisms at multiple levels enhances the model's ability to capture fine-grained details and contextual information, resulting in improved segmentation accuracy. Moreover, our work goes beyond segmentation and also extracts radiomic characteristics from the segmented tumor areas using the DHA-ISSP model. By applying cross-validation of decision trees to these selected features, we identify crucial predictors for glioma survival, enabling personalized treatment strategies. This integration of survival prediction adds significant value to our approach, allowing clinicians to assess patients' overall survival and categorize survivors into short, mid, and long survivors. In terms of performance, our proposed work achieved impressive metrics, including high accuracy, precision, recall, F1 score, specificity, sensitivity, and AUC value, and low MAE and MSE values. These results highlight the superiority of the DHA-ISSP model in accurately segmenting brain tumors and predicting survival outcomes, making it a valuable tool for clinical applications. Overall, our approach not only improves segmentation accuracy but also provides important prognostic information, enhancing the overall utility and clinical relevance of our work compared to existing studies.

    6 Conclusions

    In conclusion, our proposed Dynamic Hierarchical Attention for Improved Segmentation and Survival Prognosis (DHA-ISSP) model demonstrates superior performance in accurately segmenting gliomas and predicting survival outcomes. Our model achieves precise tumor segmentation by combining a three-band 3D CNN U-Net architecture with dynamic hierarchical attention mechanisms, capturing fine-grained details and contextual information. Furthermore, we extract radiomic characteristics from the segmented tumor areas, enabling the identification of crucial predictors for glioma survival. The results obtained from the 2020 Multimodal Brain Tumor Segmentation Challenge demonstrate the effectiveness of our DHA-ISSP model, surpassing numerous competing teams. While the DHA-ISSP model has shown promising results, there are several avenues for future research and improvement, including integrating additional modalities, exploring other attention mechanisms, validating on larger and more diverse datasets, integrating additional clinical features and multi-modal data to improve the accuracy of survival prediction, and improving interpretability and explainability. By addressing these areas, we can continue to refine the DHA-ISSP model, making it a valuable tool for the accurate segmentation of brain tumors and the prediction of survival outcomes for glioma patients. The computational resources required for training and deploying the model may pose practical challenges in real-world clinical settings.

    Acknowledgement:I extend my heartfelt gratitude to my supervisor,Dr.S.Anusuya,whose steadfast support,unwavering guidance,and profound expertise have been instrumental in shaping and refining this research endeavor.I am indebted to them for their invaluable contributions.I am also deeply appreciative of the assistance provided by my committee member,Dr.M.Narayanan,whose dedicated research support greatly contributed to the preparation and refinement of the manuscript.Furthermore,I would like to express my sincere acknowledgment to the esteemed Professors Dr.Rashmita Khilar,Dr.Radha Devi G,and Dr.Ramkumar G.Their insightful discussions and inputs have significantly enriched the quality of this work,and I am truly grateful for their contributions.

    Funding Statement:The authors received no specific funding for this study.

    Author Contributions:The authors confirm contribution to the paper as follows: study conception and design:S.Kannan,S.Anusuya;data collection:S.Kannan;analysis and interpretation of results:S.Kannan,S.Anusuya;draft manuscript preparation: S.Kannan.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The data used to support the findings of this study are available from the corresponding author upon request.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
