
    Clinical Knowledge-Based Hybrid Swin Transformer for Brain Tumor Segmentation

    Computers, Materials & Continua, 2023, Issue 9

    Xiaoliang Lei, Xiaosheng Yu, Hao Wu, Chengdong Wu and Jingsi Zhang

    1 College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China

    2 Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110819, China

    3 Faculty of Engineering, Macquarie University, Sydney, NSW, 2109, Australia

    ABSTRACT Accurate tumor segmentation from brain tissues in Magnetic Resonance Imaging (MRI) is crucial in the pre-surgical planning of brain tumor malignancy. The heterogeneous intensity and fuzzy boundaries of MRI images make brain tumor segmentation challenging. Furthermore, recent studies have yet to fully employ the considerable and complementary information among MRI sequences, which offers critical a priori knowledge. This paper proposes a clinical knowledge-based hybrid Swin Transformer multimodal brain tumor segmentation algorithm based on how experts identify malignancies from MRI images. In the encoder phase, a dual backbone network is constructed, with a Swin Transformer backbone to capture long-range dependencies from 3D MR images and a Convolutional Neural Network (CNN)-based backbone to represent local features. Instead of directly concatenating all the MRI sequences, the proposed method re-organizes them into two groups based on MRI principles and characteristics: T1 and T1ce, T2 and Flair. These grouped images are received by the dual-stem Swin Transformer-based encoder branch, and the multimodal sequence-interacted cross-attention module (MScAM) captures the interactive information between the two sets of linked modalities at each stage. In the CNN-based encoder branch, a triple down-sampling module (TDsM) is proposed to balance performance while downsampling. In the final stage of the encoder, the feature maps acquired from the two branches are concatenated as input to the decoder, which is constrained by the MScAM outputs. The proposed method has been evaluated on datasets from the MICCAI BraTS2021 Challenge. The experimental results demonstrate that the proposed algorithm can precisely segment brain tumors, especially the portions within tumors.

    KEYWORDS Brain tumor segmentation; Swin Transformer; multimodal; clinical knowledge

    1 Introduction

    Magnetic resonance imaging (MRI) is crucial in brain tumor diagnosis [1]. The different modal sequences of MRI, including T1, T1ce, T2, and Flair, each have unique features and are commonly used in clinical settings [2]. Manually segmenting tumors from brain tissues in MRI images is an exhausting but necessary pre-processing step in the pre-surgical planning of brain malignancies [3]. Today, specialists can do this work rapidly with appropriate computer-aided medical image segmentation technologies, which play an essential role in the clinical and medical fields: segmenting areas of lesions or separating tissues in medical images can aid physicians in diagnosing disease, localizing the lesion, and planning treatment, as well as in determining the extent of surgery or the distribution of radiotherapy doses. In addition, the correct segmentation of the enhancing tumor and the gangrenous portion is an essential reference for determining the degree of disease progression and survival status. However, due to the limitations of imaging principles and the intricate physiology of the human brain, MR images frequently display inhomogeneous intensities, and the margins of tumors and their adjacent tissues are frequently indistinct and overlapping. In addition, the central region of the tumor typically occupies a small, low-resolution portion of the image, making it even more challenging to distinguish. These issues make brain tumor segmentation challenging.

    Before deep learning was proposed, researchers traditionally used classical machine learning techniques based on statistics, entropy, and other low-level features [4,5], which are susceptible to initial settings and noise [6]. Deep learning methods significantly improve the performance of machine learning. Based on the Convolutional Neural Network (CNN), the U-shaped network (U-Net) [7], with two symmetric branches for the feature encoder and decoder, allows for excellent scalability. Furthermore, U-Net became one of the most well-known frameworks for medical image segmentation thanks to the CNN's lightweight architecture and feature representation capability. Inspired by natural language processing, the Vision Transformer (ViT) [8] patches images and processes them using Transformer modules to capture global long-range relationships, which are difficult for CNNs. The Swin Transformer is one of the best variants of the Transformer, and it has much potential for segmenting medical images [9,10]. Integrating Transformer blocks into U-Net structures can utilize their complementary information fusion capabilities and scalability and improve segmentation performance [11–13].

    Multimodality is one of the current hot topics in machine learning research. Researchers can perform machine learning tasks more effectively by integrating the data properties of multiple modalities, such as images and text annotations. Most researchers in the field of brain tumor segmentation blend multimodal MRI images in the input or output layers using convolutions. In contrast to other multimodal data, the essential and complementary information between MRI sequences [14] provides crucial a priori knowledge but has yet to be actively used in recent investigations.

    This paper proposes a novel clinical knowledge-based hybrid Swin Transformer framework with an encoder-decoder structure and skip connections. In the encoder phase, we design a dual backbone network: one branch is based on the Swin Transformer to capture long-range dependencies from the 3D images, and the other uses a CNN-based backbone for local feature representation. The MRI sequences are separated into two groups based on MRI principles and characteristics: the first group contains T1 and T1ce, while the second group contains T2 and Flair. These grouped data are passed into the proposed dual-stem Swin Transformer-based branch. We propose a multimodal sequence-interacted cross-attention module (MScAM) to exchange information between the two groups of correlated image modalities. In addition, we use a triple down-sampling module (TDsM) to balance performance during downsampling. The CNN-based decoder phase outputs the segmentation results, associating local features with long-range dependencies. The main contributions of this study can be summarized as follows:

    1) This article categorizes multimodal MRI images to capture complementary brain information based on clinical knowledge. This operation improves the segmentation performance, especially within tumors.

    2) The proposed method has a dual-branch encoder and integrates inter-modal information via the proposed MScAM. This operation complements the feature extraction characteristics of CNNs and Swin Transformers.

    3) This paper designs the TDsM to maximize the retention of valid data during downsampling.

    4) The proposed method achieves positive experimental results on the BraTS2021 dataset.

    2 Related Works

    2.1 MRI Modalities

    MRI creates distinct modal sequences by altering transverse and longitudinal relaxation [15]. The T1-weighted imaging sequence (T1), T2-weighted imaging sequence (T2), T1-weighted contrast-enhanced sequence (T1ce), and fluid-attenuated inversion recovery sequence (Flair) are the most frequently used MRI sequences in clinical practice. The morphological and pathological information in MRI images is complementary: T1 displays the anatomical structure of brain tissues; T2 is related to the tissue's water content and is used to enhance the lesion area and locate the brain tumor; T1ce displays the interior of the tumor and distinguishes the enhanced tumor core from the gangrenous portion; and Flair suppresses intracranial cerebrospinal fluid and reveals the edge of the peritumoral edema [16]. Different MRI sequences reveal distinct manifestations of brain tissue, which is crucial for diagnosing brain tumors. Fluid and mucus appear as low signals on T1 and high signals on T2 images; adipose tissue appears as high signals on both T1 and T2 images; and lesions appear as either isointense or hypointense on both T1 and T2 images [17]. Therefore, specialists can use T1 and T1ce sequences to observe the tumor core without peritumoral edema, and T2 and Flair images to highlight the entire tumor with peritumoral edema [18].

    Inspired by clinical knowledge and how experts identify tumors from MRI images, we expect the model to learn structural and pathological information about brain tumors based on the characteristics of correlated MRI images. As shown in Fig. 1, brain tumors typically consist of enhancing tumor (yellow), peritumoral edema (green), and the gangrenous portion (red). The T1 image emphasizes the brain's structure, with the lesion region appearing relatively blurry and the tumor core appearing dim. The enhanced brain tumor region with profuse blood flow is highlighted on the T1ce image. In brain tumor segmentation tasks, segmenting the enhancing tumor is relatively tricky. Therefore, combining T1 and T1ce images makes it possible to distinguish the tumor core with less interference from peritumoral edema. Flair images suppress cerebrospinal fluid and enhance the contrast between the lesion and cerebrospinal fluid compared to T2 images. Integrating T2 and Flair images can locate lesions more precisely and recognize the boundaries of edematous regions. This paper therefore separates the input MRI images into two correlated pairs: the first contains T1 and T1ce images, and the other contains T2 and Flair images. This procedure enables more targeted learning and enhances tumor segmentation accuracy.
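    The grouping described above amounts to a simple re-organization of the four input channels before they reach the dual-stem encoder. A minimal PyTorch sketch (assuming the modalities are stacked in the order T1, T1ce, T2, Flair; the helper name is hypothetical) is:

```python
import torch

def group_mri_modalities(x: torch.Tensor):
    """Split a 4-channel MRI volume (T1, T1ce, T2, Flair) into the two
    clinically correlated pairs used by the dual-stem encoder.

    x: tensor of shape (B, 4, D, H, W), channels ordered T1, T1ce, T2, Flair.
    Returns (x_t1_t1ce, x_t2_flair), each of shape (B, 2, D, H, W).
    """
    x_t1_t1ce = x[:, 0:2]   # T1 + T1ce: tumor core with less peritumoral edema
    x_t2_flair = x[:, 2:4]  # T2 + Flair: whole tumor including edema
    return x_t1_t1ce, x_t2_flair

# Example: one random 4-modality volume
volume = torch.randn(1, 4, 128, 128, 128)
core_pair, edema_pair = group_mri_modalities(volume)
print(core_pair.shape, edema_pair.shape)  # both torch.Size([1, 2, 128, 128, 128])
```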

    2.2 Transformer-Based Brain Tumor Segmentation Models

    In the field of Natural Language Processing (NLP), the Transformer, consisting predominantly of multi-head attention (MHA) and position-wise feedforward networks, has yielded outstanding results [19]. To exploit the Transformer's ability to capture long-distance dependencies and global context information, researchers migrated the Transformer to computer vision (CV) by embedding each position of the feature maps into a sequence and reformulating the task as a sequence-to-sequence problem [20]. There have been numerous proposals to increase the efficacy of Transformers in CV, and one of the advancements that functions well in medical segmentation is the Swin Transformer. It uses shifted windows to improve computational efficiency and replaces multi-head attention with window multi-head self-attention and shifted-window multi-head self-attention. To take advantage of the complementary feature extraction capabilities of the Transformer and the CNN, Wang et al. [21] proposed the TransBTS network, which uses a Transformer in a 3D CNN for MRI brain tumor segmentation. TransBTS uses a CNN-based encoder to capture spatial features and feeds them to the Transformer layer and a CNN-based decoder. Hatamizadeh et al. [22] proposed UNEt TRansformers (UNETR), which uses a Transformer as the encoder and connects it to an FCNN-based decoder via skip connections at different resolutions. Subsequently, Hatamizadeh et al. [23] proposed Swin UNETR, which employs hierarchical Swin Transformer blocks as the encoder and ranked first in the BraTS 2021 Challenge validation phase. Li et al. [24] proposed Window Attention Up-sample (WAU) to up-sample features in the decoder path with Transformer attention decoders. Pham et al. [25] used a Transformer with a variational autoencoder (VAE) branch to reconstruct input images concurrently with segmentation. These models indicate that the synergistic collaboration between CNNs and Transformers offers a powerful approach to effectively modeling complex patterns and dependencies within images, which can improve the generalization ability of models. However, these methods employ either the CNN or the Transformer for feature extraction or encoding and apply the other for decoding, which may leave the decoder without access to complete input information. Inspired by these insights, this paper employs a separate dual-branch encoder phase based on the CNN and the Swin Transformer to exploit their complementary qualities in capturing features. Furthermore, the information from these two branches is fused during the decoding process.

    Figure 1: Four modalities of MRI images of the same patient (axial slice)

    2.3 Multimodal Brain Tumor Segmentation

    Multimodal data supplement the insufficient information offered by single-modal data and assist with intricate tasks [26]. It has attracted increasing interest in recent research [27], particularly in medical image processing, which frequently has to cope with insufficient data volume. Unlike multimodal data in other fields with diverse structural characteristics, MRI sequences appear structurally similar, but their morphological and pathological information differs [28]. Jabbar et al. [29] proposed a comprehensive U-Net architecture with modifications in its layers. Siddiquee et al. [30] modified the network training process to minimize redundancy under perturbations, forcing the encoder-decoder-based segmentation network to learn features. Peiris et al. [31] proposed a volumetric transformer architecture and used an encoding path with two window-based attention mechanisms to capture local and global features of medical volumes. Xing et al. [32] proposed a Transformer-based multi-encoder and single-decoder structure with nested multimodal fusion for high-level representations and modality-sensitive gating for more effective skip connections. These methods handle the distinct MRI sequences as four channels and feed them into the network without reflecting the distinctions between multimodal data, so they do not fully utilize the available information [30,33]. Zhu et al. [34] used Flair and T1ce sequences for edge extraction and all modalities for semantic segmentation. Zhang et al. [35] calculated the weights for each modality and concatenated all the weighted modalities as the input. Chen et al. [36] input each modality separately into the network and computed the weights for each. Wang et al. [37] designed two densely-connected parallel branches for different modality pairs and used layer connections to capture modality relationships. Awasthi et al. [38] proposed an attention module and used three distinct models for distinct regions. These methods use feature extraction on a single modality and feature splicing in the final stage of fusion. However, they ignore the cross-modality information interaction between the spatial modalities, and establishing an encoder for each modality requires substantial computing resources. This paper studies the use of complementary information between MRI sequences based on clinical knowledge to guide image segmentation, enabling a more comprehensive and rational use of multimodal MRI information while avoiding excessive consumption of computing resources.

    3 Methodologies

    3.1 Model Architecture

    The overall architecture of the proposed model is illustrated in Fig. 2. According to the imaging principles and clinical knowledge, the MRI sequences are divided into two sets: the first contains T1 and T1ce, and the second contains T2 and Flair. The two sets of data are fed into the dual-stem Swin Transformer branch, the two sets of features are fused by the multimodal sequence-interacted cross-attention module (MScAM), and an attention matrix is obtained. A triple down-sampling module (TDsM) is proposed in the CNN encoder branch to acquire more comprehensive local features of the 3D inputs. In the decoder phase, the attention matrices constrain the feature maps obtained from the CNN encoder branch.

    Figure 2: The architecture of the proposed method

    3.2 Hybrid-Branch Multimodal Encoder

    The proposed hybrid-branch multimodal encoder comprises a dual-stem Swin Transformer branch and a convolutional encoder branch. The proposed method passes the re-grouped MRI images to the Swin Transformer branch and combines the outputs of each layer to derive complementary relationships between MRI modalities. The CNN branch receives all modalities and extracts the local feature representation. The dual-encoder design can fully utilize the complementary information of multimodal MRI sequences and enhance the network's capacity for feature extraction.

    3.2.1 Dual-Stem Swin-Transformer Branch

    Two independent and symmetrical Swin Transformer stems build hierarchical feature maps from the two input groups, labeled X_{T1,T1ce} and X_{T2,Flair}. In this section, we describe only one stem; the other works in the same way. First, the inputs are divided into non-overlapping patches through patch partition. Then, the data are fed into three stages of Swin Transformer modules, each followed by a merging layer. Each stage doubles the number of channels and halves the feature-map resolution to expand the receptive field. The dual-stem Swin Transformer branch thus generates four paired feature maps; the feature maps from each stage are used to compute the attention matrix in the MScAM and serve as input to the following stage. The final-stage outputs are passed to the decoder phase together with those of the convolutional encoder branch.
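    As an illustration of the hierarchical behaviour described above (patch size 2, embedding dimension 48, channels doubling and resolution halving at each stage), the following toy sketch uses strided convolutions as stand-ins for the Swin blocks and patch-merging layers; it is only a shape walkthrough under these assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ToyStem(nn.Module):
    """Stand-in for one Swin Transformer stem: patch embedding with patch size 2,
    then stages that halve the spatial resolution and double the channels."""
    def __init__(self, in_ch=2, embed_dim=48, num_stages=4):
        super().__init__()
        self.patch_embed = nn.Conv3d(in_ch, embed_dim, kernel_size=2, stride=2)
        # placeholders for (Swin blocks + patch merging) at each later stage
        self.stages = nn.ModuleList([
            nn.Conv3d(embed_dim * 2**i, embed_dim * 2**(i + 1), kernel_size=2, stride=2)
            for i in range(num_stages - 1)
        ])

    def forward(self, x):
        feats = [self.patch_embed(x)]
        for stage in self.stages:
            feats.append(stage(feats[-1]))
        return feats  # four hierarchical feature maps

x = torch.randn(1, 2, 128, 128, 128)  # one grouped pair, e.g., T1 + T1ce
for f in ToyStem()(x):
    print(f.shape)  # channels 48, 96, 192, 384; resolution halves at each stage
```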

    Multimodal Sequence-interacted Cross-Attention Module (MScAM): The MScAM aims to extract the cross-modal interaction features. As shown in Fig. 3a, a preliminary fusion of the inter-group feature maps is first performed by a matrix-multiplication operation on the paired feature maps from the same stage of the dual-stem Swin Transformer branch.

    Here, S(θ, ·) represents a Swin Transformer stem and θ its parameters: S(θ1, ·) is the left stem, which processes X_{T1,T1ce}, and S(θ2, ·) is the right stem, which processes X_{T2,Flair}. This operation captures the relevant information of the inter-group feature maps. Channel attention weights are then calculated and used to weight the feature maps so that the model focuses more on the critical channels.

    In this computation, · represents broadcast element-wise multiplication, and MLP represents a multilayer perceptron with shared weights. A spatial attention weight matrix is then calculated to constrain the fused feature map.

    By capturing spatial information, the model can analyze the location and structure of objects. Finally, the result is summed with the original fused feature map, and the final attention matrix A is obtained through a sigmoid operation. This step compensates for the detailed information missed by the attention mechanisms.

    MScAM gains hierarchical global contextual dependencies from the Swin Transformer stems and aggregates multimodal features via channel and spatial attention, resulting in multimodal interaction.
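    Since the display equations of the MScAM are not reproduced here, the following sketch approximates the described pipeline (group fusion, shared-MLP channel attention, spatial attention, residual sum with the fused map, sigmoid). The element-wise fusion, pooling choices, and layer sizes are assumptions rather than the authors' exact design:

```python
import torch
import torch.nn as nn

class MScAMSketch(nn.Module):
    """Rough sketch of the sequence-interacted cross-attention idea
    (CBAM-style channel + spatial attention); not the authors' implementation."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        # shared-weight MLP for channel attention
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # spatial attention over channel-pooled maps
        self.spatial_conv = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, f_core, f_edema):
        # preliminary fusion of the two groups (element-wise product used here
        # as a stand-in for the paper's matrix-multiplication fusion)
        fused = f_core * f_edema
        b, c = fused.shape[:2]
        # channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.mlp(fused.mean(dim=(2, 3, 4)))
        mx = self.mlp(fused.amax(dim=(2, 3, 4)))
        ch_w = torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
        f_ch = fused * ch_w                       # broadcast element-wise multiplication
        # spatial attention on the channel-weighted map
        sp_in = torch.cat([f_ch.mean(1, keepdim=True), f_ch.amax(1, keepdim=True)], dim=1)
        f_sp = f_ch * torch.sigmoid(self.spatial_conv(sp_in))
        # residual sum with the original fused map, then sigmoid -> attention matrix A
        return torch.sigmoid(f_sp + fused)

# attention matrix for one encoder stage
a = MScAMSketch(48)(torch.randn(1, 48, 32, 32, 32), torch.randn(1, 48, 32, 32, 32))
print(a.shape)  # torch.Size([1, 48, 32, 32, 32])
```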

    Figure 3: The architecture of the proposed multimodal sequence-interacted cross-attention module (MScAM) and triple down-sampling module (TDsM)

    3.2.2 Convolutional Encoder Branch

    The convolutional encoder branch consists of four stages, each with one convolutional encoder block and one TDsM for downsampling. In each convolutional encoder block, the input with the four modal MRI images passes through three dilated convolutional layers and two ReLU layers, is added to the original input, and is then fed to another convolution-ReLU combination. Based on the hybrid dilated convolution (HDC) [39] principle, the dilation rates are set to 1, 2, and 5. Skip connections are established to prevent degradation. The output of each stage is transmitted to the subsequent stage, and the output of the final stage is concatenated with the output of the other branch for feature decoding. The CNN branch does not have a dual structure because using dual stems in the CNN branch and calculating attention weights would inevitably increase the computational cost of the model. In addition, the multimodal information has already been extracted in the Swin Transformer branch, which is better at capturing global information; therefore, the focus of the CNN branch is to contribute more local information to the model, and there is no need to use an overly complex structure or attention modules.
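    A minimal sketch of one such convolutional encoder block follows, assuming 3×3×3 kernels and the stated HDC dilation rates of 1, 2, and 5; other details, such as channel widths and the exact placement of activations, are assumptions:

```python
import torch
import torch.nn as nn

class ConvEncoderBlockSketch(nn.Module):
    """Sketch of one convolutional encoder block: three dilated convolutions
    (HDC rates 1, 2, 5) with ReLUs, a residual connection to the block input,
    then another convolution-ReLU combination."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dilated = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=5, dilation=5),
        )
        # 1x1x1 projection so the residual addition matches channel counts
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.out = nn.Sequential(nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.dilated(x) + self.skip(x)  # residual connection prevents degradation
        return self.out(y)

# first stage receives all four MRI modalities
print(ConvEncoderBlockSketch(4, 32)(torch.randn(1, 4, 64, 64, 64)).shape)
```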

    Triple Down-Sampling Module: An appropriate down-sampling operation can reduce network parameters and prevent overfitting. Average pooling and max pooling are widely used in CNNs due to their simplicity. Average pooling can lessen the impact of noisy features, but it gives equal importance to every element in the pooling region and may degrade the model's discernment. Max pooling can avoid background effects but may capture noisy features. A convolution can perform downsampling with a larger stride and better captures local features; however, it is less effective than the pooling methods at reducing variance and suppressing irrelevant information [40]. The proposed TDsM simultaneously uses an average pooling layer, a max pooling layer, and a convolution layer to reduce the dimension of the feature maps. As shown in Fig. 3b, after the three processing layers, the feature maps are concatenated and passed through a 1×1×1 convolution layer to compress the channels and integrate the cross-channel information. This module is designed to reduce the resolution of the image while achieving the combined effect of capturing local features, smoothing out noise, and suppressing the background.
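    A possible realization of the TDsM as described (parallel average pooling, max pooling, and strided convolution, concatenated and compressed by a 1×1×1 convolution); the output channel width is an assumption:

```python
import torch
import torch.nn as nn

class TDsMSketch(nn.Module):
    """Sketch of the triple down-sampling module."""
    def __init__(self, channels):
        super().__init__()
        self.avg = nn.AvgPool3d(kernel_size=2, stride=2)                     # smooths noise
        self.max = nn.MaxPool3d(kernel_size=2, stride=2)                     # suppresses background
        self.conv = nn.Conv3d(channels, channels, kernel_size=2, stride=2)   # captures local features
        self.fuse = nn.Conv3d(3 * channels, channels, kernel_size=1)         # cross-channel fusion

    def forward(self, x):
        y = torch.cat([self.avg(x), self.max(x), self.conv(x)], dim=1)
        return self.fuse(y)

x = torch.randn(1, 32, 64, 64, 64)
print(TDsMSketch(32)(x).shape)  # torch.Size([1, 32, 32, 32, 32])
```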

    3.2.3 Hierarchical Feature Alignment Decoder

    Each stage of the hierarchical feature alignment decoder includes one up-sampling operation, one skip connection, and one CNN-based decoder block. The outputs from the fourth stage of both encoder branches are concatenated as the primary input of the decoder phase. At each stage, the feature maps are fed into a CNN-based decoder block and multiplied by the MScAM-generated attention matrix. This operation permits the alignment of cross-group multimodal features, long-range dependencies, and local features. The outputs are then up-sampled and connected via skip connections with the features extracted from the same stage of the convolutional encoder branch.
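    The per-stage decoder behaviour can be sketched as follows; the exact ordering of the attention multiplication, decoding, and up-sampling is a plausible reading of the description rather than the authors' released code:

```python
import torch
import torch.nn as nn

class DecoderStageSketch(nn.Module):
    """Sketch of one decoder stage: modulate the incoming features with the MScAM
    attention matrix, decode with a small CNN block, up-sample, and concatenate
    the same-stage CNN encoder features via a skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.up = nn.ConvTranspose3d(out_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, x, attn, skip):
        x = self.block(x * attn)            # constrain features with the attention matrix
        x = self.up(x)                      # up-sample to the skip connection's resolution
        return torch.cat([x, skip], dim=1)  # skip connection from the CNN encoder branch

stage = DecoderStageSketch(96, 48)
out = stage(torch.randn(1, 96, 16, 16, 16),   # decoder input
            torch.randn(1, 96, 16, 16, 16),   # MScAM attention matrix for this stage
            torch.randn(1, 48, 32, 32, 32))   # same-stage CNN encoder features
print(out.shape)  # torch.Size([1, 96, 32, 32, 32])
```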

    4 Experiments

    4.1 Experimental Details and Evaluation Metrics

    The experiments are implemented with PyTorch and MONAI and trained on one NVIDIA A100 GPU for 100 epochs. The loss function is the weighted sum of cross-entropy (CE) and Dice loss, where the weight of the Dice loss is 1 and that of the CE loss is 0.5. The initial learning rate is 0.0001, and a wrapped optimizer is used to adjust the learning rate. The embedding size of the Swin Transformer block is 48, the patch size is 2, the window size is 7, and the depths are [2, 2, 2]. In the training stage, we randomly crop the input MRI images to 128×128×128. In the test stage, we use the sliding-window method with an overlap rate of 0.6.
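    Under the stated settings, the loss and sliding-window inference can be set up with standard MONAI components, for example as below. The optimizer choice (AdamW), the stand-in network, and the label encoding are assumptions; the paper specifies only the loss weights, the learning rate, the crop size, and the overlap rate:

```python
import torch
from monai.losses import DiceCELoss
from monai.inferers import sliding_window_inference

# Stand-in network and data so the snippet runs; replace with the real model and loader.
model = torch.nn.Conv3d(4, 4, kernel_size=1)          # 4 MRI channels -> 4 output classes
image = torch.randn(1, 4, 160, 192, 144)              # full-size multimodal volume
patch = torch.randn(1, 4, 128, 128, 128)              # random 128^3 training crop
label = torch.randint(0, 4, (1, 1, 128, 128, 128))    # class-index ground truth

# Loss: weighted sum of Dice (weight 1) and cross-entropy (weight 0.5);
# the one-hot/softmax handling assumes class-index labels.
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True, lambda_dice=1.0, lambda_ce=0.5)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # initial learning rate 0.0001

loss = loss_fn(model(patch), label)
loss.backward()
optimizer.step()

# Test-time inference: sliding window of size 128^3 with an overlap rate of 0.6.
with torch.no_grad():
    pred = sliding_window_inference(image, roi_size=(128, 128, 128),
                                    sw_batch_size=1, predictor=model, overlap=0.6)
print(pred.shape)  # torch.Size([1, 4, 160, 192, 144])
```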

    Details of the Dataset: We use the glioma datasets provided by the MICCAI BraTS2021 challenge to verify the proposed method [41,42]. Since the validation dataset is private, we use the training set for training and validation. There are 1251 skull-stripped MRI images in the training set, each consisting of four modalities: T1, T1ce, T2, and Flair. The ground truth of the tumors is segmented manually by raters following the same annotation protocol. The sub-regions considered for evaluation are the "enhancing tumor" (ET), the "tumor core" (TC), and the "whole tumor" (WT).

    Evaluation Metrics: We use the Dice Similarity Coefficient (DSC) and the 95% Hausdorff Distance (HD95) as the evaluation metrics, consistent with the requirements of the MICCAI BraTS2021 challenge. The DSC measures the similarity between the prediction and the ground truth and is sensitive to the mask's interior. The HD95 is the 95th percentile of the sorted surface distances between the prediction and the ground truth and is more sensitive to the boundaries. When evaluating a model's performance, we expect it to have a high DSC value and a short HD95 distance.
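    Both metrics are available in MONAI; a sketch of how they could be computed on binarized ET/TC/WT region masks is given below (the tensors are synthetic placeholders, not BraTS data):

```python
import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric

# DSC and 95th-percentile Hausdorff distance; the three channels are treated as
# the ET/TC/WT region masks, so the background flag is kept on.
dice_metric = DiceMetric(include_background=True, reduction="mean")
hd95_metric = HausdorffDistanceMetric(include_background=True, percentile=95)

pred = (torch.rand(1, 3, 64, 64, 64) > 0.5).float()  # binarized predicted masks
gt = (torch.rand(1, 3, 64, 64, 64) > 0.5).float()    # binarized ground-truth masks

dice_metric(y_pred=pred, y=gt)
hd95_metric(y_pred=pred, y=gt)
print("DSC:", dice_metric.aggregate().item())   # higher is better
print("HD95:", hd95_metric.aggregate().item())  # lower (shorter distance) is better
```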

    4.2 Comparison Experiments

    The proposed model has been compared with several state-of-the-art multimodal brain tumor segmentation models. We directly run the released code of these papers. All the models are trained under the same dataset split, and the evaluation metrics are computed on outputs without any post-processing. Table 1 presents the quantitative results of all the models, with the best results shown in bold. As it shows, the proposed model achieves the best DSC values, 0.026 and 0.024 higher than the second-best on ET and TC, respectively. Regarding HD95, the proposed method ranks first on ET and TC and second on WT, and it ranks first in terms of the mean HD95. Limited by the MRI imaging principle and the human brain's complex physiological structure, the boundaries between tissues generally appear blurry and overlapping in MRI images, and the boundaries between the enhancing tumor core, the necrotic tumor core, and the peritumoral edema are challenging to distinguish; this is also reflected in the experimental results: the segmentation performance of the algorithms on the ET and TC portions is relatively poor. By taking advantage of the complementary clinical information between MRI images, the proposed method significantly improves the performance of brain tumor segmentation, especially for the portions inside the tumors.

    Table 1: Quantitative results of comparison experiments

    Fig. 4 depicts the box plots of the DSC results of the experiments. A box plot visually represents the dispersion of the data. The horizontal lines in the box plot represent, from top to bottom, the maximum, upper quartile (Q3), median (Q2), lower quartile (Q1), and minimum. The proposed method generally achieves higher Q1, Q2, and Q3 values, indicating its effectiveness. Observing all ET segmentation results, there is typically a large gap between the maximum and minimum, which means the ET section is difficult to segment and there are glaring differences between samples; nevertheless, the proposed method has higher DSC values. In TC segmentation, the concentration of the proposed method's results ranks only third, which may be due to the lack of pre-processing and post-processing; however, it has high Q1, Q2, and Q3 values, which validates its effectiveness. The proposed method performs optimally in terms of WT segmentation. Fig. 5 shows the qualitative comparison of the experiments. The green regions represent peritumoral edema, the red regions represent gangrenous tissue, and the yellow regions represent the enhancing tumor. According to the respective labels, WT is made up of the green, red, and yellow regions; TC is made up of the yellow and red regions; and ET is made up of the yellow regions. Each row in the figure represents an MRI slice of a patient: the first and second rows correspond to axial slices, the third and fourth rows to sagittal slices, and the fifth and sixth rows to coronal slices. The first and second columns of the figure are the ground-truth annotations, with the second column containing a magnified portion of the tumors. Small arrows are drawn on the diagram to denote the reference and comparison observation locations for convenience. The figure shows that the proposed method improves the segmentation performance and performs better in terms of details. In particular, the case in the first row does not contain an enhancing tumor, but the proposed method still obtains excellent segmentation results.

    Figure 4: Box plots of comparison experiments on DSC values

    Figure 5: Visualization of the qualitative comparison in the comparison experiments

    4.3 Ablation Studies

    We conducted several ablation experiments to evaluate the contribution of each proposed module. Table 2 shows the results of the ablation studies. Method (1) uses the baseline model without the proposed MScAM. Method (2) uses a single Swin Transformer branch instead of the dual-stem Swin Transformer branch, concatenates all the MRI modalities as one input, and replaces the inputs of the MScAM with the outputs of the Swin Transformer layers. Method (3) is the baseline model without the CNN branch in the encoder phase; since the TDsM belongs to the CNN encoder branch, method (3) does not contain the TDsM. Method (4) replaces the TDsM with max pooling, which is commonly used in deep learning. Method (5) uses the dual-stem Swin Transformer branch without the CNN encoder branch and removes the skip connections. Method (6) uses only the CNN encoder branch with the TDsM for down-sampling. Method (7) uses a single CNN encoder branch without the TDsM. Method (8) excludes the Swin Transformer branch and the TDsM; it expands the CNN encoder branch to dual stems and inputs grouped MRI images in the same way as the dual-stem Swin Transformer branch, to verify the validity of the MRI pairing and the MScAM. Fig. 6 shows the visualization of the qualitative comparison of the ablation experiments. Similar to Fig. 5, WT is made up of the green, red, and yellow regions; TC is made up of the yellow and red regions; and ET is made up of the yellow regions. Each row in the figure represents an axial slice of an MRI image. The first and second columns of the figure are the ground-truth annotations, with the second column containing a magnified portion of the tumors. The figure shows that the proposed method improves the segmentation performance and performs better in terms of details.

    Table 2: Quantitative results of ablation experiments

    Figure 6: Visualization of the qualitative comparison in the ablation studies

    Effectiveness of the Dual-Branch Encoder: Among the results of all methods (excluding the proposed method), method (4) ranks first in DSC value and HD95 distance, and method (1) ranks second in DSC value. Correspondingly, the segmentation results are relatively poor in the experiments of methods (3), (5), and (6), which use only a single branch in the encoder phase. Methods (3) and (5) perform better than method (6) on the Dice value, especially on the ET and TC portions, but method (6) performs best on the HD95 value. This indicates that the Swin Transformer branch helps to improve the detection of tumor cores and gangrene, while the CNN branch positively impacts the segmentation of the whole tumor, which is relatively coherent in MRI images. The comparison of these results illustrates the effectiveness of the dual-branch encoder.

    Effectiveness of Modal Pairing of MRI Images: In the experiment of method (2), the inputs are not grouped into two pairs, so a single-branch Transformer is used instead of the dual-stem branch. The average DSC value is 0.8897, and the HD95 distance is 8.72. Method (1) performs modal pairing but removes the MScAM; the average DSC value improves to 0.9089, and the HD95 distance drops to 8.31. In the proposed method, the metrics are further improved to 0.9166 and 5.83. This suggests that grouping correlated MRI images allows for more focused learning and increases segmentation accuracy, and that the MScAM can further exploit cross-modality information.

    Effectiveness of MScAM: Comparing the proposed method with method (1) shows that the addition of the MScAM significantly improves the performance of method (1). Similarly, method (3) achieves better results than method (5) due to the addition of the MScAM. Additionally, method (8) achieves better evaluation metrics than method (7), indicating that the proposed MRI grouping strategy and the MScAM are effective. These observations mean that the MScAM can extract the cross-modal interaction features and provide more information in the decoder phase.

    Effectiveness of TDsM: Comparing the proposed method and method (4), the DSC and HD95 values of the proposed method are better than those of method (4). In addition, comparing method (7) and method (8) shows that the evaluation metrics improve after adding the TDsM. These comparisons indicate that the TDsM plays a positive role in brain tumor segmentation.

    5 Conclusions

    This paper proposes a clinical knowledge-based hybrid Swin Transformer method for brain tumor segmentation, inspired by clinical knowledge and how specialists identify tumors in MRI images. This paper analyzes the differences and connections between MRI sequences and groups them before they enter the network. It adopts a dual encoder with a Swin Transformer and CNNs and proposes a multimodal sequence-interacted cross-attention module for capturing the interactive information between different modalities. On datasets from the MICCAI BraTS2021 Challenge, the proposed method was validated and obtained a mean DSC of 0.9166 and a mean HD95 distance of 5.83. The experimental results demonstrate that the proposed method can segment brain tumors accurately, especially the ET and TC portions, which are essential for tumor prognosis but usually difficult to distinguish. Compared with other methods, the proposed method makes full use of the cross-modal interaction features. It leverages the strengths of the Transformer and CNNs in extracting long-range dependencies and representing local features. The main contribution of this paper consists of a method to utilize complementary information from brain MRI sequences and a method for extracting cross-modal interaction features. The proposed method applies to the following applications: in brain tumor diagnosis, it can assist in localizing the lesion and assessing the degree of malignancy of the tumors; in treatment planning, it can help determine the extent of surgery and the distribution of radiotherapy doses. In addition, because the proposed method excels at segmenting the core portion of the tumor, it can help physicians determine the degree of tumor progression and assess the prognosis.

    Although the proposed method achieves promising results, it still has several limitations. First, there is no pre-processing or post-processing, so the method still suffers from the limited amount of data, sample imbalance, and extremely blurred images. Second, due to limited computational resources, only a small batch size and image size could be used for verification, limiting the proposed model's performance. In addition, we only conducted validation on the BraTS2021 dataset, and the method is based on MRI medical background knowledge, so it cannot be directly transferred to segmentation tasks involving image types other than MRI. In the future, we plan to concentrate on the denoising and reconstruction of MRI images and on the post-processing of network outputs to improve brain tumor segmentation performance, and, to enhance the robustness of the proposed model, we plan to validate it on more datasets under different application backgrounds.

    Acknowledgement: Thanks to the anonymous reviewers for their constructive comments. Thanks to the tutors and researchers for their assistance and guidance.

    Funding Statement: This work was supported in part by the National Natural Science Foundation of China under Grant No. U20A20197, Liaoning Key Research and Development Project 2020JH2/10100040, Natural Science Foundation of Liaoning Province 2021-KF-12-01, and the Foundation of National Key Laboratory OEIP-O-202005.

    Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Xiaoliang Lei, Xiaosheng Yu; data processing: Jingsi Zhang; draft manuscript preparation: Chengdong Wu, Xiaosheng Yu; draft manuscript preparation: Xiaoliang Lei, Hao Wu. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The MICCAI BraTS2021 dataset is provided by the BraTS 2021 challenge; the training data is available at https://www.kaggle.com/datasets/dschettler8845/brats-2021-task1?select=BraTS2021_Training_Data.tar.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
