
    A Deep Learning Framework for Mass-Forming Chronic Pancreatitis and Pancreatic Ductal Adenocarcinoma Classification Based on Magnetic Resonance Imaging

    2024-05-25 14:39:40 · Luda Chen, Kuangzhu Bao, Ying Chen, Jingang Hao and Jianfeng He
    Computers, Materials & Continua, 2024, Issue 4

    Luda Chen, Kuangzhu Bao, Ying Chen, Jingang Hao* and Jianfeng He3,*

    1Faculty of Information Engineering and Automation,Kunming University of Science and Technology,Kunming,650504,China

    2Department of Radiology,Second Affiliated Hospital of Kunming Medical University,Kunming,650101,China

    3School of Physics and Electronic Engineering,Yuxi Normal University,Yuxi,653100,China

    ABSTRACT Pancreatic diseases, including mass-forming chronic pancreatitis (MFCP) and pancreatic ductal adenocarcinoma (PDAC), present with similar imaging features, leading to diagnostic complexities. Deep learning (DL) methods have been shown to perform well on diagnostic tasks. Existing DL studies of pancreatic lesion diagnosis based on magnetic resonance imaging (MRI) use prior information to guide models to focus on the lesion region. However, over-reliance on prior information may ignore background information that is helpful for diagnosis. This study verifies the diagnostic significance of the background information using a clinical dataset. Consequently, the Prior Difference Guidance Network (PDGNet) is proposed, merging decoupled lesion and background information via the Prior Normalization Fusion (PNF) strategy and the Feature Difference Guidance (FDG) module, to direct the model to concentrate on regions beneficial for diagnosis. Extensive experiments on the clinical dataset demonstrate that the proposed method achieves promising diagnostic performance: PDGNets based on conventional networks record an accuracy (ACC) and area under the curve (AUC) of 87.50% and 89.98%, marking improvements of 8.19% and 7.64% over the prior-free benchmark. Compared to lesion-focused benchmarks, the uplift is 6.14% and 6.02%. PDGNets based on advanced networks reach an ACC and AUC of 89.77% and 92.80%. The study underscores the potential of harnessing background information in medical image diagnosis, suggesting a more holistic view for future research.

    KEYWORDS Pancreatic cancer; pancreatitis; background region; prior normalization fusion; feature difference guidance

    1 Introduction

    Accurate differentiation between Mass-Forming Chronic Pancreatitis (MFCP) and Pancreatic Ductal Adenocarcinoma (PDAC) is crucial in clinical practice due to the substantial differences in treatment approaches and prognoses [1]. Both subtypes have similar features in various medical imaging modalities, presenting as localized pancreatic masses [2]. This similarity increases the risk of misdiagnosis [3]. For instance, some studies indicate that approximately 5% to 15% of pancreatitis cases are misdiagnosed as pancreatic cancer [4]. Accurate preoperative diagnosis is therefore crucial for distinguishing MFCP from PDAC [5].

    Radiologists can differentiate MFCP and PDAC without invasive procedures, basing their judgments on extensive experience and a comprehensive review of multimodal data in the preoperative period. This process, however, is time-consuming and cannot guarantee stable diagnoses in clinical practice. The application of deep learning in medical image analysis provides a solution to improve the accuracy and efficiency of diagnosis. There are three main research directions for deep learning-based image diagnosis of pancreatic lesions: 1) prior-free end-to-end diagnostic networks, 2) prior-injected cascade diagnostic networks, and 3) prior-injected parallel diagnostic networks.

    Prior-free end-to-end diagnostic networks use original images as the training set for the diagnostic model, as shown in Fig. 1a. For example, Ziegelmayer et al. [6] used the VGG-19 [7] architecture, pretrained on ImageNet [8], to accomplish the task of feature extraction and diagnostic differentiation between autoimmune pancreatitis (AIP) and PDAC. Such studies require large-scale datasets and more complex network structures to avoid the interference of redundant information. Notably, the relatively small proportion of the image occupied by the pancreatic lesion region presents a challenge for prior-free networks, making it difficult to capture detailed information.

    Prior-injected cascade diagnostic networks use a segmentation or detection model to identify the lesion region in the original images; the extracted regions are then used as the training set for the diagnostic model, as shown in Fig. 1b. For example, Si et al. [9] used a fully end-to-end deep learning approach that consists of four stages: image screening, pancreas localization, pancreas segmentation, and pancreas tumor diagnosis. Qu et al. [10] first reconstructed the pancreas region through anatomically-guided shape normalization, then used an instance-level contrastive learning and balance adjustment strategy for the early diagnosis of pancreatic cancer. Li et al. [11] designed a multiple-instance-learning framework to extract fine-grained pancreatic tumor features, followed by an adaptive-metric graph neural network and a causal contrastive mechanism for early diagnosis of pancreatic cancer. Chen et al. [12] designed a dual-transformation-guided contrastive learning scheme based on intra-space-transformation consistency and inter-class specificity. This scheme aimed to mine additional supervisory information and extract more discriminative features to predict pancreatic cancer lymph node metastasis.

    Prior-injected parallel diagnostic networks process the segmentation or detection task in parallel with the diagnostic task rather than in a cascade. For example, Zhang et al. [13] first extracted the localization information of the tumor through an augmented feature pyramid network. They then enhanced this localization information with a self-adaptive feature fusion and dependencies computation module, enabling the simultaneous performance of pancreatic cancer detection and diagnosis tasks. Xia et al. [14] used a novel deep classification model with an anatomy-guided transformer to detect resectable pancreatic masses, classifying each case as PDAC, other abnormalities (non-PDAC), or normal pancreas. Zhou et al. [15] proposed a meta-information-aware dual-path transformer consisting of a Convolutional Neural Network (CNN) based segmentation path and a transformer-based classification path. This design enabled the simultaneous handling of tasks related to detecting, segmenting, and diagnosing pancreatic lesions.

    Prior-injected diagnostic networks align better with radiologists’ diagnostic approach. Focusing the analysis on the lesion region may avoid interference from non-pathological changes in the image or irrelevant physiological information during model training. However, these deep learning-based approaches have some limitations: 1) the diagnostic model’s performance strongly depends on the accuracy of the segmentation or detection results, and biases in these results may mislead the diagnostic model, and 2) pancreatic lesions may cause morphologic and physiologic alterations in nearby organs or tissues [16,17]. For example, PDAC infiltrating the duodenum typically encircles the gastroduodenal artery, resulting in bile duct dilation and pronounced jaundice, whereas MFCP may not exhibit these effects [18]. A model that relies primarily on the lesion region may therefore ignore contextually significant diagnostic information. Efficiently leveraging prior information while preserving information integrity and minimizing redundancy thus presents a critical challenge.

    Figure 1: Existing deep learning-based diagnostic frameworks for pancreatic lesions. (a) the prior-free diagnostic network, (b) the prior-injected diagnostic network, and (c) the prior difference guidance network (ours)

    For this purpose, the study collects an authentic dataset from MFCP and PDAC patients in a clinical environment. Two initial exploratory experiments are conducted on this dataset to assess the influence of prior information on diagnostic models’ performance. Such prior information, acquired before deep learning model training, encompasses lesion regions in MFCP and PDAC, identified directly by radiologists through annotations based on their expertise, and background regions, which are obtained indirectly by masking these lesion areas. The preliminary experiments indicate that background regions, typically considered “noise” in deep learning, offer valuable clues essential for the diagnostic process.

    Drawing on these insights, this study introduces the Prior Difference Guidance Network (PDGNet), as shown in Fig. 1c. Unlike existing models, the PDGNet utilizes decoupled lesion and background information to direct the model to concentrate on regions beneficial for diagnosis. The Prior Normalization Fusion (PNF) strategy, one component of this network, integrates the prior information of lesions and backgrounds with the original image before the data is fed into the model. The strategy enables the model to access richer contextual information than the original image alone provides. Additionally, the Feature Difference Guidance (FDG) module, which employs comparative learning, is proposed. The module further utilizes the prior-augmented lesion and background information, capturing the difference between the lesion region’s and the background region’s augmented features. These differences guide the model to adaptively adjust its focus region according to decision importance, achieving a more accurate identification and differentiation between MFCP and PDAC. The main contributions of this study are summarized as follows:

    • The study introduces a novel diagnostic framework, the Prior Difference Guidance Network (PDGNet), which uniquely utilizes decoupled lesion and background information to improve the accuracy of differentiating between MFCP and PDAC.

    • The study develops the Prior Normalization Fusion (PNF) strategy, an innovative approach within PDGNet that integrates the prior information of lesions and backgrounds with the original image before processing, to enrich the model’s input with broader context.

    • The study implements the Feature Difference Guidance (FDG) module, introducing a comparative learning approach that exploits the differences between the augmented features of the lesion and background regions, to adaptively direct the model to concentrate on regions beneficial for diagnostic decision-making.

    2 Materials and Preliminary Analysis

    The study investigates the impact of prior information on deep learning-assisted diagnosis of MFCP and PDAC. Authentic datasets of MFCP and PDAC patients from clinical settings are collected. Based on these datasets, two validation experiments are designed: one to examine the influence of images without the lesion region on the diagnostic model, and the other to assess the effect of the background region on the diagnostic model.

    2.1 Dataset

    A comprehensive dataset is collected from the Second Affiliated Hospital of Kunming Medical University, including arterial-phase abdominal Magnetic Resonance Imaging (MRI) sequences of 31 MFCP patients and 62 PDAC patients. The dataset includes 3,872 slices, with 375 slices annotated to indicate lesion regions. Fig. 2 illustrates slice-images with the lesion region.

    Figure 2: Illustration of MFCP and PDAC lesions. The top row shows an MFCP lesion slice-image, and the bottom row shows a PDAC lesion slice-image. (a) shows the original image, (b) shows the lesion region with a masked background, (c) shows the lesion region after crop and resize, and (d) shows the background region with a masked lesion

    Inclusion criteria: 1) patients with MFCP and PDAC confirmed by surgery and/or biopsy histopathology, and 2) MRI scanning within 1 month before neoadjuvant chemotherapy or surgery.

    Exclusion criteria: lesions that were poorly visualized or showed non-mass-like enhancement that was difficult to outline.

    Scanning machine: planar and dynamic enhancement scans of the upper abdomen were performed using a Siemens Sonata 1.5 Tesla (1.5 T) MR scanner.

    Scanning sequence and parameters: transverse, coronal, and sagittal scans were performed with a VIBE sequence using gadopentetate dimeglumine (0.2 ml/kg) during the arterial phase (25–30 s).

    The lesion region annotation criteria: initially, an experienced radiologist uses 3D Slicer software (https://www.slicer.org/) to label the entire tumor as comprehensively as possible, avoiding areas of necrosis, calcification, and gas that can obscure the lesion. To ensure accuracy, the labeled tumor area is subsequently reviewed by another radiologist.

    2.2 Preliminary Experiment

    This study randomly selects 300 slice-images that contain lesion regions from the dataset. This selection establishes the base training set for a preliminary diagnostic model distinguishing PDAC from MFCP. The percentage of slice-images without the lesion region in the training set is incrementally increased to train multiple diagnostic models, as shown in Fig. 3a. These models are evaluated on the same test set, with specific experimental results presented in Table 1 and visualized in Fig. 4a.

    Table 1: Diagnostic performance using varying IMAGE proportions in the training set, where n1 represents the number of images with a lesion region and n2 represents the number without, defined by the equation n2 = 300 × r

    Figure 3: Schematic diagram of the experimental designs: (a) Experiment 1 investigates the impact of non-lesion images on the diagnostic model. (b) Experiment 2 investigates the impact of the background region on the diagnostic model

    In another experiment, the lesion regions from the 300 slice-images are extracted and used to create a new training set. The proportion of background region surrounding each cropped lesion region is progressively increased to train several diagnostic models, as shown in Fig. 3b. These models are evaluated on the same test set, with specific experimental results presented in Table 2 and visualized in Fig. 4b.

    Table 2: Diagnostic performance using varying REGION proportions in the training set, where r = 0% represents using the lesion region’s maximum diameter as the side length of the cropped rectangle. The proportion of background in the rectangle is increased by enlarging this side length by a factor of r
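    The cropping rule described in Table 2 can be sketched as follows. This is a minimal illustration, not the authors' code; the centering on the lesion's bounding box and the rounding details are assumptions:

```python
import numpy as np

def crop_with_margin(image, lesion_mask, r=0.0):
    """Crop a square around the lesion; side = max diameter * (1 + r)."""
    ys, xs = np.nonzero(lesion_mask)
    cy, cx = (ys.min() + ys.max()) / 2, (xs.min() + xs.max()) / 2
    diameter = max(ys.max() - ys.min(), xs.max() - xs.min()) + 1
    side = int(round(diameter * (1 + r)))   # r=0.0 -> tight crop, r=0.6 -> 60% more background
    half = side // 2
    y0, x0 = max(int(cy) - half, 0), max(int(cx) - half, 0)
    return image[y0:y0 + side, x0:x0 + side]

img = np.random.rand(224, 224)
mask = np.zeros((224, 224), dtype=bool)
mask[100:120, 90:130] = True                # toy 20 x 40 "lesion"
patch = crop_with_margin(img, mask, r=0.6)
print(patch.shape)                          # (64, 64): 40-pixel diameter * 1.6
```

After cropping, each patch would be resized to the network's input resolution, as in Fig. 2c.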

    The VGG-11 architecture serves as the foundational diagnostic network for this study. All training sessions are conducted under uniform parameter settings. The slice-images designated for the training and test sets originate from distinct patients.

    The experimental results lead to the following conclusions and insights: 1) prior information plays an important role in deep learning-assisted diagnosis; according to the results in Tables 1 and 2, the model’s performance fluctuates when the proportion of lesion and background regions in the training data changes; 2) the model’s performance is not optimal when only lesion-region images are used for training; according to Table 1, performance improves in some cases as non-lesion images are added, indicating that maintaining a certain balance of diseased and non-diseased images benefits the diagnostic task; and 3) the model’s performance begins to decrease when the proportion of background region increases beyond a certain extent; according to Table 2, background information holds significant value for the diagnostic task, but exceeding a certain percentage interferes with the model’s performance.

    Figure 4: Visualization of experimental results: (a) ACC curve on the test set as r, the proportion of non-lesion images in the training set, varies, and (b) ACC curve on the test set as r, the proportion of the background region in the training set, varies

    The insights from the comparative analysis of the two sets of experiments inform subsequent model design enhancements. These improvements include: 1) a data augmentation strategy that maximizes the utilization of contextual information during training and intensifies the focus on identified lesion regions, thereby improving the recognition of these critical regions, and 2) an attention fusion module that dynamically adjusts the model’s focus between the lesion region and the relevant portions of the contextual regions, allowing for a more accurate diagnosis of PDAC and MFCP.

    3 Methods

    The preceding analysis leads to the proposal of the Prior Difference Guidance Network (PDGNet), with its structure illustrated in Fig. 5. The network consists of two main components: the Prior Normalization Fusion (PNF) strategy and the Feature Difference Guidance (FDG) module.

    Figure 5: The structure of the prior difference guidance network (PDGNet), with two components: the prior normalization fusion (PNF) strategy and the feature difference guidance (FDG) module

    3.1 Prior Normalization Fusion(PNF)Strategy

    The Prior Normalization Fusion (PNF) strategy for data augmentation is proposed, as shown in Fig. 6. Before the data is input into the model, the strategy fuses the prior information of lesion and background with the original image, which enables the model to obtain richer contextual information than the original image alone when performing diagnosis.

    Specifically, the lesion region is initially selected based on the optimal background occupancy ratio (r = 60%), as determined in the preliminary experiments. Subsequently, the background region is extracted by masking the lesion region in the original image. The original image is then overlaid with the prior images (both lesion and background). Normalization is conducted within the prior local region, considering only non-zero regions, to prevent a large homogeneous background from diluting the normalized fused image.

    Figure 6: Illustration of the prior normalization fusion (PNF) strategy

    Given an image I, let the lesion region be D. μ_prior is the mean of the gray values of all pixels in the lesion (or background) region, and σ_prior is the standard deviation of the gray values of all pixels within that region:

    μ_prior = (1/|D|) Σ_{i∈D} I(i),  σ_prior = sqrt( (1/|D|) Σ_{i∈D} (I(i) − μ_prior)² )

    The normalized gray value of each pixel i in the region is then

    Î(i) = (I(i) − μ_prior) / (σ_prior + ε)

    where I(i) denotes the gray value of pixel i in image I, |D| denotes the number of pixels in the lesion region, and ε is a very small value used to avoid a zero denominator. The PNF strategy is essentially a linear contrast-stretching method that augments contrast by stretching the range of pixel values of the image. This strategy augments the feature recognizability of the prior region while preserving the full contextual information of the original image.
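    A minimal NumPy sketch of this region-restricted normalization, assuming a simple z-score form computed only over the prior (non-zero) region; the exact overlay with the original image is omitted:

```python
import numpy as np

def prior_normalize(image, prior_mask, eps=1e-8):
    """Normalize pixels inside prior_mask using that region's own statistics,
    so a large homogeneous background cannot dilute mu and sigma."""
    region = image[prior_mask]
    mu, sigma = region.mean(), region.std()
    out = image.astype(np.float64).copy()
    out[prior_mask] = (region - mu) / (sigma + eps)
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
mask = img > 7                        # toy "lesion" prior region
fused = prior_normalize(img, mask)
print(fused[mask].mean())             # approximately 0 inside the prior region
```

Restricting the statistics to the masked region is what distinguishes this from a global (GNF-style) normalization over the whole image.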

    3.2 Feature Difference Guidance(FDG)Module

    The PDGNet introduces a Feature Difference Guidance (FDG) module to further exploit the prior-augmented lesion and background information, as shown in Fig. 7. The module captures the difference between the lesion region’s and the background region’s augmented features. These differences guide the model to adjust its focus region adaptively according to each region’s importance for the decision.

    Figure 7: Illustration of the feature difference guidance (FDG) module

    The module takes the original image together with the prior-augmented lesion and background fusion images as inputs. This integration offers richer, multi-perspective contextual information for model training. Features z1, z2, and z3 are extracted from these images using distinct encoders; z2 and z3 represent the full-image encodings of the lesion-augmented and background-augmented images, respectively. The magnitude of the difference between z2 and z3 reflects the relative importance of specific regions in the image for diagnostic decisions, guiding the model to focus more on regions with significant differences.

    The overall framework, shown in Fig. 5, defines the backbone network as comprising n blocks, with an FDG module added after each block. In the first n − 1 blocks, z2, z3, and the difference-weighted feature serve as inputs to the next block. In the last (nth) block, z2, z3, and the difference-weighted feature are concatenated and fed to the classifier to obtain the diagnosis. Loss is calculated using the cross-entropy loss function.
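    The difference-guided weighting can be illustrated with a speculative PyTorch sketch. The 1×1 gating convolution and the exact way the z2/z3 difference modulates the original-image feature are assumptions, since the text does not spell them out:

```python
import torch
import torch.nn as nn

class FDGBlock(nn.Module):
    """Toy feature-difference guidance: the per-location difference between the
    lesion-augmented (z2) and background-augmented (z3) features is mapped to
    [0, 1] weights and applied to the original-image feature z1."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, z1, z2, z3):
        diff = torch.abs(z2 - z3)   # magnitude of the feature difference
        w = self.gate(diff)         # difference -> attention weights
        return z1 * w               # difference-guided original feature

blk = FDGBlock(8)
z1 = torch.randn(2, 8, 14, 14)
out = blk(z1, torch.randn_like(z1), torch.randn_like(z1))
print(out.shape)                    # torch.Size([2, 8, 14, 14])
```

In a full model, one such block would sit after each backbone stage, with the weighted feature passed on to the next stage alongside z2 and z3.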

    4 Results

    4.1 Implementation Details

    The training and test sets are divided by case with a 9:1 ratio to prevent data leakage between slices of the same case. The training set contains arterial-phase abdominal MRI sequences of 82 patients (27 MFCP, 55 PDAC), with a total of 3,432 slices (326 slices annotated with lesion areas), and the test set contains 11 cases (4 MFCP, 7 PDAC), with a total of 440 slices (49 slices annotated with lesion areas).
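    A case-wise split of this kind can be sketched as follows (illustrative helper, not the authors' code; the patient IDs are hypothetical):

```python
import random

def split_by_case(case_ids, test_ratio=0.1, seed=0):
    """Split slice indices so slices of one patient never cross the boundary."""
    cases = sorted(set(case_ids))
    random.Random(seed).shuffle(cases)
    n_test = max(1, round(len(cases) * test_ratio))
    test_cases = set(cases[:n_test])
    train_idx = [i for i, c in enumerate(case_ids) if c not in test_cases]
    test_idx = [i for i, c in enumerate(case_ids) if c in test_cases]
    return train_idx, test_idx

# toy example: 5 slices from 3 patients
ids = ["p1", "p1", "p2", "p3", "p3"]
tr, te = split_by_case(ids, test_ratio=0.34)
print(set(ids[i] for i in tr) & set(ids[i] for i in te))  # set() — no overlap
```

Splitting by slice instead of by case would let near-identical neighboring slices of one patient appear on both sides of the split, inflating test performance.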

    For preprocessing, the image resolution is adjusted to 224 × 224, and image augmentation is applied using contrast-limited adaptive histogram equalization. In the experiments, the total number of epochs is set to 300. The learning rate is initialized at 1e-4 and dynamically adjusted using a cosine function, with a minimum value of 1e-6 and a restart period of 50 epochs. An early-stopping mechanism is employed to prevent overfitting, terminating training if the validation-set loss does not decrease for 30 epochs. The batch size is 64, and the model is optimized using adaptive moment estimation (AdamW) [19] with a weight decay of 1e-3. All experiments use the PyTorch framework on an NVIDIA GeForce RTX 4090 graphics processing unit.
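    The optimization setup above might be wired up roughly as follows. The hyperparameters come from the text; the linear stand-in model and the placeholder validation loss are illustrative, not PDGNet itself:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the diagnostic network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-3)
# cosine schedule restarting every 50 epochs, floored at 1e-6
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=50, eta_min=1e-6)

best_loss, patience, bad_epochs = float("inf"), 30, 0
for epoch in range(300):
    optimizer.step()               # placeholder for a real training step
    scheduler.step()
    val_loss = 1.0 / (epoch + 1)   # placeholder for real validation loss
    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # early stopping after 30 stale epochs
            break
print(round(best_loss, 6))
```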

    4.2 Evaluation Metrics

    This study employs a comprehensive evaluation of the diagnostic performance of the model using several metrics: accuracy (ACC), area under the receiver operating characteristic curve (AUC), sensitivity/recall (SEN/REC), specificity (SPE), precision (PREC), and F1 score (F1). These metrics are defined below:

    Accuracy (ACC): Accuracy measures the proportion of all cases (both MFCP and PDAC) that are correctly identified by the model at a specific threshold, calculated as ACC = (TP + TN)/(TP + FP + TN + FN). High accuracy in differentiating MFCP from PDAC indicates the model’s overall effectiveness in distinguishing these two conditions.

    Area Under the Curve (AUC): AUC refers to the area under the Receiver Operating Characteristic (ROC) curve, a graphical representation of a model’s diagnostic ability. It measures the model’s capability to discriminate between the two classes (MFCP and PDAC) across all possible threshold values. A higher AUC value implies that the model performs better at distinguishing between negative (MFCP) and positive (PDAC) cases, regardless of any specific threshold set for classifying cases as positive or negative.

    Sensitivity/Recall (SEN/REC): This metric quantifies the model’s ability to correctly identify positive cases (PDAC), calculated as SEN = REC = TP/(TP + FN). High sensitivity in diagnosing PDAC means the model can effectively identify most true PDAC cases, reducing the risk of missed diagnoses, which is vital for timely and accurate diagnosis.

    Specificity (SPE): Specificity measures the model’s ability to correctly identify negative cases (MFCP), calculated as SPE = TN/(TN + FP). In this study, high specificity indicates that when the model identifies a sample as not being PDAC (i.e., MFCP), this judgment is likely correct. This is crucial for preventing misdiagnosis of MFCP as PDAC, which could lead to unnecessary and potentially harmful treatments.

    Precision (PREC): Precision reflects the proportion of cases identified as positive (PDAC) that are indeed PDAC, calculated as PREC = TP/(TP + FP). High precision is particularly important in diagnosing PDAC to ensure that most cases diagnosed as PDAC are indeed PDAC, minimizing false positives.

    F1 Score (F1): The F1 score is the harmonic mean of precision and recall, calculated as F1 = 2 × (REC × PREC)/(REC + PREC). In distinguishing MFCP from PDAC, the F1 score provides a composite measure that balances recall and precision, helping to assess the model’s performance in maintaining a balance between these two metrics.

    TP, TN, FP, and FN represent the number of true-positive, true-negative, false-positive, and false-negative samples, respectively.
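    The threshold-based metrics above follow directly from the confusion-matrix counts; the counts in this sketch are toy values, not the paper's results:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Compute the threshold-dependent metrics defined in the text."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sen = tp / (tp + fn)                    # sensitivity / recall
    spe = tn / (tn + fp)                    # specificity
    prec = tp / (tp + fp)                   # precision
    f1 = 2 * prec * sen / (prec + sen)      # harmonic mean of PREC and REC
    return {"ACC": acc, "SEN": sen, "SPE": spe, "PREC": prec, "F1": f1}

m = diagnostic_metrics(tp=40, tn=35, fp=5, fn=10)
print({k: round(v, 4) for k, v in m.items()})
```

AUC is the exception: it integrates over all thresholds, so it is computed from the ranked prediction scores rather than from a single confusion matrix.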

    These metrics offer a holistic view of the model’s performance, covering various aspects of diagnostic accuracy. Each metric provides insight into a different dimension of the model’s effectiveness, ensuring a thorough evaluation of its capabilities in medical diagnosis.

    4.3 Effectiveness of the Prior Normalization Fusion(PNF)Strategy

    The study explores the effectiveness of the Prior Normalization Fusion (PNF) strategy by selecting ResNet-18 [20] as the baseline model and comparing various data input types as the training set. These include the original image, the cropped and resized lesion region, the background region obtained by masking the lesion region, the lesion-augmented and background-augmented images obtained by the PNF strategy, and the lesion-augmented and background-augmented images obtained by a global normalization fusion (GNF) strategy.

    Furthermore, to examine the impact of attention mechanisms on the models without prior information, the study evaluates Squeeze-and-Excitation (SE) [21], the Convolutional Block Attention Module (CBAM) [22], Channel Prior Convolutional Attention (CPCA) [23], and the Vision Transformer (ViT) [24] based on the global spatial attention mechanism self-attention [25]. The corresponding results are shown in Table 3.

    Table 3: Performance comparison using different data types as inputs and augmented by different attention mechanisms

    Table 3 indicates that ResNet-18, when trained with lesion-augmented and background-augmented images produced by the PNF strategy, surpasses models trained with images augmented by the GNF strategy or with original images. In particular, the lesion-augmented image obtained by the PNF strategy achieves an ACC of 84.77%, a 5.46% improvement over the original image and a 3.41% improvement over using only the lesion region. These results validate the superiority of the PNF strategy. Without utilizing the prior information, the SE attention mechanism improves the ACC of ResNet-18 to 80.22%, while the performance of CBAM, CPCA, and ViT is lower than that of the benchmark network model.

    4.4 Effectiveness of the Feature Differential Guidance(FDG)Module

    This study utilizes the lesion-augmented and background-augmented images obtained by the PNF strategy, together with the original image, as the training set, with ResNet-18 serving as the baseline model, to assess the role of the Feature Difference Guidance (FDG) module.

    Additionally, the impact of various fusion strategies on diagnostic performance is examined. These strategies include: 1) Slice-Dimension Concatenate: connect the three images in the slice dimension before modeling, 2) Channel-Dimension Concatenate: connect the three images in the channel dimension, and 3) ResNet-18 + Feature Concatenate: extract features using a distinct encoder for each input image type, then connect the features after each block of the model. The results are displayed in Table 4.
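    The channel-dimension baseline (strategy 2) amounts to stacking the three inputs along the channel axis before a single encoder; a toy NumPy sketch, with illustrative array shapes:

```python
import numpy as np

# single-channel original, lesion-augmented, and background-augmented images
orig = np.random.rand(1, 224, 224)
lesion_aug = np.random.rand(1, 224, 224)
bg_aug = np.random.rand(1, 224, 224)

# stack along the channel axis: one 3-channel input instead of three inputs
stacked = np.concatenate([orig, lesion_aug, bg_aug], axis=0)
print(stacked.shape)  # (3, 224, 224)
```

Unlike the FDG module, this early fusion gives the encoder no explicit signal about where the lesion- and background-augmented views disagree.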

    Table 4: Performance comparison using different fusion strategies

    As described in Tables 3 and 4, ResNet-18 with FDG modules demonstrates the best performance, with an ACC of 87.50%, higher than the other strategies. The other fusion strategies also bring performance improvements, reaching an ACC of 86.13% when feature concatenation is performed within the model.

    4.5 Ablation Experiments

    The study conducts ablation experiments on four mainstream backbones, ResNet, ViT, Swin Transformer [26], and ConvNeXt [27], to further explore the benefits of the PNF strategy and the FDG module. The relevant results are listed in Table 5, and the ROC curves are shown in Fig. 8.

    Table 5: Results of the ablation experiments for the prior normalization fusion (PNF) strategy and the feature difference guidance (FDG) module based on different backbone networks

    Figure 8: Average ROC curves and AUC values of the ablation experiments based on different backbone networks

    Table 5 and Fig. 8 illustrate that implementing the PNF strategy and the FDG module significantly improves the performance of models based on the CNN architecture, specifically ResNet-18 and ConvNeXt, as well as those based on the transformer architecture, such as ViT and Swin Transformer. This evidence underscores the effectiveness, generalization ability, and compatibility of these strategies across various network architectures.

    4.6 Comparison with Other Methods

    The study aims to differentiate between MFCP and PDAC using arterial-phase MRI scans. The PDGNet is compared to other pancreatic lesion diagnostic models, including the prior-free diagnostic network by Ziegelmayer et al., which employed VGG-19 to distinguish between AIP and PDAC [6]. Si et al. [9] used a prior-injected diagnostic network that first trained a pancreas segmentation model with U-Net32 [28] and then fed the segmentation results into ResNet-34 to distinguish between five different pancreatic lesions. The present study employs manual annotation instead of the segmentation results from U-Net.

    As described in Table 6, the PDGNet based on ConvNeXt outperforms the other models on all evaluation metrics for the MFCP and PDAC classification task. This further demonstrates that the implemented strategies can effectively alleviate the difficulty of discriminative feature extraction.

    Table 6: Performance comparison with other deep learning-based pancreatic lesion diagnosis methods

    The comparison may not be entirely fair, since the studies used different datasets, but it still provides a valuable reference for future research.

    5 Discussion

    The study investigates a concept frequently overlooked in existing research on deep learning for pancreatic lesion diagnosis: the background region, often considered “noise,” can actually provide valuable information for diagnostic models. As shown in Tables 1 and 2, the ACC and AUC of the diagnostic model reach 63.26% and 65.65% even when the training set consists entirely of images without lesion regions. When the lesion region is masked from complete images containing lesions, the ACC and AUC are 67.34% and 71.67%, underlining the significance of the background region in the diagnostic modeling dataset.

    Consequently, the Prior Normalization Fusion (PNF) strategy is proposed. The strategy, which fuses prior information before the data is input into the model, augments the feature recognizability of the prior (lesion and background) region while preserving the complete contextual details of the original image. As shown in Table 3, without the prior information, channel attention (SE) brings only a relatively limited performance improvement, with ACC and AUC increasing by 0.91% and 0.48%, respectively. In contrast, introducing spatial attention leads to a decline in model performance. This could be attributed to inherent noise in the image, which biases the attention mechanism in the absence of prior information. However, the GNF and PNF strategies demonstrate significant performance gains, particularly the PNF strategy, which improves the ACC and AUC of the benchmark network model by 5.46% and 4.11%, respectively.

    Moreover, the study observes that both the lesion-augmented and background-augmented images generated by the PNF strategy improve the diagnostic model’s performance. To explore the potential of this prior-augmented information more deeply, the Feature Difference Guidance (FDG) module is introduced. The module combines the original image with the lesion-augmented and background-augmented images so that they jointly participate in the model training process. The superiority of this fusion strategy is further confirmed by the data in Table 4, where the FDG module demonstrates the best performance.

    Ablation experiments on convolutional neural networks (ResNet-18 and ConvNeXt) and Transformer-based architectures (ViT and Swin Transformer) show that the proposed Prior Difference Guidance Network (PDGNet), with the PNF strategy and the FDG module, achieves significant improvements on all four frameworks. On ConvNeXt in particular, the ACC and AUC of the model improve to 89.77% and 92.80%, respectively.

    In summary, the study confirms that the background region carries useful diagnostic information that models should exploit more fully. The PDGNet, incorporating the PNF strategy and FDG module, significantly improves diagnostic accuracy for MFCP and PDAC by uniquely leveraging prior information from both the lesion and background regions.

    Although the model achieves excellent performance, it has some limitations. First, the clinical dataset used may lack diversity and size. Nevertheless, the network demonstrates robustness and effectiveness under these data constraints by accurately extracting and analyzing key discriminative features, which is promising for application in a wider range of clinical scenarios. Second, a notable shortcoming of the proposed approach is its long training time and large number of model parameters; continued optimization and algorithmic improvements are expected to significantly reduce training time and improve model efficiency. Third, the model has not yet been extensively tested in real-world clinical settings. However, the preliminary findings and the model's theoretical design indicate that, with further refinement and validation, it can serve as an effective tool for assisted diagnosis in clinical environments. Future research will focus on collecting more clinical data to enhance the model's generalization ability, exploring more efficient algorithms and network architectures to optimize the training process and minimize computational resource requirements, and conducting validations in actual clinical settings to confirm its effectiveness and feasibility. The ultimate goal is to improve the accuracy and reliability of automated diagnosis and to bring these models into clinical practice, offering more effective diagnostic tools for physicians and patients.

    6 Conclusions

    This study proposes a novel approach for deep learning pancreatic lesion diagnosis that not only focuses on the lesion region but also fully utilizes the information in the background region. The study observes that even background regions without obvious lesions contain valuable information that aids diagnosis. Drawing on this insight, the Prior Difference Guidance Network (PDGNet) significantly improves the performance of MFCP and PDAC diagnostic models through the Prior Normalization Fusion (PNF) strategy and the Feature Difference Guidance (FDG) module.

    The PNF strategy preserves the complete contextual information of the original image while augmenting the feature recognizability of the prior regions by fusing in the prior information of the lesion and the background. The FDG module, in turn, combines the original image with the lesion-augmented and background-augmented fusion images so that all of them participate in the model's training process, further improving accuracy. Ablation experiments conducted on several prominent deep learning networks, including ResNet-18, ConvNeXt, Vision Transformer (ViT), and Swin Transformer, substantiate the effectiveness of this approach.

    In conclusion, the study emphasizes the importance of contextual information in deep learning pancreatic lesion diagnosis and proposes new methods that utilize this information more fully to improve model performance. The study provides a valuable reference for future medical image diagnosis research, suggesting that scholars should not only focus on salient target regions but also pay full attention to the background information that is often overlooked.

    Acknowledgement: The authors would like to express their gratitude to Prof. He and Prof. Hao for supervising this study.

    Funding Statement: This research is supported by the National Natural Science Foundation of China (No. 82160347); the Yunnan Key Laboratory of Smart City in Cyberspace Security (No. 202105AG070010); and the Project of Medical Discipline Leader of Yunnan Province (D-2018012).

    Author Contributions: The authors confirm contribution to the paper as follows: Conceptualization, L.C. and K.B.; data curation, K.B. and Y.C.; investigation, L.C., K.B. and Y.C.; methodology, L.C.; formal analysis, L.C.; project administration, J.H. (Jianfeng He); supervision, J.H. (Jianfeng He) and J.H. (Jingang Hao); writing—original draft preparation, L.C.; writing—review and editing, L.C., J.H. (Jianfeng He) and J.H. (Jingang Hao). All authors have read and agreed to the published version of the manuscript.

    Availability of Data and Materials: The datasets generated during this study are not publicly available due to privacy and ethical considerations; however, anonymized data can be provided by the corresponding author upon reasonable request and with the approval of the Ethics Committee. Researchers interested in accessing the data should contact the corresponding author (Prof. Hao, kmhaohan@163.com) for further information.

    Ethics Approval: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Second Affiliated Hospital of Kunming Medical University (No. 2023-156).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
