
    A Novel Unsupervised MRI Synthetic CT Image Generation Framework with Registration Network

Computers, Materials & Continua, 2023, Issue 11

Liwei Deng, Henan Sun, Jing Wang, Sijuan Huang and Xin Yang★

1Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, China

2Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, 510631, China

3Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China

ABSTRACT In recent years, radiotherapy based only on Magnetic Resonance (MR) images has become a hot spot in radiotherapy planning research. However, computed tomography (CT) is still needed for dose calculation in the clinic. Recent deep-learning approaches to synthesizing CT images from MR images have attracted much research interest, making radiotherapy based only on MR images possible. In this paper, we propose a novel unsupervised image synthesis framework with registration networks. The framework enforces the constraints between the reconstructed image and the input image by registering the reconstructed image with the input image, and likewise registering the cycle-consistent image with the input image. Furthermore, we add ConvNeXt blocks to the network and use large-kernel convolutional layers to improve the network's ability to extract features. We experimented on head and neck data collected from 180 patients with nasopharyngeal carcinoma and evaluated the trained model with four metrics, while quantitatively comparing several commonly used model frameworks. The model achieves a Mean Absolute Error (MAE) of 18.55±1.44, a Root Mean Square Error (RMSE) of 86.91±4.31, a Peak Signal-to-Noise Ratio (PSNR) of 33.45±0.74, and a Structural Similarity (SSIM) of 0.960±0.005. Compared with other methods, MAE decreased by 2.17, RMSE decreased by 7.82, PSNR increased by 0.76, and SSIM increased by 0.011. The results show that the proposed model outperforms other methods in the quality of image synthesis. This work provides guidance for the study of MR-only radiotherapy planning.

KEYWORDS MRI-CT image synthesis; variational auto-encoder; medical image translation; MRI-only based radiotherapy

    1 Introduction

Cancer is widely considered a threat to public health, and its incidence rate is increasing yearly [1,2]. Among mainstream cancer treatments, radiation therapy [3] is the earliest and most widely used. In modern clinical practice, the use of Magnetic Resonance (MR) and Computed Tomography (CT) images during radiation therapy is unavoidable. Because MR images provide high-contrast imaging of soft tissues, they are very important for determining the location and size of tumors. In addition, MR imaging is free of ionizing radiation and supports multi-sequence imaging. CT images, in turn, provide the electron density information needed for dose calculation during radiotherapy of cancer patients, which cannot be obtained from MR images. However, acquiring CT images exposes the patient to radiation, with negative implications for the patient's health. As a result, both CT and MR images are usually acquired during treatment planning. Furthermore, MR images must be registered with CT images for further treatment, and this registration can introduce errors [4].

Given the above problems, some researchers have begun to study methods for generating CT images from MR images alone [5,6]. Achieving radiotherapy with MR alone is challenging. Researchers have synthesized CT from MRI (sCT) through various methods, which can be broadly classified into three classes [7,8]. The first class is voxel-based [9], which requires accurate segmentation of MRI tissues but takes a long time to complete. The second class is atlas-based [10]: MR and CT are registered to obtain a deformation field, which is then used to map the atlas CT onto the patient MR to obtain sCT. However, these methods rely on high-precision registration, and the registration accuracy directly affects the synthesized sCT. The third class is learning-based [11]: starting from existing image data, a nonlinear mapping between the two data distributions is found, and the task of synthesizing sCT is realized using this nonlinear relationship. Among the many different methods, deep learning-based techniques [12,13] have demonstrated their ability to produce high-quality sCT images. Deep-learning methods for synthesizing sCT can be divided into supervised and unsupervised approaches. Supervised methods require datasets that are strictly aligned and paired. Researchers have attempted MR-to-CT synthesis with paired data using conditional Generative Adversarial Networks [14,15]. During data preprocessing, registration accuracy often significantly impacts the image quality generated by the network, so the paired MR and CT images must be strictly registered. Moreover, strictly aligned data are challenging to obtain in practice, which undoubtedly increases the difficulty of such studies. To reduce the difficulty of data acquisition, unsupervised methods perform MR-to-CT synthesis from unpaired data. CycleGAN [16], a typical unsupervised learning network, is currently widely used in image synthesis. For example, Wolterink et al. [17] used CycleGAN for brain MR-to-CT synthesis. CycleGAN uses a bidirectional network structure to generate images in both directions, and a cycle-consistency loss is added to constrain the structural consistency within each modality. However, the training of CycleGAN is extremely unstable, which can easily cause mode collapse, and the network is often difficult to converge. Xiang et al. [18] added a structural dissimilarity loss to strengthen the constraint between images by capturing anatomical structures, improving the quality of synthetic CT. Yang et al. [19] introduced modality-neighborhood descriptors to constrain the structural consistency between input and synthesized images.

This research proposes a novel unsupervised image synthesis framework with registration networks for synthesizing CT images from MR images. Like other researchers, we adopt a bidirectional structure similar to CycleGAN. The primary contributions of this work are as follows:

• To complete the task of MRI-to-CT conversion, we propose an image generation network based on the combination of a variational auto-encoder and a generative adversarial network. We add registration networks in both directions to strengthen the structural consistency between the input image and the reconstructed image, as well as between the input image and the cycle-consistent image.

• We introduce a new correction loss function to strengthen the constraints between images, resulting in higher-quality synthetic images. The correction loss is computed jointly with the registration network. Furthermore, we add ConvNeXt blocks to the network; this new convolution block has been proven effective, and its performance exceeds that of some Transformer blocks.

• Extensive experiments demonstrate the effectiveness of our method. We conduct extensive experiments against several popular frameworks, and the proposed method outperforms them in modality conversion from MR to CT images. We also conduct ablation experiments to confirm the effectiveness of each component.

    2 Methods and Materials

    2.1 Model Architecture

The framework proposed in this paper is based on Variational Auto-Encoders (VAEs) [20-22] and Generative Adversarial Networks (GANs) [23]. The network framework is shown in Fig. 1. The network consists of eight sub-networks: two image encoders $E_{MR}$ and $E_{CT}$, two image generators $G_{MR}$ and $G_{CT}$, two discriminators $D_{MR}$ and $D_{CT}$, and two registration networks $R_{MR}$ and $R_{CT}$ for enhancing cycle-constraints. Since unpaired MR images are synthesized into sCT images in this task, the generated sCT images lack genuine labels to constrain the pseudo-CT; this paper therefore adopts the same bidirectional structure as CycleGAN [16], comprising a synthesis direction from MR to CT and a synthesis direction from CT to MR. Taking MR-to-pseudo-CT synthesis as an example, an $X_{MR}$-domain image is used as the input to the model, the image is encoded by the $X_{MR}$-domain image encoder, and the resulting image code is fed into the $X_{CT}$-domain image generator to synthesize the target-domain pseudo-CT. Similarly, the pseudo-CT is fed into the $X_{CT}$ image encoder as the input of the CT-to-MR direction to obtain an image code, and this code is fed into the $X_{MR}$-domain image generator to be converted back into the original MR image. Two discriminators evaluate the authenticity of images from the two image domains and compete with the generators to achieve adversarial training. Finally, the registration network registers the reconstructed MR image with the original MR, and also registers the cycle-consistent MR image with the original MR. The reconstructed MR image must be consistent with the original MR image, and the same holds for the cycle-consistent image. In this way, a nonlinear mapping is created between unpaired image data. The network is trained through the above process, and the transformation of each image domain includes an image encoder, an image generator, a discriminator, and a registration network.
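To make the data flow concrete, the following minimal PyTorch-style sketch traces one MR-to-CT training cycle as described above; the module names (`enc_mr`, `gen_ct`, etc.) and interfaces are illustrative assumptions, not the authors' released code.

```python
import torch

# Hypothetical sub-networks; each is assumed to be an nn.Module:
# enc_mr, enc_ct : image encoders E_MR, E_CT
# gen_mr, gen_ct : image generators G_MR, G_CT

def mr_to_ct_cycle(x_mr, enc_mr, enc_ct, gen_mr, gen_ct):
    """One forward pass of the MR->CT direction, per Section 2.1."""
    c_mr = enc_mr(x_mr)    # encode the MR image into a latent code
    sct = gen_ct(c_mr)     # decode the code in the CT domain -> pseudo-CT
    x_rec = gen_mr(c_mr)   # same-modality reconstruction of the MR input
    c_ct = enc_ct(sct)     # re-encode the pseudo-CT
    x_cyc = gen_mr(c_ct)   # cycle back to the MR domain
    return sct, x_rec, x_cyc  # x_rec and x_cyc feed the registration networks
```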

    2.2 Generators and Discriminator

In the model proposed in this paper, both the encoder for encoding images and the generator for synthesizing images adopt the ConvNeXt [24] module as their main structure. The ConvNeXt module draws on the successful experience of the Vision Transformer (ViT) [25,26] and convolutional neural networks: it builds a pure convolutional network whose performance surpasses advanced Transformer-based models. ConvNeXt starts from the standard ResNet-50 [27] and modernizes it to bring the design closer to ViT. In the module, depthwise separable convolutions with a kernel size of seven are used to enlarge the receptive field of the model and extract deeper information from the images. Using depthwise separable convolutions effectively mitigates the computational cost of large convolution kernels.
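As a reference, the sketch below shows the standard ConvNeXt block layout (7×7 depthwise convolution, LayerNorm, inverted-bottleneck 1×1 convolutions, GELU, residual connection); channel counts and the omission of layer scale/stochastic depth are illustrative assumptions rather than the exact configuration used here.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Standard ConvNeXt block: 7x7 depthwise conv -> LayerNorm ->
    inverted bottleneck (1x1 expand, GELU, 1x1 project) -> residual add."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)                  # applied over channels
        self.pwconv1 = nn.Linear(dim, expansion * dim) # 1x1 conv as Linear in NHWC
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)  # NCHW -> NHWC for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)  # back to NCHW
        return residual + x
```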

Figure 1: Flowchart of the network framework for synthesizing sCT based on VAE and CycleGAN. The black lines represent the cycle in which the CT image domain participates, and the blue lines represent the cycle in which the MR image domain participates

In this paper, the two image encoders $E_{MR}$ and $E_{CT}$ each include three downsampling convolutional layers and an inverted bottleneck composed of six ConvNeXt modules. Each downsampling layer contains a convolution, instance normalization (IN), a leaky rectified linear unit (LReLU), and SAME padding. The first convolutional layer has a kernel size of 7 × 7, and the next two have a kernel size of 4 × 4. Both image generators $G_{MR}$ and $G_{CT}$ contain an inverted bottleneck consisting of six ConvNeXt blocks and three upsampling convolutional layers. The first two upsampling layers use a sampling factor of 2, together with IN, LReLU, and SAME padding. The activation function of the last layer is Tanh. The specific network structures of the encoder, generator, and discriminator are shown in Fig. 2.
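A minimal sketch of this encoder/generator layout, reusing the `ConvNeXtBlock` sketched above; strides, channel widths, and the upsampling operator are assumptions inferred from the description.

```python
import torch.nn as nn

def conv_in_lrelu(cin, cout, k, stride):
    # "SAME"-style padding for the given kernel and stride
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, stride=stride, padding=(k - stride) // 2),
        nn.InstanceNorm2d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )

# Encoder: three downsampling convs (7x7, then two 4x4) + six ConvNeXt blocks
encoder = nn.Sequential(
    conv_in_lrelu(1, 64, k=7, stride=1),
    conv_in_lrelu(64, 128, k=4, stride=2),
    conv_in_lrelu(128, 256, k=4, stride=2),
    *[ConvNeXtBlock(256) for _ in range(6)],
)

# Generator mirrors the encoder: six ConvNeXt blocks + three upsampling layers,
# with Tanh on the last layer
generator = nn.Sequential(
    *[ConvNeXtBlock(256) for _ in range(6)],
    nn.Upsample(scale_factor=2), conv_in_lrelu(256, 128, k=3, stride=1),
    nn.Upsample(scale_factor=2), conv_in_lrelu(128, 64, k=3, stride=1),
    nn.Conv2d(64, 1, kernel_size=7, padding=3),
    nn.Tanh(),
)
```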

Figure 2: Flow chart of the concrete encoder, generator, and discriminator architectures. The encoder and generator are symmetrical structures. Multi-scale discriminators and generators are used for adversarial training

Most discriminators in Generative Adversarial Networks use PatchGAN [28]: features are extracted from the image through a convolutional network, and the output matrix is used to evaluate the image's authenticity. The head of the image often contains complex texture information, while the shoulder contains relatively little. However, the $N \times N$ patch output of PatchGAN is fixed: dividing the image into large patches loses detailed information, while small patches lead to high computational costs. The discriminator used in this paper is therefore a multi-scale discriminator, which learns information from different scales simultaneously.

The discriminator consists of three convolution blocks, where each block comprises five convolutional layers and an average pooling operation. The first four convolutional layers each comprise a convolution with a kernel size of 4 and a stride of 2 followed by LReLU; a final convolutional layer with a kernel size of 1 outputs an $N \times N$ matrix, and the evaluation result is obtained through average pooling. After the three convolution blocks, the multi-scale discriminator outputs evaluation matrices at different scales for loss calculation, ensuring that it learns image features from different scales. Two such multi-scale discriminators, $D_{CT}$ and $D_{MR}$, are used in the network.
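A sketch of one such block and the three-scale discriminator, under the assumptions above (channel widths and the inter-scale downsampling scheme are illustrative):

```python
import torch
import torch.nn as nn

class PatchBlock(nn.Module):
    """Four stride-2 4x4 conv + LReLU layers, a 1x1 conv producing an NxN
    score map, then average pooling to a scalar evaluation."""
    def __init__(self, cin=1, width=64):
        super().__init__()
        layers, c = [], cin
        for i in range(4):
            layers += [nn.Conv2d(c, width * 2**i, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            c = width * 2**i
        self.features = nn.Sequential(*layers)
        self.score = nn.Conv2d(c, 1, kernel_size=1)

    def forward(self, x):
        return self.score(self.features(x)).mean(dim=(2, 3))  # pooled score

class MultiScaleDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([PatchBlock() for _ in range(3)])
        self.pool = nn.AvgPool2d(2)

    def forward(self, x):
        scores = []
        for block in self.blocks:
            scores.append(block(x))  # one evaluation per scale
            x = self.pool(x)         # halve resolution for the next block
        return scores
```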

The registration network used in this research is consistent with RegGAN [29]. It has seven downsampling layers composed of residual blocks, where each residual block uses a kernel size of 3 and a stride of 1. The bottleneck uses three residual blocks, and the upsampling path likewise consists of seven residual modules. Finally, a convolutional layer outputs the registration result. The specific structure of the registration network is shown in Fig. 3.

Figure 3: The structure of the registration network, which uses the ResUnet network structure

    2.3 Loss Functions

This paper designs a composite loss, which includes an encoder loss, a generator loss, a discriminator loss, and the smoothing and correction losses of the registration network. The generative model has a symmetrical architecture, and the model structure of the two synthesis directions is the same. For convenience of expression, this paper uses $X_{CT}$ and $X_{MR}$ to represent images from the CT and MR domains, $X_{rec}$ and $X_{cyc}$ to represent the reconstructed and cycle-consistent images, and $c$ to represent the image code output by the encoder.

    2.3.1 Encoder Loss

For the encoder loss, similar to Liu et al. [22], this paper penalizes the deviation of the latent code distribution from the prior distribution. The concrete implementation is as follows:
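A plausible form of this penalty, assuming the UNIT-style KL term with a zero-mean unit-variance Gaussian prior and unit encoder variance, under which the KL divergence reduces to an $\ell_2$ penalty on the code:

$$\mathcal{L}_{enc} = \lambda_1 \sum_{i=1}^{N} c_i^{\,2}$$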

where the value of $\lambda_1$ is 0.01 and $N$ is the dimension of the image code.

    2.3.2 Adversarial Loss

The generator synthesizes the corresponding image from the input image code, matching the original image as closely as possible. At the same time, the synthesized images should cheat the discriminator as much as possible. The generator's total adversarial loss is as follows:
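A sketch of this objective, assuming a least-squares GAN formulation over both synthesis directions:

$$\mathcal{L}_{G} = \mathbb{E}_{X_{MR}}\Big[\big(D_{CT}\big(G_{CT}(E_{MR}(X_{MR}))\big) - 1\big)^2\Big] + \mathbb{E}_{X_{CT}}\Big[\big(D_{MR}\big(G_{MR}(E_{CT}(X_{CT}))\big) - 1\big)^2\Big]$$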

In addition, the discriminator judges the authenticity of the input image, minimizing the loss on real images and maximizing the loss on images synthesized by the generator. There is a corresponding discriminator in each synthesis direction. The total discriminator loss is as follows:
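Correspondingly, a least-squares sketch of one direction's discriminator objective (the MR-direction term is symmetric), under the same assumption:

$$\mathcal{L}_{D_{CT}} = \mathbb{E}_{X_{CT}}\Big[\big(D_{CT}(X_{CT}) - 1\big)^2\Big] + \mathbb{E}_{X_{MR}}\Big[D_{CT}\big(G_{CT}(E_{MR}(X_{MR}))\big)^2\Big]$$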

    2.3.3 Reconstruction Loss

The reconstruction loss primarily includes the cycle-consistency loss of the model and the reconstruction loss of the same-modality image. The cycle-consistency loss is as follows:
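A plausible form of this term, assuming the standard CycleGAN-style L1 cycle-consistency penalty in both directions:

$$\mathcal{L}_{cyc} = \lambda_2 \Big( \mathbb{E}\,\big\| X_{cyc}^{MR} - X_{MR} \big\|_1 + \mathbb{E}\,\big\| X_{cyc}^{CT} - X_{CT} \big\|_1 \Big)$$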

where $\lambda_2$ is the loss weight ratio, and its value is 10.

The image reconstruction loss means that the code output by the encoder is fed to the generator of the same modality, which reconstructs an image matching the original input. This loss is comparable to the identity loss in CycleGAN. It is calculated as follows:
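An assumed L1 form of this same-modality reconstruction term:

$$\mathcal{L}_{rec} = \mathbb{E}\,\big\| G_{MR}(E_{MR}(X_{MR})) - X_{MR} \big\|_1 + \mathbb{E}\,\big\| G_{CT}(E_{CT}(X_{CT})) - X_{CT} \big\|_1$$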

    2.3.4 Registration Loss

The original image is taken as the fixed image, and the reconstructed or cycle-consistent image as the floating image. The floating image is registered to the original image through the registration network $R$ to obtain the deformation field $T$; the floating image is then warped by $T$, and the correction loss between the warped image and the original is calculated. The loss function is:
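A sketch of this correction term consistent with the description and the clause below, where $X \circ T$ denotes the image warped by the field $T$ (the L1 metric is an assumption):

$$\mathcal{L}_{corr} = \lambda_3 \Big( \big\| X_{rec} \circ T_1 - X_{real\_1} \big\|_1 + \big\| X_{cyc} \circ T_2 - X_{real\_2} \big\|_1 \Big)$$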

where $X_{real\_1}$ and $X_{real\_2}$ represent real images of the same modality as $X_{rec}$ and $X_{cyc}$, respectively, and $T_1$ and $T_2$ represent different deformation fields. $\lambda_3$ is the loss weight ratio, and its value is 20.
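In PyTorch, this warp-then-compare step is commonly implemented with `grid_sample`; the sketch below is one such implementation, assuming the registration network takes the concatenated image pair and outputs per-pixel displacements in normalized coordinates (both conventions are assumptions).

```python
import torch
import torch.nn.functional as F

def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `image` (N,C,H,W) by `flow` (N,2,H,W), where flow holds per-pixel
    displacements in normalized [-1,1] coordinates (assumed convention)."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)  # identity grid
    grid = base + flow.permute(0, 2, 3, 1)                   # add displacements
    return F.grid_sample(image, grid, align_corners=True)

def correction_loss(fixed, floating, reg_net, weight=20.0):
    """L1 between the warped floating image and the fixed image (lambda_3=20)."""
    flow = reg_net(torch.cat([floating, fixed], dim=1))  # predict field T
    return weight * F.l1_loss(warp(floating, flow), fixed)
```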

At the same time, this work smooths the deformation field and designs a loss function that minimizes the deformation field's gradient in order to enforce its smoothness. The smoothing loss of the field is consistent with RegGAN [29], so the loss function can be expressed via the Jacobian determinant as below:
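A plausible reconstruction consistent with the clause below, assuming a RegGAN-style gradient penalty on the field $T = (T_x, T_y)$:

$$J(m,n) = \det\begin{pmatrix} \dfrac{\partial T_x(m,n)}{\partial x} & \dfrac{\partial T_x(m,n)}{\partial y} \\[6pt] \dfrac{\partial T_y(m,n)}{\partial x} & \dfrac{\partial T_y(m,n)}{\partial y} \end{pmatrix}, \qquad \mathcal{L}_{smooth} = \lambda_4 \sum_{(m,n)} \big\| \nabla T(m,n) \big\|_2^2$$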

wherein each entry is the partial derivative of the field at the point $(m,n)$ with respect to the image directions $(x,y)$, and $J(m,n)$ represents the value of the Jacobian determinant at the point $(m,n)$. $\lambda_4$ is the loss weight ratio, and its value is 10.

In summary, the overall optimization objective of this paper is as follows:
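A plausible composition of the terms above (weights absorbed into each term; the discriminator loss $\mathcal{L}_{D}$ is assumed to be minimized in alternation, as in standard adversarial training):

$$\mathcal{L}_{total} = \mathcal{L}_{enc} + \mathcal{L}_{G} + \mathcal{L}_{cyc} + \mathcal{L}_{rec} + \mathcal{L}_{corr} + \mathcal{L}_{smooth}$$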

    2.4 Evaluation Criterion

In this research, four widely used evaluation metrics are used to quantitatively evaluate the quality of the sCT generated by the proposed model: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM).

The MAE metric reflects the actual voxel-wise error between real CT and sCT. It avoids error cancellation and thus accurately reflects the model's prediction error; minimizing MAE indicates stronger model performance. The MAE is computed as follows:
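A standard form, assuming $K$ test pairs and denoting the synthesized image by $sCT(X_{MR}(k))$:

$$\mathrm{MAE} = \frac{1}{K} \sum_{k=1}^{K} \Big| X_{CT}(k) - sCT\big(X_{MR}(k)\big) \Big|$$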

where $X_{CT}(k)$ and $X_{MR}(k)$ represent the $k$th set of test data.

The RMSE measures the standard deviation of the error between images; as with MAE, minimizing RMSE indicates better model performance. It is calculated as follows:
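Under the same notation as the MAE above:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \Big( X_{CT}(k) - sCT\big(X_{MR}(k)\big) \Big)^2}$$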

The PSNR is an objective standard for evaluating images; a larger PSNR indicates that the image synthesized by the model is less distorted. It is calculated as follows:
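The standard peak-signal form, using the $HU\_MAX$ defined below and the RMSE above:

$$\mathrm{PSNR} = 20 \log_{10} \frac{HU\_MAX}{\mathrm{RMSE}}$$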

where $HU\_MAX$ represents the maximum intensity of the CT and pseudo-CT images.

Usually, the SSIM metric reflects the similarity between two images, mainly measuring the correlation between adjacent HU values. Maximizing SSIM indicates that the images synthesized by the model are more similar to the real ones. It is calculated as follows:
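The standard SSIM form, with $\mu$ and $\sigma^2$ the local means and variances, $\sigma_{xy}$ the covariance, and $c_1$, $c_2$ stabilizing constants:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$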

    3 Data Acquisition and Processing

This paper obtained CT and MR image data from 180 patients with nasopharyngeal carcinoma; the MR and CT images were acquired during regular clinical treatment. These 180 patients served as the model's training and testing data. CT images were obtained with a Siemens scanner at an image size of 512 × 512. T1-weighted MR images were obtained on a Philips Medical Systems MR simulator with a magnetic field strength of 3.0 T at a size of 720 × 720. The project was approved by the Ethics Committee of Sun Yat-sen University Cancer Center, which waived the requirement for informed consent. This research uses the volume surface contour data in the radiotherapy (RT) structure set to construct an image mask, retaining the image content and removing invalid information outside the mask. The specific image processing pipeline is shown in Fig. 4. The corresponding CT and MR images of each patient were aligned using affine and deformable registration in the open-access medical image registration library ANTs. For better network training, the original images were cropped to 256 × 384; since the trainable information in the head and neck data occupies a small proportion of the image, the images were finally cropped to 256 × 256 to further accelerate training. For shoulder images, the overlapped parts of the two shoulder images are spliced by averaging during testing. Based on the dataset information, the Hounsfield Unit (HU) range of CT was [-1024, 3072]; this is normalized to [-1, 1] during training to speed up the model's training. The dataset is roughly divided according to a ratio of 6:2:2: 110 cases are randomly selected as the training set, and 35 cases each are randomly selected as the evaluation set and the test set.
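The HU normalization step, as one straightforward sketch (the clipping of out-of-range values is an assumption):

```python
import numpy as np

HU_MIN, HU_MAX = -1024.0, 3072.0

def normalize_hu(ct: np.ndarray) -> np.ndarray:
    """Clip CT intensities to [-1024, 3072] HU and rescale linearly to [-1, 1]."""
    ct = np.clip(ct, HU_MIN, HU_MAX)
    return 2.0 * (ct - HU_MIN) / (HU_MAX - HU_MIN) - 1.0
```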

    Figure 4:Implementation of specific operations for image preprocessing

    4 Experiment and Result

    4.1 Training Details

All models in this study are built with the PyTorch framework (PyTorch 1.8.1, Python 3.8). All experiments reported in this paper were trained on an RTX 2080 Ti GPU with 11 GB of memory. The models were trained with the Adam optimizer, a learning rate of 1e-4 and $(\beta_1, \beta_2) = (0.5, 0.999)$, for 80 epochs with a batch size of 1.
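For reference, this optimizer configuration in PyTorch, reusing the `encoder`/`generator` names from the sketches above (the split of parameters into a single generative optimizer is an assumption):

```python
import itertools
import torch

# Adam with lr=1e-4 and betas=(0.5, 0.999), as stated in Section 4.1
opt_g = torch.optim.Adam(
    itertools.chain(encoder.parameters(), generator.parameters()),
    lr=1e-4, betas=(0.5, 0.999),
)
```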

4.2 Comparing the Quality of sCT Synthesized by Different Methods

Table 1 compares three conventional commonly used frameworks, CycleGAN [16], UNIT [22], and MUNIT [30], plus the latest RegGAN [29] framework, with the method presented in this study. The experimental findings in Table 1 show that the proposed method performs best on all four evaluation metrics and is superior to the other four frameworks. The MAE score is 18.55±1.44, a decrease of 2.17; the RMSE score is 86.91±4.31, a decrease of 7.82; the PSNR score is 33.45±0.74, an increase of 0.76; and the SSIM score is 0.960±0.005, an increase of 0.011. It can be concluded from these metrics that the quality of sCT synthesized by the proposed method is superior to that of the other methods. In addition, the p-value of the paired Student's t-test between methods is calculated, and it indicates a significant improvement (p < 0.05).

Table 1: Comparison of sCT generated by different methods across four evaluation metrics

Fig. 5 shows a comparison between the above four frameworks and the proposed method for synthesizing the anatomical structure of head slices. This paper limits the HU error between genuine CT and sCT to [-400, 400]. The results show that the proposed method has the smallest error between the synthetic head sCT slice and the original CT and the highest anatomical similarity to the original CT. The sCT synthesized in this paper is more similar to genuine CT in areas with complex head texture. In Fig. 6, the performance of the five models on the test set is shown with violin and box plots. The violin plots show that the evaluation metrics of the sCT synthesized by this model are concentrated on the better side for each patient. Fig. 6 is drawn using the Hiplot [31] platform.

    Figure 5:The concrete realization of HU differences between sCT and genuine CT predicted by five different methods ranging from[-400,400]

    Figure 6:Box plot gives the median and quartile ranges of four evaluation metrics of five models on the test set.Violin plots show the distribution and density of the predicted data of the five models on the test set

Qualitative comparison further illustrates that the anatomical structure of the sCT synthesized by this method is more similar to genuine CT. Fig. 7 shows randomly selected real CT images and the corresponding sCT images produced by the proposed model. In the figure, the areas marked by the blue and red boxes are enlarged in the upper-right and lower-right corners of each image, respectively. The figure visually compares the synthesis quality of bone in the sCT images. Across the three sets of images, the proposed method outperforms the other four methods in the quality of synthesized bone tissue. At the same time, it has advantages in synthesizing some texture details, such as the red-marked area of the first group of images. This shows that the proposed method can translate MR images into their sCT counterparts more effectively.

In addition, as shown in Fig. 8, sagittal images of three patients were randomly selected for this research. Comparing the patients' sagittal images makes it evident that the proposed method outperforms the other four methods in synthesis quality: the head and neck bones are more similar to genuine CT images. In addition, the texture synthesized by the proposed method is clearer and more delicate, and the similarity to the actual CT is higher in the complex texture area of the head cavity.

Figure 7: From left to right: genuine CT, sCT synthesized by CycleGAN, sCT synthesized by UNIT, sCT synthesized by MUNIT, sCT synthesized by RegGAN, and sCT synthesized by the proposed method. The upper right corner of each image is a local enlargement of the bones or tissues in the blue frame, and the lower right corner is a local enlargement of the bones or tissues in the red frame

Figure 8: Sagittal view of the images. From left to right: real CT, sCT synthesized by CycleGAN, sCT synthesized by UNIT, sCT synthesized by MUNIT, sCT synthesized by RegGAN, and sCT synthesized by the method proposed in this paper

    4.3 Ablation Study

The dataset used in the ablation experiments is the same as in the experiments above. This research performs ablation experiments on the essential parts of the proposed method to demonstrate their effectiveness: adding ConvNeXt blocks, adding the additional registration networks, and computing the correction loss between the registered images and the ground truth to constrain the structural similarity between genuine and reconstructed images and between genuine and cycle-consistent images. The experimental findings after ablating each part are shown in Table 2. Based on UNIT [22], this study adds the different components to UNIT and carries out four groups of experiments.

Table 2: Ablation study: each component improves the model

The experimental findings in Table 2 show that the components of the proposed method are effective for synthesizing sCT from MR images. The ConvNeXt block with large-kernel convolution enlarges the receptive field, extracts more detailed image features, and enhances the network's handling of image details and textures. The proposed registration networks combined with the correction loss significantly improve the MR-to-sCT synthesis task on all four evaluation metrics. Finally, the best evaluation results are obtained by combining all components.

    5 Discussion

This research proposes a new unsupervised image synthesis framework with registration networks to solve the task of synthesizing CT images from magnetic resonance images. It is trained on unpaired head and neck data to avoid the effects of a severe shortage of paired data. The experimental results in Table 1 show that the proposed method has an obvious performance advantage: it significantly outperforms the current mainstream frameworks, and relative to the benchmark network UNIT selected in this paper, MAE improves from 20.72 to 18.55, RMSE from 94.73 to 86.91, PSNR from 32.69 to 33.45, and SSIM from 0.949 to 0.960. The proposed method adds the simple and effective ConvNeXt block to expand the receptive field of the model and obtain deeper image features. In addition, this study introduces a registration network and an image correction loss to strengthen the constraints between the reconstructed image and the input image, as well as between the cycle-consistent image and the input image, and to enhance the model's control over the generated domain.

To intuitively show the advantages of the proposed method for sCT synthesis, this research presents error maps between the sCT from different methods and the genuine CT. The error map between sCT and genuine CT is shown in Fig. 5, which shows that the sCT synthesized by the proposed method is more similar to the original CT in texture details. The partial enlargements in Fig. 7 show that the method is superior to the others in synthesizing bone and some texture details. In addition, the sagittal views in Fig. 8 show that the CT synthesized by this method performs better in the sagittal plane than the other four methods: the bone and texture regions are more continuous, indicating that the model preserves information related to adjacent slices when synthesizing CT. Compared with other networks, the proposed method adds ConvNeXt blocks, effectively enlarging the model's receptive field and establishing long-range relationships within the network. In addition, the added registration networks and image correction loss strengthen the constraints between the reconstructed and genuine images and between the cycle-consistent and genuine images, and enhance the model's ability to control its own domain patterns.

Table 2 shows the results of the ablation experiments on the proposed method's components. The findings demonstrate that each component improves the performance of the network. In particular, the correction loss proposed in this study significantly improves performance, and adding ConvNeXt blocks enhances performance by optimizing the network's receptive field. The results show that the proposed method significantly strengthens the constraints between images: the registration network registers both the reconstructed and cycle-consistent images with the original images, and the correction loss is computed between the genuine and registered images, thereby reducing the uncertainty of the generator.

In this paper, we proposed a 2D model framework for synthesizing CT images from MR images. However, some areas still need improvement. Although the proposed method can synthesize images from unpaired data, 2D slice data lose contextual information, resulting in a lack of correlation between adjacent slices. We will build a 3D model based on the proposed method to solve these problems, improve the accuracy of synthesis, and apply it to radiotherapy planning.

    6 Conclusion

This paper proposes a novel method for synthesizing CT images from MR images based primarily on Variational Auto-Encoders and Generative Adversarial Networks. We conduct experiments using head and neck data from patients with nasopharyngeal carcinoma and evaluate them with four metrics. The experimental results in Table 1 and the error maps of sCT vs. genuine CT in Fig. 5 demonstrate that the proposed method outperforms the four currently popular generation methods in both visual quality and objective metrics, with minimal error relative to genuine CT. In Fig. 7, the CT synthesized by the proposed method is superior to the other methods in the details of the bone regions, and Fig. 8 shows that it achieves better coherence in the sagittal plane. The ablation study proves the effectiveness of the components of the proposed method and demonstrates its advantages in unsupervised medical image synthesis. The proposed architecture adds registration networks in both directions to strengthen the structural consistency between the input image and the reconstructed image, as well as between the input image and the cycle-consistent image, and to ensure the stability of network training. The ConvNeXt module enhances the network's feature-processing ability, yielding clearer synthesis of bone and soft-tissue regions with less error relative to real CT. At the same time, this paper introduces a new correction loss combined with the registration networks to strengthen the constraints between images, avoid offset artifacts in the synthesized images, and obtain higher-quality synthesized images. In summary, the proposed method shows the best performance on the MR-to-CT synthesis task, and quantitative and qualitative evaluations of the synthesized images show its advantages in many respects. Although adding ConvNeXt blocks expands the receptive field and improves performance, it slows down training because ConvNeXt blocks use large-kernel convolutions; we will address this in the future. In addition, the 2D framework has inherent limitations and easily loses contextual information. We plan to extend the framework to 3D to solve the discontinuity of the 2D model along the Z axis, using a 3D network to generate more accurate sCT, which can also help delineate lesion sites more accurately for image segmentation and thus enable more precise radiotherapy. At the same time, the ConvNeXt block will be extended to 3D, and the large convolution kernel will be abandoned to improve training speed. The results of this study provide guidance for research on MR-only radiotherapy planning.

Acknowledgement: We thank Shanghai Tengyun Biotechnology Co., Ltd. for developing the Hiplot Pro platform (https://hiplot.com.cn/) and providing technical assistance and valuable tools for data analysis and visualization.

Funding Statement: This research was supported by the National Science Foundation for Young Scientists of China (Grant No. 61806060), 2019-2021, the Basic and Applied Basic Research Foundation of Guangdong Province (2021A1515220140), and the Youth Innovation Project of Sun Yat-sen University Cancer Center (QNYCPY32).

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Xin Yang, Liwei Deng; data collection: Xin Yang; analysis and interpretation of results: Liwei Deng, Henan Sun, Sijuan Huang, Jing Wang; draft manuscript preparation: Henan Sun, Sijuan Huang, Jing Wang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. The data are not publicly available due to ethical restrictions.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
