
    Hybrid Single Image Super-Resolution Algorithm for Medical Images

Computers, Materials & Continua, 2022, Issue 9

Walid El-Shafai, Ehab Mahmoud Mohamed, Medien Zeghid, Anas M. Ali and Moustafa H. Aly

1 Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt

2 Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh 11586, Saudi Arabia

3 Electrical Engineering Department, College of Engineering, Prince Sattam Bin Abdulaziz University, Wadi Addwasir 11991, Saudi Arabia

4 Electrical Engineering Department, Aswan University, Aswan 81542, Egypt

5 Electronics and Micro-Electronics Laboratory (E.μ.E.L), Faculty of Sciences, University of Monastir, Monastir 5000, Tunisia

6 Alexandria Higher Institute of Engineering & Technology (AIET), Alexandria, Egypt

7 Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt

Abstract: High-quality medical microscopic images used for disease detection are expensive and difficult to store. Therefore, low-resolution images are favorable due to their low storage space and ease of sharing, and they can be enlarged when needed using Super-Resolution (SR) techniques. However, it is important to maintain the shape and size of the medical images while enlarging them. One of the problems facing SR is that the performance of medical image diagnosis deteriorates along with the resolution of the reconstructed image. Consequently, this paper suggests a multi-SR and classification framework based on the Generative Adversarial Network (GAN) to generate high-resolution images with higher quality and finer details that reduce blurring. The proposed framework comprises five GAN models: Enhanced SR Generative Adversarial Network (ESRGAN), Enhanced Deep SR GAN (EDSRGAN), Sub-Pixel-GAN, SRGAN, and Efficient Wider Activation-B GAN (WDSR-b-GAN). To train the proposed models, we employ images from the well-known BreaKHis dataset and enlarge them by 4× and 16× upscale factors, with a ground truth of size 256×256×3. Moreover, several evaluation metrics, such as Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index (SSIM), Multiscale Structural Similarity Index (MS-SSIM), and the histogram, are applied to make comprehensive and objective comparisons and to determine the best methods in terms of efficiency, training time, and storage space. The obtained results reveal the superiority of the proposed models over traditional and benchmark models in terms of color and texture restoration and detection, achieving an accuracy of 99.7433%.

Keywords: GAN; medical images; SSIM; MS-SSIM; PSNR; SISR

    1 Introduction

Almost half a million breast cancer patients die, and nearly 7.8 million new cases are diagnosed yearly. These figures are likely to grow dramatically as social and medical engineering advances [1-3]. Compared with other medical imaging types, histopathological images represent the gold standard for diagnosing breast cancer. The ideal treatment plan for breast cancer depends on early classification. Early classification of tissue images requires many images and much storage space, so finding more efficient ways to preserve images makes sense. The primary motivation for developing a more accurate breast cancer classification algorithm is to assist clinicians familiar with the molecular subtypes of breast cancer in controlling cancer cell metastasis early in the disease diagnosis and treatment planning process. Artificial Intelligence (AI) based solutions are utilized to assist the automated identification of breast cancer. Deep Learning (DL) models are among the most popular approaches due to their superior performance in classifying and processing medical images.

Because of the high cost of storage and hardware, it is prudent to obtain high-resolution (HR) medical images from low-resolution (LR) ones. Image super-resolution (SR) techniques focus on reconstructing low-quality images with lost pixels to address hardware costs. Furthermore, because it is important to enlarge images so that feature information and texture details remain clear and distinct, and given the quantifiable performance of the employed SR algorithms, Generative Adversarial Networks (GANs) are gaining attention due to their ability to reconstruct images realistically [2-6].

This paper proposes a Multi-SR and Classification Framework (MSRCF) based on GAN networks to improve and reconstruct breast cancer images from histopathology ones. The MSRCF comprises five GAN models: ESRGAN, EDSRGAN, Sub-Pixel-GAN, SRGAN, and WDSR-b-GAN. We use well-known SR models such as EDSR, WDSR-b, and ESPCN as the generators in the GAN models and propose a dedicated model for the discriminator. In addition, we also use the ESRGAN and SRGAN models while modifying their discriminators [7-11]. The main contributions of this paper can be summarized as follows:

• Developing five fine-tuned multi-SR frameworks for medical image classification applications.

• Developing feedforward-based SR Convolutional Neural Network (CNN) models and using them as GAN generators.

• Employing fine-tuned GAN models that operate on 7783 of the 7909 images, after removing duplicate images, covering two histological classes.

• Implementing a hybrid of content loss, MSE loss, and adversarial loss as the perceptual loss to generate HR images.

• Examining the suggested Single Image Super-Resolution (SISR) algorithm with different assessment metrics, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), histogram, and Multiscale Structural Similarity Index (MS-SSIM), to assess the quality of the trained images.

• Implementing the proposed MSRCF model in the pre-processing stage, prior to the classification stage, to generate HR images.

• Achieving higher detection accuracy with few iterations and epochs after SR pre-processing for the ResNeXt-101 (32×8d) model than traditional and benchmark approaches.

The rest of this paper is organized as follows. The proposed multi-super-resolution and classification framework is presented in Section 2. The experimental results are displayed and discussed in Section 3. The concluding remarks are summarized in Section 4.

    2 Proposed Multi-Super-Resolution and Classification Framework

Reconstruction is an effective process to improve the efficiency of medical images in terms of storage while supporting highly efficient diagnosis. Therefore, this paper presents an effective MSRCF based on DL and GAN methods to overcome the high computational processing and the limited efficiency of traditional techniques. As shown in Fig. 1, the MSRCF consists of several phases that can be summarized as follows. In the first phase, a dataset of breast cancer histopathological images is prepared, and the images are reshaped into small dimensions that serve the idea of the research. The dimension of the images is set to 64×64, suitable for low-storage hardware. In the second phase, SR image construction takes place for efficient and automated diagnosis purposes in the third phase. In the fourth phase, we perform many measurements on the medical images to prove the efficiency of the proposed framework in terms of the ability to save data economically. We can perform any operations on demand without lowering the efficiency standards. In the following, detailed descriptions of these four phases are given.

    2.1 Pre-Processing Phase

The first phase is the random selection and reconfiguration of image samples from the BreaKHis dataset. Reconfiguration means reshaping the dimensions of the medical images to fit the proposed models in the subsequent phases, such as the SR GAN phase, which needs images with low-resolution dimensions (64×64×3), and the classification phase, which requires images with high-resolution dimensions (256×256×3). Random sampling is a mandatory step due to the resource-poor Graphics Processing Unit (GPU), so the super-resolution GAN models are trained on a maximum of 4 images at the same time. In order to reduce the training time, we randomly select 1700 images from the BreaKHis dataset. Also, most basic and texture features are preserved during the data reconfiguration process, which uses bicubic interpolation. Moreover, normalization is applied to the image data in the range of [-1, 1]. This normalization range was selected after conducting several tests, which found it better than the typical range of [0, 1]. The normalization is performed by subtracting 127.5 from each pixel in the image and then dividing the result by 127.5. The main benefits of normalization are speeding up the training process and reducing the required computational complexity of the proposed DL models.
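The normalization step described above can be sketched as follows (a minimal NumPy illustration; the function names are ours, not part of the paper's code):

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Map 8-bit pixel values from [0, 255] to [-1, 1]: (x - 127.5) / 127.5."""
    return (image.astype(np.float32) - 127.5) / 127.5

def denormalize(image: np.ndarray) -> np.ndarray:
    """Map values from [-1, 1] back to displayable [0, 255] pixels."""
    return np.clip(image * 127.5 + 127.5, 0, 255).astype(np.uint8)
```

Centering the inputs around zero in this way is the usual choice for GAN generators whose last activation is symmetric around zero.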

Moreover, at this phase, the BreaKHis dataset is divided into two groups, where one is used for training and the other for validation. In this paper, 90% of the dataset is used for training, while the remaining 10% is used for validation. These ratios are not only consistent with what is stated in the literature, but also give the best results after conducting several tests and experimental validations for breast cancer detection.

    2.2 Super-Resolution(SR)GAN Phase

This phase consists of two stages. In the first stage, the SR process takes place, where the HR image is estimated from the LR one(s). The SR process can be categorized as multi-image SR or single-image SR. In the first category, several LR images are used to generate the HR image, while in the latter, only one LR image is utilized. Due to its lack of available information, single-image SR is considered the most difficult, but it is generally the most common type used in most applications. In this paper, single-image SR is considered. Formerly, researchers relied on mathematical interpolation methods to generate/reconstruct SR images by increasing the number of pixels in each direction. Recently, interest in image reconstruction applications has increased due to the reliance on computer vision for various functions. This opens the door to applying sophisticated methods such as DL to SR, which shows superior performance over conventional techniques [9].

Figure 1: Proposed multi-SR and classification framework

In the second stage, a GAN is applied to create images from random noise so that they appear realistic, where the most common type of noise is Gaussian. In [12], the authors provided a comprehensive classification of DL-based generative models. The models were divided into two groups based on maximum likelihood: explicit-density models, which compute a density over the sample space, and implicit-density models, which do not produce an explicit density but generate realistic images that model samples of the correct distribution. In the first group, pixel value estimation is based on autoregression, as in PixelCNN [13] utilizing a tractable density, or on the autoencoder method [14] applying an approximate density. Likewise, GAN is based on implicit density and is considered a new way to generate various data, including images, audio, and video [15]. The authors of [12] introduced the GAN model, which consists of two deep networks. The first is a generative network that produces acceptable images, and the second is a discriminative network that supervises the distinction of fake images from the original ones [12]. The GAN model succeeds in carrying out its task when the discriminator fails and the generator succeeds. The generator produces an image whose pixel distribution is so similar to that of real images that it deceives the discriminator. The discriminator fails when it cannot differentiate the generated images from the real ones. GAN has been widely used in SR.

Many SR models have already been trained on general images, such as EDSR [3], WDSR-b [4], ESPCN [5], ESRGAN [6], and SRGAN [2]. This paper uses five GAN models to obtain the best HR breast cancer images. To increase image quality, we meticulously study four major components of the GAN model: the generator architecture, the discriminator architecture, the adversarial loss, and the perceptual loss. For the generator network architecture, five different architectures are adopted. Three of them are SR feedforward (explicit-density) models that do not depend on GAN or perceptual loss. They are among the first CNNs used in SR, where LR images are fed into the network to produce HR images with MSE as the loss function. The other two, ESRGAN and SRGAN, are mainly used in the multi-SR framework with perceptual loss. Note that the discriminator network architecture and the perceptual loss have been developed for all models, as explained in detail in the following paragraphs.

The proposed discriminator network is used with all the proposed SR-GAN models, as shown in Fig. 1. The great benefit of the GAN concept is the use of a discriminator network that is able to distinguish the original images from the images generated by the generator network. So, in order to train a generator network capable of generating an HR image similar to the original, an efficient discriminator network should be built. The two models fight each other during training so that both networks are trained to the maximum together, which distinguishes GANs from other Convolutional Neural Networks (CNNs).

As shown in Fig. 2, the DL discriminator architecture consists of several connected CNN layers (19 layers): seven batch normalization (BN) layers, eight convolutional (Conv2D) layers, four fully connected (FC) or dense layers, and a final sigmoid layer. The input images have a size of 256×256×3, and the output layer includes the sigmoid classifier used for distinguishing purposes.
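A PyTorch sketch of a discriminator matching the layer counts above (8 Conv2D, 7 BN, 4 dense layers, sigmoid output) could look like the following. The paper does not list filter widths or strides, so the SRGAN-style progression of channels and strides used here is an assumption:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, stride, bn=True):
    # 3x3 convolution, optional batch normalization, LeakyReLU activation
    layers = [nn.Conv2d(cin, cout, 3, stride, 1)]
    if bn:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.LeakyReLU(0.2))
    return layers

class Discriminator(nn.Module):
    """8 Conv2D + 7 BN layers, 4 dense layers, sigmoid output (hypothetical widths)."""
    def __init__(self):
        super().__init__()
        layers = conv_block(3, 64, 1, bn=False)  # first conv has no BN -> 7 BN total
        cfg = [(64, 64, 2), (64, 128, 1), (128, 128, 2), (128, 256, 1),
               (256, 256, 2), (256, 512, 1), (512, 512, 2)]
        for cin, cout, s in cfg:
            layers += conv_block(cin, cout, s)
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 16 * 16, 1024), nn.LeakyReLU(0.2),  # 256/2^4 = 16
            nn.Linear(1024, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.classifier(self.features(x))
```

For a 256×256×3 input, the four stride-2 convolutions reduce the spatial size to 16×16 before the dense head, and the sigmoid yields the real/fake probability.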

As mentioned, five different networks were used as generators based on the GAN, and among all the generators, the ESRGAN network achieves the best results compared to the other networks. Therefore, we provide a detailed explanation of its structure, parameters, and simulation results from the PSNR, SSIM, and MS-SSIM perspectives. The architecture of the ESRGAN model is inspired by the SRGAN model, and both share most properties with some differences in structure.

Figure 2: Architecture of the proposed discriminator model

In this paper, the focus is on the structure of the generator part of the ESRGAN model only. The generator consists of Residual-in-Residual Dense Blocks (RRDB), which are mainly inspired by the DenseNet model [16] and connect all layers within the residual block directly to each other. In addition, removing the batch normalization layers from each RRDB in the ESRGAN model reduces artifacts and computational complexity while increasing efficiency. The leaky version of the Rectified Linear Unit (ReLU) is also used to activate the RRDB layers, which helps the Conv2D layers extract more features. Then, the feature dimensions are enlarged by a sub-pixel Conv2D layer inspired by ESPCN [5].
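The RRDB described above (dense connections, no batch normalization, LeakyReLU activations) can be sketched in PyTorch as follows. The channel counts (`nf=64`, growth `gc=32`) and the 0.2 residual-scaling factor follow the original ESRGAN paper and are assumptions here:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Five 3x3 convs with dense connections; no batch normalization (ESRGAN-style)."""
    def __init__(self, nf=64, gc=32):
        super().__init__()
        # conv i sees the input plus all previous growth features
        self.convs = nn.ModuleList(
            nn.Conv2d(nf + i * gc, gc if i < 4 else nf, 3, 1, 1) for i in range(5)
        )
        self.act = nn.LeakyReLU(0.2)
    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:  # last conv has no activation and projects back to nf channels
                out = self.act(out)
                feats.append(out)
        return x + 0.2 * out  # residual scaling

class RRDB(nn.Module):
    """Residual-in-Residual Dense Block: three dense blocks plus an outer skip."""
    def __init__(self, nf=64):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(nf), DenseBlock(nf), DenseBlock(nf))
    def forward(self, x):
        return x + 0.2 * self.blocks(x)
```

Because every convolution is stride-1 with padding 1, an RRDB preserves the feature-map shape, so blocks can be stacked freely before the sub-pixel upsampling stage.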

The significant drawback of the generator network of the ESRGAN model is that it performs complex calculations compared to other GAN models, although in return the performance of the network is greatly enhanced. So, in order to provide a realistic comparison, another generator model, called sub-pixel GAN, is presented. Its outstanding feature is that it enhances the performance of the GAN without the need for complex calculations.

The sub-pixel-GAN model is inspired by ESPCN [5], but some layers are modified before it is used as the generator network in the GAN model. As shown in Fig. 3, the architecture of the sub-pixel generator network includes seven different connected layers: five Conv2D layers and two depth-to-space layers. Initially, the breast cancer images are resized to 64×64×3 to fit the input layer of the sub-pixel generator model. These images are then passed through convolutional layers in order to extract the features. The convolutional layers consist of filters of sizes 5×5 and 3×3, and the convolution stride is fixed to one pixel. This ensures that the spatial dimensions of the features remain constant and that the best values for the weights are obtained; ReLU is used in all convolutional layers. In addition, an enlargement factor of 2× is applied in each depth-to-space layer. In the proposed sub-pixel generator model, all weight layers are initialized with orthogonal weights. The notable advantage of the sub-pixel generator model is that it boosts the performance of the GAN without the need for many convolutional layers or deep RRDB layers. This means that the computational complexity of the model, in general, is as small as possible while maintaining a very high performance compared to the SR models. More details about the structures and interpretations of the other SR models (EDSR-GAN, SR-GAN, and WDSR-b-GAN) are explored in [2-4,12].
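The described architecture (five Conv2D layers, two depth-to-space layers of 2× each, stride 1, ReLU, orthogonal initialization) can be sketched with PyTorch's `PixelShuffle`, which implements depth-to-space. The exact channel widths are not stated in the text and are our assumption:

```python
import torch
import torch.nn as nn

class SubPixelGenerator(nn.Module):
    """Five Conv2D and two depth-to-space layers; overall 4x upscaling (64 -> 256)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, 1, 2), nn.ReLU(),    # 5x5 filters, stride 1
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(),   # 3x3 filters
            nn.Conv2d(64, 128, 3, 1, 1), nn.ReLU(),
            nn.PixelShuffle(2),                       # depth-to-space: 128 -> 32 ch, 2x spatial
            nn.Conv2d(32, 128, 3, 1, 1), nn.ReLU(),
            nn.PixelShuffle(2),                       # depth-to-space: 128 -> 32 ch, 2x spatial
            nn.Conv2d(32, 3, 3, 1, 1),               # back to RGB
        )
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.orthogonal_(m.weight)         # orthogonal weight initialization
    def forward(self, x):
        return self.net(x)
```

Each `PixelShuffle(2)` trades a factor of 4 in channels for a factor of 2 in each spatial dimension, so the two layers together realize the 4× enlargement from 64×64 to 256×256.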

    Figure 3:Architecture of the proposed sub-pixel generator model

The strengths of the GAN models are represented in three points: the generator network and the discriminator network, explained in the previous paragraphs, and the cost function. The cost function is the most critical factor for the success of a GAN-based SR model. In addition to deeply tuning the weights of the layers, the cost function is used to measure the extent of the model's error in finding a relationship between the inputs and the outputs. It is an iterative function that is repeatedly used during the training process to determine the best weights based on optimization algorithms. The cost function succeeds when it achieves the minimum error rate. The cost function describing the proposed multi-SR framework is expressed as:

$$\min_{\theta_G}\max_{\theta_D} V(D,G)=\mathbb{E}_{i^{HR}\sim p_{data}(i^{HR})}\big[\log D_{\theta_D}(i^{HR})\big]+\mathbb{E}_{i^{LR}\sim p_{G}(i^{LR})}\big[\log\big(1-D_{\theta_D}(G_{\theta_G}(i^{LR}))\big)\big]$$

where V(D,G) refers to the value function depending on the two networks, E is the expected value, i^{HR} ~ p_data(i^{HR}) is a sample HR image taken from the HR dataset, D_{θ_D}(i^{HR}) is the discriminator prediction on the HR (original) image, i^{LR} ~ p_G(i^{LR}) is a sample LR image taken from the LR dataset, G_{θ_G}(i^{LR}) is the generator that produces an SR image from the sample LR image, and D_{θ_D}(G_{θ_G}(i^{LR})) is the discriminator prediction (adversarial loss) on the SR image. The discriminator objective maximizes the value function V, while the generator objective minimizes V, where each network has a different cost function. The discriminator network uses a binary cross-entropy function. The perceptual loss of the proposed multi-SR framework consists of two parts: the first part is called the adversarial loss, and the other is called the content loss (or MSE), as they are used separately. The adversarial loss D_{θ_D}(G_{θ_G}(i^{LR})) is a mixture of the generator loss and the discriminator loss. As the literature is divided between MSE and content loss, we trained the multi-SR framework with both and compare them in the results section. The MSE is given as:

$$MSE=\frac{1}{hw}\sum_{i=1}^{h}\sum_{j=1}^{w}\big(Y_{i,j}-\hat{Y}_{i,j}\big)^{2}$$

where Y is the HR image, Ŷ is the generated image, i and j denote the pixel location in the image based on x-y coordinates, and h and w are the (shared) height and width of the HR and generated images.

The VGG loss is another content loss function applied over generated images and real images. VGG19 is a very popular deep neural network that is mostly used for image classification [13]. The intermediate layers of a pre-trained VGG19 network work as feature extractors and can be used to extract feature maps of the generated images and the real images. The VGG loss is based on these extracted feature maps. It is calculated as the Euclidean distance between the feature maps of the generated image and the real image. The used VGG loss is expressed as:

$$l_{VGG/i,j}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\Big(\phi_{i,j}\big(i^{HR}\big)_{x,y}-\phi_{i,j}\big(G_{\theta_G}(i^{LR})\big)_{x,y}\Big)^{2}$$

where W_{i,j} and H_{i,j} are the dimensions of the corresponding feature maps, and φ_{i,j} is the feature map taken from the HR image i^{HR} and from the generated image G_{θ_G}(i^{LR}).

    2.3 Classification Phase

In this phase, transfer learning is applied to diagnose breast cancer images. Transfer learning is considered the best medical image diagnostic method due to its high ability to transfer what has been learned from the classification of general images and then apply it to medical images. Transfer learning is divided into three methods. The first method is called shallow-tuning, in which the last layer in the transfer learning model is changed to fit the new task. The second method is fine-tuning, which depends on the optimization method, in which more than one layer is updated to fit the new task. The last method, used in this paper, is deep-tuning, in which all layers of the model are updated to fit the new classification task while keeping the pre-trained weights as the initial values. The primary objective of the classification phase is to test the multi-SR framework models. A pre-trained ResNeXt model is used, which was trained on the ImageNet competition data and achieves the highest efficiency. Initially, a data augmentation technique is applied to solve the overfitting problem. Data augmentation changes the shape of the images before they enter the ResNeXt model. In this work, two types are applied: random horizontal and vertical flips and random 45-degree rotation. Each color channel is also normalized with means of [0.485, 0.456, 0.406] and standard deviations of [0.229, 0.224, 0.225]. The ResNeXt model is trained seven times, each time on a different dataset: five times on the output data after applying the SR framework models. The ResNeXt model is also trained on the original high-resolution images. Moreover, it is trained on the low-resolution images to compare training the same ResNeXt model on different data and to show the superiority of SR techniques in improving and maintaining the results of the diagnostic process.

    2.4 Performance Evaluation Phase

A comprehensive analysis is carried out in the final phase to evaluate the GAN-based and transfer-learning models. We evaluate the multi-SR-GAN models on five metrics: SSIM [17], MSE, PSNR [18], MS-SSIM [19], and the histogram. We also evaluate the transfer learning models by loss and accuracy curves and by the specificity (true negative rate, TNR), precision (positive predictive value, PPV), recall (true positive rate, TPR), sensitivity, and F1-score classification performance. These metrics are widely used in the research community to provide comprehensive assessments of classification approaches. The following are the mathematical formulas for these evaluation metrics [20,21]:

$$SSIM(i_r,i_y)=\frac{(2\mu_{i_r}\mu_{i_y}+c_1)(2\sigma_{i_r i_y}+c_2)}{(\mu_{i_r}^{2}+\mu_{i_y}^{2}+c_1)(\sigma_{i_r}^{2}+\sigma_{i_y}^{2}+c_2)}$$

$$PSNR=10\log_{10}\left(\frac{MAX_i^{2}}{MSE}\right)$$

where μ_{i_r} is the average value of the first image, μ_{i_y} is the average value of the second image, σ_{i_r} is the standard deviation of the first image, σ_{i_y} is the standard deviation of the second image, and σ_{i_r i_y} = μ_{i_r i_y} - μ_{i_r}μ_{i_y} is the covariance. c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are two variables to avoid division by zero, with k_1 = 0.01 and k_2 = 0.03; MAX_i is the maximum possible pixel value of the image; i_r and i_y are the original and generated images. True positive (TP), true negative (TN), false positive (FP), and false negative (FN) are the counts estimated in the classification process.
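The two image-quality formulas can be implemented directly from the definitions above. Note this sketch computes a global (single-window) SSIM over the whole image, whereas reported SSIM values are usually averaged over local windows:

```python
import numpy as np

def psnr(ir: np.ndarray, iy: np.ndarray, max_i: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ir.astype(np.float64) - iy.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_i ** 2 / mse)

def ssim_global(ir: np.ndarray, iy: np.ndarray, L: float = 255.0) -> float:
    """Global single-window SSIM with k1 = 0.01, k2 = 0.03."""
    ir = ir.astype(np.float64)
    iy = iy.astype(np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_r, mu_y = ir.mean(), iy.mean()
    var_r, var_y = ir.var(), iy.var()
    cov = ((ir - mu_r) * (iy - mu_y)).mean()  # sigma_{ir,iy}
    return ((2 * mu_r * mu_y + c1) * (2 * cov + c2)) / (
        (mu_r ** 2 + mu_y ** 2 + c1) * (var_r + var_y + c2))
```

Identical images give SSIM = 1 and an infinite PSNR, which is why PSNR comparisons are only meaningful between reconstructions that differ from the ground truth.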

    3 Experimental Results

This section discusses the experiments and corresponding results obtained after applying the five SR frameworks based on GAN models and the transfer learning models to the histopathological breast cancer dataset, which is also unbalanced.

    3.1 Dataset

The medical image dataset for breast cancer known as BreaKHis [11] is used. Medical images of breast cancer are recorded in two forms: the first is ultrasound images, and the second is histopathological images. BreaKHis is a dataset from 82 patients, consisting of 7,909 images. The BreaKHis dataset has different magnification factors, including 40×, 100×, 200×, and 400×, and consists of 5429 malignant sample images and 2480 benign sample images [22,23]. A total of 7783 images are used after the image duplication test: it was found that there were 126 duplicate images, and they were deleted. The image duplication test ensures that correct results are obtained.
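The paper does not describe how its image duplication test works; a simple byte-level check, which catches exact file duplicates, could be sketched as:

```python
import hashlib
from pathlib import Path

def unique_images(image_dir: str, pattern: str = "*.png") -> list[Path]:
    """Return one path per unique file content, detected via a byte-level hash."""
    seen, unique = set(), []
    for path in sorted(Path(image_dir).glob(pattern)):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(path)
    return unique
```

A hash-based check finds only exact copies; near-duplicates (re-encoded or cropped images) would need a perceptual-hash comparison instead.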

    3.2 Training Multi-SR Framework

Training in the SR phase is a single phase. In most GAN research, the VGG-19 model is first trained on the database used in the GAN, and then VGG-19 is used in the content loss. However, we use the ImageNet competition weights for the VGG-19 model in the content loss function, which saves a lot of time and leads to more efficient results. After that, the BreaKHis dataset is used to train five different SR models: ESRGAN, EDSRGAN, Sub-Pixel-GAN, SRGAN, and WDSR-b-GAN. The multi-SR framework networks are trained with batches of four image pairs (LR, HR), in which the LR images enter the generator models and the HR images the discriminator models. The Google Colab Pro service is used in the proposed work for its efficiency. 1700 images from the BreaKHis dataset are applied to the multi-SR framework due to the slowness of training GAN models in general. Also, 100 images are used as validation data to test the SR models. All models are trained with the same methods, as follows:

1) The Adam optimizer is applied with a learning rate of 0.0002 during training.

2) The binary cross-entropy function is applied to train the discriminator.

3) Two loss functions are used to calculate the perceptual loss criterion, and they are also used in the backpropagation: MSE and content loss.

4) All models are trained on 1000 batches. Each batch contains four random images. Performance measures are observed, as is the regression of the upper and lower parts of the network for each generator and discriminator model.

5) The sigmoid function is used in the output layer of the proposed discriminator model.
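One training step combining the settings above (Adam at 0.0002, binary cross-entropy for the discriminator, MSE plus adversarial loss for the generator) could be sketched as follows. The tiny stand-in networks keep the sketch runnable, and the 1e-3 adversarial weight follows the SRGAN convention rather than a value stated in the paper:

```python
import torch
import torch.nn as nn

# Stand-in 4x generator and sigmoid discriminator; the paper's networks are far larger.
generator = nn.Sequential(nn.Conv2d(3, 3 * 16, 3, 1, 1), nn.PixelShuffle(4))
discriminator = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.AdaptiveAvgPool2d(1),
                              nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=0.0002)      # setting 1)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.0002)
bce, mse = nn.BCELoss(), nn.MSELoss()                            # settings 2) and 3)

def train_step(lr_imgs, hr_imgs):
    # Discriminator: push real images toward 1 and generated images toward 0
    sr_imgs = generator(lr_imgs)
    d_real = discriminator(hr_imgs)
    d_fake = discriminator(sr_imgs.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: MSE content loss plus adversarial loss (weight is an assumption)
    g_adv = bce(discriminator(sr_imgs), torch.ones_like(d_real))
    g_loss = mse(sr_imgs, hr_imgs) + 1e-3 * g_adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Swapping `mse(sr_imgs, hr_imgs)` for the VGG-19 content loss reproduces the second training configuration compared in the results section.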

Google Colab Pro features a T4 or P100 GPU and a high-memory virtual machine with 25 GB of available RAM. Moreover, all models are trained on the same Google Colab Pro service. A conditional training method is also used, in which the weights of the generator models are stored only when the best results so far are achieved while evaluating the model in each batch.

    3.3 Training Classification

The ResNeXt model is trained using the deep-tuning method by changing the last layer to a fully connected layer with a size of 1024. The trainable parameters number 86.746 M, and cross-entropy is used as the loss function. Also, we apply the Adam optimizer with a learning rate of 0.0001. We divide the data in a ratio of 90 to 10, so that 90% represents the training data, which is 7008 images, and 10% represents the validation data, which is 784 images. The PyTorch library is used in its latest version with Python version 3.8.

    3.4 Results Analysis

    3.4.1 Multi-SR Framework Results

This subsection describes the results of the conducted experiments for the super-resolution part. All GAN-based SR models are trained on 1,700 random images from the BreaKHis breast cancer dataset, and each model is evaluated on 100 random images. The models are evaluated by several metrics, including PSNR, SSIM, MS-SSIM, the number of trainable parameters, and the parameter size. The MSE scale is ignored because the PSNR scale is based on the MSE, as given in (6). The discriminator model is constant in all the proposed GAN models, containing 141 million trainable parameters with a size of 2179.189 MB. The GAN-based SR models are trained twice to ensure a comprehensive comparison, using the MSE loss function in the generator and the content loss based on VGG-19 as a loss function.

As shown in Tab. 1, the results of the five GAN-based SR models are compared using MSE as the loss function for the generator models. It shows the superiority of the ESRGAN model over the rest of the models in terms of the SSIM, PSNR, and MS-SSIM performance metrics. However, it is the largest model in the number of trainable variables and the size of the parameters. Therefore, ESRGAN is considered the best model with high accuracy, but it needs time and large memory. Moreover, all models outperform bicubic interpolation. Image samples generated from each model given in Tab. 1 are shown in Fig. 4.

    Table 1: The MSE loss function is used to train the models using the same techniques

We retrain the multi-SR framework models using the content loss based on the VGG-19 model, which shows higher efficiency than MSE as a loss function, as given in Tab. 2. The superiority of the ESRGAN model over all other models is also shown when using VGG-19 as a loss function, although it has the largest share of trainable parameters. However, this is compensated by the higher efficiency in terms of SSIM, PSNR, and MS-SSIM, which makes the large parameter size acceptable. The smallest model in terms of the size of the trainable parameters is Sub-Pixel-GAN, which achieves high efficiency, approaching the ESRGAN model with a difference of 0.0127, which is widely acceptable in most applications. The small size of the Sub-Pixel-GAN model is an advantage in terms of reducing computational complexity, and it can be applied on the lowest GPU requirements. Image samples generated from each model given in Tab. 2, after using the VGG-19 model as a loss function, are shown in Fig. 5. It shows how close the image generated by the ESRGAN model is to the original HR image.

Figure 4: A sample of generated images from the multi-SR framework based on MSE. (a) Low-resolution (LR) image with dimensions 64×64×3, (b) the result of bicubic interpolation, (c) the ground truth (HR) image with dimensions 256×256×3, (d) to (h) the results of the different SR models, with (h) the result of the ESRGAN model, which achieves the best metric performance

In Fig. 6, the histogram of a random image from the dataset is calculated. We divide the histogram results into three sections. The first section contains the histogram of the original image, and it is repeated in all rows for clarity. The second section contains the histogram after applying each model of the multi-SR framework to the random image. For the purpose of efficient comparison, the third section shows the histogram of the random image after applying bicubic interpolation.

Table 2: The VGG-19 loss function is used to train the models using the same techniques

Figure 5: A sample of generated images from the multi-SR framework based on the VGG-19 loss. (a) Low-resolution (LR) image with dimensions 64×64×3, (b) the result of bicubic interpolation, (c) the ground truth (HR) image with dimensions 256×256×3, (d) to (h) the results of the different SR models, with (h) the result of the ESRGAN model, which achieves the best metric performance

Figure 6: Histogram of a sample image from the multi-SR framework based on the VGG-19 loss

    3.4.2 Results of Two-Class Classification

Three different images (HR, LR, and SR) with a size of 256×256×3 are used to evaluate the proposed framework. The proposed ResNeXt-101 (32×8d) model is trained for 64 epochs. Other CNN models based on the VGG-19 content loss are also tested in the proposed framework: ESRGAN, EDSRGAN, Sub-Pixel-GAN, SRGAN, and WDSR-b-GAN. Tab. 3 presents the experimental results for each tested image. The ResNeXt model achieves an accuracy of 99.6149% and a loss of 0.01086 at epoch 57 for the HR images. In addition, the model achieves an accuracy of 98.5879% and a loss of 0.0403 at epoch 59 for the LR images.

Table 3: Obtained outcomes of the ResNeXt-101 (32×8d) model for the tested images

Tab. 4 shows the two-class classification results from the ResNeXt101_32×8d model after applying the multi-SR framework. These results confirm the outcomes acquired in Fig. 7. Fig. 7 offers the loss and accuracy curves of the testing and training processes for the ResNeXt101_32×8d model. It is noticed that both the accuracy and loss curves become steady within fewer than ten epochs. Also, no overfitting occurs in the proposed model.

    Table 4:Obtained results of the ResNeXt-101(32×8d)model for the two-class classification scenario for the tested images

    Figure 7:Loss and accuracy curves of the ResNeXt101_32×8d model after using the ESRGAN model

    4 Conclusion

    This paper presented a multi-SR framework for medical images. The framework is efficient and robust, training the models on the same set with different enlargement techniques and different loss formulations. A combination of several loss functions, including adversarial loss, image loss, MSE loss, and perceptual loss, has been employed in the proposed models. To estimate the perceptual loss, the pre-trained VGG-19 model was employed with its ImageNet competition weights, without the need for retraining. A shared discriminator model was proposed for all generators to correct each generator fairly. The multi-SR framework was trained twice: once using the MSE loss function for the generator, and once using the VGG-19 model as a loss function, where features extracted from the images are compared through the MSE equation, which is called the perceptual loss. Different metrics have been applied to measure the performance of the SR models, including PSNR, SSIM, and MS-SSIM. The ESRGAN model performed best without the need for weight-equalization layers, and it achieved the highest results on the performance measures, with a classification accuracy of 99.7433%.

    Acknowledgement:The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number(IF-PSAU-2021/01/18585).

    Funding Statement: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number (IF-PSAU-2021/01/18585).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
