
    An Efficient Encryption and Compression of Sensed IoT Medical Images Using Auto-Encoder

    2023-01-24

    Passent El-kafrawy, Maie Aboghazalah, Abdelmoty M. Ahmed, Hanaa Torkey and Ayman El-Sayed

    1 School of Information Technology and Computer Science, Nile University, Giza, 12677, Egypt

    2 Math and Computer Science Department, Faculty of Science, Menoufia University, Menoufia, 32511, Egypt

    3 Department of Computer Engineering, College of Computer Science, King Khalid University, Abha, 61421, Saudi Arabia

    4 Computer Science and Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt

    ABSTRACT Healthcare systems nowadays commonly depend on IoT sensors for sending data over the internet. Encryption of medical images is very important to secure patient information. Encrypting these images consumes a lot of time on edge computing; therefore, using an auto-encoder for compression before encryption solves this problem. In this paper, we use an auto-encoder to compress a medical image before encryption, and the encrypted output (a vector) is sent over the network. On the other end, a decoder reproduces the original image after the vector is received and decrypted. Two convolutional neural networks were built to evaluate the proposed approach: the first is the auto-encoder, which compresses and encrypts the images, and the second assesses the classification accuracy of the image after decryption and decoding. Different hyperparameters of the encoder were tested, followed by classification of the image to verify that no critical information was lost and to test the encryption and encoding resolution. In this approach, sixteen hyperparameter permutations are utilized, but this research discusses three main cases in detail. The first case shows that the combination of Mean Square Logarithmic Error (MSLE), ADAGrad, two layers for the auto-encoder, and ReLU gave the best auto-encoder results, with a Mean Absolute Error (MAE) of 0.221 after 50 epochs and 75% classification accuracy, the best result for the classification algorithm. The second case shows how the auto-encoder results are reflected in the classification results: the combination of Mean Square Error (MSE), RMSprop, three layers for the auto-encoder, and ReLU achieved a classification accuracy of 65%, with the auto-encoder giving an MAE of 0.31 after 50 epochs. The third case is the worst: the combination of hinge loss, RMSprop, three layers for the auto-encoder, and ReLU provided an accuracy of 20% and an MAE of 0.485.

    KEYWORDS Auto-encoder; cloud; image encryption; IoT; healthcare

    1 Introduction

    Healthcare systems collect data from patients through Internet of Things (IoT) sensors. The data is then stored in the cloud and analyzed to provide useful recommendations back to the healthcare provider. However, two problems face the IoT in health systems, especially when such data is spatial: bandwidth congestion and security over the network. This research focuses on X-ray images, one of the main medical analysis methods for diagnosis, as the data type for health analysis. Healthcare images are huge in size; accordingly, transmitting such data consumes a lot of bandwidth and thus transmission time. This problem can be reduced by image compression while maintaining resolution, as this data is very sensitive to quality. A major solution to the security issues of health data is encryption. IoT servers are hacked every day [1–3], so encryption of such data is necessary [4,5]. Several models have been developed for image encryption using different algorithms. A chaotic-based artificial neural network (ANN) [6] is a stream image encryption mechanism, but it has a complex design and is time-consuming. A block-wise pixel shuffling algorithm was proposed for learnable image encryption [7]. A stacked auto-encoder (SAE) network was proposed for batch encryption of 8-bit RGB images with block-wise pixel shuffling, generating two chaotic matrices. One set was used to produce a total shuffling matrix to shuffle the pixel positions of each plain image. The other produced a series of independent sequences, each of which was used to confuse the relationship between the permutated image and the encrypted image. The scheme was efficient because of the advantages of parallel computing in the SAE, which led to a significant reduction in run-time complexity.

    Although the hybrid application of shuffling and confusing enhances the encryption effect, the images still consume transmission overhead, and the process is time-consuming, which is not acceptable. Therefore, image compression also needs to be performed for higher performance. In previous studies [8], an SAE and a chaotic logistic map were suggested for image compression and encryption. Studies have shown that this application is viable and successful: it can be used simultaneously for picture transmission and safety over the internet. A five-layer SAE model was established in which the second layer had fewer neurons than the input layer to realize the primary compression of the image, and the third layer had fewer neurons than the second layer to realize the second compression stage. The remaining fourth and fifth layers were mirror images of the second and first layers, respectively. In the experiment, the activation function was a nonlinear sigmoid function. The number of neurons in the hidden layers was modified to achieve various compression ratios (CRs). The model took a lot of time for training and testing due to its complexity. Hence, the present study proposes a simplified and faster model that maintains the intrinsic properties of images. This research has two main contributions. The first is utilizing the auto-encoder [9,10] for compression. The second is encrypting the X-ray image to secure it during transmission. The auto-encoder is a convolutional neural network used to compress the X-ray image (in the compression phase). This step is reversed to decode the image. Between the encoding and decoding phases, the output is encrypted and decrypted to secure the image. A classification algorithm is used after the modified auto-encoder to evaluate the amount of loss in the original image. In this paper, we discuss the effect of the hyperparameters [11] of the auto-encoder on the quality of the retrieved image and the effect of using an encryption function on image loss. A convolutional neural network is also used in the classification algorithm [12,13].

    Our approach introduces compression using an auto-encoder and encryption using a customized function that is less complicated than state-of-the-art models and thus faster. The compression and encryption processes introduce image distortion. Therefore, an application was applied to examine the amount of distortion in the received images: a diagnostic classifier used to determine whether the amount of distortion in the images is acceptable. We also examined different hyperparameters of the modified auto-encoder and assessed their effects on the level of distortion of the images. The contributions of the study are as follows:

    1. An auto-encoder was utilized to compress images to reduce the time taken to transmit them over the internet. After encoding, the feature vector, which is the output of the encoder layer, was encrypted.

    2. After encryption, a reverse function was utilized for the decryption of the feature vector (the output of the encryption function). A mirror reverse of the encoder layers was run on the output of the decryption function to retrieve the original image, with some acceptable loss.

    3. Some changes were made to the auto-encoder hyperparameters to demonstrate their effects on the classification algorithm and the extent of the acceptability of the classification step.

    The rest of the paper is organized as follows: Section 2 presents related work. Section 3 provides background on auto-encoders. Section 4 introduces the proposed model. Section 5 describes the results achieved in this study. The conclusions are drawn in Section 6.

    2 Related Work

    IoT data security is a very challenging area of research in real life. In this section, a brief discussion of some recent developments in IoT systems, image encryption, and auto-encoders is presented. Hu et al. [14] proposed a batch image encryption scheme that included a stacked auto-encoder. The SAE network was introduced to create two chaotic matrices. One set was used to create a complete shuffling matrix to shuffle the pixel positions of every image. Another set generated a series of separate sequences, each of which was used to confuse the relationship between the permutated image and the encrypted image. The framework was efficient because of the benefits of the SAE's parallel computing, which led to a substantial decrease in its complexity. The hybrid implementation of shuffling and confusing improved the encryption effect. The authors contrasted the model with the prevalent logistic map to determine the efficacy of the system, and its running-time performance was measured. The results and review of the experiments showed that their scheme had a strong encryption effect and was capable of resisting brute-force, statistical, and differential attacks.

    Thanikaiselvan et al. [15] suggested implementing a batch image encryption method using a stacked auto-encoder network in which two chaotic sequences were used. The first sequence shuffles the image pixels of each input image, creating a shuffling matrix. The other sequence generates a set of separate sequences that are used to confuse the permutated images with the encrypted images. Parallel computation of the stacked auto-encoder in this scheme could reduce the run-time complexity and improve the encryption effect. A collection of studies has assessed the performance of the scheme and concluded that it is capable of avoiding statistical and differential attacks. Sushmit et al. [16] presented a method of X-ray image compression based on a recurrent neural network convolution (RNN-Conv). During implementation, the proposed architecture can provide variable compression rates, while requiring each network to be trained only once for a particular X-ray image dimension. For efficient compression, the model uses a multi-level pooling scheme that learns contextualized features. Using the National Institutes of Health (NIH) Chest X-ray dataset, they conducted image compression experiments and compared the output of the proposed architecture with a state-of-the-art RNN-based technique and JPEG 2000. In terms of the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) metrics, the experimental results reflected improved compression efficiency.

    Akyazi et al. [17] introduced a learning-based method of image compression using wavelet decomposition as a preprocessing stage. The proposed convolutional auto-encoder was trained end-to-end to achieve a target bit rate of less than 0.15 bits per pixel across the entire CLIC2019 test set. The proposed model outperformed legacy JPEG compression as well as a related convolutional auto-encoder that lacks the proposed preprocessing. The presented architecture demonstrates that wavelet decomposition is useful in modifying the compressed image's frequency characteristics and helps enhance the performance of learning-based image compression models. Optimizing the wavelet scales at the analysis/synthesis and quantization stages of the wavelet convolutional auto-encoder (WCAE) may enhance the implemented method. Vu et al. [18] proposed a new method of representation learning to "describe" unknown attacks more precisely, enabling supervised learning methods to identify anomalies. Specifically, to learn a latent representation from the input data, the authors created three regularized versions of auto-encoders. The bottleneck layers of these regularized auto-encoders are then used in a new classification algorithm. Input features were trained in a controlled way using normal data and known IoT attacks, and a supervised learning method was utilized for anomaly detection. The authors also conducted experiments to explore the features of the proposed models and the effects of the hyperparameters on the output.

    Ameen Suhail et al. [19] used an auto-encoder for compression and a chaotic logistic map for encryption. An auto-encoder is an unsupervised deep-learning neural network that compresses the input vector into a smaller-dimension vector that forms a dense representation of the input data. This feature of the auto-encoder can be used to compress images. The encryption protocol is applied to the compressed data. The sequences created by the logistic map are efficiently used in shuffling and encrypting the compressed image. The security review indicates that the scheme is secure enough for image data to be transmitted. However, the chaotic encryption method is a lossy technique; the output image cannot be returned to its original colors. This is a critical drawback for medical images.

    Building on these previous studies, this research combines two different techniques. The first is compressing medical images with the least amount of image loss. The second is encrypting the images so they cannot be hacked over the internet, as medical data should be secure. The proposed model combines these two ideas to show that encrypted images can be sent with low capacity and minimum loss. To achieve minimum loss, the auto-encoder parameters need to be tuned for the best results, which is our main goal in this study. The images need to be regenerated as close to the originals as possible after transmission, particularly where medical images are used for critical diagnosis. The regenerated images are validated by feeding them into a classifier and verifying the resultant diagnoses.

    3 Background and Methodology

    Auto-encoders are basic learning circuits that aim to transform inputs into outputs with the least possible distortion.

    Fig. 1 shows that auto-encoders consist of an input layer and an output layer connected by one or more hidden layers. The auto-encoder has two main parts, responsible for encoding and decoding. The encoding phase is done in the first layers after the input layer. After the encoding phase ends, further layers with the same number as the encoding layers follow, but in the inverse structure; these are known as the decoder. The encoder takes the input and converts it to a new representation, typically referred to as a code or latent variable. The decoder receives the generated code and converts it into a reconstruction of the original input. The goal of this network is to recreate the input by converting it into outputs in the simplest way possible so that the input is not overly distorted. This kind of neural network has been used mostly to solve problems of unsupervised learning as well as transfer learning. The training method for auto-encoders requires minimizing the reconstruction error; that is, the output and input must differ as little as possible [20]. The standard AE structure is shown in Fig. 1.

    Figure 1: Structure of auto-encoders

    To compress an image, the AE has fewer nodes in the hidden layer than the number of pixels in the input layer. The code produced by the hidden layer is transmitted to the other side. The received code is decoded into the original image through the output layer, which has the same number of nodes as the input layer. Transferring data on an IoT network is an unsecured practice that requires a lot of effort to secure at all stages. One of the techniques used to ensure privacy is image encryption. Image encryption algorithms have mainly been developed to change images before transmission through public networks. To use the transmitted image, it must be decrypted first.
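    The bottleneck idea described above can be sketched in a few lines of NumPy. This is an illustrative linear analogue, not the paper's CNN: for a purely linear auto-encoder the optimal encoder/decoder weights are the top principal components, so they are obtained here in closed form via SVD rather than by backpropagation, and the sizes (64 "pixels", a 16-value code) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 64-pixel inputs compressed to a 16-value code (hypothetical sizes).
n_pixels, n_code = 64, 16
X = rng.random((200, n_pixels))
Xc = X - X.mean(axis=0)               # center the data, as PCA assumes

# For a *linear* AE the optimal encoder/decoder are the top principal
# components; a trained AE reaches a similar bottleneck by backprop instead.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:n_code].T                     # (64, 16): encoder weights

code = Xc @ W                         # bottleneck: 64 -> 16 values per sample
recon = code @ W.T + X.mean(axis=0)   # decode: 16 -> 64 values per sample

mse = float(np.mean((recon - X) ** 2))
print(code.shape, round(mse, 4))      # code is much smaller than the input,
                                      # at the cost of some reconstruction error
```

    The reconstruction error is nonzero because the bottleneck discards the least significant components; this is exactly the "acceptable loss" the paper's classifier is later used to validate.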

    4 Proposed Model

    As mentioned earlier, the transfer of patient data through the internet faces two problems: the large size of the images and image hacking. Hence, a novel image encryption and compression technique using an auto-encoder is introduced here. Image compression is applied first, followed by image encryption. As shown in Fig. 2, the other end of the network decrypts the image before decoding to retrieve the original image.

    Given its large size, the image (1) congests the network at transmission, (2) requires complex encryption techniques to guarantee privacy, and (3) consumes a lot of encryption time. The proposed technique uses a neural network (an auto-encoder) for fast compression. The reduced-size image is then encrypted with a simple and fast encryption technique. As the image is no longer the original one, any encryption technique will suffice to change it into an unrecognizable one. The integration of the auto-encoder with an encryption method yields an efficient model for safe image transmission with low overhead.

    Figure 2: Proposed system diagram

    Input: X-ray images
    Steps:
    1. Compression
       a. Convert the X-ray image to a NumPy array
       b. Compress the image (NumPy array)
    2. Encryption
       c. Take the output of the compression phase and multiply it by itself
       d. Add to the output of the previous step another image, which is also multiplied by itself
    3. Decryption
       e. Reverse the encryption step by subtracting the (squared) image from the result of the previous step
       f. Take the square root of the result
    4. Decompression
       g. Reverse the compression step to obtain the original image
    After the auto-encoder, a classification algorithm is used to show the acceptable percentage of loss in the images.
    Outputs:
    - X-ray images (from the auto-encoder)
    - Acceptable percentage of image loss (from the classification algorithm)

    The algorithm steps for the compression and encryption of the image before it is sent over the internet are as follows:

    1. The image is compressed using an auto-encoder.

    2. A specific equation is applied to the output of the auto-encoder to encrypt the image.

    a) The encryption function operates on the 1-D vector that is the output of the encoder by adding another 1-D vector (another image).

    b) The square of the sum of the two vectors is taken as the encrypted outcome.

    The reverse of the previous step is then performed as follows:

    1. Calculate the square root of the vector and then subtract the added image (vector) from it.

    2. Decode to obtain the transferred image, with some allowed loss.

    To evaluate the loss and ensure that the content is not distorted, the reconstructed images are fed into a classifier to be recognized for medical diagnosis, as shown in Fig. 2.
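    The paper phrases the masking step slightly differently in different places; the sketch below assumes one consistent reading, enc = (x + k)², where k is the flattened "other image" and both vectors are nonnegative (as ReLU encoder outputs would be), so the square root inverts the square exactly. The vector size and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoder output: a nonnegative 1-D feature vector (e.g. ReLU
# activations), plus a key vector derived from another image.
feature = rng.random(128)   # stands in for the encoder's output
key = rng.random(128)       # the "added image", flattened to 1-D

def encrypt(v, k):
    # Mask the feature vector with the key image, then square the sum.
    return (v + k) ** 2

def decrypt(c, k):
    # Undo the square (all values are nonnegative), then remove the key.
    return np.sqrt(c) - k

cipher = encrypt(feature, key)
restored = decrypt(cipher, key)
print(np.allclose(restored, feature))  # True: the round trip itself is lossless
```

    Note that under this reading the masking round trip is exact; any loss in the retrieved image comes from the auto-encoder's compression, not from the encryption step.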

    Fig. 3 shows the modified auto-encoder, which takes a 200 × 200 pixel image as the input to the encoder. The feature vector resulting from the encoder is the input to the encryption block, which yields an encrypted feature vector as output. The transmitted vector is decrypted and then decoded to retrieve the original image. The hyperparameters of the auto-encoder affect the output image. Thus, a CNN with different numbers of layers (the main model has three layers) was utilized to test the changes in the hyperparameters of the AE and the effect of the results on the classification stage. Our main auto-encoder consisted of three layers (three convolutions and two max-pooling layers) as an encoder and another three for the decoder, as follows:

    · Layer 1: The first layer in the encoder was the input layer, consisting of 32 nodes, which took a 200 × 200 input image with a (3,3) filter. Each image had the same padding. The pooling layer for the first layer was a MaxPooling2D layer with a (2,2) pool size.

    · Layer 2: The second layer had 64 nodes, used the ReLU activation function with a (3,3) filter, and utilized the same padding for the image. Another MaxPooling2D layer with a (2,2) pool size served as the pooling layer for the second layer.

    · Layer 3: The third layer is a convolutional 2D (Conv2D) layer with the same criteria as the other convolution layers but with 128 nodes.

    The decoder also consisted of three layers, in reverse order to those of the encoder.

    · The first layer was the input layer of the decoder. The input of this layer was the feature vector that resulted from the encoder.

    · The second layer was a Conv2D layer with the same criteria as the convolution layers in the encoder but with 128 nodes, followed by an UpSampling2D layer with a (2,2) filter.

    · The third layer was a convolution layer with 64 nodes, followed by an UpSampling2D layer. The final layer was a convolution layer with 32 nodes and a sigmoid function.
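    As a sanity check on the encoder just described, the following sketch traces the feature-map sizes through it, assuming "same" padding (each 3×3 convolution leaves the spatial size unchanged) and the two (2,2) max-pooling stages after layers 1 and 2; the exact placement of the pooling stages is our reading of the text.

```python
# Trace feature-map sizes through the described encoder: three 3x3
# convolutions with 'same' padding (spatial size unchanged) and two 2x2
# max-pooling stages (each halves height and width). The layer widths
# (32, 64, 128) follow the text.
def encoder_shapes(h, w):
    shapes = [("input", h, w, 1)]
    h, w = h // 2, w // 2                # conv(32, same) + maxpool (2,2)
    shapes.append(("conv1+pool", h, w, 32))
    h, w = h // 2, w // 2                # conv(64, same) + maxpool (2,2)
    shapes.append(("conv2+pool", h, w, 64))
    shapes.append(("conv3", h, w, 128))  # conv(128, same), no pooling
    return shapes

for name, h, w, c in encoder_shapes(200, 200):
    print(f"{name:11s} {h:4d} x {w:4d} x {c}")
```

    Under these assumptions a 200 × 200 input reaches the bottleneck as a 50 × 50 × 128 feature map, which the decoder's two UpSampling2D stages expand back to 200 × 200.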

    The difference between the auto-encoder discussed in Fig. 1 and the proposed model is that there is an encryption/decryption phase between the encoding and decoding phases.

    The encryption function between the encoder and the decoder takes the feature vector resulting from the encoder as its input. At this stage, another image is added to the original image to hide it, and then the square of the resulting feature vector is calculated, as shown in Fig. 4.

    Figure 3: Modified auto-encoder

    Figure 4: The equation used for encryption

    The equation used for encryption is shown above. In the decryption step, the added image (vector) is subtracted from the encrypted image (feature vector). Then, the square root of the resulting vector is calculated. The vector resulting from the decryption step is the input vector to the decoder, which is used to retrieve the original image (Fig. 5).

    Figure 5: (A) shows the original image; (B) is the encrypted image after applying encoding and encryption; (C) is the retrieved image after applying decryption and decoding

    Our algorithm for the X-ray image can be summarized as follows:

    1. The X-ray image is encoded using a CNN auto-encoder.

    2. The encoder's output, which is an encoded vector, is then encrypted by adding another image to it and applying the power function to the feature vector.

    3. In the decryption step, the image (vector) is subtracted from the resulting vector, and then the power function is reversed using the square root function.

    4. The decrypted image is then decoded to retrieve the original image.

    5 Experimental Results and Discussion

    5.1 Dataset

    Our dataset was extracted from the clinical PACS database of the National Institutes of Health Clinical Center and consists of 60% of all reported chest X-rays in the hospital [21]. Therefore, this dataset is considered highly representative of the actual human patient population distribution and realistic. This dataset contains several images of healthy and non-healthy volunteers. We used 5000 images to train the auto-encoder and another 5000 to train the classifier and predict. As shown in Fig. 6, the number of negative cases was 5800 and the number of positive cases was 3200.

    Figure 6: Positive and negative X-rays

    5.2 Experiment

    Different hyperparameters were examined, such as the loss function, optimizer, number of layers, number of nodes, number of epochs, and activation function of the auto-encoder. Different evaluation metrics were calculated. The source code can be found at the following link [22].

    Fig. 7 shows the different metrics used to evaluate the auto-encoder, such as the mean square error (MSE) and the mean absolute error (MAE). The model with the combination of the loss function (mean square logarithmic error), optimizer (ADAM), activation function (ReLU), and a 3-layer auto-encoder provided the best results, with a minimum MSE and MAE.
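    For reference, the reconstruction metrics reported in this section (MSE, MAE, and the MSLE loss) can be computed as below; the arrays are hypothetical pixel values in [0, 1], not the paper's data.

```python
import numpy as np

# Hypothetical pixel values: a reference patch and its reconstruction.
y_true = np.array([0.2, 0.4, 0.6, 0.8])
y_pred = np.array([0.25, 0.35, 0.65, 0.70])

mse = np.mean((y_true - y_pred) ** 2)                       # mean square error
mae = np.mean(np.abs(y_true - y_pred))                      # mean absolute error
msle = np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)  # mean square log error

print(round(float(mse), 6), round(float(mae), 4))  # 0.004375 0.0625
```

    MSE penalizes large pixel errors more heavily than MAE, while MSLE (used here as a training loss) compresses the penalty for errors on bright pixels.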

    Fig. 8 shows the loss function of the auto-encoder with different hyperparameters at 10, 25, and 50 epochs. As shown in the graph, more than one combination results in a minimum loss, such as the last five combinations from the right. Fig. 8 also shows some ups and downs, indicating that the three combinations with the ups have the largest loss-function values and the least accurate models.

    Figure 7: Mean square error (MSE) and mean absolute error (MAE) for the different hyperparameter combinations in the auto-encoder

    Figure 8: The loss function at different numbers of epochs (10, 25, 50) for the different hyperparameter combinations in the auto-encoder

    After retrieving the image, we need to validate its resolution, specifically under the different hyperparameters of the auto-encoder. Thus, the image was diagnosed using a classifier. Fig. 9 shows different measures, such as recall, precision, F1, and area under the curve (AUC), of the classification algorithm under the different hyperparameters of the auto-encoder. The diagnostic classifier is another CNN classifier used to test the amount of image distortion caused by applying different hyperparameters to the transmitted image. This demonstrates that the distortion is minimal, and thus acceptable, and that the image can be diagnosed as required through a classification step.

    After retrieving images from the auto-encoder, a classification algorithm is utilized to show the acceptable amount of damage in the original image. Fig. 10 shows the training accuracy and the testing accuracy for the images resulting from changing the auto-encoder parameters. The best result comes from the third hyperparameter combination; the last combination gives the worst. The details of the effects of the different hyperparameters on the classification are as follows:

    In Case 1, the best results are achieved for the auto-encoder with RMSprop as the optimizer, three layers for the encoder and another three, reversed, for the decoder, and ReLU as the activation function. Each epoch took 15 s to run. Table 1 shows the loss parameters of the training and validation iterations.

    Figure 9: Recall, precision, F1 measure, and AUC for the classifier on the retrieved images after applying different hyperparameter combinations in the auto-encoder

    Figure 10: The training accuracy and test accuracy of the classifier on the retrieved images after applying different hyperparameter combinations to the auto-encoder

    Table 1: Loss parameters of the training and validation iterations of the auto-encoder with the best result in the classification step

    Table 1 shows the MSE, the MAE, and the loss function at 10, 25, and 50 epochs. The table shows that increasing the number of epochs decreases all three evaluation functions, on both the training and validation sets. These auto-encoder parameters provided the best results in the classification model. Fig. 11 shows the training and testing loss curves of the auto-encoder for Case 1.

    Figure 11: The training and testing loss functions

    Table 2 shows the training accuracy, test accuracy, precision, recall, F1-score, and AUC of the classifier in the first case. Case 1 shows the best evaluation metrics of the three cases.

    Table 2: Results of the classification model

    Fig. 12 shows the confusion matrix of the classification stage. The matrix presents the performance of the classification model and the relationship between the predicted and actual values. The first cell shows the false-negative results. The second cell shows the true-negative results, and the third one shows the percentage of true results relative to the whole (which is 41e+03). In the second row, the first cell is the false positives and the second cell is the true positives. The figure also shows the iterations in the classification step vs. the loss function (the loss function increases and decreases with the number of epochs until it remains roughly fixed).

    Figure 12: Left: The confusion matrix; right: The iterations in the classification step
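    The classification measures reported in Figs. 9 and 10 and in Tables 2, 4, and 6 follow directly from the confusion-matrix counts. A minimal sketch with hypothetical counts (not the paper's values):

```python
# Hypothetical confusion-matrix counts (not the paper's actual values):
tp, fp, tn, fn = 70, 10, 80, 40

accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of correct predictions
precision = tp / (tp + fp)                   # of predicted positives, how many are real
recall = tp / (tp + fn)                      # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, round(precision, 3), round(recall, 3), round(f1, 3))
```

    This also explains the zero precision, recall, and F1 reported for Case 3: if the classifier never predicts the positive class, tp = 0 and all three measures collapse to zero even though accuracy can stay above zero.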

    In Case 2, the results of the auto-encoder are not as good as in Case 1. This case used the mean square logarithmic error as the loss function, ADAGrad as the optimizer, two layers for the encoder with another set of layers reversed for the decoder, and ReLU as the activation function. Each epoch took 10 s.

    Table 3 presents the auto-encoder evaluation metrics, such as the MSE, MAE, and loss function. It shows that the evaluation functions are not very good. The evaluation metrics decrease as the number of epochs increases.

    Table 3: The best auto-encoder parameters

    Fig. 13 shows the training and testing loss functions for the auto-encoder hyperparameters in Case 2. It shows the decrease of the loss function as the number of epochs increases.

    Figure 13: The training and testing loss function

    Table 4 shows the different parameters of the classification, such as the training accuracy, test accuracy, precision, recall, F1-score, and AUC (which were not the best values). In Case 2, the results of the auto-encoder are reflected in the evaluation of the classification algorithm.

    Table 4: Some metrics of the classification step

    Fig. 14 shows the confusion matrix of the classification stage. The matrix presents the performance of the classification model and the relationship between the predicted and actual values. The first cell shows the false-negative results. The second cell shows the true-negative results, and the third one shows the percentage of true results relative to the whole (which was 17e+03). In the second row, the first cell is the false positives and the second cell is the true positives. The figure also shows the iterations in the classification step vs. the loss function (the loss function increases and decreases with the number of epochs until it remains roughly fixed).

    Figure 14: Confusion matrix and iterations in the classification step

    Case 3 had the worst classification results, after applying the auto-encoder that used hinge as the loss function, RMSprop as the optimizer, three layers for the encoder with another set of layers reversed for the decoder, and ReLU as the activation function. Each epoch took 14 s to run.

    Table 5 shows the loss parameters of the training and validation iterations. All metrics remained the same as the epochs increased, even on the validation set, which indicates that there was no learning on the model's side, even with more epochs. This table shows the worst evaluation metrics of the auto-encoder algorithm.

    Table 5: Some metrics of the auto-encoder with the worst results in the classification step

    Fig. 15 shows the training and testing loss functions for the auto-encoder hyperparameters in Case 3. The training and testing loss functions remain the same, which means there is no learning during the training or validation phases.

    Figure 15: The training and testing loss function

    Table 6 shows the different parameters of the classification,such as the training accuracy,test accuracy,precision,recall,F1-score,and AUC(which were the worst values).These matrices ensure the results of the autoenceder,which gives the worst case in all cases with zero recall,precision,and F1-score.

    Table 6: Some matrices of the classification step

    Fig.16 shows the confusion matrix of the classification stage.The results of the predictedvs.the actual were not good; the expected results were not obtained.The figure also shows the iterations in the classification stepvs.the loss function (the loss function did not yield any good results).The model of that combination of hyperparameters gives the worst retrieved image and that reflected on the classification algorithm results which are precision=0,F1-score=0,and recall=0.The blank graph shows the classification model failure.

    Figure 16:Confusion matrix and iterations in the classification step
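    The zero precision, recall, and F1-score follow directly from a confusion matrix in which the classifier never predicts the positive class. A minimal sketch (the counts are hypothetical, not the study's actual numbers):

```python
# Illustrative sketch: metrics of a classifier that collapses to always
# predicting the negative class (hypothetical counts, not the paper's data).

def precision_recall_f1(tp, fp, fn):
    """Standard binary metrics, returning 0 where the ratio is undefined."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Collapsed model: no true or false positives, every positive sample missed.
tp, fp, fn = 0, 0, 40

assert precision_recall_f1(tp, fp, fn) == (0.0, 0.0, 0.0)
```

With tp = 0, both precision and recall are 0 by convention (their denominators would otherwise vanish), and F1, their harmonic mean, is likewise 0, matching the Case 3 result.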

    Fig.17 presents the ROC curves of the classification algorithm,reflecting the effects of the various hyperparameters on the classification results.The combination of the mean square logarithmic error loss function,the ADAGrad optimizer,the ReLU activation function,and two CNN layers(MSLE+ADAGrad+ReLU+2 layers)shows the best curve,compared with the combinations of the same loss function with the ADAM optimizer and ReLU activation using either two layers(MSLE+ADAM+ReLU+2 layers)or three layers(MSLE+ADAM+ReLU+3 layers).

    Figure 17: The ROC curve of the classifier on the retrieved images obtained by applying different hyperparameters in the auto-encoder
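    The AUC values summarized by these curves can be understood through the rank interpretation: AUC is the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. The sketch below (hypothetical scores, not the study's outputs) shows why a collapsed model like Case 3 sits on the diagonal:

```python
# Illustrative sketch of the pairwise (Mann-Whitney) formulation of AUC,
# applied to hypothetical classifier scores, not the study's outputs.

def auc(scores, labels):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked correctly;
    ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]

# A discriminative model ranks every positive above every negative: AUC = 1.
assert auc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], labels) == 1.0

# A collapsed model that scores every sample identically, as in the worst
# Case 3 combination, ties every pair and lands on the diagonal: AUC = 0.5.
assert auc([0.5] * 6, labels) == 0.5
```

This is why the best-performing combination (MSLE + ADAGrad + ReLU + 2 layers) bows toward the top-left corner while the worst combination hugs the chance diagonal.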

    6 Conclusion

    In this study,we described various state-of-the-art auto-encoder architectures and demonstrated the usefulness of compressing and encrypting medical images during transmission across an IoT system.We explained the auto-encoder and how its different hyperparameters change its performance and the metrics of the classification algorithm.Based on our findings,we concluded that changing the algorithm hyperparameters affects both the evaluation metrics and the number of epochs required.Our results show that increasing the number of epochs decreases the loss function,while increasing the number of layers does not improve the evaluation metrics.Further,the ADAM and ADAGrad optimizers yielded the best results when the auto-encoder was applied with the ReLU activation function and the mean square logarithmic error loss function.Early stopping can also give good results.This paper presents a novel auto-encoder model for compressing and encrypting medical images during internet transfer.Such a model is important because of hacking over the internet,especially when transferring medical images,and compressing the images speeds up the transfer process.In future work,other auto-encoder hyperparameter permutations can be explored,and a classification algorithm with a richer structure can be used to obtain better results.

    Acknowledgement:The authors extend their appreciation to The Institute for Research and Consulting Studies at King Khalid University through Corona Research(Fast Track).

    Funding Statement:The funding was provided by the Institute for Research and Consulting Studies at King Khalid University through Corona Research(Fast Track)[Grant No.3-103S-2020].

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
