
    Deep Learning-based Wireless Signal Classification in the IoT Environment

Computers, Materials & Continua, 2022, No. 6

    Hyeji Roh,Sheungmin Oh,Hajun Song,Jinseo Han and Sangsoon Lim

Department of Computer Engineering, Sungkyul University, Anyang, 430742, Korea

Abstract: With the development of the Internet of Things (IoT), diverse wireless devices are increasing rapidly. Those devices have different wireless interfaces that generate incompatible wireless signals, and each signal has its own physical characteristics with its signal modulation and demodulation scheme. When different wireless devices coexist, they can suffer from severe Cross-Technology Interference (CTI). To reduce the communication overhead due to CTI in a real IoT environment, a central coordinator should be able to detect and identify the wireless signals existing in the same communication area. This paper investigates how to classify various radio signals using Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and an attention mechanism. CNN can reduce the amount of computation by reducing the number of weights through convolution, and LSTM, which belongs to the family of RNN models, can alleviate the long-term dependence problem. Furthermore, the attention mechanism can reduce the short-term memory problem of RNNs by reexamining the data output from the decoder and the entire data entered into the encoder at every point in time. To accurately classify radio signals according to their weights, we design a model based on CNN, LSTM, and the attention mechanism. As a result, we propose CLARINet, a model that can classify the original data by minimizing the loss and that detects changes in sequences. In a real IoT environment with Wi-Fi, Bluetooth, and ZigBee devices, we can normally obtain wireless signals at 10 to 20 dB. The radio signal classification of CLARINet with CNN-LSTM and the attention mechanism reaches its highest accuracy of about 92.03% at a signal-to-noise ratio (SNR) of 16 dB.

Keywords: Attention mechanism; wireless signal; CNN-LSTM; classification; deep learning

    1 Introduction

In the Internet of Things (IoT) environment, wireless signals differ between various wireless devices, and these signals are complexly mixed to form crowded signals [1]. When different wireless networks such as Wi-Fi, Bluetooth, and ZigBee coexist in the 2.4 GHz spectrum band, they can suffer from severe Cross-Technology Interference (CTI). To reduce the communication overhead due to CTI in a real IoT environment, a central coordinator should be able to detect and identify the wireless signals existing in the same communication area. It is therefore a very challenging task to classify and obtain individual wireless signals, and to use them reliably, in spaces where there are many obstacles, such as people or walls, or where many wireless signals are present. Based on the result of wireless signal classification, each IoT device adjusts its own communication channel to avoid congestion. In environments where wireless signals are diverse and heavily intertwined across a variety of wireless devices, it is difficult to solve these problems using traditional wireless signal classification methods. Therefore, deep learning models are being studied to solve complex wireless signal problems intelligently, enabling models and systems that show good performance compared to previously proposed wireless signal classification models.

Convolutional Neural Networks (CNN), which belong to the deep neural network family among deep learning techniques, use convolution to reduce the number of weights required for image processing, thereby reducing computation and aiming for effective image processing. A CNN consists of convolution layers and pooling layers, and solves the vanishing gradient problem by using the Rectified Linear Unit (ReLU) as the activation function. In addition, CNN is characterized by deriving output values of a given size from input values of a given size. For this reason, many studies utilize CNN to predict time series data.

Recurrent Neural Networks (RNN), one family of artificial neural networks, are the sequence models of deep learning, which use input and output data split into sequence units for natural language processing. Since an RNN has a circular structure, it processes sequence-type inputs through internal memory. The Sequence-to-Sequence (Seq2Seq) model and the Long Short-Term Memory (LSTM) cell are representative of the RNN family. The Seq2Seq model consists of two architectures, an encoder and a decoder: the encoder processes every word in the entered sentence sequentially, eventually compressing the information of every word into a context vector. This context vector transmits the compressed information to the decoder architecture, which processes it to the desired conditions and outputs the results sequentially. Each cell of the encoder and decoder of the Seq2Seq model consists of an LSTM cell or a Gated Recurrent Unit (GRU) cell. However, interpreting sentences using the Seq2Seq model suffers from information loss and vanishing gradients, which causes some information to disappear when the input sentence is long, resulting in reduced accuracy. LSTM is a variant of the RNN that can alleviate the long-term dependence problem, in which reliance on previous computational results causes memory to be lost. It also calculates weights so that important inputs can be recognized by passing through a total of three gates: the forget gate, the input gate, and the output gate. LSTM is widely used for long-term signals such as long sentences and time series predictions, as it can remember and store important parts of past data, preserve them, and extract necessary parts by iterating the task of applying them to previous and current data [2,3]. However, the problems with LSTM are that it is likely to be limited by an unbounded increase in memory, and that its computation speed is quite slow [4]. To address these problems, there are cases where peephole connections are applied to LSTM or where GRUs, which simplify the computation that updates hidden states, are used [5].

Furthermore, the attention mechanism selectively learns only the parts that have a significant impact at every point in time, thus reducing the short-term memory problem of RNNs. The attention mechanism transfers all the outputs that went through the encoder to the decoder and computes the sum of weights for the outputs of all the encoders through the decoder's memory cells to determine the important words. This process allows the decoder to focus on and process words that are considered more important than other words.

CLARINet, the model proposed in this paper, is designed to classify wireless signals close to the original by applying an attention mechanism on top of CNN-LSTM, which has recently been utilized in time series prediction with outstanding performance. CLARINet passes data through Conv1D twice, then through the LSTM layer twice, applies the attention mechanism, and finally applies the softmax function, reducing distortion and loss of the original wireless signal data. We propose a final signal classification model that minimizes distortion by passing signal data through the CNN and the three gates inherent in LSTM cells to classify importance, obtaining the attention score and attention value, concatenating the hidden states at each point in time, and iterating the output-layer computation to weight important signals. We also describe experiments to verify its accuracy and their results.

    Our main contributions are summarized as follows.

· We propose a model that can classify various radio signals by minimizing distortion and loss for each wireless signal.

· We devise a method to classify wireless signals intelligently based on CNN-LSTM by applying techniques used in natural language processing.

· We propose a model that classifies wireless signals most accurately by applying an attention mechanism on top of various existing wireless signal analysis methods.

· We help obtain an individual's wireless signal in a crowded space, and propose a reliable model in terms of the accuracy of wireless signal classification.

The rest of this paper is organized as follows. Section 2 describes studies using the radio machine learning dataset and related work on wireless signal classification using both traditional classification techniques and deep learning. Section 3 describes the structure and operating principles of the proposed CLARINet model, and Section 4 presents experimental results. Finally, we conclude this paper in Section 5.

    2 Related Work

This chapter describes studies that classify radio signals by reducing interference in complex radio signal environments and studies on deep learning-based radio signal classification solutions.

In Section 2.1, we describe studies dealing with novel algorithms for identifying wireless signal modulation or wireless signal classification techniques that propose improved directions.

In Section 2.2, we describe studies of techniques for classifying the modulation of radio signals by applying various deep learning techniques.

    2.1 Radio Signal Classification Study

Methods for identifying the modulation of wireless signals have long been studied. The different devices that make up the Internet of Things communicate using different wireless signals. However, if many IoT devices are used in one space, interference occurs between the different wireless signals. Therefore, it is difficult for wireless devices to seamlessly obtain individual wireless signals in large spaces. Since wireless signals are modulated and do not maintain the original signal during the communication process, accurately classifying signals amid complex signal interference takes considerable time and requires long training. Recently, to address this problem, radio signal classification using artificial intelligence technology has been attempted, and research continues toward performance near 90% [6]. Furthermore, we analyzed a study that judged accuracy based on a fast, real-time wireless signal classification network built to accurately classify the modulation of wireless signals [7], and, following a study in which a novel algorithm extracts key features to identify the modulation of radio signals, we designed a model that accurately classifies signals in complex radio signal environments [8]. We designed CLARINet to secure individual radio signals because these studies similarly aim at accurate classification of wireless signal modulation in crowded spaces. Furthermore, we seek to solve the problem of low radio signal classification accuracy in conventional radio signal modulation detection through deep learning-based models.

    2.2 Wireless Signal Classification Solution based on Deep Learning

Recently, research on designing deep learning-based models has been evolving rapidly, and research has been underway to increase the performance of wireless signal classification by applying various deep learning models to one or more existing wireless signal classification methods. For example, studies aim to solve the problem of wireless signal modulation classification by designing an extended framework based on CNN to increase the accuracy of radio signal classification [9], or by learning the amplitude and phase information of training data through a model based on one of the RNN variants [10]. To increase the accuracy or performance of the designed model, wireless signal classification has been attempted by applying deep learning [11] or by leveraging high-order cumulants (HOC) and machine learning [12]. Since these approaches, like ours, use the LSTM model when classifying wireless signals, we seek to address the problem of accuracy failing to exceed a certain range by grafting on an attention mechanism that can solve the information loss caused by encoding all information into fixed-length vectors.

There are also examples of analyzing models using the radio machine learning dataset to evaluate the accuracy and performance of deep learning-based signal classification models according to signal-to-noise ratio (SNR). The radio machine learning dataset contains data on 11 signals, including 8PSK, AM-DSB, AM-SSB, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK, and WBFM. Our goal is to use this dataset as training data for deep learning models to solve the problem that the accuracy of previously proposed models does not increase. One study presented a plan to improve model performance as automatic modulation classification (AMC) work progresses [13], and another study improves speed and accuracy by designing models with higher accuracy than conventional models [14]. There is a study that proposed an algorithm to enlarge the dataset so that the deep learning model can learn enough, addressing the problem of insufficient datasets [15]. In addition, there is a study in which deep learning-based models have been trained on the radio machine learning dataset to solve problems that are vulnerable to adversarial attacks. Furthermore, a classifier utilizing deep learning has used the radio machine learning dataset to demonstrate that even highly dependent and short radio signals can be misclassified when classifying radio signals [16]. And one study used a CNN-based model to extract learned features in order to cluster wireless signal modulation types even for training data that are not labeled [17].

As such, the radio machine learning dataset is commonly used to build and validate multiple deep learning-based wireless signal classification techniques, analyzing accuracy and checking performance. Therefore, our designed CLARINet model similarly clarifies the criteria for determining accuracy by leveraging the radio machine learning dataset as training data to validate the performance of the model and of the training.

    3 Design of Model

We design CLARINet, a deep learning-based wireless signal classification model, which incorporates an attention mechanism into the results produced by two Conv1D layers and two LSTM layers. CLARINet allows complex and diverse wireless signal input data to have their features extracted even under transformations or distortions of attributes via the CNN; when the extracted data are entered through the LSTM encoder, each gate's characteristics are processed via the three LSTM gates, and the output produced through the LSTM cells is applied to the attention mechanism. The attention mechanism analyzes more intensively the data considered necessary by the LSTM cells, producing more accurate classification results compared with results using only LSTM. Finally, the final result obtained by the attention mechanism is passed through softmax regression using the cross-entropy function as the cost function, classifying the initially entered complex radio signals into a total of 11 radio signals (8PSK, AM-DSB, AM-SSB, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK, and WBFM).

    3.1 Data

The collected signals for data classification take a complex-number form for flexibility and simplicity of mathematical operations, expressed as I = A cos(φ) and Q = A sin(φ), where A and φ refer to the instantaneous amplitude and phase of the collected signal. The RadioML2016.10a dataset follows this data representation using I and Q, and we use the RadioML2016.10a dataset for training and performance evaluation of the CLARINet model proposed in this paper. RadioML2016.10a is a synthetic dataset of modulation methods currently in commercial use, generated with GNU Radio, which implements realistic noise environments such as multipath fading and white noise. The dataset contains 128-sample windows at 4 samples/symbol. It is stored as a Python dict in a Python pickle file, consisting of keys and values. Each key is a tuple of one of 11 modulation methods and an SNR from -20 to 18 dB, and each value is a numpy array of shape (1000, 2, 128) corresponding to one of the 220 keys; that is, 1000 sample windows, each holding the I and Q values of 128 samples.
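
As a concrete illustration of this layout, the sketch below loads the pickle and stacks the per-key arrays; the file name RML2016.10a_dict.pkl and the latin1 encoding argument are assumptions about the local copy rather than details stated in the paper.

```python
# A minimal loading sketch for the RadioML2016.10a pickle described above.
# The file name and the latin1 encoding are assumptions about the local copy.
import pickle
import numpy as np

with open("RML2016.10a_dict.pkl", "rb") as f:
    dataset = pickle.load(f, encoding="latin1")   # keys: (modulation, snr) tuples

X, labels = [], []
for (mod, snr), windows in dataset.items():       # windows: (1000, 2, 128) I/Q array
    X.append(windows)
    labels.extend([(mod, snr)] * windows.shape[0])

X = np.vstack(X)                                   # (220000, 2, 128) in total
print(X.shape, len(labels))
```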

In this work, we judged that low-SNR data adversely affected learning performance, so we conducted learning using SNRs from -10 to 18 dB, and because learning directly on the I and Q values did not perform well, we converted the I and Q values to phase and amplitude values. Furthermore, we compress the data from 128 samples to 64 samples by replacing each pair of adjacent values with their average, for better learning performance. As a result, we achieved approximately a six-fold performance improvement in CPU environments, with no significant change in the shape of the data and little impact on accuracy. This allows us to complete the learning in a reasonable amount of time without using GPUs. For maintenance retraining, this is expected to save considerable cost when learning on cloud servers.
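
The preprocessing just described can be sketched as follows; it assumes the X array and (modulation, snr) labels from the loading sketch above, and the -10 dB cutoff and the 128-to-64 averaging follow the text.

```python
# A sketch of the preprocessing steps described above. `X` is the (N, 2, 128)
# I/Q array and `labels` the (modulation, snr) list from the loading sketch.
import numpy as np

keep = np.array([snr >= -10 for _, snr in labels])  # drop low-SNR windows
X_hi = X[keep]

i, q = X_hi[:, 0, :], X_hi[:, 1, :]
amplitude = np.sqrt(i ** 2 + q ** 2)                # instantaneous amplitude A
phase = np.arctan2(q, i)                            # instantaneous phase (radians)

features = np.stack([amplitude, phase], axis=-1)    # (N', 128, 2) channels-last
# Average each adjacent pair of samples to compress 128 samples into 64.
features = features.reshape(features.shape[0], 64, 2, 2).mean(axis=2)
print(features.shape)                               # (N', 64, 2)
```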

    3.2 CNN

CNN has a structure in which input data pass through the convolution layer and the pooling layer, with a fully connected layer at the end. The overall structure of CNN is shown in Fig. 1.

The convolution layer is responsible for maintaining the shape of the input data and creating a feature map for each filter, and the pooling layer receives the output data from the convolution layer as input, reducing the size of the output data or emphasizing specific data. This allows features to be extracted from the data even where they are modified or distorted. In addition, this layer can extract properties automatically. The convolution layer is a required layer, and the pooling layer is an optional layer.

Figure 1: CNN architecture

The convolution layer uses filters (two-dimensional matrices of size N×M) to extract features of the image. In two-dimensional data consisting of height and width, the N×M-sized filter traverses the data at a specified interval, multiplying the overlapping data by the values of the elements in the kernel and then adding all the multiplied values. The traversal interval is called the "stride"; if the stride is specified as 1, the filter moves one column at a time and performs a convolution. The output of the convolution is called the feature map, and the result of applying the activation function to the feature map is called the activation map. After the convolution process, the output data are smaller than the input data, so a process called padding is used to prevent the output from shrinking. Padding fills the edge of the input data with a specific value, by a specified size, usually zero.
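
As a toy illustration of the stride and zero-padding behavior described above, the NumPy sketch below computes a one-dimensional feature map with a single filter; the signal and kernel values are invented for the example.

```python
# Toy 1-D convolution with explicit stride and zero padding.
import numpy as np

def conv1d(x, kernel, stride=1, pad=0):
    x = np.pad(x, pad)                          # zero padding on both edges
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.sum(x[i * stride:i * stride + k] * kernel)
                     for i in range(out_len)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.5, 1.0, 0.5])
print(conv1d(signal, kernel, stride=1, pad=1))  # feature map of the same length
```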

Pooling layers include max pooling, min pooling, and average pooling. Likewise, the concepts of filter and stride apply to the pooling operation, and usually the filter size and stride are identical so that every element is processed exactly once. For max pooling, we extract the maximum value from the region where the filter and the data overlap; similarly, average pooling extracts the mean. The convolution and pooling operations look similar in that the filter and stride concepts are used, but the pooling operation differs in that no weights exist.
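
A matching toy sketch of max pooling with filter size and stride both equal to 2 is shown below; the input values are again invented.

```python
# Toy 1-D max pooling where filter size and stride are equal, so every element
# is visited exactly once.
import numpy as np

def max_pool1d(x, size=2, stride=2):
    return np.array([x[i:i + size].max()
                     for i in range(0, len(x) - size + 1, stride)])

feature_map = np.array([2.0, 4.0, 6.0, 8.0, 7.0, 1.0])
print(max_pool1d(feature_map))   # -> [4. 8. 7.]
```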

We apply the CNN layers in CLARINet to reduce the loss relative to the original data and to make sure that we do not lose the association of each radio signal.

    3.3 LSTM

LSTM is used to alleviate the long-term dependence problem of RNNs mentioned in Section 1, and its overall structure consists of a cell and three gates, as shown in Fig. 2. Cells hold values over arbitrary time intervals, and the three gates are responsible for removing unnecessary information or keeping only the information that is considered important.

The cell state of LSTM is the horizontal line at the top of Fig. 2. The cell state (C_t) represents the state of the cell; it is used to obtain the state of the next cell from the state of the previous cell and serves to convey information generated at the previous node. The cell state allows the operation to be repeated because it carries forward the information considered important in the previous step. LSTM performs the process of adding or erasing information from the cell state through three gates. tanh stands for the hyperbolic tangent function and σ for the sigmoid function. LSTM has three gates: a forget gate, an input gate, and an output gate. These gates are used to obtain the hidden state and cell state values. For each gate, a sigmoid function is applied to determine whether the previous data will affect the following data, based on the derived values between 0 and 1.

Figure 2: LSTM architecture

In the operating process of LSTM, shown in Fig. 3, the wireless signal input first enters the forget gate. The forget gate determines what part of the input information is reflected in the current information through the operation of Eq. (1) and is computed via the sigmoid function.

Figure 3: LSTM gate architecture

t represents the time point, W_f represents the weight, and b_f represents the bias.

The f_t of Eq. (1) takes a value in [0, 1] through the sigmoid function. If f_t is close to 1, the previous information is strongly reflected; if f_t is close to 0, the value is reflected less.

The input gate, configured as in Fig. 3, is responsible for deciding which parts of the new information to store. To remember new information, it performs the operations in Eqs. (2) and (3). t represents the time point, W_i and W_C denote the weights, and b_i and b_C denote the biases.

The i_t of Eq. (2) takes a value in [0, 1] obtained by the sigmoid function and determines how much of the present information is reflected. When the value of i_t is 1, the values in [-1, 1] obtained from the hyperbolic tangent function of Eq. (3) determine the candidate vector to be added to the cell state. These two values select information and determine how much of it to remember.

Then, to combine the contents of the forget gate and the input gate, we follow Fig. 3 and update the information by applying Eq. (4) to the cell state C_{t-1} passed from the previous node.

Finally, the output gate is the final step in determining which data to output, corresponding to the last step in Fig. 3. The input is passed through the sigmoid function and a value in [0, 1] is output, which determines whether to export part of the cell state as output. This output is then passed through the hyperbolic tangent function to become the input of the next state.
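
For reference, a standard formulation of the gates described above is given below in LaTeX; this is the common textbook form and is assumed to correspond to Eqs. (1)-(4), whose exact notation is not reproduced here.

```latex
% Standard LSTM gate equations, assumed to correspond to Eqs. (1)-(4) above.
\begin{align}
f_t &= \sigma\!\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)        && \text{forget gate, Eq.~(1)} \\
i_t &= \sigma\!\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)        && \text{input gate, Eq.~(2)} \\
\tilde{C}_t &= \tanh\!\left(W_C \cdot [h_{t-1}, x_t] + b_C\right) && \text{candidate state, Eq.~(3)} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t                  && \text{cell-state update, Eq.~(4)} \\
o_t &= \sigma\!\left(W_o \cdot [h_{t-1}, x_t] + b_o\right), \qquad
h_t = o_t \odot \tanh(C_t)                                        && \text{output gate and hidden state}
\end{align}
```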

We leverage the CNN layers of CLARINet to automatically filter the given data well. Furthermore, we put the CNN layers at the front of CLARINet so that, when classifying radio signals, the model learns the important characteristics of each radio signal and preserves the association of the data, leveraging them to remember the features of the radio signals.

    3.4 Attention Mechanism

Attention references and applies the entire input sentence at each point when predicting the output data, concentrating on the data associated with the data to be predicted and forwarding it to the decoder. This mechanism allows more data to be delivered than was previously possible. The way the attention mechanism is used is shown in Fig. 4.

Figure 4: Attention mechanism architecture

The attention score must be obtained to apply the attention mechanism to the layers consisting of LSTM. The attention score measures the similarity between each hidden state of the encoder and the hidden state s_t of the decoder at the present time. The softmax function is applied to the scores to obtain an attention distribution in which the sum of all values equals 1; each value is an attention weight. These weights and the encoder hidden states provide the attention value a_t. The hidden state of the decoder is concatenated with this value, and the operation of Eq. (7) is performed to create s̃_t and use it as the input of the output layer. W_c stands for the weight matrix, and b_c stands for the bias.
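
A standard (Luong-style) formulation of these steps is summarized below; the dot-product score and the symbol choices are assumptions made for illustration, since the paper's score equation and Eq. (7) are not reproduced here.

```latex
% Dot-product attention over encoder hidden states h_i and decoder state s_t,
% assumed to match the description above.
\begin{align}
\mathrm{score}(s_t, h_i) &= s_t^{\top} h_i \\
\alpha_i^{t} &= \frac{\exp\bigl(\mathrm{score}(s_t, h_i)\bigr)}
                     {\sum_{j}\exp\bigl(\mathrm{score}(s_t, h_j)\bigr)}
                     && \text{attention weights (softmax)} \\
a_t &= \sum_{i} \alpha_i^{t}\, h_i                      && \text{attention value} \\
\tilde{s}_t &= \tanh\!\left(W_c\,[\,a_t;\, s_t\,] + b_c\right) && \text{output-layer input, Eq.~(7)}
\end{align}
```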

    We apply an attention mechanism to CLARINet to design an accurate classification by weighting the factors that have a significant impact when distinguishing the features of each radio signal.

    3.5 Architecture of CLARINet

Finally, our CLARINet network is designed to reduce conversion, distortion, and loss with respect to the original data through two Conv1D layers, extract the data, and delete all but the critical data through two LSTM layers. We then add an attention mechanism that weights important data, so that the analysis of radio signals can focus on the important data when classifying them. The structure of CLARINet performing this process is shown in Fig. 5.

Figure 5: CLARINet model architecture

CLARINet receives the original radio signal of the frequency-band signal as input data. We set the number of filters, a basic property of CLARINet, to 64. Each filter has a fixed size and stride; it traverses the data according to the stride length and generates an activation map as output. Therefore, CLARINet is designed to generate 64 activation maps through its 64 filters.

We designed the data passed through the CNN layers to remember the features each wireless signal has, treating them as important information, in order to classify wireless signals through the LSTM layers. CLARINet is also designed to remove hidden units with a 60% chance by setting the dropout of the LSTM layers to 0.6. Dropout is a type of regularization that mitigates overfitting and prevents the network from depending on any single unit. Through this, the CLARINet model is designed to avoid overfitting and excessive dependence.

The output data from the CNN layers and LSTM layers are passed to the attention mechanism to address the gradient loss problem and increase accuracy, and the classified radio signals retain an organic relationship with each other. Eventually, the data passed through the attention mechanism are classified into 11 radio signals (8PSK, AM-DSB, AM-SSB, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK, WBFM) via softmax regression.
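
A minimal Keras sketch of this layer stack is given below; the kernel size, the use of keras.layers.Attention for the attention step, the pooling of the attended sequence, and the optimizer are assumptions, since the paper does not specify them.

```python
# A minimal Keras sketch of the CLARINet layer stack described above.
# Kernel size, the keras.layers.Attention step, the pooling of the attended
# sequence, and the optimizer are assumptions not stated in the paper.
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(64, 2))               # 64 samples x (amplitude, phase)
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
x = layers.LSTM(64, return_sequences=True, dropout=0.6)(x)
x = layers.LSTM(64, return_sequences=True, dropout=0.6)(x)

# Dot-product self-attention over the LSTM outputs, pooled into one context vector.
context = layers.Attention()([x, x])
context = layers.GlobalAveragePooling1D()(context)

outputs = layers.Dense(11, activation="softmax")(context)  # 11 modulation classes
model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```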

    4 Result of Experiment

To compare the performance of CLARINet in terms of accuracy, we conducted experiments with a comparison group of models combining two CNN layers and two LSTM layers; two CNN layers; one LSTM layer; two LSTM layers; and two CNN layers with the attention mechanism. The results of our comparison of accuracy according to SNR are shown in Fig. 6.

Figure 6: Classification accuracy comparison of CLARINet

When SNR is in the range of -20 to -10 dB, CLARINet and all seven selected comparator models show less than 10% accuracy, with only a narrow rise. Therefore, we analyze the SNR range of -10 dB and above to clearly compare accuracy for CLARINet. Our design of CLARINet shows an average accuracy of 77.40% for SNRs above -10 dB, reaches its highest accuracy of 92.03% at 16 dB, and is approximately 7.34% higher than the seven comparators at 16 dB.

Analyzing the seven comparator models selected for classifying radio signals, we find that the classification accuracy of models with CNN layers is higher than that of models without CNN layers, and that models with two CNN layers show an average accuracy of 80% in the range of 0 to 18 dB. Furthermore, comparing models built on CNN layers, the version with two LSTM layers has approximately 2.98% higher accuracy than the version with only the attention mechanism. This shows that CNN layer-based models exhibit high accuracy for radio signal classification in the range of 0 to 18 dB, and that an additional LSTM layer or attention mechanism can be applied to increase accuracy further.

The results of the CLARINet experiments at SNRs of -8, 0, 16, and 18 dB, used to verify accuracy in classifying radio signals in complex radio signal environments, are shown in Figs. 7–10.

Figure 7: Confusion matrix at SNR -8 dB

SNR below -10 dB has little significance in the results because of its low accuracy, and at an SNR of -8 dB accuracy is mostly low, as shown in Fig. 7. However, out of the 11 modulation techniques, we can confirm that AM-DSB, AM-SSB, PAM4, and QAM64 exceed 50% accuracy, which means that these modulation techniques can be classified to some extent even when the noise is severe.

According to Fig. 8, CLARINet shows nearly 90% accuracy on average at an SNR of 0 dB, and from 4 dB the accuracy exceeds 90% on average. In particular, AM-SSB is classified with an accuracy of over 90%. While most signals exceed 90% accuracy at an SNR of 0 dB, 8PSK, QAM16, QAM64, and WBFM among the 11 modulation techniques do not. QAM16 is a subset of QAM64, so QAM16 is often misjudged as QAM64 because the two differ only in the number of bits that can be sent per symbol. WBFM has the lowest accuracy among the 11 modulation techniques, and the WBFM signal is misclassified as AM-DSB due to the absence of a signal, because it was modulated from a real audio stream. At 18 dB SNR, most modulation techniques show an average accuracy of over 90%, as shown in Fig. 9, and the accuracy for QAM16 is about 10% higher than at 0 dB. Furthermore, WBFM is also about 10% higher than its accuracy at 0 dB, but still relatively lower than the other signals.

Figure 8: Confusion matrix at SNR 0 dB

Figure 9: Confusion matrix at SNR 18 dB

Figure 10: Confusion matrix at SNR 16 dB

We retained the basic properties of CLARINet to verify the experimental results according to its properties, and proceeded with the experiment by changing the number of filters. CLARINet has 64 filters, and the numbers of filters in the comparison models we selected are 32, 64, 128, and 256. The experimental results comparing accuracy according to the number of filters are shown in Fig. 11.

Figure 11: Classification accuracy results by number of filters

When the number of filters is 64, the model shows the highest accuracy across the whole 0 to 18 dB range, reaching the highest accuracy of 92% at 16 dB. While 128 filters and 256 filters give similar accuracy overall, 128 filters give slightly higher accuracy. The number of filters is usually increased in later layers; since the CNN layers are located at the front of the CLARINet model, the relatively small number of 64 filters yields higher accuracy than 128 or 256 filters.

We further analyzed the models with 64 filters and 128 filters, which showed high accuracy in Fig. 11. The results of analyzing the two models in terms of total steps, initial loss, final loss, and runtime are shown in Tab. 1.

Table 1: 64 filter and 128 filter analysis results

Comparing epochs according to the number of filters, the 64-filter model ran 20 more total steps than the 128-filter model, which led to more learning. We used categorical_crossentropy as the loss function of CLARINet, and after checking the loss cost, we found that both the 64-filter and 128-filter models show approximately 1–2% loss. We can also see that the final runtime of our designed CLARINet is about two hours shorter than that of the 128-filter model, depending on the number of filters.
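
A training run of the kind summarized in Tab. 1 could look like the sketch below; it assumes the model from the architecture sketch and one-hot labels y_onehot prepared from the dataset, and the batch size, epoch count, and validation split are illustrative values rather than figures reported in the paper.

```python
# Training sketch using the categorical_crossentropy loss mentioned above.
# `model`, `features`, and `y_onehot` come from the earlier sketches; batch size,
# epochs, and validation split are illustrative assumptions.
history = model.fit(features, y_onehot,
                    batch_size=256,
                    epochs=50,
                    validation_split=0.2)
print("initial loss:", history.history["loss"][0])
print("final loss:  ", history.history["loss"][-1])
```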

In addition, experimental results comparing accuracy while changing the value of dropout, a regularization technique to prevent overfitting, are shown in Fig. 12. p, the hyperparameter of dropout, means the drop probability; the behavior of dropout changes depending on the value of p. Our designed CLARINet selects a dropout p value of 0.6.

Figure 12: Classification accuracy results by dropout

According to Fig. 12, when dropout is 0.6 the model shows the highest accuracy from 0 to 18 dB, and when dropout is 0.4 it shows the next highest accuracy. When dropout is 0.8, performance is low, indicating that overly strong regularization lowers accuracy; accuracy is generally good when the drop probability is kept between 20% and 60%. The model showed an accuracy of 92% at 16 dB. Through this, we confirm that dropping units with a 60% chance at each training step yields the best performance for improving the accuracy of CLARINet.

Based on the experimental results, we design CLARINet as a structure of two CNN layers, two LSTM layers, and an attention mechanism, with 64 filters in the CNN layers and 0.6 dropout in the LSTM layers, selecting the model with the highest accuracy.

    5 Conclusion

We propose CLARINet, a novel model that integrates CNN layers with LSTM layers and an attention mechanism as a deep learning-based solution for classifying wireless signals in the IoT environment. Many previous studies have attempted radio signal classification based on original signals with little distortion or loss, and have proposed ways to improve the performance of radio signal classification by incorporating various techniques. However, the exact classification of each radio signal has yet to be completely resolved, as radio signals do not propagate separately but propagate in complex mixtures in crowded spaces. Previous studies have mainly improved accuracy by implementing LSTM-based models. Therefore, we design CLARINet, a model with LSTM layers and an attention mechanism applied on top of a CNN-based model, to accurately classify complex radio signals by their features.

We show that CLARINet, which is designed to obtain individual radio signals in congested spaces, achieves approximately 60% accuracy across environments with SNRs of -20 to 18 dB, and approximately 92.03% accuracy at 16 dB. Analysis of the CNN layers and LSTM layers used in the CLARINet structure shows that CNN-based models have an average accuracy about 40% higher than LSTM-based models. Through this, we have shown that classifying complex radio signals through CNN-based models exhibits higher accuracy than models that do not use them. Furthermore, we use the attention mechanism to weight the features of radio signals, remembering only the important features and classifying by them, identify the possibility of minimizing distortion and loss, and finally confirm that the signals can be classified into 11 radio signals via softmax regression.

In the future, we plan to improve accuracy by changing the attributes of the CLARINet model or by adding layers, and to explore ways to reduce the misclassification of QAM16 as QAM64 and of WBFM as AM-DSB. Furthermore, we plan to apply CLARINet to image classification experiments to improve problems that suffer from data loss or distortion, and to simplify our model to optimize overall performance.

Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1063319).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
