
    Modulation recognition network of multi-scale analysis with deep threshold noise elimination


    Xiang LI, Yibing LI, Chunrui TANG, Yingsong LI

    1College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China

    2Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin 150001, China

    3China Coal Technology Engineering Group Chongqing Research Institute, Chongqing 400037, China

    4State Key Lab of Methane Disaster Monitoring & Emergency Technology, Chongqing 400039, China

    Abstract: To improve the accuracy of modulated signal recognition in variable environments and reduce the impact of factors such as lack of prior knowledge on recognition results, researchers have gradually adopted deep learning techniques to replace traditional modulated signal processing techniques. To address the problem of low recognition accuracy of the modulated signal at low signal-to-noise ratios, we have designed a novel modulation recognition network of multi-scale analysis with deep threshold noise elimination to recognize actually collected modulated signals under a symmetric cross-entropy function with label smoothing. The network consists of a denoising encoder with deep adaptive threshold learning and a decoder with multi-scale feature fusion. The two modules are skip-connected to work together to improve the robustness of the overall network. Experimental results show that this method has better recognition accuracy at low signal-to-noise ratios than previous methods. The network demonstrates a flexible self-learning capability for different noise thresholds and the effectiveness of the designed feature fusion module in multi-scale feature acquisition for various modulation types.

    Key words: Signal noise elimination; Deep adaptive threshold learning network; Multi-scale feature fusion; Modulation recognition

    1 Introduction

    Signal modulation identification is widely used in intelligent communication systems, electronic warfare, spectrum resource monitoring, and other fields (Liu et al., 2020). In the field of intelligent communication systems, with the substantial increase in the number of end-users, effective identification methods are needed to distinguish between multiple modulation techniques for data transmission to achieve efficient transmission, and thus to ensure stable and reliable communication systems. In electronic warfare, modulation identification can help the receiver identify the signal type accurately. Modulation identification helps estimate the carrier frequency and bandwidth of the signal to carry out subsequent work such as demodulation and decoding effectively. In spectrum resource monitoring, the radio resource management department needs to use modulation identification technology to detect and manage radio resources to guarantee legitimate users' regular communication and prevent resource abuse (Peng et al., 2022).

    Current automatic modulation classification techniques fall into three main categories: decision theory based, feature-based, and deep learning based approaches (Han et al., 2021).

    The decision theory based modulation identification method aims to construct likelihood probability models for multiple hypothesis testing of categories based on the calculated probabilities of different modulation types. Therefore, this method is also known as the likelihood ratio judgment based algorithm. Although decision theory based modulation identification methods have matured (Huang S et al., 2017; Phukan and Bora, 2018; Salam et al., 2019), they still have some shortcomings. First, the likelihood function model to be selected is becoming more and more complex, requiring much more prior knowledge. Second, the model is usually built for a single specific scenario, so its generalization ability and universality are poor.

    The feature-based recognition method performs feature extraction from individual signals, and its overall process is divided into signal pre-processing, feature extraction, and classification of modulation categories based on feature parameters. Feature extraction techniques have been based on signals' higher-order moments, singular value decomposition, cyclostationarity, etc. (Tayakout et al., 2018; Eltaieb et al., 2020; Serbes et al., 2020). In addition to extracting different signal features, classifier design can be studied. Classifier designs have been based on decision trees (Dahap and Hongshu, 2015), support vector machines (Wei YJ et al., 2019), and random forests (Li T et al., 2020). Existing feature-based modulation recognition is usually based on specific signal samples and thus has limited recognition performance in noisy environments. The overly complex extraction methods introduce many parameters and increase the computational cost of the modulation recognition system, and the method for processing artificially selected features lacks universality.

    In response to the above problems, methods based on deep learning are gradually being applied in signal modulation recognition. Deep learning is a method that uses multi-layer neural networks for massive data processing. It easily analyzes the features of different data dimensions with the powerful feature extraction capabilities of neural networks, such as local connectivity, parameter sharing, and equivariant representation. It can obtain the implicit mapping relationship between input and output, eliminating the complicated step of manual feature selection (Schmidhuber, 2015). A neural network can approximately fit any function. Meng et al. (2018) proposed an end-to-end convolutional automatic modulation recognition neural network that outperforms feature-based methods. The method proposed by Zhang et al. (2019) fuses the handcrafted features of different images and signals and uses a convolutional neural network to design a multi-modal feature fusion model for automatic modulation recognition. Xu JL et al. (2020) designed a model with multichannel input using one-dimensional (1D) convolution, two-dimensional (2D) convolution, and long short-term memory layers to extract features from multiple channels for classification. Zhu et al. (2020) proposed a multi-label complex signal modulation identification framework for identifying different types of complex signals. Li LX et al. (2021) designed a capsule network to perform automatic modulation recognition with fewer training samples. A low-latency automatic modulation identification method applying a temporal convolutional network has been proposed to meet the real-time requirements of communication services (Xu YQ et al., 2022). Li L et al. (2023) proposed a deep-learning hopping capture model, which uses a bidirectional long short-term memory model to identify hopping features, and performs wireless communication signal classification under short data. The method of An et al. (2022) identifies the modulation type of multiple input multiple output orthogonal frequency division multiplexing (MIMO-OFDM) subcarriers using a series-constellation multi-modal feature network to achieve modulation identification in realistic non-cooperative cognitive communication scenarios. Doan et al. (2022) used a deep learning network for automatic modulation identification and direction of arrival (DOA) estimation, enabling joint multi-task learning of the same network. The deep learning based method learns the differences between different modulation signals autonomously through repeated training on radio data, thereby increasing modulation recognition accuracy and making up for the shortcomings of likelihood ratio judgment based and feature-based modulation recognition methods. Although deep learning techniques have been investigated in modulation recognition, most algorithms have low recognition rates at low signal-to-noise ratios (SNRs) and involve complex data pre-processing.

    To address these issues, we first use software radio equipment to acquire the in-phase and quadrature components of multiple modulated signals in a natural environment and pre-process them by wavelet transform. We use a deep adaptive threshold denoising network as the encoder, and design a threshold self-selection module to denoise the signal and extract the input data features simultaneously. We use a module with upsampling as a decoder to restore data, layer by layer, for classification. The proposed modulation recognition scheme uses not only the idea of encoding and decoding, but also deep multi-scale feature fusion. It uses skip connections to connect denoised encoded features with decoded features output from multi-scale analysis and upsampling to learn the differences between different kinds of signals.

    2 Modulation signal

    The modulation signal dataset is produced through two stages: signal acquisition and signal pre-processing.

    2.1 Signal acquisition

    Most modulation identification research is still based on simulation datasets generated by mathematical software. This approach does not account for the impact of the real transmitting and receiving environment on the signal. In the actual sending and receiving process, the signal may experience attenuation and distortion caused by space propagation loss, interference from atmospheric noise such as thunderstorms and lightning, and may also become intermittent because of unstable transmitting and receiving equipment. In our study, we build a signal transceiver system comprising a universal software radio peripheral (USRP), antenna, and software radio platform in a natural environment. USRP N210 is selected as the hardware device for signal transmission and reception. The software radio platform is used to generate, store, and analyze the actual modulated signals. Fig.1 shows the architecture of the signal transceiver system.

    Fig.1 Architecture of the signal transceiver system

    Flow graphs are constructed using GNU (GNU's Not Unix) Radio Companion, and a file source module is used to read the set signal data flow from the personal computer (PC). The data in the file source are pre-designed data of multiple modulation types. The modulation categories selected for this study are based on those previously used for radio datasets in modulation identification (O'Shea and West, 2016). Modulation types are divided into analog modulation and digital modulation. Analog modulation includes double side band (DSB) modulation, single side band (SSB) modulation, and frequency modulation (FM). Digital modulation includes 8 phase shift keying (8PSK), binary phase shift keying (BPSK), continuous phase frequency shift keying (CPFSK), Gaussian frequency shift keying (GFSK), pulse amplitude modulation 4 (PAM4), 16 quadrature amplitude modulation (16QAM), 64 quadrature amplitude modulation (64QAM), and quadrature phase shift keying (QPSK). After sampling, the modulated signal can be expressed as

    where A(k) is the instantaneous amplitude of the signal, f(k) is the instantaneous frequency, and θ(k) is the instantaneous nonlinear phase. Using the trigonometric formula, we obtain

    where I(k) is the in-phase component and Q(k) is the quadrature component of the complex signal. Noise is added at different intensities for the different kinds of modulated design signals. The SNR increases from -10 to 10 dB in 2-dB increments. The noised signal is as follows:

    where n(k) is the added noise.
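
    As an illustration of the noise model above, the following sketch adds complex white Gaussian noise to an I/Q sample vector at a chosen SNR. It is a minimal NumPy example; the helper name add_awgn and the BPSK-like test burst are ours, not part of the acquisition system.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(iq: np.ndarray, snr_db: float) -> np.ndarray:
    """Add complex white Gaussian noise to an I/Q sample vector at a target SNR (dB)."""
    sig_power = np.mean(np.abs(iq) ** 2)             # average signal power
    noise_power = sig_power / (10 ** (snr_db / 10))  # noise power implied by the target SNR
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(iq.shape)
                                        + 1j * rng.standard_normal(iq.shape))
    return iq + noise

# Example: a BPSK-like burst noised at every SNR used in the paper (-10 to 10 dB, 2-dB steps)
symbols = np.sign(rng.standard_normal(128)).astype(complex)
noisy = {snr: add_awgn(symbols, snr) for snr in range(-10, 12, 2)}
```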

    2.2 Signal pre-processing

    Our scheme adopts the pre-processing method of wavelet noise reduction for the received in-phase and quadrature data, and saves the multi-channel data and SNR labels of each modulation type. The processed data are directly fed into the deep learning network recognition model.

    Wavelet threshold noise cancellation is a classical method in signal noise reduction (Donoho, 1995). The wavelet transform originated from the Fourier transform, which converts time domain functions to frequency domain functions by transforming them into trigonometric functions or their linear superposition (Harris, 1978). The Fourier transform uses the entire signal in the time domain to extract spectral information, and obtains a single determined spectral value that does not reflect local characteristics. Compared with the Fourier transform, the wavelet transform chooses a finite-length family of wavelet functions (Chang et al., 2000). The family is obtained by translating and scaling the wavelet basis, which decays rapidly to zero and integrates to zero over (-∞, +∞); i.e., the amplitude oscillates between positive and negative. The essence of the wavelet transform is the inner product of the signal and the family of wavelet functions, i.e., the projection of the signal onto the family of wavelet functions (Sendur and Selesnick, 2002). The classical wavelet transform equation is as follows:

    where f(t) is the input signal, Ψ(t) is the wavelet basis function, a is the scale parameter that performs function scaling, and b is the translation parameter that changes the function action position. The result of the transformation reflects not only the frequency components contained in the signal, but also the corresponding time domain location. Most practical applications use discrete wavelet function families:

    where the scale and translation are discretized as a = a_0^m and b = nb_0a_0^m, with m, n ∈ Z and a_0 > 1. The wavelet transform relies on different m and n for different resolutions, as well as different translations, to decompose the signal to different scales. Therefore, the wavelet transform can analyze the localization of non-stationary signals in the time–frequency domain.

    We choose Daubechies' wavelet basis function for the discrete wavelet transform. Daubechies' wavelet belongs to the family of compactly supported orthogonal wavelets. As a common function for signal decomposition and reconstruction, it has good regularity (Li B and Chen, 2014). The Mallat algorithm carries out the decomposition, and the wavelet coefficients of the low and high frequencies are

    where c_j[k] is the low-frequency wavelet coefficient, and d_j[k] is the high-frequency wavelet coefficient. The selected wavelet basis function determines the scale and wavelet coefficients. The number of decomposition layers is j, and N is the signal length. Most of the noise in the data is distributed in the high-frequency details, and needs to be eliminated. A fixed threshold is used to remove noise (Jia et al., 2013). The formula for threshold selection is as follows:

    where λ is the selected threshold and w is the original wavelet coefficient. For the threshold function, the soft threshold selected for denoising is

    where w_λ is the wavelet coefficient after noise reduction. When the absolute value of a wavelet coefficient is greater than the given threshold, the threshold is subtracted from the wavelet coefficient; when the absolute value is less than the given threshold, the wavelet coefficient is discarded. The wavelet inverse transform, i.e., wavelet reconstruction, is then performed on the filtered signal. The equation is as follows:

    The low-frequency coefficients and the denoised high-frequency coefficients are reconstructed to complete the wavelet noise reduction pre-processing and obtain an estimate of the recovered original signal.
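
    The pre-processing chain described in this subsection (Mallat decomposition, fixed-threshold soft shrinkage of the detail coefficients, and wavelet reconstruction) can be sketched with PyWavelets as follows. The universal threshold λ = σ√(2 ln N) with a median-based noise estimate is one common fixed-threshold choice and is an assumption here, as is the decomposition level.

```python
import numpy as np
import pywt

def wavelet_denoise(x: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Soft-threshold the high-frequency wavelet coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)     # Mallat decomposition: [cA_j, cD_j, ..., cD_1]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimated from the finest detail band
    lam = sigma * np.sqrt(2 * np.log(len(x)))          # fixed (universal) threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]   # wavelet reconstruction of the filtered signal

# Applied independently to the in-phase and quadrature channels before stacking them for the network.
```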

    3 Automatic modulation recognition system model

    In this section, we first describe the overall framework of the signal recognition system and introduce the recognition network in the framework, i.e., the deep adaptive threshold feature fusion network. We then provide detailed descriptions of two critical sub-networks of the recognition network: the deep adaptive threshold denoising network and the deep multi-scale feature fusion network.

    3.1 Overall framework of the signal recognition system

    The overall framework of the signal recognition system is shown in Fig.2. The signal transceiver system collects the modulation signal to obtain in-phase and quadrature components. We use wavelet noise reduction on the components and combine them into multi-channel data. At this point, the data processing is completed. The pre-processed data are read into the deep adaptive threshold feature fusion network designed in this study to obtain a prediction. The symmetric cross-entropy loss function between the predicted category and actual category is calculated to obtain the loss value. The parameters are iteratively optimized according to the loss values to obtain the final recognition model.

    Fig.2 Signal recognition system framework

    In the first step of the deep adaptive threshold feature fusion network, the input data are updated with dimensionality by the convolutional layer and pass through the batch normalization (BN) layer and LeakyReLU function. In the next step, the data pass through the critical components of the recognition network. The data are first processed by the deep adaptive threshold denoising network of nonlinear encoding for feature extraction, and then dimensionally restored by the deep multi-scale feature fusion network of nonlinear decoding. We use the idea of an autoencoder to construct the above two sub-networks for modulation signal identification. We set four blocks with different dimensions in the deep adaptive threshold denoising network with a nonlinear encoder structure for feature extraction at different dimensions. Noise elimination means are introduced into each block. A threshold learning network with a designed threshold function removes redundant information from the set of learned features. This enables the network to automatically identify the noise to be removed and overcome the difficulty of determining the optimal threshold manually. In the nonlinear decoding deep multi-scale feature fusion network, we set up decoding blocks corresponding to the dimensions of the encoding blocks. In each decoding block, we convolve the input features using a parallel structure of dilated convolution for multi-scale feature extraction and superposition to form fused features, and then upsample the fused features. The coding and decoding information is fused using skip connections so that the network learns both global and local information. Each decoding block is serially connected and gradually recovers the initial data dimension. The output features go through a global average pooling layer, a dropout layer, and a fully connected layer to obtain the recognition probability of each signal class.
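
    The overall data flow described above can be summarized in the following PyTorch skeleton. The encoder and decoder are left as placeholders (they are detailed in Sections 3.2 and 3.3), and the channel width, dropout rate, and class count are illustrative assumptions only.

```python
import torch.nn as nn

class DATFFNetSketch(nn.Module):
    """Rough skeleton of the described pipeline; the two key sub-networks are placeholders here."""
    def __init__(self, in_ch: int = 2, num_classes: int = 11, width: int = 32):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1),
                                  nn.BatchNorm2d(width), nn.LeakyReLU())
        self.encoder = nn.Identity()   # deep adaptive threshold denoising blocks (Section 3.2)
        self.decoder = nn.Identity()   # deep multi-scale feature fusion decoding blocks (Section 3.3)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Dropout(0.5), nn.Linear(width, num_classes))

    def forward(self, x):
        x = self.stem(x)      # convolution -> BN -> LeakyReLU
        x = self.encoder(x)   # nonlinear encoding with deep adaptive threshold noise elimination
        x = self.decoder(x)   # nonlinear decoding with multi-scale fusion and skip connections
        return self.head(x)   # global average pooling -> dropout -> fully connected -> class scores
```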

    3.2 Deep adaptive threshold denoising network

    We propose a deep adaptive threshold denoising network based on the residual network. While ensuring the effectiveness of the network, it adaptively learns the threshold value and eliminates irrelevant data features, thus playing the role of signal denoising. The deep adaptive threshold denoising network consists of four blocks of different dimensions, and each block contains a corresponding number of deep adaptive threshold denoising modules. The structure of each module is shown in Fig.3. Compared with the deep residual module, the deep adaptive threshold denoising module contains an additional sub-module that sets the threshold on the residual path. The sub-module consists of a threshold training module and a threshold function. The threshold training module sets a corresponding threshold value for each channel feature. The threshold function can adaptively eliminate noise by judging the relationship between the data and the threshold of each channel.

    Fig.3 Deep adaptive threshold denoising module

    The core of the deep adaptive threshold denoising module lies in the design of threshold noise elimination for the residual path (Fig.3). Initial feature extraction is performed using the convolutional layer, BN, and the LeakyReLU function. Global average pooling then transforms the features C×W×H into output features C×1×1 with global receptive fields, preventing overfitting and simplifying the computation when designing the subsequent noise elimination model thresholds. Here, C, W, and H form a three-dimensional tensor, where C represents the number of channels, W represents width, and H represents height. After aggregating C×W×H into the output features of C×1×1, the model is divided into two parallel structures: one considers the relationship between different channels based on the original features, and the other is designed as the threshold training network.

    The first path flattens the globally average pooled features x into a one-dimensional tensor of length C, with each data value representing a feature within the current channel. Then the weights corresponding to each channel data value in the whole feature set are calculated by iterative optimization of the BN layer, Sigmoid function, and neural network propagation process. Each weight is multiplied by the feature value in the corresponding channel to obtain a feature containing the respective importance level. Compared with the direct output of features with the same weight, this method can better fit the dependency relationship between the channels and provide more critical information for subsequent network processing.
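
    A minimal sketch of this first path, assuming the pooled features arrive as a (batch, C) tensor; the module name ChannelReweight and the exact layer order are our assumptions based on the description (BN, Sigmoid, channel-wise multiplication).

```python
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    """Weights each channel of the pooled feature by a learned importance score in (0, 1)."""
    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm1d(channels)   # learns a per-channel scale/shift of the pooled values
        self.act = nn.Sigmoid()              # squashes the scores into channel weights

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (batch, C) features from global average pooling, one value per channel
        weights = self.act(self.bn(pooled))
        return pooled * weights              # each channel feature scaled by its importance
```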

    The other path is to obtain adaptive thresholds and use the threshold function to eliminate noise. Here, x is flattened in one dimension and multiplied with the features flattened by the adaptive local channel convolution. The resulting features are decompressed. Since the channel dimension is usually an integer multiple of 2, and considering the limitations of the linear mapping relationship for feature selection (Wang QL et al., 2020), an exponential function with a base of 2 is chosen to reflect the relationship between the convolution kernel and the number of channels. The adaptive local channel convolution is

    where K is the convolution kernel size, indicating how many close neighbors participate in the calculation of the specified channel. The parameters are set to γ=2 and b=1, and the convolution kernels are related to the number of channels in the current feature. The K convolution kernels capture local cross-channel interaction information, which allows thresholds to be set for different channels by adaptive local cross-channel convolution. Each channel data value and threshold value are input into the designed threshold function for adaptive noise elimination. The conventional threshold functions are hard thresholding and soft thresholding. The hard thresholding is

    where η is the set threshold value, x denotes the input data, and x_h denotes the threshold noise elimination result. The hard threshold function is not continuous near the threshold value, causing the pseudo-Gibbs effect. Although soft thresholding improves continuity, the sign function is prone to oscillate at the discontinuity, which affects the denoising effect. In our scheme, we use the tanh function instead of the sign function. The formula of the tanh function is

    Fig.4 shows the difference between the tanh function and the sign function.

    Fig.4 Function image

    Compared with the sign function, the tanh function is smoother at the discontinuity, eliminating the effect on the denoising process of the optimization difficulty caused by the discontinuity of the sign function. In addition, with soft thresholding, data whose absolute values are greater than the threshold have a constant deviation between the denoised value and the actual value, which degrades how closely the denoised output approximates the actual data. Therefore, our designed threshold function is as follows:

    where ζ_1 and ζ_2 are the threshold results trained by adaptive noise elimination, x denotes the input data, and x_ζ denotes the output of the deep neural network based on threshold function noise elimination. The network can flexibly self-learn the threshold value corresponding to the current feature, so that essential features and redundant features learn different thresholds. Different noise elimination results are obtained by the threshold function. The adaptive noise elimination results and the channel-relationship features retained by the first path are summed as the output of the residual path. This design ensures the overall efficiency of the model.
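
    To make the two pieces of this path concrete, the sketch below combines the adaptive kernel-size rule (γ=2, b=1, in the spirit of Wang QL et al., 2020) with a tanh-based shrinkage in place of the sign function. The exact piecewise form of the designed threshold function and the way ζ_1 and ζ_2 are produced are not fully reproducible from the text, so a single learned threshold per channel and the names ThresholdTrainer and tanh_shrink are our assumptions.

```python
import math
import torch
import torch.nn as nn

def adaptive_kernel_size(channels: int, gamma: int = 2, b: int = 1) -> int:
    """K grows with log2(C) (gamma=2, b=1 as in the text) and is rounded to the nearest odd value."""
    k = int(abs(math.log2(channels) / gamma + b / gamma))
    return k if k % 2 else k + 1

class ThresholdTrainer(nn.Module):
    """Learns one noise-elimination threshold per channel via local cross-channel 1-D convolution."""
    def __init__(self, channels: int):
        super().__init__()
        k = adaptive_kernel_size(channels)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (batch, C) global-average-pooled features
        scores = torch.sigmoid(self.conv(pooled.unsqueeze(1)).squeeze(1))
        return scores * pooled.abs()        # per-channel, data-dependent thresholds

def tanh_shrink(x: torch.Tensor, zeta: torch.Tensor) -> torch.Tensor:
    """Soft-threshold-like shrinkage that replaces sign(x) with the smoother tanh(x)."""
    zeta = zeta.unsqueeze(-1).unsqueeze(-1)          # (batch, C, 1, 1): one threshold per channel
    keep = torch.tanh(x) * (x.abs() - zeta)          # shrink values whose magnitude exceeds the threshold
    return torch.where(x.abs() > zeta, keep, torch.zeros_like(x))
```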

    3.3 Deep multi-scale feature fusion network

    Our design uses a deep multi-scale feature fusion network as the decoder. The network consists of deep multi-scale feature fusion decoding blocks of different dimensions. Each decoding block corresponds to the dimension of a deep adaptive threshold denoising coding block. First, the decoding block synthesizes more discriminative features using continuous incremental multi-scale dilated convolutions on the input features. Dilated convolution is a method that increases the receptive field without adding additional computational effort (Wei YC et al., 2018). The receptive field is the size of the region where the extracted features are mapped to the input space (Rawat and Wang, 2017). An increase in the receptive field indicates a larger spatial reach to the original data. Compared with standard convolution, dilated convolution contains an extra hyperparameter, the dilated rate. Let the dilated rate be d. Then d-1 zeros are inserted between two adjacent elements of the convolution kernel, which constitutes a sparse filter:

    where n is the size of the equivalent convolutional kernel after expansion and k is the input convolutional kernel size. The output data size is o, i is the input data size, p is the padding size, and s is the step size. Compared with standard convolution, dilated convolution can obtain a denser feature response while learning fewer feature parameters. Fig.5 shows the dilated convolution parallel structure designed in this study.

    Fig.5 Parallel structure of dilated convolution (References to color refer to the online version of this figure)

    The parallel structure contains four-way dilated convolution with progressively increasing dilated rates. The light blue rectangular boxes in Fig.5 show the specific role of the dilated convolution layer in each way. In Eq.(15), assuming k is 3, we set the dilated rates of the four ways to be 1, 2, 3, and 5. The change of each red box area represents the change in the size of the individual convolution kernel, so we obtain equivalent convolution kernel sizes of 3, 5, 7, and 11, respectively. This expands the original action range of the convolution kernel and increases the receptive field. Meanwhile, the parallel incremental dilated convolution design can map features of different sizes in the input to the corresponding positions of the output features. After BN and the LeakyReLU function, the results are prepared for the next step of multi-scale fusion. To prevent the convolution kernel from degenerating into a 1×1 filter and ignoring the overall features when the dilated rate increases, the module also includes a parallel one-way global average pooling branch to restore global features. This way then goes through convolution to recover the channel dimension and upsampling to recover the size of the features. The designed five-way multi-scale parallel features are fused, and the fused features are subjected to 1×1 convolution, BN, the LeakyReLU function, and the dropout layer to obtain multi-scale fusion decoding features.
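
    A minimal PyTorch sketch of the described parallel structure, assuming 3×3 kernels, dilated rates {1, 2, 3, 5}, and a fifth global-average-pooling branch; the channel widths, dropout rate, and fusion order are illustrative. The helper at the top reproduces the equivalent-kernel-size rule implied by inserting d-1 zeros between adjacent taps, n = k + (k-1)(d-1), which gives 3, 5, 7, and 11 for these rates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def equivalent_kernel(k: int, d: int) -> int:
    """Inserting d-1 zeros between adjacent taps gives n = k + (k-1)(d-1); k=3 -> 3, 5, 7, 11."""
    return k + (k - 1) * (d - 1)

class MultiScaleDilatedBlock(nn.Module):
    """Four dilated 3x3 branches (rates 1, 2, 3, 5) plus a global-pooling branch, fused by 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 3, 5), p_drop: float = 0.5):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                          nn.BatchNorm2d(out_ch), nn.LeakyReLU())
            for r in rates])
        self.global_branch = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                           nn.Conv2d(in_ch, out_ch, 1, bias=False),
                                           nn.LeakyReLU())
        self.fuse = nn.Sequential(nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False),
                                  nn.BatchNorm2d(out_ch), nn.LeakyReLU(), nn.Dropout(p_drop))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        g = F.interpolate(self.global_branch(x), size=x.shape[-2:],
                          mode="bilinear", align_corners=False)   # restore the spatial size of the global branch
        return self.fuse(torch.cat(feats + [g], dim=1))           # five-way multi-scale fusion
```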

    After the dilated convolution parallel structure, we use the bilinear interpolation method for upsampling. Upsampling is a means of recovering data information. The four existing pixel values around the target point of the original feature map jointly determine the target point's pixel value. The core idea is to perform a linear interpolation in each of the two directions, which is computationally light and easy to implement.

    Furthermore, the coding noise reduction feature and the decoding recovery feature of the multi-scale analysis of the corresponding channel are skip-connected to obtain new features, which are then input to the next layer for continued decoding. This process fuses high-level features with low-level features to obtain global and local information and fully mine the available information.
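
    A minimal sketch of one decoding step: bilinear upsampling of the decoded feature followed by the skip connection with the corresponding encoder feature. Whether the features are concatenated or summed is not stated explicitly, so concatenation is shown as one common choice.

```python
import torch
import torch.nn.functional as F

def decode_step(decoder_feat: torch.Tensor, encoder_feat: torch.Tensor) -> torch.Tensor:
    """Bilinearly upsample the decoded feature to the encoder feature's size, then skip-connect."""
    up = F.interpolate(decoder_feat, size=encoder_feat.shape[-2:],
                       mode="bilinear", align_corners=False)
    return torch.cat([up, encoder_feat], dim=1)   # fuse high-level (decoder) and low-level (encoder) information
```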

    4 Experimental results and discussion

    We verified the effectiveness of our network experimentally using the acquired data.

    4.1 Dataset preparation

    The baseband signal generated by the source is limited by the antenna size and the channel bandwidth. The signal has a low frequency, which causes significant attenuation and distortion when transmitted directly. Therefore, various modulation methods are needed to change the baseband signal into a form suitable for transmission on the corresponding carrier frequency. The dataset was the modulated signal obtained by using a software radio platform built with USRP to transmit and receive signals in a natural environment. It serves to support the next step of proving the practicality of the deep adaptive threshold feature fusion network. The 11 modulation types in this study were DSB, SSB, FM, 8PSK, BPSK, CPFSK, GFSK, PAM4, 16QAM, 64QAM, and QPSK. Since the feature extraction recognition ability differs at different SNRs, noise was added to the modulated signal. The SNR ranged from -10 to 10 dB, increasing every 2 dB, producing signals at 11 SNRs. There were 1000 samples for each type of signal at each SNR, so the dataset contained 121 000 samples. The in-phase and quadrature matrices were transformed into a multi-dimensional matrix using wavelet decomposition, fixed threshold denoising, and wavelet reconstruction. The training and testing set data were divided according to an 8:2 ratio.
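
    A minimal sketch of the 8:2 split described above, assuming the pre-processed samples have been stacked into an array X with per-sample modulation and SNR labels; the placeholder shapes and the stratification by (class, SNR) pair are assumptions, since the paper does not state how the split balances classes and SNRs.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays with the stated sizes: 11 classes x 11 SNRs x 1000 samples = 121 000 samples
X = np.zeros((121000, 2, 128), dtype=np.float32)          # pre-processed multi-channel I/Q data (illustrative shape)
y = np.repeat(np.arange(11), 11000)                        # modulation-type label per sample
snr = np.tile(np.repeat(np.arange(-10, 12, 2), 1000), 11)  # SNR tag per sample

strata = y * 100 + (snr + 10)                              # one stratum per (class, SNR) pair
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=strata, random_state=0)  # 8:2 training/testing split
```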

    4.2 Experimental environment and parameter settings

    The experimental platform consisted of a Windows operating system, an E5-2680 v4 CPU, and an A4000 graphics card with 30.1 GB RAM and 16.9 GB video memory. Our proposed model was built and trained in the PyTorch framework, one of the powerful deep learning frameworks for Python. The cross-entropy function can indicate the degree of difference between two distributions (Kline and Berardi, 2005). The smaller the cross-entropy value, the closer the distributions of the two categories of variables; the larger the cross-entropy value, the more significant the difference between the two categories. When the plain cross-entropy function is used, simple category classification is overfitted, but complex category classification with noise is still underfitted. Therefore, it is necessary to choose a loss function suitable for handling complex category labels. We chose the symmetric cross-entropy function (Wang YS et al., 2019). We first calculated

    where Eq.(17) is the formula for the cross-entropy function, Eq.(18) is the formula for the reverse cross-entropy function, p(x) is the true distribution, and q(x) is the predicted distribution. The combination of cross-entropy and reverse cross-entropy constitutes the symmetric cross-entropy function:

    where the αl_ce term solves the problem of overfitting of the cross-entropy loss function and the βl_rce term improves robustness to noisy data and enhances the overall system performance. Further, the symmetric cross-entropy loss function is handled using label smoothing (Szegedy et al., 2016) to reduce the undesirable effects of forcibly learning the wrong category when the labels themselves have problems. An error tolerance was set for each type of modulation label:

    where ε is a small constant. Label smoothing makes the probabilistic optimization objectives of the loss function no longer 1 and 0; i.e., 1 becomes 1-ε, and 0 becomes ε/(k-1), reducing the effect of overfitting and mislabeling on classification. To minimize the value of the symmetric cross-entropy loss, the network needs a suitable optimization strategy. Three gradient descent algorithms, SGDM, Adam, and RMSProp, were selected. The experimental results were recorded for every 4 dB increase from -10 dB to choose the most suitable strategy for this scheme. The results are shown in Table 1.
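
    A minimal PyTorch sketch of the label-smoothed symmetric cross-entropy described above (Eqs. (17)-(19) combined with the smoothing rule). The values of α, β, and ε are illustrative, since they are not listed here.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, target, alpha=1.0, beta=1.0, eps=0.1):
    """l = alpha * l_ce + beta * l_rce, computed against a label-smoothed true distribution."""
    k = logits.size(1)
    q = F.softmax(logits, dim=1).clamp_min(1e-7)   # predicted distribution q(x)
    p = torch.full_like(q, eps / (k - 1)).scatter_(1, target.unsqueeze(1), 1.0 - eps)  # smoothed true p(x)
    ce = -(p * q.log()).sum(dim=1)                 # cross-entropy, Eq. (17)
    rce = -(q * p.log()).sum(dim=1)                # reverse cross-entropy, Eq. (18)
    return (alpha * ce + beta * rce).mean()
```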

    Table 1 Identification results of different optimization methods

    A better optimization strategy is obtained by using the SGDM method. SGDM is based on the SGD optimization algorithm but incorporates a first-order momentum update term. SGDM simulates the object's inertia. The descent speed is increased where the current gradient is consistent with the last gradient; in other cases, the descent speed is reduced to avoid oscillation near a local optimum. This network uses SGDM for efficient learning of the network structure. At each SNR, we used the ratio of correctly classified signals to the total number of samples as the recognition accuracy for evaluating network performance. The confusion matrix of the modulated signals identified by the network was also plotted to evaluate the classification performance. For each class of modulated signals, TP means that the model correctly predicted signals of that class, and FN means that the model incorrectly predicted them as other classes. Thus, the prediction accuracy for each signal class is defined as
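
    A short sketch of the chosen optimizer and the evaluation metric. The learning rate and momentum are assumptions (the text only states that SGDM was selected), the stand-in model is a placeholder, and the helper computes the per-class rate TP/(TP + FN) used in the confusion-matrix discussion.

```python
import torch

model = torch.nn.Linear(2, 11)   # stand-in for the DATFF network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)   # SGD with first-order momentum (SGDM)

def per_class_accuracy(pred: torch.Tensor, labels: torch.Tensor, num_classes: int = 11) -> torch.Tensor:
    """TP / (TP + FN) for each modulation class, i.e. the per-class recognition rate."""
    acc = torch.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        acc[c] = (pred[mask] == c).float().mean() if mask.any() else float("nan")
    return acc
```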

    4.3 Network recognition results and analysis

    Training was carried out for 50 epochs with a batch size of 16.

    4.3.1 Effect of network depth on experimental results

    Under the network structure designed in this study, the number of deep adaptive threshold denoising modules in each coding block was changed to alter the number of overall network layers, to explore the influence of network depth on the experimental results. The number of deep adaptive threshold denoising modules was increased one by one until optimal network architecture performance was obtained. The experimental networks included network A with 4 deep adaptive threshold denoising modules, distributed from coding block 1 to coding block 4 as [1, 1, 1, 1]; network B with 5 modules distributed as [1, 1, 1, 2]; network C with 6 modules distributed as [1, 2, 1, 2]; network D with 7 modules distributed as [1, 2, 2, 2]; network E with 8 modules distributed as [2, 2, 2, 2]; and network F with 9 modules distributed as [2, 2, 2, 3]. Fig.6 shows the experimental results of the six constructed depth networks at low SNRs of [-10, -2] dB.

    Fig.6 Experimental results at different network depths

    From the experimental results, when the number of deep adaptive threshold denoising modules was between 4 and 8, the recognition accuracy of the network at each SNR increased with the number of modules. This proved that as the depth of the network increases, the network learns richer feature information, expresses the features more strongly, and improves recognition results. When the number of modules increased from 8 to 9, the recognition accuracy of the network decreased at some SNRs. The recognition accuracy was 59.50%, 72.73%, and 94.14% at -10 dB, -6 dB, and -2 dB with 8 modules, respectively, and decreased to 58.32%, 71.18%, and 93.41%, respectively, when the number of modules increased to 9. The reasons are as follows. First, the dataset in this study consisted of signal data, which do not require large-scale complex image feature recognition. Therefore, the recognition accuracy can easily reach saturation as the number of network layers rises. Second, the module parallelizes part of the hidden layer structure when the residual path is designed, accelerating the increase of the number of network layers. When the depth reaches the boundary value, increasing the depth further will gradually lose some shallow effective information and cause a decrease in accuracy. Additionally, the number of parameters of the network with 8 modules was 18 750 859, while that of the network with 9 modules was 23 472 532. The increase in the number of parameters increases the training time. In this study, we combined the results of classification accuracy and model complexity, and selected network E containing 8 deep adaptive threshold modules, with the numbers of modules from coding block 1 to coding block 4 distributed as [2, 2, 2, 2], for the experiments.

    4.3.2 Recognition results of feature fusion networks with different dilated rates

    We tried to set different combinations of dilated rates for the parallel structure of dilated convolution in the decoding block. In the four-way parallel dilated convolution, we set the dilated rate to increase way by way. We chose the structures with four-way dilated rates of {1, 2, 3, 5}, {2, 4, 6, 8}, and {1, 7, 9, 13} for the comparison experiment, to select the most suitable combination of dilated rates at low SNRs. The results are shown in Fig.7.

    Fig.7 Identification results at different dilated rates

    The results showed that using the structure with the dilated rate combination of {1, 2, 3, 5} was better than using the two other structures, because the dilated rate directly determines the size of the receptive field. A combination with proportionally increasing dilated rates like {2, 4, 6, 8} loses the continuity of image information and forms a gridding effect. When a convolutional combination with large dilated rates like {1, 7, 9, 13} is used to process high-level information, the large convolution makes the input sampling sparse, resulting in local information loss. Therefore, the four-way structure with dilated rates of {1, 2, 3, 5} was chosen for the network.

    4.3.3 Identification results of the deep adaptive threshold denoising network based on multi-scale analysis

    In this study, we set up a network with 8 deep residual modules, with the numbers of modules from coding block 1 to coding block 4 distributed as [2, 2, 2, 2], as the underlying framework network. For experimentation, we chose the underlying residual framework network, the deep adaptive threshold denoising network, the deep feature fusion network, and the deep adaptive threshold feature fusion network. The results shown in Fig.8 were used to verify whether the network designed in this paper improves recognition accuracy.

    Fig.8 Results of the role of each network

    The recognition accuracy of the designed deep self-learning threshold module was higher than that of the underlying residual framework. In particular, the feasibility of the threshold learning structure for redundant feature processing was well illustrated in the low SNR stage from -10 to -2 dB. The recognition effect of the deep feature fusion network with the addition of multi-scale analysis decoding was also better than that of the underlying residual framework. This indicates that the multi-scale incremental dilated convolutions in our design achieve integration and interaction between the extracted features. The recognition results of the combined codec network outperformed the results of the above three networks, indicating that the network with the skip connection codec structure fully combines contextual data information.

    4.3.4 Recognition accuracy comparison

    The signal data were fed into the different networks under the same data pre-processing conditions for comparison with our network (Fig.9).

    Fig.9 Different network modulation identification results

    Fig.11 The 0 dB confusion matrix

    Fig.12 The 10 dB confusion matrix

    As SNR increased, the recognition accuracy of the five kinds of networks also increased. When SNR was lower than 0 dB, recognition rates changed significantly with the increase of SNR. When SNR was higher than 0 dB, the recognition rates increased slowly with the increase of SNR, and the final recognition rates tended to be stable. Over the whole SNR range, the recognition accuracy of DATFFNet was higher than the accuracy of the other modulation classification networks. The recognition rate of DATFFNet reached 94.14% at -2 dB, which clearly demonstrates its superiority. We compared WTNet, FCSTNet, and DATFFNet. The recognition results obtained using the depth-based adaptive thresholding noise elimination method outperformed those of the traditional signal noise elimination method. In the low SNR stage, DATFFNet showed an accuracy improvement of 3.27%–7.45% compared with the traditional threshold noise elimination method, which shows the superiority of deep self-learning. Meanwhile, the noise cancellation effect of our thresholding module was better than that of using the fully connected layer combined with soft thresholding learning. In the low SNR stage, our network had an accuracy improvement of 1.05%–4%. The denoising method, which adaptively selects K channels, can effectively filter the irrelevant information while considering the direct correspondence between the channel and the weight to capture the most significant features of the signal. The overall recognition accuracy was higher, and the effect was better. We compared GoogLeNet (Szegedy et al., 2015), DenseNet (Huang G et al., 2017), and DATFFNet. The recognition results of our method were better than those of GoogLeNet with multi-scale aggregation in the low SNR stage, with an accuracy improvement of 7.27%–11.82%. This indicates the advantage of multi-scale information fusion and superposition in our design. In addition, the recognition results of our network were better than those of DenseNet with cross-layer connectivity. In the low SNR stage, the recognition accuracy of DATFFNet was significantly improved, which indicates the feasibility of cross-layer connection.

    A visual analysis of the confusion matrices was carried out. Figs.10–12 show the classification results of the confusion matrix of the deep adaptive threshold denoising network based on multi-scale analysis when SNR was -10, 0, and 10 dB, respectively.

    The horizontal axis is the category predicted by the network, and the vertical axis is the actual category. The numbers in the table represent the probability that, for the actual type corresponding to the vertical coordinate, the network predicts this type of signal as the corresponding type on the horizontal coordinate. At -10 dB, the recognition rates of most types of signals were above 60%, and the network model could roughly distinguish the various types of signals. The recognition rates of the 8PSK, 16QAM, and 64QAM modulations were low, being 51.10%, 41.88%, and 44.50%, respectively. At this lower SNR, the characteristics distinguishing these three types of signals from the other modulation types were not obvious, the similarity between the signals was large, and the probability of extracting ideal features was low, so the recognition rate was low. At 0 dB, the signal types, except 8PSK and 64QAM, were only slightly confused, and recognition rates were higher than 95%, which proves that the network can distinguish these types well. 8PSK had a 17.03% probability of being misjudged as 64QAM, and 64QAM had a 10.05% probability of being misjudged as 8PSK. In the results shown in Figs.10 and 11, a misjudgment always occurred between 8PSK and 64QAM. The reasons are as follows. First, in the process of learning features, the network is selective and easily loses part of the information, resulting in misjudgment between signals. Observing the recognition results at -10 dB and 0 dB, the recognition rates of 8PSK and 64QAM were lower than those of most other types, which suggests that the features learned by this network caused 8PSK and 64QAM to be easily misjudged as other types. Second, when collecting data, the environmental noise seriously pollutes the 8PSK and 64QAM signals, and parameters such as the phase and frequency of the signals are damaged, making it difficult to distinguish these two types. Hence, 8PSK and 64QAM are always confused. At 10 dB SNR, a clear diagonal in the confusion matrix was achieved, with a 100% modulation recognition rate for all modulation classes. From the three confusion matrix figures, the values on the main diagonal for the same type of modulation increased as SNR increased. This shows that the recognition rates of all kinds of signals increase with SNR, and the network recognition effect is gradually enhanced.

    To further evaluate the performance of the algorithm, the RadioML2018.01A dataset (O'Shea et al., 2018) generated by GNU Radio was selected to test the algorithm. This dataset considers the effects of carrier frequency offset, symbol rate offset, delay time, and additive thermal noise on the signal in compromised environments. We selected 11 types of modulation signals, including 4ASK, AM-DSB-SC, AM-SSB-SC, BPSK, FM, GMSK, OOK, OQPSK, 8PSK, 16QAM, and QPSK. The different algorithms were tested on the [-10, -2] dB segment, and the results are shown in Fig.13.

    Fig.13 Recognition results of RadioML2018.01A

    In impaired environments, the recognition accuracy of DATFFNet could reach 78.45% at -2 dB. The results of the algorithm used in our network were still better than those of the four other networks at low SNR, with an improvement of 0.32%–11.59%. This further proves that the designed network is suitable for noise threshold self-learning and multi-scale fusion analysis.

    4.3.5 Model complexity of deep adaptive threshold denoising network based on multi-scale analysis

    Model complexity is related to the computational resources used by the network. We used 1×1 convolution and adaptive grouping convolution to reduce the number of parameters. Further, we analyzed the experimental results from using different convolutional architectures in the encoding and decoding stages. Table 2 compares, at low SNR, the numbers of parameters and recognition accuracies of the network using the underlying convolutional architecture of 1×n+n×1 in the encoding stage, the network using the output equivalent features of n×n without expansion coefficients in the decoding stage, and our convolutional combination network.

    Table 2 Numbers of parameters and recognition results of different convolutional architectures

    Although the underlying architecture design of 1×n+n×1 reduced the number of parameters of the network, its recognition accuracy was lower than that of our network. In multi-scale analysis, the training cost of using the convolutional network of n×n with no expansion was too large, and the recognition accuracy was not significantly improved. Therefore, the convolutional architecture of our proposed network not only had better recognition results, but also had fewer parameters and higher model efficiency.

    5 Conclusions

    In this paper, we proposed a deep adaptive threshold noise elimination network based on multi-scale analysis, called the DATFF network. First, unlike studies using software simulation signals, we used USRP to build a software radio platform to transmit and receive actual signals and produce signal datasets. Second, we designed a coding network for deep adaptive threshold noise elimination to select the optimal threshold value in the denoising pre-processing stage. Meanwhile, we designed a deep multi-scale feature fusion decoding network and connected the coded and decoded features with skip connections. We conducted many comparative experiments on the collected datasets to demonstrate that our algorithm effectively combines multi-scale information while eliminating noise from redundant signal features, and has high recognition accuracy. In future work, we will focus on optimizing our network to achieve real-time classification using lightweight techniques while guaranteeing accuracy. We will also consider designing multi-path deep neural networks to implement joint multi-task processing containing the automatic modulation recognition task.

    Contributors

    Xiang LI,Yibing LI,and Chunrui TANG designed the study.Xiang LI processed the data and drafted the paper.Yibing LI organized the paper.Chunrui TANG and Yingsong LI revised and finalized the paper.

    Compliance with ethics guidelines

    Xiang LI,Yibing LI,Chunrui TANG,and Yingsong LI declare that they have no conflict of interest.

    Data availability

    Due to the nature of this research,participants of this study did not agree for their data to be shared publicly,so supporting data are not available.
