
    Fault diagnosis of bearings based on deep separable convolutional neural network and spatial dropout

    2022-11-13 07:29:52
    CHINESE JOURNAL OF AERONAUTICS, 2022, Issue 10

    Jiqing ZHANG, Xingwei KONG b,c,*, Xueyi LI, Zhiyong HU b,c, Liu CHENG, Mingzhu YU

    a School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China

    b Key Laboratory of Vibration and Control of Aero-Propulsion System, Ministry of Education, Northeastern University,Shenyang 110819, China

    c Liaoning Province Key Laboratory of Multidisciplinary Design Optimization of Complex Equipment, Northeastern University, Shenyang 110819, China

    d Angang Steel Company Limited, Anshan 114021, China

    KEYWORDS Batch normalization; Convolutional neural network; Fault diagnosis; Similarity pruning; Spatial dropout

    Abstract Bearing pitting, one of the common faults in mechanical systems, is a research hotspot in both academia and industry. Traditional fault diagnosis methods for bearings rely on manual experience and therefore have low diagnostic efficiency. This study proposes a novel bearing fault diagnosis method based on deep separable convolution and spatial dropout regularization. Deep separable convolution extracts features from the raw bearing vibration signals, during which a 3 × 1 convolutional kernel with a step size of one selects effective features by adjusting its weights. The similarity pruning of the channel convolution and the weight pruning of the point convolution reduce the number of parameters and the calculation quantity by evaluating the magnitude of the weights and removing the feature maps with smaller weights. The spatial dropout regularization method focuses on bearing signal fault features, improving the independence between the bearing signal features and enhancing the robustness of the model. A batch normalization algorithm is added to the convolutional layer to control gradient explosion and improve network stability. To validate the effectiveness of the proposed method, we collect raw vibration signals from bearings in eight different health states. The experimental results show that the proposed method can effectively distinguish different pitting faults in the bearings, with better accuracy than other typical deep learning methods.

    1. Introduction

    Bearings are widely used in transmission mechanisms, actuators, and other mechanical equipment, playing a vital role in aerospace, transportation, industrial manufacturing, and other fields.1 Therefore, condition monitoring and timely health management of bearings are critical in reducing maintenance costs and improving operational safety.2 However, the development of modern machinery and the growing tendency of equipment toward intelligence, efficiency, and precision pose significant challenges for traditional methods in accurate machinery diagnosis under complex coupled conditions.3 With the deepening of research on fault diagnosis technology for bearings and other rotating machinery, a variety of fault diagnosis methods have emerged, such as the noise analysis method,4 acoustic emission analysis method,5,6 oil analysis method,7 fuzzy diagnosis analysis method,8 and vibration analysis method.9,10 The literature shows that vibration analysis can accurately determine the type and location of a fault and is therefore widely used in practical engineering applications. Diagnosis technology for bearings and other rotating machinery based on vibration signal analysis usually involves three steps: acquisition of vibration signals, extraction of fault feature information, and fault type identification.11 Among these steps, the acquisition of vibration signals is the basis of fault diagnosis, while the accurate and effective extraction of fault feature information from the vibration signals plays a decisive role in obtaining correct diagnosis results. According to the analysis domain, diagnosis techniques based on vibration analysis mainly involve time-domain analysis, frequency-domain analysis, and time-frequency analysis.12 Time-domain analysis judges the running state of bearings and other rotating machinery according to dimensionless indexes such as the mean value, variance, maximum value, minimum value, margin, and peak value.
    The time-domain synchronous average,13 adaptive noise elimination,14 autocorrelation analysis,15 and other methods are also included. Time-domain analysis is more effective in early fault detection, but less effective for compound faults of bearings and other rotating machinery. Compared with time-domain analysis, frequency-domain analysis reflects changes in fault signal features more intuitively and is therefore widely used. Frequency-domain analysis obtains the frequency information in a signal through the Fourier transform. Currently, commonly used frequency-domain analysis techniques include the power spectrum,16 cepstrum,17 and demodulation spectrum.18 Frequency-domain analysis is effective for stationary signals but not for non-stationary and nonlinear signals. Processing non-stationary and nonlinear signals requires time-frequency techniques, such as empirical mode decomposition,19,20 variational mode decomposition,21,22 wavelet analysis,23,24 nonlinear mode decomposition,25 and local mean value decomposition.26,27
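Since the Fourier transform underpins these frequency-domain techniques, a minimal power-spectrum sketch may help; this is illustrative only (the sampling rate matches the 19.2 kHz used in Section 3, but the 157 Hz fault tone and noise level are invented):

```python
import numpy as np

np.random.seed(0)

fs = 19200                         # sampling rate, Hz (as in Section 3)
t = np.arange(0, 1.0, 1.0 / fs)
# synthetic "vibration" signal: a hypothetical 157 Hz fault tone buried in noise
x = np.sin(2 * np.pi * 157 * t) + 0.5 * np.random.randn(t.size)

# one-sided power spectrum via the FFT
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
power = np.abs(X) ** 2 / x.size

peak_freq = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
```

With one second of data the frequency resolution is 1 Hz, so the spectral peak lands on the fault tone.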

    Many scholars have attempted to use time-frequency analysis techniques to extract the feature information contained in fault signals. Liu et al.28 combined the short-time Fourier transform with a sparse autoencoder to analyze and diagnose the fault acoustic signals of rolling bearings. Lopez-Ramirez et al.29 used the short-time Fourier transform to analyze the fault signals of induction motor bearings and effectively identified them from different spectral components. Chen et al.30 proposed a linear Frequency Modulation (FM) Wigner-Ville distribution based on the Wigner-Ville distribution, combining polynomial and spline functions to extract the instantaneous frequency components of the signal. Chen et al.31 analyzed the internal production process of the wavelet transform and performed fault analysis on non-smooth signals of rotating machinery. While these methods are useful in fault diagnosis for rotating machinery, problems remain. The first is that these methods require manual feature extraction and a priori knowledge of signal processing techniques. Another is that the features are extracted under specific diagnostic conditions and may not apply to other conditions.

    Rapid development of increasingly efficient chips enables wide application of deep learning and artificial intelligence in fault diagnosis. Most current fault diagnosis approaches use pre-extracted features as input to a neural network. Yuan and Tian32 used a GRU neural network for sequential data diagnosis in dynamic processes. Chen and Zhao33 used the wavelet transform to extract signal features on different channels as input to a neural network for bearing fault classification. Huang34 extracted the main fault information components using singular value decomposition and empirical mode decomposition, and then extracted the fault feature parameters from the selected eigenmode function components as input to a neural network for bearing fault diagnosis. Zhang et al.35 extracted bearing vibration signal features with an autoregressive model and implemented fault identification by optimizing artificial neural networks. Wang et al.36 directly used the Hilbert envelope spectrum of the sampled bearing signal as the feature vector, and then used a deep belief network to classify the feature vector to complete the bearing fault diagnosis. Khajavi and Keshtan37 used the discrete wavelet transform for signal pre-processing, classifying bearing faults through a neural network using the normalized discrete wavelet coefficients as input. Yu et al.38 employed a bidirectional long short-term memory network to diagnose early faults in gears. The above methods are constrained by the need to preprocess features, ignoring the powerful nonlinear fitting capability of neural networks, which can inherently extract features and perform classification. This research uses deep separable convolutional neural networks, batch normalization, and spatial dropout regularization to improve the performance of deep models.

    We first establish a deep separable convolutional neural network to reduce the number of parameters and the computation quantity, adopting layer-by-layer convolution and point-by-point convolution. A spatial dropout layer is then built to diversify the learned features and prevent overfitting. Finally, a batch normalization layer is constructed to standardize the specified feature axis: the input values of each layer are normalized to a normal distribution with a mean of 0 and a variance of 1. Experimental results show that this method is more effective than a single deep model and other deep learning methods. It has the following significant advantages in bearing fault diagnosis:

    (1) Compared with the traditional one-dimensional convolutional neural network, this method can reduce the model parameters by about 60% while maintaining good diagnostic performance.

    (2) The model is trained directly on the raw vibration signals without time-frequency conversion, reducing the repetitive manual workload.

    (3) It can solve the problem of update coordination between networks, accelerate training and convergence speed,and effectively prevent overfitting.

    (4) It uses a 3 × 1 small-scale convolutional kernel with a step size of one, automatically adjusting the weights and selecting effective features during training.

    The rest of this article is organized as follows. Section 2 describes the approach proposed in this study, introducing deep separable convolution, batch normalization, and spatial dropout regularization, and emphasizing the difference between traditional convolution and deep separable convolution. In Section 3, the effectiveness of the proposed method is validated by an experiment, and the experimental results are analyzed. Section 4 presents the results and discussion. Section 5 presents the conclusions.

    2. Methodology

    The raw vibration signals are collected from the rolling bearings, passed through the separable convolution layers, batch normalization layers, spatial dropout layers, and pooling layers, and then input to the fully connected layer through global maximum pooling.

    The softmax classifier is used to diagnose the health states of the bearings. A 3 × 1 convolutional kernel with a step size of one is chosen to automatically adjust the weights and select effective features in the stacked one-dimensional separable convolutional layers. The deep separable convolution includes two pruning processes: layer-by-layer convolution and point-by-point convolution. In the layer-by-layer convolution process, the similarity of the filters is measured by the KL divergence, and similar filters are cropped out to reduce redundancy. In the point-by-point convolution process, the importance of each input feature map in the linear combination is assessed by its weight; small weights and their associated feature channels are clipped out. The number of network parameters is thus reduced by the above two convolution processes. In addition to achieving downsampling, we apply the spatial dropout strategy to the convolutional layer, focusing on the bearing fault signal features. Spatial dropout helps to improve the independence between the features and reduce the risk of network overfitting, enhancing the robustness of the whole model. Moreover, a batch normalization algorithm is added to the convolutional layer to solve the update coordination problem among multi-layer networks and to ensure the consistency of the data distribution.

    2.1. Deep separable convolution

    Deep separable convolution consists of layer-by-layer convolution and point-by-point convolution, reducing both the computation and the number of parameters. W and H are the channel width and length, respectively; S is the number of channels; W × H × S is the input shape. Suppose that a traditional 2D convolution has K 3 × 3 convolution kernels and the stride is set to one; then the calculation quantity of the traditional convolution is W × H × S × K × 3 × 3. Σ is the summation symbol over the output feature map, and b is the bias constant. The separable 2D convolution is divided into one group per channel, yielding K convolution feature maps; we then carry out N 1 × 1 convolutions and obtain N point-by-point convolution feature maps. Fig. 2 presents the process of the deep separable convolution, and Fig. 3 presents the traditional convolution process.

    where K_L is the layer-by-layer convolution kernel, and N_P the point-by-point convolution kernel.

    Assume that the size of the input feature graph is H × W, the number of channels is S, the convolution kernel size is D_f × D_f, and the number of convolution kernels is K. Then the calculation quantities of the traditional convolution and the deep separable convolution are H × W × S × K × D_f × D_f and H × W × S × D_f × D_f + H × W × S × K, respectively.

    It can be seen that the computation reduction of the deep separable convolution is related to the number of output channels K and the size of the convolution kernel D_f × D_f; the ratio of the two quantities is 1/K + 1/D_f². In practice, a 3 × 3 convolution kernel is generally used for deep separable convolution. If the number of output channels is 64, then by Eq. (5) the calculation amount of the deep separable convolution is only about 0.126 times that of the traditional convolution.
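The two calculation quantities can be checked numerically. The helper names below are ours and the layer sizes are arbitrary; the symbols follow Section 2.1:

```python
# H, W: feature-map size; S: input channels; K: output channels; Df: kernel size.
def conv_cost(H, W, S, K, Df):
    """Multiplications of a traditional convolution layer."""
    return H * W * S * K * Df * Df

def separable_cost(H, W, S, K, Df):
    """Layer-by-layer (depthwise) plus point-by-point (1 x 1) multiplications."""
    return H * W * S * Df * Df + H * W * S * K

H, W, S, K, Df = 64, 64, 32, 64, 3   # arbitrary layer sizes
ratio = separable_cost(H, W, S, K, Df) / conv_cost(H, W, S, K, Df)
# the ratio simplifies to 1/K + 1/Df**2, about 0.127 here,
# in line with the ~0.126 reported for K = 64 and Df = 3
```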

    2.1.1. Layer-by-layer convolution and similarity pruning

    In layer-by-layer convolution, the reduced number of output channel feature graphs K leads to a reduction in parameter calculation. Each input channel feature graph is convolved with its own filter, which is also called depthwise convolution. In this process, convolution computation occurs only within each channel; information between channels is not fused, and spatial features are mainly extracted.

    During layer-by-layer convolution, similar filters can complement each other. After one of them is removed, the remaining similar filters can still maintain the network accuracy during retraining, thus reducing the complexity of the network. We use the KL divergence method to measure the similarity of the filters.39 Since the KL divergence between two filters is asymmetric, both directions are considered when assessing their distance. The calculation is

    D(P||Q) = Σ_i p(x_i) lg( p(x_i) / q(x_i) )

    where P is the original probability distribution, Q denotes the approximate probability distribution, P||Q represents the information loss generated when the approximate distribution Q is used to fit the original distribution P, and p(x_i) and q(x_i) represent the weight ratios at the corresponding positions of the two filters, respectively.

    When similar filters are deleted, the filter with the higher entropy value is retained. Each candidate filter is evaluated by its entropy, and the filter with the lower entropy is deleted to ensure the richness of the reserved filters. The information entropy is defined as

    H(X) = −Σ_i p(x_i) lg p(x_i)

    where p(x_i) is the probability of event X = x_i, and lg(p(x_i)) the information of event X = x_i. X is a discrete random variable whose values are x_0, x_1, ..., x_n.

    The similarity pruning of the feature map algorithm in layer-by-layer convolution is conducted as follows.

    Input: The filters in layer-by-layer convolution and the pruning hyperparameters.

    Output: The removed filters.

    Step 1. Normalize filters of the current layer.

    Step 2. Calculate the KL divergence between each pair of filters and sort the pairs from the smallest to the largest distance.

    Step 3. Take the pruning hyperparameter as a threshold value; filter pairs within it are regarded as similar and marked to be pruned.

    Step 4. Calculate the entropy of the filters to be pruned and remove the filters with the smaller entropy.
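The four pruning steps can be sketched in NumPy as follows. This is an illustrative reconstruction, not the authors' implementation: the natural logarithm, the pairing strategy, and the threshold semantics are our assumptions.

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL divergence D(P||Q) between two normalized weight distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def entropy(p, eps=1e-12):
    """Information entropy of a normalized filter weight distribution."""
    p = p + eps
    return float(-np.sum(p * np.log(p)))

def similarity_prune(filters, threshold):
    """Steps 1-4: normalize, pair filters by KL distance, then drop the
    lower-entropy filter of each similar pair. `filters` has shape (n, k)."""
    # Step 1: normalize each filter's absolute weights into a distribution
    probs = np.abs(filters) / np.abs(filters).sum(axis=1, keepdims=True)
    n = len(probs)
    # Step 2: KL distance for every filter pair, sorted smallest first
    pairs = sorted((kl_div(probs[i], probs[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    removed = set()
    for dist, i, j in pairs:
        if dist > threshold:   # Step 3: only pairs within the threshold are similar
            break
        if i in removed or j in removed:
            continue
        # Step 4: drop the filter of the pair with the smaller entropy
        removed.add(i if entropy(probs[i]) < entropy(probs[j]) else j)
    return sorted(removed)

# filters 0 and 1 are nearly identical, filter 2 is clearly different
filters = np.array([[1.0, 2.0, 3.0],
                    [1.0, 2.0, 3.01],
                    [5.0, 1.0, 1.0]])
removed = similarity_prune(filters, threshold=0.01)
```

Here exactly one of the two near-duplicate filters is pruned, while the dissimilar filter survives.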

    2.1.2. Point-by-point convolution and weight pruning

    In point-by-point convolution, the K feature graphs output by the layer-by-layer convolution are convolved with N 1 × 1 filters. Since there are N 1 × 1 convolution kernels in the point-by-point convolution, N channel feature graphs are finally output. In this process, feature fusion of the input feature maps at the channel level is realized through point-by-point convolution, also known as point-wise convolution.

    In the deep separable convolution, the computation is mainly concentrated in the point-by-point convolution; therefore, deleting the feature graphs with small weight values to reduce the calculation and the corresponding parameters is considered.

    where K_1, K_2, ..., K_N represent the 1 × 1 filters, and F_1, F_2, ..., F_K the K input feature graphs.

    The output of the point-by-point convolution can be regarded as a linear combination of the input feature graphs. Therefore, if the weight value of a certain channel in the convolution is minimal, deleting that unimportant feature graph and its weight can be considered. The pruning process is shown in Fig. 4.
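A minimal sketch of this weight-based channel pruning follows. It is our own illustration, not the paper's code; the keep ratio and the summed-|weight| importance measure are assumptions:

```python
import numpy as np

def prune_pointwise(weights, keep_ratio=0.5):
    """Drop the input channels whose point-by-point (1 x 1) weights contribute
    least to the linear combination. `weights` has shape (N, K): N output
    channels, K input feature maps. Importance of input channel k is taken
    here as its summed |weight| over all output channels."""
    importance = np.abs(weights).sum(axis=0)
    n_keep = max(1, int(round(keep_ratio * weights.shape[1])))
    kept = np.sort(np.argsort(importance)[::-1][:n_keep])   # indices of surviving channels
    return kept, weights[:, kept]

# channels 1 and 3 carry tiny weights, so they are clipped out
weights = np.array([[0.9, 0.01, 0.5, 0.02],
                    [0.8, 0.02, 0.6, 0.01]])
kept, pruned = prune_pointwise(weights, keep_ratio=0.5)
```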

    In layer-by-layer convolution, similar filters complement each other: once one of them is removed, the remaining filters can compensate during the retraining process to maintain network accuracy. Before model training, the weights are initialized randomly, and then forward propagation, loss calculation, and back propagation are conducted. The weights are updated by the stochastic gradient descent method until the loss function converges to its minimum.

    2.2. Batch normalization

    Batch normalization is a layer used after a convolutional layer or a densely connected layer to standardize the input data, receiving a parameter specifying the corresponding feature axis. Batch normalization is adaptive to parameter initialization. At each layer it transforms the inputs to a normal distribution with a mean of 0 and a variance of 1. The normalized values then fall in the sensitive region of each layer's activation function, which solves the problems of update coordination and data distribution shift between the network layers. The primary functions of batch normalization are therefore to accelerate network training and convergence, control gradient explosion, prevent overfitting, and improve network stability. The batch normalization algorithm is divided into the following four steps.

    Step 1. Calculate the mean value of each training batch.

    Step 2. Calculate the variance of each training batch.

    Step 3. Normalize the training data of the batch with the obtained mean value and variance, obtaining a distribution with a mean of 0 and a variance of 1.

    Step 4. Conduct scale transformation and shifting for the normalized training data.

    The specific algorithm of batch normalization is as follows:

    μ = (1/m) Σ_i x_i
    σ² = (1/m) Σ_i (x_i − μ)²
    x̂_i = (x_i − μ) / √(σ² + ε)
    y_i = γ x̂_i + β

    where ε is a tiny positive number used to avoid division by zero, γ is the scale factor, and β the shift factor.
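The four steps map directly to a few lines of NumPy. This is a generic sketch of batch normalization, not the paper's code; the batch shape and the γ, β, ε values are placeholders:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """The four batch-normalization steps; x has shape (batch, features),
    normalization is per feature axis."""
    mu = x.mean(axis=0)                       # Step 1: batch mean
    var = x.var(axis=0)                       # Step 2: batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # Step 3: normalize to ~N(0, 1)
    return gamma * x_hat + beta               # Step 4: scale and shift

np.random.seed(0)
x = np.random.randn(256, 8) * 5.0 + 3.0       # placeholder batch, arbitrary scale/offset
y = batch_norm(x)
# each feature of y now has mean ~0 and variance ~1
```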

    Fig. 5 displays the convolutional neural network after batch normalization. The batch normalization layer is added after each hidden layer, before the nonlinear activation function and after the activation input x is obtained.

    With batch normalization, the model error on the training set shows a significant decreasing trend, and the Root Mean Square (RMS) error tends toward a constant as the training period increases.39 Conversely, without batch normalization there is no significant decrease in the model RMS error, and the model shows no convergence or fitting trend. On the testing set, the RMS error of the model with batch normalization decreases significantly as the training period increases, indicating that the predictive performance and the generalization ability of the network gradually improve. In contrast, the RMS error of the model without batch normalization shows no decreasing trend on the testing set, and the model performance does not improve with more training cycles; such a model has no substantial prediction performance.

    2.3. Spatial dropout algorithm

    The spatial dropout strategy is an improvement on the dropout strategy. Traditional dropout randomly sets some elements to zero, which means that the neural network may set a different subset of elements to zero in the next training step. Usually, adjacent elements are strongly correlated, and randomly setting some of them to zero reduces the learning ability of the network. Spatial dropout instead randomly sets all elements of a specific feature dimension to zero, which avoids the problem caused by correlated adjacent elements and enhances the robustness of the model. Fig. 6 presents the principle diagrams of dropout and spatial dropout.

    Fig. 7 shows the process of a one-dimensional convolution layer using the dropout strategy and the spatial dropout strategy. The tops of the first and second lines represent the convolution kernels of pixel features 1 and 2. The bottom layer before the third line represents the output, and the gray squares in the figure stand for the discarded elements. In backpropagation, the transformation of convolution kernel W_2 is conducted according to the features F_2, and e is the error function.

    Fig. 7(a) shows the use of the conventional dropout algorithm, where f_1b is the discarded neuron and f_1a is the retained neuron. Since F_2 and F_1 are the outputs of a convolutional layer, f_1a and f_1b have a strong correlation: f_1a ≈ f_1b, or de/df_1a ≈ de/df_1b. In Fig. 7(a), although the contribution gradient of f_1b is 0, there is a strong correlation between f_1b and f_1a, and the learning efficiency is not improved.

    Fig. 7(b) represents the spatial dropout algorithm. In this method, for a convolution feature tensor with a size of n × length × width (where n is the number of features), the dropout operation is performed only n times, as if a certain channel were randomly discarded as a whole. After spatial dropout, the adjacent features in a channel are either all 0 or all activated.

    Experiments show that the improved spatial dropout method is more effective in training.40
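The contrast between the two strategies can be sketched as follows (our illustration; shapes and the dropout rate are arbitrary). Conventional dropout zeroes individual elements, while spatial dropout makes one Bernoulli draw per channel:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate):
    """Conventional dropout: zeroes individual elements independently."""
    return x * (rng.random(x.shape) >= rate)

def spatial_dropout_1d(x, rate):
    """Spatial dropout: one draw per channel, broadcast over the length
    axis, so correlated neighbouring elements drop out together.
    x has shape (length, channels)."""
    return x * (rng.random((1, x.shape[1])) >= rate)

x = np.ones((100, 16))                 # arbitrary feature map
y = spatial_dropout_1d(x, rate=0.5)
# every surviving channel stays fully active; every dropped channel is all zero
```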

    3. Fault simulation experiment on rolling bearing

    To validate the effectiveness of the proposed method, we design an experiment at a constant speed of 1800 r/min. The raw vibration signals of the bearing are obtained by an acceleration sensor of type PCB 352C65, with a sensitivity of 103.0 mV/g, an output bias of 10.9 VDC, a transverse sensitivity of 0.8%, a resonant frequency of 58.3 kHz, and a discharge time constant of 1.6 s.

    Then we apply the separable convolution to the raw vibration signals to classify the faults of the normal bearing, the outer ring, and the inner ring of the bearing. The sampling frequency is set to 19.2 kHz, and the sampling time for each state is 110 s. The acceleration sensor is mounted on the housing of the bearing seat. The testbed is shown in Fig. 8.

    The test device consists of a motor frequency conversion controller, a motor, a coupling, a bearing seat, a rotor, and other parts. Eight health conditions of bearings are designed in this experiment, including 1 normal condition, 3 inner ring faults, and 4 outer ring faults. N205EM and NU205EM bearings are used to simulate the faults of the outer rings and inner rings, respectively. Fig. 9 shows the eight groups of bearing health conditions. The main parameters of the cylindrical roller bearings are shown in Table 1.

    The raw vibration signals of the bearings are displayed in Fig. 10. Contact with defects on the inner and outer ring raceways causes an impact effect, at which moment the signal amplitude increases abruptly. When no defect contact occurs, the signal is relatively smooth, which leads to similarity among the time-domain signals.

    As shown in Fig. 10, conditions (1), (3), and (6) have distinct peaks, different from the other five signals. The fault impact points can be seen in conditions (6) and (8), while conditions (7) and (8) have both similarities and differences. It is almost impossible to distinguish the faults from the bearing vibration signals with the naked eye. Therefore, we build a convolutional neural network model to recognize these similar time-domain signals and achieve a better classification of faulty bearings.

    The data are checked for outliers before being input into the machine learning model; if data cleansing is required, the outliers are removed in advance. Fig. 11 is the histogram of the sample numbers of four different bearing health states: the normal condition, one type of inner ring fault, and two types of outer ring faults are randomly selected from the eight bearing health states. To draw the histogram, 2500 points are selected from the normal condition and the inner ring fault respectively, and 4000 and 3500 points from the two types of outer ring faults, respectively. The histogram shows that the four bearing health states are normally distributed. These raw data are then used for bearing fault detection.

    The training set, validation set, and test set are not duplicated. The sample numbers of the three sets are 20,008, 2,496, and 2,496, respectively. The numbers of channels in the deep separable convolutional network are 32, 64, 64, and 64. The activation function is the Rectified Linear Unit. The data are processed in Python on an Intel i7-10700 CPU and an NVIDIA GeForce GPU. Table 2 shows the details of the proposed method, including the output shape and number of parameters of each layer.

    4. Results and discussion

    In deep learning, the loss function is the key to configuring the learning process, and the model needs to minimize its loss value during training. The loss function also evaluates how well the current task is achieved, guiding the model learning. The model parameters are modified by backpropagation according to the loss function.

    Table 1 Main parameters of cylindrical roller bearings.

    The softmax function serves as the classifier. Softmax uses the logistic function and the normalized exponential function to provide the posterior probability of the classification label, which is mapped to the [0, 1] interval. At the last output node, the node with the maximum probability is selected as the predicted bearing health state. For fault diagnosis problems, the probability values indicate the probability of the bearing health status belonging to a certain category.

    The softmax classifier is adopted in this study, and the calculation formula is

    p(y_i) = exp(y_i) / Σ_j exp(y_j),  j = 1, 2, ..., m

    where y_i is the output of the i-th neuron in the output layer, m the number of bearing failure classes, and p(y_i) the probability output of the neuron after softmax.
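A numerically stable sketch of this formula follows; the logit values for the eight health states are invented:

```python
import numpy as np

def softmax(y):
    """p(y_i) = exp(y_i) / sum_j exp(y_j), shifted by max(y) for stability."""
    e = np.exp(y - np.max(y))
    return e / e.sum()

# invented logits for the m = 8 bearing health states
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0, 3.2, 0.4, -0.5])
p = softmax(logits)
pred = int(np.argmax(p))   # index of the predicted health state
```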

    In this study, eight bearing fault vibration signals are consistently acquired at 1800 r/min. We divide the acquired data set into the training set, testing set, and validation set according to the ratio of 8:1:1. Fig. 12 shows the accuracy of the convolutional neural network model on the training and validation sets for epochs from 0 to 450. When the epoch is 60, the accuracy on the training and validation sets is about 90%, signifying the beginning of rapid convergence of the neural network model. When the epoch is larger than 240, the accuracy difference between the training and validation sets is small, indicating the absence of overfitting in the proposed convolutional neural network model. Finally, when the epoch is 450, the accuracy on the training and validation sets reaches about 98%, showing the good classification performance of the proposed bearing fault diagnosis model.

    A confusion matrix is used to summarize the results of a classifier. The eight-class classification in this study gives an 8 × 8 table, with each row indicating the number of actual samples in that class, and each column the number of samples predicted to be in that class. After the classification results are generated, matplotlib can be used to visualize the confusion matrix.
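A minimal sketch of building such a matrix (the labels here are invented; in the paper the matrix is 8 × 8):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=8):
    """Rows: actual class; columns: predicted class (8 x 8 in this study)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# invented labels for illustration
y_true = [0, 0, 1, 2, 2, 2, 7]
y_pred = [0, 1, 1, 2, 2, 0, 7]
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()   # correct predictions lie on the diagonal
# cm can then be rendered with matplotlib's imshow, as the paper does
```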

    Table 2 Network details and main parameters.

    Fig. 13 presents the confusion matrices of the training set, validation set, and testing set. The results show that the method performs well on the raw vibration signals of the bearings under mixed conditions. There are 2501 samples in the training set, with a classification accuracy of 97.87%. Except for condition (4), the accuracy of the other conditions is above 95%, among which the accuracy of conditions (6) and (7) is 100%. The validation set has 312 samples, with a classification accuracy of 97.38%. Except for condition (3), whose accuracy is below 90%, the accuracy of all the other conditions is above 95%, with that of conditions (6), (7), and (8) being 100%. The testing set has 312 samples, and the classification accuracy is 97.75%. Except for condition (3), whose accuracy is 88.5%, the accuracy of the other conditions is higher than 95%, among which the accuracy of conditions (2), (6), (7), and (8) is 100%.

    To verify the effectiveness of the proposed method in vibration signal feature extraction, the samples are visualized with t-SNE. t-SNE is an embedding model that maps data from a high-dimensional space to a low-dimensional space while retaining the local characteristics of the data set. It transforms the similarity between data points into conditional probabilities: a Gaussian joint distribution expresses the similarity of data points in the original space, and a t-distribution represents it in the embedding space.

    Fig. 14 shows the 3D results of the t-SNE visualization, and it indicates that the series features of the eight kinds of bearing conditions are accurately clustered. The three-dimensional images show apparent clustering of the cascade features obtained by the method, indicating the effectiveness of the method in extracting features from vibration signals.
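The two similarity measures t-SNE relies on can be sketched as follows. This is a simplified illustration with a fixed Gaussian bandwidth; a library implementation such as scikit-learn's TSNE tunes the bandwidth per point via perplexity:

```python
import numpy as np

def gaussian_similarities(X, sigma=1.0):
    """t-SNE's first step (simplified): similarities in the original space
    as Gaussian conditional probabilities, one row per point."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    P = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)                              # p(i|i) = 0
    return P / P.sum(axis=1, keepdims=True)

def student_t_similarities(Y):
    """Similarities in the low-dimensional embedding use a t-distribution
    with one degree of freedom; its heavy tails ease crowding."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    Q = 1.0 / (1.0 + d2)
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()

X = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0]])        # toy high-dim points
Y = np.array([[0.0], [0.1], [4.0]])                       # toy embedding
P, Q = gaussian_similarities(X), student_t_similarities(Y)
```

t-SNE then optimizes the embedding so that Q matches P, which is why nearby points in the original space cluster together in the plot.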

    Compared with the traditional one-dimensional convolutional neural network, the spatial-dropout-based deep separable convolution proposed in this study can reduce the network parameters by about 60% while maintaining good diagnostic performance. As shown in Table 3, the testing set accuracy of this method reaches 97.75%, which is as good as that of the traditional convolutional neural network. The proposed model is also superior to models based on traditional convolutional neural networks that use only dropout or spatial dropout. In terms of testing accuracy alone, this method is about three to four percentage points higher than those using only traditional convolution or dilated convolution, indirectly validating the accuracy of the proposed method for multi-classification.

    Table 3 Comparison between the proposed method and other typical diagnostic methods.

    Table 3 shows the comparison between the proposed method (Model 1) and other typical diagnostic methods. Models 2-7 are, respectively: the traditional convolutional neural network with spatial dropout, the traditional convolutional neural network with dropout, the traditional convolutional neural network, the dilated convolutional neural network, the method using a Gated Recurrent Unit neural network for sequential data diagnosis in dynamic processes,33 and the bearing fault diagnosis method using a bidirectional long short-term memory network.41

    According to Table 3, Model 1 obtains the highest accuracy of 97.75% on the test set. Compared to the training parameters of Models 2-5, those of Model 1 are reduced by 60%, allowing significant savings in training time and computational resources. Model 7 has a lower accuracy on both the training and test sets and apparently requires improvement. In summary, the proposed method can accurately diagnose bearing faults and provides better diagnostic accuracy with fewer model parameters than the other models.

    5. Conclusions

    This study proposes a new fault diagnosis method for bearings, which can directly learn from the raw vibration data to realize fault diagnosis. The experimental results show that this method effectively learns the features, with an excellent diagnostic effect on the bearing faults. Compared with traditional artificial feature extraction methods, it significantly reduces the dependence on manual labor. In addition, the accuracy of this method is higher than 97% with only half the network parameters of the traditional convolutional neural network. Our future research will focus on downsampling and the selection of valuable features using small-scale convolutional kernels.

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

    This work is supported in part by the National Key Research and Development Program of China (No. 2019YFB1704500), the State Ministry of Science and Technology Innovation Fund of China (No. 2018IM030200), the National Natural Science Foundation of China (No. U1708255), and the China Scholarship Council (No. 201906080059). The authors are grateful to Dr. Hui MA, who provided the experimental equipment.
