
    Underwater Acoustic Signal Noise Reduction Based on a Fully Convolutional Encoder-Decoder Neural Network

    2023-12-21 08:10:06  SONG Yongqiang, CHU Qian, LIU Feng, WANG Tao, and SHEN Tongsheng
    Journal of Ocean University of China, 2023, Issue 6


    SONG Yongqiang1), 2), CHU Qian3),LIU Feng1), *, WANG Tao1), and SHEN Tongsheng1)


    Noise reduction analysis of signals is essential for modern underwater acoustic detection systems. Traditional noise reduction techniques gradually lose efficacy because the target signal is masked by biological and natural noise in the marine environment. The feature extraction method combining time-frequency spectrograms and deep learning can effectively achieve the separation of noise and target signals. A fully convolutional encoder-decoder neural network (FCEDN) is proposed to address the issue of noise reduction in underwater acoustic signals. The time-domain waveform of underwater acoustic signals is converted into a wavelet low-frequency analysis recording spectrogram during the denoising process to preserve as many underwater acoustic signal characteristics as possible. The FCEDN is built to learn the spectrogram mapping between noise and target signals at each time level. Transposed convolution transforms are introduced, which can transform the spectrogram features of the signals into listenable audio files. After evaluation on the ShipsEar dataset, the proposed method can increase the SNR and SI-SNR by 10.02 and 9.5 dB, respectively.

    deep learning; convolutional encoder-decoder neural network; wavelet low-frequency analysis recording spectrogram

    1 Introduction

    The collected signals from underwater targets such as ships and submarines may contain a portion of reverberant noise with complex spectral components due to interference from complex and variable natural sound sources (Stulov and Kartofelev, 2014; Klaerner et al., 2019). These noise disturbances can affect the detection, localization, and identification of underwater acoustic signals. Therefore, raising the underwater acoustic signal-to-noise ratio to the experimental requirements before target source monitoring is necessary.

    Underwater acoustic noise reduction methods can be divided into two main categories after a long development period: traditional methods and artificial intelligence methods. Traditional methods usually involve experimentalists using manual processing to achieve underwater acoustic signal noise reduction, which is essentially a data preprocessing step based on equational inference, relying on a blind source separation framework and interpretable assumptions to construct denoising algorithms. Examples of these algorithms include multi-resolution and high-precision decomposition (Huang et al., 2012), bark wavelet analysis (Wang and Zeng, 2014), energy significance (Taroudakis et al., 2017), and empirical mode decomposition. However, as the environment changes, the corresponding biological and marine environmental noise will also change, causing significant dynamic deviations in the noise signal source. Chen et al. (2021) showed that the choice of feature extraction method has an important impact on processing the original dataset. Thus, traditional algorithms fail to learn stable noise features, making it difficult to realize breakthroughs in the signal-to-noise ratio after signal processing (Vincent et al., 2006). Additionally, these experiments can achieve simplified operations and partial assumptions only for certain signal types, particular circumstances, or partial sequences (Le et al., 2020). These traditional methods fail to provide the extensive and varied nonlinear feature learning capability that underwater acoustic signals require.

    Deep learning-based models (Hao et al., 2016) have demonstrated considerable potential for applications in various disciplines compared with traditional approaches (Wu and Wu, 2022). For example, Wang et al. (2020) proposed a novel stacked convolutional sparse denoising autoencoder model to complete the blind denoising task of underwater heterogeneous information data. Zhou and Yang (2020) designed a convolutional denoising autoencoder to obtain denoising features with multiple images segmented by parallel learning and used it to initialize a parallel classifier. Yang et al. (2021) presented deep convolutional autoencoders to denoise the clicks of the finless porpoise. Russo et al. (2021) proposed a deep network for attitude estimation (DANAE), which works on Kalman filter data integration to reduce noise using an autoencoder. Qiu et al. (2021) presented a reinforcement learning-based underwater acoustic signal processing system. However, complex system design and the selection of critical parameters are required to satisfy the oscillation conditions and maintain the nonlinear balance between signal and noise. Zhou et al. (2022) introduced the PF3SACO to accelerate convergence, improve search capability, enhance local search capability, and avoid premature convergence. In generative adversarial approaches, the generative network is used to create highly realistic spurious data, and the discriminator is utilized to discriminate data availability; Yao et al. (2022) expected that this could solve the scarcity problem of noisy data in complex marine environments. However, the unstable marine environment makes such experiments costly in terms of working time and human resources. Xing et al. (2021) used orthogonal matching pursuit and the method of optimal directions (MOD) to eliminate some noise in underwater acoustic signals; the signal reconstruction is completed in accordance with the updated dictionary and sparse coefficients. Despite its adaptive capability, this approach struggles to achieve a large improvement in the signal-to-noise ratio.
These intelligent methods mainly extract signal features manually, which causes a considerable amount of detail to be lost from the original signal (Hinton et al., 2015; Zhao et al., 2021). Moreover, these methods lack a batch-processing denoising function for underwater acoustic signals. In addition, numerous problems, such as changes in the ocean environment and the mixing of multichannel signals, increase the difficulty of obtaining high-quality signals with the above methods. Thus, realizing a breakthrough in the signal-to-noise ratio of the collected underwater acoustic signals remains challenging.

    Existing mature deep learning methods are difficult to reference and apply directly because underwater acoustic signals have unique characteristics compared with general audio signals. With the application of the Fourier transform, the time-domain signal can be converted to the time-frequency domain for representation. Compared with familiar images, the conventional time-frequency spectrum has no specific visual meaning and lacks texture features. However, some specific correlations exist between the two axes of the spectrum.

    These correlations must be dealt with concurrently during the state analysis, which poses a significant challenge for feature extraction. In addition, traditional feature extraction methods (such as smoothing noise and removing outliers) frequently extract trait values without model training. Such indiscriminate processing gradually causes the loss of some detailed information in the feature vector during subsequent module delivery. Therefore, a new wavelet low-frequency analysis recording spectrogram and a fully convolutional encoder-decoder neural network are proposed to reduce the noise of underwater acoustic signals. This paper makes the following three significant contributions.

    1) A new feature extraction technique is proposed to replace the data preprocessing procedure and effectively extract the spectrogram of underwater acoustic signals. This technique combines wavelet decomposition and low-frequency analysis theories to extract features from underwater acoustic signal spectrograms recorded in the time domain as the input of the denoising model.

    2) The encoder-decoder framework is constructed to build the deep network. The fully convolutional encoder can compress underwater acoustic feature vectors of different lengths into the high-order nonlinear feature of the same dimension and obtain the optimal expression vector by designing different kernel sizes. More importantly, the transposed convolutional decoder can solve the bottleneck of information loss due to long sequence to fixed-length vector conversion.

    3) A mapping-based approach that replaces masking is employed to optimize the fully convolutional encoder-decoder neural network. The fully convolutional mapping layer is introduced, which contributes to extracting the local characteristics of signals and timing correlation pieces of information without considering the features of the pure natural noise signal.

    2 Methodology

    An overview of the proposed system is provided, and the following two parts of the pipeline are then analyzed: the wavelet low-frequency analysis recording spectrogram extraction and the fully convolutional encoder-decoder neural network structure.

    2.1 System Overview

    First, wavelet low-frequency analysis recording spectrum features are extracted to increase the correlation between adjacent frames. Second, a fully convolutional encoder-decoder network model is used for signal noise reduction, which extracts the structural and local information of the spectrum and considers the contextual knowledge of the timing signal. A brief description of the method is shown in Fig.1.

    2.2 Wavelet Low-Frequency Analysis Recording Spectrogram Extraction

    The wavelet low-frequency analysis recording spectrogram can be used to construct a feature map with desired characteristics by modifying different wavelet functions without the required CUDA space. More importantly, the generation of the feature spectrogram is independent of the interval, surface, interval sampling, and signal length. The wavelet low-frequency analysis recording spectrogram extraction is divided into three steps. The first step is to decompose the underwater acoustic signal sequence. The entire decomposition process is shown in Fig.2, and the operation is as follows:

    Given an underwater acoustic signal sequence X, as shown in Eq. (1), where x(ti) is a value in the sequence and ti is a time node:

    A partial sequence of X is selected and divided into two parts according to the parity of the sample indices, as shown in Eqs. (2) and (3), where Xe and Xo are the even- and odd-indexed subsequences, respectively:

    Fig.2 Waveforms of underwater acoustic signals under different states.

    Among them, P(·) is the predictor, as shown in Eqs. (6) and (7):

    In the process of transformation, the frequency characteristic of Xe is maintained, and the updater U(·) is introduced. Thus, Eq. (8) holds:

    Among them, the update method can be selected from the following two functions, as shown in Eqs. (9) and (10):
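The split-predict-update procedure above is the standard lifting scheme. The text does not spell out which predictor P(·) and updater U(·) are selected, so the sketch below uses the simplest Haar-like pair (P is the identity, U halves the detail) purely to illustrate how one level of the decomposition and its exact inverse work:

```python
import numpy as np

def lifting_decompose(x):
    """One level of lifting-scheme decomposition (Haar-like P and U, an
    illustrative choice; the paper's exact predictor/updater may differ)."""
    xe, xo = x[0::2], x[1::2]   # split into even- and odd-indexed samples
    d = xo - xe                 # predict: detail = odd - P(even), with P = identity
    a = xe + d / 2              # update: approximation keeps low-frequency content
    return a, d

def lifting_reconstruct(a, d):
    """Invert the two lifting steps to recover the original sequence exactly."""
    xe = a - d / 2
    xo = d + xe
    x = np.empty(xe.size + xo.size)
    x[0::2], x[1::2] = xe, xo
    return x
```

Because each lifting step is trivially invertible, reconstruction is exact, which is what makes the scheme attractive for feature extraction without information loss.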

    The second step is detail extraction from the underwater acoustic signal sequence, as shown in Fig.2. This step reflects the temporal state transformation of a segment of the signal (Hu et al., 2007). Different decomposition methods can be established to obtain highly detailed information regarding this signal.

    The threshold λ of the coefficients is determined by Eqs. (11) and (12). The features of the signal sequence can be effectively extracted by setting the threshold λ as follows:

    where N is the data length of the detail signal sequence D = {d(i), i = 1, 2, 3, ···, N}, and the threshold processing method is shown in Eq. (13):

    where d̂(i) is the detail signal after threshold processing, which is then reconstructed by Eqs. (14) and (15) to obtain the signal:
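As a concrete illustration of this thresholding step, the sketch below applies soft thresholding with the universal threshold λ = σ·sqrt(2 ln N). The median-absolute-deviation estimate of σ is a common convention and an assumption here, since the source only names the candidate threshold rules without giving the formulas:

```python
import numpy as np

def universal_threshold(d):
    """Universal threshold lambda = sigma * sqrt(2 ln N), with sigma estimated
    from the median absolute deviation of the detail coefficients (assumed)."""
    sigma = np.median(np.abs(d)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(d.size))

def soft_threshold(d, lam):
    """Shrink detail coefficients toward zero; magnitudes below lambda vanish."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
```

Coefficients dominated by noise fall below λ and are zeroed, while large signal-bearing coefficients survive (shrunk by λ), which is the mechanism behind Eq. (13)-style detail cleaning.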

    The third step is to extract the spectrogram features of the signal after the first two steps. Different wavelet bases (Bayes, BlockJS, FDR, Minimax, SURE, Universal Threshold) are selected to extract signal details, and low-frequency analysis recording is then employed to extract the spectrogram features of the signal. Fig.3 shows the waveform features of the input signal. Different waveform features can be obtained by setting different wavelet bases (Li et al., 2019). Afterward, the spectrogram features are obtained through low-frequency analysis recording, as shown in Fig.4, and the spectrogram features of the signal are used as the fully convolutional encoder-decoder neural network input to train the model.
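The denoised time series is finally converted into a spectrogram feature map for the network. The exact low-frequency analysis recording transform is not given in closed form in the text, so the following is a minimal short-time Fourier sketch of the generic spectrogram-extraction step; the window length n_fft and hop size are illustrative choices, not values from the paper:

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram of a 1-D signal via a Hann-windowed STFT,
    returned as a (frequency, time) feature map."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (frames, n_fft//2 + 1)
    return np.log1p(spec).T                     # log compression for dynamic range
```

The resulting 2-D map is what the encoder consumes: one axis is frequency, the other is time, with the inter-axis correlations discussed above preserved.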

    Fig.3 Waveform characteristics based on different wavelet bases. (a), Bayes; (b), BlockJS; (c), FDR; (d), Minimax; (e), SURE; (f), Universal threshold.

    2.3 Fully Convolutional Encoder-Decoder Neural Network

    A fully convolutional encoder-decoder neural network structure is constructed as the denoising base model. This structure improves the performance of the denoising model by altering the network architecture or configuring various hyperparameters. Different network layers play various roles in the denoising process. The convolutional layer can be set with different kernel sizes to extract the local invariant features of the spectrogram. The encoder-decoder can be introduced to increase the weights of the relevant vectors and the feature aggregation of the local features extracted from the network. First, the acquired wavelet low-frequency analysis recording spectrogram features are used as input to the model. The encoding phase extracts the high-order features of the signal using the successive one-dimensional convolutional networks defined previously. Afterward, the encoder output is fed into the fully convolutional mapping structure, which learns the high-dimensional mapping relationship between the noise and target signals. Finally, the acquired mapping features are converted into a time-series vector that can be used to generate audio files through a transposed convolution operation. The specific model architecture is shown in Fig.5.

    Fig.4 Spectrogram features based on different wavelet bases. (a), Bayes; (b), BlockJS; (c), FDR; (d), Minimax; (e), SURE; (f), Universal Threshold.

    Fig.5 Fully convolutional encoder-decoder network.

    The fully convolutional encoder-decoder neural network has three primary operations: 1) Encoder. The convolutional layers and activation functions reduce the size of the feature map. Therefore, the input spectrogram becomes a low-dimensional representation, and a normalization method is introduced to prevent gradient disappearance. 2) Network separation module. The intermediate network layers can adapt to any input size because the fully connected layer is removed and replaced with a convolutional layer. 3) Decoder. The transposed operation progressively recovers the spatial dimension. The decoder extracts fixed-length features during the encoder-decoder process so that the output matches the input size with the least possible information loss. The different parameters are described in Table 1, where FCEDNk-n denotes a network built with a k×k convolution kernel and the convolution operation repeated n times; for example, FCEDN3-8 uses a 3×3 kernel repeated eight times, and FCEDN5-16 uses a 5×5 kernel repeated 16 times.

    Table 1 Fully convolutional encoder-decoder network (FCEDN) structure
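The decoder's resolution recovery can be illustrated with a naive 1-D transposed convolution. The kernel values and stride below are purely illustrative; the point is that each input sample scatters a scaled copy of the kernel into the output, so an input of length L grows to (L − 1)·stride + k, inverting the downsampling performed by the encoder:

```python
import numpy as np

def conv_transpose_1d(x, kernel, stride=2):
    """Naive 1-D transposed convolution: every input value adds a scaled copy
    of the kernel into the (longer) output, upsampling the feature map."""
    kernel = np.asarray(kernel, dtype=float)
    k = kernel.size
    out = np.zeros((len(x) - 1) * stride + k)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * kernel
    return out
```

Overlapping kernel copies sum where they meet, which is how the transposed operation turns a short, coarse feature vector back into a full-length, audio-ready sequence.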

    3 Experiment

    The ShipsEar dataset is presented in this section, and the experimental findings of underwater acoustic signal denoising using it as test data are discussed (Santos-Domínguez et al., 2016). Different evaluation metrics are used to represent the effect of the noise reduction experiment (Yaman et al., 2021). Various outcomes from the investigations into the reduction of underwater acoustic signal noise are shown in several ablation experiments.

    3.1 Dataset

    The dataset was collected with recordings made by hydrophones deployed from docks to capture different vessel speeds and cavitation noises corresponding to docking or undocking maneuvers. The recordings are of actual vessel sounds captured in a real environment. Therefore, anthropogenic and natural background noise and the vocalization of marine mammals are present. The dataset comprises 90 recordings in .wav format with five major classes. Each major class contains one or more subclasses; the duration of each audio segment varies from 15 s to 10 min, and the appearance of different ships is shown in Fig.6.

    Fig.6 Ships.

    Each class is divided as shown in Table 2. Class A comprises dredgers, fishing ships, mussel ships, trawlers, and tug ships. Class B comprises motorboats, pilot ships, and sailboats. Class C comprises passenger ferries. Class D comprises ocean liners and RORO vessels. Class E is natural noise, which we mix with the first four classes to construct noise-containing targets; the numbers represent the length of the signal in time. A noise-laden dataset containing a mixture of two acoustic signals was constructed to validate the denoising performance of the model effectively. All signals were segmented at a fixed duration of 5 s, resulting in a total of 1956 labeled sound samples. Target samples were randomly selected from the data and fused with samples of the noise class, so the signal-to-noise ratio of the fused signals was 0 dB. Afterward, the dataset was divided into validation, testing, and training sets in the ratio of 1:1:8 to verify the denoising performance of the model.

    Table 2 Datasets of ShipsEar
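The 0 dB mixing described above amounts to rescaling the noise so that the target-to-noise energy ratio equals the requested value before summing. A minimal sketch, with `mix_at_snr` as a hypothetical helper name:

```python
import numpy as np

def mix_at_snr(target, noise, snr_db=0.0):
    """Scale the noise so that 10*log10(P_target / P_noise) equals snr_db,
    then add it to the target; snr_db = 0 gives equal-power mixing."""
    p_t = np.mean(target ** 2)
    p_n = np.mean(noise ** 2)
    scale = np.sqrt(p_t / (p_n * 10.0 ** (snr_db / 10.0)))
    return target + scale * noise
```

Fixing the mixture SNR at construction time gives every model the same starting point, so the SNR/SI-SNR gains reported later measure the denoiser rather than the data.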

    3.2 Configuration

    All networks are trained using backpropagation and gradient descent, with batch normalization added after each convolutional layer in the mapping network. The optimizer is an adaptive moment estimation (Adam) algorithm that combines the first and second gradient moments (Wang et al., 2020). Based on experience, this article sets the exponential decay rates of the first- and second-order moment estimations to 0.9 and 0.999, respectively; these rates typically lie close to 1 for sparse gradients. The sample rate is set to 44100, and the number of epochs is set to 50. The learning rate is set to 0.0001 and is then reduced by 25% from its initial value. When the experiment overfits, the sampling rate is reduced by 25% and the learning rate by a further 10%.
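For reference, a single Adam update with the decay rates quoted above (β1 = 0.9, β2 = 0.999) looks like the sketch below; the bias-correction terms keep the early moment estimates from being biased toward zero. The learning rate shown is the paper's 0.0001; everything else is the standard Adam recipe rather than anything specific to this model:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the first and second
    gradient moments, bias-corrected, then a scaled parameter step."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction, t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```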

    3.3 Experimental Evaluation

    Training the end-to-end learning framework aims to maximize the source-to-distortion ratio (SDR) and the scale-invariant source-to-noise ratio (SI-SNR). These ratios are commonly used as evaluation metrics for signal noise reduction. The SDR requires knowledge of the target and enhanced signals. It is an energy ratio, expressed in dB, between the energy of the target signal contained in the enhanced signal and the energy of the errors. Compared with the SDR, the SI-SNR uses a single coefficient to account for scaling discrepancies. The scale invariance is ensured by normalizing the signals to zero mean before the calculation, which makes the metric robust to amplitude differences. With s the target and ŝ the enhanced signal, the SI-SNR is defined as SI-SNR = 10 log10(‖s_target‖² / ‖e_noise‖²), where s_target = (⟨ŝ, s⟩ / ‖s‖²)s and e_noise = ŝ − s_target.
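A minimal sketch of the SI-SNR computation under the usual zero-mean projection formulation: the estimate is projected onto the target, and the projection energy is compared with the residual energy, which makes the score invariant to any rescaling of the estimate:

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB: zero-mean both signals, project the estimate
    onto the target, then compare projection energy to residual energy."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    s_t = np.dot(estimate, target) / (np.dot(target, target) + eps) * target
    e = estimate - s_t
    return 10.0 * np.log10((np.dot(s_t, s_t) + eps) / (np.dot(e, e) + eps))
```

Because both the projection and the residual scale together, multiplying the estimate by any constant leaves the score unchanged, which is exactly the scaling-discrepancy robustness described above.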

    3.4 Results

    Some ablation experiments were performed, introducing various models to compare denoising efficiency and confirm the validity of the proposed model. Tables 3 and 4 show the results achieved when applying the FCEDN3-8 construction to implement noise reduction for different target classes. Table 4 demonstrates the use of Class B as a test set to verify the noise reduction performance of the different base models. The base models mainly comprise interval-dependent denoising (IDD) (Yan et al., 2019), the fully connected network (FCN) (Russo et al., 2021), the convolutional network (CNN), and wavelet denoising (WD) (Huang et al., 2012).

    Table 3 Results of different targets

    Table 4 Results of different noise reduction methods

    The experimental results reveal the following:

    1) A random selection of ambient natural sounds as perturbations was used to verify whether the model has denoising performance. Table 3 reveals that using FCEDN3-8 to reduce natural environment noise can increase the target signal SNR and SI-SNR by an average of 8.3 and 6 dB, respectively. Therefore, using FCEDN3-8 significantly improves the target signal after processing various classes of the noisy signal.

    2) The denoising of underwater acoustic signals can be significantly influenced by various network layer depths, layer structures, numbers of filters, filter width design methods, and filtering methods. Furthermore, the characteristic expression of energy transfer is significantly attenuated when the number of layers exceeds a particular range. Therefore, the impacts of 3×3 and 5×5 convolution kernels and the number of iterations on experimental outcomes were compared, as shown in Table 4. The best results were achieved when the network was constructed with eight iterations of the 3×3 convolutional kernel. The model can extract local features precisely using small convolutional kernels, increasing the generalizability of the model. However, the model may overfit when the number of iterations is too high. Therefore, considering various factors, model sizes, and denoising effects, FCEDN3-8 was selected as the model architecture.

    3) The fully convolutional layers and the small kernel used to construct the denoising network produce the best performance compared with the other base models. As shown in Fig.7, FCEDN3-8 steadily improves the signal-to-noise ratio during model training, with convergence reached after approximately 45 epochs. The FCN converges slowly during model training (Sutskever and Hinton, 2014). The convolutional layer decreases the resource requirements of the model compared with the fully connected layer when the parameters are trained. Therefore, replacing the fully connected operations of other models with successive convolution kernels is a practical and innovative step. Moreover, the continuous convolution kernels deepen the stack of network layers, allowing the parameters to grow linearly rather than exponentially during the forward pass.

    Fig.7 Noise reduction process for different methods.

    4) As shown in Figs.8–11, the noise reduction effect of FCEDN3-8 can be confirmed by observing the change in the waveform and spectrogram of the signal. The original signal sampling rate was too high, and the feature information was not readily apparent. Thus, the classification network cannot accept the original signal directly, and signal features are usually extracted as spectrograms for classification experiments. The time-frequency analysis method provides joint information in the time and frequency domains, which can describe the relationship between signal frequency and time change and thus determine the type of signal. However, spectrogram analysis cannot reliably identify the signal class due to the inherent environmental noise (the upper part of each figure shows the waveform and spectrogram of the target signal covered by noise). The lower part of each figure shows the waveform and spectrogram of the signal after processing with FCEDN3-8. Different classes of underwater acoustic signals can be distinguished due to the noise reduction effect. Afterward, the acoustic generation mechanism and propagation law of ship noise were analyzed in combination with the underwater acoustic channel, and the particular time-frequency distribution difference was used for further signal detection and classification tasks.

    5) The effects of different noise reduction signals on the classification results are verified, as shown in Table 5. In addition, the classification confusion matrix of FCEDN3-8 models before and after noise reduction is presented, as shown in Figs.12 and 13, respectively.

    We adopt a statistical sampling method in which different classes of denoised acoustic signals are randomly selected to validate the effectiveness of the noise reduction method. The classical LSTM was chosen as the validation model (Liu et al., 2021). The confusion matrix is used to describe the experimental results of the classification; the horizontal and vertical coordinates represent the predicted and true classes, respectively. According to Table 5, the accuracy of the LSTM model can reach 76.18% when the signal has been denoised using the FCEDN3-8 method. The findings indicate that denoising improves accuracy by approximately 8%. In particular, the classification accuracies for Classes D and A increased from 67.9% to 79.4% and from 49.2% to 60.6%, respectively, as shown in Figs.12 and 13.

    Fig.8 Class A signal waveform diagram and spectrogram.

    Fig.9 Class B signal waveform diagram and spectrogram.

    Fig.10 Class C signal waveform diagram and spectrogram.

    Fig.11 Class D signal waveform diagram and spectrogram.

    Table 5 Classification results after noise reduction

    Fig.12 Classification results before underwater acoustic sig- nal noise reduction.

    4 Conclusions

    Noise reduction processing for underwater acoustic signals is implemented in this paper using deep learning techniques, and the FCEDN is proposed. The model is an end-to-end underwater acoustic signal denoising algorithm with a noise-containing signal at the input and a denoised signal at the output. Wavelet decomposition and low-frequency analysis theories are used to extract the features of the underwater acoustic signal. Deep neural networks are employed to create the separation module between the target and noise signals. Meanwhile, the fully convolutional network structure is used to construct the mapping separation module based on an encoder-decoder neural network. This technique can successfully perform robust feature extraction and signal-to-noise separation for noisy underwater acoustic targets. The evaluation results on the ShipsEar dataset show that the method can enhance the SNR and SI-SNR by 10.2 and 9.5 dB, respectively.

    Acknowledgements

    The study is supported by the National Natural Science Foundation of China (No. 41906169), and the PLA Academy of Military Sciences.

    Chen, H., Miao, F., Chen, Y., Xiong, Y., and Chen, T., 2021. A hyperspectral image classification method using multifeature vectors and optimized KELM, 14: 2781-2795.

    Hao, X., Zhang, G., and Ma, S., 2016. Deep learning, 10 (3): 417-439.

    Hinton, G., Vinyals, O., and Dean, J., 2015. Distilling the knowledge in a neural network, 14 (7): 38-39.

    Hu, Q., He, Z., Zhang, Z., and Zi, Y., 2007. Fault diagnosis of rotating machinery based on improved wavelet package transform and SVMs ensemble, 21 (2): 88-705.

    Huang, H. D., Guo, F., Wang, J. B., and Ren, D. Z., 2012. High precision seismic time-frequency spectrum decomposition method and its application, 47 (5): 773-780.

    Klaerner, M., Wuehrl, M., Kroll, L., and Marburg, S., 2019. Accuracy of vibro-acoustic computations using non-equidistant frequency spacing, 145: 60-68.

    Le, C., Zhang, J., Ding, H., Zhang, P., and Wang, G., 2020. Preliminary design of a submerged support structure for floating wind turbines, 19 (6): 49-66.

    Li, H., Zhang, S., Qin, X., Zhang, X., and Zheng, Y., 2019. Enhanced data transmission rate of XCTD profiler based on OFDM, 18 (3): 1-7.

    Liu, F., Shen, T., Luo, Z., Zhao, D., and Guo, S., 2021. Underwater target recognition using convolutional recurrent neural networks with 3-D Mel-spectrogram and data augmentation, 178: 107989.

    Qiu, Y., Yuan, F., Ji, S., and Cheng, E., 2021. Stochastic resonance with reinforcement learning for underwater acoustic communication signal, 173: 107688.

    Russo, P., Di Ciaccio, F., and Troisi, S., 2021. DANAE++: A smart approach for denoising underwater attitude estimation, 21: 1526.

    Santos-Domínguez, D., Torres-Guijarro, S., Cardenal-López, A., and Pena-Gimenez, A., 2016. ShipsEar: An underwater vessel noise database, 113: 64-69.

    Stulov, A., and Kartofelev, D., 2014. Vibration of strings with nonlinear supports, 76 (1): 223-229.

    Sutskever, I., and Hinton, G. E., 2014. Deep, narrow sigmoid belief networks are universal approximators, 20 (11): 2629-2636.

    Taroudakis, M., Smaragdakis, C., and Chapman, N. R., 2017. Denoising underwater acoustic signals for applications in acoustical oceanography, 25 (2): 1750015.

    Vincent, E., Gribonval, R., and Févotte, C., 2006. Performance measurement in blind audio source separation, 14 (4): 1462-1469.

    Wang, S., and Zeng, X., 2014. Robust underwater noise targets classification using auditory inspired time-frequency analysis, 78: 68-76.

    Wang, X., Zhao, Y., Teng, X., and Sun, W., 2020. A stacked convolutional sparse denoising autoencoder model for underwater heterogeneous information data, 167: 107391.

    Wu, D., and Wu, C., 2022. Research on the time-dependent split delivery green vehicle routing problem for fresh agricultural products with multiple time windows, 12 (6): 793.

    Xing, C., Wu, Y., Xie, L., and Zhang, D., 2021. A sparse dictionary learning-based denoising method for underwater acoustic sensors, 180: 108140.

    Yaman, O., Tuncer, T., and Tasar, B., 2021. DES-Pat: A novel DES pattern-based propeller recognition method using underwater acoustical sounds, 175: 107859.

    Yan, H., Xu, T., Wang, P., Zhang, L., Hu, H., and Bai, Y., 2019. MEMS hydrophone signal denoising and baseline drift removal algorithm based on parameter-optimized variational mode decomposition and correlation coefficient, 19 (21): 4622.

    Yang, W., Chang, W., Song, Z., Zhang, Y., and Wang, X., 2021. Transfer learning for denoising the echolocation clicks of finless porpoise using deep convolutional autoencoders, 150 (2): 1243-1250.

    Yao, R., Guo, C., Deng, W., and Zhao, H. M., 2022. A novel mathematical morphology spectrum entropy based on scale-adaptive techniques, 126: 691-702.

    Zhao, Y. X., Li, Y., and Wu, N., 2021. Data augmentation and its application in distributed acoustic sensing data denoising, 288 (1): 119-133.

    Zhou, X., and Yang, K., 2020. A denoising representation framework for underwater acoustic signal recognition, 147 (4): 377-383.

    Zhou, X., Ma, H., Gu, J., Chen, H., and Wu, D., 2022. Parameter adaptation-based ant colony optimization with dynamic hybrid mechanism, 114: 105139.

    (June 30, 2022; August 25, 2022; February 15, 2023)

    © Ocean University of China, Science Press and Springer-Verlag GmbH Germany 2023

    * Corresponding author. E-mail: 1609217323@qq.com

    (Edited by Chen Wenwen)

免费在线观看亚洲国产| 草草在线视频免费看| 国产探花在线观看一区二区| 欧美日韩乱码在线| 亚洲中文字幕日韩| 99热6这里只有精品| 少妇丰满av| 天堂网av新在线| 久久中文字幕人妻熟女| 在线观看免费视频日本深夜| 美女 人体艺术 gogo| 国产真人三级小视频在线观看| 观看免费一级毛片| av在线天堂中文字幕| 亚洲欧美精品综合一区二区三区| 国产美女午夜福利| 午夜a级毛片| 一级毛片女人18水好多| 视频区欧美日本亚洲| 婷婷亚洲欧美| 成人性生交大片免费视频hd| av国产免费在线观看| 国产在线精品亚洲第一网站| 91av网站免费观看| 日本黄色视频三级网站网址| 桃色一区二区三区在线观看| 国产黄色小视频在线观看| 又紧又爽又黄一区二区| 精品一区二区三区四区五区乱码| 99久久精品热视频| 看黄色毛片网站| 色播亚洲综合网| 一进一出抽搐gif免费好疼| 久久久国产成人精品二区| 国产精品,欧美在线| 久久精品国产综合久久久| 中国美女看黄片| 国产三级中文精品| 亚洲第一电影网av| 成人特级av手机在线观看| 亚洲色图av天堂| 一级作爱视频免费观看| 午夜福利欧美成人| 国产精品电影一区二区三区| cao死你这个sao货| 18禁观看日本| 日韩欧美在线二视频| 国产亚洲欧美在线一区二区| 超碰成人久久| 两性夫妻黄色片| 亚洲欧美日韩无卡精品| 天堂网av新在线| 一夜夜www| 999精品在线视频| 国产成人欧美在线观看| 久久久久久久久久黄片| 老司机深夜福利视频在线观看| 毛片女人毛片| 亚洲成人中文字幕在线播放| 人妻丰满熟妇av一区二区三区| 亚洲av第一区精品v没综合| 亚洲一区二区三区色噜噜| 亚洲第一欧美日韩一区二区三区| 长腿黑丝高跟| 国内精品一区二区在线观看| 亚洲人成电影免费在线| 国产高清激情床上av| 99国产精品99久久久久| 日本 欧美在线| 日韩精品青青久久久久久| 午夜成年电影在线免费观看| 色综合站精品国产| 国产精品1区2区在线观看.| 99热6这里只有精品| 午夜免费观看网址| 久久久久免费精品人妻一区二区| 国产亚洲av嫩草精品影院| 成人三级黄色视频| 夜夜躁狠狠躁天天躁| 黄频高清免费视频| 午夜视频精品福利| 欧洲精品卡2卡3卡4卡5卡区| 久久久国产精品麻豆| 日本成人三级电影网站| 两人在一起打扑克的视频| 亚洲av美国av| 欧美精品啪啪一区二区三区| 操出白浆在线播放| 国产精品久久久久久精品电影| 免费一级毛片在线播放高清视频| 国产精品女同一区二区软件 | 舔av片在线| 亚洲最大成人中文| 香蕉av资源在线| 国产久久久一区二区三区| 丰满人妻一区二区三区视频av | 亚洲人与动物交配视频| 神马国产精品三级电影在线观看| 免费看日本二区| 91麻豆精品激情在线观看国产| bbb黄色大片| 精品福利观看| 欧美日韩一级在线毛片| 一级a爱片免费观看的视频| 亚洲天堂国产精品一区在线| 亚洲18禁久久av| 91在线观看av| 99久久无色码亚洲精品果冻| 国产精品98久久久久久宅男小说| 国模一区二区三区四区视频 | 日本a在线网址| 又紧又爽又黄一区二区| 真人一进一出gif抽搐免费| 1000部很黄的大片| e午夜精品久久久久久久| 午夜a级毛片| 亚洲在线自拍视频| 噜噜噜噜噜久久久久久91| 国产免费男女视频| 丝袜人妻中文字幕| 国内少妇人妻偷人精品xxx网站 | 少妇人妻一区二区三区视频| 欧美中文综合在线视频| 亚洲国产欧洲综合997久久,| 午夜日韩欧美国产| 夜夜爽天天搞| 悠悠久久av| 亚洲精品456在线播放app | 亚洲五月婷婷丁香| 亚洲av成人不卡在线观看播放网| 成人鲁丝片一二三区免费| 亚洲国产欧洲综合997久久,| 婷婷精品国产亚洲av| av片东京热男人的天堂| 亚洲精品美女久久av网站| 日本成人三级电影网站| 99久久综合精品五月天人人| 亚洲av电影在线进入| av在线蜜桃|