
    Nonlinear Prediction with Deep Recurrent Neural Networks for Non-Blind Audio Bandwidth Extension

China Communications, 2018, Issue 1

Lin Jiang, Ruimin Hu*, Xiaochen Wang, Weiping Tu, Maosheng Zhang. National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China; Institute of Big Data and Internet Innovation, Hunan University of Commerce, Changsha, China; Software College, East China University of Technology, Nanchang, China; Collaborative Innovation Center for Economics Crime Investigation and Prevention Technology, Jiangxi Province, Nanchang, China; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China; Collaborative Innovation Center of Geospatial Technology, Wuhan, China

    I. INTRODUCTION

In modern telecommunications, audio coding has become an essential technology that attracts considerable attention. In particular, for mobile applications, packing data into a small space with efficient methods is beneficial. The coding algorithms must also be fairly simple, because mobile processors are relatively weak and less processing consumes less battery. Audio bandwidth extension (BWE) is a standard technique within contemporary audio codecs to efficiently code audio signals at low bitrates [1].

In audio codecs, the signal is split into low frequency (LF) and high frequency (HF) parts, which are encoded by a core codec and by BWE, respectively. This approach is based on the properties of human hearing. The hearing threshold for high frequencies is higher than for lower frequencies (except very low frequencies), so high frequency tones are not heard as loud as tones of the same amplitude at lower frequencies [2]. Also, the frequency resolution of hearing is better at lower frequencies. Therefore, the coding bitrate for HF can be far lower than for LF.

Another useful feature of many types of audio samples is that the level of the higher frequencies is usually lower than the level of the lower frequencies. Finally, the sound of many musical instruments is harmonic, which means that some properties of the frequency spectrum are very similar at lower and higher frequencies [3]. This similarity of the frequency spectrum is also called the correlation between LF and HF.

According to the properties and features described above, on the decoder side the HF signal is usually generated from a duplication of the corresponding decoded LF signal together with a priori knowledge of HF. Depending on whether parameters are transmitted, BWE methods fall into two categories: blind BWE and non-blind BWE. In non-blind BWE, a few HF parameters are transmitted to the decoder side for reconstructing the high frequency signal. In this paper we discuss only non-blind BWE; for the sake of concise narrative, the term non-blind BWE will be replaced with the abbreviation BWE in the following sections.

In audio coding standards, BWE is a necessary module for coding the high frequency signal. For example, MPEG Advanced Audio Coding (AAC) uses the spectral band replication (SBR) method [4], AMR-WB+ uses an LPC-based BWE [5], ITU-T G.729.1 uses a hierarchical BWE [6], China AVS-M uses an LPC-based BWE in the FFT domain [7,8], MPEG Unified Speech and Audio Coding (USAC) uses an enhanced SBR (eSBR) [9], and 3GPP EVS uses a multi-mode BWE method, including TBE [10], FD-BWE [41], and IGF [42]. There are two main categories of BWE methods: time domain BWE and frequency domain BWE.

Time domain BWE performs adaptive signal processing according to the well-known time-varying source-filter model of speech production [1]. This approach is based on the Linear Predictive Coding (LPC) paradigm (abbreviated LPC-based BWE), in which the speech signal is generated by sending an excitation signal through an all-pole synthesis filter. The excitation signal is directly derived from a duplication of the decoded LF signal. The all-pole synthesis filter models the spectral envelope and shapes the fine pitch structure of the excitation signal when generating the HF signal. A small number of HF parameters, as parametric representations of the spectral envelope, are transmitted to the decoder side, such as Linear Prediction Cepstral Coefficients (LPCCs), Cepstral Coefficients (CEPs), Mel Frequency Cepstral Coefficients (MFCCs) [11,12], and Line Spectral Frequencies (LSFs) [13]. In order to improve the perceptual quality of coding, a codebook mapping technique has also been introduced for achieving more accurate representations of the HF envelope [14,15].

As the basic principle of LPC-based BWE is the speech generation model, this approach is widely used in speech coding [5-7,16]. However, because the lower frequencies of voiced speech signals generally exhibit a stronger harmonic structure than the higher frequencies, the duplication of the LF excitation can introduce excessively harmonic components into the generated HF excitation signal, which brings out objectionable, 'buzzy'-sounding artifacts [16].

Frequency domain BWE recreates the HF spectral band signals in the frequency domain. The basic principle of this BWE approach is derived from the human auditory system, in which hearing is mainly based on a short-term spectral analysis of the audio signal. Spectral band replication (SBR) [4] is the most widely used frequency domain BWE method. SBR uses a Pseudo-Quadrature Mirror Filter (PQMF) description of the signal and improves the compression efficiency of perceptual audio codecs. This is achieved by simply copying the LF bands to the HF bands within the filter bank, followed by post-processing (including inverse filtering, adaptive noise addition, sinusoidal regeneration, and shaping of the spectral envelope). However, if the correlation between low and high frequencies becomes weak, the method produces artifacts because the harmonic structure of the HF signal is not preserved. To remedy this, several methods have been developed to maintain the harmonic structure: the phase vocoder driven Harmonic Bandwidth Extension (HBE) [17], the Continuously Modulated Bandwidth Extension (CM-BWE) using single sideband modulation [18], QMF-based harmonic spectral band replication [19], and an MDCT-based harmonic spectral bandwidth extension method [20]. These methods significantly improved the perceptual quality of coding. However, coding artifacts inevitably remain, because replication from LF to HF requires a strong correlation between LF and HF [21].

The above mentioned BWE methods take two steps to generate the HF signal. First, they rebuild the coarse HF signal by copying LF to HF within the current time frame. Second, they generate the final HF signal by envelope adjustment using the transmitted HF envelope data. In the first step, the similarity between the coarse HF and the original HF directly affects the perceptual quality of coding. Consequently, weak correlation between HF and LF results in degraded perceptual quality. Our investigation found that correlation exists in the LF signal of context dependent frames in addition to the current frame. In this paper, our main goal is to obtain a more accurate coarse HF signal to improve the perceptual quality of coding. We propose a novel method that predicts the coarse HF signal with a deep recurrent neural network using the context dependent LF signal, and we replace the conventional replication method with our method in the reference codecs. Moreover, in order to confirm the motivation of our method, we also propose a method to quantitatively analyse the correlation between the LF and HF signals.

The paper is organized as follows. Section 2 describes the motivation of this paper. In section 3, the prediction method for the coarse HF signal is given. The performance of the proposed method and comparisons with others are shown in section 4. Finally, section 5 presents the conclusion of this paper.

    Fig. 1. Generic scheme of BWE.

    II. MOTIVATION

    2.1 Overview of BWE scheme

The generic scheme of BWE is shown in figure 1. According to the perceptual difference of the human auditory system between HF and LF, the full-band input signal S_full is split into the HF signal S_hf and the LF signal S_lf. The LF signal is coded using a core codec, such as algebraic code excited linear prediction (ACELP) [5,7,8,10], Transform Coded Excitation (TCX) [5,7,8], or MDCT-based coding [6,9,10,36]. The HF signal is usually not waveform-coded explicitly; only a small number of HF parameters P_hf are extracted and transmitted to the decoder side. On the decoder side, the final HF signal S'_hf is recreated using the coarse HF signal C_hf and the decoded HF parameters P'_hf. The coarse HF signal is usually generated from the decoded LF signal S'_lf. To produce a pleasant-sounding HF signal, an intuitive approach is to increase the number of HF parameters; however, this conflicts with the low-bitrate requirement. Some approaches have therefore been developed to enhance the similarity between the coarse HF and the original HF.

In time domain BWE, the coarse HF signal is usually derived from the decoded LF excitation signal. To preserve the harmonic structure of the HF excitation signal, a nonlinear function is used [16,22]. In [23], the HF excitation signal is generated by upsampling a low band fixed codebook vector and a low band adaptive codebook vector to a predetermined sampling frequency. In frequency domain BWE, the coarse HF signal is usually derived from a duplication of the decoded LF subband signal in the frequency domain. To preserve the harmonic structure of the original HF, some post-processing is usually introduced, such as inverse filtering, adaptive noise addition, sinusoidal regeneration, shaping of the spectral envelope [20], single sideband modulation [18], and a phase vocoder [17].

The above approaches help improve the similarity of the coarse HF to the original HF. However, the improvement is limited when the correlation between the HF and LF signals becomes weak. More importantly, we found that existing methods use only the current frame's decoded LF signal to generate the coarse HF. Given the physical properties of the audio signal, we consider that correlation also exists in the LF signal of context dependent frames.

    2.2 Correlation analysis between HF and LF

The motivation for all bandwidth extension methods is the fact that the spectral envelopes of the lower and higher frequency bands of an audio signal are dependent, i.e., the low band part of the audio spectrum provides information about the spectral shape of the high band part. The level of dependency affects the accuracy of the reconstructed HF signal. In existing BWE methods, only the current frame LF signal is used to recreate the coarse HF. The use of the current frame exploits the short-term correlation of the audio signal. However, there is also a long-term correlation when the fundamental frequency of a voice changes slowly [24]. To reveal this long-term correlation for recreating the HF signal, we quantitatively analyse the correlation using the mutual information between HF and LF.

Taking into account the uncertainty and nonlinearity of audio signals, mutual information is an appropriate measure of correlation [25,26]. The mutual information (MI) between two continuous variables X and Y is given by [27]:

$$I(X;Y) = h(Y) - h(Y|X)$$

where h(Y) is the differential entropy of Y and is defined by an integration over the value space Ω_Y of Y:

$$h(Y) = -\int_{\Omega_Y} f_Y(y)\,\log f_Y(y)\,dy$$

where f_Y(y) denotes the probability density function (pdf) of Y. The conditional differential entropy h(Y|X) of Y given X is defined as:

$$h(Y|X) = -\int_{\Omega_X}\int_{\Omega_Y} f_{Y,X}(y,x)\,\log f_{Y|X}(y|x)\,dy\,dx$$

where Ω_X is the value space of X and f_{Y,X}(y,x) is the joint pdf of X and Y. Throughout our correlation analysis, X is a frequency spectral amplitude vector A_L representing the LF band and Y is a frequency spectral amplitude vector A_H representing the HF band. The mutual information is defined in the discrete form:

$$I(A_L;A_H) = \sum_{a_L}\sum_{a_H} p(a_L,a_H)\,\log_2\frac{p(a_L,a_H)}{p(a_L)\,p(a_H)}$$

where p(a_L,a_H) is the joint probability of LF and HF, and p(a_L) and p(a_H) denote the prior probabilities of LF and HF, respectively.

In order to quantitatively analyse the correlation for various types of sound, we calculate the MI value $I(A_L^{i-t}; A_H^{i})$ under different frame shifts, where $A_H^{i}$ denotes the i-th frame HF, $A_L^{i-t}$ denotes the (i-t)-th frame LF, and t is the frame shift. In figure 2, the correlation is greatest when t = 0 (the greater the MI value, the higher the correlation). This is easy to understand, because the HF and LF come from the same frame. Just as hypothesized in section 2.1, correlation also exists between the i-th frame HF and the i-1, i-2, i-3, … frame LF signals. Moreover, we also give the average MI values of various types of sound (e.g. speech, music and instruments) for evaluating the correlation (see figure 3). In figure 3, we also find that the HF signal is associated not only with the LF signal of the current frame, but also with the LF signal of preceding frames. All of this shows that HF reconstruction can be derived from the LF signal of context dependent frames besides the current frame.
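As an illustration, the discrete MI above can be estimated from histograms of paired LF/HF frame amplitudes. The following sketch is ours, not the paper's tooling; it assumes each frame's spectral amplitude vector has been reduced to a scalar summary (e.g. its mean amplitude) so that a simple 2-D histogram estimator applies, and all function names are illustrative.

```python
import numpy as np

def mutual_information(a_l, a_h, bins=32):
    """Histogram estimate of I(A_L; A_H) in bits from paired samples."""
    joint, _, _ = np.histogram2d(a_l, a_h, bins=bins)
    p_joint = joint / joint.sum()                # joint pmf p(a_L, a_H)
    p_l = p_joint.sum(axis=1, keepdims=True)     # marginal p(a_L), shape (bins, 1)
    p_h = p_joint.sum(axis=0, keepdims=True)     # marginal p(a_H), shape (1, bins)
    nz = p_joint > 0                             # skip zero cells to avoid log(0)
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_l * p_h)[nz])))

def mi_under_frame_shift(lf_amp, hf_amp, max_shift=5, bins=32):
    """MI between the i-th frame HF and the (i - t)-th frame LF, t = 0..max_shift."""
    return [mutual_information(lf_amp[:len(lf_amp) - t] if t else lf_amp,
                               hf_amp[t:], bins)
            for t in range(max_shift + 1)]
```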

Fig. 2. The MI (bits) of a bagpipe sound under different frame shifts.

Fig. 3. The average MI (bits) of various types of sound under different frame shifts.

Fig. 4. Conceptual comparison between the conventional method and our method for generating the coarse HF signal, shown on spectrograms.

    2.3 Selection of prediction method

The purpose of this paper is to predict the coarse HF signal from LF. In particular, we establish a nonlinear mapping model from LF to HF for achieving a more accurate coarse HF signal. In blind bandwidth extension, nonlinear mapping models have been developed as a generic method for expanding the wideband speech signal. In these methods, a neural network is a usual choice due to its strong modelling capacity [28-30]. As the mapping from LF to HF is extremely complicated, the modelling ability of earlier shallow networks is inadequate. In our previous work, we used a deep auto-encoder neural network to predict the coarse HF signal [31]. That method significantly improved the perceptual quality of coding. However, since it uses only the current frame LF signal, the improvement is limited when the correlation between LF and HF becomes weak.

According to the correlation analysis above, correlation exists in the LF signal of context dependent frames besides the current frame. The selected mapping method must therefore be able to model time-series signals. Deep recurrent neural networks have recently shown excellent performance for large-scale acoustic modelling [32]. Consequently, we select them as the modelling tool for predicting the coarse HF signal.

    III. THE PREDICTION METHOD OF COARSE HF SIGNAL

    3.1 Problem statement

In previous work, the coarse HF signal is usually generated by a duplication of the corresponding current frame LF signal. According to the correlation analysis in section 2, we establish a nonlinear mapping model to predict the coarse HF signal using the context dependent LF signal. The conceptual comparison between the conventional method and our method is shown in figure 4. The prediction task can be formulated mathematically as a generative modelling problem. This reformulation allows applying a wide range of well-established methods.

Let $D = \{(l_i, h_i)\}_{i=1}^{N}$ be the training set, where $l_i$ and $h_i$ are the i-th frame decoded LF signal and the original coarse HF signal, respectively. We divide the dataset into training and validation sets with sizes $N_l$ and $N_v$, respectively. Further, we introduce the set of prediction functions $F$ in which we want to find the best model. Assuming a neural network with a fixed architecture, it is possible to associate the set of functions $F$ with the network weight space $W$, and thus a function $f$ and a vector of weights $w$ are interchangeable.

The next step is to introduce the loss function. As in almost all generative models, we are interested in an accuracy error measure. In particular, we wish to find a "perfect" prediction function that generates the coarse HF signal with minimal error. We define $Loss(h, y)$ as the loss function, where $h$ is the original coarse HF signal and $y = f(l)$ is the value predicted by the model from the decoded LF signal $l$. Assuming that there is a "perfect" prediction $f^* \in F$ in the prediction function set $F$, our task is to find $f^*$ in the best possible way. According to statistical learning theory [43], the risk associated with prediction $f(l)$ is defined as the expectation of the loss function:

$$R(f) = \mathbb{E}\left[Loss(h, f(l))\right] = \int Loss(h, f(l))\, dP(h, l)$$

where $P(h, l)$ is a joint probability distribution over the coarse HF signal training set $H$ and the decoded LF training set $L$. Our ultimate goal is to find a prediction function $f^*$ among a fixed class of functions $F$ for which $R(f)$ is minimal:

$$f^* = \arg\min_{f \in F} R(f)$$

In general, the risk $R(f)$ cannot be computed directly because the distribution $P(h, l)$ is unknown to the learning algorithm. However, we can compute an approximation of $f^*$, called the empirical risk [43], by averaging the loss function on the training set:

$$R_{emp}(f) = \frac{1}{N_l} \sum_{i=1}^{N_l} Loss(h_i, f(l_i))$$

where $N_l$ is the size of the training set. The empirical risk minimization principle [43] states that the learning algorithm should choose a prediction $\hat{f}$ which minimizes the empirical risk:

$$\hat{f} = \arg\min_{f \in F} \frac{1}{N_l} \sum_{i=1}^{N_l} \sum_{j=1}^{M} \left( h_{i,j} - f(l_i; w, b)_j \right)^2$$

where $i$ denotes the frame index, $j$ is the frequency spectrum coefficient index, $M$ is the frame length, and $w$ and $b$ are the network weights and bias terms, respectively. A common approach to reduce overfitting is to check the validation error from time to time during the optimization process and to stop when it starts growing. Because the validation error goes up and down over short intervals, the "starts growing" criterion is applied over consecutive checks, e.g. 5 frames: if the validation error rises steadily, training is stopped.
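A minimal sketch of this early-stopping loop follows, assuming a model object with `train_one_epoch`, `sse`, `get_weights` and `set_weights` methods; this interface is our hypothetical stand-in, not the paper's implementation.

```python
def train_with_early_stopping(model, train_set, val_set, patience=5, max_epochs=1000):
    """Minimize empirical risk, stopping once the validation error has grown
    for `patience` consecutive checks (5 in the paper's criterion)."""
    best_val, best_w, grew = float("inf"), model.get_weights(), 0
    for _ in range(max_epochs):
        model.train_one_epoch(train_set)      # one optimization pass (assumed API)
        val_err = model.sse(val_set)          # validation sum of squared errors
        if val_err < best_val:                # error went down: keep going
            best_val, best_w, grew = val_err, model.get_weights(), 0
        else:                                 # error grew on a consecutive check
            grew += 1
            if grew >= patience:
                break
    model.set_weights(best_w)                 # roll back to the best validation point
    return model
```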

    3.2 Prediction method

Recurrent neural networks (RNNs) were put forward to deal with time-series data. Motivated by their superior performance in many tasks, we propose a nonlinear mapping model that predicts the coarse HF signal using deep long short-term memory recurrent neural networks.

    3.2.1 RNN

Recurrent neural networks allow cyclical connections in a feed-forward neural network [33]. Unlike feed-forward networks, RNNs are able to incorporate contextual information from previous input vectors, which allows them to remember past inputs and persist them in the network's internal state. This property makes them an attractive choice for sequence-to-sequence learning. For a given input vector sequence x = (x_1, x_2, …, x_T), the forward pass of an RNN is as follows:

$$h_t = H(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$
$$y_t = W_{hy} h_t + b_y$$

where t = 1, …, T and T is the length of the sequence; h = (h_1, h_2, …, h_T) is the hidden state vector sequence computed from x; y = (y_1, y_2, …, y_T) is the output vector sequence; W_{xh}, W_{hh} and W_{hy} are the input-hidden, hidden-hidden and hidden-output weight matrices, respectively; b_h and b_y are the hidden and output bias vectors, respectively; and H denotes the nonlinear activation function of the hidden nodes.
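For concreteness, this forward pass can be written directly in NumPy; the shapes and names follow the equations above, and the function itself is our illustration rather than the paper's code.

```python
import numpy as np

def rnn_forward(x, W_xh, W_hh, W_hy, b_h, b_y, H=np.tanh):
    """Forward pass of a simple RNN over an input sequence x of shape (T, d_in)."""
    d_h, d_out = W_hh.shape[0], W_hy.shape[0]
    h = np.zeros((len(x), d_h))
    y = np.zeros((len(x), d_out))
    h_t = np.zeros(d_h)                            # initial hidden state h_0 = 0
    for t in range(len(x)):
        h_t = H(W_xh @ x[t] + W_hh @ h_t + b_h)    # hidden state recursion
        y[t] = W_hy @ h_t + b_y                    # output projection
        h[t] = h_t
    return h, y
```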

For our prediction system, because of the context-dependent correlation phenomenon, we want the model to have access to both past and future context. However, conventional RNNs can only access the past context and ignore the future context, so bidirectional recurrent neural networks (BRNNs) are used to relieve this problem. BRNNs compute both a forward state sequence $\overrightarrow{h}$ and a backward state sequence $\overleftarrow{h}$, as formulated below:

$$\overrightarrow{h}_t = H(W_{x\overrightarrow{h}} x_t + W_{\overrightarrow{h}\overrightarrow{h}} \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}})$$
$$\overleftarrow{h}_t = H(W_{x\overleftarrow{h}} x_t + W_{\overleftarrow{h}\overleftarrow{h}} \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}})$$
$$y_t = W_{\overrightarrow{h}y} \overrightarrow{h}_t + W_{\overleftarrow{h}y} \overleftarrow{h}_t + b_y$$

    Fig. 5. Long short-term memory (LSTM) [34].

    3.2.2 LSTM-RNN

Conventional RNNs can access only a limited range of context because of the vanishing gradient problem. Long short-term memory (LSTM) uses purpose-built memory cells, as shown in figure 5 [34], to store information, and is designed to overcome this limitation [34]. In sequence-to-sequence mapping tasks, LSTM has been shown capable of bridging very long time lags between input and output sequences by enforcing constant error flow. For LSTM, the recurrent hidden layer function H is implemented as follows:

$$a_t = \tau(W_{xa} x_t + W_{ha} h_{t-1} + b_a)$$
$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot a_t$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$$
$$h_t = o_t \odot \theta(c_t)$$

where σ is the sigmoid function; i, f, o, a and c are the input gate, forget gate, output gate, cell input activation and cell memory, respectively; τ and θ are the cell input and output nonlinear activation functions, for which tanh is generally chosen. The multiplicative gates allow LSTM memory cells to store and access information over long periods of time, thereby avoiding the vanishing gradient problem.
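A single step of this recursion, written in NumPy as an illustration: peephole connections are omitted, and the dictionary layout of W and b is our assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b, tau=np.tanh, theta=np.tanh):
    """One LSTM step following the gate equations above; W and b are dicts of
    weight matrices and bias vectors keyed by gate name (assumed layout)."""
    a = tau(W["xa"] @ x_t + W["ha"] @ h_prev + b["a"])      # cell input activation
    i = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + b["i"])  # input gate
    f = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + b["f"])  # forget gate
    c = f * c_prev + i * a                                  # cell memory update
    o = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + b["o"])  # output gate
    h = o * theta(c)                                        # hidden output
    return h, c
```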

    3.2.3 DBLSTM-RNNs-based Prediction Method

In order to accurately predict the coarse HF signal using the context dependent decoded LF signal, we design the DBLSTM-RNNs with dilated LSTM, as shown in figure 6 [35]. The dilated LSTM ensures that the predicted coarse HF signal $H(h_t | l_t, l_{t-1}, l_{t-2}, \ldots, l_1)$ emitted by the model at timestep t can depend on any of the previous decoded LF signals at timesteps t, t-1, t-2, …, 1. A dilated LSTM is an LSTM applied over an area larger than its length by skipping input values with a certain step.

Stacked dilated LSTMs efficiently enable very large receptive fields with just a few layers, while preserving the input resolution throughout the network. In this paper, the dilation is doubled for every layer up to a certain point and then the pattern is repeated, e.g.

    1,2,4,…,512, 1,2,4,…,512, 1,2,4,…,512.

The intuition behind this configuration is twofold. First, exponentially increasing the dilation factor results in exponential receptive field growth with depth; for example, each 1, 2, 4, …, 512 block has a receptive field of size 1024. Second, stacking these blocks further increases both the model capacity and the receptive field size.
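As a quick check on these numbers, the following sketch computes the receptive field of stacked dilated layers, assuming a kernel size of 2 as in WaveNet [35]; the function is ours, for illustration only.

```python
def receptive_field(dilations, kernel_size=2):
    """Receptive field of stacked dilated layers: each layer with dilation d
    widens the field by d * (kernel_size - 1) samples."""
    return 1 + sum(d * (kernel_size - 1) for d in dilations)

block = [2 ** k for k in range(10)]     # dilations 1, 2, 4, ..., 512
print(receptive_field(block))           # one block: 1024, as stated above
print(receptive_field(block * 3))       # three repeated blocks: 3070
```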

Learning DBLSTM-RNNs can be regarded as optimizing a differentiable error function:

$$E(w) = \sum_{n=1}^{M_{train}} \sum_{t} \left\| h_t^{(n)} - y_t^{(n)}(w) \right\|^2$$

where $M_{train}$ represents the number of sequences in the training data and w denotes the network inter-node weights. In our prediction system, the training criterion is to minimize the sum of squared errors (SSE) between the predicted value and the original coarse HF signal. We use the back-propagation through time (BPTT) algorithm to train the network. In the BLSTM hidden layer, BPTT is applied to both forward and backward hidden nodes and back-propagates layer by layer. After training the network, the weight vectors w and bias vectors b are determined, and we can use the network to predict the coarse HF signal from the decoded LF signal, as formulated below:

$$\hat{h}_i = f(l_i, l_{i-1}, \ldots, l_k; w, b)$$

where $k = i - m + 1$, $m$ is the timestep, and $m = 2^d$, where $d$ denotes the depth of the DBLSTM-RNNs.
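To make the indexing concrete, here is a minimal sketch of how the trained network could be applied at decode time; `net.forward` is an assumed sequence-in, sequence-out interface and is not part of any reference codec.

```python
def predict_coarse_hf(net, lf_frames, i, depth=10):
    """Predict the i-th coarse HF frame from decoded LF frames l_k, ..., l_i
    with k = i - m + 1 and m = 2**depth; `net` is an assumed trained
    DBLSTM-RNN exposing a sequence-to-sequence forward() method."""
    m = 2 ** depth                  # timestep m = 2^d (1024 for depth 10)
    k = max(0, i - m + 1)           # first frame of the context window
    window = lf_frames[k:i + 1]     # context dependent decoded LF signal
    return net.forward(window)[-1]  # last output = prediction for frame i
```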


    IV. THE EXPERIMENT AND EVALUATION

In order to verify the validity of the proposed method, we used DBLSTM-RNNs instead of the conventional replication method to generate the coarse HF signal in the reference codecs. To test the general applicability of our method, we selected six representative reference codecs as evaluation objects.

In this section, we first describe the reference codecs used for evaluating the performance of the proposed method. Then we train the DBLSTM-RNNs architecture on the different reference codecs. Finally, we show the experimental results of the subjective listening test, the objective test, and a comparison of computational complexity.

    4.1 Test reference codecs

(2) WB+: 3GPP AMR-WB+ is an extended AMR-WB codec that provides unique performance at very low bitrates, from below 10.4 kbps up to 24 kbps [5]. Its HF signal is encoded by a typical time domain BWE method (LPC-based BWE), and the coarse HF signal is obtained by copying the decoded LF excitation signals. The bitrate is set to 16.4 kbps in our experiments.

(3) AVS: The Audio and Video coding Standard for Mobile (AVS-M, submitted as AVS Part 10) is a low bitrate audio coding standard proposed for the next generation mobile communication system [7,8]. It is also the first mobile audio coding standard in China. Its BWE is similar to that of WB+, and the coarse HF signal is derived from a duplication of the decoded LF excitation signal. As with WB+, the bitrate is set to 16.4 kbps for testing.

(5) EVS: The codec for Enhanced Voice Services, standardized by 3GPP in September 2014, provides a wide range of new functionalities and improvements enabling unprecedented versatility and efficiency in mobile communication [10]. For upper band coding, EVS uses different BWE methods depending on the selected core codec. In LP-based coding mode, TBE and a multi-mode FD-BWE method are employed. In MDCT-based TCX coding mode, an Intelligent Gap Filling (IGF) tool is employed, which is an enhanced noise filling technique that fills gaps (regions of zero values) in spectra.

(6) DAE: This is an improved version of AVS P10 from our previous work [31]. The coarse HF signal is predicted from the LF signal of the current frame by a deep auto-encoder. This method is selected as a reference codec because it is representative of prediction-based approaches.

More details of the test reference codecs are listed in Table 1.

    Table I. The details of test reference codecs.

    4.2 Experiment setup

All networks are trained on an approximately 50-hour dataset consisting of TIMIT speech, Chinese speech, natural sounds and music. We randomly divided the database into two disjoint parts: 80% for training and 20% for validation. Because the input signals differ across the six test reference codecs, the training process is carried out separately for each. The network inputs are the decoded LF signals extracted from each reference codec. For the supervision targets, the original coarse HF signals are extracted on the encoder side of each reference codec. The frequency ranges are listed in Table 1.

Our goal is to predict the coarse spectrum, so the transmitted parameters remain untouched. For AAC+ and USAC (the SBR and eSBR techniques), the QMF coefficients of the decoded LF serve as the input signal; owing to the complex form of the QMF, the real and imaginary coefficients are input separately, and the coarse HF spectrum is likewise predicted separately. For WB+ and AVS, the excitation of the decoded LF serves as the input signal, and the HF excitation signal is predicted. For DAE, the MDCT coefficients of the decoded LF serve as input, and the HF MDCT coefficients are predicted. For EVS, our method is implemented only in TBE, and the proposed model replaces the nonlinear function module of TBE; the excitations of the decoded LF and HF serve as the model input and output, respectively. For all reference codecs, a smoothing process is applied in the time domain to the generated final HF; we use an energy suppression method between frames to reduce noise.

According to the correlation analysis in section 2.2, correlation exists in the previous consecutive frames. In our implementation, we generally use the previous 5 frames of the decoded LF signal to predict the current frame's coarse HF spectrum. However, weak correlation (e.g. in transient and other non-stable frames) may result in strong distortion. To remedy this, before predicting we run transient detection on the decoded LF: if a frame is a transient signal, we do not use it for prediction, and if more than 2 frames are transient, we use only the current frame to predict, as sketched below.
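The frame-selection rule just described can be summarized in a short sketch; `is_transient` stands in for the transient detector, which we leave abstract here, and the function itself is our illustration rather than the paper's code.

```python
def select_context(lf_frames, i, is_transient, n_prev=5, max_transient=2):
    """Choose the decoded LF frames fed to the predictor: the previous
    `n_prev` frames plus the current one, skipping transient frames;
    if more than `max_transient` of them are transient, fall back to
    the current frame alone. `is_transient` is an assumed detector."""
    prev = list(range(max(0, i - n_prev), i))                 # previous frame indices
    kept = [j for j in prev if not is_transient(lf_frames[j])]
    if len(prev) - len(kept) > max_transient:
        kept = []                                             # too unstable: current frame only
    return [lf_frames[j] for j in kept] + [lf_frames[i]]
```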

The training of the network architectures is carried out on a CPU-GPU cluster, a high performance computing system of Wuhan University [37]. We use the asynchronous stochastic gradient descent (ASGD) optimization technique: the parameter updates with the gradients are performed asynchronously from multiple threads on a multicore machine. The number of hidden layers is set based on the observed Spectral Distortion (SD) between the model outputs and the original coarse HF signal (see figure 7). The results show that the SD value drops as the network depth increases, and the change levels off at depth 10. Taking the computational complexity into account, we set the network depth to 10, and the predicted timestep is 2^10 = 1024.

Fig. 7. The Spectral Distortion (SD) values under different network depths.

    4.3 Subjective evaluation

For evaluating the perceptual quality of coding, a subjective listening test was conducted using the comparison category rating (CCR) test [38]. Twelve expert listeners participated. In the CCR test, listeners compare pairs of audio samples and rate the replaced one (using the proposed method to predict the coarse HF signal) against the referenced one in each comparison on a discrete 7-point scale ranging from much worse (-3) to much better (3). The resulting average score is known as the comparison mean opinion score (CMOS). For the CMOS value, an increase of 0.1 is taken to indicate significant improvement. This threshold is not a standard criterion; we use it because it is a customary rule in the China AVS Workgroup, which usually accepts new technical proposals under this criterion [44,45]. The MPEG audio test files, a well-known set of test files for evaluating the perceptual quality of audio codecs, are used as test material (see Table 2). The results of the subjective listening test are shown in figure 8.

    Table II. List of test material in our experiments.

For the selected reference codecs, the CMOS of our method is more than 0.15 except for USAC, which indicates a significant improvement from using DBLSTM-RNNs instead of the conventional replication method. We also find that the CMOS is higher for codecs with lower bitrates. The average CMOS reaches 0.29 on the WB+ codec (only 0.8 kbps for BWE), which demonstrates that the accuracy of the coarse HF signal is important for the perceptual quality of coding. For USAC, the potential for improvement is limited, with an average CMOS below 0.1; USAC uses a strategy of increasing the bitrate for BWE to remedy the flaws of spectral band replication. DAE is selected as a reference codec to verify the contribution of the context dependent LF signal compared with the current frame LF alone. From the CMOS in figure 8, the score reaches 0.18, showing a significant improvement, which illustrates that correlation indeed exists in the preceding frames besides the present time frame.

    Fig. 8. Comparison mean opinion scores (CMOS) of quality comparisons between different reference codecs in the CCR test. The scores for various audio types are shown separately. Error bars indicate the standard error of the mean.

    Fig. 9. Objective difference grade (ODG) of quality comparisons between different reference codecs and our method in the PEAQ test. The scores for various audio types are shown separately.

For the various audio types (speech, music, instrument), the CMOS of our method shows an obvious discrimination. For speech test samples the CMOS is the lowest, while the highest CMOS appears on instrument test samples. This phenomenon can be explained by the frequency components of the signals: the richness of the harmonics in the HF bands differs between speech and instrument signals, with instruments being richer than speech. Richer harmonics bring about a stronger correlation between LF and HF. Therefore, the performance of DBLSTM-RNNs is better for rich harmonic signals.

    4.4 Objective evaluation

In order to further evaluate the performance of the proposed method, we also implement an objective test using the perceptual evaluation of audio quality (PEAQ) method [39]. PEAQ, standardized as ITU-R Recommendation BS.1387, is an algorithm for objectively measuring perceived audio quality. The PQevalAudio software tool [40] is used to evaluate the objective difference grade (ODG) between the reference sample and the test sample; ODG values range from 0 to -4, where 0 corresponds to an imperceptible impairment and -4 to an impairment judged as very annoying. To match the subjective test, we used the same test material (see Table 2).

The objective test results are shown in figure 9. As expected, the ODG results are approximately consistent with the CMOS of the subjective listening test. The average ODG increased by 15.39%, 22.76%, 17.05%, 7.45%, 11.84% and 13.55% on AAC+, WB+, AVS, USAC, EVS and DAE, respectively, and the total average ODG increased by 14.67%. The objective test results further verify the better performance of the proposed method compared with the reference codecs.

    4.5 Computational complexity

In order to assess the computational complexity, a simple codec runtime test is executed. A 368-second wave file is selected as the test item, and the run environment is the same for the different codecs. Considering where our method is implemented in the codecs, the test is carried out on the decoder side. We used the GetTickCount function of Visual C++ (windows.h) to capture the runtimes, covering both the whole decoder of each reference codec with our method and the DBLSTM-RNNs module alone. To reduce the runtimes, the network parameters are stored in memory instead of in a file. All test programs run on an Intel(R) Core(TM) i3-2370M CPU @ 2.40 GHz with 4 GB memory under Windows 7. The runtimes of each codec are listed in Table 3.

The runtimes of our method are inevitably larger because of the complex architecture of the network. The average decoding runtime increased by 42.64% when using our method to predict the coarse HF signal, and the runtime of the RNNs module accounts for 40.25% of the total decoding procedure. Despite its expensive computational complexity, our method is still acceptable in some non-real-time application scenarios.

    V. CONCLUSION

A method for predicting the coarse HF signal in non-blind bandwidth extension was described in this paper. The method was found to outperform the reference methods for bandwidth extension in both subjective and objective comparisons. According to the test results, the performance was excellent for low bitrate BWE, and the prediction capacity was outstanding for rich harmonic signals such as instruments. In addition to improving the perceptual quality of coding, we also found that the context dependent LF signal is vital for generating a more accurate HF signal.

Though the proposed method has superior performance, its expensive computational complexity will limit its application, e.g. in real-time scenarios. Consequently, reducing the computational complexity is still required in future work. Moreover, while the perceptual quality of coding on the USAC codec was satisfactory, the bitrate is still high (3.5 kbps) for BWE; reducing the redundant HF parameters is also future work.

Table III. Runtime comparison (unit: seconds).

    ACKNOWLEDGEMENT

We gratefully acknowledge the anonymous reviewers who read drafts and made many helpful suggestions. This work is supported by the National Natural Science Foundation of China under Grant Nos. 61762005, 61231015, 61671335, 61702472, 61701194, 61761044 and 61471271; the National High Technology Research and Development Program of China (863 Program) under Grant No. 2015AA016306; the Hubei Province Technological Innovation Major Project under Grant No. 2016AAA015; the Science Project of the Education Department of Jiangxi Province under No. GJJ150585; and the Opening Project of the Collaborative Innovation Center for Economics Crime Investigation and Prevention Technology, Jiangxi Province, under Grant No. JXJZXTCX-025.

REFERENCES

    [1] Larsen, Erik R, and R. M. Aarts.Audio Bandwidth Extension: Application of Psychoacoustics, Signal Processing and Loudspeaker Design.John Wiley& Sons, 2004.

    [2] T. D. Rossing, F. R. Moore, and P. A. Wheeler.The science of sound. Addison Wesley, 3rd edition,2001.

    [3] Arttu Laaksonen. “Bandwidth extension in high-quality audio coding”.Helsinki University of Technology, 2005.

[4] M Dietz, L Liljeryd, K Kjörling, O Kunz. "Spectral Band Replication, a novel approach in audio coding". Proc. 112th AES, 2002, pp. 1-8.

[5] J. Makinen, B. Bessette, S. Bruhn, P. Ojala. "AMR-WB+: a new audio coding standard for 3rd generation mobile audio services". Proc. ICASSP, 2005, pp. 1109-1112.

[6] Geiser B, Jax P, Vary P, et al. "Bandwidth Extension for Hierarchical Speech and Audio Coding in ITU-T Rec. G.729.1". IEEE Transactions on Audio Speech & Language Processing, vol. 15, no. 8, 2007, pp. 2496-2509.

    [7] Zhang T, Liu C T, Quan H J. “AVS-M Audio: Algorithm and Implementation”.EURASIP Journal on Advances in Signal Processing, vol.1, no.1, 2011,pp. 1-16.

    [8] GB/T 20090.10-2013.Information technology advanced audio and video coding Part 10: mobile speech and audio. 2014 (in Chinese).

    [9] Quackenbush S. “MPEG Unified Speech and Audio Coding”.IEEE Multimedia, vol. 20, no. 2,2013, pp. 72-78.

[10] Bruhn, S., et al. "Standardization of the new 3GPP EVS codec". Proc. ICASSP, 2015, pp. 19-24.

[11] A.H. Nour-Eldin and P. Kabal. "Mel-frequency cepstral coefficient-based bandwidth extension of narrowband speech". Proc. INTERSPEECH, 2008, pp. 53-56.

    [12] Seltzer, Michael L., Alex Acero, and Jasha Droppo. “Robust bandwidth extension of noise-corrupted narrowband speech”.Proc. INTERSPEECH,2005, pp. 1509-1512.

    [13] Chennoukh, S., et al. “Speech enhancement via frequency bandwidth extension using line spectral frequencies”.Proc. ICASSP, 2001, pp. 665-668.

    [14] Hang B, Hu R M, Li X, et al. “A Low Bit Rate Audio Bandwidth Extension Method for Mobile Communication”.Proc. PCM, 2008, pp. 778-781.

    [15] Wang Y, Zhao S, Mohammed K, et al. “Superwideband extension for AMR-WB using conditional codebooks”.Proc. ICASSP,2014, pp.3695-3698.

    [16] V Atti,V Krishnan,D Dewasurendra,V Chebiyyam, et al. “Super-wideband bandwidth extension for speech in the 3GPP EVS codec”.Proc. ICASSP, 2015, pp. 5927-5931.

    [17] Nagel F, Disch S. “A harmonic bandwidth extension method for audio codecs”.Proc. ICASSP,2009, pp. 145-148.

    [18] Nagel F, Disch S, Wilde S. “A continuous modulated single sideband bandwidth extension”.Proc. ICASSP, 2010, pp. 357 -360.

    [19] Zhong H, Villemoes L, Ekstrand P, et al. “QMF Based Harmonic Spectral Band Replication”.Proc. 131st AES, 2011, pp. 1-10.

    [20] Neukam C, Nagel F, Schuller G, et al. “A MDCT based harmonic spectral bandwidth extension method”.Proc. ICASSP, 2013, pp. 566-570.

    [21] Liu C M, Hsu H W, Lee W C. “Compression Artifacts in Perceptual Audio Coding”.IEEE Transactions on Audio Speech & Language Processing,vol. 16, no. 4, 2008, pp. 681-695.

    [22] Krishnan V, Rajendran V, Kandhadai A, et al.“EVRC-Wideband: The New 3GPP2 Wideband Vocoder Standard”.Proc. ICASSP,2007, pp. 333-336.

    [23] Sverrisson S, Bruhn S, Grancharov V. “Excitation signal bandwidth extension”, USA, US8856011,2014.

[24] Zölzer U. "Digital Audio Signal Processing (Second Edition)". Wiley, 2008.

[25] Nour-Eldin A H, Shabestary T Z, Kabal P. "The Effect of Memory Inclusion on Mutual Information Between Speech Frequency Bands". Proc. ICASSP, 2006, pp. 53-56.

    [26] Mattias Nilsson and Bastiaan Kleijn, “Mutual Information and the Speech Signal”.Proc. INTERSPEECH, 2007, pp. 502-505.

    [27] T. M. Cover and J. A. Thomas, “Elements of Information Theory”. Wiley, 1991.

    [28] Liu H J, Bao C C, Liu X. “Spectral envelope estimation used for audio bandwidth extension based on RBF neural network”.Proc. ICASSP,2013, pp. 543-547.

    [29] Liu X, Bao C. “Audio bandwidth extension based on ensemble echo state networks with temporal evolution”.IEEE/ACM Transactions on Audio Speech & Language Processing, vol. 24, no. 3,2016, pp. 594-607.

    [30] WANG Yingxue, ZHAO Shenghui, YU Yingying,KUANG Jingming. “Speech Bandwidth Extension Based on Restricted Boltzmann Machines”.Journal of Electronics & Information Technology,vol. 38, no. 7, 2016, pp. 1717-1723.

    [31] Jiang L, Hu R, Wang X, et al. “Low Bitrates Audio Bandwidth Extension Using a Deep Auto-Encoder”.Proc. PCM, 2015, pp. 528-537.

[32] H Sak, A Senior, and F Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modelling". Proc. INTERSPEECH, 2014, pp. 338-342.

    [33] Williams RJ, Zipser D. “A learning algorithm for continually running fully recurrent neural networks”.Neural Computation, vol. 1, no. 2, 1989,pp. 270–280.

[34] Hochreiter S, Schmidhuber J. "Long short-term memory". Neural Computation, vol. 9, no. 8, 1997, pp. 1735-1780.

[35] Oord A V D, Dieleman S, Zen H, et al. "WaveNet: A Generative Model for Raw Audio". 2016. URL https://arxiv.org/abs/1609.03499.

[36] Herre J, Dietz M. "MPEG-4 high-efficiency AAC coding". IEEE Signal Processing Magazine, vol. 25, no. 3, pp. 137-142.

    [37] “High performance computing system of Wuhan University”. http://csgrid.whu.edu.cn/ (in Chinese).

    [38] “ITU-T: Methods for Subjective Determination of Transmission Quality. Rec. P.800”. International Telecommunication, 1996.

    [39] Thiede T., Treurniet W. C., Bitto R. et al. “PEAQ---The ITU Standard for Objective Measurement of Perceived Audio Quality”.Journal of the Audio Engineering Society, vol. 48, no. 1, 2000, pp.3-29.

    [40] McGill University, “Perceptual Evaluation of Audio Quality”. http://www.mmsp.ece.mcgill.ca/Documents/Software

    [41] Miao L, Liu Z, Zhang X, et al. “A novel frequency domain BWE with relaxed synchronization and associated BWE switching”,Proc. GlobalSIP,2015, pp.642-646.

    [42] Helmrich C R, Niedermeier A, Disch S, et al.“Spectral envelope reconstruction via IGF for audio transform coding”.Proc. ICASSP, 2015, pp.389-393.

    [43] Vapnik, Vladimir N. The Nature of Statistical Learning Theory. Springer, 1995.

[44] LI Hong-rui, BAO Chang-chun, LIU Xin, et al. "Blind Bandwidth Extension of Audio Based on Fractal Theory". Journal of Signal Processing, vol. 29, no. 9, 2013, pp. 1127-1133. (in Chinese)

    [45] Chinese AVS Workgroup, M1628: The specification of subjective listening test for AVS audio technology proposal. AVS Audio group special sessions. August 15, 2005, Wuhan China. (in Chinese)
