
Source Recovery in Underdetermined Blind Source Separation Based on Artificial Neural Network

China Communications, 2018, Issue 1

Weihong Fu*, Bin Nong, Xinbiao Zhou, Jun Liu, Changle Li
School of Telecommunication Engineering, Xidian University, Xi'an, Shaanxi, 7007, China; Collaborative Innovation Center of Information Sensing and Understanding, Xi'an, Shaanxi, 7007, China; National Laboratory of Radar Signal Processing, Xidian University, Xi'an, Shaanxi, 7007, China

    I. INTRODUCTION

Underdetermined blind source separation (UBSS) is a case of blind source separation (BSS) in which the number of observed signals is less than the number of source signals [1]. In recent years, underdetermined blind separation has been widely applied in speech signal processing, image processing, radar signal processing, communication systems [2-5], data mining, and biomedical science. Current research on UBSS has mainly focused on sparse component analysis (SCA) [6], which leads to the "two-step" approach [7]. The first step is to estimate the mixing matrix, and the second step is to recover the source signals. Note that a source signal may not be sparse in the time domain; in this case we assume that a linear, sparsifying transformation (e.g., Fourier transform, wavelet transform, etc.) can be found, so that SCA can be applied in the transformed domain. In the two-step approach, source signal recovery has attracted relatively little attention, while many researchers have investigated methods for identifying the mixing matrix, such as clustering algorithms [8, 9] or potential-function-based algorithms [10, 11]. In this paper, we assume that the mixing matrix has already been estimated by the aforementioned algorithms and concentrate on the recovery of the source signals.

UBSS shares the same model with compressed sensing (CS) under the condition that the source signals are sparse and the mixing matrix is known [12]. In fact, before the concept of compressed sensing was put forward, a common method for solving sparse BSS was to minimize the ℓ1-norm. Y. Li et al. [13] analyzed the equivalence of the ℓ0-norm solution and the ℓ1-norm solution within a probabilistic framework, and presented conditions for the recoverability of source signals in [14]. However, Georgiev et al. [15] showed that source recovery in UBSS using ℓ1-norm minimization does not perform well, even when the mixing matrix is perfectly known. After the compressed sensing technique [16] emerged, many sparse recovery algorithms were proposed, and several CS-based methods have been applied to UBSS [17, 18].

In 2009, Mohimani et al. [19] proposed a sparse recovery method based on the smoothed ℓ0-norm (SL0), which can be applied to UBSS and is two to three times faster than ℓ1-norm-based sparse reconstruction at the same or higher precision. Since then, many scholars have studied sparse signal reconstruction algorithms based on SL0 [20-23]. SL0 and its improved variants have the advantages of low computational cost and good robustness, but their performance depends strongly on how well the ℓ0-norm is approximated. To approximate the ℓ0-norm better, Vidya L. et al. [24] proposed a sparse signal reconstruction algorithm based on the minimum variance of a radial basis function network (RASR). The algorithm first establishes a two-stage cascade network: the first stage performs the optimization of the radial basis function, and the second stage computes the minimum variance and feeds the result back to the first stage to accelerate convergence. For the RASR algorithm, the computational complexity is not obviously reduced since two optimization models must be built; moreover, an improper step size may slow the convergence because gradient descent is used. Chun-hui Zhao et al. [25] introduced the artificial neural network (ANN) into the compressed sensing reconstruction model and obtained the compressed sensing reconstruction algorithm based on artificial neural network (CSANN), which enhances the fault tolerance of the reconstruction. However, its result easily falls into a local extremum, since the penalty function of the CSANN algorithm does not approximate the ℓ0-norm well; in addition, the CSANN algorithm usually needs a large number of iterations to terminate. In UBSS, the compressed sensing reconstruction algorithms mentioned above cannot simultaneously meet the requirements on the recovery precision of the source signals and on the computational complexity. To solve this problem, we propose an algorithm for source recovery based on an artificial neural network. The algorithm improves recovery precision by taking the Gaussian function as a penalty function to approximate the ℓ0-norm, and a smoothed parameter is used to control the convergence speed of the network. Additionally, we derive the optimal learning factor to improve recovery accuracy, and a gradually decreasing sequence of the smoothed parameter is utilized to accelerate the convergence of the ANN. Numerical experiments show that the proposed algorithm can recover the source signals with high precision and low computational complexity.

    The paper is organized as follows. In Section 2, the model of UBSS based on the ANN is introduced. In Section 3, the proposed algorithm for source recovery in UBSS is presented. The performance of the proposed algorithm is numerically evaluated by simulation results in Section 4. Finally, conclusions are made in Section 5.

II. THE MODEL OF UNDERDETERMINED BLIND SOURCE SEPARATION BASED ON ANN

    2.1 Problem description

In a blind source separation system, the received signal can be represented as

$x_j(t) = \sum_{i=1}^{N} a_{ji}\, s_i(t), \quad j = 1, 2, \ldots, M, \tag{1}$

where $s_i(t)$ is the $i$-th source signal, $x_j(t)$ is the $j$-th observed signal, $a_{ji}$ is the $(j,i)$-th element of the $M \times N$ mixing matrix $A$, and $M < N$ in the underdetermined case.

For brevity, equation (1) is rewritten in matrix form as

$x = A s, \tag{2}$

where $x = [x_1, \ldots, x_M]^T$ and $s = [s_1, \ldots, s_N]^T$.

The UBSS problem above can be viewed as a CS problem by regarding the source signals s, the mixing matrix A, and the observed signal x in UBSS as, respectively, the sparse signals, sensing matrix, and measurement signals in CS. The sparse reconstruction algorithms of CS can therefore be readily applied to signal recovery in the UBSS problem.
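As a concrete illustration of this correspondence, the following minimal NumPy sketch builds an underdetermined mixture x = As from a sparse source vector, i.e., exactly the data a CS-style recovery algorithm would take as input. The dimensions M = 3, N = 5 and the sparsity level are arbitrary example values, not taken from the paper.

```python
import numpy as np

# Example dimensions: M observed mixtures, N sources, M < N (underdetermined case).
M, N = 3, 5
rng = np.random.default_rng(0)

# Mixing matrix A, assumed already estimated in the first step of the "two-step" approach.
A = rng.standard_normal((M, N))

# Sparse source vector s: each entry is inactive (zero) with probability p.
p = 0.8
s = rng.standard_normal(N) * (rng.random(N) > p)

# Observed mixture x = A s, playing the role of the CS measurement vector.
x = A @ s

print("sources :", s)
print("mixture :", x)
```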

2.2 The model of artificial neural network for UBSS

Its unique knowledge structure and information-processing principle make the artificial neural network one of the main technologies of intelligent information processing, and it has attracted increasing interest from scientific and technological workers [26]. ANNs have many advantages in signal processing, including self-adaptation and fault tolerance.

    Fig. 1. The model of single-layer perceptron.

Since a single-layer perceptron is sufficient to describe the UBSS model, whereas for a multi-layer perceptron it is not easy to find the optimal learning factor, we introduce the single-layer perceptron artificial neural network model into UBSS as follows. As shown in figure 1, N inputs correspond to one output. The source signal vector s in Eq. (2) is the weight vector of the perceptron, the j-th row vector A_j of the mixing matrix A is the input of the perceptron model, where a_{ji} is the i-th element of A_j, and the j-th element of x is the threshold value. The output error decision rule of the perceptron is

The learning procedure, or the convergence process of the neural network, is to minimize E by adjusting the weight vector of the perceptron. In order to make the weight vector of the perceptron converge to the actual source signal vector, the constraint (the sparsity of the source signal) should be incorporated into the output error decision. Generally, both the ℓ0-norm and the ℓ1-norm can measure the sparsity of a source signal. To some extent, the sparse solution acquired by minimizing the ℓ1-norm is equivalent to the solution obtained by minimizing the ℓ0-norm in sparse recovery if the mixing matrix A obeys a uniform uncertainty principle [27]. However, [15] suggests that the conditions (theorem 7 of [28]) which guarantee the equivalence of ℓ0-norm and ℓ1-norm minimization are generally not satisfied for UBSS. Hence, the ℓ0-norm is used as a penalty function to further adjust the weight coefficients, and Eq. (3) can be rewritten as

where γ > 0 is used to trade off the penalty function against the estimation error. For ease of analysis, we assume γ = 1. Since minimizing the ℓ0-norm of the source vector s is an NP-hard problem, [25] uses Eq. (5) to approximate the ℓ0-norm.

where β > 0; the greater the value of β, the better the approximation of the ℓ0-norm. CSANN algorithms generally use the empirical value β = 10.

In Eq. (5), however, the absolute-value operation makes the function poorly smooth. In order to approximate the ℓ0-norm more closely, the Gaussian function is introduced as the penalty term, and Eq. (5) can be rewritten as

where σ > 0; the smaller σ is, the better the approximation of the ℓ0-norm. Figure 2 illustrates the results of calculating the ℓ0-norm using Eq. (5) and Eq. (6), respectively. Both Eq. (5) and Eq. (6) reflect the characteristics of the ℓ0-norm, but the approximation obtained with the Gaussian function is better. In order to further compare the degree of approximation of the ℓ0-norm by the two penalty functions, we generate a 12×20 signal matrix (12 and 20 are the number of sources and the number of samples, respectively) with sparsity 0.75 (defined in Section 4) and approximate the ℓ0-norm with both functions. As shown in figure 3, the horizontal coordinate is the discrete sampling time (t = 1, 2, …, 20) and the vertical coordinate is the value of the ℓ0-norm calculated by the different functions; the figure demonstrates that the value obtained by Eq. (6) is closer to the theoretical value. In addition, the average error of Eq. (6) is only 0.0501, while that of Eq. (5) is 1.8107. Thus, it is better to use the Gaussian function to approximate the ℓ0-norm.
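Since the explicit forms of Eq. (5) and Eq. (6) are not reproduced in this text, the following sketch assumes two standard smoothed ℓ0 surrogates that behave as described above (an absolute-value-based term controlled by β, and a Gaussian term controlled by σ) and repeats the comparison on a randomly generated 12×20 sparse matrix; the numbers it prints will differ from the 0.0501 and 1.8107 reported above.

```python
import numpy as np

def l0_exact(s):
    """Exact l0-norm: the number of non-zero entries."""
    return np.count_nonzero(s)

def l0_abs_surrogate(s, beta=10.0):
    """Absolute-value-based smoothed surrogate (assumed form of Eq. (5)), beta = 10 as in CSANN."""
    return np.sum(1.0 - np.exp(-beta * np.abs(s)))

def l0_gauss_surrogate(s, sigma=0.1):
    """Gaussian smoothed surrogate (assumed form of Eq. (6)); smaller sigma gives a tighter fit."""
    return np.sum(1.0 - np.exp(-s**2 / (2.0 * sigma**2)))

rng = np.random.default_rng(1)
N, T, p = 12, 20, 0.75                                        # sources, samples, sparsity
S = rng.standard_normal((N, T)) * (rng.random((N, T)) > p)

err_abs = np.mean([abs(l0_abs_surrogate(S[:, t]) - l0_exact(S[:, t])) for t in range(T)])
err_gauss = np.mean([abs(l0_gauss_surrogate(S[:, t]) - l0_exact(S[:, t])) for t in range(T)])
print(f"average error, |.|-based surrogate: {err_abs:.4f}")
print(f"average error, Gaussian surrogate : {err_gauss:.4f}")
```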

If the value of σ is small enough, by substituting Eq. (6) into Eq. (4) we obtain

Fig. 2. Comparison of approximation of the ℓ0-norm by different functions.

Fig. 3. The comparison between two functions for approximating the ℓ0-norm and their theoretical values.

The procedure of source recovery in UBSS based on the ANN adjusts the weight vector of the perceptron according to the output error decision E. Moreover, gradient descent is used to increase the learning speed of the neural network and improve recovery accuracy, and we then calculate the optimal step size, which we call the learning factor here. When the convergence condition is satisfied, the obtained weight vector of the perceptron is the estimated source signal vector.

    III. ALGORITHM FOR SOURCE RECOVERY BASED ON ANN

    Eq. (8) is used as the convergence criterion for the CSANN algorithm, that is

where ε > 0. If ε is small enough, the algorithm approaches the ideal state of convergence, but more iterations are needed. According to [24], in practice the CSANN algorithm often runs up to the maximum number of iterations before terminating, and the resulting complexity is intolerable. In order to improve the recovery precision, the maximum number of iterations of the CSANN algorithm must be set to a large value, which consumes much time. The trade-off between fewer iterations and higher accuracy of source signal recovery is therefore the main difficulty. To resolve it, a gradually decreasing sequence of the smoothed parameter σ in Eq. (7) is used to ensure both the convergence of the proposed algorithm and the accuracy of source recovery. Explanations for the descent sequence can be found in [19]. We can prove that

where s0 is the sparsest solution of the UBSS problem (i.e., Eq. (1)) and s̃ is the optimal solution of Eq. (7). The proof is given in Appendix A.

However, the function E in Eq. (7) is highly non-smooth for small values of σ and contains many local extrema, which makes the optimization difficult. On the contrary, E is smoother and contains fewer local extrema when σ is large, which makes the optimization easier; but the second term of Eq. (7) cannot approximate the ℓ0-norm well for large σ.

    Therefore the convergence condition is

where σ_min should be as small as possible, but not too small.

    For Eq.(7), the gradient descent method is used to adjust the weight coefficients of the perceptron. Calculating the gradient vector of Eq. (7), we obtain that

Therefore, the update formula for the weight coefficients of the perceptron is

Substituting Eq. (11a) into Eq. (12) yields Eq. (13).

Comparing Eq. (13) with Eq. (11a), Eq. (14) can be obtained:

    Then, Eq. (14) is rewritten as

From Eq. (15), the optimal learning factor can then be obtained as

According to the above analysis, the source recovery process for UBSS based on the artificial neural network contains only one optimization problem, which improves the accuracy of source recovery and dramatically reduces the computational cost.

In summary, the steps of the UBSSANN algorithm are as follows:

Step 1: Initialize the source signal vector, the parameters of the Gaussian function, the scale factor δ (0 < δ < 1, used to implement the descent sequence of σ), the threshold value σ_min ≤ 10^-2, and the iteration counter k = 0;

Step 2: For the current value of σ, update the weight vector (the source signal estimate) using Eq. (11a) and Eq. (16);

Step 3: Update the smoothed parameter of the Gaussian function: σ_{k+1} = δ·σ_k;

Step 4: Update the iteration counter: k ← k + 1;

Step 5: If σ_k > σ_min, go to Step 2; otherwise, output the current weight vector as the estimated source signal vector.
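A compact sketch of the above steps is given below. Because Eqs. (7), (11a) and (16) are not reproduced in this text, the objective, its gradient, and the step-size rule used here are assumptions: a squared-error term plus the Gaussian penalty is minimized by gradient descent, with a simple backtracking step standing in for the closed-form optimal learning factor of Eq. (16).

```python
import numpy as np

def energy(A, x, s, sigma):
    """Assumed objective: squared error plus Gaussian smoothed-l0 penalty."""
    return np.sum((A @ s - x) ** 2) + np.sum(1.0 - np.exp(-s**2 / (2.0 * sigma**2)))

def ubssann_recover(A, x, sigma0=1.0, delta=0.6, sigma_min=1e-3, inner_iters=3):
    """Sketch of the stepwise procedure with a decreasing sigma sequence."""
    s = np.linalg.pinv(A) @ x                    # Step 1: initial weight vector
    sigma = sigma0
    while sigma > sigma_min:                     # Step 5: stop once sigma <= sigma_min
        for _ in range(inner_iters):             # Step 2: adjust the perceptron weights
            grad = 2.0 * A.T @ (A @ s - x) + (s / sigma**2) * np.exp(-s**2 / (2.0 * sigma**2))
            lr, e0 = 1.0, energy(A, x, s, sigma)
            while lr > 1e-6 and energy(A, x, s - lr * grad, sigma) > e0:
                lr *= 0.5                        # backtracking stand-in for Eq. (16)
            s = s - lr * grad
        sigma *= delta                           # Steps 3-4: sigma_{k+1} = delta * sigma_k
    return s

# Usage on a small synthetic underdetermined mixture.
rng = np.random.default_rng(0)
M, N = 3, 5
A = rng.standard_normal((M, N))
s_true = np.zeros(N)
s_true[[1, 3]] = rng.standard_normal(2)
x = A @ s_true
s_hat = ubssann_recover(A, x)
print("true     :", np.round(s_true, 3))
print("recovered:", np.round(s_hat, 3))
```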

The computational complexity of the SL0, CSANN, and RASR algorithms has been analyzed in [20]. Analysis and experimental results show that the computation time of the RASR algorithm is only half that of SL0, that the number of iterations of RASR is significantly smaller than that of CSANN, and that the convergence time of CSANN increases exponentially with the number of non-zero elements in the source signal. For ease of comparison, the comparison mode of [20] is used: the complexity index is the number of multiplications involved in the gradient descent used to update the source signal.

As shown in Table 1, the computational complexity of the UBSSANN algorithm is lower than that of the SL0, CSANN, and RASR algorithms for both low and high degrees of sparsity. The structures and characteristics of the SL0, CSANN, RASR, and UBSSANN algorithms are presented in Table 2. RASR contains two optimization models while UBSSANN contains only one, which implies that UBSSANN is easier to optimize.

    IV. SIMULATION RESULTS AND NUMERICAL ANALYSIS

In this section, the simulation results of the proposed UBSSANN algorithm are compared with those of the other algorithms (SL0, CSANN, and RASR). In order to evaluate the recovery accuracy of the different algorithms, the correlation coefficient [21] is defined as:

where S and Ŝ denote the source signal and the estimated signal respectively, s_n(t) is the element in the n-th row and t-th column of S, and ŝ_n(t) is the element in the n-th row and t-th column of Ŝ. The larger the correlation coefficient, the more accurate the algorithm; the value of ρ(Ŝ, S) ranges from 0 to 1. The sparsity p is defined as follows: each source is inactive with probability p and active with probability 1 − p, so p controls the degree of sparsity of the source signal, and the source signal becomes sparser as p increases.
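The exact expression of ρ(Ŝ, S) is not reproduced in this text; the sketch below assumes a common form consistent with the description (the per-source normalized correlation, averaged over the N sources, which lies in [0, 1]), together with the stated Bernoulli-Gaussian sparsity model.

```python
import numpy as np

def correlation_coefficient(S, S_hat):
    """Average per-source normalized correlation (assumed form of the paper's metric)."""
    num = np.abs(np.sum(S * S_hat, axis=1))
    den = np.linalg.norm(S, axis=1) * np.linalg.norm(S_hat, axis=1) + 1e-12
    return float(np.mean(num / den))

def sparse_sources(N, T, p, rng):
    """Each sample of each source is inactive with probability p (Bernoulli-Gaussian model)."""
    return rng.standard_normal((N, T)) * (rng.random((N, T)) > p)

rng = np.random.default_rng(0)
S = sparse_sources(N=5, T=200, p=0.8, rng=rng)
S_noisy = S + 0.05 * rng.standard_normal(S.shape)   # stand-in for a recovered estimate
print(f"rho = {correlation_coefficient(S, S_noisy):.4f}")
```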

    Table I. Computational complexity of the four algorithms.

    Table II. Comparison of algorithm structure feature.

Firstly, in order to analyze the effect of the parameters on algorithm performance, the parameters are tested for different SNRs (signal-to-noise ratios) and sparsity levels of the source signals. Secondly, according to the results of the first experiment, we choose appropriate parameter values and compare the proposed algorithm with the conventional algorithms on random signals. Finally, radar signals are used in the third experiment to demonstrate the applicability of the proposed UBSSANN algorithm in a realistic scenario.

4.1 Simulation of the effect of parameters on performance

In order to verify the performance of the proposed algorithm, the effect of the parameters on algorithm performance is studied in this experiment. Two essential parameters, the scale factor δ and the convergence threshold σ_min, are discussed. With different randomly generated sources and a mixing matrix of dimension 8×15, the simulations are repeated 100 times.

In figure 5, the number of iterations as a function of the scale factor is depicted for different SNRs and sparsity levels of the source signal. From figure 5(a) and figure 5(b), we can roughly conclude that the number of iterations increases rapidly when the scale factor is greater than 0.6. Hence, in the next experiment, the scale factor δ is set to 0.6 so as to accelerate the convergence.
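The rapid growth of the iteration count can be made concrete: assuming the geometric update σ_{k+1} = δ·σ_k of Step 3, with example values σ_0 = 1 and σ_min = 10^-3 (not taken from the paper), the number of σ updates is ⌈log(σ_min/σ_0)/log δ⌉, which the short computation below shows rising sharply once δ exceeds roughly 0.6.

```python
import math

def sigma_updates(sigma0, sigma_min, delta):
    """Number of geometric updates sigma <- delta*sigma needed to reach sigma_min."""
    return math.ceil(math.log(sigma_min / sigma0) / math.log(delta))

for delta in (0.4, 0.6, 0.8, 0.9, 0.95):
    print(f"delta = {delta:4.2f} -> {sigma_updates(1.0, 1e-3, delta):3d} updates")
```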

    4.2 Simulation results for random source signals

The source signals, following a Gaussian distribution, are sparse signals with sparsity p ∈ [0.5, 0.9] and are received by M antennas. The source signal becomes sparser with increasing p. The mixing matrix is of dimension M×N and is randomly generated from a normal distribution. The simulations are repeated 1000 times. For the SL0, RASR, and UBSSANN algorithms, σ_min is set to 0.001. The convergence criterion of the CSANN algorithm is given by Eq. (8), and its maximum number of iterations is fixed to 500.
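A sketch of this experimental setup is given below; the particular values M = 3, N = 5, the number of samples, and the additive white Gaussian noise model used to reach the target SNR are illustrative assumptions.

```python
import numpy as np

def mix_at_snr(A, S, snr_db, rng):
    """Mix the sources and add white Gaussian noise at a target SNR (assumed noise model)."""
    X_clean = A @ S
    noise_power = np.mean(X_clean**2) / (10.0 ** (snr_db / 10.0))
    return X_clean + np.sqrt(noise_power) * rng.standard_normal(X_clean.shape)

rng = np.random.default_rng(0)
M, N, T, p = 3, 5, 500, 0.8                                    # one example system scale
A = rng.standard_normal((M, N))                                # normally distributed mixing matrix
S = rng.standard_normal((N, T)) * (rng.random((N, T)) > p)     # Gaussian sources, sparsity p
X = mix_at_snr(A, S, snr_db=20, rng=rng)                       # M x T observed mixtures
print(X.shape)
```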

    Fig. 4. Effect of parameters on performance.

Figure 6 shows the correlation coefficient obtained by the different algorithms as a function of SNR for different system scales (i.e., the dimensions M and N of the mixing matrix, defined in Section 2) in the case of p = 0.8. As shown in figure 6, the correlation coefficient of the UBSSANN algorithm is clearly larger than that of all the other algorithms. For instance, the correlation coefficient obtained by UBSSANN is about 6%, 10%, and 15% higher than that of RASR, SL0, and CSANN, respectively, at an SNR of 20 dB with M = 3, N = 5. This improvement of the correlation coefficient is essential for many real applications, such as radar signal processing. Over the SNR range from 10 dB to 20 dB, the average correlation coefficient of the UBSSANN algorithm is 0.9034, which shows its robustness against noise.

    Fig. 5. The number of iterations varies with the parameter scale factor.

Figure 7 shows the correlation coefficients obtained by the SL0, CSANN, RASR, and UBSSANN algorithms as a function of the sparsity p for mixing matrices of several dimensions, with the SNR fixed at 30 dB. When the sparsity p is greater than 0.6, the correlation coefficient of the UBSSANN algorithm is larger than those of the SL0, RASR, and CSANN algorithms. For example, in the case of p = 0.8, M = 6, N = 10, the correlation coefficient obtained by the UBSSANN algorithm is about 4%, 10%, and 22% larger than those of the other three algorithms (RASR, SL0, and CSANN), respectively. As presented in Table 1 and Table 2, moreover, the complexity of UBSSANN is significantly lower than that of the conventional algorithms. When the sparsity is less than 0.6, the correlation coefficient of the UBSSANN algorithm is slightly smaller than that of RASR but larger than those of SL0 and CSANN.

Figure 8 compares the correlation coefficients obtained by UBSSANN and RASR as a function of the number of iterations, in the case of SNR = 20 dB and p = 0.9 at a fixed time. The figure illustrates that the UBSSANN algorithm has reached convergence by the 5th iteration, while RASR approaches convergence only after the 20th iteration, so the convergence rate of UBSSANN is relatively fast.

In the following, we report the time required by the different algorithms. As shown in figure 9, in the case of SNR = 20 dB, the running time of the UBSSANN algorithm is less than that of the other three algorithms. Over the sparsity range from 0.5 to 0.9, the average running time of the UBSSANN algorithm is reduced by about 40%, 60%, and 29% compared with the SL0, CSANN, and RASR algorithms, respectively. This implies that the UBSSANN algorithm maintains high recovery accuracy while significantly reducing the computational complexity compared with the other three algorithms.

Fig. 6. Correlation coefficient vs. SNR with p = 0.8 for mixing matrices of different dimensions.

Fig. 7. Correlation coefficient vs. sparsity p with SNR = 10 dB for mixing matrices of different dimensions.

4.3 Simulation results for radar source signals

In this experiment, 5 radar signals s1-s5 are chosen as the source signals. s1 and s2 are conventional pulsed radar signals with the same pulse width of 10 μs and pulse repetition period of 50 μs, but with different carrier frequencies of 5 MHz and 5.5 MHz. s3 is a linearly frequency modulated (LFM) radar signal with a carrier frequency of 5 MHz, pulse width of 10 μs, pulse repetition period of 50 μs, and bandwidth of 10 MHz. s4 is also an LFM radar signal with the same parameters as s3, but with a bandwidth of 15 MHz. s5 is a sinusoidal phase-modulated radar signal with a carrier frequency of 5 MHz, pulse width of 10 μs, pulse repetition period of 50 μs, and a modulating sine-wave frequency of 200 kHz. The dimension of the mixing matrix is M = 3, N = 5 (i.e., 3 receiving antennas and 5 source signals). To assess the recovery quality, the signal-to-interference ratio (SIR) of the recovered source signal is used in addition to the correlation coefficient, where s and ŝ denote the real source signal and the recovered source signal, respectively.
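The following sketch generates the five source signals with the parameters listed above and evaluates an assumed SIR of the standard form 10·log10(‖s‖²/‖s − ŝ‖²); the sampling rate and the phase-modulation index are not specified in the text and are chosen here only for illustration.

```python
import numpy as np

fs = 100e6                           # sampling rate (assumed; not given in the text)
prp, pw = 50e-6, 10e-6               # pulse repetition period and pulse width from the text
t = np.arange(0.0, prp, 1.0 / fs)
gate = (t < pw).astype(float)        # one pulse within one repetition period

def pulsed(fc):                      # conventional pulsed signal at carrier fc
    return gate * np.cos(2 * np.pi * fc * t)

def lfm(fc, bw):                     # linear FM pulse sweeping bw over the pulse width
    return gate * np.cos(2 * np.pi * (fc * t + 0.5 * (bw / pw) * t**2))

def sin_pm(fc, fm, m=1.0):           # sinusoidal phase modulation, index m assumed
    return gate * np.cos(2 * np.pi * fc * t + m * np.sin(2 * np.pi * fm * t))

S = np.vstack([pulsed(5e6), pulsed(5.5e6),
               lfm(5e6, 10e6), lfm(5e6, 15e6),
               sin_pm(5e6, 200e3)])  # source matrix, rows s1..s5

def sir_db(s, s_hat):
    """Assumed SIR definition: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    return 10.0 * np.log10(np.sum(s**2) / (np.sum((s - s_hat)**2) + 1e-12))

rng = np.random.default_rng(0)
s_hat_demo = S[0] + 0.01 * rng.standard_normal(S[0].size)      # stand-in recovered signal
print(S.shape, f"SIR demo: {sir_db(S[0], s_hat_demo):.1f} dB")
```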

Fig. 8. Correlation coefficient vs. iterations for the UBSSANN and RASR algorithms.

Fig. 9. Computing time vs. degree of sparsity with SNR = 20 dB.

In this experiment, we use the SL0, CSANN, RASR, and UBSSANN algorithms to recover the source signals. Figure 10 and figure 11 show the correlation coefficient and the SIR of the recovered signals obtained by these algorithms, respectively. As shown in figure 10, the UBSSANN algorithm gives good results for SNRs ranging from 10 dB to 30 dB, performing better than SL0, CSANN, and RASR in terms of the correlation coefficient. For instance, the correlation coefficient achieved by UBSSANN is about 0.99, while those obtained by SL0, RASR, and CSANN are about 0.89, 0.87, and 0.76, respectively, at SNR = 30 dB. In figure 11, the SIR of the recovered signal is used instead of the correlation coefficient to evaluate the algorithms' performance. The SIR of the signal recovered by UBSSANN is significantly greater than those achieved by SL0, CSANN, and RASR as the SNR of the mixed signal ranges from 10 dB to 30 dB. Moreover, the SIRs obtained by SL0, CSANN, and RASR all remain below 10 dB, while that obtained by UBSSANN increases roughly linearly with the SNR of the mixed signal.

    V. SUMMARY

To address the high computational complexity incurred when compressed sensing sparse reconstruction algorithms are used for source signal recovery in UBSS, the UBSSANN algorithm is proposed. Based on the sparse reconstruction model, a single-layer perceptron artificial neural network is introduced into the proposed algorithm and the optimal learning factor is derived, which improves the recovery precision. Additionally, a descending sequence of the smoothed parameter σ is used to control the convergence speed of the proposed algorithm, so that the number of iterations can be significantly reduced. Compared with the existing algorithms (i.e., SL0, CSANN, and RASR), the UBSSANN algorithm achieves a good trade-off between recovery precision and computational complexity.

Fig. 10. Correlation coefficient vs. SNR of the mixed signal.

    Fig. 11. SIR of recovered signal vs. SNR of mixed signal.

    Appendix A

    UBSSANN’s original mathematical model is

where N represents the number of source signals. Obviously, (A-1) is equivalent to

    Then

According to Eqs. (A-5) and (A-6), we obtain

It is assumed that Â is a sub-matrix composed of the column vectors a_i of the matrix A with i ∈ I_k; then Â contains at most M column vectors. Since these column vectors are linearly independent of each other, Â has a left pseudo-inverse, denoted Â†. In addition, the sub-vector consisting of the elements whose subscripts belong to the set I_k is denoted s_{I_k}, and the sub-vector of the elements whose subscripts do not belong to I_k is denoted s′.

    We can get that

    According to Eq. (A-7), it can be obtained that

    Similarly, combining with Eq.(A-6), we get

    By Eq. (A-10) and (A-11), it can be obtained that

The set of all such sub-matrices Â of the mixing matrix A is denoted by Θ. Let β be defined as follows:

    Then

    Next we use Eq. (A-14) to prove that

where s0 is the sparsest solution of the UBSS problem (i.e., Eq. (1)) and s̃ is the optimal solution of Eq. (7).

It is assumed that the vector s̃ satisfies the constraint x = As̃ and is the optimal point of the current objective function. The set of subscripts of the elements of s̃ whose absolute values exceed κ is denoted I_k; then

    Combined with Eq. (A-3), it can be obtained that

Since s̃ is the optimal point of the current objective function, we have

    According to Eq. (A-17), (A-18) and (A-19), we get

    Then

    So

Therefore, the number of elements of s̃ whose absolute value is larger than κ is at most M − k, and the number of non-zero elements of s0 is at most k, so the number of elements of s̃ − s0 whose absolute value is larger than κ is at most (M − k) + k = M.

    Then

    ACKNOWLEDGMENT

This work was supported by the National Natural Science Foundation of China under Grants 61201134 and 61401334, and by the Key Research and Development Program of Shaanxi (Contract No. 2017KW-004, 2017ZDXM-GY-022).

REFERENCES

[1] G. R. Naik, W. Wang, Blind Source Separation: Advances in Theory, Algorithms and Applications (Signals and Communication Technology Series), Berlin, Germany: Springer, 2014.

    [2] Gao, L., Wang, X., Xu, Y. and Zhang, Q., Spectrum trading in cognitive radio networks: A contract-theoretic modeling approach. IEEE Journal on Selected Areas in Communications, 2011,29(4), pp.843-855.

[3] Wang, X., Huang, W., Wang, S., Zhang, J. and Hu, C., Delay and capacity tradeoff analysis for motioncast. IEEE/ACM Transactions on Networking (TON), 2011, 19(5), pp. 1354-1367.

    [4] Gao, L., Xu, Y. and Wang, X., Map: Multiauctioneer progressive auction for dynamic spectrum access. IEEE Transactions on Mobile Computing, 2011,10(8), pp.1144-1161.

    [5] Wang X, Fu L, Hu C. Multicast performance with hierarchical cooperation[J]. IEEE/ACM Transactions on Networking (TON), 2012, 20(3): 917-930.

    [6] Y. Li, A. Cichocki, S. Amari, “Sparse component analysis for blind source separation with less sensors than sources”,Proc. Int. Conf. Independent Component Analysis (ICA), pp. 89-94, 2003.

    [8] J. Sun, et al., “Novel mixing matrix estimation approach in underdetermined blind source separation”,Neurocomputing, vol. 173, pp. 623-632, 2016.

    [9] V. G. Reju, S. N. Koh, I. Y. Soon, “An algorithm for mixing matrix estimation in instantaneous blind source separation”,Signal Process., vol. 89, no.3, pp. 1762-1773, Mar. 2009.

    [10] F. M. Naini, et al., “Estimating the mixing matrix in Sparse Component Analysis (SCA) based on partial k-dimensional subspace clustering”,Neurocomputing, vol. 71, pp. 2330-2343, 2008.

    [11] T. Dong, L. Yingke, and J. Yang, “An algorithm for underdetermined mixing matrix estimation”,Neurocomputing, vol. 104, pp. 26-34, 2013.

    [12] T. Xu, W. Wang, “A compressed sensing approach for underdetermined blind audio source separation with sparse representation”,Proc.IEEE Statist. Signal Process. 15th Workshop, pp.493-496, 2009.

    [13] Y. Q. Li, A. Cichocki, S. Amari, “Analysis of sparse representation and blind source separation”,Neural Comput., vol. 16, no. 6, pp. 1193-1234,2004.

    [14] Y. Li, et. al., “Underdetermined blind source separation based on sparse representation”,IEEE Trans. Signal Process., vol. 54, no. 2, pp. 423-437, Feb. 2006.

    [15] P. Georgiev, F. Theis, A. Cichocki, “Sparse component analysis and blind source separation of underdetermined mixtures”,IEEE Trans. Neural Networks, vol. 16, no. 5, pp. 992-996, Jul. 2005.

    [16] D. Donoho, “Compressed sensing”,IEEE Trans.Inform. Theory, vol. 52, no. 4, pp. 1289-1306,Apr. 2006.

    [17] T. Xu, W. Wang, “A block-based compressed sensing method for underdetermined blind speech separation incorporating binary mask”,Proc. Int. Conf. Acoust. Speech Signal Process.(ICASSP), pp. 2022-2025, 2010.

    [18] M. Kleinsteuber, H. Shen, “Blind source separation with compressively sensed linear mixtures”,IEEE Signal Process Lett., vol. 19, no. 2, pp. 107-110, Feb. 2012.

    [19] H. Mohimani, M. Babaie-Zadeh, and C. Jutten,“A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed L0 Norm”,IEEE Trans. Signal Process., vol. 57, no. 1, pp.289-301, Jan. 2009.

    [20] A. Eftekhari, M. Babaie-Zadeh, C. Jutten, H.Abrishami Moghad-dam, “Robust-SL0 for stable sparse representation in noisy settings”,Proc. Int. Conf. Acoust. Speech Signal Process.(ICASSP), pp. 3433-3436, 2009.

    [21] S. H. Ghalehjegh, M. Babaie-Zadeh, and C. Jutten, “Fast block-sparse decomposition based on SL0”,International Conference on Latent Variable Analysis and Signal Separation, PP. 426-433, 2010.

[22] Changzheng Ma, Tat Soon Yeo, Zhoufeng Liu, "Target imaging based on ℓ1-ℓ0 norms homotopy sparse signal recovery and distributed MIMO antennas", IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 4, pp. 3399-3414, 2015.

    [23] V. Vivekanand; L. Vidya, “Compressed sensing recovery using polynomial approximated l0 minimization of signal and error”,2014 International Conference on Signal Processing and Communications, pp:1-6, 2014

    [24] L. Vidya, V. Vivekanand, U. Shyamkumar, Deepak Mishra, “RBF-network based sparse signal recovery algorithm for compressed sensing reconstruction”,Neural Networks, vol. 63, pp. 66-78, 2015.

    [25] C. Zhao, and Y. Xu, “An improved compressed sensing reconstruction algorithm based on artificial neural network”,2011 International Conference on Electronics, Communications and Control (ICECC), pp. 1860-1863, 2011.

    [26] A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing, U.K.,Chichester: Wiley, 1993.

[27] E. Candès, J. Romberg, T. Tao, "Stable Signal Recovery from Incomplete and Inaccurate Measurements", Comm. Pure and Applied Math., vol. 59, no. 8, pp. 1207-1223, 2006.

    [28] D. Donoho, M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ?1 minimization”,Proceedings of the National Academy of Sciences, pp. 2197-2202, 2003.

    [29] I. F. Gorodnitsky, B. D. Rao, “Sparse signal reconstruction from limited data using FOCUSS:A re-weighted norm minimization algorithm”,IEEE Trans. Signal Process., vol. 45, pp. 600-616,1997.
