
    Nested Alternating Direction Method of Multipliers to Low-Rank and Sparse-Column Matrices Recovery


(1. College of Mathematics and Statistics, Henan University, Kaifeng 475000, China; 2. College of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China; 3. College of Software, Henan University, Kaifeng 475000, China)

Abstract: The task of dividing corrupted data into their respective subspaces can be well illustrated, both theoretically and numerically, by recovering the low-rank and sparse-column components of a given matrix. Generally, it can be characterized as a convex minimization problem involving the matrix nuclear norm and the $\ell_{2,1}$-norm. However, solving the resulting problem is full of challenges due to the non-smoothness of the objective function. One of the earliest solvers is a 3-block alternating direction method of multipliers (ADMM) which updates each variable in a Gauss-Seidel manner. In this paper, we present three variants of ADMM for the 3-block separable minimization problem. More precisely, whenever one variable is derived, the resulting problem can be regarded as a convex minimization with 2 blocks and can be solved immediately using the standard ADMM. If the inner iteration loops only once, the iterative scheme reduces to the ADMM with updates in a Gauss-Seidel manner. If the solution from the inner iteration is assumed to be exact, the convergence can be deduced easily from the literature. Performance comparisons with a couple of recently designed solvers illustrate that the proposed methods are effective and competitive.

    Keywords: Convex optimization; Variational inequality problem; Alternating direction method of multipliers; Low-rank representation; Subspace recovery

    §1. Introduction

Given a set of corrupted data samples drawn from a union of linear subspaces, the goal of the subspace recovery problem is to segment all samples into their respective subspaces and simultaneously correct the possible errors. The problem has recently attracted much attention because of its wide applications in the fields of pattern analysis, signal processing, data mining, etc.

The problem under consideration can be formulated as the convex minimization model
$$\min_{Z,E}\ \|Z\|_*+\lambda\|E\|_{2,1}\quad\text{s.t.}\quad X=AZ+E, \tag{1.1}$$
where $\lambda>0$ is a positive weighting parameter; $A\in\mathbb{R}^{m\times n}$ is a dictionary which is assumed to have full column rank; $\|\cdot\|_*$ is the nuclear norm (trace norm or Ky Fan norm [12]), defined as the sum of all singular values; and $\|\cdot\|_{2,1}$ is the $\ell_{2,1}$-mixed norm, defined as the sum of the $\ell_2$-norms of the columns of a matrix. The nuclear norm is the best convex approximation of the rank function over the unit ball of matrices under the spectral norm [12]. The $\ell_{2,1}$-norm characterizes the sparse-column component $E$ of $X$, which reflects that some data samples are corrupted while the others are kept clean. Once the minimizer $(Z^*,E^*)$ of problem (1.1) is obtained, the original data $X$ can be reconstructed by setting $X-E^*$ (or $AZ^*$).

Additionally, the minimizer $Z^*$ is called the lowest-rank representation of the data $X$ with respect to the dictionary $A$.

Problem (1.1) is convex since it is separately convex in each of its terms. However, the non-smoothness of the nuclear norm and the $\ell_{2,1}$-norm makes it a rather challenging task to minimize. On the one hand, the problem can be easily recast as a semi-definite programming problem and solved by solvers such as [15] and [13]. On the other hand, it falls into the framework of the alternating direction method of multipliers (ADMM), which is widely used in a variety of practical fields, such as image processing [2,6,11], compressive sensing [17,19], matrix completion [4], matrix decomposition [9,14,16], nuclear norm minimization [18], and others. The earliest approach [9] reformulated problem (1.1) by adding an auxiliary variable and minimized the corresponding augmented Lagrangian function with respect to each variable in a Gauss-Seidel manner. Another alternative approach [16] solved (1.1) by using a linearization technique. More precisely, with one variable fixed, it linearized the subproblem to ensure that the closed-form solution is easily derived.

In this paper, unlike all the aforementioned algorithms, we propose three variants of ADMM for problem (1.1). In the first variant, we transform (1.1) into an equivalent formulation by adding a new variable $J$. Firstly, fixing two variables, we minimize the corresponding augmented Lagrangian function to produce the temporary value of one variable. Secondly, fixing that variable at its latest value, we treat the resulting subproblem as a new convex optimization problem with fixed Lagrangian multipliers. Thus, it falls into the framework of the classic ADMM again. It is experimentally shown that the number of inner loops greatly influences the overall performance of the algorithm. Meanwhile, the method reduces to the standard 3-block ADMM when the inner loop goes only once. Moreover, we design two other alternative versions of ADMM from different observations. The convergence of each proposed algorithm is analyzed under the assumption that the subproblem is solved exactly. Numerical experiments indicate that the proposed algorithms are promising and competitive with the recent solvers SLAL and LRR.

The rest of this paper is organized as follows. In section 2, some notations and preliminaries used later are provided, a couple of recent algorithms are quickly reviewed, and the motivation and iterative framework of the new algorithms are also included. In section 3, the convergence of the first version of the algorithm is established. In section 4, another variant of ADMM from a different observation, together with its convergence, is presented. In section 5, some numerical results which show the efficiency of the proposed algorithms are reported; performance comparisons with other solvers are also included. Finally, in section 6, the paper is concluded with some remarks.

    §2. Algorithms

    2.1. Notations and preliminaries

In this subsection, we summarize the notations used in this paper. Matrices are denoted by uppercase letters and vectors by lowercase letters. Given a matrix $X$, its $i$-th row and $j$-th column are denoted by $[X]_{i,:}$ and $[X]_{:,j}$, respectively, and $x_{i,j}$ is its $(i,j)$-th component. The $\ell_{2,1}$-norm, the $\ell_\infty$-norm, and the Frobenius norm of a matrix are defined respectively by
$$\|X\|_{2,1}=\sum_{j}\big\|[X]_{:,j}\big\|_2,\qquad \|X\|_\infty=\max_{i,j}|x_{i,j}|,\qquad \|X\|_F=\Big(\sum_{i,j}x_{i,j}^2\Big)^{1/2}.$$

For any two matrices $X,Y\in\mathbb{R}^{n\times t}$, the standard trace inner product is defined as $\langle X,Y\rangle=\operatorname{tr}(X^\top Y)$; then $\|X\|_F^2=\langle X,X\rangle$. For a symmetric and positive definite matrix $M\in\mathbb{R}^{n\times n}$, we define the weighted norm $\|X\|_M^2=\langle X,MX\rangle$. The symbol $\top$ denotes the transpose of a vector or a matrix.

    Now, we list two important results, which are very useful to construct our algorithm.

Theorem 2.1. [1,10] Given $Y\in\mathbb{R}^{m\times n}$ of rank $r$, let
$$Y=U\Sigma V^\top,\qquad \Sigma=\operatorname{diag}\big(\{\sigma_i\}_{1\le i\le r}\big),$$
be the singular value decomposition (SVD) of $Y$. For each $\mu>0$, we let
$$\mathcal{D}_\mu(Y)=U\operatorname{diag}\big(\{\sigma_i-\mu\}_+\big)V^\top,$$
where $\{\cdot\}_+=\max(0,\cdot)$. It is shown that $\mathcal{D}_\mu(Y)$ obeys
$$\mathcal{D}_\mu(Y)=\arg\min_{X}\ \mu\|X\|_*+\frac{1}{2}\|X-Y\|_F^2.$$

Theorem 2.2. [5] Let $Y\in\mathbb{R}^{m\times n}$ be a given matrix, and let $\mathcal{S}_\mu(Y)$ be the optimal solution of
$$\min_{X}\ \mu\|X\|_{2,1}+\frac{1}{2}\|X-Y\|_F^2;$$
then the $i$-th column of $\mathcal{S}_\mu(Y)$ is
$$[\mathcal{S}_\mu(Y)]_{:,i}=\begin{cases}\dfrac{\|[Y]_{:,i}\|_2-\mu}{\|[Y]_{:,i}\|_2}\,[Y]_{:,i}, & \text{if } \|[Y]_{:,i}\|_2>\mu,\\[4pt] 0, & \text{otherwise.}\end{cases}$$
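For concreteness, the two shrinkage operators of Theorems 2.1 and 2.2 can be implemented in a few lines. The following NumPy sketch is ours (the paper's experiments use Matlab), and the helper names `svt` and `col_shrink` are hypothetical.

```python
import numpy as np

def svt(Y, mu):
    """Singular value thresholding D_mu(Y) of Theorem 2.1:
    the minimizer of mu*||X||_* + 0.5*||X - Y||_F^2."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - mu, 0.0)        # {sigma_i - mu}_+
    return (U * s_shrunk) @ Vt

def col_shrink(Y, mu):
    """Column-wise shrinkage S_mu(Y) of Theorem 2.2:
    the minimizer of mu*||X||_{2,1} + 0.5*||X - Y||_F^2."""
    X = np.zeros_like(Y)
    norms = np.linalg.norm(Y, axis=0)         # l2-norm of each column
    keep = norms > mu                         # columns that survive shrinkage
    X[:, keep] = Y[:, keep] * (norms[keep] - mu) / norms[keep]
    return X
```

These two operators are the workhorses of all the algorithms below: `svt` handles every nuclear-norm subproblem, and `col_shrink` every $\ell_{2,1}$-norm subproblem.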

    2.2. Existing algorithms

This subsection is devoted to reviewing a couple of existing algorithms. The corresponding augmented Lagrangian function of (1.1) is
$$\mathcal{L}_1(E,Z,\Lambda)=\|Z\|_*+\lambda\|E\|_{2,1}+\langle\Lambda,\,X-AZ-E\rangle+\frac{\mu}{2}\|X-AZ-E\|_F^2, \tag{2.2}$$
where $\Lambda\in\mathbb{R}^{m\times n}$ is the Lagrangian multiplier and $\mu>0$ is a penalty parameter. For fixed $(E^k,Z^k,\Lambda^k)$, the next triplet $(E^{k+1},Z^{k+1},\Lambda^{k+1})$ can be generated via
$$\begin{aligned} E^{k+1}&=\arg\min_{E}\ \mathcal{L}_1(E,Z^k,\Lambda^k), &\text{(2.3a)}\\ Z^{k+1}&=\arg\min_{Z}\ \mathcal{L}_1(E^{k+1},Z,\Lambda^k), &\text{(2.3b)}\\ \Lambda^{k+1}&=\Lambda^k+\mu\,(X-AZ^{k+1}-E^{k+1}). &\text{(2.3c)} \end{aligned}$$

For subproblem (2.3a), it can be easily deduced that
$$E^{k+1}=\mathcal{S}_{\lambda/\mu}\big(X-AZ^k+\Lambda^k/\mu\big). \tag{2.4}$$

On the other hand, fixing the latest $E^{k+1}$, the subproblem (2.3b) with respect to $Z$ can be characterized as
$$Z^{k+1}=\arg\min_{Z}\ \|Z\|_*+\frac{\mu}{2}\big\|X-AZ-E^{k+1}+\Lambda^k/\mu\big\|_F^2. \tag{2.5}$$

For most dictionary matrices $A$, the closed-form solution of (2.5) is not easily derived. SLAL [16] linearizes the quadratic term and adds a proximal point term, which ensures that the solution can be obtained explicitly.

In a different way, another solver, LRR [9], adds a new variable $J\in\mathbb{R}^{n\times n}$ to model (1.1) and converts it into the following equivalent form:
$$\min_{Z,J,E}\ \|J\|_*+\lambda\|E\|_{2,1}\quad\text{s.t.}\quad X=AZ+E,\ \ Z=J. \tag{2.6}$$

The augmented Lagrangian function of (2.6) is
$$\begin{aligned}\mathcal{L}_2(E,Z,J,\Lambda,\Gamma)=\ &\|J\|_*+\lambda\|E\|_{2,1}+\langle\Lambda,\,X-AZ-E\rangle+\langle\Gamma,\,Z-J\rangle\\ &+\frac{\mu}{2}\Big(\|X-AZ-E\|_F^2+\|Z-J\|_F^2\Big), \end{aligned}\tag{2.7}$$

where $\Lambda\in\mathbb{R}^{m\times n}$ and $\Gamma\in\mathbb{R}^{n\times n}$ are the Lagrangian multipliers. LRR minimizes $\mathcal{L}_2(E,Z,J,\Lambda,\Gamma)$ first with respect to $E$, then with respect to $Z$, and then with respect to $J$, fixing the other variables at their latest values. More precisely, given $(E^k,Z^k,J^k)$, the new iterate $(E^{k+1},Z^{k+1},J^{k+1})$ is generated by
$$\begin{aligned} E^{k+1}&=\arg\min_{E}\ \mathcal{L}_2(E,Z^k,J^k,\Lambda^k,\Gamma^k),\\ Z^{k+1}&=\arg\min_{Z}\ \mathcal{L}_2(E^{k+1},Z,J^k,\Lambda^k,\Gamma^k),\\ J^{k+1}&=\arg\min_{J}\ \mathcal{L}_2(E^{k+1},Z^{k+1},J,\Lambda^k,\Gamma^k),\\ \Lambda^{k+1}&=\Lambda^k+\mu\,(X-AZ^{k+1}-E^{k+1}),\\ \Gamma^{k+1}&=\Gamma^k+\mu\,(Z^{k+1}-J^{k+1}). \end{aligned}\tag{2.8}$$

The attractive feature of the above iterative scheme is that each subproblem admits a closed-form solution.
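To make the scheme concrete, here is a minimal NumPy sketch of one sweep of (2.8), reusing the `svt` and `col_shrink` helpers above; the function name and the prefactored-matrix argument are our own choices, not from [9].

```python
import numpy as np

def lrr_step(X, A, E, Z, J, Lam, Gam, lam, mu, AtA_I):
    """One Gauss-Seidel sweep of scheme (2.8) for model (2.6).
    AtA_I holds the fixed matrix A^T A + I; svt and col_shrink
    are the operators from Theorems 2.1 and 2.2 (sketched above)."""
    E = col_shrink(X - A @ Z + Lam / mu, lam / mu)        # E-subproblem
    rhs = A.T @ (X - E + Lam / mu) + J - Gam / mu
    Z = np.linalg.solve(AtA_I, rhs)                       # Z-subproblem
    J = svt(Z + Gam / mu, 1.0 / mu)                       # J-subproblem
    Lam = Lam + mu * (X - A @ Z - E)                      # multiplier update
    Gam = Gam + mu * (Z - J)                              # multiplier update
    return E, Z, J, Lam, Gam
```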

    2.3. Nested minimizing algorithm

In this subsection, we turn our attention to constructing a new version of ADMM, named the nested minimizing algorithm here. Given $(E^k,Z^k,J^k,\Lambda^k,\Gamma^k)$, the next $E^{k+1}$ is derived by
$$E^{k+1}=\arg\min_{E}\ \mathcal{L}_2(E,Z^k,J^k,\Lambda^k,\Gamma^k)=\arg\min_{E}\ \lambda\|E\|_{2,1}+\frac{\mu}{2}\big\|X-AZ^k-E+\Lambda^k/\mu\big\|_F^2, \tag{2.9}$$
which, by Theorem 2.2, admits the closed-form solution
$$E^{k+1}=\mathcal{S}_{\lambda/\mu}\big(X-AZ^k+\Lambda^k/\mu\big). \tag{2.10}$$

If $Z$ and $J$ are grouped as one variable, then for fixed $E^{k+1}$ it is easy to deduce that
$$(Z^{k+1},J^{k+1})=\arg\min_{Z,J}\ \mathcal{L}_2(E^{k+1},Z,J,\Lambda^k,\Gamma^k). \tag{2.11}$$

Hence, $(Z^{k+1},J^{k+1})$ can also be considered as the solution, obtained by the standard Lagrangian function method but with fixed multipliers $\Lambda^k$ and $\Gamma^k$, of the minimization problem
$$\min_{Z,J}\ \|J\|_*+\langle\Lambda^k,\,X-AZ-E^{k+1}\rangle+\frac{\mu}{2}\|X-AZ-E^{k+1}\|_F^2\quad\text{s.t.}\quad Z=J. \tag{2.12}$$

    Fortunately, the favorable structure in both objective function and the constraint makes the resulting problem also fall into the framework of classic ADMM.

For given $(E^{k+1},Z^k,J^k)$, we let $H^{k+1}=X-E^{k+1}+\Lambda^k/\mu$ and set $(\hat Z_0,\hat J_0)=(Z^k,J^k)$. For fixed $(\hat Z_j,\hat J_j)$, the next pair $(\hat Z_{j+1},\hat J_{j+1})$ can be attained by the following alternating scheme:
$$\begin{aligned} \hat Z_{j+1}&=\arg\min_{Z}\ \langle\Lambda^k,\,X-AZ-E^{k+1}\rangle+\langle\Gamma^k,\,Z-\hat J_j\rangle+\frac{\mu}{2}\Big(\|X-AZ-E^{k+1}\|_F^2+\|Z-\hat J_j\|_F^2\Big), &\text{(2.13a)}\\ \hat J_{j+1}&=\arg\min_{J}\ \|J\|_*+\langle\Gamma^k,\,\hat Z_{j+1}-J\rangle+\frac{\mu}{2}\|\hat Z_{j+1}-J\|_F^2. &\text{(2.13b)} \end{aligned}$$

Firstly, the subproblem (2.13a) is equivalent to
$$\hat Z_{j+1}=\arg\min_{Z}\ \frac{\mu}{2}\|AZ-H^{k+1}\|_F^2+\frac{\mu}{2}\big\|Z-\hat J_j+\Gamma^k/\mu\big\|_F^2. \tag{2.14}$$

Clearly, (2.14) is a quadratic programming problem with respect to $Z$ and can be further expressed as the linear system
$$\big(A^\top A+I\big)\hat Z_{j+1}=A^\top H^{k+1}+\hat J_j-\Gamma^k/\mu. \tag{2.15}$$

Secondly, the solution of subproblem (2.13b) with respect to $J$ can be described, by Theorem 2.1, as
$$\hat J_{j+1}=\mathcal{D}_{1/\mu}\big(\hat Z_{j+1}+\Gamma^k/\mu\big).$$

When the inner iteration converges, we set $(Z^{k+1},J^{k+1})$ to the last inner pair and update the multipliers by
$$\Lambda^{k+1}=\Lambda^k+\mu\,(X-AZ^{k+1}-E^{k+1}) \tag{2.16}$$
and
$$\Gamma^{k+1}=\Gamma^k+\mu\,(Z^{k+1}-J^{k+1}). \tag{2.17}$$
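Since the coefficient matrix $A^\top A+I$ of (2.15) never changes, a practical implementation can factor it once and reuse the factorization in every inner and outer iteration. A small SciPy sketch under that observation (the helper name is ours):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def make_Z_solver(A):
    """Factor A^T A + I once; it stays fixed over all outer and inner
    iterations, so each Z-update (2.15) costs only two triangular solves."""
    factor = cho_factor(A.T @ A + np.eye(A.shape[1]))
    return lambda rhs: cho_solve(factor, rhs)
```

A typical inner step then reads `Z_hat = solve_Z(A.T @ H + J_hat - Gam / mu)` with `solve_Z = make_Z_solver(A)`.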

In summary, the algorithm, named the Nested Minimization Method (NMM_v1), can be described as follows.

Algorithm 2.1 (NMM_v1).
Step 0. Input $X$, $A$, $\lambda>0$, and $\mu>0$; initialize $(E^0,Z^0,J^0,\Lambda^0,\Gamma^0)$ and set $k=0$.
Step 1. Compute $E^{k+1}$ by (2.10).
Step 2. Starting from $(\hat Z_0,\hat J_0)=(Z^k,J^k)$, iterate (2.15) and (2.13b) until the inner loop converges; set $(Z^{k+1},J^{k+1})$ to the last inner pair.
Step 3. Update the multipliers $\Lambda^{k+1}$ and $\Gamma^{k+1}$ by (2.16) and (2.17).
Step 4. If the stopping criterion (2.20) is satisfied, stop; otherwise, set $k:=k+1$ and go to Step 1.

Remark 2.1. If the inner iteration goes only once without achieving convergence, then the method reduces to the iterative scheme (2.8), where each variable is updated in a Gauss-Seidel way. Owing to the fact that the exact solution is not achieved when only one inner step is taken, the 3-block ADMM is not necessarily globally convergent (see [3]).

Remark 2.2. The optimality conditions of (2.6) (or (1.1)) can be characterized by finding a solution $(E^*,Z^*,J^*)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}$ and Lagrangian multipliers $\Lambda^*$ and $\Gamma^*$ satisfying the Karush-Kuhn-Tucker system
$$\begin{aligned} &\Lambda^*\in\lambda\,\partial\|E^*\|_{2,1}, &\text{(2.18a)}\\ &A^\top\Lambda^*=\Gamma^*, &\text{(2.18b)}\\ &\Gamma^*\in\partial\|J^*\|_*, &\text{(2.18c)}\\ &X-AZ^*-E^*=0, &\text{(2.18d)}\\ &Z^*-J^*=0. &\text{(2.18e)} \end{aligned}$$

At each iteration, the triple $(E^{k+1},Z^{k+1},J^{k+1})$ generated by NMM_v1 satisfies
$$\begin{aligned} &\Lambda^{k+1}+\mu A(Z^{k+1}-Z^k)\in\lambda\,\partial\|E^{k+1}\|_{2,1}, &\text{(2.19a)}\\ &A^\top\Lambda^{k+1}=\Gamma^{k+1}, &\text{(2.19b)}\\ &\Gamma^{k+1}\in\partial\|J^{k+1}\|_*, &\text{(2.19c)}\\ &X-AZ^{k+1}-E^{k+1}=(\Lambda^{k+1}-\Lambda^k)/\mu, &\text{(2.19d)}\\ &Z^{k+1}-J^{k+1}=(\Gamma^{k+1}-\Gamma^k)/\mu. &\text{(2.19e)} \end{aligned}$$

Comparing the optimality conditions (2.18a)-(2.18e) with (2.19a)-(2.19e), it is clearly observed that the whole iteration process can be terminated once $\Lambda^{k+1}-\Lambda^k$, $\Gamma^{k+1}-\Gamma^k$, and $Z^{k+1}-Z^k$ are all small enough. In other words, for a positive constant $\epsilon>0$, the stopping criterion should be
$$\max\big\{\|\Lambda^{k+1}-\Lambda^k\|_\infty,\ \|\Gamma^{k+1}-\Gamma^k\|_\infty,\ \|Z^{k+1}-Z^k\|_\infty\big\}\le\epsilon. \tag{2.20}$$
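A direct implementation of the stopping test (2.20) might look as follows; reading "small enough" in the max-absolute-value norm used in Section 5 is our assumption.

```python
import numpy as np

def should_stop(Z, Z_old, Lam, Lam_old, Gam, Gam_old, eps):
    """Check (2.20): the multiplier and Z changes are all below eps
    in the max-absolute-value (infinity) norm."""
    return max(np.abs(Lam - Lam_old).max(),
               np.abs(Gam - Gam_old).max(),
               np.abs(Z - Z_old).max()) <= eps
```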

From optimization theory, it is clear that the variables can be reordered by minimizing first with respect to $J$, then with respect to $Z$, and then with respect to $E$, fixing the other variables at their latest values. More precisely, given $(J^k,Z^k,E^k,\Lambda^k,\Gamma^k)$, the next iterate $(J^{k+1},Z^{k+1},E^{k+1},\Lambda^{k+1},\Gamma^{k+1})$ can be generated via the following scheme, named the Nested Minimization Method, version two (NMM_v2).

    Algorithm 2.2 (NMM_v2).

    §3. Convergence analysis

This section is dedicated to establishing the global convergence of algorithm NMM_v1. The convergence properties of the second version, NMM_v2, can be analyzed in a similar way; hence, we omit them here. Throughout this paper, we make the following assumptions.

Assumption 3.1. There exists a point $(E^*,Z^*,J^*,\Lambda^*,\Gamma^*)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}$ satisfying the Karush-Kuhn-Tucker system (2.18). Besides the above assumption, we also make the following assumption on Algorithm NMM_v1.

Assumption 3.2. The pair $\{(Z^{k+1},J^{k+1})\}$ is the exact solution of the resulting convex minimization problem (2.12).

    3.1. General description

For convenience, we set
$$u=\begin{pmatrix}E\\ Z\\ J\end{pmatrix},\qquad \theta(u)=\lambda\|E\|_{2,1}+\|J\|_*,\qquad \mathcal{B}=\begin{pmatrix}I & A & 0\\ 0 & I & -I\end{pmatrix},\qquad b=\begin{pmatrix}X\\ 0\end{pmatrix}, \tag{3.1}$$
where $I$ is an identity matrix and $0$ is a zero matrix whose elements are all zero. Using these symbols, problem (2.6) is thus transformed into
$$\min_{u}\ \theta(u)\quad\text{s.t.}\quad \mathcal{B}u=b. \tag{3.2}$$

Let $\Omega=\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{(m+n)\times n}$. As a result, solving (3.2) is equivalent to finding $W^*=(u^*,\tilde\Lambda^*)\in\Omega$, where the stacked multiplier $\tilde\Lambda$ collects $\Lambda$ and $\Gamma$, satisfying the following variational inequality problem:
$$\theta(u)-\theta(u^*)+\big\langle W-W^*,\,F(W^*)\big\rangle\ge 0\quad\forall\,W\in\Omega,\qquad F(W)=\begin{pmatrix}-\mathcal{B}^\top\tilde\Lambda\\ \mathcal{B}u-b\end{pmatrix}. \tag{3.3}$$

Using the notations in (3.1) and the stacked multiplier $\tilde\Lambda$, the augmented Lagrangian function in (2.7) can be rewritten as
$$\mathcal{L}_2(u,\tilde\Lambda)=\theta(u)-\big\langle\tilde\Lambda,\,\mathcal{B}u-b\big\rangle+\frac{\mu}{2}\big\|\mathcal{B}u-b\big\|_F^2.$$

Moreover, it is not difficult to deduce that the subproblem (2.9) on $E$ is equivalent to

It is also easy to see that the subproblem (2.11) on the variables $Z$ and $J$ is identical with

Finally, the compact form of the multiplier updates (2.16) and (2.17) is

    3.2. Further reformulations

The subproblem (3.5) can be reformulated as a variational inequality. That is, find $E^{k+1}$ such that

Similarly, problem (3.6) is equivalent to finding $Z^{k+1}$ and $J^{k+1}$ such that

    By (3.7), it holds that

    Using the above equality, (3.8) can be rewritten as

    In a similar way, (3.9) is reformulated as

    For the sake of simplicity, we denote

Combining (3.10) and (3.11) yields

    Furthermore, combining with (3.7), it holds that

Recalling the definition of $W$ and letting

    then the inequality (3.13) is equivalent to

    3.3. Convergence theorem

    Let

To establish the desired convergence theorem, we first list some useful lemmas.

Lemma 3.1. Suppose that Assumptions 3.1 and 3.2 hold. Let $\{W^{k+1}\}=\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

Proof. Setting $W=W^*$ in (3.14), we obtain

By the monotonicity of the operator $\Phi$, it is easy to see that

The first inequality is due to the monotonicity of $\Phi$, and the second one comes from (3.4) by recalling the symbols $W$, $F$, and $\Phi$. Hence, the claim of this lemma is derived.

By using the above lemma, it is easy to attain the following result.

Lemma 3.2. Suppose that Assumptions 3.1 and 3.2 hold. Let $\{W^{k+1}\}=\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

    and

Proof. By using the relations above, we have

Since (3.11) holds for any $k$, we get

Setting $Z=Z^{k+1}$ in (3.11) and $Z=Z^k$ in (3.15), respectively, and adding both sides of the resulting inequalities, we have

    which shows the statement of this lemma.

    It is not difficult to deduce that both lemmas indicate the following fact.

Lemma 3.3. Suppose that Assumptions 3.1 and 3.2 hold. Let the sequence $\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

For any matrices $X,Y$ and a symmetric positive definite matrix $M$, define

Theorem 3.1. Suppose that Assumptions 3.1 and 3.2 hold. Let the sequence $\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

Proof. We have

    The proof is completed.

The theorem shows that the sequence $\{W^k\}$ is bounded, and

which is essential for the convergence of the proposed method. Recalling the definition of $W$, it also holds that

    To end this section, we state the desired convergence result of our proposed algorithm.

Theorem 3.2. Suppose that Assumptions 3.1 and 3.2 hold. Let $\{(E^k,Z^k,J^k,\Lambda^k,\Gamma^k)\}$ be the sequence generated by Algorithm 2.1 from any initial points. Then the sequence converges to $(E^*,Z^*,J^*,\Lambda^*,\Gamma^*)$, a solution of the equivalent model (3.2).

Proof. It follows from (3.16) and (3.17) that there exists an index set $\{k_j\}$ such that $Z^{k_j}\to Z^*$. Additionally, since

and $Z^k-Z^{k-1}\to 0$ and $\Lambda^k-\Lambda^{k-1}\to 0$, it implies that

    It follows from (2.16) that

    or equivalently,

Taking the limit on both sides yields

Similarly, it follows from (2.17) that

Taking the limit on both sides, it holds that

    Moreover, by (2.19a), we get

Taking the limit along $k_j$ on both sides of the above inequality and noting (3.18), we get

Similarly, from (2.19b) and (2.19c), we obtain, respectively,

    and

Note that (3.18)-(3.22) are exactly the optimality conditions (2.18a)-(2.18e); it thus follows that $(E^*,Z^*,J^*)$ is a solution of problem (3.2).

    §4. Another alternative scheme

This section is devoted to developing another version of the nested minimizing algorithm for solving problem (1.1) from a different observation. Reconsidering the original model and its augmented Lagrangian function (2.2), it is clear that $E^{k+1}$ and $Z^{k+1}$ are derived by (2.4) and (2.5), respectively. Setting $H^{k+1}=X-E^{k+1}+\Lambda^k/\mu$, (2.5) is reformulated as
$$Z^{k+1}=\arg\min_{Z}\ \|Z\|_*+\frac{\mu}{2}\big\|AZ-H^{k+1}\big\|_F^2, \tag{4.1}$$

which indicates that $Z^{k+1}$ is the minimizer of the following optimization problem with auxiliary variable $J$:
$$\min_{Z,J}\ \|J\|_*+\frac{\mu}{2}\big\|AZ-H^{k+1}\big\|_F^2\quad\text{s.t.}\quad Z=J. \tag{4.2}$$

Since the objective function and the constraint are both separable, it falls into the framework of the classic ADMM again. The augmented Lagrangian function of (4.2) is
$$\mathcal{L}_3(Z,J,\Gamma)=\|J\|_*+\frac{\mu}{2}\big\|AZ-H^{k+1}\big\|_F^2+\langle\Gamma,\,Z-J\rangle+\frac{\mu}{2}\|Z-J\|_F^2, \tag{4.3}$$
where $\Gamma\in\mathbb{R}^{n\times n}$ is a Lagrangian multiplier. Starting from a given $(\hat Z_0,\hat J_0,\hat\Gamma_0)$, the ADMM generates the next pair $(\hat Z_{j+1},\hat J_{j+1})$ by
$$\begin{aligned} \hat Z_{j+1}&=\arg\min_{Z}\ \mathcal{L}_3(Z,\hat J_j,\hat\Gamma_j),\\ \hat J_{j+1}&=\arg\min_{J}\ \mathcal{L}_3(\hat Z_{j+1},J,\hat\Gamma_j),\\ \hat\Gamma_{j+1}&=\hat\Gamma_j+\mu\,(\hat Z_{j+1}-\hat J_{j+1}). \end{aligned}$$

Simple computation yields that
$$\hat Z_{j+1}=\big(A^\top A+I\big)^{-1}\big(A^\top H^{k+1}+\hat J_j-\hat\Gamma_j/\mu\big)$$
and
$$\hat J_{j+1}=\mathcal{D}_{1/\mu}\big(\hat Z_{j+1}+\hat\Gamma_j/\mu\big).$$
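Putting the two closed-form updates together, the inner ADMM for (4.2) can be sketched as below, reusing the `svt` operator from Section 2; reusing $\mu$ as the penalty for the constraint $Z=J$ is our assumption, as is the function name.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def nmm_v3_inner(A, H, mu, J, Gam, n_inner):
    """Inner ADMM for subproblem (4.2): min ||J||_* + (mu/2)||AZ - H||_F^2
    s.t. Z = J; assumes n_inner >= 1 and reuses svt from Theorem 2.1."""
    factor = cho_factor(A.T @ A + np.eye(A.shape[1]))  # fixed coefficient matrix
    AtH = A.T @ H
    for _ in range(n_inner):
        Z = cho_solve(factor, AtH + J - Gam / mu)  # quadratic Z-update
        J = svt(Z + Gam / mu, 1.0 / mu)            # SVT J-update
        Gam = Gam + mu * (Z - J)                   # inner multiplier update
    return Z, J, Gam
```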

    In short, the algorithm named Nested Minimizing Method with version three (NMM_v3)can be stated as follows.

    Algorithm 4.1 (NMM_v3).

Remark 4.1. Compared with Algorithm 2.1, it can be clearly seen that the significant difference between the two algorithms is the updating of the multiplier $\Gamma^k$ related to the constraint $J-Z=0$. In Algorithm 2.1 this multiplier is updated in the outer iteration process, because the auxiliary variable $J$ has been added to the original model (1.1) to obtain (2.6), while in Algorithm 4.1 it is updated in the inner loop, owing to the fact that model (4.2) is used only as a subproblem for deriving the next $Z^{k+1}$.

Remark 4.2. Similarly to Remark 2.2, the optimality conditions of (1.1) can be characterized by finding a solution $(E^*,Z^*)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}$ and the corresponding Lagrangian multiplier $\Lambda^*$ such that
$$\Lambda^*\in\lambda\,\partial\|E^*\|_{2,1},\qquad A^\top\Lambda^*\in\partial\|Z^*\|_*,\qquad X-AZ^*-E^*=0.$$

At each iteration, the triple $(E^{k+1},Z^{k+1},\Lambda^{k+1})$ generated by NMM_v3 satisfies

    or, equivalently

which indicates, for sufficiently small $\epsilon>0$, that the algorithm should be stopped when

As in the previous section, we make the following assumption to ensure that the algorithm converges globally.

Assumption 4.1. The pair $\{(Z^{k+1},J^{k+1})\}$ is the exact solution of the resulting convex minimization problem (4.2).

We can clearly see that the inner iterations with variables $\hat Z$ and $\hat J$ and the outer iterations with $E$ and $Z$ are both classic ADMM. Hence, the convergence of this type of method is available in the literature. Let

It can be proved similarly that the sequence $\{Y^k\}$ generated by Algorithm 4.1 is contractive.

Theorem 4.1. Suppose that Assumptions 3.1 and 4.1 hold. Let the sequence $\{(E^{k+1},Z^{k+1},\Lambda^{k+1})\}$ be generated by Algorithm 4.1. Then we have

To end this section, we state the desired convergence theorem without proof.

Theorem 4.2. Suppose that Assumptions 3.1 and 4.1 hold. Let $\{(E^k,Z^k,\Lambda^k)\}$ be the sequence generated by Algorithm 4.1 from any initial points. Then every limit point of $\{(E^k,Z^k)\}$ is an optimal solution of problem (1.1).

    §5. Numerical experiments

In this section, we present two classes of numerical experiments. In the first class, we test the algorithms with different numbers of inner loops to verify their efficiency and stability. In the second class, we test against a couple of recent solvers, SLAL and LRR, to show that the proposed algorithms are very competitive. All experiments are performed under the Windows 7 operating system and Matlab 7.8 (2009a), running on a Lenovo laptop with an Intel Dual-Core CPU at 2.5 GHz and 4 GB of memory.

    5.1. Test on NMM_v1, NMM_v2 and NMM_v3

In the first class of experiments, we test the proposed algorithms with different numbers of inner steps on synthetic data. The data are created similarly as in [9,16]. The data sets are constructed from five independent subspaces $\{\mathcal{S}_i\}_{i=1}^{5}$ whose bases $\{U_i\}_{i=1}^{5}$ are generated by $U_{i+1}=TU_i$, $1\le i\le 4$, where $T$ denotes a random rotation and $U_1$ is a random orthogonal matrix of dimension $100\times 4$. Hence, each subspace has rank 4 and the data have an ambient dimension of 100. From each subspace, 40 data vectors are sampled via $X_i=U_iQ_i$, with $Q_i$ being a $4\times 40$ independent and identically distributed $N(0,1)$ matrix. In summary, the whole data matrix is formulated as $X=[X_1,\ldots,X_5]\in\mathbb{R}^{100\times 200}$ with rank $r=20$. In this test, a fraction ($\mathrm{Fr}=20\%$) of the data vectors are grossly corrupted by large noise, while the others are kept noiseless. If the $i$-th column vector is chosen to be corrupted, its components are perturbed by adding Gaussian noise with zero mean and a large standard deviation; hence, a corrupted column equals its noiseless counterpart plus a Gaussian noise vector.
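The construction can be reproduced with a short NumPy script; the corruption scale `sigma` is our placeholder, since the exact noise magnitude is not recoverable from the text.

```python
import numpy as np

def make_data(seed=0, d=100, rank=4, per_sub=40, n_sub=5, fr=0.2, sigma=0.5):
    """Synthetic data in the spirit of Sec. 5.1: five independent rank-4
    subspaces in R^100, 40 samples each; a fraction fr of columns corrupted."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((d, rank)))  # U_1: random orthonormal basis
    T, _ = np.linalg.qr(rng.standard_normal((d, d)))     # random rotation T
    blocks = []
    for _ in range(n_sub):
        blocks.append(U @ rng.standard_normal((rank, per_sub)))  # X_i = U_i Q_i
        U = T @ U                                                # U_{i+1} = T U_i
    X = np.hstack(blocks)                                # 100 x 200 matrix, rank 20
    corrupt = rng.random(X.shape[1]) < fr                # Fr = 20% corrupted columns
    X[:, corrupt] += sigma * rng.standard_normal((d, int(corrupt.sum())))
    return X, corrupt
```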

As usual, the dictionary $A$ is chosen as $X$ in this test, i.e., $A=X$. With the given noisy data $X$, our goal is to derive the block-diagonal affinity matrix $Z^*$ and recover the low-rank matrix by setting $\hat X=AZ^*$, or equivalently $\hat X=X-E^*$. To attain better performance, the value of the penalty parameter $\mu$ is taken as a nondecreasing sequence $10^{-6}\le\mu_i\le 10^{10}$ with the relationship $\mu_{i+1}=\rho\mu_i$ and $\rho=1.1$. Moreover, the weighting parameter is chosen as $\lambda=0.1$, which always achieves fruitful solutions, as verified in preparatory experiments. All tested algorithms start from the zero matrix and terminate when the changes between two consecutive iterations are sufficiently small, i.e.,
$$\max\big\{\|Z^{k+1}-Z^k\|_\infty,\ \|E^{k+1}-E^k\|_\infty\big\}\le\epsilon,$$

where $\|\cdot\|_\infty$ denotes the maximum absolute value of the components of a matrix, and $\epsilon$ is a tolerance used in all the following tests. To specifically illustrate the performance of each algorithm, we present two comparison results, in terms of the number of iterations and the running time, as the number of inner steps varies from 1 to 10 in Figure 1.
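The parameter schedule and termination rule just described can be wrapped in a small driver; the tolerance value is a placeholder, and the callable `step` stands for one outer iteration of any of the NMM variants.

```python
import numpy as np

def run_with_schedule(step, Z, E, rho=1.1, mu0=1e-6, mu_max=1e10,
                      tol=1e-4, max_iter=500):
    """Outer loop with the nondecreasing penalty sequence mu_{i+1} = rho*mu_i
    (1e-6 <= mu <= 1e10) and the consecutive-change stopping rule of Sec. 5.1."""
    mu = mu0
    for _ in range(max_iter):
        Z_old, E_old = Z, E
        Z, E = step(Z, E, mu)                    # one outer NMM iteration
        if max(np.abs(Z - Z_old).max(), np.abs(E - E_old).max()) <= tol:
            break                                # consecutive changes are small
        mu = min(rho * mu, mu_max)               # nondecreasing penalty update
    return Z, E
```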

Fig. 1 Comparison of NMM_v1, NMM_v2, and NMM_v3 in terms of the number of iterations (left) and the CPU time required (right) as the number of inner steps varies (the x-axis).

As can be seen from Figure 1, the number of iterations required by algorithms NMM_v2 and NMM_v3 decreases dramatically at the beginning and only slightly once the permitted number of inner iterations exceeds 5. It can also be observed that NMM_v3 needs fewer iterations but more computing time than NMM_v2. The reason is that each new $\hat J_{j+1}$ requires a full singular value decomposition (SVD), which may be the main computational burden in the inner iterative process. Another surprising observation is that the number of iterations required by NMM_v1 remains invariant regardless of the number of inner iterations. This is because the exact $Z^{k+1}$ and $J^{k+1}$ are obtained within only one step, owing to the special constraint $Z-J=0$.

    5.2. Compare with LRR and SLAL

To further verify the efficiency of the algorithms NMM_v2 and NMM_v3, we test them against a couple of solvers, LRR and SLAL, for performance comparison with different percentages of grossly corrupted data. The Matlab package of LRR is available at http://sites.google.com/site/guangcanliu/. When running LRR and SLAL, we set all the parameters to their defaults except for $\lambda=0.1$, which is the best choice for these data settings according to extensive preparatory experiments. The noisy data are created in the same way as in the previous experiment. In this test, the initial points, the stopping criterion, and all the parameter values are taken to be the same as in the previous test. Meanwhile, the quality of the restoration $\hat X$ is measured by means of the recovery error $\|\hat X-\bar X\|_F/\|\bar X\|_F$, where $\bar X$ is the original noiseless data matrix. Moreover, for algorithms NMM_v2 and NMM_v3, we fix the number of inner loops at 4 to balance the number of iterations and the computing time. The numerical results, including the number of iterations (Iter), the CPU time required (Time), and the recovery error (Error), are listed in Table 1.

    Table 1 Comparison results of NMM_v2 and NMM_v3 with LRR and SLAL.

It can be seen from Table 1 that, for all the tested cases, each algorithm obtains comparable recovery errors. It is further observed that, compared with NMM_v3, NMM_v2 requires more iterations but the least CPU time. Moreover, both NMM_v2 and NMM_v3 require fewer iterations than LRR, which indicates that more inner loops may decrease the total number of outer iterations. This important observation experimentally verifies that the proposed approaches can accelerate the convergence of LRR. Now we turn our attention to the state-of-the-art solver SLAL. We clearly see that SLAL is the fastest among the tested solvers. However, when the number of corrupted samples is relatively small (less than 60 percent), SLAL needs more iterations. From these limited performance comparisons, we conclude that our proposed algorithms perform quite well and are competitive with the well-known codes LRR and SLAL.

    §6. Concluding remarks

In this paper, we have proposed, analyzed, and tested three variants of ADMM for solving the nonsmooth convex minimization problem involving the $\ell_{2,1}$-norm and the nuclear norm. The problem appears mainly in the fields of pattern analysis, signal processing, and data mining, and is used to find and exploit the low-dimensional structure of given high-dimensional noisy data. The earliest solver, LRR, reformulated the problem into an equivalent model by adding a new variable and a new constraint, and derived the value of each variable alternately. Using problem (1.1) as an example, this paper showed that once one variable is obtained, the other two variables can be grouped together and then minimized alternately by the standard ADMM. For the variants NMM_v2 and NMM_v3, we numerically illustrated that, as the number of inner steps grows, both algorithms converge faster and faster in terms of outer iterations.

There is no doubt that when the inner process goes only once without achieving convergence, all the proposed methods reduce to LRR; surely, this is the main theoretical contribution of this paper. Unfortunately, the number of iterations required by NMM_v1 remains unchanged regardless of the number of inner steps. We believe that the exact solutions of the inner subproblems are already produced even when the inner loop goes only once. Moreover, we have performed comparisons with a couple of solvers from the recent literature, LRR and SLAL. The results showed that both NMM_v2 and NMM_v3 require fewer iterations to obtain reconstructions of similar quality. To conclude the paper, we hope that our method and its further modifications can find further applications in relevant areas of pattern analysis, signal processing, data mining, and others.

    Acknowledgements

    We are grateful to the reviewers for their valuable suggestions and comments.
