
    Total Variation Constrained Non-Negative Matrix Factorization for Medical Image Registration

IEEE/CAA Journal of Automatica Sinica, Issue 5, 2021

    Chengcai Leng, Hai Zhang, Guorong Cai, Zhen Chen, and Anup Basu, Senior Member, IEEE

Abstract—This paper presents a novel medical image registration algorithm named total variation constrained graph regularization for non-negative matrix factorization (TV-GNMF). The method augments non-negative matrix factorization (NMF) with a total variation constraint and graph regularization. The main contributions of our work are the following. First, total variation is incorporated into NMF to control the diffusion speed. The purpose is to denoise in smooth regions and preserve features or details of the data in edge regions by using a diffusion coefficient based on gradient information. Second, we add graph regularization into NMF to reveal the intrinsic geometry and structure information of features and enhance the discrimination power. Third, the multiplicative update rules and a proof of convergence of the TV-GNMF algorithm are given. Experiments conducted on datasets show that the proposed TV-GNMF method outperforms other state-of-the-art algorithms.

    I. INTRODUCTION

IMAGE registration is an important research topic for aligning two or more images of the same scene taken at different times, from different viewpoints, or by different sensors [1]. Registration is widely used in computer vision and medical image processing, including multimodal image fusion, medical image reconstruction, and the monitoring of tumors. For example, the fusion of multimodal information can be realized by registering two images, which provides better visualization of anatomical structures and functional changes to facilitate diagnosis and treatment [2]. Area-based registration methods [3] mainly use gray level information to optimize a maximum similarity measure, including mutual information (MI), by adapting optimization algorithms for registration [4]. Gong et al. [5] proposed a novel image registration method, comprising a pre-registration and a fine-tuning process, based on the scale-invariant feature transform (SIFT) and MI. Woo et al. [6] presented a novel registration method based on MI that incorporates geometric and spatial context to compute the MI cost function in regions with large spatial variation for multimodal image registration. However, these methods are very sensitive to intensity variations and suffer from noise interference. Feature-based methods for image registration directly detect salient features and construct feature descriptors, which are robust and invariant to noise, illumination, and distortion. SIFT [7] is one of the most popular methods and is invariant to rotation, scale, translation, and illumination changes. Rister et al. [8] extended SIFT to arbitrary dimensions by adjusting the orientation assignment and gradient histogram of key points.

We can often treat the feature matching problem as a graph matching problem in image registration, since spectral graph theory [9] is widely used for image segmentation [10], [11], graph matching [12]–[15], and image registration [16]–[21]. To make many algorithms practical in real-life applications, dimensionality reduction is necessary. In order to avoid the curse of dimensionality, some dimensionality-reduction matching or registration methods have been introduced [22]–[24]. Xu et al. [24] proposed such a method for high-dimensional data sets using the Cramer-Rao lower bounds to estimate the transformation parameters and achieve data set registration. In addition, some manifold learning methods [25] have also been presented, such as ISOMAP [26], locally linear embedding (LLE) [27], and the Laplacian Eigenmap [28]. However, many of these algorithms have high computational complexity and deal poorly with large data sets [29]. Liu et al. [30] proposed a text detection method based on morphological component analysis and a Laplacian dictionary, which can reduce the adverse effects of complex backgrounds and improve the discrimination power of dictionaries.

Recently, some low-rank matrix factorization methods have been introduced in data representation [31]. Among these methods, non-negative matrix factorization (NMF) [32] achieves a part-based representation for non-negative data sets, with applications in data clustering [33], [34] and data or image analysis [35], [36]. Some researchers, such as [37]–[40], have also incorporated manifold learning information into NMF. Li et al. [39] proposed a graph regularized non-negative low-rank matrix factorization (NLMF) method by adding graph regularization into NLMF to exploit the manifold structure information and utilizing robust principal component analysis (PCA). Shang et al. [40] proposed a novel feature selection method by adding sparse regression and dual-graph regularization to NMF to improve the feature selection ability. Ghaffari and Fatemizadeh [41] presented a new image registration method by introducing correlation into low-rank matrix theory, based on rank-regularized sum-of-squared-differences (SSD), to improve the similarity measures. In addition, there are still very few NMF-based methods used for image matching or image registration. Luo et al. have also published several papers [42]–[46] using non-negative latent factor models for high-dimensional and sparse matrices, which can be widely used in industrial applications and highly accurate web service QoS predictions. We will introduce a special sparse matrix factorization method for image registration called total variation constrained graph regularization for non-negative matrix factorization (TV-GNMF).

Rudin et al. [47] first proposed the total variation (TV) method, which is effective for image denoising and can enhance the boundary features of large data sets. It can be used for various pattern recognition tasks, such as hyperspectral unmixing [48]–[50], data clustering [51], image restoration or image fusion [52]–[54], and face recognition [55], [56]. Thus, TV regularization is incorporated into NMF to enhance the details or features of the data. Graph regularization can also be added to NMF to discover the intrinsic geometric and structural information of the data. In the differential form of TV regularization, a diffusion coefficient is used to control the diffusion speed. This coefficient denoises in smooth regions and preserves details in edge regions based on the gradient information. Therefore, our approach is a good part-based data representation that improves the data discrimination ability for clustering big data sets. We exploit this part-based data representation method to find better feature point matches for image registration.

In this paper, we propose a special part-based matrix factorization method, called TV-GNMF, which extends our previous work in [57]. The manifold graph regularization efficiently reveals the intrinsic geometric and structural information of the data, and the TV regularization denoises and preserves the sharp edges or boundaries to enhance the features of an image. We now explain why we incorporate TV regularization into TV-GNMF. In the TV regularization term, the diffusion coefficient 1/|∇H| is used to control the diffusion speed, which can denoise and enhance the edges or details based on the gradient information. If |∇H| has a large value in the neighborhood of a point, this point is considered to be an edge and the diffusion speed is lowered to preserve the edges. Otherwise, if |∇H| has a small value in the neighborhood of a point, the diffusion is strong, which helps remove noise. We develop novel iterative update rules, prove the convergence of our optimization technique, and give a matching algorithm based on TV-GNMF. Experimental results demonstrate the discrimination ability and better performance of our algorithm.
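To make the role of this coefficient explicit, recall the gradient flow associated with minimizing a TV term (a standard form following Rudin et al. [47]; the discretization used later in the paper may differ in its details):

$$\frac{\partial H}{\partial t}=\operatorname{div}\!\left(\frac{\nabla H}{|\nabla H|}\right)$$

Here 1/|∇H| acts as the diffusion coefficient: it is small near edges, so diffusion is weak and edges are preserved, and it is large in smooth regions, so diffusion is strong and noise is removed.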

The remaining sections are organized as follows: background work is introduced in Section II. Section III proposes the TV-GNMF method, the detailed multiplicative update rules, and the proof of convergence of our optimization method. Section IV presents the image matching algorithm based on TV-GNMF. Experimental results are presented in Section V, before the conclusions in Section VI.

    II. PRELIMINARIES

    A. Symbols

    First, we list some necessary symbols used in this paper in Table I.

    TABLE I SOME NECESSARY SYMBOLS

    B. NMF

    C. TV-NMF

Yin and Liu [56] proposed a new NMF model with a bounded total variation regularization term, referred to here as TV-NMF.

    D. GNMF

Graph regularization is introduced into NMF, yielding GNMF, to reveal the geometric information of the data [37].
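For reference, a common way to write this objective, following the graph regularized NMF formulation of Cai et al. [37] with the notation W ∈ R^{m×r}, H ∈ R^{r×n} of Table I (the constants and exact notation in the published equation may differ), is

$$O_{\mathrm{GNMF}}=\|V-WH\|_{F}^{2}+\lambda\,\operatorname{Tr}\!\left(HLH^{\top}\right),\qquad L=D-S,\qquad W\ge 0,\;H\ge 0,$$

where S is the weight matrix of the nearest-neighbor graph over the data points and D is its degree matrix.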

    III. TV-GNMF

In this section, we outline the idea behind the total variation method for enhancing or preserving edge features of data sets (images). The TV method is a form of anisotropic diffusion, which smooths selectively by using diffusion coefficients based on the gradient information to retain image features while eliminating noise. Therefore, TV regularization and graph regularization are integrated with the NMF model to preserve edge features and the intrinsic geometry and structure information of the data. The proposed model, called TV-GNMF, can enhance the intrinsic geometry and preserve edge characteristics of the data to improve discrimination ability for data clustering and image matching.

    A. Total Variation

In order to enhance the edge features of the data, we introduce TV regularization [47] in this paper.
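A standard form of the TV regularizer from [47], stated here for completeness (the discrete version used in the paper may differ in its details), is

$$TV(H)=\int_{\Omega}|\nabla H|\,dx\,dy\;\approx\;\sum_{i,j}\sqrt{\left(H_{i+1,j}-H_{i,j}\right)^{2}+\left(H_{i,j+1}-H_{i,j}\right)^{2}},$$

i.e., the integral (or discrete sum) of the gradient magnitude of H.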

    B. Multiplicative Update Rules

Based on TV regularization and graph regularization, the TV-GNMF model augments the NMF objective with both regularization terms, where λ, β ≥ 0 are parameters that balance the trade-off between the reconstruction error, the graph regularization term, and the TV regularization term.
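Consistent with the update rules of Algorithm 1 below and with the GNMF objective above, the TV-GNMF objective can be sketched as follows (the published equation may differ in notation):

$$\min_{W\ge 0,\;H\ge 0}\;O_{\mathrm{TV\text{-}GNMF}}=\|V-WH\|_{F}^{2}+\lambda\,\operatorname{Tr}\!\left(HLH^{\top}\right)+\beta\,TV(H),\qquad L=D-S.$$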

Algorithm 1 TV-GNMF Algorithm
Input: V ∈ R^{m×n}, S, D, and 1 ≤ r ≤ min{m, n}.
Initialization: W_0, H_0, λ, β, and k = 0.
For k = 0, 1, ... until convergence or the maximum number of iterations:
    Update H_{k+1} according to
        H_{k+1} = H_k ⊙ (W^T V + λHS + β div(∇H/|∇H|))_k / (W^T WH + λHD)_k
    Update W_{k+1} according to
        W_{k+1} = W_k ⊙ (VH^T)_k / (WHH^T)_k
    k = k + 1
End
Output: W ∈ R^{m×r}, H ∈ R^{r×n}.
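As an illustration, the following NumPy sketch implements the multiplicative updates of Algorithm 1 under stated assumptions: random non-negative initialization, np.gradient as the discrete gradient/divergence operator on H, and a small eps together with clipping of the numerator to keep the iterates non-negative (the last two are numerical safeguards not specified in the algorithm).

```python
import numpy as np

def tv_gnmf(V, S, r, lam=100.0, beta=2.0, n_iter=100, eps=1e-9, seed=0):
    """Sketch of the TV-GNMF multiplicative updates (Algorithm 1).

    V   : (m, n) non-negative data matrix.
    S   : (n, n) 0-1 weight matrix of the p-nearest-neighbor graph.
    r   : number of factors, 1 <= r <= min(m, n).
    lam : graph regularization parameter (lambda).
    beta: TV regularization parameter.
    Returns W (m, r) and H (r, n).
    """
    m, n = V.shape
    D = np.diag(S.sum(axis=1))                 # degree matrix of the graph
    rng = np.random.default_rng(seed)
    W = rng.random((m, r))
    H = rng.random((r, n))

    for _ in range(n_iter):
        # Curvature term div(grad H / |grad H|), approximated with np.gradient
        # on the r x n matrix H (finite differences along both axes).
        gy, gx = np.gradient(H)
        norm = np.sqrt(gx ** 2 + gy ** 2) + eps
        div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

        # Multiplicative update for H; the numerator is clipped at zero so the
        # factors stay non-negative even when the divergence term is negative.
        num_H = np.maximum(W.T @ V + lam * (H @ S) + beta * div, 0.0)
        den_H = W.T @ W @ H + lam * (H @ D) + eps
        H *= num_H / den_H

        # Multiplicative update for W (standard NMF rule).
        W *= (V @ H.T) / (W @ H @ H.T + eps)

    return W, H
```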

    We will describe a theorem related to the above iterative update rules along with the detailed proof of convergence.

    TABLE II COMPUTATIONAL OPERATION COUNTS FOR EACH ITERATION FOR DIFFERENT METHODS

    C. Proof of Convergence

    Theorem 1 guarantees convergence under the update rules based on (12) and (13). ■

    D. Complexity Analysis

The computational complexity of the TV-GNMF method will now be discussed and compared with the NMF and GNMF methods. Since Cai et al. gave the arithmetic operations of NMF and GNMF for each iteration in [37], we follow their results, as shown in Table II. The main difference between TV-GNMF and GNMF is the TV-norm component. Specifically, to generate the divergence function of the discrete gradient of the matrix H, we need to calculate the first and second derivatives of each element, which requires 9 floating-point additions and 3 floating-point divisions. Moreover, the update rule of the divergence function of each element needs 3 floating-point additions, 7 floating-point multiplications, and one floating-point division. In general, compared to GNMF, our method adds 12 floating-point additions, 7 floating-point multiplications, and 4 floating-point divisions for each iteration. Note that m denotes the number of rows of an input image, whose scale is much larger than 12. Therefore, the overall complexity of our TV-GNMF is also O(mnr). Details of the complexity analysis are summarized in Table II.

In Table II, Fladd, Flmlt, and Fldiv denote the number of floating-point additions, floating-point multiplications, and floating-point divisions, respectively; n represents the number of sample points; m is the number of features; and r and p stand for the number of factors and the number of nearest neighbors, respectively.

    IV. IMAGE MATCHING ALGORITHM BASED ON TV-GNMF

To avoid confusion, the first part tests the clustering performance of data sets directly represented by the matrix V, computing the matrices W and H based on (12) and (13), without using the image registration algorithm described below. The data sets include images with many features or details, and TV regularization can denoise and preserve the details or edges of features to improve clustering performance.

The second part evaluates matching performance on medical images. We construct the non-negative matrix from the geometric positions of feature points rather than from the images themselves. TV regularization can enhance and characterize the intrinsic relationship of feature points through diffusion that depends on the gradient information of the points. Further details on the matching algorithm of TV-GNMF can be found in [57].

V. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this section, we provide an experimental evaluation of the proposed TV-GNMF method for image clustering and registration. There are two aspects to this study. The clustering performance is evaluated in the first part, where we analyze image data sets to demonstrate the clustering performance based on the multiplicative update rules. Image matching performance is tested in the second part to show that the dimensionality reduction method has better discrimination ability for medical image registration.

    A. Data Sets

    The important statistics of the data sets used to evaluate the clustering performance are summarized in Table III. Further details can be found in [35], [37], [51].

    TABLE III INFORMATION ON THE DATA SETS

    B. Performance Evaluation and Comparisons

Performance is tested by comparing the labels obtained for each sample with the labels provided by the data sets. One metric is accuracy (AC), used to measure the percentage of correct labels obtained. The second metric is normalized mutual information (NMI), used to measure how similar two sets of clusters are. Detailed definitions of AC and NMI can be found in [33], [62].
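As a concrete illustration, the following sketch computes AC by finding the best one-to-one mapping between predicted and true labels with the Hungarian algorithm, and obtains NMI from scikit-learn; the precise definitions are those given in [33], [62], so this is only an approximate reference implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """AC: fraction of samples correctly labeled after the best one-to-one
    mapping between predicted cluster labels and ground-truth labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    # Contingency table: samples of true class cj that fall into cluster ci.
    cont = np.array([[np.sum((y_pred == ci) & (y_true == cj)) for cj in labels]
                     for ci in labels])
    row, col = linear_sum_assignment(-cont)    # maximize total matched samples
    return cont[row, col].sum() / y_true.size

def clustering_nmi(y_true, y_pred):
    """NMI between the predicted clustering and the ground-truth labels."""
    return normalized_mutual_info_score(y_true, y_pred)
```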

In order to demonstrate our method's performance on the above data sets, we compared TV-GNMF with two other related methods, i.e., the NMF [32] and GNMF [37] methods. The Frobenius norm is used to measure the similarity for all three methods. We construct the weight matrix S of (3) and (5) using the 0-1 weighting based on the p-nearest-neighbor graph, with p = 5 for the GNMF and TV-GNMF methods. In addition, the regularization parameter λ is set to 100 for the GNMF method [37], and the TV regularization parameter β is specified and tested in the experiments for the TV-GNMF method.
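For completeness, a sketch of the 0-1 weight matrix construction is shown below, using scikit-learn's kneighbors_graph on the sample vectors (the columns of V); the symmetrization step and the Euclidean distance are assumptions, since the paper only specifies 0-1 weighting on the p-nearest-neighbor graph.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_graph(V, p=5):
    """0-1 weight matrix S of the p-nearest-neighbor graph over the samples
    (columns of V), together with its degree matrix D."""
    X = V.T                                            # one row per sample
    A = kneighbors_graph(X, n_neighbors=p, mode="connectivity").toarray()
    S = np.maximum(A, A.T)                             # make the graph symmetric
    D = np.diag(S.sum(axis=1))                         # degrees, used in L = D - S
    return S, D
```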

Table IV gives the data clustering results on the above four normalized datasets. In the experiments, different cluster numbers are used on the Columbia University Image Library (COIL20) and Olivetti Research Laboratory (ORL) datasets for 100 iterations, and on the NIST Topic Detection and Tracking (TDT2) corpus for 50 iterations. The regularization parameter β is set to 2 for these three datasets in our TV-GNMF method. The number of iterations is set to 50 on the Pose, Illumination and Expression (PIE) face dataset, and the parameter β is set to 0.2 in our TV-GNMF method. The GNMF method outperforms NMF, indicating that GNMF preserves or reveals the geometric structure of the data in learning under varying angles on the COIL20 dataset and different lighting and illumination conditions on the PIE dataset. Surprisingly, the average AC and NMI of the GNMF method are lower than those of the NMF method for the ORL dataset. The GNMF method does not reveal the geometric information well because the ORL database consists of 40 distinct subjects with varying lighting, different facial expressions, and facial details. Our TV-GNMF method has high accuracy and normalized mutual information. The TV-GNMF method improves clustering performance because it combines the merits of graph and TV regularization to discover the geometric structure information and enhance feature details. The best results are highlighted in bold. In most cases, our TV-GNMF method has the best performance. However, in a few situations, GNMF has higher AC and NMI than ours for the underlined cases, such as when k = 30 for PIE. In these cases our method cannot preserve the sharp edges or boundaries to enhance the feature details, because the PIE dataset consists of 68 distinct subjects under different lighting and illumination conditions. In addition, the parameter β also affects the clustering performance. Overall, our TV-GNMF method outperforms NMF and GNMF: it preserves geometric structure information and enhances the edge features of the data, as demonstrated in Table IV. Note that our method outperforms the others in most cases, including every instance of the COIL20, ORL, and TDT2 datasets in Table IV. Even in the few situations where our method does not have the best score, it is within 2% of the top score. However, our model has two parameters, λ and β, which need to be chosen adaptively or empirically.

In addition, we use the ORL dataset as an example to test the robustness of our method to noise. We add Gaussian noise with mean 0 and variance 0.09, and run the NMF, TV-NMF, GNMF, and TV-GNMF methods under the same conditions, with parameter β = 2 and 50 iterations, for different cluster numbers. As shown in Table V, TV-GNMF has the best clustering results. This happens because TV regularization can remove noise and preserve the details or features of the data, and graph regularization can discover the intrinsic geometric and structural information of the data while removing noise and enhancing features. TV-NMF has better results than NMF and GNMF, because GNMF cannot discover or reveal the intrinsic geometric and structural information of the data well in the presence of noise.
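The noisy-ORL setting can be reproduced with a sketch such as the following, assuming the face images are scaled to [0, 1] and clipped back into range after the noise is added (both assumptions, as the paper does not state them):

```python
import numpy as np

def add_gaussian_noise(images, var=0.09, seed=0):
    """Add zero-mean Gaussian noise with the given variance to images in [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, np.sqrt(var), size=images.shape)
    return np.clip(noisy, 0.0, 1.0)
```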

    C. Parameter Evaluation

In this section, the stability of our TV-GNMF method is tested for various parameter settings. Our model has two important regularization parameters: λ and β. The GNMF method produces its best results when λ is set to 100. In our model, λ is also set to 100, and we vary the regularization parameter β to test stability. The performance of TV-GNMF as β varies on the COIL20 and PIE datasets is shown in Fig. 1, which shows that TV-GNMF is very stable with respect to β. Figs. 1(a)–1(c) give the clustering performance when the regularization parameter β varies from 0.1 to 20 for different numbers of classes, namely 8, 13, and 18, on the COIL20 dataset. Figs. 1(d)–1(f) present the clustering performance when β varies from 0.1 to 35 for different numbers of classes, namely 20, 35, and 50, on the PIE dataset. Over a wide range of the regularization parameter on the two data sets, TV-GNMF has consistently good and stable performance. For the COIL20 dataset, our method shows a relatively large fluctuation in the clustering results when β = 5. The reason is that the randomness of the initial values of W and H affects the clustering performance. The initial values are randomly generated under non-negative constraints when we execute TV-GNMF with β varying from 0.1 to 20 for different numbers of clusters, which can only ensure convergence to a local optimum as they are updated iteratively. However, in the second set of experimental results, the range of the regularization parameter is larger than in the first and the accuracy is higher.

    TABLE IV COMPARISONS ON COIL20, PIE, ORL AND TDT2 DATASETS

    TABLE V COMPARISONS ON ORL DATASET WITH NOISE

In addition, we also use the ORL dataset as an example to test the effectiveness of our method with different p, based on the GNMF and TV-GNMF methods under the same conditions, with λ = 100, 50 iterations, the number of clusters set to 16, and β set to 2 in the TV-GNMF method. As we have seen, GNMF and TV-GNMF use a p-nearest-neighbor graph to capture the local geometric structure information on a scatter of data points. GNMF and TV-GNMF achieve better clustering performance under the assumption that two neighboring data points share the same label. When the number of nearest neighbors p is larger, this assumption is more likely to fail. This is why the performance of GNMF and TV-GNMF declines, while TV-GNMF remains superior to GNMF, as p increases, as shown in Table VI and Fig. 2.

    D. Medical Image Registration Performance

    Fig. 1. Performance of TV-GNMF for varying regularization parameter β on COIL20 and PIE datasets.

    Fig. 2. The performance of GNMF and TV-GNMF decreases as p increases on ORL dataset.

    TABLE VI COMPARISONS ON ORL DATASET WITH THE DIFFERENT p

In this section, a novel low-rank preserving technique is proposed to match feature points and to verify the discrimination ability to achieve one-to-one correspondences. We must emphasize that feature point detection or feature point description is not our research focus. The key issue is how our TV-GNMF method exhibits the discriminating power to capture the intrinsic geometry and structure information and find one-to-one correspondences between feature points. In order to test the matching performance, we applied the method to medical images, demonstrating that it has the discriminating power to achieve stable one-to-one feature correspondences. Furthermore, we compare the matching results of our TV-GNMF method with the projection clustering matching method (Caelli's method) [23] and Zass' method [63].

    Fig. 3. Matching results: (a) and (b) Caelli’s method; (c) and (d) Zass’ method; (e) and (f) Our TV-GNMF method.

The 32nd slice of the T1 and T2 images of a magnetic resonance imaging (MRI) sequence is used for testing, and the image matching results are given in Fig. 3. T1 denotes prominent tissue T1 relaxation (longitudinal relaxation) differences, which are used to observe anatomical structures. T2 denotes prominent tissue T2 relaxation (transverse relaxation) differences, which are used to show tissue lesions. We use the Harris Corner Detector [64] to extract 27 feature points from Figs. 3(a), 3(c), and 3(e), and 38 feature points from Figs. 3(b), 3(d), and 3(f). Fig. 3(a) clearly shows some two-to-one mismatches. Zass' method and our method achieve one-to-one correspondences in Figs. 3(c) and 3(e), respectively. In addition, some feature points are added, as shown in Figs. 3(b), 3(d), and 3(f). More many-to-one correspondences are produced by Caelli's method in Fig. 3(b) when the number of feature points is increased. The reason is that if the distances between some of the extracted points are very small, the points are more likely to be assigned to the same class, and thus more many-to-one correspondences are produced. Zass' method is better than Caelli's method because the matching problem is formulated in a probabilistic framework based on hypergraphs. However, this method also produces some mismatches, as shown in Fig. 3(d). Despite some feature points being very close to each other, our method can still find one-to-one correspondences, as seen in Fig. 3(f). This result indicates that our method has better discrimination ability to improve matching performance, thereby achieving robust image registration. We also use the computation time for a quantitative analysis and report the computation time for the entire matching process, including feature point extraction, in Table VII. This table indicates that our method needs less computation time. Please see [57] for additional details.

In addition, we also use our method to test the matching ability compared to a more classical and effective method, the coherent point drift (CPD) method [65]. We use the Harris Corner Detector to extract 156 feature points in the T1 (red "*") and T2 (blue "o") images of the 24th slice of an MRI sequence. This experiment is intended to show the effectiveness of our method. We execute the CPD algorithm and our matching algorithm on the feature point sets. Both methods have good matching performance, as shown by the experimental matching results in Fig. 4. However, our method takes 0.481 s, less than the 0.990 s needed by the CPD method. This indicates that we have introduced an effective matching method that is also computationally more efficient.

TABLE VII COMPARISON OF THE COMPUTATION TIME (s) AS SHOWN IN FIG. 3

    Fig. 4. Matching results for a feature point set.

To test the accuracy of registration, the root mean square error (RMSE) is used as the evaluation metric. Detailed results can be found in [57].
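The usual form of this measure, with T the estimated transformation and (p_i, q_i) the i-th matched point pair (the precise variant used is the one in [57]), is

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\|T(p_{i})-q_{i}\right\|^{2}}.$$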

    TABLE VIII COMPARISON OF COMPUTATION TIME AND ACCURACY FOR DIFFERENT IMAGE MODALITIES

    Fig. 5. Plot of accuracy considering different slices for the different patients of Table VIII.

Finally, to verify the discrimination ability and robustness under different medical image modalities, we report the accuracy, defined as Nc/N, where Nc denotes the number of correct matches and N denotes the total number of feature points. Table VIII summarizes eight experiments on different patients, with each patient repeated twice. It also shows the computation time and accuracy for different patients from the brain datasets [66], compared with Caelli's method. These experiments show that the proposed method has better discrimination ability in finding one-to-one correspondences and produces good matching results. The reason for the large fluctuation in accuracy of Caelli's method across different patients is that if the extracted feature points are very close to each other, they are more likely to be assigned to the same class. This produces many-to-one correspondences and creates a large fluctuation in accuracy. Fig. 5 shows the matching performance for the different patients in Table VIII. In this figure, the y-axis represents the accuracy and the x-axis denotes the patients. Different numbers of feature points are obtained by using the Harris Corner Detector under the same conditions for different patients, and the number of feature points detected is relatively small. Therefore, the computation time is less than 1.0 s and the time differences are not large. However, the results (in bold) are not as good for some patients based on Caelli's method, as shown in Table VIII and Fig. 5. Thus, the experimental results indicate that our method is robust and has more discrimination ability than Caelli's.

Fig. 6. Discrimination and robustness considering the same patient for different numbers of feature points. (a) Feature point extraction results of reference and sensed images; (b)–(d) Caelli's method; (e)–(g) TV-NMF method; (h)–(j) GNMF method; (k)–(m) TV-GNMF method.

Fig. 6 also shows the discrimination ability and robustness on the 65th patient's PD and T2 images as the number of feature points increases. PD reflects the difference in hydrogen proton content among different tissues, i.e., a comparison of hydrogen proton density in prominent tissues. Fig. 6 compares the matching results based on Caelli's method, the TV-NMF method, the GNMF method, and our method. We can see the matching results, whether correct or incorrect, more clearly when there are relatively few feature points and matching lines. For example, in Figs. 6(d), 6(g), 6(j), and 6(m), the matching results have many matching lines connecting the reference and sensed images. This makes it difficult to see the texture of the images due to the many mismatches, as shown in Fig. 6(d). In order to avoid this problem, we show the decomposition results with the feature points of the reference image (left) and the sensed image (right), which are first extracted as shown in Fig. 6(a). Then, these points are used for image matching, as shown in Figs. 6(d), 6(g), 6(j), and 6(m). For these experimental results with more feature points, our method still has better matching results, as shown in Fig. 6(m), than the TV-NMF method (Fig. 6(g)) and the GNMF method (Fig. 6(j)). However, Caelli's method has completely different results, as shown in Figs. 6(b)–6(d), for different numbers of feature points. This indicates that our method has good discrimination ability and robustness, and achieves one-to-one correspondences regardless of the number of feature points.

    E. Summary

    Based on the theory and empirical studies, we summarize that:

1) The proposed TV-GNMF model is able to accurately achieve data clustering and image registration in a low-dimensional feature space. Hence, TV-GNMF outperforms other state-of-the-art algorithms in clustering accuracy, registration accuracy, and time efficiency.

2) The total variation constraint and graph regularization can control the diffusion speed to denoise and preserve the features or details of the data. This is achieved by a diffusion coefficient based on the gradient information; the graph regularization, in turn, reveals the intrinsic geometric and structural information of features to enhance the discriminating power.

    3) Iterative update rules are developed and a proof of convergence for the TV-GNMF algorithm is given.

    VI. CONCLUSIONS

In this paper, we proposed a novel matrix factorization method called TV-GNMF, which can effectively remove noise and preserve the data features by utilizing total variation. Our method can also reveal the intrinsic geometric and structural information of the data to improve discrimination ability. Experimental results on data sets and images indicate that TV-GNMF is a better low-rank representation method for data clustering and image registration. Two parameters, λ and β, play a key role in our model. How to adaptively choose the values of λ and β will be investigated in our future work.

ACKNOWLEDGMENT

    The authors would like to thank Prof. D. Cai, in the College of Computer Science at Zhejiang University, China, for providing his code for implementing GNMF.
