
    Software Defect Distribution Prediction Model Based on NPE-SVM

China Communications, 2018, Issue 5

    Hua Wei, Chun Shan, Changzhen Hu, Huizhong Sun*, Min Lei5

    1 School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China

    2 China Information Technology Security Evaluation Center, Beijing 100085, China

    3 Beijing Key Laboratory of Software Security Engineering Technology, School of Software, Beijing Institute of Technology,Beijing 100081, China

    4 Information Security Center, Beijing University of Posts and Telecommunications, Beijing 100876, China

    5 Guizhou University, Guizhou Provincial Key Laboratory of Public Big Data, Guizhou Guiyang 550025, China

    I. INTRODUCTION

In recent years, machine learning technology has developed rapidly and has been widely used in many research fields, including the prediction of software defect distribution [1][2][3][4][5]. Software defect distribution prediction plays an important role in the whole process of software development. Timely and accurate prediction of defective software modules greatly improves the effective allocation of software testing resources [6]. Many researchers have collected training samples by extracting the software metric attributes of software modules and constructed software defect distribution prediction models employing different kinds of machine learning technology. Machine learning technology was introduced to the field of software defect prediction in [7][8]. David Bowes et al. [9] performed a sensitivity analysis to compare the performance of different algorithms, e.g., Random Forest, Naive Bayes, RPart and SVM classifiers, when predicting defects in NASA, open source and commercial datasets. Yongquan Yan et al. [10] proposed a method that gives practical guidance for forecasting software aging using machine learning algorithms. Liqiang Zhang et al. [11] presented a measurement framework for evaluating security metrics. Zibin Zheng et al. [12] proposed a research framework for predicting the reliability of individual software entities as well as whole Internet applications. Pengcheng Zhang et al. [13] proposed a novel Bayesian combinational model that predicts QoS by continuously adjusting the credit values of the basic models so as to maintain good prediction accuracy. Chengsheng Yuan et al. [14] proposed a new software-based liveness detection approach using multi-scale local phase quantization (LPQ) and principal component analysis (PCA).
Yiqi Fu et al. [15] conducted a comparative analysis of different machine learning based defect prediction methods and found that different algorithms have both advantages and disadvantages in different evaluation tasks. Malhotra et al. [16] assessed the performance capability of machine learning techniques in existing research on software fault prediction, comparing the machine learning techniques with statistical techniques and with other machine learning techniques.

In this paper, the authors propose an improved NPE-SVM software defect prediction model based on the NPE algorithm, which is a kind of manifold learning algorithm.

However, none of the above research methods considers two practical problems in software defect prediction. Firstly, there is a serious class imbalance problem in software defect data. Boehm pointed out that 20 percent of the software modules in a software system contain 80 percent of the software defects [17]. In other words, most software defects appear in a few software modules. Secondly, it is costly to collect a large amount of labeled training data. In practice, it takes a lot of time, manpower and other test resources, and expert knowledge may even be needed to collect labeled training data. Moreover, it is almost unrealistic to obtain large quantities of labeled sample data for projects without historical versions [18]. This poses a great challenge to existing supervised forecasting models, since it is difficult to construct an ideal classifier with a small quantity of defect training data. In this paper, we propose a software defect distribution prediction model based on the improved NPE-SVM algorithm. The main contributions include the following three aspects:

(1) The improved NPE algorithm tackles the central difficulties of software defect measurement in high-dimensional, small-sample cases, i.e., the singularity of the generalized characteristic matrix and the completeness and robustness of the solution. By using matrix analysis methods, the solution can be processed effectively and applied in practice without significantly increasing the complexity of the algorithm and without loss of useful information.

(2) The improved NPE algorithm transforms the singular generalized eigenvalue problem in the NPE method into two eigenvalue decomposition (EVD) problems. The first decomposition simplifies the calculation of the generalized features without losing useful discriminating information; the second decomposes the unstable generalized features and converts the problem into a stable one.

(3) Based on the intrinsic matrix structure and correlation of software defect data, this paper discusses the learning ability of directly applying traditional manifold learning methods to matrix data representations, further reducing computational complexity while maintaining the structural characteristics of the software defect measurement metadata.

In order to predict the various defects in software more accurately and improve software quality, it is necessary to reduce the dimensionality of the high-dimensional software metric data. Manifold learning is an important method for dealing with high-dimensional data, as it can discover the real structure hidden in high-dimensional software metric data. At present, the main methods proposed by researchers are Locally Linear Embedding (LLE) [19] and Isometric Feature Mapping [20]. However, they share a common defect: the mapping is defined only on the training data and cannot be applied directly to new test data, which leads to the out-of-sample problem. In this paper, a software defect distribution prediction model based on the improved NPE-SVM is proposed to solve these problems. By adjusting the correlation coefficients between data, the model handles the high-dimensional, small-sample setting and then extracts the features of the low-dimensional data. We compared several nonlinear dimensionality reduction algorithms to examine the influence of the extracted software defect features on the prediction results. Through experimental verification, we find that the model greatly improves the accuracy of software defect distribution prediction.

The rest of this paper is organized as follows. Section II presents some related work and our optimized NPE algorithm. In Section III, we describe the NPE-SVM software defect prediction model. In Section IV, we provide a general discussion of commonly used defect prediction techniques and metrics. Finally, Section V gives conclusions and future work.

    II. THE NPE ALGORITHM AND ITS IMPROVEMENT

    2.1 NPE Algorithm

The main idea of the NPE algorithm is that in the high-dimensional space, each data point is represented by its K neighborhood points. After dimensionality reduction, the weight of each nearest neighbor point is kept unchanged, and the corresponding data points are reconstructed in the reduced dimension so that the reconstruction error is minimized [21].

Suppose that there is a data set {x_1, x_2, ..., x_N} ⊂ R^D, where D is the dimension of the observed data and N is the sample size. The data matrix is denoted as X = [x_1, x_2, ..., x_N] ∈ R^(D×N). The essence of linear dimensionality reduction is to find a set of linear projection directions g_1, g_2, ..., g_d, with the projection matrix denoted by G = [g_1, g_2, ..., g_d] ∈ R^(D×d), such that the low-dimensional embedding y_i = G^T x_i after projection is the low-dimensional representation of the original data x_i. The specific implementation process is as follows:

(1) Finding nearest neighbor points: according to a chosen criterion, a certain number of nearest neighbor points is selected for each data point x_i. For example, for each sample point x_i, the K closest points, typically under the Euclidean distance, are selected as nearest neighbor points. In pattern classification applications, since the label information of the training data is known, this label information is often used to assist in selecting nearest neighbor points: a point is selected as a nearest neighbor if and only if the two samples share the same label and one point is among the K nearest neighbors of the other.

(2) Calculating the optimal reconstruction weight matrix. For each data point x_i, linear reconstruction is performed locally with its nearest neighbor data points x_j by minimizing the reconstruction error over all data points:

min_W Σ_i ‖x_i − Σ_j w_ij x_j‖²   (1)

The linear reconstruction error function is minimized when the weight matrix W satisfies the sparseness condition w_ij = 0 whenever the j-th data point does not belong to the nearest neighbors of the i-th data point, and the normalization condition Σ_j w_ij = 1. Solving this constrained least-squares problem gives the optimal weight matrix.
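The constrained least-squares solve behind step (2) can be sketched in a few lines of numpy. The regularization term is an implementation choice (not from the paper) that keeps the local Gram matrix invertible when there are more neighbors than dimensions:

```python
import numpy as np

def reconstruction_weights(x, neighbors, reg=1e-3):
    # Solve for w minimizing ||x - sum_j w_j * neighbor_j||^2
    # subject to the normalization condition sum_j w_j = 1.
    Z = neighbors - x                        # neighbors shifted to x, shape (K, D)
    C = Z @ Z.T                              # local Gram matrix, shape (K, K)
    C += reg * np.trace(C) * np.eye(len(C))  # regularize: C is singular when K > D
    w = np.linalg.solve(C, np.ones(len(C)))  # Lagrange-multiplier solution
    return w / w.sum()                       # enforce the sum-to-one constraint

x = np.array([0.5, 0.5])
nbrs = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
w = reconstruction_weights(x, nbrs)          # weights reconstructing x from nbrs
```

The sparseness condition is implicit here: only the K supplied neighbors get nonzero weights, and the full W simply stores these per-row solutions.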

(3) Obtaining the low-dimensional embedding representation. In order to maintain the local reconstruction relationships between the samples, the last step of NPE constructs the embedded coordinate representation of the data in the d-dimensional Euclidean space, using the reconstruction weights w_ij of the original high-dimensional data space as constraints. The optimization objective function can be expressed as

min_G Σ_i ‖y_i − Σ_j w_ij y_j‖²,  with y_i = G^T x_i   (2)

In the above expression, c is a suitable positive constant which makes g a unit vector. Minimizing formula (2) ensures that the weights w_ij that characterize the local neighbor relationships of the original space are also used to reconstruct the embedded representation in the low-dimensional space. The low-dimensional data will then still retain some of the local geometry of the original data.

According to a simple algebraic deduction, the above formula (2) can be transformed into:

min_g g^T X M X^T g  subject to  g^T X X^T g = c   (3)

where M = (I − W)^T (I − W).

It is not hard to prove that the column vectors of the linear projection matrix G are the eigenvectors corresponding to the d smallest eigenvalues of the following generalized eigenvalue problem:

X M X^T g = λ X X^T g   (4)
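In the nonsingular case (N > D), this generalized eigenproblem can be solved directly. A minimal sketch on synthetic data, using `scipy.linalg.eigh` with a second matrix argument; the sizes and the random row-normalized weight matrix are purely illustrative:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
D, N = 5, 50                       # N > D, so X X^T is nonsingular
X = rng.standard_normal((D, N))

W = rng.random((N, N))
W /= W.sum(axis=1, keepdims=True)  # stand-in for the row-normalized NPE weights
M = (np.eye(N) - W).T @ (np.eye(N) - W)

A, B = X @ M @ X.T, X @ X.T
vals, vecs = eigh(A, B)            # generalized EVD A g = lambda B g, ascending
G = vecs[:, :2]                    # keep the 2 smallest generalized eigenvectors
Y = G.T @ X                        # low-dimensional embedding, shape (2, N)
```

When N << D this direct route fails because B = X X^T is singular, which is exactly what Section 2.2 analyzes.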

The first two steps of NPE are exactly the same as those of its nonlinear counterpart, the LLE algorithm; the two differ in their optimization objective functions and constraints.

    2.2 Analysis of the problems

In practical applications, the existing observation matrix may lose part of the information of the original signal after compressive sampling [22]. If the data obtained has high-dimensional, small-sample characteristics, i.e., the data dimension D is high and the number of samples N is relatively small with N << D, then the implementation of the NPE algorithm will encounter the following problems:

(1) The feature matrices X M X^T and X X^T have dimension D×D. When D is very large, these feature matrices become too large, so it is difficult to calculate the linear transformation by solving the spectral decomposition X M X^T g = λ X X^T g of the generalized feature matrix, which requires a lot of time and space.

(2) The rank of the feature matrix X X^T satisfies rank(X X^T) = rank(X) ≤ min(D, N) = N, where X X^T is a D×D matrix. So in the high-dimensional, small-sample case, X X^T is a singular matrix, which makes the direct implementation of NPE infeasible.
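This rank argument is easy to verify numerically; a small numpy check with illustrative sizes D = 100 and N = 10:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 100, 10                     # high dimension, small sample: N << D
X = rng.standard_normal((D, N))
S = X @ X.T                        # the D x D feature matrix X X^T
r = np.linalg.matrix_rank(S)
# rank(X X^T) = rank(X) <= min(D, N) = N = 10, far below D = 100,
# so S is singular and the generalized eigenproblem cannot be solved directly.
```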

    2.3 Improvement of NPE algorithm

In order to solve the above problems, we modify the computation of the linear projection directions in (3), the last step of the NPE algorithm, for the high-dimensional, small-sample case. The following is the specific process of the improved NPE algorithm.

    (1) Calculating the eigen-decomposition of matrix

where Λ_r is the diagonal matrix of the non-zero eigenvalues arranged from large to small, and V_r and Ṽ_r are composed of the eigenvectors corresponding to the non-zero eigenvalues and to the zero eigenvalues, respectively.

(2) Calculating the matrices U_r and Σ_1 in the feature decomposition of the matrix

    where

    (3) Calculating the eigen-decomposition of matrix

(4) The obtained linear projection matrix G consists of the first d unit column vectors of

The improved NPE algorithm has the following desirable features. It solves the singularity problem of the generalized feature decomposition in the high-dimensional, small-sample case and does not require additional intermediate dimensionality reduction steps. It can decompose high-dimensional data effectively; the resulting feature vectors are more stable and correct, and the computational complexity and cost are reduced effectively.
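One way to realize the two-EVD idea is sketched below. The exact matrices the paper decomposes in steps (1)-(3) are not fully recoverable from the text, so this is an assumed whitening-based variant: the first EVD diagonalizes X X^T restricted to its non-zero eigenvalues, and the second EVD is performed on the whitened, well-posed matrix:

```python
import numpy as np

def improved_npe(X, M, d, tol=1e-10):
    """Two-EVD sketch: replace the singular generalized problem
    X M X^T g = lam X X^T g by (1) an EVD of X X^T restricted to its
    non-zero eigenvalues, then (2) an EVD of the whitened matrix."""
    S = X @ X.T
    lam, V = np.linalg.eigh(S)           # first EVD (ascending eigenvalues)
    keep = lam > tol * lam.max()         # discard the null space of X X^T
    U = V[:, keep] / np.sqrt(lam[keep])  # whitening basis: U^T S U = I
    T = U.T @ (X @ M @ X.T) @ U          # second EVD is small and nonsingular
    mu, Q = np.linalg.eigh(T)
    return U @ Q[:, :d]                  # projection matrix G, shape (D, d)

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 10))        # D = 50 >> N = 10: singular case
W = rng.random((10, 10))
W /= W.sum(axis=1, keepdims=True)
M = (np.eye(10) - W).T @ (np.eye(10) - W)
G = improved_npe(X, M, 3)
```

The returned columns satisfy G^T (X X^T) G = I, the normalization the generalized problem imposes, without ever inverting the singular D×D matrix.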

    III. THE SOFTWARE DEFECT DISTRIBUTION PREDICTION MODEL

    3.1 NPE-SVM software defect prediction model

    Based on the NPE-SVM algorithm, the software defect distribution prediction model is divided into the following steps:

    Step 1: Obtaining the software defect dataset;

Step 2: A training set is selected from the dataset, and the improved NPE algorithm is used to reduce the dimension of the training set;

Step 3: The reduced-dimension data obtained in the second step is used as the input dataset of this step. The radial basis function (RBF) support vector machine (SVM) has the advantage of using a regularization parameter to control the number of support vectors and margin errors in software defect prediction [23]. Recently, an accurate on-line algorithm was proposed for training v-support vector classification, which can handle a quadratic formulation with a pair of equality constraints [24]. The kernel function of the support vector machine in the model is therefore the RBF function, as shown in (8). According to the defined value interval and step size, a grid search combined with 10-fold cross validation is used to optimize the parameters and obtain the optimal parameters;

Step 4: Model testing is performed using the optimal parameters obtained in the third step;

Step 5: The software defect distribution is predicted on the reduced-dimension dataset. If the prediction result satisfies the termination condition, the optimized software defect distribution prediction model is obtained and the process ends. Otherwise, the third step is repeated to continue optimizing the model. In this process, the termination condition is that the prediction accuracy of the model reaches a given threshold or the number of cycles exceeds the maximum number of cycles.
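The five steps above can be sketched end to end. The NASA data and the NPE projection are stubbed with synthetic features here (the label rule is invented for illustration), and scikit-learn's `SVC` and `GridSearchCV` stand in for the paper's implementation:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 9))          # stand-in for Step 2 output: d = 9 features
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(int)  # synthetic defect label

# Steps 3-4: RBF-kernel SVM with C and gamma chosen by grid search
# combined with 10-fold cross validation (the grids are illustrative).
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                    cv=10)
grid.fit(X, y)
acc = grid.best_score_                     # Step 5: compare against the threshold
```

In the real model, `X` would be `(G.T @ features.T).T` from the improved NPE step, and the loop back to Step 3 would rerun the search when `acc` falls below the chosen threshold.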

    3.2 Parameter selection of NPE-SVM

The NPE-SVM software defect distribution prediction model consists of two parts: the improved neighborhood preserving embedding (NPE) and the support vector machine (SVM). (Support vector ordinal regression (SVOR) is a popular related method for tackling ordinal regression problems.) In the neighborhood preserving embedding part, two parameters need to be determined: the neighborhood size K and the embedding dimension d. In the support vector machine part, there are also two parameters to be determined, namely the kernel parameter σ and the penalty factor C.

    3.2.1 Parameter selection of NPE

The NPE algorithm involves the choice of K and d. K stands for the size of the local neighborhood, and there is no uniform theoretical guidance for selecting its value. If K is too large, each neighborhood will be close to the whole dataset and cannot reflect the characteristics of the local neighborhood. On the other hand, if K is too small, the data topology cannot be effectively preserved in the low-dimensional space. Therefore, the choice of K is of great importance. Since there is no unified theory to guide the selection of this parameter beyond the empirical experience of extensive previous literature, the empirical value K = 16 is chosen in this paper.

d refers to the dimensionality of a dataset after the dimension reduction operation. The choice of this parameter needs to consider the characteristics of the dataset itself, following the principle d < K. In this paper, the method of maximum likelihood estimation (MLE) is used to estimate the intrinsic dimensionality of the software defect datasets.
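The paper does not spell out which MLE variant it uses; a common choice is the Levina-Bickel nearest-neighbor estimator, sketched here on data with a known 2-dimensional structure embedded in 5 dimensions:

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel style MLE of intrinsic dimension (an illustrative
    choice; the paper only says 'MLE'). X has shape (n_samples, n_features)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(D2, np.inf)                         # exclude self-distances
    knn = np.sqrt(np.sort(D2, axis=1)[:, :k])            # k nearest-neighbor distances
    # per-point estimate m_i = (k-1) / sum_{j<k} log(T_k / T_j)
    dims = (k - 1) / np.log(knn[:, -1:] / knn[:, :-1]).sum(axis=1)
    return float(dims.mean())

rng = np.random.default_rng(0)
Z = rng.random((500, 2))                   # truly 2-dimensional data
X = np.hstack([Z, np.zeros((500, 3))])     # embedded in 5 ambient dimensions
est = mle_intrinsic_dim(X, k=10)           # should land near 2
```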

    3.2.2 Parameter selection of SVM

The model in this paper adopts a grid search method combined with 10-fold cross validation to select the parameters of the SVM algorithm. This method is used to optimize the penalty parameter C of the SVM classifier and the parameter σ of the kernel function, finding the values of C and σ that maximize the classification accuracy of the SVM. In this way, the optimal classification function of the SVM classifier can be determined, and the trained SVM classifier can then be obtained.

    IV. RESULTS AND DISCUSSION

    4.1 Datasets

The experimental data used are the MDP datasets of NASA [25], which are widely applied in software defect prediction studies. The collection contains 13 datasets, as shown in Table I. Each dataset contains a number of samples, each of which corresponds to a software module; each software module is described by several static code attributes together with a defect label. The static code attributes in each data item include lines of code (LOC), the Halstead attributes [26], and the McCabe attributes [27]. In this paper, the CM1, KC3, MW1, PC1 and PC5 datasets from NASA are selected for the experiments.

    Table I. The 13 datasets provided by NASA.

4.2 Influence comparison of feature extraction parameters

In order to verify the predictive ability of the proposed model, we adopted 10-fold cross validation in the experiments. The experimental data are randomly divided into 10 subsets; each experiment takes 1 subset as the test set and the rest as the training set. In this way, a total of 10 experiments are conducted, each with its own test set. Finally, the performance of the model is evaluated with the average over the 10 experiments.
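The 10-fold protocol described above amounts to the following index bookkeeping (a minimal numpy sketch; the function name is ours):

```python
import numpy as np

def ten_fold_indices(n, seed=0):
    # Randomly split n sample indices into 10 disjoint subsets; each fold
    # serves once as the test set and the remaining nine as training data.
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 10)
    for i in range(10):
        test = folds[i]
        train = np.concatenate(folds[:i] + folds[i + 1:])
        yield train, test

splits = list(ten_fold_indices(103))       # works for any n, not only multiples of 10
```

Averaging the per-fold metric over the 10 `(train, test)` pairs gives the reported performance.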

In this paper, accuracy, precision, recall and F-measure (the harmonic mean of precision and recall) [28] are used to evaluate the predictive power of the model. Because d < K and K = 16, we have d < 16, d ∈ N*. This limited value space is the premise of the different experiments. As shown in figure 1, the results of a comparative test of different embedding dimensions d under otherwise identical conditions are presented.
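The four evaluation measures can be computed directly from the confusion matrix; a small sketch, treating label 1 as "defective":

```python
import numpy as np

def metrics(y_true, y_pred):
    # Confusion-matrix counts for the positive (defective) class.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    accuracy  = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f_measure

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
acc, prec, rec, f1 = metrics(y_true, y_pred)
```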

    In this experiment, the maximum likelihood estimation method is used to estimate the dimension of the experimental dataset. When d=9, the improved NPE-SVM software defect distribution prediction model has the best prediction effect.

    4.3 Comparison of different prediction algorithms for the same feature

When d = 9, the improved NPE-SVM model has the best prediction effect. Under the same features, we compare the prediction effects of the improved NPE-SVM model with the single SVM model, the NPE-SVM model and the LLE-SVM model on the 4 evaluation indexes. The specific experimental results on the different datasets are shown in Tables II, III, IV, V and VI.

In order to display the results more intuitively, the experimental results are shown as line graphs in figure 2.

As can be seen clearly in the tables and figures, the improved NPE-SVM model outperforms the single SVM model, the LLE-SVM model and the NPE-SVM model on all 4 criteria. The experimental results illustrate two points. On the one hand, under the same conditions, the prediction effect with dimensionality reduction is better than without it, which indicates that there is data redundancy in the software defect dataset. On the other hand, the improved NPE-SVM software defect distribution prediction model solves this redundancy problem well and improves prediction accuracy, precision, recall and F-measure.

    4.4 Time complexity

    The computational complexity of the dimension reduction method has a direct impact on its applicability. In this subsection, we give a brief analysis of the computational complexity of different methods.

Suppose that we have a total of N samples from k individuals, each represented by a matrix of dimension m × n, usually with N << mn. The computational complexity of the various methods is dominated by the eigen-analysis of the feature matrix. When neighbors are selected to calculate the weight matrix, this calculation also contributes significantly to the computational complexity. For SVM, LLE-SVM and NPE-SVM, in the high-dimensional, small-sample case, pre-dimensionality-reduction methods are usually used to overcome the singularity problem of the original feature matrix. The intermediate dimensions of their pre-reduction are usually N−k, N and N, respectively. Since the computational complexity of the eigenvalue analysis of an n×n full matrix is O(n³), we can obtain the computational complexity of the eigenvalue analysis of each method.

    V. CONCLUSION AND FUTURE WORK

In order to effectively predict defective software, more measurement attributes are needed. For high-dimensional software attribute measurement data, traditional software defect prediction methods suffer from the "curse of dimensionality" problem. In this paper, we proposed an improved NPE-SVM software defect prediction model based on the NPE algorithm, which is a kind of manifold learning algorithm. The basic idea is to use a manifold learning method to reduce the dimension of the data, and then use the SVM classification algorithm to predict the defects in the software. The experimental results show that the model effectively improves the accuracy and efficiency of software defect prediction. Future work is to study how to apply the model to the actual software development process and use it to guide actual software development.

    Fig. 1. NPE algorithm prediction result in different d.

    Table II. Prediction result of data set CM1.

    Table III. Prediction result of data set KC3.

    Table IV. Prediction result of data set MW1.

    Table V. Prediction result of data set PC1.

    Table VI. Prediction result of data set PC5.

    Table VII. Time Complexity of Algorithms.

    ACKNOWLEDGEMENTS

This work is supported by the National Natural Science Foundation of China (Grant No. U1636115), the PAPD fund, the CICAEET fund, and the Open Foundation of Guizhou Provincial Key Laboratory of Public Big Data (2017BDKFJJ017).

    Fig. 2. Comparison of the prediction results.

REFERENCES

[1] Basili V R, Briand L C, Melo W L, "A Validation of Object-Oriented Design Metrics as Quality Indicators," IEEE Transactions on Software Engineering, vol. 22, no. 10, 1996, pp. 751-761.

[2] Emam K E, Benlarbi S, Goel N, et al., "Comparing case-based reasoning classifiers for predicting high risk software components," Journal of Systems & Software, vol. 55, no. 3, 2001, pp. 301-320.

[3] K. Ganesan, T. M. Khoshgoftaar and E. B. Allen, "Case-based software quality prediction," International Journal of Software Engineering & Knowledge Engineering, vol. 10, no. 2, 2000, pp. 139-152.

[4] Khoshgoftaar T M, Allen E B, Jones W D, et al., "Classification-tree models of software-quality over multiple releases," IEEE Transactions on Reliability, vol. 49, no. 1, 2000, pp. 4-11.

[5] Khoshgoftaar T M, Seliya N, "Analogy-Based Practical Classification Rules for Software Quality Estimation," Empirical Software Engineering, vol. 8, no. 4, 2003, pp. 325-350.

[6] Liao S, Ling X U, Yan M, "Software defect prediction using semi-supervised support vector machine with sampling," Computer Engineering & Applications, 2017.

[7] Kokiopoulou E, Y. Saad, "Orthogonal neighborhood preserving projections," IEEE International Conference on Data Mining, 2005, pp. 234-241.

[8] Zhang C S, Wang J, Zhao N Y, Zhang D, "Reconstruction and analysis of multi-pose face images based on nonlinear dimensionality reduction," Pattern Recognition, vol. 37, no. 1, 2004, pp. 325-336.

[9] Bowes D, Hall T, Petrić J, "Software defect prediction: do different classifiers find the same defects?" Software Quality Journal, vol. 1, 2017, pp. 1-28.

[10] Yan Y, Guo P, "A Practice Guide of Software Aging Prediction in a Web Server Based on Machine Learning," China Communications, vol. 13, no. 6, 2016, pp. 225-235.

[11] Liqiang Zhang, Zhao, et al., "Dependence-Induced Risk: Security Metrics and Their Measurement Framework," China Communications, vol. 13, no. 11, 2016, pp. 119-128.

[12] Zheng Z, Zibin J M, Tao G, et al., "Reliability prediction for internetware applications: a research framework and its practical use," China Communications, vol. 12, no. 12, 2015, pp. 13-20.

[13] Zhang P H, et al., "A Novel Approach for QoS Prediction Based on Bayesian Combinational Model," China Communications, vol. 13, no. 11, 2016, pp. 269-280.

[14] Chengsheng Yuan, Xingming Sun, and Rui Lv, "Fingerprint Liveness Detection Based on Multi-Scale LPQ and PCA," China Communications, vol. 13, no. 7, 2016, pp. 60-65.

[15] Fu Y, Dong W, Yin L, et al., "Software Defect Prediction Model Based on the Combination of Machine Learning Algorithms," Journal of Computer Research & Development, 2017.

[16] Malhotra R, "A systematic review of machine learning techniques for software fault prediction," Applied Soft Computing Journal, vol. 27, 2015, pp. 504-518.

[17] Boehm B, "Industrial software metrics top 10 list," IEEE Software, vol. 4, 1987, pp. 84-85.

[18] Li M, Zhang H, Wu R, et al., "Sample-based software defect prediction with active and semi-supervised learning," Automated Software Engineering, vol. 19, no. 2, 2012, pp. 201-230.

[19] Roweis S T, Saul L K, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, no. 5500, 2000, pp. 2323-2326.

[20] Zhang Z, Zha H, "Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment," Society for Industrial and Applied Mathematics, 2005.

[21] Wang Y, Wu Y, "Complete neighborhood preserving embedding for face recognition," Pattern Recognition, vol. 43, no. 3, 2010, pp. 1008-1015.

[22] Yajie Sun and Feihong Gu, "Compressive sensing of piezoelectric sensor response signal for phased array structural health monitoring," International Journal of Sensor Networks, vol. 23, no. 4, 2017, pp. 258-264.

[23] Gu B, Sheng V S, "A Robust Regularization Path Algorithm for ν-Support Vector Classification," IEEE Transactions on Neural Networks & Learning Systems, vol. 28, no. 5, 2016, pp. 1241-1248.

[24] Bin Gu, Victor S. Sheng, Keng Yeow Tay, Walter Romano, and Shuo Li, "Incremental Support Vector Learning for Ordinal Regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, 2015, pp. 1403-1416.

[25] Challagulla V U B, Bastani F B, I-Ling Yen, Paul R A, "Empirical assessment of machine learning based software defect prediction techniques," Proceedings of the 10th IEEE International Workshop on Object-Oriented Real-Time Dependable Systems, Washington, DC, USA, 2005, pp. 263-270.

[26] Keerthi S S, Lin C J, "Asymptotic behaviors of support vector machines with Gaussian kernel," Neural Computation, 2003, pp. 1667-1668.

[27] Wang X H, Shu P, Cao L, et al., "A ROC curve method for performance evaluation of support vector machine with optimization strategy," Computer Science Technology and Applications, vol. 2, 2009, pp. 117-120.
