
    A PID-incorporated Latent Factorization of Tensors Approach to Dynamically Weighted Directed Network Analysis

Hao Wu, Xin Luo, MengChu Zhou, Muhyaddin J. Rawa, Khaled Sedraoui, and Aiiad Albeshri

IEEE/CAA Journal of Automatica Sinica, 2022, Issue 3

Abstract—A large-scale dynamically weighted directed network (DWDN) involving numerous entities and massive dynamic interactions is an essential data source in many big-data-related applications, such as a terminal interaction pattern analysis system (TIPAS). It can be represented by a high-dimensional and incomplete (HDI) tensor whose entries are mostly unknown. Yet such an HDI tensor contains a wealth of knowledge regarding various desired patterns, such as potential links in a DWDN. A latent factorization-of-tensors (LFT) model proves to be highly efficient in extracting such knowledge from an HDI tensor, which is commonly achieved via a stochastic gradient descent (SGD) solver. However, an SGD-based LFT model suffers from slow convergence, which impairs its efficiency on large-scale DWDNs. To address this issue, this work proposes a proportional-integral-derivative (PID)-incorporated LFT model. It constructs an adjusted instance error based on the PID control principle, and then substitutes it into an SGD solver to improve the convergence rate. Empirical studies on two DWDNs generated by a real TIPAS show that, compared with state-of-the-art models, the proposed model achieves significant efficiency gains as well as highly competitive prediction accuracy when handling the task of missing link prediction for a given DWDN.

    I.INTRODUCTION

IN this era of big data, a large-scale dynamically weighted directed network (DWDN) representing dynamic interactions among massive numbers of entities is frequently encountered in various big data analysis projects [1]–[5]. Examples include a terminal interaction network [6], [7], a social network [8], [9], and a wireless sensor network [10], [11]. However, owing to the huge number of involved entities, it becomes impossible to observe their full interactions, making a corresponding DWDN high-dimensional and incomplete (HDI). For instance, the terminal interaction pattern analysis system (TIPAS) concerned in this study focuses on a large-scale terminal interaction network that involves 1 400 742 terminals (e.g., personal computers, intelligent mobile devices, and servers) and 77 441 650 directed interactions scattered across 1158 time slots. Its dimensions come to 1 400 742 × 1 400 742 × 1158, while its data density (known data entry count over total entry count) is only about 3.41 × 10⁻⁸. Despite its HDI nature, it contains rich knowledge describing various desired patterns, such as potential links and abnormal terminals [1], [2]. Therefore, determining how to acquire such desired knowledge from it becomes a thorny issue.
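The quoted data density follows directly from the quoted counts; a quick arithmetic check (not part of the paper's method) is:

```python
# Sanity check of the reported data density: known entries over total entries.
terminals = 1_400_742          # |I| = |J|, terminals in the network
time_slots = 1_158             # |K|, observed time slots
known_entries = 77_441_650     # observed directed interactions

total_entries = terminals * terminals * time_slots
density = known_entries / total_entries
print(f"{density:.2e}")  # ≈ 3.41e-08
```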

To date, researchers have proposed a myriad of data analysis models for extracting useful knowledge from an unweighted network. For example, a static network can be analyzed through a node similarity-based model [12], a Metropolis-Hastings random walk model [13], a kernel extreme learning machine-based model [14], and a random walk-based DeepWalk model [15]. When a target network is dynamic, its knowledge acquisition can be implemented through a kernel similarity-based model [16], a Louvain-based dynamic community detection model [17], a spectral graph wavelet model [18], and a dynamic node embedding model [19]. The above-mentioned models are able to perform knowledge discovery on an unweighted network. However, they are incompatible with a DWDN, whose links are directed and weighted.

On the other hand, an HDI tensor is able to fully preserve a DWDN's structural, spatial, and temporal patterns [20], [21], giving it inherent advantages in describing a DWDN. Therefore, a latent factorization-of-tensors (LFT)-based approach to DWDN representation learning has become popular in recent years [22]–[26]. Representative models of this kind include a regularized tensor decomposition model [22], a Candecomp/Parafac (CP) weighted optimization model [23], a biased non-negative LFT model [24], a fused CP factorization model [25], and a non-negative CP factorization model [26].

When building an LFT-based model, stochastic gradient descent (SGD) is mostly adopted as the learning algorithm to build the desired latent features (LFs) [27]–[29]. However, an SGD-based LFT model updates an LF depending only on the stochastic gradient defined on the instant loss, without considering historical update information. Consequently, it takes many iterations to converge, resulting in low computational efficiency on large-scale DWDNs from industrial applications [30]–[32]. For instance, considering the aforementioned DWDN generated by a real TIPAS, an SGD-based LFT model needs to traverse 77 441 650 entries in one iteration, which is computationally expensive.

A proportional-integral-derivative (PID) controller utilizes the past, current, and future information of the prediction error to control an automated system [33], [34]. It has been widely adopted in various control-related applications [35], [36], e.g., a digital fractional-order PID controller for a magnetic levitation system [37], a PID passivity-based controller for the regulation of port-Hamiltonian systems [38], a cascaded PID controller for automatic generation control analysis [39], and a nonlinear self-tuning PID controller for position control of a flexible manipulator [40]. However, a PID-based method is rarely adopted in HDI tensor representation, which is frequently encountered in many big-data-related applications [31], [32].

Motivated by the above observations, the following research question is proposed:

Research Question: Is it possible to incorporate the PID principle into an SGD-based LFT model, thereby improving its performance on an HDI tensor?

To answer it, this work innovatively proposes a PID-incorporated LFT (PLFT) model, which seamlessly integrates the PID control principle into an SGD solver. A PLFT model's main idea is to construct an adjusted instance error by using a PID controller, and then adopt this adjusted error to perform an SGD-based learning process, thereby improving the convergence rate. This work aims to make the following contributions:

1) A PLFT model. It performs latent feature analysis on an HDI tensor with high efficiency and accuracy;

2) Detailed algorithm design and analysis for PLFT. It provides specific guidance for researchers to implement PLFT for analyzing DWDNs.

Empirical studies on two large-scale DWDNs from a real TIPAS demonstrate that, compared with state-of-the-art models, PLFT achieves impressively high efficiency as well as highly competitive prediction accuracy for link prediction.

The remainder of this paper is organized as follows. Section II provides the preliminaries. Section III describes PLFT in detail. Section IV reports experimental results. Section V discusses related work. Finally, Section VI concludes this paper.

II. PRELIMINARIES

A. Adopted Symbols

    Adopted symbols in this paper are summarized in Table I.

TABLE I
ADOPTED SYMBOLS AND THEIR DESCRIPTIONS

B. A DWDN and an HDI Tensor

Fig. 1 depicts a small example illustrating the connections between a DWDN and an HDI tensor. Consider the dynamic states of a DWDN during |K| time slots. The network can be described by a snapshot sequence where each snapshot describes its state observed during a specified time slot.

Definition 1 (DWDN State): G_k = (V, L_k) is the kth state of a DWDN, where L_k is the weighted directed link set observed at time slot k ∈ K = {1, 2, ..., |K|}.

An HDI tensor can accurately describe a DWDN [24], [25]. Formally, it is defined as follows.

Definition 2 (HDI Tensor): Given entity sets I, J, and K, let Y ∈ R^(|I|×|J|×|K|) be a tensor where each entry y_ijk represents a certain relationship among entities i ∈ I, j ∈ J, and k ∈ K. Given Y's known and unknown entry sets Λ and Γ, Y is an HDI tensor if |Λ| ≪ |Γ|.

Fig. 1. A DWDN and an HDI tensor Y.

C. An LFT Model

To obtain three LF matrices S, D, and T, we build an objective function ε to measure the difference between Y and its approximation Ŷ, which relies on the commonly-used Euclidean distance [21]–[25]
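The equation itself did not survive extraction; a form consistent with the CP-based LFT models cited above, where each approximated entry is a rank-R product of rows of the LF matrices, would be:

```latex
% Reconstructed sketch (the original equation was lost in extraction):
% CP-style approximation and Euclidean-distance objective over all entries.
\hat{y}_{ijk} = \sum_{r=1}^{R} s_{ir}\, d_{jr}\, t_{kr}, \qquad
\varepsilon = \frac{1}{2} \sum_{i \in I} \sum_{j \in J} \sum_{k \in K}
  \left( y_{ijk} - \hat{y}_{ijk} \right)^{2}.
```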

As shown in Fig. 2, Y is an HDI tensor with only a few known entries. Hence, we define ε on Λ only to represent Y's known entries precisely [20], [24], [28], [29], reformulating (3) as

Fig. 2. Latent factorization of an HDI tensor Y.

According to prior research [24]–[26], due to the imbalanced distribution of Y's known entries, it is essential to incorporate Tikhonov regularization into (4) to avoid overfitting
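Equation (5) itself was lost in extraction; a Tikhonov-regularized, Λ-restricted form consistent with the surrounding description would be:

```latex
% Reconstructed sketch of the regularized objective, defined on the known
% entry set Λ only, with s_i, d_j, t_k the row vectors of S, D, and T.
\varepsilon = \sum_{y_{ijk} \in \Lambda} \frac{1}{2}
  \left[ \left( y_{ijk} - \hat{y}_{ijk} \right)^{2}
  + \lambda \left( \lVert s_{i} \rVert_{2}^{2}
  + \lVert d_{j} \rVert_{2}^{2}
  + \lVert t_{k} \rVert_{2}^{2} \right) \right].
```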

With SGD, the learning scheme for the desired LFs is given as
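The update equations did not survive extraction; for an instance y_ijk ∈ Λ, per-feature updates consistent with standard SGD on a Tikhonov-regularized LFT objective would be:

```latex
% Reconstructed sketch: per-instance SGD updates for r = 1..R, with
% e_ijk the instant error on y_ijk and η the learning rate.
s_{ir} \leftarrow s_{ir} + \eta \left( e_{ijk}\, d_{jr}\, t_{kr} - \lambda s_{ir} \right), \quad
d_{jr} \leftarrow d_{jr} + \eta \left( e_{ijk}\, s_{ir}\, t_{kr} - \lambda d_{jr} \right), \quad
t_{kr} \leftarrow t_{kr} + \eta \left( e_{ijk}\, s_{ir}\, d_{jr} - \lambda t_{kr} \right).
```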

where e_ijk = y_ijk − ŷ_ijk denotes the instant error on y_ijk. With (6), an SGD-based LFT model can extract latent patterns from an HDI tensor. For instance, Fig. 3 depicts the process of performing missing link prediction for G through an LFT model.

D. A PID Controller

A PID controller adjusts the observed error based on proportional, integral, and derivative terms [33]–[36]. Fig. 4 depicts the flowchart of a discrete PID controller. Note that this paper adopts this basic controller because it is directly compatible with an SGD-based LFT model's learning scheme, as shown later. As shown in Fig. 4, at the mth time point, a discrete PID controller builds the adjusted error as follows:
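The formula itself was lost in extraction; the standard discrete PID law, consistent with the coefficient description that follows, is:

```latex
% Reconstructed sketch: discrete PID adjustment of the observed error
% E_m at the mth time point.
\tilde{E}_{m} = C_{P} E_{m} + C_{I} \sum_{i=1}^{m} E_{i}
  + C_{D} \left( E_{m} - E_{m-1} \right).
```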

where E_m and E_i are the observed errors at the mth and ith time points, and C_P, C_I, and C_D are the coefficients controlling the proportional, integral, and derivative effects, respectively. With the adjusted error, the system considers prior errors to implement precise control. Next, we integrate the PID control principle into SGD-based LFT to achieve a PLFT model.
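As a concrete illustration, a minimal discrete PID error adjuster might be sketched as follows (the class name, defaults, and statefulness are our own illustrative choices, not the paper's implementation):

```python
class DiscretePID:
    """Discrete PID error adjuster: combines the current error
    (proportional), the accumulated past errors (integral), and the most
    recent error change (derivative), weighted by C_P, C_I, and C_D."""

    def __init__(self, cp=1.0, ci=0.0, cd=0.0):
        self.cp, self.ci, self.cd = cp, ci, cd
        self.integral = 0.0   # running sum of observed errors
        self.prev = 0.0       # error observed at the previous time point

    def adjust(self, error):
        self.integral += error
        adjusted = (self.cp * error
                    + self.ci * self.integral
                    + self.cd * (error - self.prev))
        self.prev = error
        return adjusted
```

With cp = 1 and ci = cd = 0, `adjust` returns the observed error unchanged, which is exactly the degenerate case a plain SGD update corresponds to.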

III. PLFT MODEL

A. Model

According to [41]–[43], an LFT model's stability and representation learning ability can be enhanced by incorporating linear biases (LBs) into its learning objective. According to [24], for a three-way HDI tensor Y of size |I| × |J| × |K|, the LBs can be modeled through three LB vectors a, b, and c with lengths |I|, |J|, and |K|. By introducing them into (5), we arrive at the biased learning objective given as
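Equation (8) was lost in extraction; a biased approximation consistent with the LB-vector description would take the form:

```latex
% Reconstructed sketch: each entry of the biased approximation adds the
% entity-wise linear biases a_i, b_j, c_k to the CP product.
\hat{y}_{ijk} = \sum_{r=1}^{R} s_{ir}\, d_{jr}\, t_{kr} + a_{i} + b_{j} + c_{k}.
```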

With (8), Ŷ consists of R rank-one tensors {X_r | r ∈ {1, …, R}} built on LF matrices S, D, and T, and three rank-one bias tensors H_1–H_3 built on LB matrices A, B, and C. The LB vectors a, b, and c are respectively set as the first, second, and third columns of A, B, and C, and the other columns are set as one-valued column vectors, which is similar to the LBs involved in a matrix factorization model [42], [43]. This design is shown in Fig. 5.

Fig. 3. Missing link prediction by performing LFT on an HDI tensor Y.

Fig. 4. Flowchart of a discrete PID controller.

As given in (5), with SGD, the instant error of an instance y_ijk at the nth training iteration is e_ijk. Following (7), such an instant error can be regarded as a PID controller's proportional term. Moreover, the information of y_ijk is passed into a parameter update through e_ijk only in a training iteration. Therefore, this process can be described by a simplified PID controller by setting C_P = 1 and C_I = C_D = 0.
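Equations (10)–(15) did not survive extraction, but the mechanism the text describes, replacing the instant error with a PID-adjusted error inside per-instance SGD updates, can be sketched as follows (the function name, hyper-parameter values, initialization range, and per-instance error buffers are our illustrative assumptions, not the paper's implementation):

```python
import random

def plft_train(entries, shape, R=4, eta=0.05, lam=0.001,
               cp=1.0, ci=0.02, cd=0.1, epochs=100, seed=0):
    """Toy PID-incorporated SGD pass over an HDI tensor's known entries.
    entries: list of (i, j, k, y_ijk); shape: (|I|, |J|, |K|)."""
    rng = random.Random(seed)
    n_i, n_j, n_k = shape
    S = [[rng.uniform(0.0, 0.5) for _ in range(R)] for _ in range(n_i)]
    D = [[rng.uniform(0.0, 0.5) for _ in range(R)] for _ in range(n_j)]
    T = [[rng.uniform(0.0, 0.5) for _ in range(R)] for _ in range(n_k)]
    integral = [0.0] * len(entries)   # per-instance accumulated error
    previous = [0.0] * len(entries)   # per-instance previous error
    for _ in range(epochs):
        for n, (i, j, k, y) in enumerate(entries):
            y_hat = sum(S[i][r] * D[j][r] * T[k][r] for r in range(R))
            e = y - y_hat                                  # proportional term
            integral[n] += e
            e_adj = cp * e + ci * integral[n] + cd * (e - previous[n])
            previous[n] = e
            for r in range(R):
                s, d, t = S[i][r], D[j][r], T[k][r]
                S[i][r] += eta * (e_adj * d * t - lam * s)
                D[j][r] += eta * (e_adj * s * t - lam * d)
                T[k][r] += eta * (e_adj * s * d - lam * t)
    return S, D, T
```

Substituting the adjusted error e_adj for the instant error e is the core idea; with cp = 1 and ci = cd = 0, the loop degenerates to a plain SGD-based LFT pass.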

Fig. 5. Building HDI tensor Y's biased approximation tensor Ŷ.

With (10)–(15), a PLFT model is achieved. Its flowchart is given in Fig. 6.

B. Algorithm Design and Analysis

Based on the above inferences, we design Algorithm PLFT. According to it, the main task in each iteration is to update the LF matrices and LB vectors. Hence, its computation cost is

Note that (16) adopts the condition |Λ| ≫ max{|I|, |J|, |K|}, which is constantly fulfilled in real applications. Since n and R are positive constants in practice, a PLFT model's computation cost is linear with an HDI tensor's known entry count.
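Expression (16) was lost in extraction; given that each of n training iterations traverses every known entry with Θ(R) work per entry, a cost of the stated form would be:

```latex
% Reconstructed sketch of the computation cost: n iterations, each
% touching the |Λ| known entries with Θ(R) work per entry.
C \approx \Theta\left( n \times |\Lambda| \times R \right).
```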

Fig. 6. Flowchart of PLFT.

Its storage cost mainly depends on the following three factors: 1) arrays caching the input known entries in Λ and the corresponding estimations; 2) auxiliary arrays caching the integral and derivative terms for each instance; and 3) LF matrices S, D, and T, and linear bias vectors a, b, and c. Hence, PLFT's storage cost is given as
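The expression itself was lost in extraction; summing the three factors just listed gives a cost of the form:

```latex
% Reconstructed sketch: Θ(|Λ|)-sized entry and PID-term buffers plus the
% R-column latent factor matrices and bias vectors.
S \approx \Theta\left( |\Lambda| + \left( |I| + |J| + |K| \right) \times R \right).
```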

which is linear with the target HDI tensor's known entry count and its LF count. The next section conducts experiments to validate a PLFT model's performance.

IV. EXPERIMENTAL RESULTS AND ANALYSES

A. General Settings

Datasets: The experiments are conducted on two large-scale DWDNs, labeled D1 and D2, from a real TIPAS that concerns the dynamic interactions of over two million terminals. Table II summarizes their details. On them, we build two different HDI tensors; e.g., the HDI tensor corresponding to D1 has the size of 1 400 742 × 1 400 742 × 1158 and a data density of only 3.41 × 10⁻⁸.

TABLE II
DATASET DETAILS

To achieve objective results, each dataset is randomly split into a disjoint training set K, validation set Ψ, and testing set Ω at the ratio of 7:1:2. A tested model is built on K, validated on Ψ, and tested on Ω to obtain the final outcomes. The training process of a model terminates if 1) the number of consumed iterations reaches a preset threshold, e.g., 500, or 2) it converges, i.e., the error difference between two consecutive iterations is less than 10⁻⁶. Note that for all involved models, the LF space dimension R is uniformly fixed at 20 to balance their computational cost and representation learning ability, following the settings in [20]–[25].
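The 7:1:2 split can be sketched as follows (the function name, seed, and rounding behavior are illustrative; the paper does not specify its splitting code):

```python
import random

def split_dataset(entries, ratios=(0.7, 0.1, 0.2), seed=42):
    """Randomly partition known entries into disjoint training,
    validation, and testing sets at the given ratios (7:1:2 here)."""
    shuffled = list(entries)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```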

Evaluation Metrics: We choose weighted missing link prediction as the evaluation protocol due to the great need for building a full interaction mapping among involved entities in real systems [20]–[25]. The root mean squared error (RMSE) and mean absolute error (MAE) are the evaluation metrics.
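The metric formulas were lost in extraction; their standard definitions over a testing set's actual and predicted link weights can be sketched as:

```python
import math

def rmse_mae(actual, predicted):
    """RMSE = sqrt(sum((y - y_hat)^2) / n), MAE = sum(|y - y_hat|) / n,
    computed over paired actual and predicted link weights."""
    n = len(actual)
    sq = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    ab = sum(abs(y - p) for y, p in zip(actual, predicted))
    return math.sqrt(sq / n), ab / n
```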

Note that for a tested model, small RMSE and MAE values denote high prediction accuracy for missing links in a DWDN.

B. Parameter Sensitivity Tests

As shown in Section III, PLFT's performance is affected by the learning rate η, regularization coefficient λ, and PID constants C_P, C_I, and C_D. Thus, we start with parameter sensitivity tests.

    1) Effects of η and λ

Fig. 7. Impacts of η and λ.

This set of experiments fixes the PID coefficients and increases η from 0.001 to 0.006 and λ from 0.05 to 0.25. Figs. 7(a) and 7(c) record PLFT's RMSE and MAE as η and λ change. Figs. 7(b) and 7(d) depict the corresponding iteration counts. From these results, we have the following findings:

a) PLFT's convergence is related to η and λ: As shown in Figs. 7(b) and 7(d), the converging iteration counts of PLFT in RMSE and MAE decrease as η increases, while they increase as λ increases on D1 and D2. For instance, as shown in Fig. 7(b), as η increases from 0.001 to 0.006 on D2, the converging iteration count of PLFT decreases from 21 to 5 in RMSE, and from 20 to 5 in MAE, respectively. This result is consistent with the role of η in PLFT, i.e., the larger the learning rate, the faster the parameter update velocity, resulting in fewer iterations for convergence. However, we also note that, owing to its incorporation of PID, PLFT's convergence rate stays consistently high even if the learning rate is set to be small. On the other hand, as shown in Fig. 7(d), PLFT's converging iteration count is generally linear with λ. For instance, as λ increases from 0.05 to 0.25 on D2, its converging iteration count increases from 21 to 24 in RMSE, and from 20 to 26 in MAE. This is because when λ grows larger, the parameter update process must fit a stronger regularization term, which leads to more iterations.

b) PLFT's prediction error increases as η and λ increase: For instance, as depicted in Fig. 7(a), PLFT's RMSE is 0.2954 and 0.3004 at η = 0.001 and 0.006 on D1, where the increment ((RMSE_high − RMSE_low)/RMSE_high) is 1.66%. This result indicates that as η grows larger, the parameter update process overshoots an optimum. As shown in Fig. 7(c), PLFT's RMSE is 0.2954 and 0.2961 as λ increases from 0.05 to 0.25 on D1, where the increment is 0.24%. This phenomenon demonstrates the occurrence of underfitting as λ increases continually. Note that similar situations are seen on D2, as shown in Figs. 7(a) and 7(c). In general, PLFT's prediction error increases more rapidly in η than in λ.

2) Effects of C_P, C_I, and C_D

This set of experiments fixes η and λ, and increases the PID coefficients to test their effects on PLFT's performance. The experimental results are illustrated in Fig. 8. From them, we have the following findings:

a) PLFT's converging iteration count relies on C_P, C_I, and C_D: As shown in Figs. 8(b) and 8(d), PLFT's converging iteration count in RMSE and MAE decreases as C_P and C_I increase. However, it increases as C_D increases, as shown in Fig. 8(f). For instance, as shown in Fig. 8(b), on D1, PLFT's converging iteration count in RMSE is 57 with C_P = 1. When C_P increases to 5, it decreases to 23. Likewise, as C_I increases from 0.01 to 0.09, it decreases from 34 to 15. In contrast, PLFT's converging iteration count in RMSE increases from 24 to 43 as C_D increases from 10 to 90. Similar phenomena are encountered on D2, as shown in Figs. 8(b), 8(d), and 8(f).

b) C_P and C_I affect PLFT's prediction error slightly: As shown in Figs. 8(a) and 8(c), on D1 and D2, PLFT's RMSE and MAE increase slightly with C_P and C_I, while they are generally steady as C_D increases. For instance, on D1, as shown in Fig. 8(a), PLFT's MAE increases from 0.2180 to 0.2196 as C_P increases from 1 to 5, where the increment is 0.73%. According to Fig. 8(c), PLFT's MAE is 0.2184 with C_I = 0.01 and 0.2193 with C_I = 0.09, where the increment is 0.42%. In comparison, as shown in Fig. 8(e), when C_D increases from 10 to 90, the difference in PLFT's MAE is only 0.0002. Similar results are obtained on D2, as shown in Figs. 8(a), 8(c), and 8(e).

Fig. 8. Impacts of C_P, C_I, and C_D.

Note that the above results are consistent with the theoretical analysis in Section III-A. C_P controls the proportional term, and actually plays the role of the learning rate in the current update. C_I controls the adjustment of the learning direction. Accordingly, a larger C_I places a stronger restriction on the oscillation of a parameter update process, which can make PLFT converge faster but also causes slight accuracy loss. C_D controls the future expectations of a parameter update process. A C_D in an appropriate range can effectively avoid overshooting. Moreover, a relatively small C_D can make PLFT converge faster to an optimum.

    3) Summary

Based on the above results, we conclude that PLFT's convergence rate is sensitive to its hyper-parameters, while its prediction accuracy is less sensitive to them. However, to further increase a PLFT model's practicability in addressing DWDN analysis problems, a hyper-parameter self-adaptive scheme is desired. We plan to address this as a future issue.

C. Comparison With State-of-the-art Models

Three state-of-the-art models capable of analyzing an HDI tensor are compared with the proposed model. Their details are as follows:

a) M1: A timeSVD++ model [44]. It incorporates temporal effects into the objective function of a latent factor model such that the model can well represent an HDI tensor.

TABLE III
HYPER-PARAMETER SETTINGS OF M1–M4

Fig. 9. Training curves of M1–M4 on D1 and D2.

b) M2: A PARAFAC decomposition-based model [45]. It builds an l1-norm-based regularization scheme, and adopts non-negative alternating least squares as its learning scheme.

c) M3: An SGD-based LFT model [29]. It adopts standard SGD to perform parameter learning.

d) M4: The PLFT model proposed in this paper.

Note that the hyper-parameter settings of M1–M4 are given in Table III. Considering M1–M4, we perform a grid search for their related hyper-parameters on one set of experiments out of 20 independent runs in total to achieve their best performance, and then adopt the same settings on the remaining 19 sets of experiments.

Fig. 9 depicts M1–M4's training curves. Fig. 10 records their total time costs. Fig. 11 depicts their RMSE and MAE. From these results, we have the following important findings:

a) M4 converges much faster than its peers do: As shown in Fig. 9(a), M4 takes only 23 iterations to converge in RMSE on D1. In comparison, M1–M3 take 64, 26, and 67 iterations, respectively, to converge in RMSE on D1. Considering MAE, as shown in Fig. 9(b), M1–M3 respectively require 68, 26, and 60 iterations to converge on D1, while M4 consumes only 21 iterations. Similar phenomena can be found on D2, as depicted in Figs. 9(c) and 9(d). From these results, we can clearly see that by incorporating the PID control principle, PLFT's convergence rate is improved significantly over an SGD-based LFT model. It turns out that incorporating the PID control principle into an SGD-based LFT model is feasible and highly effective.

b) M4 achieves higher computational efficiency than its peers do: For instance, as shown in Fig. 10(a), on D1, M4 takes about 10 min to converge in RMSE, which is 8.13% of the 123 min taken by M1, 7.63% of the 131 min taken by M2, and 40% of the 25 min taken by M3. Considering MAE, M4 takes about 9 min to converge, which is 6.87% of the 131 min taken by M1, 6.62% of the 136 min taken by M2, and 40.90% of the 22 min taken by M3. Similar results are obtained on D2, as in Fig. 10(b). As summarized in Algorithm 1, the computation cost of a PLFT model mainly relies on an HDI tensor's known entry set. From the above experimental results, we clearly see that such a design is extremely suitable and necessary when analyzing a large-scale HDI DWDN.

Fig. 10. Total time cost (min) of M1–M4 on D1 and D2.

Fig. 11. The RMSE and MAE of M1–M4 on D1 and D2.

c) M4 has highly competitive prediction accuracy for the missing data of an HDI tensor when compared with its peers: As depicted in Fig. 11(a), on D1, M4's RMSE is 0.2954, which is about 14.10% lower than the 0.3439 RMSE obtained by M1 and 4.15% lower than the 0.3082 RMSE obtained by M2, but 0.14% higher than the 0.2950 RMSE obtained by M3. Likewise, on D2, M1–M4 respectively achieve RMSEs of 0.3645, 0.3223, 0.3096, and 0.3103, where the RMSE of M4 is 14.87% and 3.72% lower than that of M1 and M2, while 0.23% higher than that of M3. When measured by MAE, similar results are achieved on both datasets, as shown in Fig. 11(b). Note that M4 suffers a small accuracy loss when compared with M3, yet its computational efficiency is much higher. This phenomenon indicates that the performance of a PLFT model can still be further improved by incorporating another advanced controller, e.g., a nonlinear PID controller. We plan to address this issue in future work.

D. Summary

According to the experimental results, we conclude that PLFT is equipped with: a) a high convergence rate; b) impressively high computational efficiency; and c) highly competitive prediction accuracy. Therefore, PLFT is a highly effective model for large-scale DWDN analysis.

V. RELATED WORK

Networks have become increasingly important for modeling complex systems consisting of numerous entities and interactions. Network data analysis is widely involved in many fields, where missing link prediction is one of the essential and vital tasks [1]–[5]. To date, many sophisticated models for missing link prediction in a given network have been proposed. They can be roughly divided into the following three categories:

1) LF Analysis-based Models

Duan et al. [46] propose an ensemble approach, which decomposes traditional link prediction into sub-problems of smaller size, each solved with LF analysis models. Nguyen et al. [47] utilize LF kernels and cast the link prediction problem as binary classification to implement link prediction on sparse graphs. Zhu et al. [48] propose a temporal latent space model for dynamic link prediction, where the dynamic link over time is predicted by a sequence of previous graph snapshots. Ye et al. [49] use non-negative matrix tri-factorization to construct a network's latent topological features, which acquires the principal common factors of the network's link structures. Wang et al. [50] present a fusion model that fuses an adjacency matrix and key topological metrics into a unified probabilistic LF analysis-based model for link prediction, which considers both symmetric and asymmetric metrics.

2) Neural Network-based Models

Wang et al. [51] design a Bayesian relational network model that combines network structures and node attributes to predict missing links. Chen et al. [52] propose a dynamic link prediction model by combining long short-term memory and encoder-decoder mechanisms. Li et al. [53] develop a temporal-restricted Boltzmann machine-based deep network, which predicts missing links by utilizing individual transition variances and the influence of local neighbors. Ozcan et al. [54] propose a multivariate link prediction model for evolving heterogeneous networks via a nonlinear autoregressive neural network. Li et al. [55] model the evolution pattern of each link in a network by incorporating a gradient boosting decision tree and a temporal restricted Boltzmann machine.

3) Network Analysis-based Models

Grover et al. [56] map involved nodes into a low-dimensional feature space and adopt a biased random walk method to implement link prediction. Fu et al. [57] utilize line graph indices along with original graph indices to implement link weight prediction. Wang et al. [58] develop a cold-start link prediction approach by establishing connections between topological and non-topological information. Xia et al. [59] design a model based on a random walk with restart, which can determine potential collaborators in academic social networks. De et al. [60] design a novel framework using a stacked two-level learning paradigm for link prediction, which integrates local, community, and global signals.

The above-mentioned approaches can efficiently perform missing link prediction on a specified network. However, they all fail to address a DWDN with directed and weighted missing links. Moreover, they commonly suffer from high computational cost due to their model structures or learning schemes. In comparison, the proposed PLFT model naturally describes a DWDN via an HDI tensor, and achieves high efficiency by incorporating the PID principle into its learning scheme.

VI. CONCLUSIONS

In order to improve an SGD-based LFT model's efficiency, this work proposes a PLFT model that combines the information of past, current, and future updates into its parameter updates following the PID control principle, which, to the best of our knowledge, has not been seen in existing studies. It exhibits a higher convergence rate and greater computational efficiency than state-of-the-art methods, and very competitive accuracy in missing link prediction in comparison with them. Hence, it provides a highly effective solution for large-scale DWDN analysis. However, the following issues remain open:

1) The performance of a PLFT model is affected by its hyper-parameters. Thus, making them adaptable to various applications is of great importance [61], [62].

2) A nonlinear PID controller [40], [63] has shown excellent performance in real applications. Thus, we may achieve further performance gains by incorporating its principle into an SGD-based LFT model.

3) PLFT's computational efficiency may be further improved via a parallel computation framework [64], [65], and PLFT may be applied to other datasets [66]–[69]. We plan to address these as our future work.
