
    A Spatial-Temporal Attention Model for Human Trajectory Prediction

    2020-08-05 09:42:46  Xiaodong Zhao, Yaran Chen, Jin Guo, and Dongbin Zhao
    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 4

    Xiaodong Zhao, Yaran Chen, Jin Guo, and Dongbin Zhao

    Abstract—Human trajectory prediction is essential and promising in many related applications. It is challenging due to the uncertainty of human behavior, which is influenced not only by the person himself, but also by the surrounding environment. Recent works based on long-short term memory (LSTM) models have brought tremendous improvements to the task of trajectory prediction. However, most of them focus on the spatial influence of humans but ignore the temporal influence. In this paper, we propose a novel spatial-temporal attention (ST-Attention) model, which studies spatial and temporal affinities jointly. Specifically, we introduce an attention mechanism to extract temporal affinity, learning the importance of historical trajectory information at different time instants. To explore spatial affinity, a deep neural network is employed to measure the different importance of the neighbors. Experimental results show that our method achieves competitive performance compared with state-of-the-art methods on publicly available datasets.

    I. Introduction

    HUMAN trajectory prediction aims to predict the future path according to the historical trajectory. The trajectory is represented by a set of sampled consecutive location coordinates. Trajectory prediction is a core building block for autonomous moving platforms, and prospective applications include autonomous driving [1]–[3], mobile robot navigation [4], assistive technologies [5], and smart video surveillance [6].

    When a person is walking in a crowd, the future path is determined by various factors such as the intention, social conventions and the influence of nearby people. For instance, people prefer to walk along the sidewalk rather than crossing the highway. A person is able to adjust his path by estimating the future paths of the people around him, and those people do the same thing, which in turn affects the target. Human trajectory prediction becomes an extremely challenging problem due to this complex nature of people. Benefiting from powerful deep learning [7], [8], human trajectory prediction has gained significant improvement in the last few years. Yagi et al. [5] present a multi-stream convolution-deconvolution architecture for first-person videos, which verifies that pose, scale, and ego-motion cues are useful for future person localization. Pioneering works [9], [10] show that long-short term memory (LSTM) has the capacity to learn general human movements and predict future trajectories.

    Although tremendous efforts have been made to address these challenges, there are still two limitations:

    1) The historical trajectory information at different time instants has different levels of influence on the target human, which is ignored by most works. However, it plays an important role in the prediction of the future path. As for the target human, the latest trajectory information usually has a higher level of influence on the future path, as shown in Fig. 1(a). As for the neighbors, the trajectory information will have a great impact as long as the distance to the target is close, as shown in Fig. 1(b). Thus, the historical trajectory information at different time instants ought to be given different weights. The attention mechanism is capable of learning different weights according to this importance.

    Fig. 1. Illustration of the influences at different time instants. (a) As for the target human (PT), the trajectory information at time t−1 and t may affect the future path more compared with that at time t−2 and t−3. (b) As for the neighbor (PN), he turns away from PT at time t. The trajectory information of PN at time t−1 has a greater influence on PT considering that PT is not allowed to occupy the position where PN just left.

    2) Most trajectory prediction methods fail to capture the global context of the environment. Some methods capture context through annotation text recording people's location coordinates provided by the dataset. However, the text annotates only a few people, so it is not truly global information. A pre-trained detection model [11] can instead be used to extract all people in the image rather than relying on the annotation text.

    In this work, we propose a spatial-temporal attention network to predict future human trajectories. We adopt an LSTM called the ego encoder to model the ego motion of the target human. We also consider all people in the scene by using the pre-trained detection model to extract the positions of neighbors. The positions are fed into a multi-layer perceptron (MLP) to obtain high-dimensional features. Then the inner product is used to acquire the weights that measure the importance of neighbors to the target. Further, another LSTM called the interaction encoder is used to model human-human interaction. It is noted that in most existing models, the trajectory information at different time instants receives equal treatment, which is not suitable for complex trajectory prediction. Motivated by this, we introduce an attention mechanism to obtain weights that represent the levels of influence of trajectory information at different time instants. Finally, an LSTM decoder is employed to generate human trajectories for the next few frames.

    Our contributions can be summarized as follows:

    1) We introduce an attention mechanism to automatically learn the weights. They dynamically determine which time instant's trajectory information we should pay more attention to.

    2) We utilize a pre-trained detection model [11] to capture global context instead of retrieving local context from the dataset; then an MLP and the inner product are used to weight different neighbors.

    3) Based on the above two ideas, a spatial-temporal attention (ST-Attention) model is proposed to tackle the challenges of trajectory prediction. ST-Attention achieves competitive performance on two benchmark datasets: ETH & UCY [12], [13] and ActEV/VIRAT [14].

    II. Related Work

    A. Traditional Approaches for Trajectory Prediction

    The Kalman filter [15], [16] can be deployed to forecast the future trajectory in the case of linear acceleration, and has proven to be an efficient recursive filter. It is capable of estimating the state of a dynamic system from a series of incomplete and noisy measurements, especially in the analysis of time sequences. Williams [17] proposes to use a Gaussian process distribution to estimate motion parameters such as the velocity and the angle offset, from which a motion pattern of the pedestrian is built. Further, researchers began to associate energy with pedestrians. One representative work is the social forces model proposed by Helbing and Molnár [18], which transforms the attraction and the exclusion between pedestrians and obstacles into energy to predict pedestrian trajectories. The attractive force guides the target to the destination, and the repulsive force keeps a safe distance and avoids collision. Subsequently, some methods [19] fit the parameters of the energy functions to improve the social forces model.
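    As a concrete illustration of the linear case, the recursive predict/update cycle of a Kalman filter can be sketched for a constant-velocity pedestrian model. The matrices and noise levels below are illustrative assumptions, not values from [15], [16]:

```python
import numpy as np

# Constant-velocity state: [x, y, vx, vy]; dt is the sampling period (0.4 s).
dt = 0.4
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only positions are observed
Q = np.eye(4) * 1e-3                        # process noise (assumed)
R = np.eye(2) * 1e-2                        # measurement noise (assumed)

x = np.zeros(4)          # initial state
P = np.eye(4)            # initial covariance

def kalman_step(x, P, z):
    """One predict/update cycle given a noisy position measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Filter over an observed straight-line trajectory, then extrapolate.
observed = [np.array([i * 0.4, i * 0.2]) for i in range(8)]
for z in observed:
    x, P = kalman_step(x, P, z)

future = []
for _ in range(12):      # predict the next 12 frames with no measurements
    x = F @ x
    future.append((H @ x).copy())
```

    With noiseless linear input the filter quickly recovers the constant velocity, and the extrapolated positions continue the straight line.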

    However, the above methods rely on hand-crafted features. This becomes an obstacle to advancing trajectory prediction performance, since these methods can capture simple interactions but fail in complex scenarios. In contrast, data-driven methods based on convolutional neural networks (CNN) and recurrent neural networks (RNN) overcome these limitations of traditional approaches.

    B. CNN Models for Trajectory Prediction

    CNN [20] has proven powerful for extracting rich context information, which is a salient cue for the trajectory prediction task. Behavior-CNN [21] employs a large receptive field to model the walking behaviors of pedestrians and learn the location information of the scene. Yagi et al. [5] develop a deep neural network that utilizes the pose, location-scale, and ego-motion cues of the target human, but they do not consider human-human interaction. Huang et al. [22] introduce a spatial matching network and an orientation network. The former generates a reward map representing the reward of every pixel on the scene image, and the latter outputs an estimated facing orientation. However, this method only considers the static scene and not the dynamic information of pedestrians.

    C. RNN Models for Trajectory Prediction

    RNN [23] has also proven efficient for time sequence tasks. RNN models have shown dominant capability in various domains such as neural machine translation [24], speech recognition [25], generating image descriptions [26] and DNA function prediction [27]. Some recent works have attempted to use RNNs to forecast trajectories. Social-LSTM [9] introduces a social pooling layer to learn the classic interactions that happen among pedestrians. But this pooling solution fails to capture global context. Besides, Social-LSTM predicts the distribution of trajectory locations instead of directly predicting the locations. This makes the training process difficult, while the sampling process is non-differentiable. Gupta et al. [10] propose Social-GAN, combining approaches for trajectory prediction and generative adversarial networks. But the performance is not improved obviously when sampling only once at test time. Liang et al. [28] present Next, an end-to-end learning framework extracting rich visual information to recognize pedestrian behaviors. Furthermore, focal attention [29] is employed in the framework. It was originally proposed to tackle visual question answering, projecting different features into a low-dimensional space. But the focal attention used in Next is hard-wired and fails to learn from the data. Xu et al. [30] design a crowd interaction deep neural network which considers all pedestrians in the scene as well as their spatial affinity for trajectory prediction. However, they ignore the influence of temporal affinity. In our work, we take into account both spatial affinity and temporal affinity.

    Fig. 2. Overview of the proposed ST-Attention. The model utilizes the encoder to extract the ego feature Ego(h1, ..., hTobs) and the interaction feature Inter(h1, ..., hTobs) from Lit and Bjt respectively (t ∈ [1, Tobs]); then the following decoder outputs the future path Lit′ (t′ ∈ [Tobs+1, Tpred]).

    D. Attention Approaches for Trajectory Prediction

    Some approaches for trajectory prediction employ the attention mechanism to differentiate the influence of neighbors on the target. Su et al. [31] update the LSTM memory cell state with a coherent regularization, which computes the pairwise velocity correlation to weight the dependency between trajectories. Further, a social-aware LSTM unit [32] is proposed, which incorporates the nearby trajectories to learn a representation of the crowd dynamics. Zhang et al. [33] utilize a motion gate to highlight the important motion features of neighboring agents. Sadeghian et al. [34] apply soft attention similarly to [26], emphasizing the salient regions of the image and the more relevant agents. However, the above works focus on the spatial influence of the neighboring agents, but ignore the temporal influence of the agents, which is also valuable for human trajectory prediction. The attention mechanism in our model connects the decoder state and the temporal encoder state, allowing it to give an importance value to each time instant's trajectory state of the neighboring humans and the target human.
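    The soft temporal attention described above can be sketched as follows (a minimal numpy illustration, not the authors' exact parameterization): the decoder state is compared with every encoder hidden state to produce a softmax weight per observed time instant, and the encoder states are summed with those weights.

```python
import numpy as np

def temporal_attention(decoder_state, encoder_states):
    """Weight encoder hidden states by their affinity with the decoder state.

    decoder_state:  (d,)        current decoder hidden state
    encoder_states: (T_obs, d)  one hidden state per observed time instant
    Returns the attention weights and the weighted context vector.
    """
    scores = encoder_states @ decoder_state          # (T_obs,) dot-product affinity
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time instants
    context = weights @ encoder_states               # (d,) weighted sum
    return weights, context

rng = np.random.default_rng(0)
enc = rng.normal(size=(8, 16))   # T_obs = 8 observed frames, toy hidden size 16
dec = rng.normal(size=16)
w, ctx = temporal_attention(dec, enc)
```

    Because the weights are a learnable function of the states rather than fixed, each time instant can receive a different level of influence, which is the key difference from hard-wired focal attention.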

    E. Pedestrian Re-identification for Trajectory Extraction

    With the advancement of pedestrian re-identification (Re-ID) [35], the same person with different appearances can be identified accurately, which facilitates the extraction of human trajectories. Köstinger et al. [36] observe that the difference among various image features of the same pedestrian conforms to a Gaussian distribution, and propose keep-it-simple-and-straightforward metric learning (KISSME). However, KISSME meets the small sample size problem in calculating the covariance matrices of the various classes, which blocks the improvement of Re-ID performance. Han et al. [37] verify that virtual samples can alleviate the small sample size problem of KISSME, and the re-extraction process of virtual sample features is eliminated by a genetic algorithm, which greatly improves the matching rate of pedestrian Re-ID. Further, the KISS+ algorithm [38] is proposed to generate virtual samples using an orthogonal basis vector, which is very suitable for real-time pedestrian Re-ID in open environments due to its simplicity, fast execution and easy operation. These works are of great significance to human trajectory prediction.

    III. Method

    A person adjusts his trajectory based on a definite destination in mind and the influence of neighbors when walking in a crowd. On the one hand, the future trajectory of the target human depends on historical trajectories at different time instants, which we refer to as temporal affinity. On the other hand, the future trajectory hinges on the distances, velocities and headings of neighbors, which we refer to as spatial affinity. This idea motivates us to study trajectory prediction jointly with temporal and spatial affinities. In this section, we present our spatial-temporal attention model to tackle the problem.

    A. Problem Formulation

    B. Overview

    The overall network architecture is illustrated in Fig. 2. Our model employs an encoder-decoder framework. Specifically, the encoder consists of an ego module and an interaction module, and the decoder includes an attention module and a prediction module. We feed the locations into the ego module to get the ego feature, which is used for modeling the motion of the target. At the same time, the observed boxes are fed into the interaction module to get the interaction feature, which is used for exploring the relationships among neighbors. The above feature vectors are weighted and summed along the temporal dimension by the attention module. Then the prediction module employs an LSTM to generate the future trajectory. In the rest of this section, we detail the above modules.
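    The spatial weighting performed in the interaction module (MLP embedding followed by an inner product, as outlined in Section I) can be sketched as follows. This is a minimal numpy sketch: the 3-layer MLP node sizes 32/64/128 follow Section IV, but the weights here are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_embed(pos, layers):
    """Embed a 2-D position through a small MLP with ReLU activations."""
    h = pos
    for W, b in layers:
        h = np.maximum(W @ h + b, 0.0)
    return h

# 3-layer MLP with node sizes 32, 64, 128 (random placeholder weights).
sizes = [2, 32, 64, 128]
layers = [(rng.normal(scale=0.1, size=(o, i)), np.zeros(o))
          for i, o in zip(sizes[:-1], sizes[1:])]

target = np.array([1.0, 2.0])
neighbors = np.array([[1.5, 2.0], [4.0, -1.0], [0.5, 2.5]])

t_emb = mlp_embed(target, layers)
n_embs = np.stack([mlp_embed(n, layers) for n in neighbors])
scores = n_embs @ t_emb                  # inner-product spatial affinity
w = np.exp(scores - scores.max())
w = w / w.sum()                          # normalized importance per neighbor
```

    Each neighbor thus receives a scalar importance relative to the target before the interaction encoder aggregates them over time.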

    C. Ego Module

    The ego module aims at exploring the intention of the target human, which is reflected by motion characteristics such as the velocity, the acceleration and the direction. Due to its powerful ability to handle sequence data, LSTM is chosen as the ego module architecture. For the pedestrian pi, we embed the location into a vector et. Then the embedding is fed into the ego encoder, whose hidden state ht is updated from ht−1 and et by the standard LSTM recurrence.
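    The recurrence just described (location embedding followed by an LSTM update) can be sketched step by step. This is a toy numpy LSTM cell with random placeholder weights standing in for the ego encoder; the paper's hidden size is 256, shrunk here to 8 for readability.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                  # toy hidden size (the paper uses 256)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder parameters: embedding matrix and fused LSTM weights.
W_e = rng.normal(scale=0.1, size=(d, 2))       # location (x, y) -> embedding e_t
W = rng.normal(scale=0.1, size=(4 * d, 2 * d)) # acts on [e_t; h_{t-1}]
b = np.zeros(4 * d)

def lstm_step(loc, h, c):
    """One ego-encoder step: embed the location, then update (h, c)."""
    e = np.tanh(W_e @ loc)                     # embedded location e_t
    z = W @ np.concatenate([e, h]) + b
    i, f, o, g = np.split(z, 4)                # input, forget, output, candidate
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

h = np.zeros(d)
c = np.zeros(d)
for loc in np.array([[0.0, 0.0], [0.4, 0.2], [0.8, 0.4]]):  # observed locations
    h, c = lstm_step(loc, h, c)
```

    The sequence of hidden states h1, ..., hTobs produced this way is what the attention module later weights over time.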

    IV. Performance Analysis

    In this section, we analyze our model on pedestrian trajectory datasets based on the world plane and the image plane. Specifically, we evaluate errors in meters on the ETH [12] and UCY [13] datasets, and report errors in pixels on the ActEV/VIRAT dataset [14]. Experimental results demonstrate that our model performs well on both the world plane and the image plane.

    A. Evaluation Metrics

    Following prior works [10], [28], we use two metrics to report prediction error:

    1) Average Displacement Error (ADE): The average L2 distance between the ground truth coordinates and the predicted coordinates over all predicted time instants.

    2) Final Displacement Error (FDE): The L2 distance between the true point and the predicted point at the final time instant Tpred.
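    The two metrics can be computed directly (a small numpy sketch using the notation above):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement errors between predicted and true paths.

    pred, gt: arrays of shape (T_pred_steps, 2) holding (x, y) coordinates.
    """
    dists = np.linalg.norm(pred - gt, axis=1)  # L2 distance per time instant
    return dists.mean(), dists[-1]

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ade, fde = ade_fde(pred, gt)  # ADE = (0 + 1 + 2) / 3 = 1.0, FDE = 2.0
```
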

    B. Baseline Methods

    We compare the results of our model with the following state-of-the-art methods under the same conditions:

    1) Linear [10]: A linear regression model whose parameters are determined by minimizing the least L2 distance.

    2) S-LSTM [9]: Alahi et al. [9] build one LSTM for each person and share information between the LSTMs through a social pooling layer. At each time instant t during the prediction period, the LSTM hidden state represents a bivariate Gaussian distribution described by mean μ, standard deviation σ and correlation coefficient ρ. Then the predicted trajectory at time t+1 is sampled from the distribution.

    3) Next [28]: Liang et al. [28] encode a person through rich visual features rather than oversimplifying the human as a point. A person behavior module is proposed to capture the visual information, modeling appearance and body movement, and a person interaction module is used to capture other objects' information and the surroundings. The future trajectory is predicted by an LSTM with focal attention [29].

    4) LiteNext: We implement a simplified version of the Next model which only takes into account the person's trajectory and the person-object information, keeping all other settings the same. In this way, the input of LiteNext is the same as ours.
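    The linear baseline in 1) can be reproduced in a few lines (an illustrative least-squares fit; the exact parameterization in [10] may differ): fit x(t) and y(t) linearly over the observed frames and extrapolate into the prediction horizon.

```python
import numpy as np

def linear_predict(observed, n_future):
    """Fit x(t), y(t) by least squares over observed frames and extrapolate."""
    T = len(observed)
    t_obs = np.arange(T)
    t_fut = np.arange(T, T + n_future)
    coeffs = [np.polyfit(t_obs, observed[:, k], deg=1) for k in range(2)]
    return np.stack([np.polyval(c, t_fut) for c in coeffs], axis=1)

observed = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 1.5]])
future = linear_predict(observed, n_future=2)  # continues the straight line
```

    As noted in the quantitative results, this baseline cannot represent curved trajectories, which is exactly where it accumulates error.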

    C. Experiments on ETH and UCY

    The ETH dataset consists of 750 pedestrians and contains two sets (ETH and HOTEL). The UCY dataset contains 786 pedestrians and is comprised of three sets (ZARA1, ZARA2 and UNIV). These datasets contain rich real-world scenarios, including walking in company, giving way to each other and lingering about, which are full of challenges. The number of tags, including frame, pedestrian, group, and obstacle, in the datasets is summarized in Table I.

    TABLE I The Number of Tags

    1) Setup: Following the same experimental setup as [10], we use the leave-one-out strategy, that is, training on 4 sets and testing on the remaining set. Based on the sampling period of 0.4 s, we observe the trajectory for 8 frames (3.2 s) and predict the next 12 frames (4.8 s), namely Tobs = 8, Tobs+1 = 9, and Tpred = 20.

    TABLE II Quantitative Results of Different Methods on ETH & UCY Datasets. We Use ADE and FDE to Measure Prediction Error in Meters, and Lower Is Better

    2) Implementation Details: In the interaction module, a multi-layer perceptron with 3 layers is employed. The node sizes of these layers are set to 32, 64, and 128, respectively, and the dimension of the embedding layer is 128. The LSTM hidden size d is set to 256. A batch size of 64 is used and the model is trained for 100 epochs. We use the Adam optimizer with an initial learning rate of 0.001. To facilitate training, we clip the gradients at a value of 10. A single NVIDIA GeForce GTX Titan-Xp GPU is used for training.

    3) Quantitative Results: We report the ADE and FDE results for all methods across the crowd sets in Table II. The linear regressor presents high prediction errors since it cannot model curved trajectories, unlike the other methods, which employ LSTM networks to overcome this deficiency. Moreover, the Next model and ours are better than S-LSTM as they consider global human-human interaction; S-LSTM does not perform as well as expected since it fails to consider global context.

    Our ST-Attention achieves state-of-the-art performance on the ETH and UCY benchmarks. Throughout Table II, the evaluation error on the single ETH crowd set is much higher than those on the other sets. This crowd set contains many pedestrians in a narrow passage who walk in disorder with different velocities and headings. LiteNext receives the same input information, but ST-Attention performs significantly better, especially on the ETH set. This verifies the effectiveness of our method. Compared with Next, ST-Attention drops two input feature channels and has a lighter network structure. At the same time, ST-Attention is still competitive and achieves results no worse than Next. This is because the focal attention [29] used in Next is hard-wired and cannot make full use of the input features.

    Computational time is crucial for satisfying real-world applications. For instance, real-time prediction of pedestrian trajectories in front of vehicles is necessary in autonomous driving. In Table III, we make a comparison with other models in terms of speed. We can see that S-LSTM has fewer parameters, but its computational time is not as fast as expected. The decrease in speed is because S-LSTM adopts a recursive method to predict future trajectories, which means it needs to compute occupancy grids to implement social pooling at each time instant. Compared with Next, our method reduces the number of parameters by almost half, since ST-Attention uses fewer input channels. Correspondingly, our method is 2.5x faster than Next, taking about 0.02 s provided that the input features are obtained. Due to the efficient interaction model, our model is also faster than LiteNext.

    TABLE III Speed Comparison With Baseline Models. Our Method, With Fewer Parameters, Gets a 2.5x Speedup Compared to Next

    Fig. 5. The visualization results of our ST-Attention predicting paths on the ETH & UCY datasets: history trajectory (orange), ground truth (red), predicted trajectory for the Next model (blue) and our model (green). The first three rows show some successful cases and the last row presents some failure examples. We can see that in most cases our predicted trajectory coincides with the ground truth.

    4) Qualitative Results: We illustrate the qualitative results of the Next model and our ST-Attention with visualizations for comparison in Fig. 5. The results demonstrate the effectiveness of our model. When a person meets a couple, as shown in Figs. 5(a1) and 5(a3), ST-Attention is able to pass through the gap while the Next model might collide with one of them. In the second row of Fig. 5, we present some scenes where people walk in a group, and ST-Attention is able to jointly predict their trajectories with lower error than the Next model. We would like to note that in Fig. 5(c2), the path predicted by the Next model passes through the wall even though the Next model encodes scene semantic information, which testifies that focal attention [29] cannot fully utilize the rich visual features. In the last row of Fig. 5, several failure cases are shown. In Fig. 5(d1), when a pedestrian waits at the station, he moves slightly as he paces back and forth. ST-Attention assumes he will have a small movement along the previous trend while the ground truth has a sudden turn. In Fig. 5(d2), people move in opposite directions; ST-Attention predicts that the target human slows down to avoid collision but actually he gives way toward the right. In Fig. 5(d3), ST-Attention predicts a change of direction toward a wide space whereas the pedestrian goes straight ahead. Although the predicted paths do not correspond to the ground truth in the failure cases, the outputs still belong to acceptable trajectories that pedestrians may take.

    D. Experiments on ActEV/VIRAT

    ActEV/VIRAT [14] is a benchmark dataset devoted to human activity research. This dataset is natural, realistic and challenging for video surveillance in terms of its resolution and diversity of scenes, and includes more than 400 videos at 30 frames/s.

    1) Implementation Details: In this experiment, the training set includes 57 videos, the validation set includes 5 videos, and 55 videos are used for testing. In order to keep consistent with the baseline models on ETH & UCY, the activity label is not used in this experiment. Other parameter settings are the same as those for ETH & UCY.

    2) Results: Table IV shows quantitative results compared to the baseline methods. Combined with Table III, we can see that our model outperforms the other methods with lightweight parameters. We also provide visualizations to reflect the performance of the algorithm intuitively. As shown in Fig. 6, our predicted trajectory better matches the ground truth. When a person walks at a normal pace, our model is able to predict his future path, including situations where the person turns to change the direction of the trajectory, such as Figs. 6(b) and 6(c). In Fig. 6(c), the historical trajectory of the target human turns right and then left with a curvature. The Next model predicts that the human will continue to turn left, but in fact he turns right to the main road, as our model predicts. However, our model performs poorly when a human greatly changes direction due to obvious external interference or other purposes, such as the failure cases in Fig. 6. Even though such cases are hard for our model, better performance is achieved compared with the other methods.

    TABLE IV Quantitative Results on ActEV/VIRAT.We Report ADE and FDE in Pixel Values

    Fig. 6. The visualization results on the ActEV/VIRAT dataset: history trajectory (orange), ground truth (red), predicted trajectory for the Next model (blue) and our model (green).

    TABLE V Ablation Experiments of the Interaction Module and the Attention Module.We Report ADE and FDE (ADE/FDE) on ETH&UCY (Meter Values) and ActEV/VIRAT (Pixel Values)

    E. Ablation Study

    To explore the role of each module in trajectory prediction, we conduct an ablation study on the ETH & UCY and ActEV/VIRAT datasets.

    1) Effectiveness of the Interaction Module: To verify the importance of the interaction module, we train a network with the interaction branch removed. A comparative experiment with and without the interaction module is then performed, and the results are shown in Table V. We can see that better performance is achieved by the model with the interaction module. This is because the interaction module measures the influence of neighbors on the target.

    2) Effectiveness of the Attention Module: To evaluate the effectiveness of our attention module, we make a comparison with focal attention [29], which is not learnable. The comparison results are shown in Table V, and our attention module performs better than focal attention. This is because our soft attention can automatically learn the weights while focal attention cannot, which suggests our attention module is effective for trajectory prediction.

    V. Conclusion

    In this paper, we present ST-Attention, a spatial-temporal attention model for trajectory prediction. To explore spatial affinity, we use an MLP and the inner product to assign different weights to all pedestrians. To make full use of temporal affinity, the key component named the attention module is introduced, which is quite efficient. Our model is fully differentiable and accurately predicts the future trajectory of the target; it automatically learns the importance of historical trajectories at different time instants and weights the influence of nearby people on the target. Comprehensive experiments on two publicly available datasets demonstrate that ST-Attention achieves competitive performance.

    Our approach is designed for human trajectory prediction. Future work can extend the model to vehicle trajectory prediction in view of the many similarities between the two tasks. Meanwhile, we should also distinguish their differences. For example, pedestrians can turn back easily while it is difficult for vehicles, and vehicles can change speed rapidly while pedestrians cannot. In particular, it is critical in the autonomous driving field to predict human trajectories jointly with vehicle trajectories. Besides, intelligent optimization algorithms [41], [42] can be used to learn all the parameters.
