
    A Spatial-Temporal Attention Model for Human Trajectory Prediction

    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 4 (published online 2020-08-05 09:42:46)

    Xiaodong Zhao, Yaran Chen, Jin Guo, and Dongbin Zhao

    Abstract—Human trajectory prediction is essential and promising in many related applications. It is challenging due to the uncertainty of human behaviors, which are influenced not only by the person's own intention but also by the surrounding environment. Recent works based on long-short term memory (LSTM) models have brought tremendous improvements to the task of trajectory prediction. However, most of them focus on the spatial influence of humans but ignore the temporal influence. In this paper, we propose a novel spatial-temporal attention (ST-Attention) model, which studies spatial and temporal affinities jointly. Specifically, we introduce an attention mechanism to extract temporal affinity, learning the importance of historical trajectory information at different time instants. To explore spatial affinity, a deep neural network is employed to measure the different importance of the neighbors. Experimental results show that our method achieves competitive performance compared with state-of-the-art methods on publicly available datasets.

    I. Introduction

    HUMAN trajectory prediction aims to predict a person's future path according to the history trajectory. The trajectory is represented by a set of sampled consecutive location coordinates. Trajectory prediction is a core building block for autonomous moving platforms, and prospective applications include autonomous driving [1]–[3], mobile robot navigation [4], assistive technologies [5], and smart video surveillance [6].

    When a person is walking in a crowd, the future path is determined by various factors such as intention, social conventions and the influence of nearby people. For instance, people prefer to walk along the sidewalk rather than crossing the highway. A person is able to adjust his path by estimating the future paths of the people around him, and those people do the same thing, which in turn affects the target. Human trajectory prediction becomes an extremely challenging problem due to such complex behaviors. Benefiting from powerful deep learning [7], [8], human trajectory prediction has gained significant improvement in the last few years. Yagi et al. [5] present a multi-stream convolution-deconvolution architecture for first-person videos, which verifies that pose, scale, and ego-motion cues are useful for future person localization. Pioneering works [9], [10] show that long-short term memory (LSTM) has the capacity to learn general human movements and predict future trajectories.

    Although tremendous efforts have been made to address these challenges, there are still two limitations:

    1) The historical trajectory information at different time instants has different levels of influence on the target human, which is ignored by most works. However, it plays an important role in the prediction of the future path. As for the target human, the latest trajectory information usually has a higher level of influence on the future path, as shown in Fig. 1(a). As for the neighbors, the trajectory information will have a great impact as long as the distance to the target is close, as shown in Fig. 1(b). Thus, the historical trajectory information at different time instants ought to be given different weights. The attention mechanism is capable of learning different weights according to this importance.

    Fig. 1. Illustration of the influences at different time instants. (a) As for the target human (PT), the trajectory information at time t−1 and t may affect the future path more compared with that at time t−2 and t−3. (b) As for the neighbor (PN), he turns away from PT at time t. The trajectory information of PN at time t−1 has a greater influence on PT considering that PT is not allowed to occupy the position where PN has just left.

    2) Most trajectory prediction methods fail to capture the global context of the environment. Some methods capture the global context through an annotation text recording people's location coordinates provided by the dataset. However, the text annotates only a few people, so it is not truly global information. A pre-trained detection model [11] can be used to extract all people in the image rather than relying on the annotation text.

    In this work, we propose a spatial-temporal attention network to predict human trajectories in the future. We adopt an LSTM called the ego encoder to model the ego motion of the target human. We also consider all people in the scene by using the pre-trained detection model to extract the positions of neighbors. The positions are fed into a multi-layer perceptron (MLP) to obtain high-dimensional features. Then the inner product is used to acquire the weights which measure the importance of the neighbors to the target. Further, another LSTM called the interaction encoder is followed to model human-human interaction. It is noted that in most existing models, the trajectory information at different time instants receives equal treatment, which is not suitable for complex trajectory prediction. Inspired by this, we introduce an attention mechanism to obtain the weights, which represent the levels of influence of trajectory information at different time instants. Finally, an LSTM decoder is employed to generate human trajectories for the next few frames.
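    The neighbor-weighting step described above (an MLP embedding followed by an inner product and a normalization) can be sketched numerically as follows. This is a minimal sketch: the single-layer stand-in for the MLP, the layer sizes, the random inputs and the softmax normalization are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d_in, d_hid = 2, 16                            # toy sizes (assumed, not from the paper)
W = rng.normal(scale=0.3, size=(d_hid, d_in))  # one-layer stand-in for the MLP

target = W @ np.array([1.0, 0.5])              # embedded target position
neighbors = rng.normal(size=(4, d_in)) @ W.T   # four embedded neighbor positions

# inner products measure each neighbor's importance to the target
weights = softmax(neighbors @ target)
interaction = weights @ neighbors              # weighted sum -> interaction feature
print(interaction.shape)                       # (16,)
```

    The softmax keeps the neighbor weights positive and summing to one, so the interaction feature is a convex combination of the neighbor embeddings.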

    Our contributions can be summarized as follows:

    1) We introduce an attention mechanism to automatically learn the weights. They dynamically determine which time instant's trajectory information we should pay more attention to.

    2) We utilize a pre-trained detection model [11] to capture global context instead of retrieving local context from the dataset; then an MLP and the inner product are used to weight different neighbors.

    3) Based on the above two ideas, a spatial-temporal attention (ST-Attention) model is proposed to tackle the challenges of trajectory prediction. ST-Attention achieves competitive performance on two benchmark datasets: ETH & UCY [12], [13] and ActEV/VIRAT [14].

    II. Related Work

    A. Traditional Approaches for Trajectory Prediction

    The Kalman filter [15], [16] can be deployed to forecast the future trajectory in the case of linear acceleration, and has proven to be an efficient recursive filter. It is capable of estimating the state of a dynamic system from a series of incomplete and noisy measurements, especially in the analysis of time sequences. Williams [17] proposes to use a Gaussian process distribution to estimate motion parameters like the velocity and the angle offset, from which a motion pattern of the pedestrian is built. Further, researchers began to associate energy with pedestrians. One representative work is the social forces model proposed by Helbing and Molnár [18], which transforms the attraction and the exclusion between pedestrians and obstacles into energy to predict the pedestrian trajectory. The attractive force is used to guide the target to the destination, and the repulsive force is used to keep a safe distance and avoid collisions. Subsequently, some methods [19] fit the parameters of the energy functions to improve the social forces model.

    However, the above methods rely on hand-crafted features. This becomes an obstacle to advancing the performance of trajectory prediction, given that these methods can capture simple interactions but fail in complex scenarios. In contrast, data-driven methods based on convolutional neural networks (CNN) and recurrent neural networks (RNN) overcome the above limitations of traditional ones.

    B. CNN Models for Trajectory Prediction

    CNN [20] has proven to be powerful at extracting rich context information, which is a salient cue for the trajectory prediction task. Behavior-CNN [21] employs a large receptive field to model the walking behaviors of pedestrians and learn the location information of the scene. Yagi et al. [5] develop a deep neural network that utilizes the pose, location-scale, and ego-motion cues of the target human, but they do not consider human-human interaction. Huang et al. [22] introduce the spatial matching network and the orientation network. The former generates a reward map representing the reward of every pixel in the scene image, and the latter outputs an estimated facing orientation. However, this method can only consider the static scene but not the dynamic information of pedestrians.

    C. RNN Models for Trajectory Prediction

    RNN [23] has also proven to be efficient at dealing with time sequence tasks. RNN models have shown dominant capability in various domains like neural machine translation [24], speech recognition [25], generating image descriptions [26] and DNA function prediction [27]. Some recent works have attempted to use RNN to forecast trajectories. Social-LSTM [9] introduces a social pooling layer to learn classic interactions that happen among pedestrians. But this pooling solution fails to capture global context. Besides, Social-LSTM predicts the distribution of the trajectory locations instead of directly predicting the locations. This makes the training process difficult while the sampling process is non-differentiable. Gupta et al. [10] propose Social-GAN, combining approaches for trajectory prediction and generative adversarial networks. But the performance is not improved obviously when sampling only once at test time. Liang et al. [28] present Next, an end-to-end learning framework extracting rich visual information to recognize pedestrian behaviors. Furthermore, focal attention [29] is employed in the framework. It was originally proposed to tackle visual question answering, projecting different features into a low-dimensional space. But the focal attention used in Next is hard-wired and fails to learn from the data. Xu et al. [30] design a crowd interaction deep neural network which considers all pedestrians in the scene as well as their spatial affinity for trajectory prediction. However, they ignore the influence of temporal affinity. In our work we take into account both spatial affinity and temporal affinity.

    Fig. 2. Overview of proposed ST-Attention. The model utilizes the encoder to extract the ego feature Ego(h_1, ..., h_{T_obs}) and the interaction feature Inter(h_1, ..., h_{T_obs}) from L_t^i and B_t^j respectively (t ∈ [1, T_obs]); then the following decoder outputs the future path L_{t'}^i (t' ∈ [T_obs+1, T_pred]).

    D. Attention Approaches for Trajectory Prediction

    Some approaches for trajectory prediction employ the attention mechanism to differentiate the influence of neighbors on the target. Su et al. [31] update the LSTM memory cell state with a coherent regularization, which computes the pairwise velocity correlation to weight the dependency between the trajectories. Further, a social-aware LSTM unit [32] is proposed, which incorporates the nearby trajectories to learn a representation of the crowd dynamics. Zhang et al. [33] utilize a motion gate to highlight the important motion features of neighboring agents. Sadeghian et al. [34] apply soft attention similarly to [26], and emphasize the salient regions of the image and the more relevant agents. However, the above works focus on the spatial influence of the neighboring agents, but ignore the temporal influence of the agents, which is also valuable for human trajectory prediction. The attention mechanism in our model connects the decoder state and the temporal encoder state, allowing it to give an importance value to each time instant's trajectory state of the neighboring humans and the target human.

    E. Pedestrian Re-identification for Trajectory Extraction

    With the advancement of pedestrian re-identification (Re-ID) [35], the same person with different appearances can be identified accurately, which facilitates the extraction of the human trajectory. Köstinger et al. [36] consider that the difference among various image features of the same pedestrian conforms to a Gaussian distribution, and propose keep-it-simple-and-straightforward metric learning (KISSME). However, KISSME meets the small sample size problem in calculating various classes of covariance matrices, which blocks the improvement of Re-ID performance. Han et al. [37] verify that virtual samples can alleviate the small sample size problem of KISSME. The re-extraction process of virtual sample features is eliminated by a genetic algorithm, which greatly improves the matching rate of pedestrian Re-ID. Further, the KISS+ [38] algorithm is proposed to generate virtual samples by using an orthogonal basis vector, which is very suitable for real-time pedestrian Re-ID in open environments due to its advantages of simplicity, fast execution and easy operation. These works are of great significance to human trajectory prediction.

    III. Method

    A person adjusts his trajectory based on the definite destination in mind and the influence of neighbors when walking in a crowd. On the one hand, the future trajectory of the target human depends on historical trajectories at different time instants, which we refer to as temporal affinity. On the other hand, the future trajectory hinges on the distances, velocities and headings of neighbors, which we refer to as spatial affinity. This idea motivates us to study trajectory prediction jointly with temporal and spatial affinities. In this section, we present our spatial-temporal attention model to tackle the problem.

    A. Problem Formulation

    B. Overview

    The overall network architecture is illustrated in Fig. 2. Our model employs an encoder-decoder framework. Specifically, the encoder consists of an ego module and an interaction module, and the decoder includes an attention module and a prediction module. We feed the locations into the ego module to get the ego feature, which is used for modeling the motion of the target. At the same time, the observed boxes are fed into the interaction module to get the interaction feature, which is used for exploring the relationships among neighbors. The above feature vectors are weighted and summed along the temporal dimension by the attention module. Then the prediction module employs an LSTM to generate the future trajectory. In the rest of this section, we will detail the above modules.

    C. Ego Module

    The ego module aims at exploring the intention of the target human, which can be reflected by motion characteristics such as velocity, acceleration and direction. Due to its powerful ability to address sequence data, an LSTM is chosen as the ego module architecture. For the pedestrian p_i, we embed the location into a vector e_t. Then the embedding is fed into the ego encoder, whose hidden state h_t is computed by
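    The recurrence above can be sketched as follows: a linear embedding of each observed location is fed step by step into an LSTM cell, whose final hidden state summarizes the ego motion. This is a toy NumPy sketch, not the paper's implementation; the hidden size, the gate layout of the stacked parameters and the random inputs are all assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # one LSTM step; the stacked gate layout [i, f, o, g] is our own choice
    d = h.size
    z = W @ x + U @ h + b
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * d:(k + 1) * d])) for k in range(3))
    g = np.tanh(z[3 * d:])
    c_new = f * c + i * g          # update the cell state
    h_new = o * np.tanh(c_new)     # emit the hidden state h_t
    return h_new, c_new

rng = np.random.default_rng(0)
d, T_obs = 8, 5                               # toy hidden size and observation length
W_e = rng.normal(scale=0.1, size=(d, 2))      # linear embedding of a location (x, y)
W = rng.normal(scale=0.1, size=(4 * d, d))
U = rng.normal(scale=0.1, size=(4 * d, d))
b = np.zeros(4 * d)
h = np.zeros(d)
c = np.zeros(d)
for t in range(T_obs):                        # encode the observed trajectory
    e_t = W_e @ rng.normal(size=2)            # e_t: embedded location at time t
    h, c = lstm_step(e_t, h, c, W, U, b)
print(h.shape)                                # (8,)
```

    In practice a deep-learning framework's LSTM layer would replace this hand-rolled cell; the sketch only makes the per-step data flow explicit.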

    IV. Performance Analysis

    In this section, we analyze our model on pedestrian trajectory datasets based on the world plane and the image plane. Specifically, we evaluate meter values on the ETH [12] and UCY [13] datasets, and report pixel values on the ActEV/VIRAT dataset [14]. Experimental results demonstrate that our model performs well on both the world plane and the image plane.

    A. Evaluation Metrics

    Similar to prior works [10], [28], we use two metrics to report prediction error:

    1) Average Displacement Error (ADE): The average L2 distance between the ground truth coordinates and the predicted coordinates over all predicted time instants.

    2) Final Displacement Error (FDE): The L2 distance between the true points and the predicted points at the final time instant T_pred.
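    The two metrics can be written directly in code. The following is a small self-contained sketch; the toy trajectories are illustrative, not data from the evaluated datasets.

```python
import numpy as np

def ade(pred, gt):
    # mean L2 distance over all predicted time instants
    return np.linalg.norm(pred - gt, axis=-1).mean()

def fde(pred, gt):
    # L2 distance at the final time instant T_pred
    return np.linalg.norm(pred[-1] - gt[-1])

gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])    # toy ground truth path
pred = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])  # toy predicted path
print(round(ade(pred, gt), 3), fde(pred, gt))          # → 1.333 2.0
```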

    B. Baseline Methods

    We compare the results of our model with the following state-of-the-art methods under the same conditions:

    1) Linear [10]: A linear regression model whose parameters are determined by minimizing the L2 distance.

    2) S-LSTM [9]: Alahi et al. [9] build one LSTM for each person and share information between the LSTMs through a social pooling layer. At each time instant t during the prediction period, the LSTM hidden state represents a bivariate Gaussian distribution described by the mean μ, standard deviation σ and correlation coefficient ρ. Then the predicted trajectory at time t+1 is sampled from the distribution.

    3) Next [28]: Liang et al. [28] encode a person through rich visual features rather than oversimplifying the human as a point. The person behavior module is proposed to capture visual information, modeling appearance and body movement. The person interaction module is used to capture information about other objects and the surroundings. The future trajectory is predicted by an LSTM with focal attention [29].

    4) LiteNext: We implement a simplified version of the Next model which takes into account only the person's trajectory and the person-objects. Other settings are kept the same. In this way, the input of LiteNext is the same as ours.

    C. Experiments on ETH and UCY

    The ETH dataset consists of 750 pedestrians and contains two sets (ETH and HOTEL). The UCY dataset embodies 786 pedestrians and is comprised of three sets (ZARA1, ZARA2 and UNIV). These datasets contain rich real-world scenarios including walking in company, giving way to each other and lingering about, which are full of challenges. The number of tags, including frame, pedestrian, group, and obstacle, in the datasets is summarized in Table I.

    TABLE I The Number of Tags

    1) Setup: Following the same experimental setup as [10], we use the leave-one-out strategy, that is, training on four sets and testing on the remaining set. Based on the sampling period of 0.4 s, we observe the trajectory for 8 frames (3.2 s) and predict the next 12 frames (4.8 s), namely T_obs = 8, T_obs+1 = 9, and T_pred = 20.
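    In code, this observation/prediction split amounts to simple slicing of each sampled trajectory. A sketch, assuming one trajectory stored as a list of (x, y) points at 0.4 s intervals:

```python
T_obs, T_pred = 8, 20                            # frame indices from the setup
step = 0.4                                       # sampling period in seconds
traj = [(step * t, 0.0) for t in range(T_pred)]  # toy straight-line trajectory
observed, future = traj[:T_obs], traj[T_obs:T_pred]
print(len(observed), len(future))                # → 8 12
```

    The 8 observed frames cover 3.2 s and the 12 predicted frames cover 4.8 s, matching the setup above.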

    TABLE II Quantitative Results of Different Methods on ETH & UCY Datasets. We Use ADE and FDE to Measure Prediction Error in Meter Values, and Lower Is Better

    2) Implementation Details: In the interaction module, a multi-layer perceptron with 3 layers is employed. The node sizes in these layers are set to 32, 64 and 128 respectively, and the dimension of the embedding layer is 128. The LSTM hidden size d is set to 256. A batch size of 64 is used and the number of training epochs is 100. We use the Adam optimizer with an initial learning rate of 0.001. To facilitate training, we clip the gradients at a value of 10. A single NVIDIA GeForce GTX Titan-Xp GPU is used for training.
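    The interaction MLP described above can be sketched as follows. The text gives only the layer sizes (32, 64, 128), so the ReLU activation, the 2-D input and the random weights here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [2, 32, 64, 128]               # input (x, y) followed by the 3 MLP layers
Ws = [rng.normal(scale=0.1, size=(n_out, n_in))
      for n_in, n_out in zip(sizes, sizes[1:])]

def mlp(x):
    # forward pass through the 3-layer perceptron (ReLU is an assumption)
    for W in Ws:
        x = np.maximum(W @ x, 0.0)
    return x

print(mlp(np.array([1.0, 2.0])).shape)  # (128,)
```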

    3) Quantitative Results: We report the experimental results for ADE and FDE for all methods across the crowd sets in Table II. The linear regressor presents high prediction errors since it cannot model curved trajectories, unlike the other methods, which employ LSTM networks to overcome this deficiency. Moreover, the Next model and ours are better than S-LSTM as they consider global human-human interaction, and S-LSTM does not perform as well as expected since it fails to consider global context.

    Our ST-Attention achieves state-of-the-art performance on the ETH and UCY benchmarks. Throughout Table II, the evaluation error on the single ETH crowd set is much higher than those on the other sets. This crowd set contains a lot of pedestrians in a narrow passage who walk in disorder with different velocities and headings. Compared with LiteNext, which obtains the same input information, ST-Attention performs significantly better, especially on the ETH set. This verifies the effectiveness of our method. Compared with Next, ST-Attention omits two input feature channels and has a lighter network structure. At the same time, ST-Attention is still competitive and achieves results no worse than Next. This is because the focal attention [29] used in Next is hard-wired and cannot make full use of the input features.

    Computational time is crucial to satisfy real-world applications. For instance, real-time prediction of pedestrian trajectories in front of vehicles is necessary in autonomous driving. In Table III, we make a comparison with other models in terms of speed. We can see that S-LSTM has fewer parameters, but its computational time is not as fast as expected. The decrease in speed is because S-LSTM adopts a recursive method to predict future trajectories, which means S-LSTM needs to compute occupancy grids to implement social pooling at each time instant. Compared with Next, our method reduces the number of parameters by almost half, since ST-Attention uses fewer input channels. Correspondingly, our method is 2.5x faster than Next, taking about 0.02 s on the premise that the inputs are obtained. Due to the efficient interaction model, our model is also faster than LiteNext.

    TABLE III Speed Comparison With Baseline Models. Our Method With Fewer Parameters Gets a 2.5x Speedup Compared to Next

    Fig. 5. The visualization results of our ST-Attention predicting paths on the ETH & UCY datasets: history trajectory (orange), ground truth (red), predicted trajectory for the Next model (blue) and our model (green). The first three rows show some successful cases and the last row presents some failure examples. We can see that in most cases our predicted trajectory coincides with the ground truth.

    4) Qualitative Results: We illustrate the qualitative results of the Next model and our ST-Attention with visualizations to make a comparison in Fig. 5. The results demonstrate the effectiveness of our model. When a person meets a couple as shown in Fig. 5(a1) and Fig. 5(a3), ST-Attention is able to pass through the gap while the Next model might collide with one of them. In the second row of Fig. 5, we present some scenes where people walk in a group, and ST-Attention is able to jointly predict their trajectories with lower error than the Next model. We would like to note that in Fig. 5(c2), the path predicted by the Next model passes through the wall even though the Next model encodes scene semantic information, which testifies that focal attention [29] cannot fully utilize the rich visual features. In the last row of Fig. 5, several failure cases are shown. In Fig. 5(d1), when a pedestrian waits at the station, he moves slightly as he paces back and forth. ST-Attention assumes he will have a small movement along the previous trend while the ground truth has a sudden turn. In Fig. 5(d2), people move in opposite directions; ST-Attention predicts that the target human slows down to avoid collision but actually he gives way toward the right. In Fig. 5(d3), ST-Attention predicts a change of direction toward a wide space whereas the pedestrian goes straight ahead. Although the predicted paths do not correspond to the ground truth in the failure cases, the outputs still belong to acceptable trajectories that pedestrians may take.

    D. Experiments on ActEV/VIRAT

    ActEV/VIRAT [14] is a benchmark dataset devoted to human activity research. This dataset is natural, realistic and challenging in the field of video surveillance in terms of its resolution and diversity of scenes, and includes more than 400 videos at 30 frames/s.

    1) Implementation Details: In the experiment, the training set includes 57 videos and the validation set includes 5 videos. Besides, 55 videos are used for testing. In order to keep consistent with the baseline models on ETH & UCY, the activity label is not used in this experiment. Other parameter settings are the same as those for ETH & UCY.

    2) Results: Table IV shows quantitative results compared to the baseline methods. Combined with Table III, we can see that our model outperforms other methods with lightweight parameters. We also use visualization to reflect the performance of the algorithm intuitively. As shown in Fig. 6, our predicted trajectory better matches the ground truth. When a person walks at a normal pace, our model is able to predict his future path, including the situation where the person turns to change the direction of the trajectory, as in Fig. 6(b) and Fig. 6(c). In Fig. 6(c), the historical trajectory of the target human turns right and then left with a curvature. The Next model predicts that the human will continue to turn left, but in fact he turns right to the main road, as our model predicts. However, our model performs poorly when the human makes a great change of direction due to obvious external interference or other purposes, as in the failure cases in Fig. 6. Even though such cases are hard for our model, better performance is achieved compared with other methods.

    TABLE IV Quantitative Results on ActEV/VIRAT.We Report ADE and FDE in Pixel Values

    Fig. 6. The visualization results on the ActEV/VIRAT dataset: history trajectory (orange), ground truth (red), predicted trajectory for the Next model (blue) and our model (green).

    TABLE V Ablation Experiments of the Interaction Module and the Attention Module. We Report ADE and FDE (ADE/FDE) on ETH & UCY (Meter Values) and ActEV/VIRAT (Pixel Values)

    E. Ablation Study

    To explore the role of each module in trajectory prediction, we conduct an ablation study on the ETH & UCY and ActEV/VIRAT datasets.

    1) Effectiveness of the Interaction Module: To verify the importance of the interaction module, we train a network with the interaction branch removed. Then a comparative experiment with and without the interaction module is done and the results are shown in Table V. We can see that better performance is achieved by the model with the interaction module. This is because the interaction module measures the influence of neighbors on the target.

    2) Effectiveness of the Attention Module: To evaluate the effectiveness of our attention module, we make a comparison with focal attention [29], which is not learnable. The comparison results are shown in Table V, and our attention module performs better than focal attention. This is because our soft attention can automatically learn the weights while focal attention cannot, which suggests our attention module is effective for trajectory prediction.

    V. Conclusion

    In this paper, we present ST-Attention, a spatial-temporal attention model for trajectory prediction. To explore spatial affinity, we use an MLP and the inner product to assign different weights to all pedestrians. To make full use of temporal affinity, the key component, an attention module, is introduced, which is quite efficient. Our model is fully differentiable and accurately predicts the future trajectory of the target; it automatically learns the importance of historical trajectories at different time instants and weights the influence of nearby people on the target. Comprehensive experiments on two publicly available datasets have been conducted to demonstrate that ST-Attention achieves competitive performance.

    Our approach is designed for human trajectory prediction. Future work can extend the model to vehicle trajectory prediction in view of the many similarities between the two tasks. Meanwhile, we should also distinguish their differences. For example, pedestrians can turn back easily while it is difficult for vehicles, and vehicles can change speed rapidly while pedestrians cannot. In particular, it is critical in the autonomous driving field to predict the human trajectory jointly with the vehicle trajectory. Besides, intelligent optimization algorithms [41], [42] can be used to learn all the parameters.
