
Standardization of Robot Instruction Elements Based on Conditional Random Fields and Word Embedding

2019-11-06 06:28:30

    Hengsheng Wang, Zhengang Zhang, Jin Ren and Tong Liu

(1. College of Mechanical & Electrical Engineering, Central South University, Changsha 410083, China; 2. State Key Laboratory for High Performance Complex Manufacturing, Changsha 410083, China)

Abstract: Natural language processing has made great progress recently, and controlling robots with spoken natural language has become feasible. With the reliability of this kind of control in mind, a confirmation process for natural language instructions should be included before the robot carries them out autonomously; a prototype dialog system was designed for this purpose, which raised the problem of standardization for natural and understandable language interaction. In the application background of remotely navigating a mobile robot inside a building with Chinese natural spoken language, and considering that a place name, an important navigation element in instructions, can be expressed with different lexical terms in spoken language, this paper proposes a model for substituting the different alternatives of a place name with a standard one (called standardization). First, a CRF (Conditional Random Fields) model is trained to label the terms that require standardization; then a trained word embedding model represents lexical terms as numerical vectors. In the vector space, a similarity between lexical terms is defined and used to find the term most similar to the one picked out for standardization. Experiments show that the proposed method works well and that the dialog system's responses confirming the instructions are natural and understandable.

    Keywords: word embedding; Conditional Random Fields (CRFs); standardization; human-robot interaction; Chinese Natural Spoken Language (CNSL); Natural Language Processing (NLP)

    1 Introduction

People want robots to be more human-like in almost all aspects, communication and interaction included. In Ref.[1], a scenario was proposed of remotely directing a mobile robot in disaster sites for rescue jobs with (Chinese) Natural Spoken Language (CNSL), in which a cascaded CRF model was used to extract navigation elements from natural language instructions so that the robot understands what the instruction is about. The extracted navigation elements form the Structured Navigation Instruction (SNI) for robots. The intention was to train robots rather than humans, to make the interaction process easier and more natural. This certainly does not mean that the robot's understanding of an instruction always matches perfectly what it really means. For example, the command text from voice recognition might be incorrect, so disambiguation and confirmation are needed. We have been working on a dialog system that, through turns of asking and answering, confirms (Chinese) Natural Spoken Language Instructions (CNSLI) during human-robot interaction with CNSL and makes sure that the robot really knows what to do before carrying out the instruction. This system is called the Dialog system of Human-Robot Interaction through Chinese Natural Spoken Language, or DiaHRICNSL for short. The dialog system[2-3] for human-robot interaction through natural spoken language differs from those for ticket booking in travel agencies, which have a fixed procedure for collecting information about destination, transportation, accommodation, etc., and also from ordinary chatbots, which usually have only one turn of asking and answering. More flexible and diverse interactions are expected in DiaHRICNSL. This paper focuses on one particular problem in the development of DiaHRICNSL, which comes from the fact that people usually speak in different ways to express the same meaning.
For example, in an instruction like “走到前面的路口” (go forward to the intersection ahead), the destination place “路口” (intersection) can be expressed in different ways in CNSL, such as “岔道” (crossroad), “口子處” (cross place), “拐角” (corner), or sometimes even more specifically as “十字路口” (cross intersection), “丁字路口” (T-intersection), etc.; the action word “走” (go or walk) can likewise be expressed differently as “移動” (move), or even simply as “到……去” (go to). We call this problem STANDARDIZATION of the elements of CNSLI for robot control. After standardization of the elements, the robot should, for example, understand the end place “路口” (intersection) as having the same meaning as “岔道” (crossroad), “口子處” (cross place), “拐角” (corner), etc., or replace all of the latter three with the standardized former one, “路口” (intersection), in the dialog procedure, or randomly use one of them for natural conversation.

A possible way to tackle this problem is to put all the synonymous words in the context of robot control (say, navigation) with CNSL together into a synonym dictionary, but that is tedious and hardly complete. Another method, Approximate String Matching (ASM), is used in some Chinese-language applications to search for similar Chinese character strings, but it does not capture any semantic connections, which are exactly the concern of our system. For instance, the input “北京” (Beijing) may be returned with choices of “北京市” (Beijing City), “北京路” (Beijing Road), “北京餐館” (Beijing Restaurant), “北京烤鴨” (Beijing Roast Duck), etc., which are very different in meaning. And in our situation, place names like “岔道” (crossroad) and “路口” (intersection), “廁所” (toilet) and “洗手間” (washroom), “走廊” (passageway) and “過道” (aisle) have similar meanings but share no Chinese characters. So the ASM approach does not suit our needs.
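The character-level behavior of ASM can be illustrated with Python's standard difflib module (used here only as a stand-in for a dedicated ASM tool, which is not named in this paper): strings sharing characters score high regardless of meaning, while true synonyms with no shared characters score zero.

```python
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> float:
    """Literal character-overlap similarity, an ASM-style baseline."""
    return SequenceMatcher(None, a, b).ratio()

# Shared characters give a high score even when meanings differ...
print(char_similarity("北京", "北京烤鴨"))   # high: shares 北京
# ...while synonyms with no shared characters score zero.
print(char_similarity("岔道", "路口"))       # 0.0
print(char_similarity("廁所", "洗手間"))     # 0.0
```

This is exactly the failure mode described above: semantic closeness and character overlap are unrelated in our place-name vocabulary.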

This paper proposes an approach to tackle this problem, called standardization here. We build a standardized vocabulary for each NE. We extract lexical Terms To Be Standardized (TTBS) from the SNI with a CRF model (Section 2). The TTBS are then replaced with the most suitable standard lexical Terms In Vocabulary (TIV) by comparing the similarity between TTBS and TIV using a word embedding model (Section 3). Section 4 shows experimental results of the methods given in Sections 2 and 3. Conclusions are given in Section 5.

    2 CRF Models for Navigation Instructions

    2.1 Extracting Navigation Elements from Instructions

The outline of extracting navigation elements from CNSLI, proposed in Ref.[1], is shown in Fig.1. From the input Navigation Instruction (NI) we finally get the output SNI. The big circles indicate handling steps; the first (shaded) one uses the free on-line service Jieba to do lexical segmentation and part-of-speech (POS) tagging on the input NI sequence, and the arrows around it indicate the input and output information marked with the words inside the dashed squares. There are slight differences in Fig.1 compared with Ref.[1]: there we had three cascaded CRF layers, but here we have only two. We simplify the NEs (Navigation Elements) from six in Ref.[1], namely Start Place (SP), End Place (EP), Action (AN), Direction (DN), Distance (DC), and Speed (SD), down to four: SP is dropped because it always means the current place in instructions, and AN and DN are combined into a new AN that includes direction information. The third CRF layer, which distinguished SP from EP, is therefore also dropped.

The CNSLI is structured with four NEs, with which the corresponding four slots are to be filled. Fig.2 shows a slot-filling example in which there is no DC element in the instruction, so the DC slot remains empty. NPOS in Fig.1 means Navigation Part Of Speech, defined in Ref.[1], which is the basis for the subsequent procedures, including filling the slots.
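The slot-filling step can be sketched as populating a fixed four-slot structure; the element names follow Section 2.1, and the dictionary layout is only illustrative, not the authors' data structure:

```python
def fill_slots(extracted: dict) -> dict:
    """Fill the four NE slots of an SNI (EP: end place, AN: action incl.
    direction, DC: distance, SD: speed); absent elements stay None."""
    slots = {"EP": None, "AN": None, "DC": None, "SD": None}
    for name, value in extracted.items():
        if name in slots:
            slots[name] = value
    return slots

# As in Fig.2, an instruction without a distance leaves the DC slot empty:
sni = fill_slots({"EP": "路口", "AN": "走到"})
```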

    2.2 Extract TTBS from NEs

Among the four NEs, EP has the greatest variation in description because of the various place names used in instructions, while the other three elements vary relatively little and so are relatively simple. We take EP as the example to demonstrate the standardization procedure.

Place names (used for EP) usually come with attributive words or phrases modifying them, like “旁邊的教室” (the classroom next door), “走廊的盡頭” (the end of the corridor), “空調旁邊的椅子” (the armchair next to the air conditioner). It is the core words, like “教室” (classroom), “走廊” (corridor) and “椅子” (armchair) in the above EPs, that people usually express with different terms and that deserve standardizing. We want to pick out these words as TTBS and ignore the others.

    Fig.1 Procedure of handling from CNSLI to structured navigation instruction

    Fig.2 Filling slots with NEs from structured navigation instructions

We use another CRF model to extract the TTBS from the navigation elements[4-5] (as shown in Fig.3). The features for this CRF model are the token term after segmentation, its POS, and its context, which mainly refers to the interdependencies between the token and nearby terms. The inputs of the feature functions are:

    (1) The observed sequence S, consisting of segmented terms and their POS tags;

    (2) The token's spot i;

    (3) The TTBS tag l_i of the token and the tags l_{i±n} of its adjacent terms (n is the length of the context window that we set).

    Fig.3 TTBS tagging

The feature functions are extracted from navigation instructions via a template file. Take the word “的” (of) from the instruction “到前面的路口左轉” (turn left at the intersection ahead) as a token whose spot is 0; the example template and features are then shown in Table 1.

Table 1 Template and feature

The “row” in %x[row, col] represents the row offset relative to the current token, and “col” is the column index (“col 0” for the word and “col 1” for the POS). The POS tags nd, u, and n indicate noun of direction, auxiliary, and noun, respectively. “/” separates two successive features in one template. The length of the context window is 3 in Table 1; from training practice we found that the best length for this model is 5. Every row in Table 1 generates a feature function, and we collected a training corpus to form feature functions for all sample instructions. The label sequence consists of the four types of tags shown in Table 2. The CRF model is then trained on the feature functions and the corresponding tags.
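For illustration, a CRF++-style template fragment following the %x[row, col] convention of Table 1 might look as follows (an illustrative sketch, not the authors' actual template file; U-prefixed lines are unigram feature templates):

```
# word features at offsets -1, 0, +1 (col 0 = word)
U00:%x[-1,0]
U01:%x[0,0]
U02:%x[1,0]
# POS bigram features (col 1 = POS tag), "/" joins two features
U03:%x[-1,1]/%x[0,1]
U04:%x[0,1]/%x[1,1]
```

Each template line is expanded at every token position into one feature function over the observed sequence and labels.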

    Table 2 Tags of TTBS

As an example of the tags, take the place name “武漢長江大橋” (Wuhan Yangtze River Bridge), which might be incorrectly segmented into several terms: “武漢” (Wuhan), “長江” (Yangtze River) and “大橋” (Bridge). In this case we label them as follows: “武漢” (Wuhan)/B-A, “長江” (Yangtze River)/I-A, “大橋” (Bridge)/I-A, and thanks to this tagging the place name “武漢長江大橋” (Wuhan Yangtze River Bridge) is recognized as a whole in tests.
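The B-A/I-A labeling of an over-segmented place name can be sketched as follows (the function name is hypothetical; it shows only the labeling convention, not the CRF itself):

```python
def tag_place_name(segments: list) -> list:
    """Label the pieces of one over-segmented place name with B-A/I-A
    so the CRF can learn to recover it as a single TTBS span."""
    return [(seg, "B-A" if i == 0 else "I-A")
            for i, seg in enumerate(segments)]

# "武漢長江大橋" wrongly split into three tokens by the segmenter:
tags = tag_place_name(["武漢", "長江", "大橋"])
# [("武漢", "B-A"), ("長江", "I-A"), ("大橋", "I-A")]
```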

The CRF model was trained with CRF++, an open-source toolkit. Table 3 shows the test result for the navigation instruction “沿著玉帶河走到機電樓門口進去到桌子邊上停下” (go along the Yudai River to the entrance of the M&E Building, go inside, and stop by the desk). The maximum probability for each term gives the output tag sequence {N,N,N,N,A,N,N,N,A,N,N}, and the TTBS terms, corresponding to the “A”s in the tag sequence, are “機電樓” (M&E Building) and “桌子” (desk), which are correct.

The POS tags in Table 3, namely p, ns, v, n, nl and nd, indicate preposition, noun of place, verb, noun, noun phrase and noun of direction, respectively.

    3 Word Embedding for Standardization

3.1 Word Embedding and Word2Vec

Word embedding is a method of representing words with numerical vectors, which has the advantage that words and the relations between them can be handled with numerical calculations; put simply, a word embedding is a word vector. In Chinese, the meaningful unit in a sentence is usually a phrase, or fixed word sequence, called a lexical term (or simply a term) in this paper. We represent Chinese lexical terms with numerical vectors by word embedding; the terms are segmented from a sentence using a free web service for lexical segmentation.

Word embedding was proposed by Hinton[6] in 1986. Xu[7] introduced the idea of neural networks into the training of word vectors in 2000. Bengio[8] proposed a multi-layer neural network for model training. Collobert[9] applied word embedding to part-of-speech tagging, semantic role labeling and phrase recognition in NLP. Mnih[10-11] started training word embedding language models with deep learning and proposed a hierarchical method to improve training efficiency. Mikolov[12] started using Recurrent Neural Networks (RNN) for word embedding training in 2010. Huang[13] attempted to make word embeddings contain more semantic information. Word embedding has been used in many NLP tasks such as language translation[14] and text classification[15].

    Table 3 Output of CRFs model

In 2013, Mikolov developed the free Word2Vec software for training word embeddings[16-17], which became a main tool in the research community[18-20]. It is a prediction model using a shallow neural network, based on Continuous Bag-of-Words (CBOW) and Skip-gram, and its training is efficient. Fig.4 shows the prediction of the token word v(w_t) from its context words v(w_{t-2}), v(w_{t-1}), v(w_{t+1}), v(w_{t+2}) based on CBOW; after training, the word vectors are contained in the parameters of the projection layer. v(·) in Fig.4 indicates the one-hot representation of a word over a fixed vocabulary, and w_t indicates the token word.

Apart from CBOW there is the option of the Skip-gram model, which predicts the context words from the token word. The CBOW model is better when training data are scarce, while Skip-gram usually suits abundant training data. Word2Vec also provides the Hierarchical Softmax (HS) and Negative Sampling (NEG) algorithms for an efficient training process.

    Fig.4 CBOW model

    3.2 Training

We collected 314 instruction sentences for robot navigation, plus other material from news, literature and BBS messages, 4017 sentences in all, as our corpus. After preprocessing with the Jieba free software for lexical segmentation, the vocabulary has 41149 lexical terms.

The Word2Vec in the Gensim[21] package was used to train the model, with the following options: vector size (dimension), 150; window size (the maximum context distance), 5; min_count (terms occurring fewer times are ignored), 3; sg (training algorithm selection), 0 for CBOW; hs (choice of HS), 1; negative (choice of NEG), 0; iter (number of training iterations), 10; alpha (initial learning rate), 0.025.
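These options map directly onto the Gensim Word2Vec constructor. A hedged sketch of the call, assuming the Gensim ≥4 keyword names (older versions used `size` and `iter` instead of `vector_size` and `epochs`); `segmented_corpus` stands in for the Jieba-segmented sentences:

```python
# Training options of Section 3.2 as Gensim >= 4 keyword arguments.
w2v_params = {
    "vector_size": 150,  # dimension of word vectors
    "window": 5,         # maximum context distance
    "min_count": 3,      # ignore rarer terms
    "sg": 0,             # 0 = CBOW, 1 = Skip-gram
    "hs": 1,             # use Hierarchical Softmax
    "negative": 0,       # Negative Sampling disabled
    "epochs": 10,        # training iterations
    "alpha": 0.025,      # initial learning rate
}
# from gensim.models import Word2Vec
# model = Word2Vec(sentences=segmented_corpus, **w2v_params)
```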

    3.3 Similarity Between Terms

    We use the cosine similarity measure for the standardization of TTBS:

    sim(t1, t2) = cos θ = v(t1)·v(t2) / (‖v(t1)‖ ‖v(t2)‖)  (1)

    where v(·) is the word vector of a term.
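A minimal pure-Python sketch of the cosine similarity in Eq.(1), assuming the term vectors are plain float lists (in practice they come from the trained Word2Vec model):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two term vectors, as in Eq.(1)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```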

    4 Experiments and Results

    We used iFLYTEK[22]to obtain the text from Chinese spoken language, and Jieba[23]for Chinese word segmentation and POS tagging.

    4.1 Experiment 1

The CRF model introduced in Section 2 for tagging TTBS was tested. In the first case, the test set contained 200 instructions that intentionally included TTBSs; in the second case, 100 instructions without TTBS. The results are shown in Table 4. The CRF model shows good performance in selecting TTBSs from input instructions. The faulty taggings are mainly due to (1) wrong text recognition of the spoken language, and (2) wrong word segmentation caused by phrases not covered in the training set. Tagging can be further improved by enlarging the positive training set.

    Table 4 Result of Experiment 1

    4.2 Experiment 2

The word embedding model introduced in Section 3 for the standardization of TTBS was evaluated, and the results were compared with the traditional ASM method.

The experiment was based on our prototype dialog system. Chinese spoken instructions were given as input, and the dialog system responded with generated natural language; the process in between includes the models introduced in this paper. The response of the dialog system was used as the criterion for judging whether standardization worked well. The natural language responses were generated through AIML (Artificial Intelligence Markup Language). For diversity of the natural language instructions, the test set was collected from 5 different students with 20 instructions each.

The responses were judged with the Matching Rate (MR) and the Intention Rate (IR). MR measures how many TTBS are matched to a TIV, but a match is not necessarily in accordance with the intention of the input instruction, which is measured by IR. For example, the navigation instruction “快點去岔道” (go to the crossroad quickly) might be standardized as “快點去樓道” (go to the corridor quickly). This is a match, but not the intention of the instruction at all; it may come from imperfect standardization that wrongly places “岔道” (crossroad) closest to “樓道” (corridor) when the closest term should be “路口” (intersection). IR is defined as:

    IR = (number of intention-correct responses / total number of instructions) × 100%  (2)
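Given per-instruction judgements, MR and IR reduce to simple ratios. A sketch, where the boolean labels "matched" and "intended" are assumed to come from the human judgement of each response:

```python
def matching_rate(results):
    """MR: percentage of TTBS matched to some TIV."""
    return 100.0 * sum(r["matched"] for r in results) / len(results)

def intention_rate(results):
    """IR (Eq.(2)): percentage of responses matching the instruction's intention."""
    return 100.0 * sum(r["intended"] for r in results) / len(results)

# e.g. 岔道 wrongly matched to 樓道 counts toward MR but not IR:
results = [
    {"matched": True,  "intended": True},
    {"matched": True,  "intended": False},  # match with wrong intention
    {"matched": False, "intended": False},
]
```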

Table 5 shows the experimental results: the two methods have close MR results, while the word embedding matching achieves a clearly higher IR. This is because the word embedding method reflects the semantic meanings of the words, while the ASM method is based only on the literal similarity of character strings, with no intrinsic semantics involved. Three examples of spoken instruction input are shown in Table 6 together with the responses of our prototype dialog system under the different standardization methods, which show how incorrectly the ASM method sometimes responds. In the first case, both methods match the intention correctly. In the second case, ASM wrongly matches corridor (樓道) with staircase (樓梯) because the two Chinese place names share the character “樓”, while the word embedding method correctly matches corridor (樓道) with passageway (走廊) although they share no character. In the third case, ASM finds no match for the place name W.C. (廁所), while the word embedding method correctly matches it with washroom (洗手間).

This experiment shows that the word embedding model trained in this paper serves well as a numerical representation of lexical terms for finding a meaningful substitution for a term in a spoken instruction recognized as TTBS. Furthermore, the accuracy of the word embedding model can be improved through practical use with an enlarged training set, while the ASM method depends on manually created rules that quickly become too complicated to improve.

    Table 5 Result comparison of two methods

    Table 6 Experimental instances of responding from the dialog system

    4.3 Experiment 3

The selection of the threshold value of similarity was tested in this experiment. Only when the largest similarity between the TTBS and a candidate TIV is greater than the threshold is the pair considered a match, so the threshold filters out irrelevant matches. The absolute value of the similarity is less important, although larger values generally indicate a better trained model; the relative value counts more.
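The thresholded matching rule described above can be sketched as follows (the vocabulary, its toy 2-d vectors, and the function names are illustrative; real TIV vectors are the 150-d trained embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    n = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / n if n else 0.0

def standardize(ttbs_vec, vocab, threshold=0.4):
    """Return the most similar TIV if its similarity clears the threshold,
    otherwise None (the TTBS is left unstandardized)."""
    best_term, best_sim = None, -1.0
    for term, vec in vocab.items():
        sim = cosine(ttbs_vec, vec)
        if sim > best_sim:
            best_term, best_sim = term, sim
    return best_term if best_sim > threshold else None

# Toy vectors standing in for trained embeddings:
vocab = {"路口": [1.0, 0.1], "走廊": [0.0, 1.0]}
print(standardize([0.9, 0.2], vocab))                  # "路口"
print(standardize([0.3, 0.3], vocab, threshold=0.99))  # None: filtered out
```

Raising the threshold trades match count for intention correctness, which is exactly the tension Table 7 reports.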

The experimental result is in Table 7, which shows that a threshold of 0.5 got less than half (26%) of the TTBS standardized with a TIV, but almost all of those (11 out of 13) were intentionally correct. The value of 0.4 in Table 7 seems better because we got more matches, but the IR value also went down. The procedure is shown in Fig.5.

    Table 7 Effect of threshold value

Choosing the threshold arbitrarily, as in Fig.5, is unlikely to yield the optimum; strictly speaking, the value should differ for every TIV. We developed a scheme that avoids this hard threshold selection by using Reinforcement Learning (RL) (as shown in Fig.6). The main idea is a Q-value matrix that is optimized with a continuously updated corpus collected from recordings of actual interactions with the dialog system. The details of the learning model are not presented here, but the experimental result in Fig.7 shows the growth of both MR and IR (dashed lines) as the number of interactive instructions increases when RL is used. The solid lines in Fig.7 show that without RL the MR and IR stay at roughly the same level.

    Fig.5 The threshold value of similarity

    Fig.6 The reinforcement learning added in the threshold value selection

Fig.7 Comparative results of MR and IR with and without RL added (threshold value is 0.4)

    5 Conclusions

Navigating a robot with natural spoken language has long been expected; only recent advances in artificial intelligence have made the application feasible. This paper started from the fact that, no matter how delicate a computational natural language understanding module is, misunderstanding will always be possible. Confirming instructions before they are accepted and carried out will always be necessary for the practical use of natural-language-controlled robots, especially for important and critical jobs such as rescue and assembly. Our prototype dialog system was structured for this reason, and the lexical term standardization proposed in this paper is part of that system. We focused on place names occurring in spoken natural language instructions for robot navigation in indoor environments, which are usually expressed with different lexical terms in everyday life. The aim was to substitute the place name to be understood with a known one, which we called standardization. The first step was to pick out the lexical term requiring standardization; we trained a CRF model that, as the experiments show, picks place names from the sequence of lexical terms of an instruction quite well. Then, for standardizing the picked place name, we expressed lexical terms as numerical vectors with a word embedding model, which in effect catalogs terms according to the training corpus, and the similarity value was used to standardize the picked place name with the most similar one in the vocabulary. To make the correctness of standardization insensitive to the similarity threshold, a reinforcement learning model was added. The experiments verified the proposals, and the responses of the dialog system to human instructions became more natural and meaningful. More work needs to be done on collecting indoor robot instructions and improving the word embedding model.
