
    Fast Chinese syntactic parsing method based on conditional random fields

    HAN Lei(韓磊), LUO Sen-lin(羅森林), CHEN Qian-rou(陳倩柔), PAN Li-min(潘麗敏)

    (School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China)

    A fast method for phrase structure grammar analysis is proposed based on conditional random fields (CRF). The method trains several CRF classifiers to recognize the phrase nodes at different levels, and connects the recognized phrase nodes in a bottom-up manner to construct the syntactic tree. On the basis of the Beijing forest studio Chinese tagged corpus, two experiments are designed to select the training parameters and verify the validity of the method. The results show that the method takes 78.98 ms to train on and 4.63 ms to test a Chinese sentence of 17.9 words on average. The method offers a new way to parse phrase structure grammar for Chinese, with good generalization ability and high speed.

    phrase structure grammar; syntactic tree; syntactic parsing; conditional random field

    Chinese syntactic parsing is one of the significant components in natural language processing. With the large-scale development of the Chinese Treebank[1], the corpus-based techniques that are successful for English have been applied extensively to Chinese. The two main approaches are traditional statistical parsing and deterministic parsing.

    Traditional statistical approaches build models to obtain the unique tree with the highest PCFG probability (the Viterbi tree)[2-9]. These algorithms require calculating the PCFG probability of every possible tree, so the Viterbi tree can be obtained only at the expense of testing time. To reduce the testing time, deterministic parsing has emerged as an attractive alternative to probabilistic parsing, offering accuracy just below the state of the art in syntactic analysis of English while running in linear time. Yamada and Matsumoto[10] proposed a method for analyzing word-word dependencies in a deterministic bottom-up manner using support vector machines (SVM), and achieved over 90% accuracy of word-word dependency. Sagae and Lavie[11] used a basic bottom-up shift-reduce algorithm, but employed a classifier instead of a grammar to determine parser actions, and achieved R/P of 87.54%/87.61%. Cheng et al.[12] presented a deterministic dependency structure analyzer for Chinese which implements two algorithms, the Yamada and Nivre algorithms, and two sorts of classifiers, SVM and maximum entropy models. Encouraging results have also been shown in applying deterministic models to Chinese parsing. Wang et al.[13] presented a classifier-based deterministic parser for Chinese constituency parsing. Their parser computes parse trees bottom-up in one pass, uses classifiers to make shift-reduce decisions, and achieves R/P of 88.3%/88.1%.

    In this paper, a faster deterministic parsing method based on conditional random fields (CRF) is proposed to reduce the runtime of parsing. The proposed method uses CRF++[14] as the classifier training and testing software to label the phrase nodes at each syntactic level for each segment. The labeled nodes are then combined to construct the phrase structure syntactic tree.

    1 Method

    Based on the part-of-speech (POS) tagging results, the method labels the phrase and sentence nodes (both are referred to as nodes below) of each segment to construct a syntactic tree. The principle of the method is shown in Fig. 1.

    Fig. 1 Principle of the method

    1.1 Data processing

    Data processing contains two parts, i.e. data training and data testing. The aim of data training is to convert the POS and syntax tagging results into the one-word-per-line format. Its inputs are the POS and syntax tagging results, and its outputs are nodes in the one-word-per-line format. Take the sentence “姚明說范甘迪是核心(Yao Ming said that Van Gundy is the core)。” as an example. The data training inputs are the POS and syntax tagging results, which are “0^姚明(Yao Ming)/nr 1^說(said)/v 2^范甘迪(Van Gundy)/nr 3^是(is)/v 4^核心(the core)/n 5^。/w” and “[dj [np 0^姚明/nr] [vp 1^說/v [dj [np 2^范甘迪/nr] [vp 3^是/v 4^核心/n]]] 5^。/w]”. The data training outputs are shown in Tab. 1, and the syntactic tree is shown in Fig. 2.

    Tab. 1 Format of one word per line for “姚明說范甘迪是核心”

    Fig. 2 Syntactic tree of “姚明說范甘迪是核心”

    To explain the process of data training, the concept of the syntactic level (hereinafter referred to as level) is introduced. The node closest to the segment is defined as being in the 1st level (1L) of the segment, and the next closest node is in the 2nd level (2L). The further a node is from the segment, the higher the level it is in. The root node of the syntactic tree is in the last level of the segment. It follows that different segments may have different numbers of levels. Take the segment “核心(the core)” in Fig. 2 as an example. The phrase node vp is in the 1st level, the dj node that is not the root is in the 2nd level, the vp node further from the segment is in the 3rd level (3L), and the dj root node is in the 4th level (4L). In addition, the maximum level number among the segments in a sentence equals the level number of the sentence.
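    The level assignment described above can be sketched as a short routine that reads the bracketed syntax tagging and records, for each segment, its enclosing node labels from the 1st level up to the root. This is a minimal sketch under our own tokenization assumptions, not the authors' code:

    ```python
    def token_levels(tree_str):
        """Return, for each segment of a bracketed tree such as
        "[dj [np 0^X/nr] ...]", the labels of its enclosing nodes
        ordered from the 1st (closest) level up to the root."""
        toks = tree_str.replace('[', ' [ ').replace(']', ' ] ').split()
        stack, levels = [], []   # stack: labels of currently open nodes
        expect_label = False     # next token names the node just opened
        for t in toks:
            if t == '[':
                expect_label = True
            elif t == ']':
                stack.pop()
            elif expect_label:
                stack.append(t)
                expect_label = False
            else:                # a leaf of the form "order^word/POS"
                levels.append(list(reversed(stack)))
        return levels
    ```

    Applied to the tree of the example sentence, the segment 核心 is assigned [vp, dj, vp, dj], matching the 1L-4L assignment described above.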

    In Tab. 1, the first column shows the word segmentation of the example sentence, the second column shows the POS tag of each segment, the third column contains the segment order (SO), i.e. the order of the segment in the sentence, and the remaining columns contain all the nodes in the syntactic tree, including phrase nodes and sentence nodes.

    Data testing changes the POS tagging results into the format required for the subsequent recognition. Its inputs are the POS tagging results, and its outputs are the POS tags in the same format as the first three columns of Tab. 1.

    1.2 Model training and testing

    Model training trains the recognition models for labeling each level. Its inputs are the tagged syntactic trees and the feature templates; its outputs are the recognition models for each level. The model for labeling the nodes in the 1st level is defined as the 1st model, the model for labeling the nodes in the 2nd level as the 2nd model, and so on. In the data set used for testing, the maximum level number of a sentence is 14, thus at least 14 models need to be trained. Take the sentence in Tab. 1 as an example; it needs 4 models to label its levels. If the 2nd model is to be trained, the inputs are the first 5 columns in Tab. 1 and the output is the model for labeling the nodes in the 2nd level. Each model has its own feature template and training parameters.

    Model testing labels the nodes of each segment from the 1st level to the highest level. Its inputs are the POS tagging results, and its outputs are the syntactic tree in the one-word-per-line format. Take the sentence in Tab. 1 as an example. The inputs are the first three columns and the 1st, 2nd, 3rd and 4th models. Firstly, the method uses the first three columns and the 1st model to label the nodes (np, vp, np, vp, vp, dj) in the 1st level. Secondly, it uses the first three columns, the labeled nodes in the 1st level and the 2nd model to label the nodes in the 2nd level. Thirdly, it uses the first three columns, the labeled nodes in the 1st and 2nd levels and the 3rd model to label the nodes in the 3rd level. Finally, based on all the labeled nodes and the 4th model, the output of model testing is obtained as shown in Tab. 1.
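    The level-by-level labeling loop can be sketched as follows. This is a hedged sketch: `models` stands in for the trained per-level CRF++ models, each mapping the current columns to one label per segment; the real system invokes CRF++ with the previous levels' labels appended as feature columns.

    ```python
    def cascade_label(columns, models):
        """Label syntactic levels in sequence: the k-th model sees the
        original columns plus the labels predicted for levels 1..k-1.
        `columns` holds one row per segment (word, POS, segment order);
        each element of `models` is a stand-in for a per-level CRF."""
        for model in models:                  # 1st model, 2nd model, ...
            labels = model(columns)           # one node label per segment
            columns = [row + [lab] for row, lab in zip(columns, labels)]
        return columns
    ```

    Because each model consumes the columns produced so far, the loop mirrors the firstly/secondly/thirdly steps above without any special-casing per level.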

    1.3 Constructing syntactic tree

    Constructing the syntactic tree is the inverse of data processing. Take the sentence in Tab. 1 as an example. Firstly, the nodes from each segment up to the node in the highest level are connected as a single chain. Secondly, the identical dj root nodes are combined. Thirdly, the identical child nodes of dj are combined. This combination of identical nodes continues level by level down to the POS tags. The process is shown in Fig. 3.
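    The merging steps can be sketched as a recursive routine over root-first label chains (one chain per segment, i.e. the per-level labels reversed); adjacent segments sharing the same label at a level are merged under one node. This is our own sketch of the described procedure, not the authors' code:

    ```python
    def build_tree(tokens, chains):
        """Reconstruct a bracketed tree from root-first label chains,
        merging adjacent segments that share a node label at each level."""
        def rec(items, depth):
            # items: list of (token, chain) pairs under the current node
            parts, i = [], 0
            while i < len(items):
                tok, chain = items[i]
                if len(chain) <= depth:      # chain exhausted: leaf here
                    parts.append(tok)
                    i += 1
                else:
                    lab = chain[depth]
                    j = i                    # extend over same-label run
                    while (j < len(items) and len(items[j][1]) > depth
                           and items[j][1][depth] == lab):
                        j += 1
                    parts.append('[{} {}]'.format(lab, rec(items[i:j], depth + 1)))
                    i = j
            return ' '.join(parts)
        return rec(list(zip(tokens, chains)), 0)
    ```

    On the example sentence this reproduces the original tagging string exactly. Note that two distinct sibling nodes which are adjacent and share a label at the same level would be merged into one, so the per-level classifiers must label them consistently for the reconstruction to be exact.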

    Fig. 3 Process of constructing syntactic tree

    2 Experiment

    Two experiments are designed to choose the CRF++ training parameters and to verify the efficiency, stability and precision of the method: parameter selection and method verification. To choose the training parameters of the 14 models, parameter selection applies the grid method to select the best c and f. The value of c ranges from 0.2 to 6 with a step of 0.2, and the value of f ranges from 1 to 5 with a step of 1. Method verification uses the parameters with the highest node-labeling precision at each level to train the models; the syntactic trees are then verified in both closed and open tests using 10-fold cross-validation. The training and testing times are also recorded.
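    The grid over c (the CRF++ regularization strength, crf_learn -c) and f (the feature frequency cut-off, crf_learn -f) can be sketched as below; `evaluate` is a hypothetical callback, assumed to train and test a model with the given settings and return the node-labeling precision:

    ```python
    import itertools

    def select_parameters(evaluate):
        """Grid search as described in the text: c from 0.2 to 6
        (step 0.2), f from 1 to 5 (step 1). `evaluate(c, f)` is
        assumed to return the labeling precision for those settings."""
        cs = [round(0.2 * i, 1) for i in range(1, 31)]   # 0.2, 0.4, ..., 6.0
        fs = [1, 2, 3, 4, 5]
        return max(itertools.product(cs, fs), key=lambda cf: evaluate(*cf))
    ```

    The search is exhaustive (30 × 5 = 150 settings per model), which is affordable here because each CRF labels only one level.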

    2.1 Data set and hardware

    The data set consists of 10 000 Chinese syntactically tagged sentences from the Beijing forest studio Chinese tagged corpus (BFS-CTC)[15-16]. The average number of words in a sentence is 17.9, the maximum is 53, the minimum is 3, and the maximum syntactic level number in the BFS-CTC is 14.

    The main hardware relevant to the computing speed is as follows.

    ① CPU: Intel® Xeon® X5650 @ 2.67 GHz (2 CPUs).

    ② Memory: 8 GB.

    ③ System type: Windows Server 2008 R2 Enterprise SP1, 64 bit.

    2.2 Evaluating

    2.3 Results

    2.3.1 Parameters selecting

    Take the 1st model training as an example to show the process of parameter selection. A total of 18 features are used for the 1st model training, including the POS, the POS of the former segment, etc., as shown in Tab. 2. Based on these features, partial results are shown in Fig. 4; the best result is 90.47% for node labeling in the 1st level when c equals 0.4 and f equals 1. The parameter selection process for the other 13 models is the same.

    2.3.2 Method verifying

    Method verification contains a closed test and an open test. In the closed test, the P, R and F values of the 10 experimental groups are shown in Fig. 5; the average P, R and F values are 76.32%, 81.25% and 0.787 1 respectively. In the open test, the results are shown in Fig. 6; the average P, R and F values are 73.27%, 78.25% and 0.756 8.
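    The reported F values are consistent with the balanced F-measure, F = 2PR/(P + R); a quick check against the averages above:

    ```python
    def f_measure(p, r):
        """Balanced F-measure combining precision p and recall r."""
        return 2 * p * r / (p + r)

    # Closed test: P = 76.32%, R = 81.25%  ->  F ≈ 0.787 1
    # Open test:   P = 73.27%, R = 78.25%  ->  F ≈ 0.756 8
    ```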

    Tab. 2 Features used in the model training of the 1st level

    Fig. 4 Partial results of parameter selection

    Fig. 5 P, R and F in closed test

    Fig. 6 P, R and F in open test

    Under the same conditions, the recent results of the PCFGs[9] are shown in Tab. 4. The time in the table covers rule collection and testing for one experimental group. In Tab. 4, P, R and F are calculated using PARSEVAL[17]. The time to test a sentence for PCFG, L-PCFG, S-PCFG and SL-PCFG is 1 847.1 ms, 1 425.9 ms, 205.8 ms and 82 ms on the closed set, and 1 850.2 ms, 1 412.2 ms, 178.3 ms and 68.1 ms on the open set. In comparison with previous work, the time of the proposed method is very competitive: it is 22 times faster than the PCFGs and gives almost equal results on all three measures. In particular, although the SL-PCFG is the fastest in Tab. 4 and its time almost equals the total time of the proposed method in the closed test, the proposed method gives significantly better results in the open test, as shown in Tab. 4.

    Tab. 3 Time of training and testing in closed and open sets (ms)

    Tab. 4 Results of PCFGs

    Some other related previous works are shown in Tab. 5. These models used the Penn Chinese Treebank. The average runtime of Levy & Manning's model, Bikel's model and the stacked classifier model to analyze a sentence in the Penn Chinese Treebank is 117.7 ms, 776.6 ms and 64.1 ms respectively. The runtime is calculated by dividing the time reported in Ref. [13] by the number of sentences used in the experiment. Because the data sets differ, these results cannot be compared directly with the proposed method, but they are still a useful reference. They show that previous works were not fast enough until machine learning algorithms were applied to the parser. The fastest result in Tab. 5 takes 64.1 ms to analyze a sentence, while the proposed method takes 4.63 ms, although the three measures of the proposed method are much lower.

    Tab. 5 Previous related works

    In a word, the proposed method is fast in testing a sentence and stable in both closed and open tests. In addition, whereas the PCFGs and the previous works in Tab. 5 may fail to analyze a syntactic structure, there is no failure condition in the proposed method. However, the P of the proposed method is not high, possibly because the nodes and the number of syntactic levels predicted by the classifiers cannot all be correct.

    3 Conclusion

    A fast method based on conditional random fields is proposed for syntactic parsing. To construct a syntactic tree, the method first recognizes the nodes bottom-up, from the node nearest each segment to the root node; it then combines the identical predicted nodes top-down, from the root node back to the segments. Based on the BFS-CTC, two experiments are designed to illustrate the process of choosing the training parameters and to verify the validity of the method. In addition, the results of the method are compared with the PCFGs. The results show that the method is faster in both training and testing. The method is a new way to perform syntactic analysis; owing to the generalization ability of machine learning, it can be used in applications that do not need high precision but require high speed.

    [1] Xue N, Xia F, Chiou F, et al. The Penn Chinese TreeBank: phrase structure annotation of a large corpus [J]. Natural Language Engineering, 2005, 11(2): 207-238.

    [2] Bikel D M, Chiang D. Two statistical parsing models applied to the Chinese Treebank[C]∥Proceedings of the second workshop on Chinese language processing: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics-Volume 12. Stroudsburg, PA, USA: Association for Computational Linguistics, 2000: 1-6.

    [3] Bikel D M. On the parameter space of generative lexicalized statistical parsing models [D]. Philadelphia: University of Pennsylvania, 2004.

    [4] Chiang D, Bikel D M. Recovering latent information in treebanks[C]∥Proceedings of the 19th International Conference on Computational Linguistics-Volume 1. Stroudsburg, PA, USA: Association for Computational Linguistics, 2002: 1-7.

    [5] Levy R, Manning C. Is it harder to parse Chinese, or the Chinese Treebank?[C]∥Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Stroudsburg, PA, USA: Association for Computational Linguistics, 2003: 439-446.

    [6] Xiong D, Li S, Liu Q, et al. Parsing the Penn Chinese Treebank with semantic knowledge[M]∥Natural Language Processing-IJCNLP 2005. Berlin, Heidelberg: Springer, 2005: 70-81.

    [7] Jiang Zhengping. Statistical Chinese parsing [D]. Singapore: National University of Singapore, 2004.

    [8] Mi H T, Xiong D Y, Liu Q. Research on strategies for integrating Chinese lexical analysis and parsing[J]. Journal of Chinese Information Processing, 2008, 22(2): 10-17. (in Chinese)

    [9] Chen Gong, Luo Senlin, Chen Kaijiang, et al. Method for layered Chinese parsing based on subsidiary context and lexical information [J]. Journal of Chinese Information Processing, 2012, 26(01): 9-15. (in Chinese)

    [10] Yamada H, Matsumoto Y. Statistical dependency analysis with support vector machines [C]∥Proceedings of the 8th International Workshop on Parsing Technologies (IWPT). Nancy, France, 2003: 195-206.

    [11] Sagae K, Lavie A. A classifier-based parser with linear run-time complexity [C]∥Parsing 05 Proceedings of the Ninth International Workshop on Parsing Technology. Stroudsburg, PA, USA: Association for Computational Linguistics, 2005: 125-132.

    [12] Cheng Y, Asahara M, Matsumoto Y. Machine learning-based dependency analyzer for Chinese [J]. Journal of Chinese Language and Computing, 2005, 15(1): 13-24.

    [13] Wang M, Sagae K, Mitamura T. A fast, accurate deterministic parser for Chinese [C]∥Proceeding ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2006: 425-432.

    [14] Lafferty J, McCallum A, Pereira F. Conditional random fields: probabilistic models for segmenting and labeling sequence data [C]∥Proceedings of the Eighteenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann, 2001: 282-289.

    [15] Luo Senlin, Liu Yingying, Feng Yang, et al. Method of building BFS-CTC: a Chinese tagged corpus of sentential semantic structure [J]. Transactions of Beijing Institute of Technology, 2012, 32(03): 311-315. (in Chinese)

    [16] Liu Yingying, Luo Senlin, Feng Yang, et al. BFS-CTC: a Chinese corpus of sentential semantic structure [J]. Journal of Chinese Information Processing, 2013, (27): 72-80. (in Chinese)

    [17] Charniak E. Statistical parsing with a context-free grammar and word statistics [C]∥the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, Providence, Rhode Island, 1997.

    (Edited by Cai Jianying)

    10.15918/j.jbit1004-0579.201524.0414

    TP 391.1 Document code: A Article ID: 1004-0579(2015)04-0519-07

    Received 2014-05-03

    Supported by the Science and Technology Innovation Plan of Beijing Institute of Technology (2013)

    E-mail: panlimin_bit@126.com
