
Improved image captioning with subword units training and transformer

High Technology Letters, 2020, Issue 2

Cai Qiang (蔡强), Li Jing, Li Haisheng, Zuo Min

(School of Computer and Information Engineering, Beijing Technology and Business University, Beijing 100048, P.R.China) (Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing 100048, P.R.China) (National Engineering Laboratory for Agri-Product Quality Traceability, Beijing 100048, P.R.China)

    Abstract

    Key words: image captioning, transformer, byte pair encoding (BPE), reinforcement learning

    0 Introduction

Problems combining image and language understanding, such as image captioning, continue to inspire considerable research at the boundary of computer vision and natural language processing. In these tasks it is often necessary to perform fine-grained visual processing, or even multiple steps of reasoning, to produce high-quality outputs. As a result, visual attention mechanisms have been widely adopted in image captioning[1-4]. These mechanisms improve captioning performance by extracting salient and useful image features.

However, the problem can also be addressed from a language perspective. Image captioning is more than an image processing problem, and a fine-grained method for generating high-quality captions is proposed in this paper. Image captioning has recently shown impressive results[2] by backing off words with a frequency below 5. The training vocabulary of neural models is usually limited to 10 000-30 000 words on the MSCOCO[5] image captioning training data, but caption generation is an open-vocabulary problem; especially for images containing many visual parts, captioning models require a mechanism that generates more detailed and informative words.

Previous word-level caption models cannot generate out-of-vocabulary words and tend to produce common words in fixed sentence forms. Such methods make assumptions that often do not hold in practical scenes. For instance, there is not always a one-to-one correspondence between a training image and its (up to 5) captions, since the captions do not cover all the descriptive information in the image. In addition, word-level models are unable to generate captions unseen during training.

In this work, image captioning models trained on the level of subword units[6] are investigated. The goal is to build a model that handles the open-vocabulary problem within the encoder-decoder network itself. The model makes caption generation more fine-grained and achieves better accuracy on rare words than back-off dictionaries. Experimental analysis shows that the neural networks are able to learn rare descriptive words from subword representations.

To simplify the image captioning process, the transformer[7] is used as the decoder instead of a recurrent neural network (RNN) or its variants. The transformer, as a backbone architecture, has been applied to a wide range of natural language processing tasks[8,9]. It is a neural network architecture based on a self-attention mechanism, proposed by Google, that has proved particularly well suited to generation tasks such as machine translation and text-to-speech, so it can also contribute to image captioning. The transformer outperforms both recurrent and convolutional models on academic English-to-German and English-to-French translation benchmarks. It also follows the sequence-to-sequence structure, consisting of an encoder and a decoder: the encoder is made up of multi-head attention layers and feed-forward layers for extracting features from the source, and the decoder consists of masked multi-head attention layers, multi-head attention layers and feed-forward layers. In this work the decoder part of the full transformer model is employed to decode visual information. In the transformer based image captioning (TIC) model, the bi-directional long short-term memory (LSTM) decoder is replaced by a transformer decoder for less training time and better caption generation.

    This paper has 2 main contributions:

(1) It is shown that open-vocabulary image captioning is feasible by encoding (rare) words as subword units. Moreover, byte pair encoding (BPE)[6] is utilized for fine-grained word segmentation and caption generation. BPE allows the representation of an open vocabulary, which makes it suitable for word segmentation in neural network architectures.

(2) A transformer based image captioning model is proposed, which applies a self-attention based neural network to the task of image captioning. Rather than taking the full transformer model, only the decoder part of the transformer is used for sentence generation, and the experimental results show that the proposed method outperforms the baseline model.

    1 Related work

Most modern approaches[1,2] encode an image using a convolutional neural network (CNN) and feed this as input to a recurrent neural network or one of its variants, typically with some form of gating or memory mechanism. The RNN can generate a word sequence of arbitrary length. Within this common framework, many works[10,11] have explored different encoder-decoder structures, including attention-based models. Various attention mechanisms are applied to the output of one or more layers of a CNN by predicting a weight distribution over the CNN output of the input image. However, choosing the optimal number of image regions invariably leads to an unwinnable trade-off between coarse and fine levels of detail. Moreover, the arbitrary positioning of the regions with respect to image content may make it more difficult to detect objects that are poorly aligned to regions, and to bind visual concepts associated with the same object.

Comparatively few previous works have considered addressing the caption generation problem from a language perspective. Sennrich et al.[6] proposed byte pair encoding to segment words, which enables the encoder-decoder machine translation model to generate open-vocabulary translations. Originally applied to neural machine translation (NMT), BPE is based on the intuition that various word classes, such as compounds and loanwords, are composed of units smaller than words. In addition to making the vocabulary smaller and the sentences shorter, the subword model can productively generate new words that are not seen at training time.

Neural networks, in particular recurrent neural networks, have been at the center of leading approaches to sequence modeling tasks such as image captioning, question answering and machine translation for years. However, an RNN model takes a long time to train because it can only process the input data step by step. The transformer proposed by Google has received much attention in the last two years. In contrast to RNN-based approaches, the transformer uses no recurrence; instead it processes all words or symbols in the sequence in parallel, while using a self-attention mechanism to incorporate context from words or features farther away. By processing all words in parallel and letting each word attend to the other words in the sentence over multiple processing steps, the transformer trains much faster than recurrent models. Remarkably, experiments on machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less training time. The transformer achieves state-of-the-art performance on machine translation. Besides, given large or limited training data, the transformer model generalizes well to other sequence modeling problems. However, on smaller and more structured language understanding tasks, or even simple algorithmic tasks such as copying a string (e.g. transforming the input ‘abc’ to ‘abcabc’), the transformer does not perform very well. In contrast, models that perform well on these tasks fail on large-scale language understanding tasks like translation and caption generation.

    2 Approach

Given an image I, the image captioning model takes as input a possibly variably-sized set of k image features, V_I = {v_1, …, v_k}, v_i ∈ R^D, such that each image crop feature encodes a semantic region of the image. The spatial image features V_I are defined as the output of a bottom-up attention model, which extracts multiple crop features under the Faster R-CNN[12] architecture. The same approach as in Ref.[1] is followed to implement the bottom-up attention model, and the details are described in Ref.[1]. In Section 2.1, the practical use of the BPE algorithm for caption segmentation is demonstrated. In Section 2.2, the architecture of the TIC model is outlined.

    2.1 Byte pair encoding

Byte pair encoding is a technique originally designed for simple data compression. BPE iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. This algorithm is adapted here for subword segmentation: instead of merging frequent pairs of bytes, it merges characters or character sequences. Following the work of Ref.[6], the BPE preprocessing consists of 2 stages: learning BPE and applying BPE.

First of all, in the learning BPE stage, the symbol vocabulary is initialized with the character vocabulary, and each word in the image caption sentences is represented as a sequence of characters plus a special end-of-word symbol ‘·’, which allows the original tokenization to be restored after caption generation. All symbol pairs are counted iteratively, and each occurrence of the most frequent pair (‘A’, ‘B’) is replaced with a new symbol ‘AB’. Each merge operation produces a new symbol which represents a character n-gram. Frequent character n-grams (or whole words) are eventually merged into a single symbol, so BPE requires no shortlist. The final symbol vocabulary size is equal to the size of the initial vocabulary plus the number of merge operations; the latter is the only hyperparameter of the algorithm. For efficiency, pairs that cross word boundaries are not considered. The algorithm can thus be run on the dictionary extracted from a text, with each word weighted by its frequency.
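The learning stage can be summarized in a few lines of Python. The following is a minimal sketch (function and variable names are illustrative, not the authors' implementation), using ‘·’ as the end-of-word symbol and returning the ordered list of merge operations:

import collections

def learn_bpe(captions, num_merges):
    """Learn BPE merge operations from a list of caption strings (sketch)."""
    # Word dictionary weighted by frequency; each word is a tuple of
    # characters plus the end-of-word symbol '·'.
    vocab = collections.Counter()
    for caption in captions:
        for word in caption.lower().split():
            vocab[tuple(word) + ('·',)] += 1

    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = collections.Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent pair ('A', 'B')
        merges.append(best)

        # Replace every occurrence of the best pair with the merged symbol 'AB'.
        new_vocab = collections.Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges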

After the learning BPE stage, a fixed dictionary is obtained. In the applying BPE stage, all words in all sentences from the training data are substituted with subword units according to the BPE dictionary. This dictionary is then used to represent each subword unit, and finally the one-hot vector x for each subword is acquired. The embedding model embeds the one-hot vector into a d_model-dimensional vector. All embedding vectors in one sentence are combined into an L × d_model matrix as the input to the transformer decoder, where L is the length of the sentence.
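A corresponding sketch of the applying stage, again with illustrative names: each word is segmented by replaying the learned merges, subwords are mapped to ids through the fixed dictionary, and an embedding layer turns the id sequence into the L × d_model matrix fed to the transformer decoder.

import torch
import torch.nn as nn

def apply_bpe(word, merges):
    """Segment one word into subword units by replaying the learned merges."""
    symbols = list(word) + ['·']                  # same end-of-word symbol as above
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

caption = 'a man riding a skateboard'
merges = learn_bpe([caption], num_merges=10)        # from the previous sketch
subwords = [s for w in caption.split() for s in apply_bpe(w, merges)]

# Fixed subword dictionary -> id sequence -> L x d_model decoder input.
vocab = {s: i for i, s in enumerate(sorted(set(subwords)))}
ids = torch.tensor([vocab[s] for s in subwords])     # length L
embedding = nn.Embedding(len(vocab), 512)            # d_model = 512
x = embedding(ids)                                   # shape (L, 512)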

Two methods of applying BPE are evaluated: learning the encodings only on the image captioning training dataset, or learning the encodings on the union of the MSCOCO2014 image captioning training dataset and the VQA v2.0 dataset[13] (called expand BPE). The former has the advantage of being more compact in terms of text and vocabulary size, whereas the latter leads to more accurate semantic units by taking the larger vocabulary into account.

    2.2 Transformer based image captioning

The transformer based image captioning model contains 2 parts, the encoder and the decoder, as shown in Fig.1.

    Fig.1 The framework of the proposed TIC model

Most image captioning models are built on the encoder-decoder structure. The encoder used in this work is the bottom-up attention model borrowed from Ref.[1]. The bottom-up attention model utilizes Faster R-CNN to map an image to a context feature V_I. This process is expressed as

V_I = FasterR-CNN(I)    (1)

where I is the input image and V_I = {v_1, …, v_k} are the image features extracted by the Faster R-CNN based bottom-up attention model.

Faster R-CNN is an object detection model designed to localize and recognize objects in a given image with bounding boxes. Objects are detected by Faster R-CNN in 2 stages. The first stage, a region proposal network (RPN), predicts object proposals. The top predicted box proposals are then fed to the second stage for label classification and class-specific bounding box refinement. In this work, a ResNet-101 CNN is used as the feature extractor in the Faster R-CNN model, and the final output of the model is taken as the input to the caption model. For each selected region i, v_i is defined as the mean-pooled convolutional feature from this region, so the dimension D of the image feature vectors is 2 048. In this fashion Faster R-CNN serves as a ‘hard’ attention mechanism, as only a relatively small number of image bounding box features are selected from a large number of possible configurations.
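As an illustration of how such region features could be obtained, the sketch below ROI-aligns each detected box on the backbone feature map and mean-pools it into a single 2 048-dimensional vector; the shapes and boxes are dummy values, and torchvision's roi_align stands in for the Faster R-CNN pooling used in Ref.[1].

import torch
from torchvision.ops import roi_align

# Dummy backbone output and detected boxes (illustrative values only).
feature_map = torch.randn(1, 2048, 32, 32)            # final ResNet-101 stage, C = 2048
boxes = [torch.tensor([[0.0, 0.0, 200.0, 150.0],       # k = 2 boxes in image coordinates
                       [80.0, 40.0, 300.0, 260.0]])]

# ROI-align each box to a small grid, then mean-pool spatially so that every
# region becomes one D = 2048 vector, giving V_I with shape (k, 2048).
pooled = roi_align(feature_map, boxes, output_size=(14, 14),
                   spatial_scale=32.0 / 512.0)          # feature-map stride of 16
V_I = pooled.mean(dim=(2, 3))                           # (k, 2048) region features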

The decoder part of the transformer, with stacked attention mechanisms, is used to decode the encoded image features into the sentence. The transformer decoder is composed of a stack of N identical layers and contains no RNN structure. Note that each layer has 3 sub-layers. The first sub-layer uses the multi-head self-attention mechanism. The multi-head attention is computed as

h_i = Attention(QW_i^Q, KW_i^K, VW_i^V) = softmax((QW_i^Q)(KW_i^K)^T / √d_k)(VW_i^V)    (2)

H = Concat(h_1, …, h_n)    (3)

O = HW_h    (4)

where Q, K and V are the query, key and value matrices, W_i^Q, W_i^K, W_i^V and W_h are learned projection matrices, and d_k is the dimension of each attention head.

Fig.1 shows that the inputs of this layer are the output embeddings plus positional embeddings. This sub-layer uses a masking mechanism to prevent the model from seeing future information, which ensures that the current word is generated using only the previously generated words. In contrast to the first sub-layer, the second sub-layer is a multi-head attention layer without the masking mechanism, since it takes all the image features into consideration at every time step. Multi-head attention is employed over the preprocessed image features and the output of the first sub-layer. This sub-layer is of vital importance for blending the text information with the image information through attention. The third sub-layer is a position-wise fully connected feed-forward network, which selects the most relevant information for generating the caption. In addition, a residual connection is applied around each of the 3 sub-layers in the transformer decoder, followed by layer normalization. Finally, a fully connected layer and a softmax layer project the output of the transformer decoder to a probability distribution over the vocabulary. Using the notation y_{1:T} to refer to a sequence of words (y_1, …, y_T), at each time step t the conditional distribution over possible output words is given by

p(y_t | y_{1:t-1}) = softmax(W_p h_t + b_p)    (5)

where W_p ∈ R^{|Σ|×M} and b_p ∈ R^{|Σ|} are learned weights and biases, and h_t ∈ R^M is the transformer decoder output at time step t.
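A compact PyTorch sketch of the decoder described above is given below; the class and argument names are illustrative and positional encodings are omitted for brevity, but the three sub-layers (masked self-attention, cross-attention over the image features, and the feed-forward network with residual connections and layer normalization) are exactly those provided by nn.TransformerDecoderLayer.

import torch
import torch.nn as nn

class TICDecoder(nn.Module):
    """Sketch of the transformer caption decoder (illustrative, not the authors' code)."""

    def __init__(self, vocab_size, d_model=512, n_heads=8, n_layers=6, d_img=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)     # subword embedding
        self.img_proj = nn.Linear(d_img, d_model)          # map 2048-d regions to d_model
        layer = nn.TransformerDecoderLayer(d_model, n_heads, dim_feedforward=2048)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, vocab_size)          # projection to vocabulary logits

    def forward(self, tokens, image_feats):
        # tokens: (T, B) subword ids; image_feats: (k, B, 2048) region features.
        tgt = self.embed(tokens)                            # (T, B, d_model); add positional enc. in practice
        memory = self.img_proj(image_feats)                 # (k, B, d_model) cross-attention memory
        T = tokens.size(0)
        mask = torch.triu(torch.full((T, T), float('-inf')), diagonal=1)  # causal mask
        h = self.decoder(tgt, memory, tgt_mask=mask)        # masked self-attn, cross-attn, FFN
        return self.out(h)                                  # logits; softmax gives Eq.(5)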

Given a ground-truth caption y*_{1:T}, the model is first trained by minimizing the standard cross-entropy (XE) loss

L_XE(θ) = -∑_{t=1}^{T} log p_θ(y*_t | y*_{1:t-1})    (6)

For fair comparison with recent work[14], results optimized for CIDEr are also reported. Initializing from the cross-entropy trained model, the training then seeks to minimize the negative expected score:

L_R(θ) = -E_{y_{1:T} ~ p_θ}[r(y_{1:T})]    (7)

where r is the score function (e.g., CIDEr). Following the approach described as self-critical sequence training (SCST), the gradient of this loss can be approximated as

∇_θ L_R(θ) ≈ -(r(y^s_{1:T}) - r(ŷ_{1:T})) ∇_θ log p_θ(y^s_{1:T})    (8)

where y^s_{1:T} is a caption sampled from the model and ŷ_{1:T} is the caption obtained by greedy decoding, whose score serves as the baseline.
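A minimal sketch of the SCST update in Eq.(8), assuming the sampled caption's per-word log-probabilities and the CIDEr rewards of the sampled and greedy captions are already computed (names are illustrative):

import torch

def scst_loss(log_probs_sample, reward_sample, reward_greedy):
    """Self-critical policy-gradient loss corresponding to Eq.(8) (sketch).

    log_probs_sample: (B, T) log p_theta of each word in the sampled caption y^s
    reward_sample:    (B,)   CIDEr score r(y^s) of the sampled caption
    reward_greedy:    (B,)   CIDEr score r(y_hat) of the greedy baseline caption
    """
    advantage = reward_sample - reward_greedy                # baseline-subtracted reward
    # Minimizing this loss follows the approximate gradient of Eq.(8).
    return -(advantage.unsqueeze(1) * log_probs_sample).sum(dim=1).mean()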

    3 Experiments and results

    3.1 Datasets

The MSCOCO2014 captions dataset[5] is employed to evaluate the proposed transformer based image captioning model. For validation of model hyperparameters and offline testing, this paper uses the ‘Karpathy’ splits[16], which have been used extensively for reporting results in prior work. This split contains 113 287 training images with 5 captions each, and 5 K images each for validation and testing. To explore the performance of BPE, all sentences are converted to lower case, tokenized on white space, and their words are substituted with subword units according to the BPE vocabulary. To evaluate caption quality, this work uses the standard automatic evaluation metrics, namely SPICE[17], CIDEr, METEOR, ROUGE-L[18] and BLEU[19].

To evaluate the proposed expand BPE model, the recently introduced VQA v2.0 dataset[13] is used. VQA v2.0 was proposed to minimize the effectiveness of learning dataset priors by balancing the answers to each question; in the experiments, however, the dataset is used only to expand the BPE corpus with its 1.1 M questions and 11.1 M answers relating to MSCOCO images.

    3.2 Experiment settings

For fair comparison with the bottom-up and top-down baseline model, the TIC model takes the same pretrained image features as the baseline as inputs. To pretrain the bottom-up attention model, Anderson et al.[1] initialized Faster R-CNN with ResNet-101 pretrained for classification on ImageNet, then trained it on Visual Genome[20] data. For the six-layer stacked transformer model, the model size d_model is set to 512 and the mini-batch size to 32. The Adam method is adopted to update the parameters of the transformer, with an initial learning rate of 4×10^-4; the momentum and weight-decay terms are set to 0.9 and 0.999 respectively. All neural networks are implemented with the PyTorch deep learning framework. In the evaluation stage, the beam search size is set to 5 for high-quality caption generation at the cost of decoding time.
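These settings translate into a training step roughly like the following sketch, where TICDecoder is the illustrative class from Section 2.2, the data tensors are dummies, and mapping the (0.9, 0.999) values onto Adam's beta coefficients is an assumption rather than the authors' stated code:

import torch
import torch.nn.functional as F

model = TICDecoder(vocab_size=10000)                                 # sketch from Section 2.2
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))

# One cross-entropy training step on dummy data (shapes are illustrative).
tokens = torch.randint(0, 10000, (20, 32))       # (T, B): subword ids, mini-batch of 32
img_feats = torch.randn(36, 32, 2048)            # (k, B, D): bottom-up region features
logits = model(tokens[:-1], img_feats)           # predict each next subword
loss = F.cross_entropy(logits.reshape(-1, 10000), tokens[1:].reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()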

    3.3 Image captioning results

Table 1 shows single-model image captioning performance on the MSCOCO Karpathy test split. TIC+BPE-10K-exp stands for expanded BPE trained on the MSCOCO2014 captions dataset and the VQA v2.0 dataset with a dictionary size of 10 000. The TIC model obtains results similar to the baseline, the existing state of the art on this test set. The TIC plus BPE model achieves significant (2%-7%) relative gains across all metrics regardless of whether cross-entropy loss or CIDEr optimization is used, which illustrates the contribution of the transformer and the BPE algorithm to the image captioning task.

In Table 1 the performance of the improved TIC model and the existing state-of-the-art bottom-up and top-down baseline is compared with the SCST approach on the test portion of the Karpathy splits. For fair comparison, results are reported for models trained with the standard cross-entropy loss and for models optimized for CIDEr score. Note that SCST[14] takes advantage of reinforcement learning to optimize evaluation metrics. It also uses a ResNet-101[21-23] encoding of full images, similar to the bottom-up and top-down baseline model and the TIC model. All results are reported for a single model with no fine-tuning of the input ResNet/Faster R-CNN model.

    Table 1 Performance of different models on MSCOCO2014

Compared to the bottom-up and top-down baseline model, the TIC model obtains slightly better performance under both cross-entropy loss and CIDEr optimization, which shows the feasibility of replacing the RNN with a transformer. Moreover, instead of using a word-level model with a back-off dictionary, the BPE subword model brings improvements in the generation of rare and unseen words and outperforms the bottom-up and top-down baseline by 0.1-1.2 BLEU-4 and 0.9-3.2 CIDEr under cross-entropy (XE) training. Regardless of whether cross-entropy loss or CIDEr optimization is used, Table 1 shows that the TIC models obtain improvements across all metrics using just a single transformer decoder and the BPE method. The TIC model achieves the best reported performance on the Karpathy test split, as illustrated in Table 1.

In addition, the effect of different BPE dictionary sizes is explored. Three different sizes are implemented to find appropriate settings. The TIC+BPE-10K model means that the BPE dictionary size is set to 10 000. From the scores in Table 1, it can be seen that every TIC with BPE model improves over the baseline, and when the vocabulary size is set to 10 000 and the encodings are learned on multiple datasets, the TIC+BPE-10K-exp model gives the best performance. According to these scores, a fixed dictionary size is sufficient for generating common descriptions, whereas a larger dictionary size is believed to be needed for a larger image captioning dataset.

    4 Conclusions

This work proposes a novel transformer image captioning model that is improved by training on subword units. It is shown that image captioning systems are capable of open-vocabulary generation by representing rare and unseen words as sequences of subword units. The transformer decoder with multi-head self-attention modules enables the caption model to capture dependencies between the visual and language context. With these innovations, performance gains over the baseline have been obtained with both BPE segmentation and the transformer decoder, and state-of-the-art performance is achieved on the test portion of the Karpathy MSCOCO2014 splits. In addition, the proposed models can be considered for other vision-to-language problems such as visual question answering, as well as for generation tasks such as text-to-speech.
