
    A Novel Scene Text Recognition Method Based on Deep Learning

2019-08-13 05:55:16 Maosen Wang, Shaozhang Niu and Zhenguang Gao
Computers, Materials & Continua, 2019, No. 8

Maosen Wang, Shaozhang Niu, and Zhenguang Gao

Abstract: Scene text recognition is one of the most important techniques in pattern recognition and machine intelligence due to its numerous practical applications. Scene text recognition is also a sequence modeling task. The recurrent neural network (RNN) is commonly regarded as the default starting point for sequence models, but because of its non-parallel prediction and the gradient vanishing problem, the performance of the RNN is difficult to improve substantially. In this paper, a new network architecture named TRDD, based on dilated convolutions and residual blocks, is proposed; it uses convolutional neural networks (CNN) instead of RNN to recognize sequence texts. Our model has three advantages over existing scene text recognition methods. First, the text recognition speed of the TRDD network is much faster than that of state-of-the-art scene text recognition networks based on recurrent neural networks (RNN). Second, TRDD is easier to train, avoiding the exploding and vanishing gradient problems that are major issues for RNN. Third, both using larger dilation factors and increasing the filter size are viable ways to change the receptive field size. We benchmark TRDD on four standard datasets: it achieves higher recognition accuracy and faster recognition speed with a smaller model, which makes it promising for real-time applications.

Keywords: Scene text recognition, dilated convolution, CTC, CNN, TCN.

    1 Introduction

With the popularization of smartphones and the tremendous demand for text recognition in Augmented Reality, scene text recognition has become an important part of scene understanding. However, scene text recognition is a much more challenging task because text in natural scene images varies vastly in layout and appearance: it is drawn from many fonts and styles, and suffers from occlusions, inconsistent lighting, noise, and arbitrary orientations.

Scene text recognition methods can be generally grouped into segmentation-based word recognition and holistic word recognition. There are many studies of segmentation-based methods [Yi, Huang, Hao et al. (2014); Jaderberg, Vedaldi and Zisserman (2014); Babenko and Belongie (2012)], and these methods are very effective for scanned documents. However, it is very difficult to split characters in complicated cases, especially for Asian scripts, which include many characters with left-right structures.

Errors in splitting or merging during word segmentation almost always affect the accuracy of recognition. In addition, these methods adopt isolated character classification, recognize each character separately, and discard the meaningful context information of the text, so their reliability and robustness in text recognition are reduced. To solve this problem, sequence text recognition [España-Boquera, Castro-Bleda, Gorbe-Moya et al. (2016); Bissacco, Cummins and Netzer (2013); Xiong, Wang, Zhu et al. (2018)] was proposed. For scene text, no segmentation is needed; the holistic text is directly recognized as a sequence. The strong sequence features extracted by a deep neural network (DNN) ensure robustness to various text distortions and messy backgrounds. Sequence text recognition has become the mainstream model of scene text recognition, with representatives such as CRNN [Shi, Bai and Yao (2015)], DTRN [He, Zhang, Ren et al. (2015)] and FAN [Cheng, Bai, Xu et al. (2017)], which generally use an RNN to learn contextual information of the text. RNN has long been almost the only choice for sequence models, but it has disadvantages such as the lack of parallelism, unstable gradients, and high memory requirements during training [Bai, Kolter and Koltun (2018)], so researchers have been looking for better models to replace it.

In recent research, the temporal convolutional network (TCN) has been applied across sequence tasks; its performance outperforms canonical recurrent architectures such as LSTM [Hochreiter and Schmidhuber (1997)], GRU [Jozefowicz, Zaremba and Sutskever (2015); Dey and Salem (2017)] and RNN on 11 sequence tasks [Bai, Kolter and Koltun (2018)]. The TCN is essentially a CNN that integrates dilated causal convolutions and residual blocks.

Motivated by the TCN design, this paper proposes a new network model, TRDD (text recognition based on dilation and residual blocks). The model uses two basic residual modules: one composed of dilated convolutions and the other composed of ordinary convolutions. The TRDD network has the following characteristics:

First, in both training and evaluation, a long input sequence can be processed as a whole in TRDD, instead of sequentially as in an RNN.

Second, the receptive field size of the sequence features can be increased by using larger dilation factors or by increasing the filter size.

Third, the residual block is used to speed up network training and enrich the semantic features of the text.

Fourth, the filters in TRDD are shared across a layer, and the backpropagation path depends only on the network depth; in practice, gated RNNs are likely to take up much more memory.

    2 Related work

Before defining the network architecture, we describe the nature of sequence modeling tasks. Suppose that we are given an input sequence $x_0, x_1, \ldots, x_T$ and wish to predict corresponding outputs $y_0, y_1, \ldots, y_T$ at each time step. A sequence modeling network is any function $f: \mathcal{X}^{T+1} \to \mathcal{Y}^{T+1}$ producing the mapping $\hat{y}_0, \ldots, \hat{y}_T = f(x_0, \ldots, x_T)$ such that $\hat{y}_t$ depends only on $x_0, \ldots, x_t$ and not on any later inputs $x_{t+1}, \ldots, x_T$.

The goal of supervised network training is to find a network $f$ that minimizes some expected loss between the predictions and the actual outputs:

$$f^{*} = \operatorname*{arg\,min}_{f}\; \mathbb{E}_{(x_{0:T},\, y_{0:T})}\, L\big(y_0, \ldots, y_T,\ f(x_0, \ldots, x_T)\big)$$

RNN was once considered the only option for processing sequence data. The RNN architecture is shown in Fig. 1: $x$ is the input, $h$ is the hidden layer unit, $o$ is the output, $L$ is the loss function, and $y$ is the label of the training set. $h_t$ represents the state at time $t$, which is determined not only by the input $x_t$ but also by $h_{t-1}$; $V$, $U$ and $W$ are weights, and connection weights of the same type are shared across time.

Figure 1: RNN architecture
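
To make the recurrence concrete, the following is a minimal NumPy sketch of the vanilla RNN described above (all variable names are illustrative, not the authors' code). Note that the loop is inherently sequential: each $h_t$ must wait for $h_{t-1}$, which is exactly the parallelism limitation discussed below.

```python
import numpy as np

def rnn_forward(xs, U, W, V, h0):
    """Unroll a vanilla RNN: h_t = tanh(U x_t + W h_{t-1}), o_t = V h_t.
    xs: (T, input_dim); h0: (hidden_dim,). U, W, V are shared across steps."""
    h, hs, outs = h0, [], []
    for x in xs:                      # sequential: step t depends on step t-1
        h = np.tanh(U @ x + W @ h)    # hidden state from current input and history
        hs.append(h)
        outs.append(V @ h)            # per-step output o_t
    return np.stack(hs), np.stack(outs)

# Toy usage: T=5 steps, 4-dim inputs, 8-dim hidden state, 3-dim outputs.
rng = np.random.default_rng(0)
U, W, V = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(3, 8))
hs, outs = rnn_forward(rng.normal(size=(5, 4)), U, W, V, np.zeros(8))
```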

The BPTT (back-propagation through time) algorithm is a commonly used method for training RNN. It is essentially the BP algorithm with back-propagation through time, continuously searching along the negative gradient direction for a better solution until the model converges. The partial derivatives of the loss $L_t$ at time $t$ with respect to $W$ and $U$ are as follows:

$$\frac{\partial L_t}{\partial W} = \sum_{k=0}^{t} \frac{\partial L_t}{\partial o_t}\,\frac{\partial o_t}{\partial h_t} \left( \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}} \right) \frac{\partial h_k}{\partial W} \qquad (9)$$

$$\frac{\partial L_t}{\partial U} = \sum_{k=0}^{t} \frac{\partial L_t}{\partial o_t}\,\frac{\partial o_t}{\partial h_t} \left( \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}} \right) \frac{\partial h_k}{\partial U} \qquad (10)$$

The middle part of the formulas, $\prod_{j=k+1}^{t} \partial h_j / \partial h_{j-1}$, carries the gradient backward through the time steps; with $h_j = f(U x_j + W h_{j-1})$, each factor contains the derivative $f'$ of the activation function.

A major issue with RNN is the gradient vanishing problem, which is caused by its architecture. The activation function of an RNN is generally the sigmoid function or the tanh function (see formulas (11, 12)); the function graphs are shown in Figs. 2(a) and 2(b). In the back-propagation gradient calculation, it can be seen from formulas (9, 10) that the middle term is the multiplication of the derivatives of sigmoid or tanh over the time series.

Figure 2: Activation functions and their derivatives

It can be seen from Figs. 2(c) and 2(d) that the derivative range of the sigmoid function is (0, 0.25) and the derivative range of the tanh function is (0, 1]. The product of many such derivatives becomes smaller and smaller until it approaches zero, which is the phenomenon of "gradient disappearance". The calculation is as follows:

Because $|\mathrm{sigmoid}'(x)| < 0.25$: $0 < \prod_{j=k+1}^{t} \mathrm{sigmoid}'(\cdot) < 0.25^{\,t-k} \to 0$ as $t-k$ grows, causing the gradient to vanish.

Because $|\tanh'(x)| < 1$: $0 < \prod_{j=k+1}^{t} \tanh'(\cdot) < 1$, and the product likewise decays toward zero over long time spans.
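
This shrinkage is easy to verify numerically; a small sketch (NumPy, with arbitrary illustrative activation inputs):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The BPTT gradient contains a product of activation derivatives over the
# time series, so it shrinks geometrically with the sequence length.
xs = np.linspace(-2.0, 2.0, 50)                        # 50 time steps
sig_prod = np.prod(sigmoid(xs) * (1.0 - sigmoid(xs)))  # each factor <= 0.25
tanh_prod = np.prod(1.0 - np.tanh(xs) ** 2)            # each factor <= 1
print(f"product of 50 sigmoid derivatives: {sig_prod:.3e}")   # vanishes to ~0
print(f"product of 50 tanh derivatives:    {tanh_prod:.3e}")  # larger, still tiny
```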

The second problem of RNN is that predictions for later time steps must be performed sequentially: $\hat{y}_t$ depends only on $x_0, x_1, \ldots, x_t$ and not on any "future" inputs $x_{t+1}, \ldots, x_T$, so the prediction for each time step must wait for its predecessors to complete and cannot be parallelized like CNN predictions.

Finally, the RNN takes up too much memory during training. For a long input sequence, an RNN can easily consume a large amount of memory to store temporary and partial results. Because the backpropagation path depends not only on the network depth but also on the length of the sequence, an RNN requires more memory than a CNN.

Jozefowicz et al. [Jozefowicz, Zaremba and Sutskever (2015)] searched through more than ten thousand different RNN architectures and evaluated their performance on various sequence modeling tasks. They concluded that if there were "architectures much better than the LSTM", then they were "not trivial to find".

Yet recent results indicate that the temporal convolutional network (TCN) can outperform recurrent networks on sequence modeling tasks. The distinguishing characteristics of TCN are that the convolutions in the architecture are causal and that the network maps a sequence of any length to an output sequence of the same length, just as with an RNN. The TCN architecture is shown in Fig. 3(a): a dilated causal convolution with dilation factors d = 1, 2, 4 and filter size k = 3, whose receptive field is able to cover all values from the input sequence. Fig. 3(b) is an example of a residual connection [He, Zhang, Ren et al. (2015)] in a TCN.

Figure 3: Schematic diagram of the TCN network architecture
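
A minimal PyTorch sketch of one such dilated causal residual block follows (layer sizes and names are our own illustration, not the TCN authors' code); stacking blocks with d = 1, 2, 4 and k = 3 reproduces the receptive field of Fig. 3(a).

```python
import torch
import torch.nn as nn

class CausalDilatedBlock(nn.Module):
    """One TCN-style residual block: a dilated convolution whose left-only
    padding keeps output t independent of inputs after t (causality)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # pad on the left only
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                                # x: (N, C, T)
        y = nn.functional.pad(x, (self.pad, 0))          # causal left padding
        y = self.relu(self.conv(y))
        return self.relu(y + x)                          # residual connection

# Stacking d = 1, 2, 4 with k = 3 gives a receptive field of 15 input steps.
net = nn.Sequential(*[CausalDilatedBlock(16, 3, d) for d in (1, 2, 4)])
out = net(torch.randn(2, 16, 100))    # output keeps the length: (2, 16, 100)
```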

Inspired by the TCN, the TRDD network proposed in this paper makes full use of dilated convolutions and residual modules in its architecture: the dilated convolution expands the receptive field, and the residual connections enhance the semantic information of the sequence features.

3 TRDD network for scene text recognition

The network architecture of TRDD, shown in Fig. 4, consists mainly of two parts: the feature extraction layers and the transform layer. The feature extraction layers use dilated convolutions and a residual network to extract robust sequence features that are consistent with the order of the text in the image. The transform layer translates the per-frame predictions of the feature extraction layers into a label sequence. TRDD absorbs the design ideas of TCN for sequence modeling tasks, abandons the RNN, fuses dilated convolutions and residual modules in the network, and achieves large improvements.

Figure 4: TRDD model pipeline

3.1 Feature extraction layers

Traditional text recognition takes a cropped image of a single word and recognizes the word it depicts, but such methods cannot be applied directly to scene text recognition because of the variable foreground and background textures. Scene text is no longer segmented into single characters; instead, features are extracted directly from the text image to form sequence features. Assume that $(x_0, \ldots, x_t, \ldots, x_T)$ are the feature vectors extracted from the text image by the CNN. From the analysis of CNN receptive fields, the receptive field of each sequence feature corresponds to a range of the input text image, as shown in Fig. 5.

Figure 5: Receptive field of the sequence feature

RNN is one approach to increasing the receptive field size of sequence features. In this paper, we present a new module that uses dilated convolutions to extract sequence features from the input text image, based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. Let $F_0, F_1, \ldots, F_{n-1}$ be discrete functions and let $k_0, k_1, \ldots, k_{n-2}$ be discrete $3 \times 3$ filters, applied with exponentially increasing dilation: $F_{i+1} = F_i *_{2^i} k_i$ for $i = 0, 1, \ldots, n-2$.

Define the receptive field of an element $q$ in $F_{i+1}$ as the set of elements in $F_0$ that can modify the value of $F_{i+1}(q)$, and let the size of the receptive field of $q$ be the number of these elements. It is easy to see that the size of the receptive field of each element in $F_{i+1}$ is $(2^{i+2}-1) \times (2^{i+2}-1)$: the receptive field is a square of exponentially increasing size, as shown in Fig. 6:

$F_1$ is produced from $F_0$ by a 1-dilated convolution; each element in $F_1$ has a receptive field of $3 \times 3$;

$F_2$ is produced from $F_1$ by a 2-dilated convolution; each element in $F_2$ has a receptive field of $7 \times 7$;

$F_3$ is produced from $F_2$ by a 4-dilated convolution; each element in $F_3$ has a receptive field of $15 \times 15$.

Figure 6: Dilation supports exponential expansion of the receptive field
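
The exponential growth can be checked with a few lines of arithmetic (a sketch: each $3 \times 3$ layer with dilation $2^i$ adds $2 \cdot 2^i$ to the side of the receptive field):

```python
# Receptive field after stacking 3x3 convolutions with dilations 1, 2, 4, ...
def receptive_field(num_layers, kernel_size=3):
    r = 1
    for i in range(num_layers):
        dilation = 2 ** i
        r += (kernel_size - 1) * dilation   # each layer widens by 2 * dilation
    return r

for n in range(1, 5):
    print(f"F_{n}: {receptive_field(n)} x {receptive_field(n)}")
# F_1: 3 x 3, F_2: 7 x 7, F_3: 15 x 15, F_4: 31 x 31, i.e. (2^(i+2) - 1)
```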

In the TRDD model, we use two basic unit modules to extract sequence features: (1) a residual module consisting of three dilated convolutions with dilation factors $d = 1, 2, 4$, which we call the "A-Module", as shown in Fig. 7(a); (2) a residual module consisting of three convolutions with filter sizes $k = 1 \times 1, 3 \times 3, 1 \times 1$, which we call the "B-Module", as shown in Fig. 7(b).

Figure 7: Two basic modules of TRDD
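
A possible PyTorch sketch of the two modules is shown below. The paper specifies only the kernel sizes and dilation factors; the channel count, batch normalization and ReLU placement here are our assumptions for illustration.

```python
import torch
import torch.nn as nn

def conv_bn_relu(cin, cout, k, dilation=1):
    pad = dilation * (k - 1) // 2        # padding chosen to preserve H and W
    return nn.Sequential(nn.Conv2d(cin, cout, k, padding=pad, dilation=dilation),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class AModule(nn.Module):
    """Residual block of three 3x3 dilated convolutions with d = 1, 2, 4."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(*[conv_bn_relu(c, c, 3, d) for d in (1, 2, 4)])
    def forward(self, x):
        return x + self.body(x)          # identity shortcut (residual)

class BModule(nn.Module):
    """Residual block of 1x1, 3x3, 1x1 ordinary convolutions."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(conv_bn_relu(c, c, 1),
                                  conv_bn_relu(c, c, 3),
                                  conv_bn_relu(c, c, 1))
    def forward(self, x):
        return x + self.body(x)

x = torch.randn(1, 64, 32, 280)          # N x C x H x W text-image features
print(AModule(64)(x).shape, BModule(64)(x).shape)   # spatial size preserved
```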

The network architecture is shown in Fig. 8. Before text images are fed into the network, they are scaled to the same height; the text image is a color image with a height of 32. The image features are extracted through two paths, one based on the "A-Module" and the other on the "B-Module". The features are represented as $C \times W \times H$ ($W > H$), where $C$ is the number of channels of the feature map, $W$ is its width and $H$ is its height. After several pooling operations, the height of the feature map is reduced to 1 ($H = 1$), and the three-dimensional tensor $C \times W \times H$ becomes the two-dimensional matrix $C \times W$, which is the sequence of features of the text image. For example, for a color text image with a height of 32 and a width of 280, the sequence-feature matrix extracted by the feature extraction layers is $36 \times 512$. Note that each feature vector of the sequence is produced from left to right on the feature maps by column; each column of the feature maps corresponds to a range of the input image, termed its receptive field. Experiments show that the receptive fields of the sequence features extracted by this method are large, generally wider than half of the image width.

Figure 8: Two-branch feature extraction
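
The map-to-sequence step at the end of the two branches amounts to a reshape once the height has been pooled to 1; a short sketch with the 32x280 example above:

```python
import torch

# After pooling, H = 1; dropping that axis and treating the width axis as
# the time axis turns the feature map into the sequence of feature vectors.
feat = torch.randn(1, 512, 1, 36)          # N x C x H x W, H already pooled to 1
seq = feat.squeeze(2).permute(0, 2, 1)     # -> N x W x C = 1 x 36 x 512
# seq[0, t] is x_t, the 512-dim feature vector of the t-th column (left to right)
```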

    3.2 Transform layer

The transform layer converts the sequence features $X = (x_0, \ldots, x_T)$ extracted from the text image into a sequence over the label set, $Z = (z_0, \ldots, z_T)$, which includes Chinese characters, punctuation, English characters, numbers, spaces and all other characters. This conversion is shown in Fig. 9; predictions are made by selecting the label sequence with the highest probability.

Figure 9: Conditional probability of sequence features
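
One common way to realize this selection is greedy best-path decoding: take the most probable label at every frame and collapse the result. A sketch (our illustration; the collapse rule is the operator $F$ described in Section 3.3):

```python
import torch

def best_path_decode(frame_logits, blank=0):
    """Greedy CTC decoding: pick the arg-max label per frame, then merge
    consecutive repeats and drop blanks."""
    path = frame_logits.argmax(dim=-1).tolist()   # most likely label per frame
    decoded, prev = [], None
    for p in path:
        if p != prev and p != blank:              # merge repeats, skip blanks
            decoded.append(p)
        prev = p
    return decoded

frame_logits = torch.randn(36, 100)   # T = 36 frames over a 100-label set (toy)
print(best_path_decode(frame_logits))
```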

    3.3 Calculation of the loss function

We use the conditional probability defined in connectionist temporal classification (CTC) [Graves, Santiago and Gomez (2006); Graves (2008)] to calculate the loss in the training phase. TRDD can be trained with the maximum likelihood of this probability as the objective function.

The input sequence is $x = (x_1, \ldots, x_T)$, where $T$ is the sequence length. The network output $y = (y^1, \ldots, y^T)$ at each time step is a probability distribution over the set $S' = S \cup \{\text{blank}\}$, where $S$ contains all labels in the recognition task and 'blank' means no label. Since the probabilities of the labels at each time step are conditionally independent given $x_t$, the conditional probability of a path $\pi \in S'^{T}$ is given by:

$$p(\pi \mid y) = \prod_{t=1}^{T} y_{\pi_t}^{t} \qquad (15)$$

where $y_{\pi_t}^{t}$ is the probability of label $\pi_t$ at time step $t$.

Paths of the model output are mapped onto a labelling $l \in S^{\le T}$ by an operator $F$ that first removes the repeated labels and then removes the blanks. Assume the paths $\pi_1, \pi_2, \pi_3, \pi_4$ take the following values ('-' denotes blank):

$\pi_1$ = -,n,n,i,h,h,a,a,-,o

$\pi_2$ = -,n,i,i,-,h,a,-,-,o

$\pi_3$ = n,n,i,i,h,h,a,a,o,o

$\pi_4$ = n,n,i,i,-,h,a,-,-,o

Then $F(\pi_1)$, $F(\pi_2)$, $F(\pi_3)$, $F(\pi_4)$ all yield the labelling $l = (n,i,h,a,o)$. Since the paths are mutually exclusive, the conditional probability of a labelling $l \in S^{\le T}$ is the sum of the probabilities of all paths corresponding to it:

$$p(l \mid y) = \sum_{\pi \in F^{-1}(l)} p(\pi \mid y)$$
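
The operator $F$ is a few lines of code; the sketch below (blank written as '-') reproduces the four paths above and confirms they all collapse to the same labelling:

```python
def F(path, blank='-'):
    """CTC collapse: first remove consecutive repeated labels, then blanks."""
    collapsed = [p for i, p in enumerate(path) if i == 0 or p != path[i - 1]]
    return [p for p in collapsed if p != blank]

paths = ["-nnihhaa-o", "-nii-ha--o", "nniihhaaoo", "nnii-ha--o"]
for pi in paths:
    print(F(list(pi)))   # each prints ['n', 'i', 'h', 'a', 'o']
```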

where the probability of $\pi$ is defined in formula (15). The fact that different paths can map onto the same labelling is what allows CTC to use unsegmented data: the model only needs to learn the order of the labels, not to align them with the input sequence one by one. A naive calculation of $p(l \mid y)$ is infeasible, since very many paths map onto each labelling $l$; for example, with 30 output frames and a labelling $l$ of length 5, on the order of 120,000 paths map onto it. However, $p(l \mid y)$ can be efficiently computed using the forward-backward algorithm described in Graves et al. [Graves, Santiago and Gomez (2006)].

    3.4 Network training

Denote the training dataset by $D = \{(I_i, l_i)\}$, where $I_i$ is a training text image and $l_i$ is its ground-truth label sequence. The CTC objective function $O$ minimizes the negative log-likelihood of the conditional probability of the ground truth:

$$O = - \sum_{(I_i, l_i) \in D} \log p(l_i \mid y_i)$$

where $y_i$ is the sequence of vectors produced by the feature extraction layers from $I_i$. The objective is calculated directly from the input image and its ground-truth label sequence, so the model can be trained end-to-end on pairs of images and sequences.
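
In a modern framework this objective needs no hand-written forward-backward code; for instance, PyTorch's nn.CTCLoss computes the same negative log-likelihood. A minimal sketch with toy shapes (not the authors' training code):

```python
import torch
import torch.nn as nn

# Negative log-likelihood of the ground-truth labelling under CTC;
# nn.CTCLoss implements the forward-backward summation over paths.
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

T, N, C = 36, 4, 100                     # frames, batch size, label-set size
log_probs = torch.randn(T, N, C).log_softmax(2).requires_grad_()
targets = torch.randint(1, C, (N, 10))   # ground-truth label sequences l_i
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                          # gradients flow end-to-end to the CNN
```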

    4 Experiments

In this section, we perform extensive experiments to verify the effectiveness of TRDD from three aspects: the receptive field of the sequence features, the convergence speed and prediction accuracy of the network, and the speed and accuracy of network recognition.

    4.1 Receptive field analysis

The concept of the receptive field is crucial for understanding and analyzing how deep networks work. Since any part of the input text image outside the receptive field of a unit does not affect that unit's value, it is necessary to control the size of the receptive field to ensure that it covers all the relevant image region.

State-of-the-art scene text models are basically based on CNN and RNN modules, such as CRNN and DTRN: the CNN extracts the sequence features from the text image and the RNN learns contextual information. From the viewpoint of receptive fields, the features extracted by the CNN already cover a large receptive field; if this receptive field meets the needs of text recognition, the LSTM module plays little role and can be removed.

Feature vectors $C_1, \ldots, C_t, \ldots, C_T$ are extracted from the input image by the network. The receptive field of $C_t$ in the input image is measured as follows: from left to right, the pixel values of each column of the text image are set to zero in turn, and the resulting change in the feature vector $C_t$ is calculated. The magnitude of these changes reflects how strongly each column affects $C_t$, from which the size of the receptive field of $C_t$ in the input image is derived.
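
This probing procedure is straightforward to implement; the sketch below assumes a feature extractor `model` mapping an N x 3 x 32 x 280 image to N x C x T sequence features (the dummy model is purely illustrative):

```python
import torch

@torch.no_grad()
def column_influence(model, image, t):
    """Zero out one image column at a time and record how much the feature
    vector C_t changes; large changes mark columns inside its receptive field."""
    base = model(image)[0, :, t]                # reference feature vector C_t
    scores = []
    for col in range(image.shape[-1]):          # sweep columns left to right
        probe = image.clone()
        probe[..., col] = 0.0                   # erase a single pixel column
        scores.append((model(probe)[0, :, t] - base).norm().item())
    return scores

# Illustrative usage with a dummy extractor standing in for TRDD or CRNN.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                            torch.nn.AdaptiveAvgPool2d((1, 36)),
                            torch.nn.Flatten(2))          # -> (N, 8, 36)
scores = column_influence(model, torch.randn(1, 3, 32, 280), t=10)
```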

The input image resolution is $3 \times 280 \times 32$ and the extracted feature vectors are $(C_1, \ldots, C_t, \ldots, C_{36})$. We calculate the receptive field size of $C_{10}$ as extracted by CRNN and by TRDD; the X-axis represents the width of the text image and the Y-axis represents the average response intensity, as shown in Fig. 11:

(1) The receptive field sizes of the sequence features extracted by TRDD and CRNN from the input image are very close along the X-axis.

(2) The receptive field sensitivity of TRDD is larger than that of CRNN along the Y-axis. For Asian scripts such as Chinese, Japanese or Korean, local information is more important for recognition, so the representation of the sequence feature extracted by TRDD is more effective than that of CRNN.

    4.2 Network convergence and accuracy

Datasets: Syn90K [Jaderberg, Simonyan, Vedaldi et al. (2016)]

Comparative model: DTRN [He, Huang, Qiao et al. (2015)]

Computer configuration: CPU: Xeon E3 1230, memory: 16 GB, GPU: an Nvidia 1080 Ti.

The experimental results are shown in Fig. 12. As can be seen from Fig. 12(a), the TRDD network converges faster than DTRN, and the network training error is reduced by 2%. As can be seen from Fig. 12(b), the TRDD network is 3% more accurate than DTRN.

Figure 12: Loss curve and prediction curve during training

4.3 Network prediction speed and recognition accuracy

4.3.1 Network prediction speed test

60,000 text images from Syn90K are used to evaluate the prediction speed, model size and prediction accuracy of the two models. The comparison results are shown in Tab. 1.

Table 1: Evaluation of TRDD and DTRN on Synth90K

In the table, "Pre. Time" is the average prediction time over 6,000 images, "Mod. Size" is the size of the model file, and "Accuracy" is the average test accuracy. As can be seen from Tab. 1, compared with the DTRN network, the prediction time of the TRDD network is greatly improved: TRDD predicts 2.5 times faster, its model size is reduced by 27%, and its average prediction accuracy is higher than that of the DTRN model.

4.3.2 Network recognition accuracy experiment

Datasets

    The following datasets are used in our experiments.

IIIT 5K-Words (IIIT5K) [Mishra, Alahari and Jawahar (2012)] contains 5,000 word patches cropped from natural scene images found via Google image search, 2,000 for training and 3,000 for testing. We select 1,000 images from the test split as test data.

Street View Text (SVT) [Babenko and Belongie (2012)] is built from the Google Street View dataset. We select 600 text images from it.

ICDAR 2013 (IC13) [Karatzas, Shafait, Uchida et al. (2013)] has 848 cropped word patches for training and 1,095 for testing. We select 1,000 text images from it, mostly horizontal text images.

ICDAR 2015 (IC15) [Karatzas, Lu, Shafait et al. (2015)] contains 4,468 patches for training and 2,077 for testing. We select 1,800 text images as test data.

The test images were selected from the datasets IIIT5K, SVT, IC13 and IC15, and the TRDD model trained on the Synth90k dataset was compared with the algorithms of Mishra et al. [Mishra, Alahari and Jawahar (2012)], Jaderberg et al. [Jaderberg, Simonyan, Vedaldi et al. (2016)], PhotoOCR [Bissacco, Cummins and Netzer (2013)] and CRNN [Shi, Bai and Yao (2015)]. The results are shown in Tab. 2.

Table 2: Recognition accuracies (%) on four datasets

From the experimental data on the four datasets in Tab. 2, it can be seen that the recognition accuracy of TRDD improves by 1%-2% compared with the other algorithms.

    5 Conclusion

In this work we present a new network model, TRDD, based on TCN. Compared with traditional sequence text recognition models, the problems of gradient vanishing and gradient exploding in the training phase are avoided because the RNN module is removed. Moreover, the prediction speed is fundamentally improved compared with other networks, since prediction can be performed in parallel. The dilated convolution increases the receptive field of the sequence features, and the residual network enriches their semantic expression. Experiments show that the convergence speed, prediction speed and model size of TRDD are better than those of other networks; in particular in prediction speed, TRDD outperforms previous state-of-the-art results in scene text recognition.

Acknowledgement: This work is supported by the National Natural Science Foundation of China (U1536121, 61370195).
