• <tr id="yyy80"></tr>
  • <sup id="yyy80"></sup>
  • <tfoot id="yyy80"><noscript id="yyy80"></noscript></tfoot>
  • 99热精品在线国产_美女午夜性视频免费_国产精品国产高清国产av_av欧美777_自拍偷自拍亚洲精品老妇_亚洲熟女精品中文字幕_www日本黄色视频网_国产精品野战在线观看 ?

    Recurrent Convolutional Neural Network MSER-Based Approach for Payable Document Processing

    Computers, Materials & Continua, December 2021

    Suliman Aladhadh, Hidayat Ur Rehman, Ali Mustafa Qamar and Rehan Ullah Khan

    1Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia

    2Ainfinity Algorythma, Abu Dhabi, United Arab Emirates

    3Department of Computer Science, College of Computer, Qassim University, Buraydah, Saudi Arabia

    4Department of Computing, School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan

    Abstract: A tremendous number of vendor invoices are generated in the corporate sector. To automate the manual data entry in payable documents, highly accurate Optical Character Recognition (OCR) is required. This paper proposes an end-to-end OCR system that performs both localization and recognition and serves as a single unit to automate payable document processing such as cheques and cash disbursement. For text localization, the maximally stable extremal region is used, which extracts a word or digit chunk from an invoice. This chunk is later passed to the deep learning model, which performs text recognition. The deep learning model utilizes both convolutional neural networks and long short-term memory (LSTM). The convolutional layers extract features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. It handles sequences of unconstrained length, independent of character segmentation or horizontal scale normalization. Furthermore, it applies to both lexicon-free and lexicon-based text recognition and, finally, produces a comparatively small model that can be implemented in practical applications. The overall superior performance in the experimental evaluation demonstrates the usefulness of the proposed model. The model is thus generic and can be used for other similar recognition scenarios.

    Keywords: Character recognition; text spotting; long short-term memory; recurrent convolutional neural networks

    1 Introduction

    Deep Learning (DL) relies on the powerful function approximation and representation attributes of deep neural networks [1]. DL's innovation and realization have revolutionized many areas, including computer vision, speech recognition, pattern recognition, and natural language processing. DL has enabled computational mathematical models and frameworks, comprising multiple interlinked intermediate processing layers, to learn the inherent representations of data. This learning is achieved with multiple levels of abstraction by introducing multiple layers [2]. Recognition of sequence objects, such as handwritten text, scene text, and musical scores, is challenging compared to other similar problems. The challenge comes from predicting a series of object labels rather than a single label. A second challenge of sequence-based objects is their arbitrary length. The lengths of sequence objects may vary from case to case, and no restrictions can be imposed, as these occur in natural problems and represent natural circumstances. An example of such sequence objects in a scene is the word “yes” with only three characters versus a word like “investments” with eleven characters. This poses a challenge to detection and recognition algorithms. The state of the art contains efforts to address this problem using conventional and non-conventional Machine Learning (ML) approaches. DL is used in object detection, image understanding, document analysis, and text recognition.

    In this paper, we propose an end-to-end Optical Character Recognition (OCR) system that performs both localization and recognition and serves as a single unit to automate payable document processing such as cheques, vendor invoices, and cash disbursement. For text localization, the maximally stable extremal region is used, which extracts a word or digit chunk from an invoice. This chunk is later passed to the deep learning model, which performs text recognition. The deep learning model utilizes both a convolutional neural network and long short-term memory (LSTM). The convolutional layers extract features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. The proposed architecture, being an end-to-end trainable network, handles sequences of unconstrained length, independent of character segmentation or horizontal scale normalization.

    Furthermore, it applies to both lexicon-free and lexicon-based text recognition and, finally, produces a comparatively small model that can be implemented in practical applications. The overall superior performance in the experimental evaluation demonstrates the usefulness of the proposed model. The model is thus generic and can be used for other similar recognition scenarios.

    The rest of the paper is organized as follows. Section 2 presents the related work, whereas the proposed architecture is given in Section 3. Similarly, Section 4 discusses the experimental analysis and evaluation, whereas Section 5 concludes the paper.

    2 Related Work

    Shi et al. [3] investigated text recognition in the scene. A unified framework and novel deep architecture are presented that integrate feature extraction, sequence modeling, and transcription. The proposed approach is end-to-end trainable and handles sequences without restrictions on their length. The approach is also independent of a prior lexicon and generates a comparatively small model for real-time, real-world scenarios. Tian et al. [4] propose an approach for text localization in natural images. They term the approach the Connectionist Text Proposal Network (CTPN). CTPN is based on a vertical anchor that efficiently predicts text location and the text/non-text score for fixed-width proposals. The approach fuses the Recurrent Neural Network (RNN) with the Convolutional Neural Network (CNN). The CTPN, which uses an RNN and a CNN, is shown to work reliably on multi-scale and multi-language text. The approach does not need the post-processing steps required by previous approaches.

    In [5], the authors propose an approach to text detection in complex scenarios involving panorama images. The approach exploits Extremal Regions (ER) as well as the fusion of edge information, probabilistic color detection, and geometric properties for segmenting text from the background. The authors report good overall detection performance. In [6], the authors present a novel approach for detecting tables in document images. The workflow for table detection is based on three unique steps: the first is preprocessing, followed by the detection of horizontal and vertical lines; the last is table detection based on the previous two steps. The performance is evaluated using forms, magazines, newspapers, scientific journals, certificates, handwritten documents, and cheques. In [7], the authors use morphological operators for text feature extraction for text line segmentation in documents. The algorithm is based on projecting multiple histograms. From the horizontal projection of the text image, line segments are extracted based on the peak horizontal projection. Threshold-based segmentation splits the images into multiple parts. The histogram's vertical projection is exploited for the line segments, followed by decomposition into words and, finally, characters using different thresholds. They report an accuracy of 98%.

    Yang et al. [8] propose a hierarchical network based on two unique characteristics. First, the proposed network follows the structure of the documents. Second, the approach employs two attention levels, applied at the sentence and word levels. In an evaluation of six large-scale text tasks, the proposed method outperforms the state of the art by a large margin. The approach in [9] investigates Neural Network (NN) architectures for multi-label text classification tasks. The article proposes that simple NN models, combined with rectified linear units, dropout, and AdaGrad, are suitable for this task. Specifically, the ranking loss minimization of Backpropagation for Multi-Label Learning (BP-MLL) can usefully be replaced with the commonly used Cross Entropy Error (CEE) function. The evaluation on six large-scale text datasets suggests that rectified linear units (RLU), dropout, and AdaGrad outperform other approaches. Graves et al. [10] proposed an approach based on the RNN that is specifically designed for sequence labeling tasks in which the data is complex and contains long-range, bidirectional dependencies. The proposed network is robust to the size of the lexicon. The impact of hidden layers and the use of hidden-layer context is also demonstrated. The approach significantly outperforms the state of the art.

    In [11], two approaches for text extraction from natural images, edge-based and connected-component-based, are compared. Furthermore, DL and RNNs are also widely used for generic object detection in images. In [12], Zuo argues that CNNs alone are not adequate to model the complex relationships between pixels in images. RNNs, on the other hand, can model the inherent contextual dependencies in digital images. Therefore, the authors propose to merge CNN and RNN, especially for tasks involving pixel-based detection. As an example application, the work demonstrates the use of the fusion approach for skin color detection on two datasets. The work in [13] is also based on the similar concept of using CNN and RNN together due to the inherent complexities of objects in images. The proposed approach is termed the CNN-RNN framework. It uses image-label embedding to learn the semantic label interdependency and the relevance of the image label. The CNN-RNN is end-to-end trainable. The approach outperforms the state of the art.

    In [14], the authors also propose a recurrent CNN, referred to as the Recurrent Convolutional Neural Network (RCNN), for image-based object recognition tasks. The activities of the proposed RCNN layers and units evolve through the modulated activities of neighboring units, thus learning contextual information. When evaluated on four datasets, the RCNN outperforms state-of-the-art models on all of them, demonstrating its advantage. Elfwing et al. [15] propose a deep architecture of Free-Energy Restricted Boltzmann Machines (FE-RBM). The RBMs are stacked on top of each other, and the class node is connected to all the hidden layers to improve performance. The performance of the approach shows its effectiveness.

    3 Proposed Methodology

    OCR generally requires two steps: the first is localization, and the second is recognition. Our method for processing payable documents is also divided into two steps. The first step is text localization. In this step, an image is segmented so that only the text candidate regions are kept and other regions are removed. These segmented regions, or chunks, are then passed to the text recognition module, which transcribes the image. We use the Maximally Stable Extremal Region (MSER) for text localization due to its robustness to noise and illumination. MSER detects text chunks, which are then passed on to the recurrent convolutional neural network, which generates a transcription of the image.

    3.1 Maximally Stable Extremal Region (MSER)

    MSER was proposed by Matas et al. [16] for finding correspondences between two image-based objects with different viewpoints. MSER regions are used for blob detection in digital images. MSER regions possess two properties: first, they are affine invariant and are independent of warping or skewness; second, the regions are sensitive to lightness or darkness. The intensity function calculates the MSER regions in the corresponding region and the outer boundary, which yields characteristics of the regions that are valuable for detection tasks. Sambyal et al. [17] proposed character segmentation and text extraction based on MSER. MSER is used to detect the essential letter candidates. The MSER threshold regions are used to determine the various connected components for identifying the various characters. The algorithm is evaluated on character sets from the English, Russian, Urdu, and Hindi languages. The authors report good performance for the English and Russian characters, but comparatively low performance for the Urdu and Hindi character sets. The authors advocate the simplicity and low overhead of the proposed approach. In [18], Sung et al. propose Extremal Region (ER) tree construction. It is argued that the use of MSER regions alone, as done by Sambyal et al. [17], is not a viable solution due to the strict requirement of maximum stability, and therefore achieves decreased performance. The approach employs sub-path sampling, pruning, character candidate selection, and finally Adaptive Boosting to verify the candidates in the extracted characters. The approach thereby achieves an increase in recall of 8%, precision of 1%, and F-measure of 4%. In [19], the authors propose a multi-level MSER for text segmentation from scenes. The proposed approach defines a segmentation score based on four measures: stroke width, boundary curvature, character confidence, and color constancy. The best-scoring MSER from each channel is fused for the final segmentation. In [20], the authors propose a text detection approach based on an enhanced MSER. The approach employs an enhancement based on edge detection and is termed edge-enhanced MSER for basic letter candidates. Based on geometric and stroke width information, the basic letter candidates are then filtered to exclude non-textual objects. Finally, the letters are paired and subsequently separated into discrete words using text line identification.

    In our approach, an image is fed to MSER, which extracts the character candidate regions along with noise from the input image. A further enhancement to MSER is made to extract only the characters and discard non-text regions such as logos, lines, and boxes. Text and non-text regions are separated based on the stroke width of the candidate region; characters usually have a smaller stroke width than non-text. After obtaining the character candidate regions, individual letters are chunked to form words from characters. Word chunks are formed when two letters overlap each other in the horizontal direction.
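    A minimal sketch of this chunking step is shown below, assuming OpenCV's MSER implementation; the area and aspect-ratio thresholds and the simple horizontal-overlap merge are illustrative stand-ins, not the exact filters used in the paper.

```python
# Illustrative MSER-based word chunking (OpenCV); thresholds are assumptions,
# not the paper's values.
import cv2

def mser_word_chunks(gray, min_area=30, max_aspect=8.0):
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)               # candidate character boxes (x, y, w, h)
    chars = [tuple(b) for b in bboxes
             if b[2] * b[3] >= min_area and b[2] / float(b[3]) <= max_aspect]
    chars.sort(key=lambda b: b[0])                     # left-to-right order
    words = []
    for (x, y, w, h) in chars:
        if words and x <= words[-1][0] + words[-1][2]:       # horizontal overlap: same word
            px, py, pw, ph = words[-1]
            nx, ny = min(px, x), min(py, y)
            words[-1] = (nx, ny, max(px + pw, x + w) - nx, max(py + ph, y + h) - ny)
        else:
            words.append((x, y, w, h))
    return words                                       # word/digit chunks passed to the RCNN
```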

    3.2 Proposed MSER Recurrent Convolutional Neural Network (MRCNN)

    The proposed architecture is shown in Fig. 1. Our network consists of four units: the MSER layer, the convolutional layers, the recurrent layers, and the transcription layer. MSER is applied as preprocessing to segment the characters. The convolutional layers extract and learn the feature sequence from each input image character. The recurrent network is constructed on the feature sequence output by the convolutional layers, making a probabilistic prediction for each frame. The final transcription layer, which takes its input from the recurrent layers, translates the recurrent-layer predictions into sequence labels. The convolutional and recurrent networks are jointly trained with one loss function.
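    For concreteness, a minimal PyTorch sketch of the convolutional, recurrent, and transcription units (MSER preprocessing excluded) is given below; the layer widths and depths are illustrative assumptions and do not reproduce the configuration in Tab. 1.

```python
# Illustrative CNN + bi-directional LSTM + transcription stack (layer sizes assumed).
import torch
import torch.nn as nn

class CRNNSketch(nn.Module):
    def __init__(self, num_classes, img_height=32):
        super().__init__()
        self.cnn = nn.Sequential(                        # feature extraction
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),                # pool height only: keep a long width
        )
        feat_h = img_height // 8                         # height left after the three poolings
        self.rnn = nn.LSTM(256 * feat_h, 256, num_layers=2, bidirectional=True)
        self.fc = nn.Linear(2 * 256, num_classes)        # per-frame transcription scores

    def forward(self, x):                                # x: (batch, 1, height, width)
        f = self.cnn(x)
        n, c, h, w = f.size()
        f = f.permute(3, 0, 1, 2).reshape(w, n, c * h)   # columns -> frames (time, batch, feat)
        out, _ = self.rnn(f)                             # contextual prediction per frame
        return self.fc(out)                              # (time, batch, num_classes)

logits = CRNNSketch(num_classes=37)(torch.randn(2, 1, 32, 100))
print(logits.shape)                                      # torch.Size([25, 2, 37])
```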

    3.3 Convolutional Feature Map

    In the proposed approach, the convolutional layers are like those of a conventional deep CNN, using convolutional and max-pooling layers and removing the fully connected layers. This setup thus extracts sequential features from the input. One constraint is the similar scaling of the input images. The feature vectors of the feature maps produced by the convolutional layers are used by the subsequent recurrent layer. Each feature vector is generated in a left-to-right fashion on the feature map using a column. The width of the column is kept fixed, i.e., 26 pixels. In the proposed network, deep features are conveyed into sequential representations invariant to the length of sequence-like objects. The translation invariance comes from applying the layers of convolution, max pooling, and element-wise activation functions operating on local regions. Thus, the columns of the feature maps represent rectangular regions of the original image, also referred to as receptive fields.
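    As an illustration, this map-to-sequence step amounts to a single permute/reshape of the feature map, with each column becoming one frame of the sequence (the shapes below are made up):

```python
# Each feature-map column becomes one frame of the feature sequence.
import torch

fmap = torch.randn(4, 512, 1, 26)                        # (batch, channels, height, width)
n, c, h, w = fmap.shape
frames = fmap.permute(3, 0, 1, 2).reshape(w, n, c * h)   # (width, batch, channels*height)
print(frames.shape)                                       # torch.Size([26, 4, 512])
```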

    3.4 LSTM Recurrent Labeling

    For predicting the label Y for each frame x in the feature sequence X, a deep bi-directional RNN is constructed on top of the CNN layers as the recurrent layers. The RNN has strong capabilities for capturing context information in the image sequence. It can trace the errors back to the convolutional layers that calculated the input features, allowing the CNN and RNN to be trained jointly. Furthermore, the RNN can operate on sequences of arbitrary length. A basic unit of the RNN contains a self-connected hidden layer between the input layer and the output layer. When it receives a frame of the input sequence, it updates its internal state using a nonlinear function. This function takes the current input and the previous state as inputs and predicts the current class. Generic RNN units suffer from the vanishing gradient problem, as discussed by Bengio et al. [21]. This problem adds a burden to the overall training setup and reduces the range of context that can be stored. The LSTM of [22,23], as an RNN variant, addresses this problem. Since the context from both directions is valuable and complementary, we follow [3,24] and combine two LSTMs: a forward and a backward LSTM form a two-way directional LSTM.
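    The two-way combination can be illustrated by running a forward and a backward LSTM explicitly, as in the sketch below (sizes are assumed; PyTorch's `bidirectional=True` flag performs the same combination internally):

```python
# Combine a forward and a backward LSTM into a two-way directional layer.
import torch
import torch.nn as nn

fwd, bwd = nn.LSTM(512, 256), nn.LSTM(512, 256)
frames = torch.randn(26, 4, 512)                  # (time, batch, features) from the CNN
h_fwd, _ = fwd(frames)                            # left-to-right context
h_bwd, _ = bwd(torch.flip(frames, dims=[0]))      # right-to-left context on reversed input
h_bwd = torch.flip(h_bwd, dims=[0])               # re-align with the original time order
context = torch.cat([h_fwd, h_bwd], dim=2)        # (26, 4, 512): both directions per frame
```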

    Moreover, multiple LSTM layers can be stacked to construct a deep LSTM model. The deep LSTM has contributed to the task of speech recognition [24]. Furthermore, it allows for a higher level of abstraction than a simple LSTM. In the directional LSTM, the error is propagated in the opposite directions using Back-Propagation Through Time (BPTT). At the bottom of the recurrent layers, the propagated sequences of differentials are mapped back; this inverts the operation that converted the feature maps into feature sequences and feeds the differentials back to the convolutional layers.

    Figure 1: The proposed architecture of the CNN-LSTM network. The network contains three layers: the convolutional layer, which learns and thus extracts features; the LSTM recurrent layer, which predicts the class label for each frame; and the transcription layer, which maps the predictions into the final label

    3.5 Label Transcription

    The frame-based predictions of the RNN are converted into sequence labels. This process is termed transcription, where we find the label sequence with the highest probability given the per-frame predictions. There are two kinds of transcription: lexicon-based and lexicon-free. A lexicon puts a constraint on the predictions; in a lexicon-free setup, the predictions are unconstrained. In lexicon mode, the highest probability over the lexicon drives the predicted label sequence. The transcription is done using the connectionist temporal classification function, which uses a forward-backward algorithm to find the optimal candidate; this is why it learns the contextual information for predicting a chunk. The Back-Propagation Through Time (BPTT) algorithm is applied in the recurrent layers to calculate the error differentials in these layers.
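    As a simplified illustration of lexicon-free transcription, the sketch below performs a best-path (greedy) collapse of the per-frame predictions; the paper's connectionist temporal classification transcription relies on the forward-backward algorithm rather than this greedy stand-in.

```python
# Greedy (best-path) CTC decoding: collapse repeats, then drop blanks.
import torch

CHARSET = "0123456789abcdefghijklmnopqrstuvwxyz"   # class 0 is reserved for the CTC blank

def ctc_greedy_decode(logits):
    """logits: (time, num_classes) per-frame scores for a single word image."""
    best_path = logits.argmax(dim=1).tolist()
    decoded, prev = [], 0
    for k in best_path:
        if k != 0 and k != prev:                    # new, non-blank symbol
            decoded.append(CHARSET[k - 1])
        prev = k
    return "".join(decoded)

print(ctc_greedy_decode(torch.randn(25, len(CHARSET) + 1)))
```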

    4 Experiments and Results

    This section contains the details of the experiments along with a detailed discussion of the obtained results.

    4.1 Network Training

    We consider an image training dataset X = {I, L}, where I represents the images and L stands for the ground-truth labels. The network minimizes the negative log-likelihood of the conditional probability of the ground truth. Stochastic Gradient Descent (SGD), with gradients calculated by the back-propagation algorithm, is used to train the network. The “forward-backward” algorithm of [25] propagates the error differentials backward in the transcription layer. For calculating and optimizing the per-dimension learning rates, we used the ADADELTA algorithm of [26]. Compared to other optimizers, ADADELTA calculates the learning rates automatically. We also found that optimization with ADADELTA converged faster.
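    As a sketch of this objective, the snippet below wires PyTorch's CTC negative log-likelihood to the ADADELTA optimizer on made-up logits and labels; in the actual system, the logits come from the recurrent layers of the network.

```python
# CTC negative log-likelihood minimized with ADADELTA (shapes and labels made up).
import torch
import torch.nn as nn

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
logits = torch.randn(25, 4, 37, requires_grad=True)         # (time, batch, classes)
optimizer = torch.optim.Adadelta([logits])                   # per-dimension learning rates

targets = torch.randint(1, 37, (4, 8))                       # ground-truth label indices
input_lengths = torch.full((4,), 25, dtype=torch.long)
target_lengths = torch.full((4,), 8, dtype=torch.long)

loss = ctc_loss(logits.log_softmax(2), targets, input_lengths, target_lengths)
loss.backward()                                              # error differentials via BPTT
optimizer.step()
print(float(loss))
```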

    4.2 Network Configuration

    The architecture of the convolutional setup is extended from the work of Simonyan et al. [27]. Tab. 1 shows the configuration of the network. The network of [27] is adapted to work for English text. In the max-pooling layers of row 8 and row 11 (Tab. 1), we adopt pooling strides of 1 × 2 instead of the conventional 2 × 2; we also refer to these as the third and fourth max-pooling layers. This results in feature maps with larger widths, thus producing a comparatively longer feature sequence. Our network combines deep convolutional layers with deep recurrent layers and uses the batch normalization technique introduced by Ioffe et al. [28]. The technique in [28] is beneficial for training a network of this depth. The network is augmented with batch normalization layers, inserted after the third, fifth, and seventh convolutional layers. The batch normalization process of [28] greatly reduces the training time and thereby expedites the network's convergence.

    4.3 Environmental Specification and Experimental Details

    We used a GeForce GTX 1080 Ti server containing 3584 cores and 12 GB of dedicated GPU memory for the experimental setup. Besides the GPU, the server contains 20 CPUs and 32 GB of memory. The model was trained on 0.65 million images and validated against 0.15 million images. Due to the simplicity of the training data (the data consists only of digits and letters), the model converged in just two hours. On a test set of 20 thousand words of payable-document vocabulary, the model reported an accuracy of 95 percent when used without a lexicon. When used with a lexicon, the model reported 99 percent accuracy in predicting words. One of the advantages of this network is the duality of its configuration, i.e., it can be used to predict words restricted either to a particular lexicon or without any restrictions. Restricting the model's predictions to a lexicon helps in the correct identification of key terms that are used in various financial and analytic processes. In this way, we utilize both lexicon-based and lexicon-free models. Predicting words through the lexicon-based model is computationally more costly, but with a small lexicon, e.g., twenty thousand words, and a CUDA environment, the time taken by the lexicon-based model is not significant.
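    One simple way to realize the lexicon-restricted mode, shown here only as an illustration and not as the paper's exact procedure, is to snap a raw transcription to the nearest lexicon entry by edit distance; with a twenty-thousand-word lexicon, such a scan remains inexpensive.

```python
# Snap a raw prediction to the closest lexicon word (Levenshtein distance).
def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def constrain_to_lexicon(word, lexicon):
    return min(lexicon, key=lambda entry: edit_distance(word, entry))

print(constrain_to_lexicon("invoise", ["invoice", "total", "amount", "date"]))  # invoice
```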

    Table 1: RNN configuration

    4.4 Experimental Results and Discussion

    Our deep pipeline consists of two phases: the first phase is detection, i.e., detection of the textual geometries in a scanned document, and the second is transcription of the detected textual geometries. MSER, already discussed in the methodology section of this paper, is widely used for text detection [17–20]. Although MSER detects the textual geometries in scanned documents accurately, it also detects some non-textual regions such as horizontal and vertical lines, logos, and noise (bar codes and dashes). MSER works in the following manner: first, it performs thresholding based on luminance, and then it extracts the connected components, called extremal regions, that survive the recursive thresholding. Thus, the final regions we obtain are the maximally stable extremal ones. To obtain only the stable textual regions, we enhanced the MSER module. Before passing the image to the MSER module, we performed preprocessing to remove vertical and horizontal lines, as shown in Fig. 2.

    Figure 2: (a) The overall MSER pipeline; (b) original images; (c) binary images after applying MSER; (d) the final image after applying preprocessing and MSER
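    A minimal sketch of this line-removal preprocessing, assuming OpenCV morphological opening with long, thin kernels, is shown below; the kernel length is an illustrative assumption.

```python
# Remove long horizontal/vertical ruling lines before MSER (kernel length assumed).
import cv2

def remove_ruling_lines(gray, line_len=40):
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 10)
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (line_len, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, line_len))
    lines = (cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel) |
             cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel))    # keep only long runs
    cleaned = cv2.bitwise_and(binary, cv2.bitwise_not(lines))       # drop the ruling lines
    return cv2.bitwise_not(cleaned)                                 # dark text, light background
```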

    Lines are not the only type of noise that we encounter in documents; there are other types of noise as well, such as logos and barcodes. To remove such noise, we use the statistical properties of these regions obtained from connected component analysis. For each of the connected components obtained from MSER, we compute the region properties shown in Eqs. (1)–(5):

    Based on the region properties, we applied adaptive thresholding to determine whether a particular region is textual or not. Fig. 3 shows a graphical illustration of our discussion.

    Figure 3:The description of the various transformations applied to the input data

    After localizing the text candidate regions using MSER, we pass the image to the RCNN (recurrent convolutional neural network). The configuration of the layers used for the RCNN is discussed in the previous sections. The RCNN uses connectionist temporal classification, which predicts the whole word by using contextual information. However, it does not help in identifying numbers, since these do not rely on context. We can use the RCNN either in lexicon mode, i.e., with predictions constrained to a limited vocabulary, or in lexicon-free mode, where the predictions are not constrained to any specific dictionary. We modified the RCNN and used the classifier in both modes, i.e., lexicon mode and lexicon-free mode. The lexicon mode is used for predicting words that can later be used for extracting specific information from invoices, while the lexicon-free mode is used for predicting alphanumeric strings and numbers. The connectionist cost function does not help in predicting numbers because number prediction does not depend on the previously predicted digit.

    Nevertheless, the bi-directional LSTM can capture the geometrical information of a particular symbol and classify that number or alphanumeric character. Previous results obtained using connectionist temporal classification have been state-of-the-art [3]. In our case, we obtained an F-score of 0.99 when testing on 0.15 M images in lexicon-free mode.

    5 Conclusion and Future Work

    This paper presented a deep learning model based on a Convolutional Recurrent Neural Network (CRNN) with Long Short-Term Memory (LSTM) to automate the tedious task of payable document processing. The CNN helps in feature extraction, and the extracted features are then passed to the LSTM. The RNN part is used to describe the context of scene text images and predict the structured outputs of sequence-like objects. The primary benefit of this approach is that it can handle both lexicon-free and lexicon-based text recognition. Furthermore, Maximally Stable Extremal Regions (MSER) were used for text extraction while avoiding noise. Our approach achieved an accuracy of 95 percent on a test set of 20,000 payable images when used without a lexicon; with the lexicon-based approach, we obtained 99 percent accuracy. In the future, we plan to use Bidirectional Gated Recurrent Units (BGRU).

    Funding Statement: The researchers would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
