
    Text Detection in Natural Scene Images Using Morphological Component Analysis and Laplacian Dictionary

IEEE/CAA Journal of Automatica Sinica, 2020, No. 1

    Shuping Liu, Yantuan Xian, Huafeng Li, and Zhengtao Yu

Abstract—Text in natural scene images usually carries abundant semantic information. However, due to variations of text and the complexity of backgrounds, detecting text in scene images is a critical and challenging task. In this paper, we present a novel method to detect text in scene images. Firstly, we decompose scene images into background and text components using morphological component analysis (MCA), which reduces the adverse effects of complex backgrounds on the detection results. In order to improve the performance of image decomposition, two discriminative dictionaries, one for background and one for text, are learned from training samples. Moreover, a Laplacian sparse regularization term is introduced into the proposed dictionary learning method, which improves the discrimination of the dictionaries. Based on the text dictionary and the sparse-representation coefficients of text, we can reconstruct the text component. After that, the text in the query image can be detected by applying certain heuristic rules. Experimental results show the effectiveness of the proposed method.

    I. INTRODUCTION

NOWADAYS, a vast number of images and videos are produced every day. How to effectively retrieve images or video frames of interest from such collections has therefore become an important and challenging task. Text in natural scene images provides direct semantic information, and text detection and recognition play important roles in image retrieval and recognition. Furthermore, text detection and recognition technology can also be used in many applications such as image classification, image annotation, and video event detection.

In recent years, text detection has attracted a lot of attention and many text detection methods have been proposed. A popular approach in recent years is based on sparse representation (SR), which is inspired by the sparse-coding mechanism of the human visual system [1], [2]. SR techniques have been successfully used for face recognition [3], [4], image classification [5], [6], image restoration [7], [8], compressed sensing [9], and image denoising [10].

Recently, researchers have demonstrated the effectiveness of sparse representation for text detection in natural scene images [11]-[13]. In [11], Zhao et al. proposed a text detection method using sparse representation with discriminative dictionaries. In that method, the edges of the scene image must first be detected by a wavelet transform and then scanned into patches with a sliding window. The performance of the method is therefore highly dependent on the edge detection results, and when background and text have similar edge properties the method may misclassify text and background. In contrast, Do et al. [13] proposed an approach that segments text from complex backgrounds using sparse representation, with the curvelet transform and the Haar wavelet transform chosen as dictionaries to represent the background and the text, respectively. However, such analytically designed dictionaries are insufficient to characterize the complex structures of background and text in natural images.

To address the problems mentioned above, we propose a new text detection method based on SR theory that can locate text in natural scene images with complex backgrounds. In our method, to avoid the effect of complex backgrounds on the detection results, we convert the text detection problem into an image decomposition problem, and SR-based morphological component analysis (SRMCA) is employed to decompose the source image into background and text components. In this process, an important issue for SR-based text separation is the choice of the background and text dictionaries; instead of using analytically designed dictionaries, we learn them from training samples. To further increase the discrimination capability of the dictionaries, we enforce small within-class scatter on the sparse representation coefficients. Once the sparse representation coefficients of the input scene image over the learned text dictionary are obtained, we can reconstruct the text component from the learned discriminative dictionary and these coefficients. By applying certain heuristic rules to the reconstructed text component, the text in the query image can then be detected.

In summary, the main advantages and contributions of the proposed method for detecting text in scene images are as follows:

1) We first convert the text detection problem into an image decomposition problem, which reduces the influence of complex backgrounds on the detection result. Text and background are then separated by SR-based morphological component analysis (SRMCA).

2) We learn the text and background dictionaries from training samples, rather than designing them analytically with the discrete cosine transform (DCT), wavelets, or curvelets. By introducing a Laplacian sparse regularization term into the dictionary learning method, the discrimination capability of the dictionaries is improved and the complex structures of background and text are characterized well.

The rest of the paper is organized as follows. Section II gives an overview of related work on text detection in scene images. In Section III, MCA-based sparse representation is reviewed. Section IV presents the proposed text detection methodology and describes its steps in detail, and Section V describes the optimization procedure. Section VI presents experiments carried out to validate the proposed solution, and conclusions close the paper.

    II. RELATED WORK

In recent years, many methods for text detection have been presented that prove effective in various configurations. Ye et al. [14] presented a review of the research on text detection. Text detection approaches typically stem from machine learning and optimization methods and can be roughly classified into four categories: connected component (CC) based methods, texture-based methods, edge-based methods, and hybrid methods (combinations of the above). CC-based methods [15]-[18] usually separate text from background by grouping pixels with similar colors into connected components, and the non-text components are then pruned with heuristic rules. In [19], a conditional random field was used to assign text or non-text labels to connected components. Based on the CC approach, Wang et al. [20] presented a coarse-to-fine method to locate characters in natural scene images. Using CC analysis, Wang et al. [21] developed a method to locate and segment text in natural scene images automatically. Yin et al. [18] presented a learning framework that constructs text candidates by grouping characters with an adaptive hierarchical clustering algorithm. In general, these approaches are suitable for detecting captions and superimposed text, and CC-based methods can be implemented rapidly. More importantly, the text components successfully located by CC-based methods can be used directly for text recognition. Nevertheless, these methods may fail when the text is not homogeneous and/or the background is complex.

Texture-based text detection methods [22], [23], which treat text as a special type of texture, are based on the assumption that text in images has distinct textural properties. In these methods, local binary patterns (LBP), wavelet decomposition, the Fourier transform, and Gaussian filtering are usually used as texture analysis tools to detect and assess textural properties such as local intensity and filter response. In [24], Zhou et al. proposed a multilingual text detection method by integrating texture features based on the histogram of oriented gradients, the mean of gradients, and LBP. Angadi et al. [25] used high-pass filtering in the DCT domain to avoid the impact of the background on the detection result. Based on local Haar binary patterns, Ji et al. [26] developed a robust text characterization approach. Generally, texture-based methods can detect and localize text accurately when the text regions have textural properties distinct from non-text regions. However, in some cases the texture of a text region is similar to that of a non-text region, and even worse, the textures of text and non-text may blend together in a local region. In these cases, texture-based methods may produce incorrect detection results.

Text is composed of stroke components with a variety of orientations. Based on this fact, edge-based methods detect potential text areas by identifying regions where the edge strength in a certain direction is high. Edge-based methods are usually efficient and simple when the edges of text and background differ considerably. Owing to these properties, edge-based methods have attracted much attention in recent years and several effective methods have been developed. Sun et al. [27] used a color image filtering technique to extract board text in natural scenes; the rims are first obtained, and the text is then extracted by analyzing the relationships among inherent features and characters, which is efficient for board text in natural scenes. Ye et al. [28] located edge-dense image blocks using edge features and morphological operations, and then employed an SVM classifier to identify the text blocks. Jain et al. [29] developed an effective edge-based text extraction method that investigates the location of text in images with complex backgrounds. Yin et al. [30] developed a method that first groups text candidates using a clustering algorithm and then identifies text with a text classifier. Edge-based methods can achieve good performance when scene images exhibit strong edges. However, the quality of a natural scene image is easily affected by shadows and illumination, so images with good edge profiles are sometimes hard to obtain; in such cases, the performance of edge-based methods may degrade.

Due to the variations of text and background in scene images, a single text detection method may produce disappointing results under certain conditions. To solve this problem, approaches that combine the methods mentioned above have been proposed. Pan et al. [31] presented a hybrid approach combining a conditional random field (CRF) with CC-based methods. By combining the complementary properties of Canny edges and maximally stable extremal regions, Chen et al. [32] proposed a new text detection method; the detection results it generates are binarized letter patches, so they can be used directly for text recognition. Le et al. [33] proposed to use the parallel-edge feature of text strokes to locate text in natural scene images. In this method, mean-shift clustering is used to group similar pixels into candidate text CCs, and parallel edges are then detected and used to determine which CCs are text strokes. Although hybrid approaches can overcome the disadvantages of single methods, good performance still cannot always be attained under conditions such as non-uniform illumination, non-uniform color, and non-uniform shadow.

    III. MCA BASED SPARSE REPRESENTATION

Sparse representation is based on the assumption that natural signals, such as natural images, can be modeled as a linear combination of a few elementary atoms of a dictionary. Let $D=\{d_1,d_2,\ldots,d_m\}\in\mathbb{R}^{n\times m}$ be an over-complete dictionary. Mathematically, a signal $y\in\mathbb{R}^{n}$ can be represented as

$$y = Dx \tag{1}$$

where $x$ is the representation coefficient vector of $y$ and most of its entries are zero or close to zero. It is usually assumed that $m \gg n$, implying that the dictionary $D$ is redundant; therefore, the solution of (1) is generally not unique. To find the solution with the smallest number of nonzero components, the sparse representation problem can be formulated as

$$\min_{x}\ \|x\|_{0} \quad \text{s.t.} \quad y = Dx \tag{2}$$

where $\|\cdot\|_{0}$ is a pseudo-norm that counts the number of nonzero entries in $x$.

The above optimization is an NP-hard problem and is often relaxed to a convex $\ell_1$ minimization. Considering that the source signal $y$ may contain additive noise, (2) can be rewritten as (3) by using a Lagrange multiplier and relaxing the constraint in (2):

$$\min_{x}\ \frac{1}{2}\|y - Dx\|_{2}^{2} + \lambda\|x\|_{1} \tag{3}$$

where $\lambda$ is a constant.
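To make the relaxation concrete, the following is a minimal numpy sketch of one standard way to solve a problem of the form (3), iterative shrinkage-thresholding with a step size taken from the Lipschitz constant; the solver choice, step size, and toy data are assumptions of this sketch rather than the specific routine used in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding, the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def solve_l1(y, D, lam=0.05, n_iter=200):
    """Minimize 0.5*||y - D x||_2^2 + lam*||x||_1 by ISTA (illustrative only)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: a redundant random dictionary (m >> n) and a noisy one-atom signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
y = 1.5 * D[:, 5] + 0.01 * rng.standard_normal(32)
x = solve_l1(y, D)
print("nonzero coefficients:", np.count_nonzero(np.abs(x) > 1e-3))
```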

Starck et al. [34] proposed a novel image separation method based on MCA and sparse representation. The method assumes that the signal to be separated can be decomposed into different components, owing to the existence of dictionaries that allow every component to be constructed by a sparse representation. It is also assumed that the dictionary used to represent a certain component is highly inefficient at representing the other components; in other words, given a dictionary $D_k$, the representation coefficients of $y_k$ are sparse, but those of the other components are not. If we assume that an image contains only texture $y_t$ and cartoon $y_c$, then they can be sparsely represented by a texture dictionary $D_t$ and a cartoon dictionary $D_c$. For an arbitrary image $y$, the sparse representation coefficients of texture and cartoon can be obtained by solving the following optimization problem:

$$\min_{x_t,\,x_c}\ \|y - D_t x_t - D_c x_c\|_{2}^{2} + \lambda_{1}\|x_t\|_{1} + \lambda_{2}\|x_c\|_{1} \tag{4}$$

where $\lambda_1>0$ and $\lambda_2>0$ are parameters that balance the different terms in the objective function.
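As an illustration of the MCA idea, the sketch below alternately sparse-codes a signal over two dictionaries and returns the two reconstructed components; the dictionaries, penalty values, and the simple ISTA-style inner solver are assumptions of this sketch, not the transforms used in [34].

```python
import numpy as np

def _shrink(v, t):
    # Soft-thresholding used inside each coordinate update.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mca_decompose(y, Dt, Dc, lam1=0.05, lam2=0.05, n_iter=100):
    """Approximately solve (4): alternate sparse coding of y over Dt and Dc."""
    xt = np.zeros(Dt.shape[1])
    xc = np.zeros(Dc.shape[1])
    Lt = np.linalg.norm(Dt, 2) ** 2
    Lc = np.linalg.norm(Dc, 2) ** 2
    for _ in range(n_iter):
        # Update texture coefficients with the cartoon part held fixed.
        r = y - Dc @ xc
        xt = _shrink(xt - Dt.T @ (Dt @ xt - r) / Lt, lam1 / Lt)
        # Update cartoon coefficients with the texture part held fixed.
        r = y - Dt @ xt
        xc = _shrink(xc - Dc.T @ (Dc @ xc - r) / Lc, lam2 / Lc)
    return Dt @ xt, Dc @ xc   # separated texture and cartoon components
```

In the proposed method, the same separation scheme is applied with a learned text dictionary and a learned background dictionary in place of the texture and cartoon dictionaries.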

    IV. PROPOSED METHOD

    A. Outlines of Proposed Method

In this paper, we propose a novel text detection method using morphological component analysis and a Laplacian dictionary, which leads to performance improvements over competing methods. The components of the proposed method are presented in Fig. 1. The proposed scene text detection method includes the following stages:

1) Text and background separation: We separate the text and background of scene images using SR-based morphological component analysis. Details of the separation procedure are described in Section IV-B.

2) Dictionary learning: Dictionaries of text and background are learned from training samples. These dictionaries are used in the text and background separation stage.

3) Reconstruction of text in scene images: Once the dictionaries $D_t$ and $D_b$ are learned separately, the coding coefficients of a query sample $Y$ can be obtained by solving

$$\min_{\alpha_t,\,\alpha_b}\ \|Y - D_t\alpha_t - D_b\alpha_b\|_{2}^{2} + \lambda_{1}\|\alpha_t\|_{1} + \lambda_{2}\|\alpha_b\|_{1}$$

and the text component is then reconstructed from the text dictionary and its coefficients.

4) Text detection in the reconstructed image: The centers of gravity of the components in the reconstructed text image are computed and mapped to the corresponding points in the original image. The candidate text areas are then enclosed in rectangles, and the final text area is obtained by merging adjacent rectangles (a minimal sketch of this merging step is given below).
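The merging of adjacent rectangles in stage 4 can be done, for example, by repeatedly fusing candidate boxes whose padded extents overlap; the gap threshold and the greedy strategy below are assumptions of this sketch, not the paper's exact heuristic rules.

```python
def merge_adjacent_boxes(boxes, gap=8):
    """Greedily merge candidate text boxes (x1, y1, x2, y2) that touch or
    overlap once each box is padded by `gap` pixels."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        result = []
        while boxes:
            a = boxes.pop(0)
            i = 0
            while i < len(boxes):
                b = boxes[i]
                # Boxes are "adjacent" if their padded extents intersect.
                if (a[0] - gap <= b[2] and b[0] - gap <= a[2] and
                        a[1] - gap <= b[3] and b[1] - gap <= a[3]):
                    a = [min(a[0], b[0]), min(a[1], b[1]),
                         max(a[2], b[2]), max(a[3], b[3])]
                    boxes.pop(i)
                    merged = True
                else:
                    i += 1
            result.append(a)
        boxes = result
    return boxes

# Two nearby word boxes are merged into one text region; the distant box is kept.
print(merge_adjacent_boxes([(10, 10, 40, 30), (45, 12, 80, 28), (200, 200, 220, 215)]))
```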

    B. Text and Background Separation Based on SRMCA

Text and background separation plays a significant role in image recognition, image fusion, image enhancement, astronomical imaging, etc. In this paper we assume that a natural scene image containing text can be considered a mixture of background and text. Thus, an image $Y$ can be modeled as

$$Y = D_t\alpha_t + D_b\alpha_b \tag{8}$$

where $\alpha_t$ and $\alpha_b$ are the sparse representation coefficients of $Y$ over $D_t$ and $D_b$, respectively.

As we know, characters in different languages usually have different structural characteristics and morphologies. Suppose that scene images contain $\tilde{m}$ languages; the dictionary $D_t$ can then be extended to $D_t = [D_{t1}, D_{t2}, \ldots, D_{t\tilde{m}}]$. Accordingly, the proposed model in (8) should be rewritten as

$$Y = \sum_{i=1}^{\tilde{m}} D_{ti}\alpha_{ti} + D_b\alpha_b \tag{9}$$

where $\alpha_t = [\alpha_{t1}, \alpha_{t2}, \ldots, \alpha_{t\tilde{m}}]$ and $\alpha_{ti}$ is the sub-vector associated with language $i$.

    Fig.1. Flowchart of the proposed method.

    C. Discriminative Dictionary Learning

A key issue for text separation based on SRMCA is the discriminative power of the dictionaries $D_t$ and $D_b$; dictionaries with strong discriminative capacity therefore have to be learned first. In this paper, a Laplacian sparse regularization term is introduced to model the relationships between local features. Since the Laplacian sparse regularization term enforces consistency of the sparse codes of similar features, the sparse codes of similar features are no longer independent. Moreover, the quantization error of local features can be significantly reduced, and the similarity of the sparse codes of similar local features is preserved.

Let $W$ be the similarity matrix. We encode the relationship among similar features by

$$f(\alpha) = \frac{1}{2}\sum_{i,j}\|\alpha_i - \alpha_j\|_{2}^{2}\,W_{ij} \tag{10}$$

For simplicity, we only compute the $K$ nearest neighbors of each sample. In this case, (10) can be rewritten as

$$f(\alpha) = \sum_{i}\|\alpha_i - \alpha_b\|_{2}^{2}\,W_{ib} \tag{11}$$

where $\alpha_i$ is the sparse code of a sample in a clustered class, $\alpha_b$ is the center of that class, and $W_{ib}$ represents the similarity between $i$ and $b$. Note that minimizing (11) makes $\alpha_i$ consistent with $\alpha_b$; thus, the regularization term $f(\alpha)$ encourages similar features to have similar codes. Therefore, the Laplacian sparse regularization term defined in (11) can be introduced into the dictionary learning model to improve its discriminative capacity.
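The regularizer can be implemented, for instance, by building a K-nearest-neighbor similarity graph over the training features and evaluating the quadratic form of its Laplacian; the Gaussian weighting and the per-sample (rather than per-class-center) graph below are assumptions of this sketch.

```python
import numpy as np

def laplacian_regularizer(alpha, features, k=5, sigma=1.0):
    """Return f(alpha) = tr(alpha L alpha^T) = 0.5 * sum_ij W_ij ||a_i - a_j||^2,
    where column i of `alpha` is the sparse code of the sample in row i of `features`."""
    n = features.shape[0]
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]                 # k nearest neighbors, self excluded
        W[i, nn] = np.exp(-d2[i, nn] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)                              # symmetrize the similarity matrix
    L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian
    return np.trace(alpha @ L @ alpha.T)
```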

Suppose $D_t = [D_{t1}, D_{t2}, \ldots, D_{t\tilde{m}}]$ is the learned text dictionary; the overall dictionary $D$ can then be expressed as $D = [D_{t1}, D_{t2}, \ldots, D_{t\tilde{m}}, D_b]$, where each $D_{ti}$ is a sub-dictionary of $D_t$ and $D_b$ is the background dictionary. The training data of the text dictionary are divided into classes using a spectral-clustering method before training. Since the background is very complex and there is no uniform criterion for its classification, the background training data are not classified.

Given training samples $Y = [Y_1, Y_2, \ldots, Y_k]$, the sparse representation matrix of $Y$ over $D$ is denoted by $\alpha$. We expect $D$ not only to represent $Y$ well, but also to be capable of distinguishing text from complex backgrounds. To achieve this, we propose the following Laplacian discrimination dictionary learning (LDDL) model:

$$\min_{D,\,\alpha}\ \lambda_{1}\|\alpha\|_{1} + \lambda_{2} f(\alpha) \quad \text{s.t.}\quad Y = D\alpha,\ \ \|d_i\|_{2} = 1,\ i=1,\ldots,m \tag{12}$$

where $\|\alpha\|_{1}$ is the sparse penalty that controls the sparsity of the codes, $\lambda_1$ and $\lambda_2$ are balancing coefficients, and $f(\alpha)$ is the Laplacian sparse regularization term. Through the constraint $\|d_i\|_{2}=1$, each atom $d_i$ of $D$ is restricted in $\ell_2$-norm, which prevents $D$ from having atoms with arbitrarily large $\ell_2$-norm. To reduce the effect of noise, (12) may be rewritten as (13) by relaxing the reconstruction constraint:

$$\min_{D,\,\alpha}\ \|Y - D\alpha\|_{F}^{2} + \lambda_{1}\|\alpha\|_{1} + \lambda_{2} f(\alpha) \quad \text{s.t.}\quad \|d_i\|_{2} = 1,\ i=1,\ldots,m \tag{13}$$

    V. OPTIMIZATION

Equation (13) is not jointly convex in $D$ and $\alpha$. However, it is convex in $D$ when $\alpha$ is fixed, and vice versa. Thus we can find the optimized $D$ and the corresponding coefficients $\alpha$ using an alternating iterative optimization algorithm.

    A. Update of α

Suppose that the dictionary $D$ is fixed; (13) then reduces to a sparse representation problem, and the objective function becomes

$$\min_{\alpha}\ \|Y - D\alpha\|_{F}^{2} + \lambda_{1}\|\alpha\|_{1} + \lambda_{2} f(\alpha) \tag{14}$$

where $\alpha = [\alpha_{t1}, \alpha_{t2}, \ldots, \alpha_{t\tilde{m}}, \alpha_b]$ denotes the sparse representation coefficients of the training samples $Y$ over $D$. The coefficients $\alpha_{tk}$ can be computed by fixing all the other $\alpha_{ti}$, $i \neq k$, $i = 1, 2, \ldots, \tilde{m}$. It should be noted that the background of the scene images is treated as a single class in (15).

Note that all terms in (14) are differentiable except $\|\alpha\|_{1}$. We rewrite (14) as

$$\min_{\alpha}\ Q(\alpha) + 2\tau\|\alpha\|_{1} \tag{16}$$

where $Q(\alpha) = \|Y - D\alpha\|_{F}^{2} + \lambda_{2} f(\alpha)$ and $\tau = \lambda_{1}/2$. Since $Q(\alpha)$ is strictly convex and differentiable, the objective function (16) can be solved by the iterative projection method (IPM) [35]. The update procedure of $\alpha$ is outlined in Algorithm 1, where $\nabla Q(\alpha)$ denotes the derivative of $Q(\alpha)$.

    Algorithm 1 Updating α
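A minimal sketch of an iterative-projection-style update for (16) is given below: a gradient step on $Q(\alpha)$ followed by soft-thresholding. Writing the Laplacian regularizer in the matrix form $f(\alpha)=\mathrm{tr}(\alpha L \alpha^{T})$, the constant step size, and the iteration count are assumptions of this sketch rather than the exact schedule of Algorithm 1 and [35].

```python
import numpy as np

def update_alpha(Y, D, L, lam1=0.1, lam2=0.05, n_iter=50):
    """Minimize Q(alpha) + lam1*||alpha||_1 with
    Q(alpha) = ||Y - D alpha||_F^2 + lam2 * tr(alpha L alpha^T),
    using gradient steps on Q followed by soft-thresholding."""
    alpha = np.zeros((D.shape[1], Y.shape[1]))
    # Conservative step size from a bound on the Lipschitz constant of grad Q.
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2 + 2.0 * lam2 * np.linalg.norm(L, 2))
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ alpha - Y) + 2.0 * lam2 * alpha @ L  # gradient of Q
        z = alpha - step * grad
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # prox of l1
    return alpha
```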

    B. Update of D

Suppose $\alpha$ is fixed; we can update the atom $d_i$ of dictionary $D$ while keeping all the other atoms $d_j$ ($j \neq i$) fixed. The objective function (13) then reduces to

$$\min_{d_i}\ \Big\|\Big(Y - \sum_{j\neq i} d_j\alpha^{j}\Big) - d_i\alpha^{i}\Big\|_{F}^{2} \quad \text{s.t.}\quad \|d_i\|_{2} = 1$$

where $\alpha^{j}$ denotes the $j$-th row of $\alpha$.

Each atom of the dictionary is normalized to a unit vector. Once the dictionary $D$ is initialized, the optimized $D$ can be learned with Algorithm 2.

    Algorithm 2 Updating D
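A minimal sketch of an atom-by-atom update in the spirit of Algorithm 2 is shown below: with $\alpha$ fixed, each atom is refit to the residual it is responsible for and then projected back onto the unit sphere. The exact update rule of Algorithm 2 may differ; this is an illustration under those assumptions.

```python
import numpy as np

def update_dictionary(Y, D, alpha):
    """Update dictionary atoms one at a time with the codes alpha fixed,
    renormalizing each atom to unit l2-norm."""
    D = D.copy()
    for i in range(D.shape[1]):
        a_i = alpha[i, :]                      # row of codes that uses atom i
        if not np.any(a_i):
            continue                           # unused atom: leave it unchanged
        # Residual of Y with the contribution of atom i removed.
        E = Y - D @ alpha + np.outer(D[:, i], a_i)
        d = E @ a_i                            # least-squares direction for atom i
        norm = np.linalg.norm(d)
        if norm > 0:
            D[:, i] = d / norm                 # project back onto the unit sphere
    return D
```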

    VI. EXPERIMENTS AND RESULTS

    A. Datasets and Evaluation Protocol

We tested the proposed algorithm on public image datasets consisting of scene images that contain text: ICDAR (2003, 2011, 2013) (http://algoval.essex.ac.uk/icdar/Datasets.html, http://www.icdar2011.org/EN/column/column26.shtml) and MSRA-TD500 (http://www.iapr-tc11.org/mediawiki/index.php/MSRA_Text_Detection_500_Database_(MSRA-TD500)) [36].

    We noticed that the evaluation protocols of ICDAR and MSRA-TD are slightly different. However, in general, text detection performance criteria are defined by precision, recall and the f-measure. In order to fairly compare the proposed method with other state-of-the-art methods, we use the evaluation protocols defined by ICDAR and MSRA-TD respectively in our experiments.
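For reference, the f-measure used by both protocols is the harmonic mean of precision and recall; the tiny helper below assumes that the precision and recall values have already been computed according to each protocol's own matching rules.

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall; 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

print(round(f_measure(0.72, 0.73), 3))   # e.g. the ICDAR-2003 figures reported below
```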

    B. Parameters Settings

The parameters of the proposed method are set to the values recommended in the relevant papers and verified in numerous experiments, and they are kept fixed across all our experiments. Some of the parameters, such as $\lambda_1$ and $\lambda_2$, appear in the formulas; others are implementation settings, namely the number of iterations in the sparse-coding stage and nIter, the number of iterations in the dictionary-learning stage. Specific values are provided in Table I.

    TABLE I PARAMETER SETTINGS FOR PROPOSED MODEL

The size of the sliding window has a considerable influence on the performance of the learned dictionary. If the patch is too small, there is little difference between the information in the background and that in the text, so candidate text patches may be erroneously classified as background patches. Conversely, if the patch is too large, it contains both text and graphic components, neither of which is sufficiently sparse over the text dictionary or the background dictionary. In this paper, we use the same sliding window size for all datasets. According to the experimental results for patch sizes from 8 to 20 shown in Fig. 2, a window size of 16 yields sufficient text features and the best experimental results.
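The sliding-window sampling can be implemented as below; the 16-pixel window follows the choice justified by Fig. 2, while the stride and grayscale input are assumptions of this sketch.

```python
import numpy as np

def extract_patches(image, patch_size=16, stride=8):
    """Slide a window over a grayscale image and return flattened patches
    (one row per patch, patch_size*patch_size values each)."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size].ravel())
    return np.asarray(patches)

# A 16x16 window over a 48x64 test image yields a 256-dimensional vector per patch.
patches = extract_patches(np.zeros((48, 64)), patch_size=16, stride=8)
print(patches.shape)   # (35, 256)
```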

    Fig.2. Experimental results for different sizes of sliding window (window size from left to right: 8, 12, 16, 20).

The number of classes in the spectral-clustering algorithm is another important parameter, because the sparse-representation coefficients are learned class by class. In addition, the class number is closely related to the textural features of the text and background. The number of spectral classes is selected according to experimental results on the ICDAR-2003 dataset; the text detection F-scores for different numbers of spectral classes are shown in Fig. 3.

    Fig.3. Text detection F-score for different numbers of spectral classes.

The results show that the performance of the classified dictionary is superior to that of a dictionary without classification. Text reconstruction becomes more accurate as the number of spectral classes increases, reaching its optimum at five spectral classes; the quality of text reconstruction decreases when more than five spectral classes are used. Therefore, the class number is set to five in our spectral-clustering algorithm.
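One way to realize the class-by-class dictionary training is shown below, using scikit-learn's spectral clustering with the five classes selected above; the affinity choice is an assumption of this sketch.

```python
from sklearn.cluster import SpectralClustering

def cluster_text_patches(patches, n_classes=5):
    """Split the text training patches into n_classes groups prior to
    learning the per-class sub-dictionaries."""
    labels = SpectralClustering(
        n_clusters=n_classes,
        affinity="nearest_neighbors",   # similarity graph built from k-NN
        random_state=0,
    ).fit_predict(patches)
    return [patches[labels == c] for c in range(n_classes)]
```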

    C. Residuals

The residual value is obtained from the reconstruction error of the current dictionary and sparse codes. During the dictionary-learning and sparse-representation stages, we use a two-step iterative shrinking algorithm to calculate the residual value [37]. The largest difference between the residuals of adjacent iterations indicates where the overall residual curve stabilizes most rapidly. Fig. 4 shows the residual curves for the dictionary-learning and sparse-representation stages.
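A common choice of residual, assumed in the sketch below, is the Frobenius norm of the reconstruction error, tracked after every iteration to produce curves like those in Fig. 4.

```python
import numpy as np

def reconstruction_residual(Y, D, alpha):
    # Frobenius-norm reconstruction error of the current dictionary and codes.
    return np.linalg.norm(Y - D @ alpha, ord="fro")
```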

Fig. 4 shows that both residual curves decline very quickly, approaching their minimum values by about the fourth iteration, and the curves for both the dictionary-learning and sparse-representation stages reach their minima after 12 iterations. Therefore, we set the number of dictionary-learning iterations to 12.

    Fig.4. Residual curves for dictionary-learning and sparse-representation stages.

    D. Results and Discussions

To show the advantages of the proposed model, we compare the proposed method with the ICDAR competition methods and other state-of-the-art methods in [11], [30], [38]-[41]. Table II shows the comparison results.

On the ICDAR-2003 dataset, the proposed method achieves the best overall performance, with the highest recall (0.73) and high precision (0.72). Although the method described in [44] achieves higher precision (0.79), its recall (0.64) is much lower; methods that achieve higher recall separate more text from the background. On the ICDAR-2011 dataset, the proposed method achieves relatively high precision (0.78) and the highest recall (0.75) among these methods. As the ICDAR-2013 and ICDAR-2011 datasets are almost identical, differing only in a small number of images, the difference in performance between the two datasets is very small.

Compared with the ICDAR datasets, MSRA-TD500 has a larger range of text sizes and more varied text orientations [45]. We compare the proposed method with the methods of Yao et al. [36] and Yin et al. [18] on the MSRA-TD500 database. The results obtained by the proposed method are slightly better than those of [36] and lower than those of [18].

Next, we compare the proposed method with the stroke width transform (SWT) method and the K-SVD method. The basic stroke candidates obtained by SWT [43] are merged using adaptive structuring elements to generate compactly constructed texts, and individual characters are chained using the k-nearest-neighbors algorithm to identify arbitrarily oriented text strings. In methods based on K-SVD [6], sparse representation is used to learn an over-complete dictionary for text detection. SWT is a traditional and efficient text detection method, while the K-SVD approach is a newer, effective text detection method based on conventional dictionary learning. The results of the comparison between our method and these two methods are shown in Fig. 5.

Fig. 6 shows results produced by the proposed method on source images selected from the ICDAR and MSRA-TD500 datasets. From Fig. 6 we can see that the source images contain a wide range of text types: text in real-world scenes, text in different languages, text in different colors and sizes, and text with complicated backgrounds. For these different types of image, the proposed method produces pleasing detection results, which shows that it has good detection performance in most cases.

TABLE II COMPARISON ON THE ICDAR (2003, 2011, 2013) AND MSRA-TD500

Fig.5. Comparison of the original SWT method (in the left column), the K-SVD method (in the middle column) and the proposed method (in the right column).

    Fig.6. Sample text-detection results using the proposed method.

    VII. CONCLUSION

This paper introduced a novel text detection model based on MCA and sparse representation. Two key innovations underlie the good detection results: the discriminative power of the learned dictionary enhances the ability to reconstruct the text, and the MCA-based method detects the text in the reconstructed text image. The experimental part of this paper demonstrates that the proposed method performs well on text detection in natural-scene images, and the results show that it outperforms existing techniques. In particular, it allows robust text detection without limitations on text size, color, or other properties, and it can detect text overlaid on images as well as text within scene images. In future work, we will explore online dictionary learning to reconstruct a sparser text component and will use the intermediate- and high-frequency components of text for the text detection problem.
