
    A New Segmentation Framework for Arabic Handwritten Text Using Machine Learning Techniques

    2021-12-11 13:32:52 Saleem Ibraheem Saleem, Adnan Mohsin Abdulazeez and Zeynep Orman
    Computers, Materials & Continua, 2021, Issue 8

    Saleem Ibraheem Saleem, Adnan Mohsin Abdulazeez and Zeynep Orman

    1 Department of Information Technology, Technical Informatics College of Akre, Duhok Polytechnic University, Duhok, 42004, Kurdistan Region, Iraq

    2 Department of Computer Engineering, Faculty of Engineering, Istanbul University-Cerrahpasa, Istanbul, 34320, Turkey

    Abstract: The writer identification (WI) of handwritten Arabic text is now of great concern to intelligence agencies following the recent attacks perpetrated by known Middle East terrorist organizations. It is also a useful instrument for the digitalization and attribution of old text to other authors of historic studies, including old national and religious archives. In this study, we propose a new effective segmentation model that modifies an artificial neural network model to make it suitable for the block-based binarization stage. This modified method is combined with a new effective rotation model to achieve accurate segmentation through the analysis of the histogram of binary images. We also propose a new framework for correct text rotation that helps us establish a segmentation method that facilitates the extraction of text from its background. Image projections and the Radon transform are used and improved using machine learning based on a co-occurrence matrix to produce binary images. The training stage involves taking a number of images for model training. These images are selected randomly with different angles to generate four classes (0-90, 90-180, 180-270, and 270-360 degrees). The proposed segmentation approach achieves a high accuracy of 98.18%. The study ultimately provides two major contributions that are ranked from top to bottom according to their degree of importance. The proposed method can be further developed as a new application and used in the recognition of handwritten Arabic text from small documents regardless of logical combinations and sentence construction.

    Keywords: Writer identification; handwritten Arabic; biometric systems; artificial neural network; segmentation; skew detection model

    1 Introduction

    The increased usage of innovative and sophisticated hacking and falsification technologies in recent years has contributed to the rising demand for alternative strategies for identifying individual identities [1]. Biometric systems remain promising innovations that differ from conventional methods because they can be used in digital systems to identify and/or verify an individual's identity uniquely and efficiently without requiring the person to carry or remember anything. Such systems are commonly used in various governmental and civil-sensitive applications, such as the automation of access control over physical and virtual places (e.g., border controls, ATMs, safety and surveillance systems, financial transactions, and computer/network security) [2]. Personal identification based on biometric features offers numerous benefits over conventional knowledge-based approaches (e.g., passwords or personal identification numbers) because it eliminates the difficulties emerging from switching, losing, forgetting, or duplicating driver's licenses, passports, identification cards, or even simple keys. Therefore, this approach is easy and user-friendly for clients, who no longer need to recall or carry anything with them in the process of verifying their identities [3]. Biometric systems are subject to higher safety standards in comparison with conventional methods. A biometric system identifies and classifies biometric functions by matching extracted personal features against a reference collection stored in a system database with a sequence of discriminating features from captured objects. Its ultimate judgment depends on the result of this comparison. Biometric systems benefit users who do not want to recall information or take something with them to verify their identities; in this case, losing, exchanging, or missing individual credentials is prevented. Despite the low contrast and high complexity of biometric samples, physiological biometrics, particularly iris, fingerprint, and DNA, are relatively accurate. Such biometrics are commonly used and implemented in recognition schemes. By contrast, behavioral biometrics are seldom used because of the high variability among biometric samples extracted from actions. Nonetheless, the recognition of an individual based on handwriting samples appears to be a valuable biometric tool, primarily because of its forensic applicability. Writer identity is a type of behavioral biometry because personal writing is an early learned and perfected activity [4].

    The writer identification (WI) of handwritten Arabic text is the foundation of our WI program. The Arabic language has a wide spectrum of uses, with 347 million individuals collectively speaking it [5]. Moreover, more than 1.2 billion Muslims in the world use Arabic every day to recite the Quran and to pray. Several other languages use Arabic letters in their writing or the same letter types but with small modifications (a population of about 700 million) [6]. Examples include Hausa, Kashmiri, Tatar, Ottoman Turkish, Uyghur, Urdu, Malay, Morisco, Pashto, Kazakh, Kurdish, Punjabi, and Sindhi. Arabic is also one of the five most commonly spoken languages in the world (together with Chinese, French, Spanish, and Hindi/Urdu) [7]. The current research focuses on WI on the basis of handwritten Arabic text. WI is a behavioral biometric that utilizes a handwritten document in any language. It is regarded as useful because personal handwriting is an early learned and refined habit. Handwriting is also a valuable biometric tool in the recognition of signature patterns and in the digitalization of theological and historical records, forensics, violent incidents, and terrorist attacks. Technically, the automated recognition of an individual's handwriting may be treated in the same manner as any biometric identification. It involves the retrieval of a digital vector representation, the availability of adequate samples from multiple providers of these vectors, and the presence of a certain measurement/distance between such vectors. Such distance may represent a correlation between the persons from whom the samples are collected and support correct classification. This research may profit from current WI work utilizing handwriting in other languages, such as English. However, in our attempt to tackle this issue, we should understand the form and characteristics of the Arabic language. The success of existing schemes is influenced by several factors, such as the variations in the national and educational histories of Arabic text authors [8].

    WI has been revived in recent years for a range of purposes. Today, forensic proof and verification techniques are being widely used by courts around the world. Furthermore, the rise in crimes and terror activities has prompted authorities to carry out proactive counter-activities on the basis of WI. The WI of handwritten Arabic text is now of great concern to intelligence agencies following the recent attacks perpetrated by Middle East terrorist organizations. WI is also useful in the digitalization and attribution of old text to other authors of historic studies, including old national and religious archives [1-3]. WI from handwritten text structures is typically focused on digital characteristics, with letters/strokes representing information acquired from current research on the integration of individual writing habits/styles. According to [1], the idea of WI emerged from previous research that reported the better recognition of a word's attributes than those of a character or stroke. Given the complexity of Arabic handwriting, segmenting and separating letters and strokes from a script presents another challenge on top of WI schemes [8].

    The main aim of the current work is to build a system that can identify writers on the basis of their handwriting. We attempt to investigate and highlight the limitations of existing methods. Subsequently, we build a powerful model that can help to identify writers by using their sentences, words, and subwords. The main contributions of this study can be summarized as follows:

    • We propose a new effective segmentation model by modifying an existing ANN model and making it suitable for the binarization stage based on blocks. This model is combined with a new effective rotation model to achieve an accurate segmentation through the analysis of the histogram of binary images.

    • We propose a new framework for correct text rotation that will help us to establish a segmentation method that can facilitate the extraction of text from its background. Image projections and the Radon transform are used and improved using machine learning based on a co-occurrence matrix to produce binary images.

    • Handwritten texts often exhibit a certain orientation away from the horizontal text baseline, called "text skew." Skew detection is an important stage that can facilitate segmentation and the extraction of good features or signs for WI. To address related limitations, we propose a framework that is based on binary images produced from the initial segmentation stage.

    The rest of the paper is organized as follows. Section 2 presents the related work and highlights the research problem. Section 3 introduces the proposed system that can identify writers on the basis of their handwriting. Section 4 discusses the experimental results of the preprocessing and segmentation stages for Arabic handwriting recognition. Finally, Section 5 details the conclusion and future works.

    2 Related Work

    Preprocessing and segmentation are critical factors in obtaining significant and vital feature vectors from accessible handwritten text samples to be used for the identification of text writers. In this context, preprocessing is aimed toward the preparations for the requested segmentation of Arabic handwritten text based on input signal scanning [9]. The preprocessing phase involves measures that are unique to (1) the scanning method, which may add errors or noise; (2) the general design of handwriting, including the mismatch of written lines; and (3) the arrangement of Arabic text, including the overlap between text lines, phrases, subwords, letters, and idiomatic phrases. The last two categories of preprocessing activities include the form of the pen used, the pressure applied to the pen, and the font size [10]. Segmentation is the method of separating a scanned image's written text into elements for recognition. The separation covers the critical patterns in the text that must be extracted, compared, and adapted to new prototype vectors. This section discusses in depth the problems of preprocessing and segmentation and the possible algorithms for addressing them. To provide evidence in support of our views and to direct our research toward the second segmentation portion, we discuss herein the composition of the elements of Arabic documents [11].

    Text preprocessing is a critical feature of all content classification methods. The key strategy is to delete unhelpful features while maintaining those that support the classification method in later steps to forecast the correct form of text. Text preprocessing typically requires the removal of stop words, punctuation, numerals, and low-frequency terms. Different experiments have been performed to determine the impact of text preprocessing on various facets of text classification. For example, Ahmad et al. [9], Gonçalves et al. [12], and Srividhya et al. [13] compared the preprocessing results of English, Turkish, and Portuguese texts. They reported that text preprocessing increases classification precision and reduces the feature size while improving the classification speed. Arabic text identification involves two forms of preprocessing, as well as the elimination of punctuation, prefixes, low-frequency phrases, and stop words. The first form includes specific measures involving the deletion of Latin letters, Arabic idiomatic phrases, and Kashida, as well as the normalization of Hamza and Taa Marbutah. This step minimizes the feature size and addresses the lack of data. This procedure is also commonly adopted in the classification of Arabic text and in other Arabic natural language processing applications. The characteristics resulting from this type of preprocessing are the same as those obtained from term orthography. Several experiments have shown that when such basic preprocessing measures are implemented, the accuracy of classification improves [14,15]. Meanwhile, the second form of preprocessing is required for the complex manipulation of details, stemming, and root extraction. The extracted stems and roots are most often used as features. Stemming and root extraction minimize the usable size and quantity of records because multiple word forms are converted into one stem or root. Instead of being implemented separately, the first and second preprocessing modes are usually combined and performed simultaneously. Many studies have tested the impact of this approach on the Arabic text categorization of stems and roots relative to term orthography.

    The characterization based on stems is deemed more precise than that based on roots [16,17]. If the use of word orthography is contrasted with the use of roots and stems for the classification of Arabic, then conflicting results are obtained. For example, Haralambous et al. [18] found that stems yield better identification results than spelling and roots, while Hmeidi et al. [19] reported that orthography provides better classification results than roots and stems. Wahbeh et al. [20] analyzed two datasets, each with a particular character, and inferred that stemming is reliable with limited datasets and term spelling with large datasets. The possible interpretations of these contradictory findings point to the usage of multiple stemmers of varying precision and to the design of the experimental datasets.

    The authors of [21] suggested the extraction of features from Latin letters or character components and proposed a testing algorithm. They grouped connected components into graphemes and then separated them into potential strokes (characters or portions of characters). K-means clustering of graphemes was used to describe a feature space specific to the documents in the database. Experiments were conducted on three different datasets containing 88 books and 39 historical accounts. These datasets were authored by 150 scholars. Writer identification was investigated within an information retrieval system, and writer verification was based on shared information among the graphemes distributed in the manuscripts. The tests revealed an accuracy of almost 96%. The same grapheme classification technique has been used for feature extraction [22]. For the study of Latin text fragments (characters or pieces of characters), Schomaker used related methods but focused on Kohonen's self-organized feature map (SOM). The author introduced a writer recognition algorithm by categorizing documents into fragments [23]. After smoothing and binarization, the information obtained was based on the contours of such fragments. Then, the information was used to trace the Moore contour. The Moore neighborhood comprises the eight pixels surrounding a center pixel in a 2D square grid. Kohonen's SOM learns the "connected contours" of the fragments of the learning set. A SOM is a form of artificial neural network that is trained to identify input vectors by mapping them onto a low-dimensional map of the input field, thereby presenting a multidimensional system in a relatively narrow dimensional space.

    In another study [24], an algorithm was evaluated on a separate database of Western-language texts collected from 150 authors. The k-nearest neighbor algorithm was used to identify the top-1 writer with a precision of 72% and the top-10 writers with an accuracy of 93%. The same technique was adopted for uppercase Western text. The authors introduced an algorithm to classify Latin writers on the basis of the detection of handwriting character attributes. The specifications included parameters such as height, distance, number of end points, frequency of joints, roundness, duration of circle, field and center of gravity, angle, and frequency of loops. The device was checked 10 times by 15 authors on random sequences of the digits 0 to 9. For identification, the Hamming distance was used with a reliability rate of 95%. Another study [24] suggested methods for the recognition and authentication of Arabic and Latin writers on the basis of structural, textural, and allographic characteristics. The study proposed the segmentation of text into characters by using allographic characteristics as author style, with a focus on the premise that words should not overlap. The segmentation was performed down to a point in the lowest contour. K-means was used for codebook clustering. This codebook was introduced as a training package of sample graphemes. The codebook was also standardized to construct a histogram for any related term by using the Euclidean distance. The analysis was performed on 350 authors, each of whom conducted five tests using an unbiased database. The most effective recognition rate was 88%. The results for Arabic text were lower than those reported for Latin text.

    In another work [25], independent Persian characters were checked for the recognition of Persian authors. The images were analyzed and eventually segmented into multiple strokes. A collection of characteristics, including the stroke orientation, horizontal and vertical outlines, and stroke measuring ratio, was defined for every stroke. To classify writers, the study used a variation of fuzzy rule-based quantization and fuzzy learning vector quantization (FLVQ). The algorithm was evaluated on a standalone database of 128 writers, with the accuracy levels in different test circumstances reaching 90%-95%. Another study [26] proposed an algorithm that, instead of dealing with alphabetic characters, classifies Latin authors by using handwritten digits to retrieve characteristics. The specifications comprised parameters such as height, distance, frequency of end points, frequency of joints, roundness, duration of circle, field and center of gravity, angle, and volume of loops. The device was checked 10 times by 15 authors on random strings between 0 and 9. For identification, the Hamming distance was used with a precision rate of 95%.

    Many researchers have conducted writer recognition by using other languages. Existing approaches rely on a single type of recognition (e.g., online vs. offline similarity) or on other text content and functions, as explained in the introduction section (text dependent vs. text independent). Other studies have utilized articles, chapters, sections, terms, or characters in writer recognition. Certain methods are more involved than others in terms of small sections, such as character or stroke pieces found in text written in Latin, Persian, and other languages. Such methods have been employed in various languages. Researchers should clarify the observable traits of current handwritten text analytics with brief samples. Considering that most of the existing literature is focused on offline text, we structure our analysis according to the collection of text elements that are deemed to be biased and widely used in writer recognition. Our analysis is not strictly focused on Arabic, even though the growing interest in WI lies in Arabic handwritten script.

    3 The Proposed Method

    Fig. 1 shows the framework of the proposed algorithm for segmenting text from a static image and assessing the effective features for writer recognition. To correctly locate the text in the given image, the proposed algorithm utilizes histogram equalization for noise elimination, highlights the text, and increases the contrast of dark regions. In this stage, we enhance the existing histogram equalization by using an adaptive threshold that can help to identify dark regions. We also apply the histogram equalization based on the complexity of a region. In other words, we develop a new adaptive method for histogram control. The first step in this method is segmentation, which involves binarizing the image for the next stages. Specifically, we propose a powerful trainable model that can help to identify a piece of text from the rest of a given image even if the text is written in a different color. This model helps to capture poor-quality text. Then, the skew of the text is corrected using the binary image as the input. In this stage, we rotate the image to the correct position. We then propose a new trainable technique to identify the angle of rotation for the text. If the text is at the right angle, then we move to the post-segmentation of the text; otherwise, we use a morphological operation and the Radon transform to rotate the text. In the previous stages, we binarize the image and rotate it in preparation for the post-segmentation stage. The post-segmentation stage includes investigating the histogram of the binary image for each vertical line and adjusting the threshold for the extraction of each line that includes text while ignoring the others. At this stage, we have extracted each line of text from the image. Then, for each line and by using the same technique, we extract each subword: we analyze each line and use a threshold for extracting each subword. A new model based on new features is finally investigated and implemented for WI.
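    The pipeline just described can be illustrated with the following Python sketch, which chains simplified stand-ins for the main stages (enhancement, binarization, and line post-segmentation via the row histogram of the binary image). The function names and thresholds are our own placeholders, not the trained models proposed in this paper:

```python
import numpy as np

def enhance(img):
    # Contrast stretch as a stand-in for adaptive histogram equalization.
    f = img.astype(float)
    lo, hi = f.min(), f.max()
    return ((f - lo) / max(hi - lo, 1.0) * 255).astype(np.uint8)

def binarize(img, thresh=128):
    # Stand-in for the trainable block-based binarization (ink = 1).
    return (img < thresh).astype(np.uint8)

def segment_lines(binary):
    # Post-segmentation: rows whose ink-pixel count exceeds a threshold
    # belong to a text line; contiguous runs of such rows form the lines.
    row_hist = binary.sum(axis=1)
    mask = row_hist > 0
    lines, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            lines.append((start, i))
            start = None
    if start is not None:
        lines.append((start, len(mask)))
    return lines
```

A gray image flows through `enhance`, `binarize`, and `segment_lines` in turn, returning `(start_row, end_row)` intervals, one per detected text line.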

    3.1 The Preprocessing Stage

    3.1.1 Adaptive Histogram Equalization

    Given the poor quality of some of the images used in our research, we use histogram equalization to enhance these images and highlight the pieces of text for the segmentation and identification stages. Fig. 2 shows the poor-quality images in our dataset.

    As shown in Fig. 2, the images contain unclear text, noise, and unclear backgrounds. On the basis of our investigation and analysis of existing articles, we find that histogram equalization is one of the most effective techniques with minimal limitations. We enhance the existing adaptive histogram equalization and make it suitable for our problem. Histogram equalization is one of the most frequently utilized approaches to image contrast improvement because of its simplicity and high efficiency. It is accomplished by normalizing the intensity distribution using the method's cumulative distribution function. The resultant images then have identical intensity distributions. Histogram equalization may be categorized into two types depending on the utilized transformation function: local or global. Global histogram equalization (GHE) is fast and simple, but its contrast enhancement power is relatively small. By contrast, local histogram equalization (LHE) improves total contrast efficiently. The histogram of the overall input image is utilized in GHE for computing the histogram transformation function. As a result, the dynamic range of the image histogram is expanded and flattened, and the total contrast is enhanced. As the computational complexity of GHE is relatively small, it is an appealing tool for several applications involving contrast enhancement. Useful applications of histogram equalization involve texture synthesis, speech recognition, and medical image processing, where it is commonly used to modify histograms. Histogram-based image improvement techniques typically depend on the equalization of image histograms and the expansion of the dynamic range corresponding to images.

    Figure 1: General framework of the proposed system

    Figure 2: Poor-quality images. Image (A) shows poor edges and an overlap between the words and the lines; image (B) shows noise and indistinguishable objects

    Histogram equalization has two main disadvantages that reduce its efficiency [27-33]. First, histogram equalization assigns one gray level to two dissimilar neighboring gray levels with dissimilar intensities. Second, if a gray level is included in most parts of an image, then histogram equalization allocates a gray level with greater intensity to that gray level, resulting in a phenomenon called "wash out." Assume that X = {X(i,j)} represents a digital image, where X(i,j) denotes the gray level of the pixel at location (i,j). The number of pixels in the entire image is denoted as n, and the intensity of the image is digitized into L levels, that is, {X_0, X_1, X_2, ..., X_{L-1}}. Thus, ∀ X(i,j) ∈ {X_0, X_1, X_2, ..., X_{L-1}}. Suppose that n_k represents the overall number of pixels with gray level X_k in the image; the probability density of X_k is then written as

    p(X_k) = n_k / n,  k = 0, 1, ..., L-1

    The relationship between p(X_k) and X_k is expressed as the probability density function (PDF), and the graphical appearance of the PDF is called the histogram. Depending on the image's PDF, its cumulative distribution function is expressed as

    c(X_k) = Σ_{j=0..k} p(X_j)

    where k = 0, 1, ..., L-1 and c(X_{L-1}) = 1. A transform function f(x) is described as follows depending on the cumulative density function:

    f(x) = X_0 + (X_{L-1} - X_0) c(x)

    Next, the output image of GHE, Y = Y(i,j), may be defined as

    Y = f(X) = {f(X(i,j)) | ∀ X(i,j) ∈ X}
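    The GHE definitions above translate directly into code. The following numpy sketch builds the PDF, the CDF, and the transform f as a lookup table (an illustrative implementation of standard GHE, not the authors' code):

```python
import numpy as np

def global_hist_eq(img, levels=256):
    """Global histogram equalization: map each gray level through the
    image's cumulative distribution function (CDF)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    pdf = hist / img.size                 # p(X_k) = n_k / n
    cdf = np.cumsum(pdf)                  # c(X_k)
    # f(x) = X_0 + (X_{L-1} - X_0) * c(x), here with X_0 = 0, X_{L-1} = 255
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]
```

Applying `global_hist_eq` to a low-contrast gray image spreads its intensity values across the full dynamic range.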

    Bi-histogram equalization (BHE) is used to solve the challenges in brightness preservation. BHE divides the histogram of an input image into two portions depending on the input mean before separately equalizing them. Several BHE approaches have been suggested to solve the aforementioned challenges. Essentially, BHE involves dividing the input histogram into two portions, which are independently equalized. The key distinction among the techniques in this family is the criterion utilized to choose the separation threshold, represented by X_T. The techniques described above increase the image contrast but often with insufficient details or undesirable effects. The next section describes the histogram equalization and BHE methods as implemented and then suggests modifications to remove such undesirable effects for images whose mean intensity lies toward the lower side.

    Clearly, X_T ∈ {X_0, X_1, ..., X_{L-1}}. Depending on the threshold, the input image X may be separated into two sub-images X_L and X_U:

    X_L = {X(i,j) | X(i,j) ≤ X_T},  X_U = {X(i,j) | X(i,j) > X_T}

    Then, the respective PDFs of the sub-images X_L and X_U are described as

    p_L(X_k) = n_k^L / n_L,  k = 0, 1, ..., T

    and

    p_U(X_k) = n_k^U / n_U,  k = T+1, T+2, ..., L-1

    where n_k^L and n_k^U denote the numbers of pixels with gray level X_k in X_L and X_U, and n_L and n_U are the overall sample numbers in X_L and X_U, respectively. The respective cumulative density functions for X_L and X_U are therefore

    c_L(x) = Σ_{j=0..k} p_L(X_j)  and  c_U(x) = Σ_{j=T+1..k} p_U(X_j)

    Similar to the GHE case, in which the cumulative density function is utilized as a transform function, we describe the following transform functions by using the cumulative density functions of the two sub-images:

    f_L(x) = X_0 + (X_T - X_0) c_L(x),  f_U(x) = X_{T+1} + (X_{L-1} - X_{T+1}) c_U(x)

    The result of the proposed adaptive histogram equalization is shown in Fig. 3.
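    A minimal sketch of BHE under these definitions: the histogram is split at a mean-based threshold X_T, and each sub-histogram is equalized into its own output range, so lower-half pixels stay at or below X_T and upper-half pixels stay above it. The threshold choice and rounding details here are our assumptions:

```python
import numpy as np

def bi_hist_eq(img, levels=256):
    """Bi-histogram equalization sketch: split the histogram at the image
    mean X_T and equalize the two sub-histograms independently, mapping
    the lower part onto [0, X_T] and the upper part onto [X_T+1, L-1]."""
    xt = int(img.mean())
    out = np.empty_like(img)
    lower = img <= xt
    for mask, lo, hi in ((lower, 0, xt), (~lower, xt + 1, levels - 1)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals.ravel(), minlength=levels)[lo:hi + 1]
        cdf = np.cumsum(hist) / vals.size
        lut = np.round(lo + (hi - lo) * cdf).astype(np.uint8)
        out[mask] = lut[vals - lo]
    return out
```

Because each half is confined to its own output range, the mean brightness of the result stays close to that of the input, which is the motivation for BHE over plain GHE.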

    3.1.2 Text Orientation (Skew) Problem

    Text orientation can vary when written text is scanned. In this case, we face two problems. First, the writer may have problematic handwriting and be incapable of keeping his hand on the same line while writing; skewed text is then produced, as shown in image A in Fig. 4. Second, when we scan the document to convert it into an image, we may obtain a skewed image, as shown in image B of Fig. 4. Skew detection is an important activity in the preprocessing of text images because it directly affects the efficiency and reliability of the segmentation and feature extraction phases. The major methods for correcting skew are smearing, projection, Hough transform, grouping, graph-based, and correlation methods [34]. In projection, the profile is measured by summing up the intensities discovered at every scan line over all pixels. The resulting profile is smoothed, and the created valleys are recognized. The space between text lines is specified by these valleys. In smearing, sequential black pixels in the horizontal direction are marked; if the white space between them is shorter than a particular threshold, it is filled with black pixels. The bounding boxes of linked components in the marked image are proposed as lines of text. Grouping techniques establish lines of alignment by grouping units. These units can be blocks, linked components, or pixels; they are linked together to extract the lines of alignment. The Hough transform is also utilized in skew detection. A point in the Cartesian coordinate system is mapped to a sinusoidal curve according to the following equation:

    ρ = x cos θ + y sin θ

    Figure 3: Effective result of the proposed adaptive histogram equalization

    Figure 4: Types of image skew: image (A) text skew and image (B) whole-image skew

    The outcomes show that the proposed algorithm (utilizing our database) for estimating and correcting skewed text is effective and provides decent line separations depending on the projection. An additional problem is to keep the text features unchanged while addressing these challenges, or at least to restore them to their original state after solving the challenges [35]. Overlapping lines make this problem increasingly difficult to fix. In this stage, we propose a new framework for rotating text correctly [36]. The framework should help us find a segmentation method that can facilitate the extraction of text from backgrounds. The proposed model includes a trainable model for binarizing images and rotating skewed images.
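    One common way to realize projection-based skew estimation, sketched below under our own parameter choices (not the exact model proposed in this paper), is to score candidate angles by the sharpness of the horizontal projection profile: at the correct deskewing angle the ink concentrates into few rows, maximizing the profile's variance:

```python
import numpy as np

def estimate_skew(binary, angles=np.arange(-15, 15.5, 0.5)):
    """Projection-profile skew estimation: rotate the ink coordinates by
    candidate angles and keep the angle whose horizontal projection is
    sharpest (maximum variance), i.e., text rows are best aligned."""
    ys, xs = np.nonzero(binary)
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        t = np.deg2rad(a)
        # Row coordinate of each ink pixel after rotating by angle a.
        y_rot = np.round(xs * np.sin(t) + ys * np.cos(t)).astype(int)
        profile = np.bincount(y_rot - y_rot.min())
        score = profile.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```

The angle grid and its 0.5° step control the trade-off between precision and cost; this is exactly the limitation noted in Section 3 that motivates first narrowing the search range.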

    3.2 Initial Segmentation (Image Binarization)

    The aim of binarization is to isolate a piece of text from the rest of an image. We use a trainable model instead of the traditional threshold method. The proposed model can capture text with poor structures and different colors because it depends on structure edges and texture. This step includes two stages: training and testing. The training process begins with the selection of a definite number of images to be trained on. In our work, we use 100 images. These images include different texts with different challenges. From every training image, many samples, such as a minor window of square areas of a specific size (e.g., 5×5), are obtained from the text and the background. In this case, we take 100 windows from each image as two classes; 50 windows include text, and the remaining ones show the background, which includes white background, noise, or unwanted objects. This selection helps us to keep the true positive objects and ignore false positive objects, including noise, unwanted objects, and artifacts. For every window of either class, texture features are utilized instead of pixel intensities. A selection of histogram of oriented gradients (HOG) features and moments features is derived and defined herein. The labeled feature vectors gathered from the samples are subsequently utilized to train ANN classifiers. As presented in Fig. 5, an ANN represents a network of interrelated computational nodes (units). Every node performs two processing functions: summation and transformation. The summation function estimates a weighted sum of the inputs, each of which consists of an input value and its related weight. Weight w_ji connects input node j in the former layer of the network to output node i of the present layer. A transformation function of the node, called the activation function, applies a transformation, which is usually nonlinear, to the outcome of the summation function.

    The HOG and moments features are applied to each window, and the output is used as the input for the ANN with class labels. Upon completion of the training, every testing image is utilized as an input for the binarization algorithm. The algorithm scans the image pixel by pixel. For every pixel, a minor square area of an identical window size with the pixel as the center is built, the HOG and moments features of the area are extracted, and the area is classified by the trained neural network into two classes, namely, "text" and "nontext." As mentioned previously, we use the HOG and moments features (Fig. 5) to describe objects in terms of structure and texture. The HOG is a feature descriptor that facilitates object identification in digital images. The HOG is framed according to the gradient orientation quantification in localized image sections. The HOG feature extraction procedure includes taking a window called a cell around the pixels. A mask [-1, 0, 1] is utilized to calculate the image gradients. In our version of this extraction technique (Fig. 6) for orientation binning, we utilize the gradient directly for every image position in the matching orientations. The orientation of the cells is set to 0°-180° with 9 bins. The contrast normalization of the local histogram is employed to improve the invariance to illumination, shadowing, etc.
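    The window-sampling and classification scheme can be sketched as follows. For brevity, a nearest-centroid rule stands in for the trained ANN, and the descriptor is a simplified HOG-plus-moments vector; all names and sizes here are illustrative, not the paper's implementation:

```python
import numpy as np

def window_features(win):
    """Texture descriptor for a square window: a 9-bin gradient-orientation
    histogram (HOG-like, unsigned 0-180 degrees) plus two simple moments
    (mean and standard deviation)."""
    gy, gx = np.gradient(win.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    bins = np.minimum((ang / np.pi * 9).astype(int), 8)
    hog = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=9)
    return np.concatenate([hog, [win.mean(), win.std()]])

def train_window_classifier(text_wins, bg_wins):
    """Nearest-centroid stand-in for the ANN over labeled window feature
    vectors (class 1 = text, class 0 = background)."""
    c_text = np.mean([window_features(w) for w in text_wins], axis=0)
    c_bg = np.mean([window_features(w) for w in bg_wins], axis=0)
    def classify(win):
        f = window_features(win)
        return int(np.linalg.norm(f - c_text) < np.linalg.norm(f - c_bg))
    return classify
```

At test time, the returned `classify` function is applied to the window centered on each pixel, yielding the "text"/"nontext" decision that forms the binary image.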

    Figure 5: Framework of the binarization step

    Figure 6: Rotation angles

    The extraction algorithm for the HOG descriptor operates over a sequence of steps. Initially, the input image is divided into small interconnected regions (cells), and a histogram of gradient directions over the cell pixels is created for every cell.

    • Step 1: The gradient orientations within each cell are quantized into angular bins.

    • Step 2: Each pixel of the cell contributes its gradient, weighted by magnitude, to its corresponding angular bin.

    • Step 3: Adjacent cells are grouped into spatial regions called blocks, which form the basis for normalizing and aggregating the histograms.

    • Step 4: The histogram of each block is the concatenation of its normalized cell histograms, and the descriptor is the set of all block histograms.
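    Steps 1 and 2 can be sketched for a single cell as follows. This is a simplified illustration assuming the [-1, 0, 1] gradient mask, 9 unsigned orientation bins over 0°-180°, and a plain L2 normalization in place of the full block-level scheme of Steps 3 and 4.

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Gradient-orientation histogram for one cell: [-1, 0, 1]
    gradients, unsigned 0-180 degree orientations in 9 bins, each
    pixel voting with its gradient magnitude (Steps 1-2)."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]    # horizontal [-1,0,1] mask
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]    # vertical   [-1,0,1] mask
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                # magnitude-weighted vote
    return hist / (np.linalg.norm(hist) + 1e-6)     # simple normalization

cell = np.tile(np.arange(8, dtype=float), (8, 1))   # purely vertical edges
h = cell_hog(cell)                                  # energy lands in bin 0
```

For this synthetic cell, whose intensity varies only horizontally, all gradient energy falls into the 0° bin, as expected.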

    Moments feature extraction is used to describe texture. The following four features are considered to identify each window:

    • Homogeneity

    • Contrast

    • Entropy

    • Correlation
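    These four statistics are commonly computed from a gray-level co-occurrence matrix (GLCM), consistent with the co-occurrence matrix mentioned earlier. The sketch below assumes a horizontal-neighbor offset and 8 gray levels, since the paper does not state its exact parameters.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Homogeneity, contrast, entropy, and correlation from a GLCM
    with a horizontal-neighbor offset (quantization and offset are
    assumptions, not the paper's stated settings)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                         # count horizontal pairs
    p = glcm / glcm.sum()                       # joint probabilities
    i, j = np.indices(p.shape)
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    contrast = (p * (i - j) ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    var_i = (((i - mu_i) ** 2) * p).sum()
    var_j = (((j - mu_j) ** 2) * p).sum()
    corr = (((i - mu_i) * (j - mu_j) * p).sum()) / (np.sqrt(var_i * var_j) + 1e-9)
    return homogeneity, contrast, entropy, corr

flat = np.full((5, 5), 7.0)                     # perfectly uniform window
hom, con, ent, cor = glcm_features(flat)        # maximal homogeneity, zero contrast
```

A uniform window yields homogeneity 1 and zero contrast and entropy, whereas a textured text window would score very differently, which is what makes these features discriminative.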

    One technique used to strengthen the trainable model is the fusion of the two feature sets. Each feature set is used as the input to its own ANN model, yielding two trained models. Majority voting between the two ANNs is then used to arrive at the final decision.
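    The voting step can be sketched as follows. Because two voters can tie, the sketch breaks ties on the averaged score; this tie-breaking rule is an assumption, as the paper does not spell one out.

```python
def fused_decision(p_hog, p_moments, threshold=0.5):
    """Combine the text-probabilities of the HOG-trained and
    moments-trained models by voting. A tie between the two voters
    is broken on the mean score (assumed rule)."""
    votes = int(p_hog > threshold) + int(p_moments > threshold)
    if votes != 1:
        return votes == 2                         # unanimous decision
    return (p_hog + p_moments) / 2 > threshold    # tie-break on mean score

both_agree = fused_decision(0.9, 0.8)   # unanimous "text"
tie = fused_decision(0.9, 0.2)          # split vote, mean score 0.55
```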

    Skew detection is an important stage that can facilitate segmentation and the extraction of good features or signs for WI. To avoid known limitations, we propose a framework based on the binary image produced by the initial segmentation stage. Specifically, we use image projections and the radon transform. Our results indicate that this method requires a range of candidate angles in order to detect the correct angle. Therefore, we propose a model that estimates two angles (maximum and minimum) from the histogram of the binary image. In the first stage, we take the binary image and count, for each line of the image, the pixels that belong to the text; the histogram over lines is then generated for each image. At this stage, the binary image contains 0s and 1s and is produced by the trainable model described in the previous stage. To detect the angle of rotation, we propose a model that helps rotate the text to the correct angle. In this model, we use the ANN. The training stage involves taking a number of images to train the model. These images are selected randomly with different angles to generate four classes (0-90, 90-180, 180-270, and 270-360), as shown in Fig. 6.

    Fig. 7 shows the classes of skewed images. For each binary image, we extract the histogram representing the number of white pixels. Suppose that there is a binary image I of size M × N, where M denotes the number of rows and N the number of columns. The histogram then has M bins: in each row, we count the pixels belonging to the text. As shown in Fig. 8, we take a sub-image and calculate its histogram.
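    The per-row count described above is a horizontal projection profile, which can be computed in one line:

```python
import numpy as np

def row_histogram(binary):
    """Horizontal projection profile: for each image row, the count of
    foreground (text) pixels. `binary` holds 0s and 1s, one bin per row."""
    return binary.sum(axis=1)

page = np.zeros((8, 10), dtype=int)
page[1:3, :] = 1                 # a "text line" spanning rows 1-2
hist = row_histogram(page)       # peaks at the text rows, zeros between lines
```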

    Figure 7: Example of the histogram of a sub-image

    We apply the same process to three images with different rotations to show the differences between the histograms of skewed images. As shown in Fig. 8, the histogram exhibits a peak for each line of text in the first image. The second image, by contrast, is skewed, and the third image shows yet another histogram distribution. We therefore decide to use this histogram to detect the skew angle, as it enables a new method that rotates images to the correct angles. The histogram is calculated for all the training images, and the result is fed to the ANN described previously. In addition, we use the labels of the four classes together with the features to train the ANN model to predict labels for new testing images.

    Fig. 9 shows the whole model for identifying the angles of the skewed images. The testing stage follows the same process: the testing image is used as the input for feature extraction, and the output features are fed to the trained model. The main objective is to identify the range of angles; the radon transform is then used to correct the rotation.
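    The angle refinement inside the predicted range can be sketched as below. For self-containment, the sketch uses a nearest-neighbor rotation and a projection-profile variance criterion in place of the paper's radon transform; both substitutions are assumptions, not the paper's exact procedure.

```python
import numpy as np

def nn_rotate(img, deg):
    """Nearest-neighbor rotation about the image center (pure NumPy,
    standing in for a library rotation routine)."""
    t = np.deg2rad(deg)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(img.shape)
    # Inverse mapping: each output pixel samples the input image.
    sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    sx, sy = np.rint(sx).astype(int), np.rint(sy).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[ys[ok], xs[ok]] = img[sy[ok], sx[ok]]
    return out

def refine_skew(binary, lo, hi, step=1.0):
    """Search the angle range predicted by the trained classifier and
    keep the rotation whose row-projection profile has maximal variance
    (sharp line peaks mean the text is horizontal)."""
    angles = np.arange(lo, hi + step, step)
    scores = [nn_rotate(binary, a).sum(axis=1).var() for a in angles]
    return float(angles[int(np.argmax(scores))])

page = np.zeros((41, 41))
page[18:21, 5:36] = 1.0                   # one horizontal "text line"
skewed = nn_rotate(page, 6)               # introduce a known 6-degree skew
estimate = refine_skew(skewed, -10, 10)   # should land close to -6
```

Maximizing the projection-profile variance is a classic skew criterion; the predicted class (0-90, 90-180, etc.) simply narrows `lo` and `hi` before the search.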

    3.3 Post Segmentation Stage

    We process a binary image to correct the angle of the skewed image. We calculate the histogram of the binary image by the same process as before. As shown in Fig. 10, the histogram represents each line in the image; we label the histogram to show that the image includes six lines and one unwanted object. Therefore, we should use a suitable threshold that distinguishes text lines from unwanted objects by analyzing each line in the image.

    Figure 8: Histograms of different skewed images

    Figure 9: Model for identifying the skew angle

    Figure 10: Histogram of the binary image for segmentation

    4 Results and Discussion

    A wide range of experiments is carried out to evaluate the efficiency of the proposed segmentation method, that is, the HOG and deep learning method for WI. We compare its performance with that of other methods in related works. The datasets utilized in this study are described in terms of the experiments used and the application of data augmentation to increase the number of samples. The performance of the proposed segmentation framework is assessed. Finally, broad tests are performed to discover the optimal trainable model for WI. The model is then compared with existing methods in terms of rank 1 identification rate and identification time.

    4.1 Experiment Setup

    Extensive experiments are conducted to evaluate the WI of Arabic handwriting using the hybrid trainable model. Specifically, 2D maximum embedding difference is utilized in feature extraction. The experiments are performed in MATLAB on a 64-bit Windows 10 Professional machine equipped with an Intel Pentium 2.20 GHz processor and 8 GB RAM. A total of 10 distinct tests is used to evaluate WI of Arabic handwriting on different datasets.

    4.2 Data Preparation

    The experiments use the KHATT (KFUPM Handwritten Arabic TexT) database, an unrestricted handwritten Arabic text database written by 1,000 individual authors. This database was developed by Professor Fink from Germany, Dr. Märgner, and a research group headed by Professor Sabri Mahmoud from Saudi Arabia. The database comprises 2,000 images with identical texts and 2,000 images of different texts, with the text line images extracted. The images are accompanied by a manually verified ground truth and a Latin ground-truth representation. The same database may be used for other handwriting-recognition-related studies, including those on writer detection and text recognition. For further details about the selected database, readers may refer to [37,38]. Version 1.0 of the KHATT platform is open to academics for academic purposes.

    Description of the database: (1) forms written by a thousand different authors; (2) scanned at various resolutions (200, 300, and 600 DPI); (3) authors from various countries, age groups, expertise levels, and literacy rates, with accepted writings exhibiting clear writing styles; (4) 2,000 unique paragraph images (text sources on different subjects, such as art, education, fitness, nature, and technology) and their segmented line images; (5) 2,000 paragraphs, each of which includes all Arabic letters and shapes, and their line images, including related materials, plus free paragraphs written by the authors on topics of their choice; (6) manually checked ground truths given for paragraph and line images; (7) the database divided into three disjoint sets for training (70%), testing (15%), and validation (15%); and (8) support for research in areas such as WI, line segmentation, noise removal, and binarization, in addition to handwritten text recognition. Fig. 11 shows four samples from one writer, KHATT_Wr0001_Para1, where the first part of the label refers to the name of the dataset (KHATT), the second part to the number of the writer, and the third part to the paragraph number. In Fig. 11, we introduce four images that include texts from the same writer: the first and second images contain similar text paragraphs, and the remaining images show different texts. Fig. 11 also shows paragraphs from a second set of writers; the texts in the first and second images are the same, but they are written by different writers. As explained previously, the database includes 4,000 samples from 1,000 writers, each of whom produced 4 samples. In this study, forms are gathered from 46 different sources covering 11 different topics. Tab. 1 presents the sources’ topics, the number of gathered passages per topic, and the number of pages from which the passages are extracted. The forms were filled out by individuals from 18 nations; the distribution of the number of forms by nation is presented in Tab. 2. Individuals from Syria, Bahrain, Algeria, Sudan, Australia, Libya, Canada, Lebanon, and Oman also filled out the forms.

    On the basis of these data, we build and test our proposed model. As mentioned in the previous section, the deep learning method is used to identify writers, and its features are fused with HOG features. Deep learning is powerful, but it usually needs to be trained on massive amounts of data to perform well [39-43]. This requirement can be considered a major limitation: deep learning models trained on small datasets show low versatility and poor generalization to validation and test sets, and hence suffer from overfitting. Several methods have been proposed to reduce the overfitting problem. Data augmentation, which increases the amount and diversity of data by "augmenting" them, is an effective strategy for reducing overfitting and improving the diversity of datasets and the generalization performance. In image classification, various augmentations have been applied; examples include flipping the image horizontally or vertically and translating the image by a few pixels. Data augmentation is the simplest and most common such method for image classification [44-46]. It artificially enlarges the training dataset by using different techniques [47,48]. In this section, we propose an augmentation method that helps enlarge the training data. Fig. 12 shows the proposed augmenter.

    Figure 11: Samples of text images. The four images on the left are from the same writer; the first and second show similar text paragraphs, and the remaining images show different texts. The samples on the right show paragraphs from a second set of writers

    As shown in Fig. 12, we use two methods for generating more samples from the training samples. The first method generates images by taking the text lines of existing images and reconstructing new images: we randomly select lines and obtain new images containing different orders of text lines. The second is rotation augmentation, performed by rotating the image to the right or left about an axis by between 1° and 359°. The safety of rotation augmentation is heavily determined by the rotation degree parameter. Slight rotations, such as those between 1° and 20° or between -1° and -20°, can be useful in digit recognition tasks such as MNIST. However, as the rotation degree increases, the label of the data is no longer preserved after the transformation.
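    The line-reordering augmentation can be sketched as follows. The sketch assumes equal-height line bands for simplicity, whereas the actual method would reuse the line boundaries found by the segmentation stage.

```python
import numpy as np

def shuffle_lines(page, n_lines, rng):
    """First augmentation method: split the page image into horizontal
    line bands and reassemble them in a random order, synthesizing a
    new training sample from the same writer's lines."""
    bands = np.array_split(page, n_lines, axis=0)
    order = rng.permutation(n_lines)
    return np.vstack([bands[i] for i in order]), order

rng = np.random.default_rng(0)
# A toy "page" of 4 two-row line bands, labeled 0..3 for visibility.
page = np.repeat(np.arange(4), 2)[:, None] * np.ones((1, 6))
aug, order = shuffle_lines(page, 4, rng)
```

Because only the order of whole lines changes, the writer identity label of the augmented page is preserved, unlike large rotations.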

    Table 1: Source data’s topics, paragraphs, and source pages

    Table 2: Writers’ countries of origin

    Figure 12: Proposed augmenter

    4.3 Evaluation of Preprocessing Stage

    Noise in image processing refers to pixels whose intensities diverge from those of their neighborhood. Noise is ordinarily most visible in smooth regions of an image, but it degrades the whole picture, including critical features such as edges. In text images, noise comprises all pixels scattered around the content that do not belong to any genuine text component. It can result from scanning text written on different types of paper with different types of ink. The resolution level of the scanner can also create undesired noise in and around the content; the scanner may thus reduce image quality and produce dark images. In addition, the quality of images of papers written at different times is prone to be influenced by the scanner used. In our text image segmentation, we try to resolve text overlap, and other preprocessing stages can introduce artifacts with the same effect as text noise because they do not belong to the genuine text components. Thus, the proposed denoising processes must be applied after the preprocessing step. We propose an adaptive histogram equalization method that enhances images and highlights texts to ensure clarity. Fig. 13 shows the effect of the proposed adaptive histogram equalization: this method clearly facilitates the recovery of text from poor-quality images.
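    A minimal tile-based sketch of adaptive histogram equalization is shown below. It equalizes each region against its own histogram and omits the inter-tile interpolation and contrast clipping that a full implementation (e.g., CLAHE) would add, so it is an illustration of the principle rather than the paper's exact method.

```python
import numpy as np

def equalize(tile, levels=256):
    """Classic histogram equalization of one tile via its CDF."""
    hist = np.bincount(tile.ravel(), minlength=levels)
    cdf = hist.cumsum() / tile.size
    return ((levels - 1) * cdf[tile]).astype(np.uint8)

def adaptive_hist_eq(img, tiles=2):
    """Tile-based adaptive equalization: each region is stretched
    against its own local histogram, so faint text in dark or washed-out
    regions regains contrast."""
    out = np.empty_like(img)
    for rows in np.array_split(np.arange(img.shape[0]), tiles):
        for cols in np.array_split(np.arange(img.shape[1]), tiles):
            block = img[np.ix_(rows, cols)]
            out[np.ix_(rows, cols)] = equalize(block)
    return out

# A low-contrast strip (values 100..130) gets stretched toward 0..255.
low = np.linspace(100, 130, 64).astype(np.uint8).reshape(8, 8)
enh = adaptive_hist_eq(low)
```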

    Figure 13: Stage 1 of preprocessing (adaptive histogram equalization)

    Skew detection is an important preprocessing task used as the second stage in correcting the angle of orientation. As mentioned previously, we propose a new model for detecting image skewness. Specifically, we binarize an image by using a trainable model to segment the image and then extract the text from the rest of the image. The proposed model uses four features: homogeneity, contrast, entropy, and correlation. Fig. 14 shows the effect of each feature on sample KHATT_Wr0001_Para1_1, which is selected randomly from the dataset.

    On the basis of our investigation, we fuse the four features together to produce a powerful model that helps segment texts from backgrounds. Figs. 15 and 16 show the results of the proposed trainable binarization model for three samples, namely, KHATT_Wr0001_Para1_1, KHATT_Wr0013_Para1_1, and KHATT_Wr0131_Para1_1.

    Skew detection is an important stage that helps realize suitable segmentation and extract good features or signs for WI. We use machine learning to identify the angle of rotation, on the basis of which we can rotate an image. To detect angles, we propose a model that helps rotate text to the correct angle. In this model, we use an ANN. The training stage involves taking a number of images to train the model. These images are selected randomly with different angles to generate four classes (0-90, 90-180, 180-270, and 270-360). Tab. 3 shows the performance of the proposed model.

    Figure 14: Result of the proposed trainable binarization method. (A) Original image, (B) binary image based on homogeneity, (C) binary image based on contrast, (D) binary image based on entropy, and (E) binary image based on correlation

    On the basis of the skew detection model, we correct the skew, as shown in Fig. 17. The proposed model estimates the angle and reconstructs the image. At this stage, the image is ready for segmentation.

    Figure 15: Result of the proposed fusion model for sample KHATT_Wr0001_Para1_1. (A) Original image, and (B) binary image produced by the proposed fusion model

    Figure 16: Result of the proposed fusion model for sample KHATT_Wr0013_Para1_1. (A) Original image, and (B) binary image produced by the proposed fusion model

    Table 3: Accuracy of the skew detection model

    Figure 17: Correction of rotation of sample KHATT_Wr0001_Para1_1. (A) Original image, (B) binary image produced by the proposed fusion model, and (C) rotation-corrected image

    4.4 Segmentation Results

    After preprocessing the binarized scanned text samples and correcting the text skew, we validate our approach to Arabic handwriting segmentation. This stage comprises a number of processes, beginning with segmenting the text page into lines, fragmenting every line into connected components, and then fragmenting the subwords. Thus, at every step we may need to improve the result by recovering features that would otherwise be lost. The line segmentation stage aims to extract text lines from paragraph(s) or pages. Pages and text lines carry global features or attributes; such features are of no significance to our use because we focus on small texts or lines, involving subwords and their diacritics. In any case, a few line attributes help preserve subword and diacritic features. In particular, the slant feature may become degraded as an outcome of deskewing a page for line division; hence, we recommend re-rotating the lines of the sample content on the basis of the initial page angle. Segmenting pages into lines and then into words and/or characters is commonly performed by OCR and WI analysts, and we follow a similar approach in developing the suggested algorithm for WI. After deskewing the text as discussed in the earlier sections, we identify the minima of the horizontal projection histogram of the deskewed page text as line segmentation points: we check the histogram of each image row to recognize the valley points. A valley point is a minimum of the horizontal projection of the text image that is used as a segmentation point, which facilitates automation of the method. These valleys indicate the space between the lines of text and are regarded as line segmentation points. Fig. 18 shows that the proposed model successfully segments an image into lines.
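    The valley-point rule can be sketched as follows: rows whose projection falls below a small threshold are treated as inter-line valleys, and each maximal run of text rows becomes one line. The threshold parameter is an assumption standing in for the paper's unwanted-object filtering.

```python
import numpy as np

def segment_lines(binary, min_text=1):
    """Valley-point line segmentation: rows of the horizontal projection
    profile with fewer than `min_text` foreground pixels are inter-line
    valleys; each maximal run of text rows is returned as one line span."""
    profile = binary.sum(axis=1)
    is_text = profile >= min_text
    lines, start = [], None
    for r, t in enumerate(is_text):
        if t and start is None:
            start = r                       # a text run begins
        elif not t and start is not None:
            lines.append((start, r))        # rows [start, r) form one line
            start = None
    if start is not None:                   # text runs to the last row
        lines.append((start, len(is_text)))
    return lines

page = np.zeros((10, 12), dtype=int)
page[1:3, :] = 1                            # first text line
page[6:9, 2:10] = 1                         # second text line
spans = segment_lines(page)                 # two (start, end) row spans
```

Raising `min_text` suppresses short runs caused by noise or unwanted objects, which is the thresholding role described in the post-segmentation stage.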

    Figure 18: Image segmented into lines on the basis of the histogram of the image lines

    5 Conclusion

    This study proposes a new effective segmentation model by modifying the ANN model and making it suitable for binarization based on blocks. This model is combined with the new effective rotation model to achieve accurate segmentation through the analysis of the histograms of binary images. This work presents all the experimental results in accordance with the proposed objectives. A series of experiments is conducted using a database comprising 2,000 images with identical texts and 2,000 images with different texts, with their text line images extracted. The images are accompanied by a manually verified ground truth and a Latin ground-truth representation. The same database may be used for other handwriting-recognition-related studies, such as those on writer detection and text recognition. The proposed technique is employed to evaluate WI of Arabic handwriting by using the hybrid trainable model. The experimental results are analyzed, interpreted, compared, validated, and discussed in depth; diverse investigations of the outcomes are also performed, and the weaknesses and strengths of the proposed system are highlighted accordingly. One limitation of this study is that the dataset includes few samples for each writer. This drawback prompted us to use an augmentation method to increase the number of samples per writer; however, such a method does not provide enough flexibility to extract different patterns from texts. Using more samples might help us obtain more paragraphs that can support the method for WI using different words. In future work, we will enhance images with other effective filters to remove noise and then highlight the ROI. Finally, the proposed method is found to perform remarkably better than those recently proposed in this area.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
