
    Text Extraction with Optimal Bi-LSTM

Computers, Materials & Continua, 2023, Issue 9

Bahera H. Nayef, Siti Norul Huda Sheikh Abdullah, Rossilawati Sulaiman and Ashwaq Mukred Saeed

1 Computer Techniques Engineering Department, Ibn Khaldun University College, Baghdad, 10011, Iraq

2 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Selangor, 43600, Malaysia

3 School of Electrical Engineering and Artificial Intelligence, Xiamen University Malaysia, Sepang, 43900, Malaysia

ABSTRACT Text extraction from images using traditional techniques of image collection and pattern recognition with machine learning is time-consuming due to the number of features extracted from the images. Deep neural networks offer effective solutions for extracting text features from images, requiring few techniques and allowing large image datasets to be trained with significant results. This study proposes using dual Maxpooling and concatenated Convolutional Neural Network (CNN) layers with the activation functions Relu and the Optimized Leaky Relu (OLRelu). The proposed method works by dividing the word image into slices that contain characters, then passing them to deep learning layers to extract feature maps and reform the predicted words. Bidirectional Long Short-Term Memory (BiLSTM) layers extract more compelling features and link the time sequence in the forward and backward directions during the training phase. The Connectionist Temporal Classification (CTC) function calculates the training and validation loss rates, in addition to decoding the extracted features to reform the characters and link them according to their time sequence. The proposed model's performance is evaluated using training and validation loss errors on the Mjsynth and Integrated Argument Mining Tasks (IAM) datasets. The result on IAM was an average loss error of 2.09% with the proposed dual Maxpooling and OLRelu. On the Mjsynth dataset, the best validation loss rate shrank to 2.2% by applying concatenated CNN layers and Relu.

KEYWORDS Deep neural network; text features; dual max-pooling; concatenating convolution neural networks; bidirectional long short-term memory; text connector characteristics

    1 Introduction

Optical Character Recognition (OCR) can detect and categorize visual patterns into characters from a digital text image [1]. The pattern recognition process involves applying image segmentation, feature extraction, and classification. OCR systems are a combination of pattern recognition and artificial intelligence techniques. OCR technology assists users in converting various types of documents, such as scanned papers, PDF files, and images taken by a digital camera, into editable data [2]. Since OCR technology became PC-based, users can perform OCR on an image to recognize the text using their computers, mobile phones, and tablets to select, copy, search, and edit it [3]. There are two approaches to optical recognition, offline and online. The offline recognition process is used for preprinted characters.

Online recognition, in contrast, operates while the characters are being written. The quality of the recognition process relies on the quality of the input text (single character or script), whether printed or handwritten [2]. The performance of an offline recognition system depends on static data such as bitmap images, making it more complicated than online recognition due to the need for image segmentation techniques [4]. According to [5], online systems are easy to develop, have good accuracy, and are compatible with tablets and PDAs.

Based on the input data type, OCR can be classified into two types. The first is machine-printed recognition, in which the characters have uniform dimensions [4]. The second is handwriting recognition, which is more complicated due to the variety of handwriting styles and the different pen movements a single user may make for the same character [6].

Since the early nineties, a new era of recognition and classification research has been driven by deep learning networks (DNN) [7]. DNN performance depends on the initial weight initialization; with unsupervised pre-training, it gives outstanding results in acceptable running time [8]. DNNs have proved superior to the classical state-of-the-art machine learning techniques in speech recognition, natural language processing, and image classification [9–11]. The unsupervised pre-training approach is not the only method to train DNNs efficiently.

The above-discussed studies revealed that the performance of deep learning applied to text extraction from images is outstanding. Our contribution is to improve the design of the deep learning model proposed by [12] by concatenating the first two CNN layers, extracting the best features from each layer individually and then merging them before sending them to the next layer. We also propose a dual max-pooling technique to downsample the extracted feature size and speed up the training phase while maintaining good model performance. In addition, using OLRelu with CNN enhances feature extraction to include both the positive and the negative features.

    The proposed methodology aims to answer the following questions:

1- Does concatenating CNN layers improve text feature extraction at the word level?

2- How can the size of the extracted features be downsampled to speed up the training process?

This study aims to recognize handwritten characters from word images more accurately and reconstruct the words with minimum error per character. The study also introduces a model that reduces the number of epochs and the number of model parameters required to reach the best training and validation error rates.

    2 Related Work

Lately, researchers' attention has been redirected toward the use of deep learning for digitizing handwritten documents in different languages. In the study [13], the researchers used Dropout to prevent overfitting and speed up running time, which improved DNN performance. Convolutional Neural Networks were used by [14], and Deep Belief and Boltzmann networks were also employed to overcome the overfitting issue [8,15]. The studies [16] and [17] discussed extracting handwritten characters of the Urdu and Jawi languages from raw images using CNN and Recurrent Neural Networks (RNN). Other researchers, such as [18,19], discussed the problem of cursive and multi-scale text detection in natural scene images using different approaches. The study by [18] discussed three neural network architectures, namely Visual Geometry Group (VGG16), Scalable Vector Graphics (SVG), and Residual Network (ResNet50), for feature extraction. The datasets used in their performance tests were the Incidental Text dataset proposed at the International Conference on Document Analysis and Recognition (ICDAR2015), MSRA-TD500, the Focused Text ICDAR2013 dataset, Reading Chinese Text in the Wild (RCTW-17), the Chinese scene text dataset (CASIA-10K), and Multi-lingual scene text (MLT-17). The latter study [19] used only two datasets, ICDAR2013 and ICDAR2015, to extract features from the Region of Interest (ROI) only. This shift occurred due to cluster computing, GPUs, and better-performing deep learning architectures, which embrace RNN, CNN, and Long Short-Term Memory networks (LSTM) [20]. An interesting study on multi-scale object detection was proposed by [21]. Its framework is divided into two parts: the first extracts features from five CNN layers; these multi-scale features are then fed to multi-scale proposal generators built by concatenating multiple Region Proposal Networks (RPN). The proposed methodology showed outstanding results for all datasets in the experiment. However, it suffers from high computation time and requires a processor with high specifications.

Although the output of the reviewed studies was outstanding, the error rates for recognizing characters are still high, especially when the text image contains noise such as shadows, cropped characters, and faded characters.

    3 Text Extraction Using Deep Learning

A typical OCR system consists of several components, as shown in Fig. 1. The first step is to digitize the analog document using an optical scanner. Segmentation techniques then locate and extract each symbol from the region of interest. The extracted symbols are pre-processed to remove noise before the feature extraction step. Each symbol is recognized by comparing its extracted features to the symbol classes learned during the learning stage. The last step is to reconstruct the words and numbers of the original text using context information.

The continuous development of deep learning shows an increasing advantage in the field of computer vision. Currently, the most popular methods are top-down approaches [22] based on CNN and RNN, the latter represented by LSTM. With deep learning methods, the accuracy of text detection has improved greatly.

    3.1 Long Short-Term Memory(LSTM)

The traditional recurrent layer is one-directional and can only use the previous context. Sequence labeling tasks require processing the sequence in two directions, so a bidirectional convolutional recurrent layer that uses two hidden layers is adopted. These two hidden layers enable the bidirectional layer to iterate forward and backward correspondingly [23]. LSTM networks combine different gates and memory cells to overcome the vanishing gradient problem. LSTM can learn essential information and discard unwanted information [12].
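In the standard formulation, the two hidden layers maintain a forward state and a backward state that are computed independently and combined at each time step:

$$\overrightarrow{h}_t = \mathcal{H}\left(x_t,\, \overrightarrow{h}_{t-1}\right), \qquad \overleftarrow{h}_t = \mathcal{H}\left(x_t,\, \overleftarrow{h}_{t+1}\right), \qquad y_t = \left[\overrightarrow{h}_t \,;\, \overleftarrow{h}_t\right]$$

where $\mathcal{H}$ denotes the LSTM cell update and $[\cdot\,;\,\cdot]$ denotes concatenation.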

    3.2 Connectionist Temporal Classification(CTC)

The output layer of the deep learning architecture for text extraction is the CTC. It calculates the recognition loss and decodes the output of the previous layers as in Eq. (1) [24].
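In its standard form, consistent with the definitions below, the per-sample CTC loss is the negative log-likelihood of the label sequence given the input:

$$\mathcal{L}(x, y) = -\log p\left(y \mid x\right)$$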

where:

x: training sample, and

y: the produced sequence of labels.

Figure 1: OCR stages [2]

    4 The Proposed Methodology

The proposed architecture (Fig. 2) comprises CNN block layers, two Bidirectional Long Short-Term Memory (BiLSTM) layers, and Connectionist Temporal Classification (CTC) [25].

It starts with reading the images, pairing them with their labels, and dividing the dataset into training and validation sets. Next, sequences of mini-batches are generated for both the training and validation sets. The training images of size (128, 32) are passed to the first layer, conv1, with kernel size (4, 4) to extract 32 feature maps. Relu or OLRelu from [26] is used to activate these features. The extracted text features are then passed to the second layer, conv2, with kernel size (3, 3) to extract 32 feature maps, again activated with Relu or OLRelu. The extracted text features from the first and second CNN layers are concatenated and pooled with the Maxpooling1 layer. The output is then passed to the conv3 and conv4 layers. A second Maxpooling is applied to reduce the size of the features. The reduced output is convolved with conv5 and conv6, with a (3×3) kernel size, to extract 64 text feature maps. A batch normalization layer follows each of conv5 and conv6.

    Figure 2:Feature extraction flowchart

The output of the conv6 layer is reduced using dual Maxpooling layers (3 & 4). Next, conv7 with a (2×2) kernel size is used to extract 64 text feature maps, which are then passed to the last Maxpooling layer, layer 5, with window (2×1). The next step is applying two BiLSTM layers to label the sequences. The output from the BiLSTM layers is passed to the CTC layer. CTC is used to manage the different alignments of the words; moreover, it serves as a loss function when the sequence length varies.
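As a rough illustration of the pipeline just described, a minimal TensorFlow/Keras sketch follows. It is not the authors' code: the padding, strides, and pooling windows are assumptions, since the text states only the kernel sizes, feature-map counts, and the (128, 32) input size.

```python
# A minimal sketch of the described feature extractor in TensorFlow/Keras.
# Padding, strides, and pooling windows not stated in the text are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(32, 128, 1))           # word image (height, width, channels)

# conv1 and conv2: 32 feature maps each, activated with Relu (or OLRelu)
c1 = layers.Conv2D(32, (4, 4), padding="same", activation="relu")(inputs)
c2 = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(c1)

# concatenate the features of the first two CNN layers, then Maxpooling1
x = layers.Concatenate(axis=-1)([c1, c2])
x = layers.MaxPooling2D((2, 2))(x)                  # Maxpooling1

x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)   # conv3
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)   # conv4
x = layers.MaxPooling2D((2, 2))(x)                  # Maxpooling2

# conv5 and conv6 with (3x3) kernels, each followed by batch normalization
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)   # conv5
x = layers.BatchNormalization()(x)
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)   # conv6
x = layers.BatchNormalization()(x)

# dual Maxpooling (layers 3 and 4) applied back to back
x = layers.MaxPooling2D((2, 1))(x)                  # Maxpooling3
x = layers.MaxPooling2D((2, 1))(x)                  # Maxpooling4

x = layers.Conv2D(64, (2, 2), padding="same", activation="relu")(x)   # conv7
x = layers.MaxPooling2D((2, 1))(x)                  # Maxpooling5
```

Under these assumptions the output is a (1 × 32 × 64) tensor, i.e., roughly the 31-step sequence of 64-dimensional features that the text describes, with the image width serving as the time axis for the recurrent layers.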

4.1 The Proposed Concatenated CNN Layers and Feature Extraction

The proposed CNN architecture for text feature extraction consists of seven CNN layers, five Maxpooling layers, and two batch normalization layers. The CNN output is used for the feature extraction step by concatenating the CNN layers defined in Eqs. (2) & (3).

where l1 and l2 represent the extracted text features from the conv1 and conv2 layers, (M, N) represents the image width and height, and (i, j) represents the position of the current feature map. The output of these two layers is concatenated as in Eq. (4).

The convolved features C(i, j) are activated with either Relu or OLRelu, as in Eqs. (5) & (6) respectively:
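For reference, the operations these equations describe have the following standard forms, where $a_{k-1}$ denotes the previous layer's output ($a_0$ being the input image) and $[\cdot\,;\,\cdot]$ denotes channel-wise concatenation. The exact OLRelu definition is given in [26]; the leaky-style form with negative-side slope $\alpha$ below is an assumption:

$$l_k(i,j) = \sum_{m}\sum_{n} w_k(m,n)\, a_{k-1}(i+m,\, j+n) + b_k, \qquad 0 \le i < M,\; 0 \le j < N,\; k \in \{1,2\}$$

$$C(i,j) = \left[\, l_1(i,j) \,;\; l_2(i,j) \,\right]$$

$$\mathrm{Relu}(z) = \max(0, z), \qquad \mathrm{OLRelu}(z) = \begin{cases} z, & z \ge 0 \\ \alpha z, & z < 0 \end{cases}$$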

    4.2 The Proposed Dual Max-Pooling

Another proposed approach applies dual Maxpooling layers for dual dimensionality reduction of the text features extracted from conv6. The output of Maxpooling3 (MaxP3) is the input of Maxpooling4 (MaxP4). The proposed dual Maxpooling is illustrated in Eqs. (7) and (8) [27].

where i and j represent the current indices, and f_BN and f_MaxP3 represent the output features of conv6 after batch normalization and the output features of MaxP3, respectively.
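Assuming a pooling window $W$ and stride $s$ (the text does not state these values), the two cascaded pooling steps take the standard form:

$$f_{MaxP3}(i,j) = \max_{(m,n)\,\in\, W} f_{BN}\left(i \cdot s + m,\; j \cdot s + n\right)$$

$$f_{MaxP4}(i,j) = \max_{(m,n)\,\in\, W} f_{MaxP3}\left(i \cdot s + m,\; j \cdot s + n\right)$$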

    4.3 The Proposed BiLSTM

The proposed architecture of the CNN layers used for extracting text feature maps is presented in Fig. 2. The sequence of feature maps is then passed to two cyclic layers of Bidirectional Long Short-Term Memory (BiLSTM). BiLSTM extracts more text feature information from the received feature maps [28]. BiLSTM can learn bidirectional long-term dependencies between time steps of sequence data. A single BiLSTM comprises dual LSTM layers called the causal and anti-causal counterparts. These two layers process the sequence in the same way, forward and backward in time, but in opposite time order, as shown in Fig. 3 [29].

Figure 3: Bi-LSTM structure for three consecutive time steps: the previous h(t−1), the current h(t), and the next h(t+1). The arrows represent forward (→) and backward (←) time steps [12]
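Continuing the earlier Keras sketch (the variable x is its final feature map), the recurrent stage might look as follows. The 50-unit layer size is taken from the conclusion; the reshape dimensions and the hypothetical num_classes are assumptions:

```python
# A minimal sketch of the recurrent stage in TensorFlow/Keras, continuing the
# feature-extractor sketch above; dimensions and num_classes are assumptions.
from tensorflow.keras import layers

num_classes = 79                     # hypothetical character-set size

seq = layers.Reshape((32, 64))(x)    # (time steps, features) from the CNN output
# two stacked BiLSTM layers: each wraps a causal and an anti-causal LSTM
seq = layers.Bidirectional(layers.LSTM(50, return_sequences=True))(seq)
seq = layers.Bidirectional(layers.LSTM(50, return_sequences=True))(seq)
# per-time-step distribution over the character set plus the CTC blank
logits = layers.Dense(num_classes + 1, activation="softmax")(seq)
```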

    4.4 Text Connector

The CTC layer calculates the loss rate during the training and validation phases, as in Eq. (9). It is also used to output the prediction results in the testing phase. The dimension of the extracted text features is not appropriate to be fed directly to the CTC layer: the extracted text features from the CNN layers form a 3D tensor (height, width, and number of feature maps), while the input to the CTC layer should be a 2D tensor. So, an additional layer, called the transposition layer, is required. Moreover, the CTC layer has another function: decoding the output of the previous layers [28].
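In its standard form, consistent with the definitions below, the CTC training loss sums the negative log-likelihood over the training set:

$$\mathcal{L}(S) = -\sum_{(x,\,y)\,\in\, S} \log P\left(y \mid x\right)$$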

where:

S: the training dataset of handwritten text images (Mjsynth or IAM),

P(y|x): the probability of the ground truth y given the training sample of handwritten text images x,

x: a training sample of handwritten text images (Mjsynth or IAM), and

y: the sequence produced by the recurrent layers from x.
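In TensorFlow/Keras terms, this loss can be wired up with the built-in batched CTC cost. A minimal sketch, assuming the softmax outputs logits from the recurrent-stage sketch above; the tensor layouts follow tf.keras.backend.ctc_batch_cost rather than the authors' implementation:

```python
# A minimal sketch of the CTC loss in TensorFlow/Keras.
import tensorflow as tf

def ctc_loss(y_true, y_pred, input_length, label_length):
    # y_true: (batch, max_label_len) integer-encoded ground-truth characters
    # y_pred: (batch, time_steps, num_classes + 1) softmax outputs incl. blank
    # input_length / label_length: per-sample lengths, shape (batch, 1)
    return tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
```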

The CTC works with three main concepts: encoding the text, the loss function, and decoding the text. When dividing the characters into 31 time steps, as shown in Fig. 4, some characters take more than one time step. The character "J" takes seven time steps, "o" takes four, "y" takes five, "f" takes four, "u" takes four, and "l" takes two. So, the output from the network is "JJJJJJJooooyyyyyffffuuuull". CTC encodes it by adding a blank character, denoted "-", and the word is then encoded as "JJJJJJJ-oooo-yyyyyffff-uuuu-ll". The encoded text is then trained using the BiLSTM layers.

To train the BiLSTM, each image and its label are given a calculated loss. The result is a matrix of the probability (p) of each character at every time step, including the blank. The probabilities at each time step sum to 1, as shown in Fig. 5.

Figure 4: Dividing the word "joyful" image sample from the Mjsynth dataset into 31 time steps

The corresponding character probabilities are multiplied together to get the probability of a single path. For example, for the path "j--" the probability is (0.1 × 0.3 × 0.7) = 0.021, and for the path "jjjj" the probability is (0.1 × 0.2 × 0.2 × 0.5) = 0.002. To get the probability of the given ground truth, the probabilities of all possible paths are summed (0.021 + 0.002 = 0.023). The loss is calculated by taking the negative logarithm of this probability; it can then be backpropagated to train the network. The trained BiLSTM is then applied to unseen text images. The best path is chosen by taking the character with the maximum probability at every time step. For example, at t0 the maximum probability is for "-", and the same holds for t1 and t2, so the output text is "-". For t3, t4, and t5, the maximum probability is for the character "j", so the output text is "j". The CTC merges the duplicated characters and removes the blanks to get the final decoded text. According to the example in Fig. 4, the output is "Joyful".
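The arithmetic above is small enough to check directly. A toy Python sketch follows, with the per-step probabilities invented for illustration exactly as in the example:

```python
# A toy reproduction of the path probabilities and best-path decoding rule
# described above (plain Python; probabilities are illustrative only).
import numpy as np

BLANK = "-"

def collapse(path):
    """Merge repeated characters, then remove blanks (CTC decoding rule)."""
    out, prev = [], None
    for ch in path:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)

# path probability = product of per-step character probabilities
p_path1 = 0.1 * 0.3 * 0.7        # path "j--"  -> 0.021
p_path2 = 0.1 * 0.2 * 0.2 * 0.5  # path "jjjj" -> 0.002
p_truth = p_path1 + p_path2      # sum over all paths -> 0.023
loss = -np.log(p_truth)          # negative log-likelihood

# best-path decoding: argmax character per time step, then collapse
print(collapse("JJJJJJJ-oooo-yyyyyffff-uuuu-ll"))  # -> "Joyful"
```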

    5 The Experiments and Results

This section presents and discusses all the experimental results, analyzes them, and conducts comparisons.

5.1 Experiment 1: Extracting Features with Individual Max-Pooling

This experiment tests the model's performance with a set of CNN, Relu (or OLRelu), and Maxpooling blocks. The total number of parameters is 759,103, all of which are trainable. The results of ten runs are presented in Table 1 and Fig. 6. The validation loss averages for the Mjsynth data with OLRelu and Relu are 4.5 and 3.0664, with standard deviations of 0.23 and 0.288, respectively. With the IAM dataset, the validation loss averages are 3.167 and 2.5, with standard deviations of 0.181 and 0.569 for Relu and OLRelu, respectively. The performance of OLRelu is better than Relu here due to the cropped characters that result from segmenting an image of a line of words into separate single-word images, which increases the ambiguity of the characters.

Table 1: The training and validation loss rates for individual Maxpooling with Relu and OLRelu over ten runs

Figure 6: The average loss rates for individual Maxpooling with Relu and OLRelu for word extraction from Mjsynth and IAM images

The performance of Relu showed a lower error rate than OLRelu on the Mjsynth dataset. Nevertheless, the validation loss rate with OLRelu is lower than with Relu on the IAM dataset. According to Fig. 7, the loss rates using Relu and OLRelu with the IAM dataset are lower than with the Mjsynth dataset. This is due to the structure of the Mjsynth data: preprocessing techniques such as adding noise and shadows and using different image resolutions were used to create it, which increases the negative feature maps. Relu works by eliminating all nodes with negative values. On the other hand, OLRelu considers both positive and negative feature maps, which here increases the probability of character misclassification.

Figure 7: The performance of Relu and OLRelu with (a) the IAM dataset and (b) the Mjsynth dataset, with individual Maxpooling and no CNN concatenation

For further illustration, this study chose the best run from both datasets and presents the results per epoch in Fig. 8. With the IAM dataset, there was a slight difference in the loss rates between Relu and OLRelu. On the contrary, the performance of Relu is better than OLRelu with the Mjsynth data: it shows lower loss rates between epochs 1–32 and then equals OLRelu for the rest of the epochs.

Figure 8: Samples from the predicted words using the proposed model with individual Maxpooling with (a) Relu and (b) OLRelu

The model's performance with OLRelu using the Mjsynth data showed slower convergence than with Relu: the loss rate reached ten at epoch 5 with Relu but only at epoch 14 with OLRelu. Nevertheless, at epoch 33, the loss rates of Relu and OLRelu matched.

Fig. 8a shows samples from the predicted words using Relu and individual Maxpooling for one run. From the predicted words, we can conclude the following. With the Mjsynth dataset, the word images are aligned horizontally, at an angle, or along a curve; when the dataset was created, many different text effects were used, such as adding a shadow, dark or light brightness, thin or thick fonts, and different font styles. We noticed that the model could correctly predict most word characters, as in "Copyleft" and "Tragicomedy", while it missed one or two characters in some words: for example, the predicted word "tostily" where the text in the image is "testily", and "Surerstition" where the image reads "Superstition", for the Mjsynth dataset. Similarly for the IAM dataset, some words and characters are correctly predicted, like "the", "to", and "altered", but some characters are wrongly predicted, such as "posstion" instead of "position", "aren" instead of "Grean", "olgen" instead of "edges", and more.

Fig. 8b shows samples from the predicted images for both datasets using the OLRelu activation function. With the Mjsynth dataset, some characters are mispredicted, such as "JAPIING" instead of "JAPING" and "bons" instead of "hons". At the same time, some words are correctly predicted, such as "Granulating" and "Fquivocal". The IAM data also show some correctly extracted characters and some wrong ones, as shown in Fig. 8b. The correctly extracted words are "part", "that", "No", "missile", and "painful". Some extractions yield wrong characters, such as "omelling" instead of "smelling", "disfference" instead of "difference", and "parience" instead of "patience".

5.2 Experiment 2: Extracting Features with Dual Max-Pooling

This experiment is conducted to study the effect of applying dual Maxpooling layers on eliminating the low-valued features with the Relu and OLRelu activation functions. The total number of parameters is 361,696, all of which are trainable.

The validation loss rates for both activation functions over ten runs are presented in Table 2 and Fig. 9. The resulting validation loss rate for the Mjsynth dataset with Relu is worse than with individual Maxpooling layers due to the construction of the dataset, while the IAM dataset obtains better results than with individual Maxpooling layers. On the other hand, OLRelu with dual Maxpooling performed better than with individual pooling layers for both datasets: the validation loss rate decreased for both datasets with OLRelu.

This can be explained as follows: the dual reduction of the dimensionality of the IAM samples led to the extraction of the best text feature maps with high values. These features increased the rate of distinguishing character classes and decreased the loss rates with Relu and OLRelu. With the Mjsynth data, by contrast, the dual dimensionality reduction caused the loss of important feature maps, which increased the loss rates with both Relu and OLRelu.

For further explanation, Fig. 10 presents the performance of both activation functions with the two datasets for the best run. The performance of Relu with the Mjsynth dataset and dual Maxpooling showed a higher validation loss rate than with individual max-pooling. However, dual Maxpooling with Relu and OLRelu improved the convergence of the gradient and the speed at which the error rates fell. Also, with the IAM dataset, the loss rate converged smoothly with Relu and OLRelu.

Table 2: The training and validation loss rates with Relu and OLRelu for ten runs with dual Maxpooling

Figure 9: The validation loss rates with Relu and OLRelu using dual Maxpooling with the Mjsynth and IAM datasets

Figure 10: Relu and OLRelu performance with dual Maxpooling over 40 epochs for (a) the Mjsynth data and (b) the IAM data

Fig. 11a presents a set of the predicted words for both datasets using Relu. As can be seen from the images, some words are horizontal or curved, with clockwise and anticlockwise angles; some images are dark, and some are light with shadows. The model performance is enhanced by overcoming the overfitting problem that occurred with Relu and individual Maxpooling on the Mjsynth dataset. We still have missed characters in some words, such as "Obressionaly" instead of "Obsessionally". On the other hand, some characters are extracted correctly even from unclear or curved images, such as "BEHARPEN" and "commodiously".

Regarding the IAM dataset with Relu, the model performance is enhanced and the loss rates decreased noticeably. Some of the characters missed with individual Maxpooling and Relu are correctly extracted using dual Maxpooling, as in the words "wanted", "Here", "up", and "officer's". Yet we still have wrongly extracted characters, as in "bte" instead of "late", "aljers" instead of "edges", and "sceukcked" instead of "scratched".

Figure 11: Samples from the predicted words using the proposed model with dual Maxpooling with (a) Relu and (b) OLRelu

Fig. 11b shows some correctly and incorrectly extracted characters with the OLRelu activation function on the Mjsynth and IAM datasets. For the Mjsynth dataset, it shows better performance on most characters, but it fails to extract characters that have a similar form, like "i" and "l" in the word "Disabites", when they are stacked next to each other: the model considers them three similar characters and replaces them with one. It also shows more correctly extracted words, such as "Biro", "SEQUESTER", "KHUFU", and "GREAIPES". With the IAM dataset and OLRelu, words extracted with incorrect characters under Relu are correctly extracted with OLRelu, such as "painful" and "differences". But we still have mis-extracted characters, such as "omelling", which should be "smelling", and "parience" instead of "patience".

5.3 Experiment 3: Extracting Text Features with Concatenated CNN Layers

This experiment was conducted to evaluate the performance of the proposed model with Relu and OLRelu when concatenating the first two CNN layers. The total number of parameters is 422,839, all of which are trainable. The experiment aims to improve the value of the extracted text feature maps to enhance the model performance. The loss results of ten runs are presented in Table 3 and Fig. 12.

The results showed better performance for Relu with the Mjsynth data than with the IAM dataset. In contrast, OLRelu showed better performance with the IAM dataset than with the Mjsynth data. We believe the reason is related to how Relu handles the negative text feature maps, and also to how each dataset was collected and processed. The IAM dataset words were originally segmented from line-based text, so some words were cropped on more than one side. Concatenating CNN layers with the Mjsynth data enhanced the extraction of high-valued text feature maps. For the IAM data, on the other hand, concatenation increased the number of unimportant features with negative values, so more epochs are needed to improve the model performance.

Table 3: The model performance with concatenated CNN layers

Figure 12: Min, max, and average loss rates with concatenated CNN layers

The model's performance with both Relu and OLRelu per epoch on the Mjsynth dataset is shown in Fig. 13 below. As is clear from Fig. 13, OLRelu was slower than Relu in the early epochs: with Relu, the validation loss rate reached 5 at epoch 5, while with OLRelu it reached 5 at epoch 9. The model performance with Relu and OLRelu matched in the later epochs.

Figure 13: The performance of the model with concatenated CNN and dual Maxpooling layers per epoch on the Mjsynth dataset

The results of the model with Relu and OLRelu on the IAM dataset are presented in Fig. 14. The model with OLRelu showed better results than with Relu. Running for more than 40 epochs would be needed to reach lower validation loss rates.

Figure 14: The performance of the model with concatenated CNN and dual Maxpooling layers per epoch on the IAM dataset

    6 Comparison with State of the Art

Table 4 lists the results of recent studies on the problem of text extraction from images. From the presented results, our proposed method performs better than the state-of-the-art studies.

Table 4: Comparison with the state-of-the-art

    7 Conclusion

This study examined the use of dual Maxpooling for dimensionality reduction, in addition to using concatenated CNN layers to enhance feature map extraction from images. The highly valued extracted features are passed to two BiLSTM layers with 50 units to extract more features and find the time sequence between the word characters. The CTC loss function calculates the training and validation loss rates, updating the training parameters. The experiments were conducted with individual max-pooling, dual max-pooling, and concatenated CNN layers. Two datasets were used: the first is the Mjsynth dataset and the second is the IAM dataset. The results were compared to the state-of-the-art studies, and the proposed approach achieved better results. Sample reduction, concatenation, and merging of the extracted features from the CNN layers led to activating more related features from the images. The number of parameters of the proposed model with dual Maxpooling and with concatenated CNN layers (361,696 and 422,839, respectively) is less than that of the model using individual Maxpooling (759,103).

The difference between individual and dual Maxpooling in terms of the number of epochs required to reach the reported error rate for the Mjsynth dataset is shown in Table 5. The reported validation error rate for individual Maxpooling covers only the last 40 epochs of the 300- and 200-epoch runs; since the results kept repeating, we consider only the last 40 epochs for comparison. As shown, dual Maxpooling with Relu takes 40 epochs to reach an error rate equivalent to that of individual Maxpooling with Relu at 100 and 150 epochs. This shows that dual Maxpooling speeds up the training and reduces the number of epochs. The results relate to the Mjsynth dataset because it was the first dataset used to test the proposed method, before applying the same settings to the IAM dataset.

The study presents samples of the predicted words combined with the original images of these words. In general, the proposed methods showed better performance in terms of validation loss rate with OLRelu on the IAM dataset than on the Mjsynth dataset. The Mjsynth dataset was created using a special application and contains shadows and noise; in addition, the created words were rotated at different angles. These data augmentation techniques reduce the value of the image features and increase misclassification rates.

The resulting loss (error per character) of the state-of-the-art studies [24,30,32] on the IAM dataset using CNN, BiLSTM, and CTC was 7.9%, 17.4%, and 13.77%, respectively, while the proposed CNN concatenation and dual max-pooling showed lower loss rates (2.091% and 2.165%, respectively). The resulting loss rate of the study [31] on the Mjsynth dataset was 3.57%, while the proposed CNN concatenation showed better results (2.22%). The proposed concatenated CNN improves the quality of the activated features by eliminating noisy data represented by ambiguous samples that contain shadows or mixed text with different resolutions. However, since the number of extracted features increases, the training time per epoch also increases, and a high-specification GPU is required to speed up training. With handwritten character recognition problems, handwritten datasets contain different handwriting styles and sizes, which increases the difficulty of recognizing characters. For future work, more image enhancement techniques will be applied to improve the quality of the images for better recognition.

Acknowledgement: We would like to convey our gratitude to the research team members at the Digital Forensic Lab and the Medical and Health Informatics Lab at the Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, who contributed to this project. Our thanks also go to Ms. Manal Mohamed for her support.

Funding Statement: This project was supported by the Ministry of Higher Education, Malaysia, under the Fundamental Research Grant Scheme (FRGS) FRGS/1/2019/ICT02/UKM/02/9, entitled "Convolution Neural Network Enhancement Based on Adaptive Convexity and Regularization Functions for Fake Video Analytics". This grant was received by Prof. Assis. Dr. S. N. H. Sheikh Abdullah, https://www.ukm.my/spifper/research_news/instrumentfunds.

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Dr. B. H. Hani, Prof. Assis. Dr. S. N. H. Sheikh Abdullah; data collection: Dr. B. H. Hani; analysis and interpretation of results: Dr. A. Saeed, Dr. R. Sulaiman; draft manuscript preparation: Dr. B. H. Hani, Prof. Assis. Dr. S. N. H. Sheikh Abdullah. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Readers can obtain all datasets by sending a reasonable request to the corresponding author (bahera_hani@yahoo.com).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
