
KurdSet: A Kurdish Handwritten Characters Recognition Dataset Using Convolutional Neural Network

Computers, Materials & Continua, 2024, Issue 4

Sardar Hasen Ali and Maiwan Bahjat Abdulrazzaq

Computer Science Department, University of Zakho, Duhok, Iraq

ABSTRACT Handwritten character recognition (HCR) involves identifying characters in images, documents, and various sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. The use of convolutional neural networks (CNNs) in recent developments has notably advanced HCR, leveraging the ability to extract discriminative features from extensive sets of raw data. Because of the absence of pre-existing datasets in the Kurdish language, we created a Kurdish handwritten dataset called KurdSet. The dataset consists of Kurdish characters, digits, texts, and symbols; it was collected from 1,560 participants and contains 45,240 characters. In this study, we chose characters only from our dataset and utilized them for handwritten character recognition. The study utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To show the performance of the KurdSet dataset, we compared it to the Arabic handwritten character recognition dataset (AHCD), applying the models to both datasets. Additionally, the performance of the models is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy among the models, achieving 99.80% on the Kurdish dataset, and the Xception model achieved 98.66% on the Arabic dataset.

KEYWORDS CNN models; Kurdish handwritten recognition; KurdSet dataset; Arabic handwritten recognition; DenseNet121 model; InceptionV3 model; Xception model

    1 Introduction

Handwritten recognition stands as a prominent domain within artificial intelligence. Despite over three decades of digitization progress, recognition techniques continue to advance rapidly. Handwriting analysis poses one of the classification challenges in artificial intelligence, capturing the interest of various experts, including computer scientists and handwriting specialists. This technology significantly simplifies data storage for users, eliminating the need for manual typing in text documents, and such recognition processes are crucial within image recognition. According to Rajyagor et al. [1], handwriting recognition continues to be a fascinating and vital field within artificial intelligence. Its integration into digitization for over three decades has streamlined data storage, saving users considerable time and effort in document handling. This recognition process plays an integral role within image recognition methodologies [2]. Within handwriting recognition, Recurrent Neural Networks (RNNs), CNNs, and their advanced iterations play pivotal roles in mechanical systems and signal-processing applications. In mechanical systems, they excel in fault diagnosis and predictive maintenance, leveraging their hierarchical feature learning, as stated by Baldominos et al. [3]. Long Short-Term Memory (LSTM), alongside various types of CNNs, has exhibited promising outcomes. Notably, CNNs deliver more precise results than alternative learning methods, as corroborated in research by Shamim et al. [4], which reveals CNNs' use in image processing, in Natural Language Processing (NLP) by Alayyoub et al. [5], and in sentiment analysis. Deciphering human handwriting is an exceedingly challenging research endeavor that demands a robust hand-tracking system based on effective features and classifiers. Presently, the most popular and viable methodologies involve machine learning techniques such as CNNs, RNNs, Multilayer Perceptrons (MLPs), the Support Vector Machine (SVM) algorithm, and the K-Nearest Neighbors algorithm. Following Lecun et al. [6], the CNN as employed in this work is described briefly below.

CNNs, also known as ConvNets, are multi-layered neural networks primarily utilized for image processing and object detection. CNNs consist of multiple layers that process, analyze, and extract features from data. They utilize a convolutional layer equipped with numerous filters to execute convolution operations. Additionally, the Rectified Linear Unit (ReLU) layer is utilized to derive a rectified feature map (RFM) from the processing elements. The RFM is then directed to a pooling layer, which reduces the dimensionality of the feature map through a down-sampling process. Subsequently, the resulting two-dimensional arrays from the pooled feature map are flattened to form a single, extended, continuous linear vector. The primary challenges in handwritten recognition revolve around inaccuracies and inconsistent patterns, underscoring the pivotal role of feature extraction. However, manually selecting features, as emphasized by Pasi et al. [7], may result in insufficient data to accurately predict class characteristics, while an excessive number of features may lead to issues due to increased dimensionality [8]. This study's key contribution is the comprehensive exploration of both deep learning and machine learning for handwritten recognition, consolidating insights from various studies in the field. The survey encompasses recent works in handwriting recognition proposed by different researchers, presenting diverse methodologies and comparing the accuracy of state-of-the-art techniques.
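As an illustration of the convolution, ReLU, pooling, and flattening pipeline just described, the following is a minimal sketch in Keras; the filter counts, input size, and class count are illustrative placeholders rather than the exact configuration used in this study.

```python
# Minimal conv -> ReLU -> pool -> flatten pipeline; layer sizes are
# illustrative assumptions, not the paper's exact architecture.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(80, 80, 1)),               # one grayscale character image
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution + ReLU -> rectified feature map
    layers.MaxPooling2D((2, 2)),                   # down-sample the feature map
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # 2D feature maps -> one long vector
    layers.Dense(29, activation="softmax"),        # e.g., 29 character classes
])
model.summary()
```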

Presently, Arabic stands as one of the most prevalent international languages across the Middle East and North Africa. It holds a prominent position as one of the most widely spoken languages globally, with a speaker population exceeding half a billion, as stated by Altwaijry et al. [9]. Additionally, various languages incorporate Arabic letters, words, and sentence structures into their linguistic frameworks [10]. The standardized written form of the Arabic language comprises 28 letters, traditionally written from right to left [11]. It is noteworthy that certain letters may exhibit distinct appearances depending on whether they appear at the beginning, middle, or end of a word, meaning that a letter's shape changes based on its position within the word [12]. As technological progress continues, the potential uses for Arabic handwriting recognition become exceedingly vast. The current era is particularly favorable for this domain, given the swift advancements in machine learning and deep neural networks; these ongoing developments are poised to yield further significant breakthroughs in the near future, as stated by Alayyoub et al. [5]. Arabic handwritten recognition has been implemented successfully through diverse machine learning methodologies [13]. Nevertheless, the integration of deep learning (DL) architectures stands to significantly elevate Arabic handwriting recognition (AHR) technology. Notably, the utilization of CNNs in AHR models has yielded commendable results in the accurate recognition of handwritten text. We have created the Kurdish handwritten dataset (KurdSet) in response to the absence of any pre-existing datasets in the Kurdish language. The creation of this dataset represents a significant contribution to the field, providing resources for Kurdish language-based studies and applications.

The Arabic and Kurdish character sets exhibit similarities in certain characters, despite differing sources of acquisition: the Arabic character dataset AHCD is derived from native Arabic speakers or written materials, whereas the Kurdish character dataset is sourced from a distinct cohort of Kurdish participants. We created the KurdSet dataset because no Kurdish handwritten dataset existed in the academic literature, and no prior research has conducted a comparative evaluation of deep learning models on a Kurdish dataset. Our KurdSet dataset is now available for use by individuals or organizations interested in developing Kurdish language systems; it can serve as a valuable resource for training and evaluating models in various applications related to the Kurdish language. To address this gap, we employed various CNN models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. The primary objective of this study is to assess and compare model performance on these two datasets.

The rest of the paper is organized as follows: Section 2 presents the related work, Section 3 reports the methodology of the study, Section 4 presents the results and discussion of the proposed methods, and Section 5 concludes the work.

    2 Related Work

Numerous researchers have attempted to create models for recognizing handwritten text, but none have achieved flawless accuracy. Further research in this field is still necessary to improve the technology.

The study by Alheraki et al. [14] advances Arabic handwritten character recognition through deep learning techniques, addressing the intricacies of the challenging Hijja dataset of children's handwriting. By integrating augmentation with the AHCD dataset and employing a strokes-based approach, the proposed model surpasses baseline accuracy on both the Hijja and AHCD datasets, with enhanced performance upon their amalgamation. Contributing significantly to the progression of Arabic handwritten character recognition systems, the model achieves a commendable accuracy of 91% on the Hijja dataset and an impressive 97% on AHCD, while an innovative multi-model approach incorporating stroke-count insights achieves a noteworthy 96% average prediction accuracy when the Hijja and AHCD datasets are merged. A study by Alrobah et al. [15] introduced a novel approach for recognizing Arabic handwritten characters, utilizing a hybrid model merging a CNN with SVM and extreme Gradient Boosting (XGBoost) classifiers. The study identified inconsistencies in the widespread Arabic dataset Hijja, previously employed for CNN-based feature extraction and classification, which achieved an accuracy of 88%. The authors tackled dataset imbalances by employing a hybrid model that leveraged the CNN for feature extraction and machine learning (ML) models for classification. This resulted in an enhanced accuracy of 96.3%, an 8% increase over the original Hijja experiment. The hybrid model's performance was similar to that obtained on the AHCD dataset using the Hijja model, demonstrating its efficacy.

The study conducted by Ghanim et al. [16] introduced a multi-phase sequential method for recognizing offline Arabic handwriting, which was tested on the IFN/ENIT Arabic database. The study separately assessed the impacts of SVM and CNN for classification: the CNN classification was based on the direct pixel values of the input image, while the SVM classification relied on a feature vector extracted from the input image. When employed on the database, this multi-stage method achieved an accuracy of 95.6% using the AlexNet CNN architecture without necessitating character-level segmentation. Awni et al. [17] compared randomly initialized deep convolutional neural networks with a ResNet18 model pre-trained on ImageNet for Arabic handwritten word recognition. They introduced a sequential transfer learning approach using the pre-trained ResNet18 model to enhance recognition accuracy on the AlexU-W and IFN/ENIT datasets. Utilizing the ImageNet data improved recognition accuracy by 14%, and their proposed method achieved a notable 35.45% enhancement for frequently misclassified words in the IFN/ENIT dataset. The authors initially trained the ResNet18 model, freezing the first Conv1 layer and two subsequent residual blocks. They fine-tuned the remaining layers with the IFN/ENIT dataset at an image size of 224 × 224, employing progressive resizing to 300 × 300. Their approach achieved up to 96.11% accuracy, demonstrating significant progress compared to existing methods, with 99.52% accuracy on the AlexU-W dataset.

In a study conducted by Aljarrah et al. [18], the authors devised a CNN model for the recognition of Arabic letters. They incorporated various optimization techniques to enhance the model's performance, achieving a testing accuracy of 94.9% when applied to the AHCD dataset. Kamal et al. [19] used a CNN architecture consisting of 18 layers, encompassing convolution, pooling, batch normalization, and dropout, culminating in final global average pooling and dense layers. Evaluated on AHCD and the Modified Arabic handwritten digits Database (MadBase), the model achieved remarkable accuracies of 96.93% and 99.35%, respectively. These results, comparable to the state-of-the-art models in their work, underscore its suitability for practical real-life applications. Almansari et al. [20] presented a deep learning architecture for the recognition of isolated handwritten Arabic letters, featuring CNN and Multi-Layer Perceptron (MLP) neural networks. The performance evaluation of the model was conducted using the AHCD dataset, where it achieved an impressive testing accuracy of 95.27%, demonstrating the capability of the deep learning architecture to accurately recognize isolated handwritten Arabic letters.

In another study, Ullah et al. [21] introduced a method for recognizing handwritten Arabic letters utilizing a CNN model. The model integrates regularization techniques, including batch normalization and dropout operations. The evaluation encompassed testing the model both with and without dropout, revealing a notable performance difference and effectively mitigating overfitting. The model was then applied to the AHCD. The evaluation, considering various metrics, illustrated the model's superiority, achieving an accuracy of 96.78% and surpassing established approaches in the literature. Meanwhile, Alzebdeh et al. [22] presented a sophisticated deep neural network for recognizing Arabic handwritten characters, employing CNN models with regularization techniques like batch normalization to counter overfitting. Their deep CNN achieved high classification accuracies of 94.8% on the AIA9k dataset and 97.6% on the AHCD dataset.

    3 Methodology

In this section, we present an overview of the methodology employed in the development of the proposed model. The following paragraphs provide a general idea of the models, datasets, methods, and other key aspects utilized in this study. For a comprehensive understanding, we discuss the overall approach and specific details such as the Kurdish dataset, data preprocessing, and the design of the proposed CNN, which are elaborated upon in the subsequent sections.

    3.1 Dataset

This section discusses the two datasets used in this paper: the Kurdish dataset (KurdSet) and the Arabic dataset (AHCD).

3.1.1 Kurdish Dataset (KurdSet)

We have developed a newly generated dataset, the Kurdish handwritten character recognition dataset (KurdSet), which comprises three pages of content consisting of characters, digits, texts, and symbols in the Kurdish language. Kurdish is the language spoken in the Kurdistan region. The dataset was compiled with the involvement of both males and females, ranging in age from 18 to 65 years. The participants encompassed individuals from various professional backgrounds, including employees and students from different departments. Approximately 2,000 users took part in the dataset creation process; however, certain forms were written outside the designated boxes, so those entries were excluded, resulting in a final count of 1,560 participants. The specific structure of this module is illustrated in Fig. 1.

    Figure 1: Flowchart of Kurdish handwritten dataset architecture

For this article, we chose the characters from the KurdSet dataset. Our database includes more than 45,000 handwritten character images (45,240 in total, consistent with 1,560 participants each writing the 29 characters), where each image represents a single Kurdish character. These images are manually annotated with corresponding labels indicating the characters they represent. The database is intended for training and evaluating algorithms and models for recognizing Kurdish characters in handwritten documents. Fig. 2 illustrates character examples from the Kurdish handwritten recognition dataset.

    3.1.2 Arabic Handwritten Character Dataset

AHCD contains 16,800 Arabic characters written by 60 different participants. The database is partitioned into two sets: a training set (13,440 characters; 480 images per class) and a test set (3,360 characters; 120 images per class). The difference between the two datasets lies in the writers: the Kurdish handwritten dataset participants are native Kurdish speakers, while the Arabic handwritten character dataset participants are Arabs. Fig. 3 shows samples from the Arabic handwritten character dataset.

    Figure 2: Samples of Kurdish handwritten character recognition dataset

    Figure 3: Samples from the AHCD dataset

    3.2 Data Preprocessing

For preprocessing, we applied various methods, techniques, and operations to the input images of handwritten characters before they were analyzed by the recognition system. The purpose is to improve image quality and extract important features for accurate recognition. We implemented specific preprocessing steps: we resized the images from 32 × 32 to 80 × 80 pixels and centered them, training for 50 epochs, to enhance the models' ability to capture more features and patterns within the images, potentially leading to improved performance and accuracy. The models utilized in this paper are the custom CNN model, DenseNet121, InceptionV3, and Xception, evaluated using the accuracy metric, with the pre-trained models (DenseNet121, InceptionV3, and Xception) initialized with features learned from ImageNet. By preserving the lower layers responsible for general low-level features, the focus can shift to fine-tuning the higher layers specialized for the task at hand. This strategy conserves computational resources and mitigates overfitting, particularly when training data is scarce. Data augmentation parameters are used to enrich image-based deep learning training datasets: rotation_range introduces random rotations, width_shift_range and height_shift_range apply horizontal and vertical shifts, shear_range applies shear transformations, and zoom_range allows random zooming. These variations enhance the models' ability to generalize across diverse data, improving their robustness and performance on unseen data.
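The augmentation parameter names above match those of Keras' ImageDataGenerator, so a minimal sketch under that assumption follows; the numeric ranges are illustrative guesses, since the exact values are not reported here.

```python
# Hedged sketch of the augmentation described above; only the parameter
# names come from the text, and the numeric ranges are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel intensities to [0, 1]
    rotation_range=10,       # random rotations
    width_shift_range=0.1,   # horizontal shifts
    height_shift_range=0.1,  # vertical shifts
    shear_range=0.1,         # shear transformations
    zoom_range=0.1,          # random zooming
)

# train_images / train_labels would be the 80x80 character images and
# integer labels prepared as described above:
# train_gen = datagen.flow(train_images, train_labels, batch_size=32)
```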

    3.3 The Utilized Models

We created a custom CNN model for our dataset, the Kurdish handwritten character dataset, which contains 29 output classes and is divided into two primary sets: 80% is utilized for the training phase and 20% for the testing phase. The proposed custom CNN model contains thirteen layers, utilizes the ReLU activation function and max pooling, and has a final softmax activation for classifying the output classes. We employed a feedforward architecture without skip connections, ensuring sequential layer-wise information processing. The model uses sparse categorical cross-entropy loss, the Adam optimizer, and the accuracy metric. It also uses a learning rate scheduler, a callback that adjusts the learning rate during training based on a specified schedule, and early stopping, which halts training when a monitored metric has stopped improving. For the pre-trained models, this may involve transfer learning or fine-tuning, where initial layers are frozen to retain pre-trained knowledge while only the later layers are trained on the new task. Additionally, we created a custom CNN model for Arabic handwritten character recognition, applying the same parameters used on our dataset to the Arabic handwritten character dataset. Fig. 4 shows the flowchart of the proposed method for both Arabic and Kurdish handwriting, and a sketch of the training setup is given after it. DenseNet121, InceptionV3, and Xception were then applied to both the Kurdish and Arabic handwritten character recognition datasets with the same parameters used in the custom CNN model.

    Figure 4: The flowchart of the proposed method for Arabic and Kurdish handwritten recognition
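The training setup named above can be sketched as follows, reusing the `model` from the earlier pipeline sketch; the learning-rate schedule and early-stopping patience are illustrative assumptions, since the paper names the components but not their settings.

```python
# Hedged sketch of the training setup: sparse categorical cross-entropy,
# Adam, a LearningRateScheduler, and EarlyStopping. The schedule and
# patience values are illustrative assumptions.
import tensorflow as tf

def lr_schedule(epoch, lr):
    # Halve the learning rate every 10 epochs (illustrative schedule).
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

model.compile(  # `model` is the Sequential network sketched in Section 1
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",  # integer class labels
    metrics=["accuracy"],
)

callbacks = [
    tf.keras.callbacks.LearningRateScheduler(lr_schedule),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
]

# 80/20 split as described; x, y are the image and label arrays:
# model.fit(x, y, validation_split=0.2, epochs=50, callbacks=callbacks)
```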

CNNs are extensively used in the analysis of visual data, such as images and videos, and are highly effective in tasks like image classification, object detection, and image segmentation, as noted by Sharma et al. [23]. Inspired by the organization of the visual cortex, CNNs consist of interconnected layers that perform specialized operations on the input data. Convolutional layers are crucial in CNNs, as they use learnable filters or kernels to scan and extract spatial features from the input data; the resulting feature maps capture local patterns and features, as described by Jain et al. [24]. Pooling layers reduce spatial dimensions while preserving important information, often using max pooling to select the maximum value within a defined window. Fully connected layers connect all neurons in one layer to every neuron in the next, aggregating the extracted features for final predictions, as referred to by Naidu et al. [25]. Besides, Haghighi et al. [26] used CNNs trained with labeled data through backpropagation, where the network learns to map inputs to corresponding labels. Parameters, such as filter weights and biases, are updated based on the prediction error, and training continues until the network achieves good performance. The CNN is a highly recognized and extensively utilized technique in deep learning.

DenseNet121 is a CNN architecture designed for image classification tasks and has achieved notable success in various computer vision applications. DenseNet121 refers to a DenseNet model with 121 layers. The name "DenseNet" is derived from the dense connectivity pattern it employs: unlike traditional CNN architectures, which connect only adjacent layers, DenseNet connects each layer to every other layer in a feed-forward manner, as elaborated by Ghosh et al. [27]. This dense connectivity allows direct information flow between layers and facilitates the exchange of feature maps. As a result, DenseNet models have a strong feature reuse mechanism, enabling better gradient flow and alleviating the vanishing gradient problem often encountered in deep networks. DenseNet121 consists of a convolutional block followed by multiple dense blocks and transition layers. Its efficient parameter utilization makes DenseNet121 computationally efficient and effective for tasks like image classification and object detection [28].
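A minimal sketch of this dense connectivity pattern is given below; the layer count and growth rate are illustrative, not DenseNet121's actual configuration.

```python
# Minimal DenseNet-style dense block: each layer receives the
# concatenation of all preceding feature maps. Layer counts and
# filter sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    features = [x]
    for _ in range(num_layers):
        # Every earlier feature map feeds into the current layer.
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.BatchNormalization()(h)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(growth_rate, (3, 3), padding="same")(h)
        features.append(h)
    return layers.Concatenate()(features)

inputs = tf.keras.Input(shape=(80, 80, 1))
x = layers.Conv2D(24, (3, 3), padding="same")(inputs)  # initial conv block
tf.keras.Model(inputs, dense_block(x)).summary()
```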

InceptionV3 is an advanced CNN designed for image classification tasks, representing a progression within the Inception architecture. Characterized by its inception modules, the network integrates parallel convolutional operations with diverse filter sizes (1 × 1, 3 × 3, and 5 × 5), facilitating the extraction of multi-scale features for hierarchical pattern recognition. It is a deep neural network with a total of 48 layers, consisting of a combination of convolutional layers, pooling layers, fully connected layers, and auxiliary classifiers, as explored by Uddin et al. [29]. Batch normalization stabilizes training, factorization techniques optimize computational resources, and auxiliary classifiers address the vanishing gradient problem during training. The careful combination of these architectural elements positions InceptionV3 as a powerful tool for image classification, striking a balance between discerning nuanced features and computational efficiency.
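The parallel-branch structure of an inception module can be sketched as follows; the branch filter counts are illustrative, and InceptionV3's actual modules additionally factorize the larger convolutions.

```python
# Minimal Inception-style module: parallel 1x1, 3x3, and 5x5
# convolutions (plus pooling) whose outputs are concatenated.
# Filter counts are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x):
    b1 = layers.Conv2D(32, (1, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(x)
    b5 = layers.Conv2D(32, (5, 5), padding="same", activation="relu")(x)
    pool = layers.MaxPooling2D((3, 3), strides=1, padding="same")(x)
    return layers.Concatenate()([b1, b3, b5, pool])  # multi-scale features

inputs = tf.keras.Input(shape=(80, 80, 1))
tf.keras.Model(inputs, inception_module(inputs)).summary()
```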

Xception, an extension of the Inception architecture, is a CNN known for its use of depth-wise separable convolutions, with a total of 71 convolutional layers. It diverges from standard convolutional architectures by separating spatial and depth-wise convolutions: it replaces standard convolutions with a sequence of depth-wise convolutions followed by point-wise convolutions, reducing computational complexity while preserving expressive power for enhanced model efficiency. The utilization of depth-wise separable convolutions enables Xception to capture spatial and channel-wise dependencies optimally, resulting in a more streamlined network structure. Xception's efficacy lies in achieving a nuanced balance between model depth and computational efficiency, making it well-suited for tasks like image classification, where both accuracy and computational resources are critical considerations [30].
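The sketch below illustrates both ideas: a depth-wise separable convolution as used by Xception, and the freeze-then-fine-tune setup from Section 3.2 applied to an ImageNet-pretrained backbone (Xception here; DenseNet121 or InceptionV3 are drop-in substitutes). The head design and input size are assumptions.

```python
# Hedged sketch: (a) a depth-wise separable convolution, and (b) the
# freeze-and-fine-tune transfer-learning setup described in Section 3.2.
import tensorflow as tf
from tensorflow.keras import layers

# (a) Depth-wise spatial convolution followed by a 1x1 point-wise
# convolution, replacing a standard Conv2D at lower cost.
sep_conv = layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu")

# (b) ImageNet-pretrained backbone with frozen lower layers and a new
# classification head; exact freeze depth and head are assumptions.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(80, 80, 3)
)
base.trainable = False  # keep general low-level features fixed

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(29, activation="softmax"),  # e.g., 29 KurdSet classes
])
```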

    4 Results

    4.1 Kurdish Handwritten Character Dataset Results

The results for the Kurdish dataset using the custom CNN model indicate a highly successful performance. The model achieved a training accuracy of 99.86%. The validation accuracy of 98.07% reflects the model's performance on a separate dataset not used during training, indicating its ability to generalize well to new, unseen data. Furthermore, the test accuracy of 98.04% signifies the model's effectiveness in making accurate predictions on an independent test set. These high accuracy values across the training, validation, and test sets suggest the custom CNN model has learned well from the training data and demonstrates strong generalization capabilities, making it a promising and reliable model for the given Kurdish dataset. The extremely low training loss of 0.006 indicates that the model performed exceptionally well during the training phase, effectively minimizing errors and accurately learning the patterns within the training data. The validation loss is slightly higher at 0.114, suggesting that the model may face some challenges in generalizing to new, unseen data not used during training. While the low training loss is encouraging, the comparatively higher validation loss implies a need for careful consideration to enhance the model's generalization capabilities. Further investigation and potential adjustments may be necessary to strike a balance between effective learning from the training set and robust performance on validation and, ultimately, real-world test datasets. Fig. 5 provides the results of the model.

Figure 5: (a) Training and validation accuracy and (b) training and validation loss of the custom CNN model

The results for the Kurdish dataset using the DenseNet121 model indicate highly promising performance. The model achieved a perfect training accuracy of 100%, suggesting that during the training phase it successfully learned the features of the training dataset, achieving a flawless fit. The validation accuracy is notably high at 99.82%, demonstrating the model's ability to generalize well to new, unseen data that it did not encounter during training. The minor difference between the training and validation accuracies suggests that the model has avoided overfitting and maintains a high level of accuracy on previously unseen instances. The test accuracy of 99.80% further affirms the model's robustness in making accurate predictions on an independent dataset. Overall, these results indicate that the DenseNet121 model is highly effective and capable of achieving near-perfect accuracy across the training, validation, and test datasets, making it a strong candidate for applications in the context of the Kurdish dataset. In terms of loss, the results indicate outstanding training and validation performance. The low training loss of 0.0005 suggests that the model effectively minimized the discrepancies between its predicted outputs and the actual targets in the training dataset during the learning process. The slightly higher validation loss of 0.0130, assessed on a separate dataset not used during training, indicates that the model's performance remains strong. While a low training loss reflects effective learning, the low and comparable validation loss implies that the model generalizes well to unseen data, avoiding overfitting. Overall, these results highlight the model's ability to learn patterns from the training set and apply them effectively to new data, demonstrating a well-balanced performance on the Kurdish dataset. Fig. 6 demonstrates this model's results.

Figure 6: (a) Training and validation accuracy and (b) training and validation loss of the DenseNet121 model

The results for the Kurdish dataset using the InceptionV3 model are highly remarkable. The model achieved a perfect training accuracy of 100%, indicating a flawless fit to the training data. The validation accuracy is very high at 99.41%, signifying the model's ability to generalize well to new, unseen data not encountered during training. The minor discrepancy between the training and validation accuracies suggests that the model has effectively avoided overfitting and maintains a high level of accuracy on previously unseen instances. The test accuracy of 99.29% further underscores the model's reliability in making accurate predictions on an independent dataset. Overall, these results indicate that the InceptionV3 model is highly effective and capable of achieving high accuracy across the training, validation, and test datasets for KurdSet, making it a robust choice for applications in this context. The loss results likewise indicate highly favorable performance. The training loss of 0.0002 is extremely low, signifying that during the training phase the model effectively minimized errors and learned the features of the training data. The validation loss, slightly higher at 0.0350, suggests that the model's performance remains strong. While a low training loss is encouraging, the relatively low and comparable validation loss indicates that the model generalizes well to new, unseen data, demonstrating effectiveness in preventing overfitting. These results underscore the InceptionV3 model's robustness in learning and generalizing patterns from the training set to achieve accurate predictions on independent data in the context of the Kurdish dataset. The model's accuracy and loss are shown in Fig. 7.

Figure 7: (a) Training and validation accuracy and (b) training and validation loss of the InceptionV3 model

The results for the Kurdish dataset using the Xception model show excellent performance. The model achieved a perfect training accuracy of 100%, indicating a comprehensive fit to the training dataset features. The validation accuracy is very high at 99.49%, demonstrating the model's capability to generalize effectively to new, unseen data not encountered during training. The minimal difference between the training and validation accuracies suggests that the model has successfully avoided overfitting and maintains strong accuracy on unseen instances. The test accuracy of 99.27% highlights the model's reliability in making accurate predictions on an independent dataset. The loss results likewise reveal strong training and validation performance. The low training loss of 0.0007 indicates that the model effectively minimized the difference between its predicted outputs and the actual targets in the training dataset, showcasing efficient learning and fitting to the training data. The slightly higher validation loss of 0.0254, evaluated on a separate dataset not used during training, shows that the model's performance is still robust but may face some challenges in generalization. While the training loss is low, the validation loss is reasonably low as well, indicating that the model has learned patterns from the training set and can apply them effectively to new data. Further analysis and potential adjustments may be considered to enhance the model's generalization capabilities beyond the training set for optimal real-world performance. Fig. 8 provides details of the performance of the model.

The performance summary of the evaluated models on the KurdSet dataset reveals high accuracy across all splits. The custom CNN model exhibits robust performance, achieving a remarkable 99.86% accuracy on the training set and maintaining high accuracy on both the validation (98.07%) and test (98.04%) sets, showcasing its ability to generalize well. DenseNet121, InceptionV3, and Xception, with perfect training accuracies of 100%, demonstrate exceptional learning capabilities. These models further validate their efficacy on unseen data, with DenseNet121 achieving 99.82% validation and 99.80% test accuracy, InceptionV3 achieving 99.41% validation and 99.29% test accuracy, and Xception achieving 99.49% validation and 99.27% test accuracy. The consistently high accuracies across the training, validation, and test sets underscore the models' proficiency in image classification tasks, highlighting their potential utility in real-world applications. Tables 1 and 2 depict the results of the KurdSet dataset using the CNN models. The custom CNN was included as a plain feedforward baseline, allowing its results to be compared against models that use skip connections, such as DenseNet121, InceptionV3, and Xception.

Table 1: Performance comparison of models on the Kurdish dataset (training and validation accuracy results)

Table 2: Performance comparison of models on the Kurdish dataset (training and validation loss results)

    Figure 8: (a) Training and validation accuracy and (b) training and validation loss of the Xception model

4.2 Arabic Handwritten Character Dataset Results

The provided accuracy values indicate the performance of the custom CNN model during different stages of training and evaluation on the Arabic dataset. The training accuracy of 94.30% reflects how well the model fit the data during the training phase. The higher validation accuracy of 98.10% indicates that the model performed even better when tested on a separate dataset it had not seen during training. Finally, the test accuracy of 98.43% signifies the model's performance on an independent test set, demonstrating a high level of accuracy in making predictions. The results suggest the custom CNN model generalizes well to new, unseen data, as evidenced by the high validation and test accuracies, indicating its effectiveness in classifying the given data. The CNN model exhibits a training loss of 0.175 and a validation loss of 0.086. The training loss reflects how well the model fits the training data; here, the validation loss is actually lower than the training loss, consistent with the higher validation accuracy, a pattern that commonly arises when regularization such as dropout is active only during training. Nevertheless, further investigation and adjustments to the model may be needed to enhance its overall performance and ensure robustness beyond the training dataset. The results are clarified in Fig. 9.

Figure 9: (a) Training and validation accuracy and (b) training and validation loss of the custom CNN model

The training accuracy for the DenseNet121 model on the AHCD dataset is 100%, demonstrating a precise fit to the training data. The validation accuracy of 98.84% indicates the model's ability to generalize effectively, with the minimal difference between the training and validation accuracies suggesting avoidance of overfitting. The subsequent test accuracy of 98.36% affirms robust performance on an independent dataset, showing the model's capability for accurate predictions beyond the training and validation phases and underscoring its well-balanced nature. Furthermore, the training loss is very low at 0.0008, indicating effective performance on the training dataset and minimal errors during the training process. The validation loss is slightly higher at 0.063, still indicating strong performance. While a low training loss is encouraging, a comparable validation loss signifies effective generalization to unseen data, avoiding overfitting. The small disparity between the training and validation losses suggests a well-balanced model that has learned patterns from the training set and applies them effectively to new, unseen data. Fig. 10 shows the outcomes.

Figure 10: (a) Training and validation accuracy and (b) training and validation loss of DenseNet121

The reported training accuracy of 99.96% shows that the InceptionV3 model achieved near-perfect accuracy on the Arabic training dataset, indicating that it successfully learned the training samples. The validation accuracy of 98.60% represents the model's performance on a separate dataset not used during training, demonstrating a slightly lower but still highly accurate generalization ability. The fact that the model's accuracy on the validation set is close to the training accuracy indicates that the model did not overfit the training data. The reported test accuracy of 98.24% provides an additional measure affirming its capability to generalize well beyond the training and validation sets. The training loss is notably low at 0.003, indicating that the model performs well on the training data and effectively minimizes errors during the learning process. The validation loss is slightly higher at 0.080, suggesting a slight increase in errors when the model is tested on unseen data not used during training. While the training loss reflects the model's ability to fit the training data, the higher validation loss indicates the need for careful consideration to prevent overfitting and ensure the model's generalization to new, unseen data. The low training loss is promising, but close monitoring and potential adjustments may be necessary to enhance the model's performance on validation data and in real-world applications. The performance of the InceptionV3 model is shown in Fig. 11.

The training accuracy of the Xception model on the Arabic dataset is exceptionally high at 100%, indicating that the model perfectly predicts the training data it was exposed to. However, the validation accuracy, assessed on a separate set of data not used during training, is slightly lower at 98.59%. This suggests that the model generalizes well to unseen data but may not perform with absolute perfection outside of the training set. The test accuracy, a crucial metric for evaluating a model's performance on entirely new and unseen data, is reported at 98.66%. The discrepancy between the training and validation accuracies indicates that while the model has learned the training data thoroughly, there may be a slight drop in performance on new, unseen data. The training loss of the Xception model on the same dataset is exceptionally low at 0.007, indicating that during the training phase the model effectively minimized the difference between its predicted outputs and the actual targets in the training dataset. The validation loss, measured at 0.095, reflects the model's performance on a separate dataset not used during training. While the training loss signifies how well the model learned from the training data, the higher validation loss suggests that the model might not generalize quite as well to unseen data. The relatively low training loss is positive, but the larger validation loss warrants attention, indicating a potential need for further optimization to enhance the model's generalization capabilities beyond the training set. Fig. 12 illustrates the results of the Xception model.

Figure 11: (a) Training and validation accuracy and (b) training and validation loss of InceptionV3

Figure 12: (a) Training and validation accuracy and (b) training and validation loss of the Xception model

In summary, the evaluation of the custom CNN, DenseNet121, InceptionV3, and Xception models on the Arabic dataset reveals impressive performance. The custom CNN model demonstrates high accuracy on the training, validation, and test sets, indicating effective learning and robust generalization to new, unseen data, with a validation loss even lower than its training loss. The DenseNet121 model showcases near-perfect accuracy with low training and validation losses, reflecting a well-balanced model capable of effective learning and generalization. Similarly, the InceptionV3 model exhibits high accuracy across datasets, indicating robustness in classifying new instances, though careful monitoring is advised for potential overfitting. The Xception model demonstrates exceptional training accuracy, with a slight drop in validation accuracy suggesting a need for optimization. Table 3 provides the accuracy results of the Arabic handwritten dataset models.

Table 3: Training and validation accuracy results of the models on the Arabic handwritten dataset

While the low training losses are encouraging, the larger validation losses observed for some models indicate potential challenges in generalization, warranting further optimization efforts. As with the Kurdish dataset, the custom CNN was included as a plain feedforward baseline against which the skip-connection models DenseNet121, InceptionV3, and Xception could be compared. Table 4 provides the training and validation loss results for the Arabic handwritten dataset models.

Table 4: Training and validation loss results on the Arabic handwritten dataset

4.3 Performance of the Proposed Models Compared with Existing Arabic Handwriting Recognition Works

The evaluation of various models for Arabic handwritten character recognition, as presented in Table 5, underscores the effectiveness of the proposed models in comparison to existing approaches. Notably, the proposed custom CNN, DenseNet121, InceptionV3, and Xception consistently outperform many established models across different datasets, particularly excelling on the AHCD dataset. ResNet-34 [17] stands out as a top performer with an exceptional accuracy of 99.60% on specific datasets. The proposed models demonstrate competitive or superior accuracy percentages, ranging from 98.24% to 98.66% on AHCD, highlighting their efficacy in handling the nuances of Arabic handwritten characters. In contrast, existing models such as CNN [14], Hybrid CNN [15], AlexNet CNN [16], CNN [18], CNN [19], CNN and MLP [20], CNN [21], and CNN [22] achieve competitive but slightly lower accuracies, while CNN [19] notably performs well on both the AHCD and MadBase datasets. This comprehensive evaluation emphasizes the promising capabilities of the proposed models for practical applications in Arabic handwritten character recognition, showcasing their robust performance across diverse datasets. Table 5 illustrates the results.

Table 5: Evaluation of the proposed approach and other methods on different datasets

    4.4 Comparison between Arabic and Kurdish Datasets

The provided analysis highlights the performance of the various models on two distinct datasets: KurdSet for Kurdish character recognition and AHCD for Arabic character recognition. The evaluation metrics encompass training accuracy, validation accuracy, test accuracy, training loss, and validation loss for each model. On KurdSet, the models, including the custom CNN, DenseNet121, InceptionV3, and Xception, consistently demonstrate high training accuracy (ranging from 99.86% to 100%) and validation accuracy (ranging from 98.07% to 99.82%), indicative of their effectiveness in learning and generalization. The test accuracy values, ranging from 98.04% to 99.80%, underscore the models' proficiency in handling new, unseen data. Similarly, on the AHCD dataset, the models exhibit high training accuracy (ranging from 94.30% to 100%) and validation accuracy (ranging from 97.32% to 98.84%), with test accuracy values ranging from 98.24% to 98.66%. The low training and validation losses on both datasets signify effective error minimization during training and validation. These comprehensive results showcase the robust capabilities of the proposed models in accurately recognizing characters in both the Kurdish and Arabic datasets, underscoring their adaptability and performance across different language contexts.

The analysis of KurdSet reveals noteworthy results concerning noise and overfitting. Noise in the dataset, which refers to irrelevant or erroneous data that could potentially disrupt the learning process, appears to be effectively managed, as evidenced by the high training and validation accuracies. Overfitting, a phenomenon where the model becomes too tailored to the training data and struggles to generalize to new data, is seemingly well-mitigated, given the consistently high test accuracies. The models showcase strong generalization capabilities, suggesting a robust resistance to noise. The models' ability to perform well on unseen data from the test set indicates a balanced learning process and effective generalization, reflecting the efficacy of the models in adapting to the intricacies of the Kurdish handwriting dataset.

    5 Conclusions and Future Work

In this comprehensive study, we conducted an extensive evaluation of various models applied to the intricate tasks of Arabic and Kurdish handwritten character recognition, employing the AHCD and the distinct KurdSet dataset, respectively. The proposed models, including the custom CNN, DenseNet121, InceptionV3, and Xception, consistently exhibit exceptional performance across key metrics, including training accuracy, validation accuracy, and test accuracy. Notably, the models achieve impressively high accuracy percentages on both datasets, indicating their effectiveness in capturing and generalizing the intricate patterns inherent in Arabic and Kurdish characters. The evaluation of KurdSet unveils notable results concerning noise and overfitting. The models showcase a remarkable ability to manage noise, as evidenced by their high training and validation accuracies. This robust generalization suggests a resilient resistance to irrelevant or erroneous data, a crucial characteristic for real-world applications. Furthermore, the well-mitigated overfitting, as demonstrated by consistently high test accuracies, underscores the balanced learning process of the models. Their ability to perform exceptionally well on unseen data from the test set affirms their capability to effectively adapt to the intricacies of Kurdish handwriting, positioning them as powerful tools in character recognition tasks within this linguistic context. Additionally, the comparative analysis with existing models on the AHCD dataset highlights the strength of our proposed models, which outperform many established approaches, emphasizing their efficacy in handling the nuanced intricacies of Arabic handwritten characters.

Despite the promising results, our proposed method has certain limitations, such as the achievable accuracy and the number of participants, that warrant consideration in future research. For future work, we plan to make our system more accurate and to include more participants with different handwriting styles, so that the system can understand and recognize a wider range of writing. We are also working on improving accuracy by refining our model and trying out new training methods. This may involve expanding the datasets for Arabic and Kurdish character recognition, exploring domain adaptation for diverse handwriting styles, and developing multilingual models.

Acknowledgement: We express our gratitude to the University of Zakho for their support, which was instrumental in the successful completion of our paper. Special thanks are extended to the Computer Science Department for their valuable assistance throughout the research process.

Funding Statement: No specific grant for this research was obtained from any public, commercial, or not-for-profit funding agency.

Author Contributions: Sardar Hasen Ali formulated the research plan, coordinated the study, and organized the data analysis and interpretation of results. Maiwan Bahjat Abdulrazzaq participated in editing and contributed to manuscript writing. Both authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The training data used in this paper are available online at the following link: https://www.kaggle.com/datasets/rashwan/arabic-chars-mnist.

Conflicts of Interest: The authors affirm that they have no conflicts of interest to declare in relation to the current study.
