
    Swarming Behavior of Harris Hawks Optimizer for Arabic Opinion Mining

    2021-12-15 07:11:06
    Computers, Materials & Continua, December 2021 issue

    Diaa Salam Abd Elminaam,Nabil Neggaz,Ibrahim Abdulatief Ahmed and Ahmed El Sawy Abouelyazed

    1Department of Information Systems, Faculty of Computers and Artificial Intelligence, Benha University, 12311, Egypt

    2Department of Computer Science, Faculty of Computer Science, Misr International University, Egypt

    3Université des Sciences et de la Technologie d'Oran Mohamed Boudiaf, USTO-MB, BP 1505, El M'naouer, 31000 Oran, Laboratoire Signal Image Parole (SIMPA), Département d'Informatique, Faculté des Mathématiques et Informatique, Algérie

    4Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, 12311, Egypt

    5Department of Computer Science, Obour High Institute for Computers and Informatics, Egypt

    Abstract: At present, the immense development of social networks generates a significant amount of textual data, which has enabled researchers to explore the field of opinion mining. In addition, processing textual opinions with the term frequency–inverse document frequency method gives rise to a dimensionality problem. This study aims to detect the nature of opinions in the Arabic language by employing a swarm intelligence (SI)-based algorithm, the Harris hawks algorithm, to select the most relevant terms. The experimental study has been tested on two datasets: Arabic Jordanian General Tweets and Opinion Corpus for Arabic. In terms of accuracy and number of features, the results are better than those of other SI-based algorithms, such as the grey wolf optimizer and the grasshopper optimization algorithm, and other algorithms in the literature, such as differential evolution, the genetic algorithm, particle swarm optimization, the basic and enhanced whale optimizer algorithm, the salp swarm algorithm, and the ant–lion optimizer.

    Keywords: Arabic opinion mining; Harris hawks optimizer; feature selection; AJGT and OCA datasets

    1 Introduction

    The development of information technology and associated services—forums, specialized sites, etc.—has opened the doors to a vast mode of opinion expression on a wide range of subjects. It prompted us to study the opinions of the public and of particular people, such as consumer reviews. The comments and responses expressed in various blogs, forums, and social media platforms are considered essential sources of textual data that can be analyzed to derive useful information [1]. Opinion mining, or opinion analysis, is a branch of the automatic extraction of knowledge from data that uses computational linguistics techniques to assess opinions expressed in textual form [2]. The richness of social media in terms of opinion and sentiment has sparked research interest. This interest is intense for the Arabic language, given the massive number of internet users speaking Arabic and its dialects. The term "opinion mining" refers to the automatic processing of opinions, feelings, and subjectivity in texts. It is also known as word polarity extraction or sentiment analysis (SA), often associated with a classification problem on evaluative texts, such as those available on Amazon and Facebook. Recently, with the COVID-19 pandemic, great importance has been attached to social networks and online shopping. Therefore, analyzing opinions has become essential in daily activities; it is paramount for business enterprises to respect consumers' opinions to increase their profits [3].

    The massive amount of data produced by social media, such as Facebook and Twitter, requires applying SA over the text. However, several features contain irrelevant information, which negatively influences classification results based on machine learning (ML) techniques. Thus, feature selection (FS) has been employed for several natural language processing (NLP) applications [4]. FS can be classified into three categories: filter, wrapper, and embedded techniques [5]. In the filter technique, FS is based on intrinsic properties of the data, such as the correlation between features, and no external evaluator (learner) is involved. In the embedded technique, the classifier is trained on the available features, and the obtained results are used to evaluate the contribution of each attribute. The wrapper method engages the classifier in the ranking process using subsets of features. Various meta-heuristic optimization algorithms have been proposed to solve complex optimization problems, such as text document clustering, data mining, image segmentation, computer vision, and opinion mining. Several are inspired by natural biological behavior, such as the genetic algorithm (GA) [6], differential evolution (DE) [7], and genetic programming [8]; by swarm intelligence (SI), such as the artificial bee colony [9], grey wolf optimizer (GWO) [10], whale optimization algorithm (WOA) [11], improved whale optimization algorithm [12], volleyball premier league [13], league championship algorithm [14], and football optimization algorithm [15]; and by physical/mathematical rules, such as the sine–cosine algorithm (SCA) [16], thermal exchange optimization [17], the Henry gases solubility optimizer [18], fruit fly optimization [19], and big data analytics using Spark [20].

    Several studies on natural languages, such as English, French, and Spanish, have been conducted using SA. This is due to their formal nature, unlike the Arabic language, which can be depicted using formal, informal, or dialectical language (Algerian, Moroccan, Tunisian, Egyptian, and Jordanian dialects, to mention only a few) and is spoken by 423 million people. Therefore, Arabic SA (ASA) is still challenging due to its vast vocabulary, different dialects, and the language of the Qur'an. Besides, several SI- and physical/mathematical-inspired algorithms are used for FS [21,22], which motivated us to treat ASA using the Harris hawks optimizer (HHO).

    The main contributions of this paper are as follows:

    • Designing a new framework for Arabic sentiment analysis by imitating the behavior of Harris hawks.

    • Introducing the wrapper FS using HHO for Arabic opinion mining (AOM).

    • Comparing the performance of HHO with well-known optimizers, such as GWO, SCA, and the grasshopper optimization algorithm (GOA), using two Arabic opinion datasets—Arabic Jordanian General Tweets (AJGT) and Opinion Corpus for Arabic (OCA).

    • Comparing the efficiency of HHO with state-of-the-art methods, such as DE, GA, particle swarm optimization (PSO), the basic and improved WOA, the salp swarm algorithm (SSA), and the ant–lion optimizer (ALO).

    The remainder of this paper is laid out as follows: We present detailed related work in Section 2. The preprocessing stage of the NLP pipeline, HHO, and the k-NN classifier are discussed in Section 3. The architecture of FS for AOM based on HHO is defined in Section 4. Section 5 describes the data and metrics that were used, as well as the findings that were obtained. Finally, in Section 6, we summarize our findings and discuss future research directions.

    2 Related Work

    Several studies have been conducted on ASA. For example, five ML classifiers, namely support vector machine (SVM), stochastic gradient descent, naive Bayes (NB), multinomial NB (MNB), and decision tree (DT), have been employed on a large-scale Arabic book review dataset. The obtained results showed that the MNB classifier has tremendous potential compared with the other algorithms. The authors used several feature extraction models based on these classifiers. The experimental study showed that the best performance is obtained by the MNB classifier using unigrams. Finally, GA is introduced by [23] as a new contribution to select relevant features for the MNB classifier, which enhanced the classification rate to 85%.

    As part of the research conducted by [24], a novel dataset for ASA called AJGT was designed. The authors compared the efficiency of SVM and NB classifiers using different combinations of preprocessing steps. Mainly, they compared three techniques for extracting characteristics based on N-grams (unigrams, bigrams, and trigrams), tested on the AJGT dataset. Besides, a fair comparison was realized using the TF/TF–IDF weighting techniques (TF: term frequency; IDF: inverse document frequency). The experimental study showed that the combination of SVM and the TF–IDF weighting method outperformed the other techniques, achieving an accuracy of 88.72% and an F-measure of 88.27%.

    A set of ML classifiers based on the majority voting algorithm combined with four classifiers, including NB, SVM, DT, and k-NN, has been proposed [25] for ASA. The experiments showed that the ensemble of ML classifiers performs better than the basic classifiers. The voting method highlighted a practical classification approach for ASA. It uses different classifiers to classify each case, and the majority vote of all classifiers' decisions is combined to predict the instance under test.

    In [26], the authors enhanced the basic WOA for solving the problem of AOM based on FS by adopting an improved WOA (IWOA). The novelty of their work is the merging of two phases of dimensionality reduction. The first phase used a filter based on the information gain (IG) method, whose output is then optimized by a wrapper WOA. The IWOA employed several operators, such as elite opposition-based learning and evolutionary operators inspired by the DE optimizer, to produce a new generation. The IWOA obtained significant results in terms of classification accuracy and selection ratio compared with other optimization algorithms over several datasets.

    A new hybrid system was designed for ASA based on filter and wrapper FS, namely IG and SSA, respectively [27]. The proposed method was assessed using the AJGT dataset, and the obtained results achieved 80.8% accuracy.

    The authors of [28] designed a new tool for ASA using GWO. Their idea comprises selecting features using a wrapper GWO to determine the polarity of opinions. The experiment was conducted using two datasets (AJGT and OCA). The GWO achieved approximately 86% and 95% accuracy for AJGT and OCA, respectively.

    3 Background

    3.1 Preprocessing Step

    Before the learning phase for ASA, preprocessing steps are crucial to convert raw text into feature vectors. This study employed tokenization, noise removal, stop word removal, and stemming [29].

    3.1.1 Tokenization

    The process of tokenization comprises identifying words and phrases in a text. Simple tokenization can use white space or the carriage return as a word separator. Notably, punctuation marks ("?", "!", and ".") are very useful in separating sentences.

    3.1.2 Noise Removal

    The tokenization process provides two types of tokens.

    • The first corresponds to recognizable units, including punctuation marks, numeric data, and dates.

    • The second requires deep morphological analysis. In this context, tokens of one or two characters, non-Arabic tokens, and digit strings are eliminated.

    3.1.3 Stop Word Suppression

    Stop words are terms that appear in texts but do not carry useful information; this step eliminates them. These words are usually personal pronouns, articles, prepositions, or conjunctions. A dictionary of stop words is usually employed to remove them from the text.

    3.1.4 Stemming

    Stemming is the extraction of the lexical root or stem by applying morphological heuristics to remove affixes from words before indexing them. For example, several Arabic words share the same root.
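These four steps can be sketched as a tiny pipeline in Python. This is an illustration only: the miniature stop-word list, the regex tokenizer, and the prefix-stripping "stemmer" are hypothetical stand-ins, not the ISRI or Khoja stemmers mentioned later in the paper.

```python
import re

STOP_WORDS = {"في", "من", "على"}   # hypothetical miniature stop-word list

def preprocess(text):
    """Tokenize, drop noise, remove stop words, then lightly stem."""
    tokens = re.split(r"[\s.!?]+", text)                          # tokenization
    # noise removal: tokens of one or two characters and digit strings
    tokens = [t for t in tokens
              if len(t) > 2 and not any(c.isdigit() for c in t)]
    tokens = [t for t in tokens if t not in STOP_WORDS]           # stop words
    # toy "stemming": strip the Arabic definite-article prefix
    return [t[2:] if t.startswith("ال") and len(t) > 4 else t for t in tokens]

print(preprocess("الكتاب في المدرسة 123"))   # → ['كتاب', 'مدرسة']
```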

    3.2 Feature Extraction

    After the preprocessing phase, the dataset must be prepared in a suitable form to start the learning phase. Consequently, the most relevant text features are extracted and converted into vectors. The vector space is represented as a two-dimensional matrix, where the columns denote the features and the rows denote the documents (reviews). The entries of the matrix are the weights of the features in their corresponding reviews. The TF–IDF scheme is employed to assign weights to terms [30]. The weight is determined from Eqs. (1)–(3) as follows:

    TF(i, j) is the frequency of term i in review j (Eq. (1)). IDF(i) = log(N / n_i) measures how rare feature i is across all N reviews, where n_i is the number of reviews containing it (Eq. (2)). Finally, the weight of feature i in review j is W(i, j) = TF(i, j) × IDF(i), calculated by Eq. (3).
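The weighting scheme can be sketched in Python. This is a minimal illustration over a toy corpus; the paper does not state which IDF variant it uses, so the plain log(N / n_i) form described above is assumed.

```python
import math

def tf_idf(reviews):
    """Build the review-by-term TF-IDF matrix described above.

    reviews: list of tokenized reviews (lists of terms).
    Returns (vocabulary, matrix) where matrix[j][i] is W(i, j).
    """
    vocab = sorted({t for r in reviews for t in r})
    n = len(reviews)
    # n_i: number of reviews containing term i
    df = {t: sum(1 for r in reviews if t in r) for t in vocab}
    matrix = []
    for review in reviews:
        row = []
        for term in vocab:
            tf = review.count(term)             # TF(i, j)
            idf = math.log(n / df[term])        # IDF(i)
            row.append(tf * idf)                # W(i, j) = TF * IDF
        matrix.append(row)
    return vocab, matrix

vocab, m = tf_idf([["good", "movie"], ["bad", "movie"]])
# "movie" appears in every review, so its IDF (and weight) is 0
```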

    3.3 Harris Hawks Optimization

    The HHO [31] is a new SI algorithm inspired by the cooperative behavior of Harris hawks in hunting prey. Harris hawks demonstrate various chasing styles depending on the dynamic nature of circumstances and the escaping patterns of the prey. In this intelligent strategy, several Harris hawks cooperatively attack from different directions and simultaneously converge on a prey detected outside cover, showing different hunting strategies. The candidate solutions are the Harris hawks, and the intended prey is the best candidate solution (nearly the optimum) at each step. The three phases of the HHO algorithm are as follows: the exploration phase, the transition from exploration to exploitation, and the exploitation phase.

    3.3.1 The Exploration Phase

    The hunting is modeled as follows:

    x_i(t + 1) = x_rand(t) − τ1 |x_rand(t) − 2 τ2 x_i(t)| if τ5 ≥ 0.5
    x_i(t + 1) = (x_best(t) − x_mean(t)) − τ3 (lb_j + τ4 (ub_j − lb_j)) if τ5 < 0.5 (4)

    where x_i(t) and x_i(t + 1) denote the current position of the ith hawk and its new position at iteration t + 1; x_rand and x_best are a randomly selected hawk location and the best solution (target: rabbit). The lower and upper bounds of the jth dimension are denoted by lb_j and ub_j; τ1–τ5 are random numbers in the interval [0, 1]. The average hawk position x_mean(t) is defined as follows:

    x_mean(t) = (1/N) Σ_{i=1}^{N} x_i(t) (5)

    In Eq. (4), the first scenario (τ5 ≥ 0.5) grants the hawks a chance to hunt randomly, spreading over the search space; the second scenario describes the context in which the hawks hunt beside other hawks close to a target.
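The exploration update of Eqs. (4) and (5) can be sketched as follows. This is a pure-Python illustration; clamping out-of-bound components back into [lb, ub] is an assumption, since the text only defines the bounds.

```python
import random

random.seed(0)

def explore(X, x_best, lb, ub):
    """Exploration-phase update (Eq. (4)). X is a list of hawk positions
    (lists); each hawk moves either relative to a random hawk or relative
    to the best solution and the swarm mean, depending on tau5."""
    D = len(X[0])
    # Eq. (5): average hawk position, component by component
    x_mean = [sum(x[j] for x in X) / len(X) for j in range(D)]
    X_new = []
    for x in X:
        t1, t2, t3, t4, t5 = (random.random() for _ in range(5))
        if t5 >= 0.5:
            x_rand = random.choice(X)
            new = [x_rand[j] - t1 * abs(x_rand[j] - 2 * t2 * x[j])
                   for j in range(D)]
        else:
            new = [(x_best[j] - x_mean[j]) - t3 * (lb + t4 * (ub - lb))
                   for j in range(D)]
        # keep positions inside the search bounds
        X_new.append([min(max(v, lb), ub) for v in new])
    return X_new

X = [[random.random() for _ in range(5)] for _ in range(4)]  # 4 hawks, 5 terms
X = explore(X, X[0], 0.0, 1.0)
```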

    3.3.2 The Transformation from Exploration to Exploitation

    In this phase, the prey attempts to escape from capture, so its escaping energy En decreases gradually. The energy is given by

    En = 2 En0 (1 − t/T) (6)

    where the initial energy En0 = 2*rand − 1 is randomly changed inside (−1, 1), and T is the maximum number of iterations. HHO remains in the exploration mode as long as |En| ≥ 1, and the hawks continue exploring global regions, whereas it swaps into exploitation mode when |En| < 1. R refers to the escaping probability of the target.
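The energy schedule of Eq. (6) and the resulting phase switch can be sketched as:

```python
import random

def escaping_energy(t, T, en0):
    """Eq. (6): energy decays linearly from 2*en0 at t = 0 to 0 at t = T."""
    return 2 * en0 * (1 - t / T)

def phase(en):
    """|En| >= 1 keeps HHO in exploration; otherwise exploitation."""
    return "exploration" if abs(en) >= 1 else "exploitation"

en0 = 2 * random.random() - 1          # En0 drawn from (-1, 1)
print(phase(escaping_energy(0, 100, 1.0)))    # → exploration
print(phase(escaping_energy(99, 100, 1.0)))   # → exploitation
```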

    3.3.3 The Exploitation Phase

    This phase aims to avoid falling into local optima. According to the value of the escaping energy and the value of R, four strategies are applied, named: surrounding soft, surrounding hard, surrounding soft beside advanced rapid dives, and surrounding hard beside advanced rapid dives.

    The first task (surrounding soft): The surrounding soft can be formulated mathematically when R ≥ 1/2 and the level of energy is greater than 1/2 (i.e., |En| ≥ 1/2), given by

    x_i(t + 1) = Δx(t) − En |J · x_best(t) − x_i(t)|, Δx(t) = x_best(t) − x_i(t) (7)

    where Δx(t) denotes the distance between the best prey (a rabbit) and the ith hawk's current location, J = 2(1 − τ6) denotes the prey's random jump strength, and τ6 is a random number between 0 and 1.

    The second task (surrounding hard): When the level of energy is less than 1/2 (|En| < 1/2) and R ≥ 1/2, the rabbit becomes exhausted, and the possibility of escaping is low (escaping becomes hard) because its energy level has decreased. This behavior can be modeled by

    x_i(t + 1) = x_best(t) − En |Δx(t)| (8)

    The third task (surrounding soft beside advanced rapid dives): This task is applicable when the level of energy is greater than 1/2 (|En| ≥ 1/2) and R < 1/2, where the rabbit still has sufficient energy to run away. Hence, the hawk performs progressive dives to take the best position for catching the rabbit. This behavior is modeled by integrating the Lévy flight function [32].

    The position of the ith hawk is modified to

    x_i(t + 1) = Y if fit(Y) < fit(x_i(t)), otherwise Z

    with

    Y = x_best(t) − En |J · x_best(t) − x_i(t)|, Z = Y + rv × Lv(D)

    where D is the dimensionality of the space, rv contains D components generated randomly in the interval (0, 1), Lv represents the Lévy flight function, β is a constant with default β = 1.5, and fit indicates the fitness function computed by Eq. (9).

    The fourth task (surrounding hard beside advanced rapid dives): In this task, it is assumed that R < 1/2 and the level of energy is less than 1/2 (|En| < 1/2); the prey has too low an energy level to escape, and the hawks are close enough to perform successive dives for catching it. This process uses the same selection between Y and Z as above, with the dive taken relative to the average position: Y = x_best(t) − En |J · x_best(t) − x_mean(t)| and Z = Y + rv × Lv(D).
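The four besiege tasks can be gathered into one dispatch function. This is a simplified sketch: the jump strength J = 2(1 − τ6) and the Mantegna-style Lévy step with β = 1.5 follow the standard HHO formulation, and fit is any fitness callable.

```python
import math
import random

random.seed(1)
BETA = 1.5  # Levy flight exponent (default in the paper)

def levy(D):
    """Levy flight steps via Mantegna's algorithm with beta = 1.5."""
    sigma = (math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2)
             / (math.gamma((1 + BETA) / 2) * BETA
                * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)
    return [0.01 * random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / BETA)
            for _ in range(D)]

def exploit(x, x_best, x_mean, en, R, fit):
    """Dispatch among the four besiege strategies described above."""
    D = len(x)
    J = 2 * (1 - random.random())        # prey's random jump strength
    if R >= 0.5 and abs(en) >= 0.5:      # task 1: surrounding soft (Eq. (7))
        return [(x_best[j] - x[j]) - en * abs(J * x_best[j] - x[j])
                for j in range(D)]
    if R >= 0.5:                         # task 2: surrounding hard (Eq. (8))
        return [x_best[j] - en * abs(x_best[j] - x[j]) for j in range(D)]
    # tasks 3 and 4: progressive rapid dives with Levy flights
    base = x if abs(en) >= 0.5 else x_mean
    Y = [x_best[j] - en * abs(J * x_best[j] - base[j]) for j in range(D)]
    Z = [Y[j] + random.random() * s for j, s in enumerate(levy(D))]
    return Y if fit(Y) < fit(x) else Z   # keep the more promising dive

# Surrounding hard: R = 0.6 >= 1/2 and |En| = 0.4 < 1/2
pos = exploit([1.0, 1.0], [0.0, 0.0], [0.5, 0.5],
              en=0.4, R=0.6, fit=lambda v: sum(c * c for c in v))
# pos == [-0.4, -0.4]
```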

    The general steps of HHO are depicted in Algorithm 1.


    4 HHO for AOM

    This section explains the process of AOM using HHO in depth. After extracting the TF–IDF matrix, HHO aims to keep the relevant terms by ensuring a compromise between high accuracy and a low number of selected features. The following steps summarize the AOM procedure.

    4.1 Initialization Phase

    In this step, HHO generates N swarm agents in the first population, where each individual represents a set of terms (features) to be selected for evaluation.The population X is generated as follows:

    Each component of candidate solution i is generated as x_{i,j} = Min_j + δ_j (Max_j − Min_j), where the minimum and maximum bounds, Min_j and Max_j respectively, are in the range [0, 1], and δ_j is a random number between 0 and 1. An intermediate binary conversion step is necessary before fitness evaluation to select a subset of terms. So, each solution x_i is converted using a binary (thresholding) operator as follows:

    xb_{i,j} = 1 if x_{i,j} > 0.5, otherwise 0 (14)

    For example, suppose we generate a solution x_i that contains five TF–IDF terms, x_i = [0.6, 0.2, 0.9, 0.33, 0.75]. The conversion of Eq. (14) produces the binary vector xb_i = [1, 0, 1, 0, 1], where one means selected and zero means deselected. It means that the first, third, and last terms of the original dataset are relevant and should be selected, whereas the others are irrelevant features and should be eliminated. After determining the subset of selected terms, the fitness function is calculated for each agent to determine the quality of these features. The fitness of the ith solution is defined by

    fit_i = ζ · Er_i + (1 − ζ) · (d_i / D) (15)

    where ζ = 0.99 is the equalizer parameter employed to balance the classification error rate (Er_i = 1 − Accuracy) against the number of selected terms (d_i), and D is the total number of terms in the original dataset. The k-NN is utilized as the classifier in the FS cycle. The hold-out strategy is used, dividing the dataset into training and test sets of 80% and 20%. Er_i denotes the error rate on the test set computed by k-NN [33]. The agent with the lowest fitness value is assigned as the best prey (x_best).
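The thresholding of Eq. (14) and the fitness of Eq. (15) can be sketched as follows; the error-rate callable is a stub here, whereas in the paper it is the k-NN error under the 80/20 hold-out split.

```python
def binarize(x, threshold=0.5):
    """Eq. (14): keep a term when its component exceeds the threshold."""
    return [1 if v > threshold else 0 for v in x]

def fitness(x, error_rate, zeta=0.99):
    """Eq. (15): fit = zeta * Er + (1 - zeta) * d / D, where d is the
    number of selected terms and D the total number of terms."""
    xb = binarize(x)
    d, D = sum(xb), len(xb)
    return zeta * error_rate(xb) + (1 - zeta) * d / D

# Worked example from the text: x_i = [0.6, 0.2, 0.9, 0.33, 0.75]
xi = [0.6, 0.2, 0.9, 0.33, 0.75]
print(binarize(xi))                        # → [1, 0, 1, 0, 1]
# With a stub error rate of 0.2: fit = 0.99*0.2 + 0.01*(3/5) = 0.204
print(round(fitness(xi, lambda xb: 0.2), 3))   # → 0.204
```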

    4.2 Updating Phase

    The process of updating solutions consists of the exploration phase, which applies a global search when the escaping energy is greater than one. Afterward, the transformation from exploration to exploitation is applied. Then, the exploitation phase is employed, which contains four tasks: surrounding soft, surrounding hard, surrounding soft beside advanced rapid dives, and surrounding hard beside advanced rapid dives.

    The process is repeated until the termination condition is met. The stop criterion corresponds to the maximum number of iterations used to evaluate the HHO algorithm's performance. Then, the best solution x_best is returned and converted to determine the set of relevant features.

    The ASA framework requires three steps: data preprocessing, feature extraction, and FS using HHO. In the first step (data preprocessing), the Arabic reviews are treated by tokenization, noise removal, stop word suppression, and stemming. The second step converts the text to a vector space model by weighting each term using TF–IDF. The third step uses HHO as a wrapper FS. The detailed ASA framework using HHO is shown in Fig. 1.

    Figure 1: The proposed model of HHO for Arabic opinion mining

    5 Experimental Results and Discussion

    In this section, several tests and experiments were performed to determine the efficiency of HHO for ASA. Two datasets, OCA and AJGT, are exploited to automatically determine the nature of an opinion review (positive/negative). First, the experimental results are compared with those of well-known population-based algorithms (GWO, SCA, and GOA), each tested 30 times using ten search agents and T = 100. Second, the performance of HHO is compared with works in the literature that used the same datasets, namely WOA, IWOA, SSA, GA, PSO, DE, and ALO.

    5.1 Datasets Description

    • AJGT dataset: This data was gathered from Twitter on various subjects, including arts and politics. It contains 2000 Arabic tweet reviews, 1000 positive and 1000 negative. Due to the differences between Modern Standard Arabic and the Jordanian dialect, this data presents a significant challenge [34].

    • OCA dataset: This data was compiled from Arabic film blogs and web pages devoted to Arabic film reviews. There are 500 Arabic reviews, evenly divided into binary categories (250 positive and 250 negative) [35].

    After the preprocessing steps, the TF–IDF extraction yields 3054 and 9404 terms for the AJGT and OCA datasets, respectively.

    5.2 Parameters Settings

    The parameter settings of the GWO, SCA, GOA, and HHO algorithms are listed in Tab. 1.

    Table 1: Parameter settings of the SI algorithms

    5.3 Evaluation Measures

    To investigate the efficiency of the HHO algorithm for ASA-based FS, we first define the confusion matrix depicted in Tab. 2. Then, specific metrics are evaluated: Accuracy (Ac), Recall (Re), Precision (Pr), and F-score (Fsc).

    Table 2: Confusion matrix

    • TP: The classifier correctly identifies the text as a positive opinion.

    • TN: The classifier correctly identifies the text as a negative opinion.

    • FP: The classifier identifies the text as a positive opinion although the actual label indicates that the review is negative.

    • FN: The classifier identifies the text as negative although the actual label indicates a positive review.

    In this study, we note that the HHO algorithm is executed 30 times, so all metrics are expressed as averages with their standard deviations. In addition, to compare the efficiency of HHO, three meta-heuristic algorithms (GWO, SCA, and GOA) were employed under the same conditions.

    • Mean accuracy (μAc): The accuracy metric (Ac) represents the rate of correct data classification, given by

    Ac = (TP + TN) / (TP + TN + FP + FN)

    The number of runs is fixed to 30, so the mean accuracy μAc is calculated as follows:

    μAc = (1/30) Σ_{k=1}^{30} Ac_k

    • Average recall (μRe): The recall metric (Re), also called the true positive rate, indicates the percentage of positive reviews correctly predicted, given by

    Re = TP / (TP + FN)

    Thus, μRe is calculated from the best prey (x_best) using

    μRe = (1/30) Σ_{k=1}^{30} Re_k

    • Average precision (μPr): The precision (Pr) indicates the rate of correctly predicted positive samples, given by

    Pr = TP / (TP + FP)

    Thus, the average precision (μPr) can be computed by the following equation:

    μPr = (1/30) Σ_{k=1}^{30} Pr_k

    • Average fitness value (μfit): The fitness value metric evaluates the performance of the algorithms, relating the minimization of the classification error rate to the reduction of the selection ratio, as in Eq. (15). The average fitness value is given by

    μfit = (1/30) Σ_{k=1}^{30} fit_k

    • The average size of selected features (μsize): This metric represents the number of relevant features. It is computed as follows:

    μsize = (1/30) Σ_{k=1}^{30} |xbest_k|

    where |xbest_k| denotes the cardinality of the best agent's selected features for the kth execution.

    • Mean F-score (μFsc): This metric represents the harmonic mean of recall and precision. It is commonly used for balanced data and can be computed as follows:

    Fsc = 2 · Pr · Re / (Pr + Re)

    Thus, the mean F-score can be determined by

    μFsc = (1/30) Σ_{k=1}^{30} Fsc_k

    • Average CPU time (μCpu): This is the average computation time of each method, given by

    μCpu = (1/30) Σ_{k=1}^{30} Cpu_k

    • Standard deviation (σ): This measures the stability of each algorithm across the different executions. It is calculated for all metrics defined above.
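The metrics above can be computed directly from the confusion-matrix counts (the two sample runs below are made-up numbers for illustration, not results from the paper):

```python
import statistics

def metrics(tp, tn, fp, fn):
    """Evaluation measures from one confusion matrix."""
    ac = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    re = tp / (tp + fn)                    # recall (true positive rate)
    pr = tp / (tp + fp)                    # precision
    fsc = 2 * pr * re / (pr + re)          # F-score (harmonic mean)
    return ac, re, pr, fsc

# Averages and standard deviation over independent runs (2 shown; 30 in the paper)
runs = [metrics(90, 85, 15, 10), metrics(88, 87, 13, 12)]
mu_ac = statistics.mean([r[0] for r in runs])
sigma_ac = statistics.stdev([r[0] for r in runs])
```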

    5.4 Results and Discussions

    In terms of the average and standard deviation of fitness and CPU time, Tab. 3 reports the mean fitness values obtained by the HHO, GWO, SCA, and GOA algorithms. It can be deduced that HHO outperformed the others on both the AJGT and OCA datasets. The GWO and SCA ranked second for the AJGT and OCA datasets, respectively, and the GOA is the worst optimizer on both datasets. The CPU time consumed by HHO and its counterparts is listed in Tab. 4. From the results, it can be observed that the SCA is very fast, especially for the OCA dataset, where the number of reviews is lower, whereas HHO and GOA require more time; their more complex exploitation/exploration operators explain this behavior. For both datasets, the SCA takes the lowest time due to its simple updating operator based on trigonometric functions.

    Table 3: Performance of HHO and counterpart algorithms for AOM based on the fitness metric

    In terms of the mean and standard deviation of accuracy and selected features, the performance of the four swarm competitor algorithms is illustrated in Tabs. 5 and 6. It is essential to highlight that the HHO achieves a high classification accuracy of 88.28% while keeping 2042 of 3054 features for the AJGT dataset. In addition, it can be observed that the HHO recognizes most of the OCA reviews correctly, with 96.40% accuracy. Moreover, the SCA finds the most informative features, exhibiting high accuracy for both datasets. Similar performance was seen between the GWO and SCA for the OCA dataset, with a slight advantage of 0.8% in average accuracy for SCA. From Tab. 6, it can be seen that SCA determines the most compact set of terms, keeping 1178 of the 3054 terms provided by TF–IDF for the AJGT dataset. Further, the HHO can eliminate 7459 irrelevant terms for the OCA dataset.

    Table 4: Performance of HHO and counterpart algorithms for AOM based on CPU time

    Table 5: Performance of HHO and counterpart algorithms for AOM based on the accuracy metric

    Table 6: Performance of HHO and counterpart algorithms for AOM based on selected feature set size

    In terms of the average and standard deviation of the recall and precision metrics, the comparison of the four meta-heuristic algorithms is illustrated in Tabs. 7 and 8. The performance of HHO in terms of recall and precision is better than that of all counterpart algorithms for both datasets. A clear advantage is observed for the HHO in terms of the standard deviation of the recall and precision metrics, due to a good balance between the exploration and exploitation operators, which gives the algorithm more stability.

    Table 7: Performance of HHO and counterpart algorithms for AOM based on the recall metric

    In terms of the mean and standard deviation of the F-score, Tab. 9 summarizes the results. For both datasets, the HHO outperforms the other algorithms in terms of average F-score. The advantage is larger for the OCA dataset than for the AJGT dataset, because the former uses standard Arabic instead of the Jordanian dialect. In addition, low standard deviation values are obtained, which indicates stability.

    Table 8: Performance of HHO and counterpart algorithms for AOM based on the precision metric

    Table 9: Performance of HHO and counterpart algorithms for AOM based on the F-score metric

    5.5 A Numerical Example of HHO Based AOM

    To illustrate the process of the HHO algorithm for AOM-based feature selection in depth, a numerical example of selecting the important terms extracted by TF–IDF is presented. We consider a population with four solutions (Popinit) over the 3054 words (features) of the AJGT dataset, as shown in Tab. 10.

    Table 10: Initial population in the range [0, 1]

    After initializing the first population, we evaluate the fitness of each solution. This step requires an intermediate process called binary conversion, based on a thresholding operator, as depicted in Tab. 11: if the value is greater than 0.5, the word is selected; otherwise, it is eliminated. The fitness is then computed by introducing a k-NN classifier, which allows assessing each solution using Eq. (15). It can be seen that the second solution is the best, with a fitness value of 0.2431.

    For each solution, some control parameters are generated to apply the adequate step of HHO (exploration, transition from exploration to exploitation, or exploitation). Tab. 12 shows the value of the escaping energy (En) computed by Eq. (6) and a random number (R). The last column indicates the adequate HHO operator to be applied.

    Table 11: The binary operator

    Table 12: The values of (En, R) for applying HHO operators

    First, the value of the escaping energy (En) is evaluated, while R is randomly generated in the range [0, 1]. By inspecting the obtained values of (En, R), we can conclude that Sol1, Sol2, and Sol4 will be transmitted to the exploitation step by applying the surrounding hard strategy using Eq. (8), whereas Sol3 is updated by surrounding soft using Eq. (7).

    Updating the values using the previous operators (soft and hard surrounding) creates a new temporary population, illustrated in Tab. 13.

    Table 13: The novel temporary population obtained by exploitation mode

    The bounds of each component must be checked to respect the range between 0 and 1. This process is illustrated in Tab. 14.

    Table 14: The check of lower and upper bounds

    The second iteration verifies the values of the escaping energy provided in Tab. 15. It can be seen that all values are less than 1, which requires generating a random number R. This parameter determines the adequate strategy of the exploitation step, as illustrated in Tab. 15.

    Table 15: The control parameters En and R

    Based on the evaluation of fitness between X1 and each solution, described in Tab. 16, we determine the novel population shown in Tab. 17.

    Table 16: The fitness comparison

    Table 17: The novel population

    In the third iteration, we can see that the HHO algorithm generates higher values of the escaping energy (En), as shown in Tab. 18 (all values are greater than 1). So, the HHO applies the exploration mode defined by Eq. (4) to each solution. In this operator, the random number (τ5) updates the solution according to two scenarios.

    In the first scenario, the hawks hunt randomly, spread over the search space, while in the second scenario, the hawks hunt beside family members close to a target (the best solution).

    Tab. 19 illustrates the final population determined by the exploration mode. In this step, we compare each solution from the previous iteration with the current population to select the best ones using the fitness metric, as illustrated in Tab. 20. We can also conclude that the third solution is the best solution (rabbit) because it has the lowest fitness value. Based on this solution, Word3 is a significant feature, while Word1, Word2, Word3053, and Word3054 are irrelevant features.

    Table 18: The values of En and τ5

    Table 19: The novel population

    Table 20: The fitness comparison

    5.6 A Comparative Study with Literature Review

    Three works from the literature are selected. Figs. 2 and 3 show the results of the counterpart optimizers (WOA, PSO, GA, DE, SSA, IWOA, and ALO) [26,27] to investigate the swarming behavior of the HHO deeply for AOM.

    In terms of accuracy and selection ratio over ASA (AJGT and OCA datasets), Fig. 2 shows that the HHO outperforms all optimizers except the IWOA, which exhibits identical performance in terms of mean accuracy and selection ratio for the OCA dataset.

    In conclusion, the HHO attains the best performance in terms of accuracy and selection ratio: the highest accuracy equals 96%, and the lowest selection ratio reaches 40%, which means 60% of the irrelevant features are eliminated. So, a good compromise is ensured between accuracy and selection ratio. Also, for the OCA dataset, HHO outperforms four optimizers (DE, WOA, PSO, and ALO) in terms of accuracy and matches the performance of IWOA and GA. Furthermore, in terms of selection ratio, HHO outperforms all optimizers except IWOA, which provides the same performance as HHO.

    Figure 2: The comparative study of HHO with the state of the art on the OCA dataset

    Figure 3: The comparative study of HHO with the state of the art on the AJGT dataset

    From Fig. 3, HHO achieves the highest accuracy, 88%; however, its selection ratio ranks last. This behavior can be attributed to the informal Jordanian dialect, which represents a real challenge in the preprocessing step. The choice of stemmer (ISRI or KHODJA) also influences the selection ratio.

    In addition, the initial term sets extracted in [26,27] are smaller than in our study (2257 terms for AJGT in [26] versus 3054 in our study), which yields a lower selection ratio compared with HHO. Analyzing Fig. 3 also reveals a slight advantage for GA in terms of selection ratio on the AJGT dataset.

    6 Conclusion

    The use of social networks allows people to express their opinions freely. Hence, automatic SA has become essential, especially in e-commerce, catering, and hotel services. Several studies have been conducted on SA for languages such as English and Spanish. However, few works have been devoted to the Arabic language despite its practical use and importance. This study focuses on SA of the Arabic language using the HHO technique, which mimics the behavior of Harris hawks. The main objective is to ensure a compromise between a high accuracy rate and a reduced number of significant attributes. HHO provides a good balance, especially for OCA rather than AJGT, because the latter dataset contains opinions in an informal dialectal language. For this reason, the number of significant attributes remains higher than in the literature; moreover, the choice of stemmer (ISRI or KHODJA) plays an essential role in the feature selection process.

    The studied approach shows precise performance on both the AJGT and OCA datasets in terms of accuracy, but it requires more time than the other algorithms. As future work, we will consider bio-inspired algorithms that are more powerful in terms of both performance and response time.

    Funding Statement:This research was supported by Misr International University (MIU), (Grant Number.DSA28211231302952) to Diaa Salama, https://www.miuegypt.edu.eg/.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
