
    A Semi-Supervised Approach for Aspect Category Detection and Aspect Term Extraction from Opinionated Text

Computers Materials & Continua, October 2023

Bishrul Haq, Sher Muhammad Daudpota, Ali Shariq Imran, Zenun Kastrati and Waheed Noor

1 Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Pakistan

2 Department of Computer Science, Norwegian University of Science and Technology (NTNU), Gjøvik, 2815, Norway

3 Department of Informatics, Linnaeus University, Växjö, 35195, Sweden

4 Department of Computer Science & IT, University of Balochistan, Quetta, 87300, Pakistan

ABSTRACT The Internet has become one of the significant sources for sharing information and expressing users' opinions about products, their interests, and the associated aspects. It is essential to learn from product reviews; however, to react to such reviews, extracting the aspects of the entity to which these reviews belong is equally important. Aspect-Based Sentiment Analysis (ABSA) refers to extracting aspects from an opinionated text. The literature proposes different approaches for ABSA; however, most research focuses on supervised approaches, which require labeled datasets with manual sentiment polarity labeling and aspect tagging. This study proposes a semi-supervised approach with minimal human supervision to extract aspect terms by detecting aspect categories. Hence, the study deals with the two main sub-tasks in ABSA, namely Aspect Category Detection (ACD) and Aspect Term Extraction (ATE). In the first sub-task, aspect categories are extracted using topic modeling, filtered further by an oracle, and fed to zero-shot learning as the prompts together with the augmented text. The predicted categories then become the input for finding similar phrases among meaningful phrases extracted from the text (e.g., Nouns, Proper Nouns, NER (Named Entity Recognition) entities) to detect the aspect terms. The study sets a baseline accuracy for the two main sub-tasks in ABSA on the Multi-Aspect Multi-Sentiment (MAMS) dataset along with SemEval-2014 Task 4 subtask 1 to show that the proposed approach helps detect aspect terms via aspect categories.

KEYWORDS Natural language processing; sentiment analysis; aspect-based sentiment analysis; topic modeling; POS tagging; zero-shot learning

    1 Introduction

Technological advancement has enabled countless ways to write, read, and share opinions, wherein web technologies play a significant role in everyday life. It has paved the way for the swift growth of many communication mediums such as social networks, online forums, product review sites, blogging sites, website reviews, etc. [1–4]. These mediums are primarily used to express opinions or sentiments embedded within the text, which can be identified with the help of sentiment analysis, a sub-field of Natural Language Processing (NLP). The sentiment can be represented with its polarity being positive, negative, or neutral [5,6], or on a multi-point scale ranging from very good to very bad [7,8].

Sentiment analysis is considered a dominant task in many areas, tackled with Machine Learning, Deep Learning, and rule-based approaches that categorize text into relevant classes (e.g., positive, negative, neutral) [9,10]. Although sentiment analysis is an important task, traditional approaches mainly detect sentiment from the text as a whole; it is more effective when coupled with Aspect-Based Sentiment Analysis (ABSA), which extracts the correct sentiments for the aspects mentioned in the opinionated text. ABSA can be broken down into Aspect Term Extraction and Aspect Polarity Detection [11,12].

The present literature for ABSA is dominantly centered on sentiment detection in opinionated text. Still, it is equally imperative to extract the aspect categories and terms about which the sentiment has been expressed [1]. The literature proposes many techniques for extracting different utterances from text, primarily supervised learning-based approaches. Aspect extraction approaches can be further grouped into three main categories: lexicon-based, machine learning, and hybrid approaches [13].

    1.1 Approaches in ABSA

Lexicon-based approaches are widely used across many ABSA studies, in which SentiWordNet is generally used to allocate a lexicon score to each word in a sentence [14,15]. Moreover, Term Frequency-Inverse Document Frequency (TF-IDF) and Part-of-Speech (POS) tagging have been applied alongside aspect extraction to differentiate characteristics or highlight concluding words in sentences, leading to good performance in aspect extraction [16,17]. POS tagging refers to labeling each word in a sentence with its relevant part of speech, such as noun, adverb, verb, etc. [18,19], a vital pre-processing step in NLP. It can represent linguistic rules, random patterns, or a combination of both [20].

Machine learning is another area widely seen in the ABSA domain for finding the aspects of an opinionated text. It makes use of the two main sub-categories of machine learning: supervised learning (including Support Vector Machine (SVM), Naive Bayes (NB), Logistic Regression (LR), and K-Nearest Neighbor (KNN) algorithms) and unsupervised learning [21–23]. Recent advancements in ABSA have also seen many approaches based on deep neural networks. Several algorithms are employed to find the best-performing classifier on ABSA sub-tasks, including Fully Connected Deep Neural Networks (FC-DNN), Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM). In recent studies, Transformer-based approaches [24–27] have also been used to produce better accuracy by using an attention mechanism [28].

Hybrid approaches combining lexicon- and machine learning-based techniques are on the rise in ABSA. Several studies have applied lexicons for sentiment identification [29] and Machine Learning and Deep Learning techniques [4] to extract the aspects, whereas both supervised and unsupervised techniques are used alongside topic modeling [30] and Named Entity Recognition [31]. In recent years, many state-of-the-art Transfer Learning methods have been proposed, including zero-shot learning [32], XLM-R [33], BERT [34], etc., in the hybrid domain under ABSA.

Moreover, prompt-based approaches such as Few-Shot and Zero-Shot learning have recently been applied in ABSA [35], in which prompts are given to a pre-trained model to complete the task. The usage of pre-trained models for Zero-Shot Learning (ZSL) has significantly improved ABSA due to their ability to understand linguistic features and their better text encoders [36].

Several challenges remain in the present state of ABSA: most studies are centered around supervised approaches that demand labeled datasets, while unsupervised approaches mainly depend on broader linguistic patterns, seed words, etc., which makes it more challenging for a model to extract aspects regardless of the domain [1]. Hence, this study formulates a robust process to address these challenges via a semi-supervised approach in which the oracle (human) only chooses among categories suggested by the study; the approach then detects categories, and the categories are further used to extract the aspect terms.

1.2 Study Objective & Research Questions

The study aims to validate and enhance ABSA with a domain-independent semi-supervised approach. The focused questions based on the study objectives serve to detect aspect categories and aspect terms from users' textual reviews with minimal human supervision. The study addresses the following research questions (RQ).

    ■RQ1:Can a semi-supervised approach facilitate the extraction of aspect terms and categories from text reviews with minimal human supervision?

■RQ2: To what extent can domain-independent semi-supervised techniques be exploited to extract meaningful aspect categories and terms from opinionated text?

    ■RQ3:What is the performance of the models in extracting aspect categories and terms from the opinionated text?

    1.3 Study Contribution

The study makes the following main contributions:

    ■A domain-independent semi-supervised approach to detect aspect terms and categories from the opinionated text.

    ■Detect aspect terms based on aspect categories by employing a novel two-step approach.

    ■Report baseline accuracy on the MAMS dataset for ACD and ATE subtasks.

This study employs two main steps to detect aspect terms and categories. Categories are derived by topic modeling, which serves as the input to the second approach for extracting aspect terms. Before the two approaches, the study implements pre-processing with several techniques, such as removing stop words and special characters, filtering texts based on English, and then applying tokenization to the textual dataset. The study recognizes the main aspect categories in the first approach by employing topic modeling techniques, mainly LDA with Trigram and TF-IDF settings and BERTopic modeling. The topic models aid in extracting meaningful topics, which are then used as the prompts for zero-shot learning with augmented texts, where contextual augmentation helps extract the aspect categories. In the second approach, the predicted categories from the first approach are used as input to find similar phrases among the extracted phrases (e.g., Compound Nouns, Nouns, Proper Nouns, NER entities) in order to evaluate the performance of aspect term extraction and validate our approach.

The remaining part of the paper is structured as follows. The related works section describes similar works carried out within ABSA. The complete methodology of the proposed approach is presented in the methodology section. The study results are discussed in the results and discussion section. Finally, the conclusion section concludes the paper.

    2 Related Works

Natural Language Processing (NLP) is an area that has been applied in various domains to produce much-needed applications [37,38]. Sentiment analysis is a sub-branch of NLP [39] in which aspects are identified. ABSA includes several essential sentiment elements such as aspect term, opinion term, polarity, and category [1,40]. Further, ABSA is divided into several sub-tasks, as shown in Fig. 1, including Aspect Category Detection (ACD), Aspect Term Extraction (ATE), Opinion Term Extraction (OTE), and Aspect Sentiment Classification (ASC) [41].

Figure 1: ABSA sub-tasks

The subtasks in ABSA define the detection of an aspect or an entity. For example, in the sentence 'The Burger is tasty', the aspect term is Burger, which is addressed by the subtask ATE. The term falls under the category food, which ACD detects. OTE identifies the opinion word tasty [41]. The sentiment resonating from the opinion or term, accompanied by a sentiment polarity, is addressed by ASC.

The literature refers to ASC by many other terms, including ACSA (Aspect Category Sentiment Analysis) [26], ATSA (Aspect Term Sentiment Analysis) [26], ALSA (Aspect Level Sentiment Classification) [42], and ABSC (Aspect Based Sentiment Classification) [43]. In comparison, a limited number of studies have been reported on the ACD and ATE subtasks. The ABSA literature reports several techniques, such as Lexicon-based, Machine Learning, and Hybrid approaches [44], whereas only a few studies have been reported based on unsupervised approaches targeting ACD and ATE.

García-Pablos et al. [45] used an unsupervised approach that takes a single seed word for each aspect and its sentiment, combined with several unsupervised learning techniques. It uses Latent Dirichlet Allocation (LDA) topic modeling and assesses performance on the SemEval 2016 dataset. A similar study by [46] took seed words for each category while focusing on the ATE subtask using Guided LDA. It also uses a BERT-based semantic filter on the restaurant domain of SemEval 2014, 2015, and 2016. The study reported significant performance improvements compared to other supervised and unsupervised approaches.

Purpura et al. [47] proposed a weakly supervised setting to classify the aspects and sentiments. The study uses Non-negative Matrix Factorization (NMF) topic modeling coupled with short seed lists for each aspect. The study compares performance on the SemEval 2015 and 2016 datasets against various weakly and semi-supervised models. Most studies are centered around human supervision based on seed words with categories or aspects for each category to perform ATE, ACD, or ASC under unsupervised settings. To the best of our knowledge, no study has used the MAMS dataset for the subtasks ACD and ATE.

Zero-Shot Learning (ZSL) can predict unseen classes and has been commonly used in computer vision [48]. Kumar et al. [32] implemented a three-step approach for detecting the aspects and their sentiments. They use Bidirectional Encoder Representations from Transformers (BERT) and construct vocabularies by using part-of-speech (POS) tagging to label the text by employing a Deep Neural Network (DNN). Another study [49] applied zero-shot transfer with few-shot learning. It focuses on three ABSA tasks: Aspect Extraction, End-to-End Aspect-Based Sentiment Analysis, and Aspect Sentiment Classification, and is based on post-training several pre-trained models. Similarly, another study [50] applied zero-shot learning to classify sentiments with enhanced settings on the SemEval 2014 dataset under few-shot settings, outperforming other supervised learning approaches.

Data Augmentation in NLP is vital for generating new data, which is especially useful when data is scarce. It adds more contextual information by increasing the data size [9]. Contextual augmentation is a novel method applied in many recent NLP studies. It uses BERT models to stochastically replace words with different words according to their contextual surroundings [51]. Contextual augmentation tends to give more robust output than EDA (Easy Data Augmentation) [52]. To the best of our knowledge, no study has used contextual augmentation for ABSA.

    3 Methodology

The methodology presented in Fig. 2 explains the overall process of detecting and extracting aspects from opinionated text. The study commences with the pre-processing step and then continues with the two main sub-tasks: Aspect Category Detection (ACD) and Aspect Term Extraction (ATE).

    Figure 2:Overall methodology of the study

    3.1 Text Corpus

The study uses the MAMS (Multi-Aspect Multi-Sentiment) dataset consisting of restaurant reviews, a challenge dataset for Aspect Term Sentiment Analysis (ATSA) and Aspect Category Sentiment Analysis (ACSA) [53], as well as the SemEval 2014 dataset, whose subtask 1 addresses Aspect Term Extraction (ATE). The MAMS dataset holds separate XML files, whereas this study uses the train XML files, excluding the test and validation XML files for each sub-task.

Moreover, we have found that MAMS has not been applied to the ACD or ATE sub-tasks. The MAMS dataset has multiple aspects and longer texts compared to the SemEval 2016 restaurant domain [54]. Hence, this study sets a baseline score for the subtasks by extracting the aspect categories and terms from the train XML files, as shown in Fig. 3. The SemEval 2014 dataset subtask 1 is based on the laptop domain and has been applied in many studies for ATE; hence, we compare the performance of our approach against the reported studies.

    3.1.1 ATSA and ACSA in MAMS Dataset

The ATSA train XML file contains more than 4000 sentences with aspect terms and their sentiment polarities. Similarly, the ACSA train XML file contains more than 3000 sentences with eight aspect categories (e.g., 'staff', 'menu', 'food', 'price', 'service', 'miscellaneous', 'ambiance', 'place') and their polarities. Sample data captures of ATSA and ACSA are given below:

    3.1.2 ATE and ATSC in SemEval 2014 Dataset Subtask 1

SemEval 2014 consists of manually annotated reviews in the laptop and restaurant domains, whereas this study explores the laptop reviews, which involve two subtasks, ATE and ATSC. The dataset holds test and training sets in separate XML files. This study uses the training data, which holds more than 3000 sentences. A capture of the training sample is given below:
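SemEval-style annotation files of this kind can be loaded with Python's standard library. The sketch below is illustrative only: the `<sentence>`/`<aspectTerm>` element and attribute names are our assumption based on the publicly distributed SemEval-2014 format, and the inline sample is invented for demonstration, not taken from the dataset.

```python
import xml.etree.ElementTree as ET

# Invented inline sample mimicking the SemEval-2014 training XML layout.
SAMPLE = """
<sentences>
  <sentence id="1">
    <text>The battery life is excellent.</text>
    <aspectTerms>
      <aspectTerm term="battery life" polarity="positive" from="4" to="16"/>
    </aspectTerms>
  </sentence>
</sentences>
"""

def load_sentences(xml_string):
    """Parse sentences and their annotated aspect terms into row dicts."""
    root = ET.fromstring(xml_string)
    rows = []
    for sent in root.iter("sentence"):
        terms = [t.get("term") for t in sent.iter("aspectTerm")]
        rows.append({"text": sent.findtext("text"), "aspect_terms": terms})
    return rows

rows = load_sentences(SAMPLE)
```

The resulting list of dicts can then be converted into a data frame, as the study does before pre-processing.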

    3.1.3 Text Pre-Processing

The XML files were converted into data frames, and several pre-processing steps were applied to the data, such as removing stop words and special characters, filtering texts based on English, and then applying tokenization to the textual dataset, as shown in the example below:

■Text: Food is pretty good but the service is horrific

    ■Pre-processed text: Food pretty good service horrific

    ■Tokenized text: ['food', 'pretty', 'good', 'service', 'horrific']
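The pre-processing above can be sketched in a few lines. This is a minimal illustration: the paper does not name its stop-word source or tokenizer, so the tiny stop-word set below is a placeholder (a standard list such as NLTK's would be used in practice).

```python
import re

# Placeholder stop-word list for illustration only; the study's actual
# stop-word source is not specified in the text.
STOP_WORDS = {"is", "but", "the", "a", "an", "and", "or", "to", "of"}

def preprocess(text: str) -> str:
    """Strip special characters and stop words, as in Section 3.1.3."""
    text = re.sub(r"[^A-Za-z\s]", " ", text)  # remove special characters
    return " ".join(t for t in text.lower().split() if t not in STOP_WORDS)

def tokenize(text: str) -> list:
    """Whitespace tokenization of the pre-processed text."""
    return text.split()

clean = preprocess("Food is pretty good but the service is horrific")
tokens = tokenize(clean)  # ['food', 'pretty', 'good', 'service', 'horrific']
```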

    3.2 Topic Modeling

Topic Modeling is an analytical technique that helps filter and determine the correlations between frequent and relevant topics, helping to sort the most common keywords from the text [55]. It has been widely applied in many studies to gather meaningful words from a text corpus [56]. This study employed the most frequent topic modeling methods, Bidirectional Encoder Representations from Transformers (BERT) based BERTopic and Latent Dirichlet Allocation (LDA), to extract the most relevant topics, as shown in Fig. 4. Table 1 shows the topics gathered from each sub-task and the topics filtered with the aid of the oracle.

    Table 1:Filtered topics through topic modeling with the involvement of oracle

    3.2.1 LDA Topic Modeling

Latent Dirichlet Allocation (LDA) is one of the topic modeling methods frequently applied in ABSA to extract topics by estimating a probabilistic word model over the corpus. LDA uses the Dirichlet process to model latent topics, representing a topic by the distribution of words occurring in the corpus. A given corpus X consists of L texts, where each text x (x = 1, ..., L) consists of N_x words. The probability distribution of X is computed with the following equation [57,58]:

p(X \mid \alpha, \beta) = \prod_{x=1}^{L} \int p(\theta_x \mid \alpha) \Big( \prod_{n=1}^{N_x} \sum_{z_{x,n}} p(z_{x,n} \mid \theta_x)\, p(w_{x,n} \mid z_{x,n}, \beta) \Big) d\theta_x

where α and β are the Dirichlet priors, θ_x is the topic distribution of text x, and z_{x,n} is the topic assignment of the n-th word w_{x,n}. This study applies LDA with TF-IDF and Trigram linguistic features to extract the topics.

    Figure 4:Topic modeling

    3.2.2 BERTopic Modeling

Recent developments in Pre-trained Language Models (PLMs) have played an essential role in topic modeling. BERTopic uses word embeddings along with a class-based TF-IDF to create dense clusters, with Uniform Manifold Approximation and Projection (UMAP) to lower the dimensionality of the embeddings, which yields promising results [59]. The frequency l of a word in each class m is divided by the total number of words x in that class, and this is multiplied by the logarithm of one plus the average number of words per class o divided by the total frequency of word l across all q classes:

W_{l,m} = \frac{l}{x} \cdot \log\Big(1 + \frac{o}{\sum_{j=1}^{q} l_j}\Big)
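The class-based TF-IDF weighting described above can be sketched directly in NumPy. This is an illustrative reimplementation of the formula as stated (per-class term frequency times a log-scaled inverse frequency), not BERTopic's internal code; the matrix values are invented.

```python
import numpy as np

def class_tfidf(freq: np.ndarray) -> np.ndarray:
    """Class-based TF-IDF as described in Section 3.2.2.

    freq[m, j] is the frequency of word j in class m.
    """
    words_per_class = freq.sum(axis=1, keepdims=True)  # x: words per class
    tf = freq / words_per_class                        # l / x
    avg_words = words_per_class.mean()                 # o: average words per class
    total_freq = freq.sum(axis=0)                      # word frequency over all q classes
    idf = np.log(1.0 + avg_words / total_freq)
    return tf * idf

# Invented toy counts: 2 classes x 3 words.
freq = np.array([[4.0, 1.0, 0.0],
                 [1.0, 3.0, 2.0]])
weights = class_tfidf(freq)
```

Words concentrated in one class receive higher weights for that class, which is what makes the per-cluster top words readable as topics.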

    3.3 Aspect Category Detection

ACD is one of the sub-tasks in ABSA and categorizes the text based on its features. The study assigns categories to the text from the filtered topics, which are used as the prompts for zero-shot learning (ZSL). Contextual augmentation is used to augment the text, which serves as the input for ZSL. Fig. 5 shows the complete process of ACD.

    Figure 5:Aspect category detection

    3.3.1 Contextual Augmentation

Data augmentation (DA) is used to avoid overfitting and to build more robust models [60]. Due to its success in computer vision, DA has also been adapted to many NLP tasks [61]. Hence, this study applies contextual augmentation by randomly replacing words with different predictions based on substitution and insertion techniques [51]. This study runs several pre-trained models such as roberta-base, bert-base-uncased, and distilbert-base-uncased against the text, yielding the augmented text. A sample is shown in Table 2 (for ZSL with the 'bart-large-mnli' model).

3.3.2 Zero-Shot Learning (ZSL)

    Table 2:Contextual augmentation with different pre-trained models

Recently, several studies have applied ZSL to ABSA, specifically to Sentiment Classification [32,33]. ZSL can be formalized as follows. Let S denote the set of seen labels, where s ∈ S is a seen label, and U the set of unseen labels, where u ∈ U is an unseen label, with S ∩ U = ∅. Let X be a K-dimensional feature space, X ⊆ R^K. The labeled training set consists of pairs (x_i, y_i), where each x_i ∈ X is an instance from the feature space and y_i ∈ S is its corresponding label. The set of N_te test instances consists of elements x_te,i ∈ X, and Y_te denotes their labels, which are to be predicted [48].

This study takes the prompts for ZSL from topic modeling, and the input text is produced by contextual augmentation. We have tested two different zero-shot transformer models, 'bart-large-mnli' and 'Fb-improved-zeroshot', with insertion and substitution settings, both available in the Hugging Face library (https://huggingface.co/models). Moreover, this study sets a threshold of 0.93 with some extended selection, as shown in Algorithm 1.

■The bart-large-mnli model is trained on the MultiNLI (MNLI) dataset, which comprises spoken and written text from ten sources and genres [59].

■The Fb-improved-zeroshot model is trained on top of bart-large-mnli for English and German to classify academic search logs [60].

    Algorithm 1:Setting Threshold for ZSL Model
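Algorithm 1's listing is not reproduced in the text, so the following is only a plausible sketch of the thresholding it describes: keep every zero-shot label scoring at or above 0.93, and, as our reading of the paper's "extended selection", fall back to the single best label so that each text receives at least one category. The fallback and the example scores are assumptions, not the paper's exact procedure.

```python
THRESHOLD = 0.93  # threshold reported in Section 3.3.2

def select_categories(labels, scores, threshold=THRESHOLD):
    """Keep zero-shot labels scoring at or above the threshold.

    Falls back to the top-scoring label when nothing passes (our
    assumed reading of the 'extended selection').
    """
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    kept = [lab for lab, s in ranked if s >= threshold]
    return kept if kept else [ranked[0][0]]

# Scores as a zero-shot classifier (e.g., bart-large-mnli) might emit them.
labels = ["food", "service", "price"]
picked = select_categories(labels, [0.95, 0.94, 0.40])    # two pass
fallback = select_categories(labels, [0.60, 0.30, 0.10])  # none pass
```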

    3.4 Aspect Term Extraction

Aspect Term Extraction aims to extract the aspect terms in the opinionated text. This study extracts a set of meaningful phrases from each text using several techniques. The similarity of the phrases is then measured against the categories predicted by ZSL using semantic similarity with a transformer model. Fig. 6 shows the complete process of ATE.

    Figure 6:Aspect term extraction

    3.4.1 Extracted Phrases

This study extracts the phrases by employing several techniques. Initially, COMPOUND NOUNS, NOUNS, and PROPER NOUNS are extracted with POS tagging. Secondly, NER annotations are extracted for the applicable entities (e.g., organizations, locations, etc.) using spaCy. Finally, the phrases which contain an ADJECTIVE are removed from the set of phrases, as shown in Table 3. The steps for phrase extraction are shown in Algorithm 2.

    Table 3:Extracted phrases for each text ranging from L1 to L3

    Algorithm 2:Process of Extracting the Phrases
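The noun-phrase part of Algorithm 2 can be sketched over pre-tagged tokens. To stay self-contained, the sketch takes (token, POS) pairs standing in for spaCy output and merges consecutive NOUN/PROPN tokens into compound nouns; the NER step is omitted, and adjectives never enter a candidate phrase, mirroring the paper's removal of adjective-bearing phrases. The tagged example is invented.

```python
def extract_phrases(tagged):
    """Merge consecutive NOUN/PROPN tokens into candidate aspect phrases.

    `tagged` is a list of (token, POS) pairs standing in for spaCy
    output; the paper's NER-based additions are omitted here.
    """
    phrases, current = [], []
    for token, pos in tagged + [("", "END")]:  # sentinel flushes the last run
        if pos in {"NOUN", "PROPN"}:
            current.append(token)
        else:
            if current:
                phrases.append(" ".join(current))
                current = []
    return phrases

# Invented tagging for: "the spicy tuna roll was great"
tagged = [("the", "DET"), ("spicy", "ADJ"), ("tuna", "NOUN"),
          ("roll", "NOUN"), ("was", "VERB"), ("great", "ADJ")]
phrases = extract_phrases(tagged)  # ['tuna roll']
```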

    3.4.2 Semantic Similarity

The study compares the similarity of the phrases against the categories predicted by ZSL by converting them into embedding vectors, from which the semantically similar phrases corresponding to each sentence are selected. This study has tested the approach with three pre-trained Sentence-BERT NLI models [65], including all-mpnet-base-v2 (B1), bert-base-nli-mean-tokens (B2), and all-MiniLM-L6-v2 (B3). The models yield different results based on the threshold. This study applies Algorithm 3 to the phrases as per the categories for L1, as shown in Table 4.

    Table 4:Semantic similarity for L1 text represented with the pre-trained models

    Algorithm 3:Selecting the Aspect Terms
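The phrase-selection step can be sketched with cosine similarity over embedding vectors. The toy 3-d vectors below stand in for Sentence-BERT embeddings, and the 0.5 threshold is illustrative only (the paper notes that the models yield different results depending on the threshold but does not state one here).

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_terms(phrase_vecs, category_vecs, threshold=0.5):
    """Keep phrases whose embedding is sufficiently similar to any
    predicted category embedding (a sketch of Algorithm 3)."""
    selected = []
    for phrase, pv in phrase_vecs.items():
        best = max(cosine(pv, cv) for cv in category_vecs.values())
        if best >= threshold:
            selected.append(phrase)
    return selected

# Invented toy embeddings standing in for Sentence-BERT output.
phrase_vecs = {"tuna roll": np.array([0.9, 0.1, 0.0]),
               "last night": np.array([0.0, 0.1, 0.9])}
category_vecs = {"food": np.array([1.0, 0.0, 0.0])}
terms = select_terms(phrase_vecs, category_vecs)  # ['tuna roll']
```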

    4 Results&Discussion

The study mainly focuses on two main subtasks in ABSA: ACD (Aspect Category Detection) and ATE (Aspect Term Extraction). ACD serves as the input to ATE for extracting the aspect terms, as shown in Fig. 2. The study begins with detecting aspect categories; the categories then serve as the input for detecting the aspect terms.

    4.1 Dataset and Their Sub Tasks

The study has chosen two ABSA datasets to evaluate ACD (Aspect Category Detection) and ATE (Aspect Term Extraction). The Multi-Aspect Multi-Sentiment (MAMS) dataset holds two separate files for its subtasks; hence, we performed ACD separately for each XML file within the dataset. To the best of our knowledge, the MAMS challenge dataset does not have any reported accuracy for the ACD or ATE subtasks. Hence, this study establishes baseline results for both sub-tasks.

The SemEval-2014 Task 4 subtask 1 dataset mainly addresses ATE and does not include ACD. Therefore, we tested topic modeling for ACD on the MAMS dataset only. Moreover, for ATE, we evaluated the performance on the SemEval 2014 dataset against other studies, along with the MAMS ATE subtask.

    4.2 Performance Metrics

The study uses the three most commonly used performance metrics in NLP and text mining, precision, recall, and F1 score, to evaluate the performance of the ACD and ATE subtasks. The aspect terms or aspect categories are marked as correct only if they match the original values. Precision (P), recall (R), and F1 score (F1) are calculated as follows:

P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}, \quad F1 = \frac{2 \cdot P \cdot R}{P + R}

    TP,TN,FP,and FN indicate the total number of true positives,true negatives,false positives,and false negatives,respectively.
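The metric formulas above translate directly into code; a minimal sketch on invented counts:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 as used for the ACD/ATE evaluation."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented counts for illustration: 8 correct extractions,
# 2 spurious ones, 4 gold terms missed.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
```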

    4.3 Aspect Category Detection

Aspect Category Detection relies on topic modeling to provide the filtered topics as prompts to Zero-Shot Learning (ZSL), along with the augmented texts.

    4.3.1 Topic Modeling

The study has seen notable differences between the two topic models employed. We have chosen MAMS ACSA as the benchmark for ACD for selecting the topics, comparing against the original categories in the dataset (e.g., 'staff', 'menu', 'food', 'price', 'service', 'ambiance', 'place', 'miscellaneous'). The BERTopic model extracted more meaningful topics, which serve as the filtered categories. The topics identified with the BERTopic model cover seven categories within the pool of topics.

The study tested two linguistic features for LDA, Trigram and TF-IDF. We observed 39 and 37 topics for LDA with Trigram and TF-IDF, respectively, selected by choosing the highest probability emitted by the topics; further, the topics are filtered down to NOUNS and PROPER NOUNS from the total number of topics through LDA, and the same steps are followed for BERTopic. Both LDA settings failed to extract the topic 'ambiance', as shown in Table 1. Moreover, the topics extracted by the BERTopic model are more meaningful for choosing the categories. The topics highlighted in blue in Table 1 are the ones chosen by the oracle; these selected topics then serve as the categories.

    4.3.2 Classifying the Categories

We use ZSL with two pre-trained models, 'Fb-improved-zeroshot' (X1) and 'bart-large-mnli' (X2), to evaluate the performance; the filtered categories are turned into prompts along with contextual augmentation, as it includes more significant words within the text to predict the categories accurately. However, ZSL struggles to detect the category "miscellaneous" because it does not make linguistic sense in many texts. We have set a threshold value of 0.93 with several settings to filter the detected categories for each text.

We further employed different contextual augmentation models: roberta-base (P1), bert-base-uncased (P2), and distilbert-base-uncased (P3). We found that X1 with P2 under substitution performs best, though a somewhat similar performance is seen for X1 with P1 under substitution. X1 with P1 to P3 under insertion and substitution performs better than X2 with the contextual models. Comparing X1 and X2 under ZSL, we found that X1 performed well, as shown in Figs. 7 and 8. Similar F1 scores are found for X1 and X2 with the contextual augmentation models, ranging from 57% to 61%, whereas X1 with P2 shows a slight improvement in F1 score at 61.23%. P3 under X1 or X2 has the lowest performance compared to the other contextual augmentations. Insertion and Substitution under X1 perform comparably to X2, whereas we noticed that X2 shows a slight difference in F1 scores between insertion and substitution, as shown in Table 5.

Table 5: Reported performance scores for ACD with different contextual augmentation models

    4.4 Aspect Term Extraction

For Aspect Term Extraction, the pre-processed text is used to extract COMPOUND NOUNS, NOUNS, and PROPER NOUNS via a part-of-speech (POS) tagger, along with the entities identified in the texts via several Named Entity Recognition (NER) models, excluding adjectives. The phrases are compared using semantic similarity models, including all-mpnet-base-v2 (B1), bert-base-nli-mean-tokens (B2), and all-MiniLM-L6-v2 (B3), to select the most similar phrases. Based on the performance measures of the different models in Table 5, we selected X1 with P2 as our base model settings to perform Aspect Term Extraction under ATSA in MAMS and SemEval-2014 Task 4 subtask 1.

    Figure 7:ZSL with Fb-improved-zeroshot

    Figure 8:ZSL with bart-large-mnli

    4.4.1 ATE in MAMS

The study has used topics curated with the aid of BERTopic modeling, which performs comparatively better than LDA with Trigram or TF-IDF settings under MAMS for ACSA. From the pool of topics shown in Fig. 9, we chose eight categories/topics which describe the domain, including 'staff', 'menu', 'food', 'price', 'service', and 'ambiance', and the topics then served as the prompts to ZSL using the baseline settings of X1 with P2. The extracted phrases in each text were compared with the B1 to B3 pre-trained sentence transformer models, yielding the results in Table 6.

    Table 6:Reported performance scores for MAMS-ATE

    Figure 9:Pool of topics

The study has found that B2 performs better than the other pre-trained sentence transformer models. We have noticed that B1 and B2 have similar F1 scores, though B2 has the highest recall of all models. Hence, the study sets a baseline for both the MAMS ATE and ACD subtasks, as shown in Table 7.

    Table 7:Reported performance scores for ATE,ACD

    4.4.2 ATE in SemEval-2014 Task 4 Subtask 1

The study has also applied the same experiment settings to SemEval-2014 Task 4 subtask 1, addressing ATE. This dataset focuses on the laptop domain, whereas the MAMS dataset is an extended version of the SemEval-2014 Task 4 restaurant domain. The study extracted the most useful words, including 'camera', 'display', 'keyboard', 'memory', 'battery', 'price', 'computer', and 'software', with the aid of the BERTopic model.

The selected categories then served as the prompts, with the default settings applied for the MAMS dataset, under the same pre-trained sentence transformer models, yielding the results in Table 8. The study found that both MAMS ATSA and SemEval-2014 Task 4 subtask 1 on ATE achieved significant performance with the B2 pre-trained sentence transformer model compared to the other models.

    Table 8:Reported performance scores for SemEval-2014-ATE

    4.5 Comparison of ATE in SemEval-2014 Task 4 Subtask 1

We have compared the performance on SemEval-2014 Task 4 subtask 1 against other baseline models, as shown in Table 9, where the corresponding state-of-the-art supervised and semi/unsupervised models and their performance metrics are listed along with their methods.

    Table 9:Results of ATE in SemEval-2014 Task 4 subtask 1

    4.5.1 Supervised Models and Their Baselines

■Xue et al. [66] used a supervised approach to extract aspects with a Bi-LSTM network in which word embeddings help transform the input words into vectors.

■Luo et al. [67] proposed a bi-directional dependency tree with embedded representations, combining Bi-LSTM with CRF to capture tree-structured and sequential features for ATE.

■Agerri et al. [68] used a perceptron algorithm with three groups of features, including orthographic features such as word shape, N-gram features, and context clustering based on unigram matching.

    ■Akhtar et al. [69] used a CNN and a Bi-LSTM to extract aspects and sentiments. The Bi-LSTM model is deployed to capture the sequential pattern of the sentence, whereas the CNN evaluates the features of aspect terms and sentiment, working alongside the Bi-LSTM to obtain accurate predictions.

    4.5.2 Semi/Unsupervised Models and Their Baselines

    ■Wu et al. [70] proposed a hybrid approach combining rule-based and machine learning settings, focusing on three main modules. The study begins by extracting opinion targets and aspects, then uses a domain-correlation method to filter the opinion targets. The filtered opinion targets and aspects are fed into a deep gated recurrent unit (GRU) network, along with a rule-based approach, for prediction.

    ■The study by Venugopalan et al. [46] took seed words for each category while focusing on the ATE subtask, using Guided LDA along with a BERT-based semantic filter on the restaurant domain.

    We found that several studies report only F1 scores rather than all the metrics; the reported F1 ranges from 60% to 85%. This study performs better than the other supervised and unsupervised approaches. Further, a recent study by Venugopalan et al. [46] achieved an 80.57% F1 score while primarily focusing on extracting aspect terms. That study takes categories along with seed words to extract the aspect terms, whereas this study is centered on a domain-independent approach with minimal human effort, considering only the filtered categories suggested by topic modeling.

    5 Conclusion

    ABSA is an essential part of NLP, extracting aspects from opinionated text. This research formulates a semi-supervised approach to extract aspect terms via aspect categories, combining minimal human supervision with unsupervised techniques. The study focuses on two main ABSA subtasks, ACD and ATE, on opinionated text. We use ZSL to classify the categories, with the prompt words gathered from topic modeling and contextual augmentation. Furthermore, the categories are compared with the extracted phrases via several techniques, which results in the proper extraction of aspect terms. The study sets a baseline accuracy for the MAMS dataset under the ACD and ATE subtasks: for ACD, the Precision (P), Recall (R), and F1 score are 65.44%, 57.53%, and 61.23%, respectively, while the corresponding metrics for ATE are 64.60%, 73.24%, and 68.65%. Moreover, the study achieved comparable accuracy on SemEval-2014 Task 4 subtask 1.
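    The reported F1 scores follow directly from the stated precision and recall, since F1 is their harmonic mean. As a quick consistency check on the paper's percentages:

```python
def f1(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported MAMS baselines (in %): ACD P/R = 65.44/57.53, ATE P/R = 64.60/73.24
print(round(f1(65.44, 57.53), 2))  # 61.23 (matches the reported ACD F1)
print(round(f1(64.60, 73.24), 2))  # 68.65 (matches the reported ATE F1)
```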

    This study opens new ways to address aspect extraction in different domains. As a future direction, the approach can be enhanced with broader linguistic patterns to extract more phrases supporting ATE, and post-training the models to increase accuracy could further contribute to the body of knowledge.

    This work can also be extended to other local languages. Customers usually prefer to express their sentiments in their first language, which may not necessarily be English. The proposed approach can therefore be extended to extract aspects from opinionated text written in Urdu, Hindi, Spanish, French, and other languages [71,72]. Another possible extension is assessing the impact of replacing augmentation with text generation. The latest text generation techniques based on LSTMs and Transformers, such as [10,73], can be employed to generate synthetic text, which might produce results comparable to those achieved through text augmentation in this study.

    Acknowledgement:None.

    Funding Statement:The authors received no specific funding for this study.

    Author Contributions:The authors confirm contribution to the paper as follows:Experiments,analysis and interpretation of results and design: Bishrul Haq,Sher Muhammad Daudpota;Supervision of experiments and conceived the idea of the study: Sher Muhammad Daudpota;Draft manuscript preparation and contributed to writing: Zenun Kastrati,Ali Shariq Imran;Providing visualization to the paper:Zenun Kastrati;Improving the overall language of the paper:Waheed Noor.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The authors confirm the dataset used in the study is publicly available and is accessible within the article.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
