
    Improving Sentiment Analysis in Election-Based Conversations on Twitter with ElecBERT Language Model

Computers, Materials & Continua, 2023, No. 9

Asif Khan, Huaping Zhang*, Nada Boudjellal, Arshad Ahmad and Maqbool Khan

1 School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China

2 The Faculty of New Information and Communication Technologies, University Abdel-Hamid Mehri Constantine 2, Constantine, 25000, Algeria

3 Department of IT and Computer Science, Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology, Haripur, 22620, Pakistan

ABSTRACT Sentiment analysis plays a vital role in understanding public opinions and sentiments toward various topics. In recent years, the rise of social media platforms (SMPs) has provided a rich source of data for analyzing public opinions, particularly in the context of election-related conversations. Nevertheless, sentiment analysis of election-related tweets presents unique challenges due to the complex language used, including figurative expressions, sarcasm, and the spread of misinformation. To address these challenges, this paper proposes Election-focused Bidirectional Encoder Representations from Transformers (ElecBERT), a new model for sentiment analysis in the context of election-related tweets. ElecBERT is based on the Bidirectional Encoder Representations from Transformers (BERT) language model and is fine-tuned on two datasets: Election-Related Sentiment-Annotated Tweets (ElecSent)-Multi-Languages, containing 5.31 million labeled tweets in multiple languages, and ElecSent-English, containing 4.75 million labeled tweets in English. The model outperforms other machine learning models such as Support Vector Machines (SVM), Naïve Bayes (NB), and eXtreme Gradient Boosting (XGBoost), with an accuracy of 0.9905 and F1-score of 0.9816 on ElecSent-Multi-Languages, and an accuracy of 0.9930 and F1-score of 0.9899 on ElecSent-English. The performance of different models was compared using the 2020 United States (US) Presidential Election as a case study. The ElecBERT-English and ElecBERT-Multi-Languages models outperformed BERTweet, with the ElecBERT-English model achieving a Mean Absolute Error (MAE) of 6.13. This paper presents a valuable contribution to sentiment analysis in the context of election-related tweets, with potential applications in political analysis, social media management, and policymaking.

KEYWORDS Sentiment analysis; social media; election prediction; machine learning; transformers

    1 Introduction

In recent years, social media has emerged as a powerful tool for public discourse, particularly in the context of politics and elections [1]. As a result, sentiment analysis has become a crucial tool for understanding public opinion and sentiment during elections [2]. However, sentiment analysis of election-related tweets poses unique challenges due to the complex nature of political language and the nuances of the social dynamics involved [3].

    One of the main challenges in sentiment analysis is the lack of dependable and extensive datasets suitable for training machine learning models.

Prior studies have utilized diverse datasets to address this challenge, including 3 million tweets related to the US presidential election [4], a dataset of 38,432,811 tweets from the US 2020 Presidential election [5], and a dataset consisting of 5,299 tweets from the 2022 Philippines national election [6]. Additionally, other datasets have been used for sentiment analysis, such as 1,302,388 tweets from the Ecuadorian presidential elections of 2021 [7] and 50K election-related tweets from the Indian General Election 2019 [8]. Moreover, a study explored the influence of tweet sentiment on opinions and retweet likelihood using datasets focused on various events, including a 2017 demonetization in India dataset (14,940 tweets), a 2016 US election dataset (397,629 tweets), and a 2018 American Music Awards dataset (27,556 tweets) [9].

Furthermore, studies have focused on specific elections, such as Pakistan's general election in 2018 [10], the 2020 US presidential election [11], the Nigeria 2023 presidential election [12], and recent presidential elections in Latin America, utilizing over 65,000 posts from social media platforms [13]. Additional datasets include 9,157 tweets from the 2017 Punjab assembly elections [14], a dataset of 5,000 messages from Twitter and Facebook annotated as neutral/partisan [15], a 100K #Politics dataset [16], and 29,462 tweets related to the West Bengal election in India [17]. Despite the existing datasets utilized for sentiment analysis in elections, there are limitations in terms of their size and comprehensiveness. These datasets may not encompass the wide spectrum of political scenarios or provide a sufficient representation of sentiment variations.

To address the scarcity of dependable datasets, the US Presidential Election Tweets Dataset (UPETD) was developed. This dataset comprises 5.3 million election-related tweets labeled with positive, negative, and neutral sentiments using the "Valence Aware Dictionary for sEntiment Reasoning (VADER)" technique. The resulting dataset, named the "ElecSent dataset," serves as a valuable resource for training machine learning models like BERT, enabling more precise and effective sentiment analysis of election-related tweets [18–20].

To further improve the accuracy of sentiment analysis in election-related tweets, this study proposes ElecBERT, a new sentiment analysis model specifically designed for political tweets. ElecBERT is fine-tuned on the ElecSent dataset and utilizes the BERT language model, taking into account the context and nuances of political language and social dynamics for more accurate sentiment analysis.

This study implemented ElecBERT to predict the sentiment of election-related tweets during the US 2020 Presidential Election as a case study. The effectiveness of ElecBERT in analyzing election-related tweets and predicting public sentiment was evaluated by comparing the results with the actual election outcome.

The implications of this study are far-reaching in terms of understanding public opinion in election-related situations. Accurate sentiment analysis can help political campaigns and policymakers gauge public opinion, identify areas of concern, and design policies accordingly. This study provides a valuable contribution to sentiment analysis in the context of election-related tweets and has implications for a wide range of applications in the political and social domains.

    The main contributions of this study are as follows:

1. ElecSent dataset: 5.3 million tweets related to politics, labeled with positive, negative, and neutral sentiments.

2. ElecBERT: a new sentiment analysis model specifically designed for political tweets, utilizing the BERT language model and taking into account the complexities of political language and social dynamics for improved accuracy.

The paper is structured as follows: Section 2 presents related work; Section 3 introduces the ElecSent dataset and the proposed ElecBERT model methodology; Section 4 describes the experiments and presents the results, followed by a discussion; finally, Section 5 concludes the study.

    2 Related Work

Sentiment analysis on social media data has gained significant attention in recent years due to the increasing importance of understanding public opinion in various domains. Several studies have explored different approaches to sentiment analysis, including rule-based methods such as VADER and machine learning techniques such as logistic regression, SVMs, and NB. However, the field of sentiment analysis has also seen significant advancements in deep learning-based methods for text analytics. Deep learning techniques, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) [21], have demonstrated remarkable performance in this domain. CNNs excel at capturing local features and patterns, while RNNs are effective in modeling sequential dependencies in text data [22].

Furthermore, a recent study by [23] introduced a novel approach that utilizes capsule networks for sentiment analysis, with a specific focus on social media content from platforms like Twitter. The findings demonstrate the effectiveness of capsule networks in sentiment analysis tasks, particularly when analyzing Twitter data. Another study [24] introduced a novel neural network model for sentiment analysis, combining multi-head self-attention and character-level embedding. This model effectively tackles the challenges of sentiment word extraction and the out-of-vocabulary problem commonly encountered in existing methods. By employing an encoder-decoder architecture with Bidirectional Long Short-Term Memory (BiLSTM), the model captures contextual semantic information and extracts deeper emotional features, enhancing its ability to analyze sentiment in text.

Domain-specific sentiment analysis, such as election prediction, has also been explored. For instance, in one study [25], the authors employed tools such as the Natural Language Toolkit (NLTK), the Tweet Natural Language Processing (TweetNLP) toolkit, Scikit-learn, and the Statistical Package for the Social Sciences (SPSS) to predict the ideological orientation (conservative or right-leaning, progressive or left-leaning) of 24,900 tweets collected over 9 h during an election, achieving an overall accuracy of 99.8% using Random Forest. Similarly, in [26], Scikit-learn was utilized to analyze 46,705 Greek tweets over 20 days during an election, achieving a Random Forest accuracy of 0.80 and precision values of Negative = 0.74, Neutral = 0.83, and Positive = 1.00.

In another study [27], Textblob was used to analyze 277,509 tweets from three states (Florida, Ohio, and North Carolina) over a month for sentiment analysis during the election, achieving an NB accuracy of over 75%. Furthermore, in [28], the authors employed SVM, NB, and K-Nearest Neighbors (KNN) on 2018 Pakistani election data to predict the support for each political leader. They found that SVM outperformed the other models with an accuracy of 79.89%. Similarly, in [29], SVM with a hybrid (unigram + bigram) representation was used on 100K tweets during the US Election 2012 and the Karnataka (India) Elections 2013, achieving accuracies of 88% and 68%, respectively, using NLTK and the Stanford part-of-speech (POS) tagger.

Additionally, study [30] utilized the Waikato Environment for Knowledge Analysis (WEKA) to analyze 352,730 tweets over a month for sentiment analysis on political parties in India, while study [14] employed the Syuzhet package in R, WEKA, and Gephi to analyze 9,157 tweets over approximately a month regarding political parties, achieving an SVM accuracy of 78.63%. Furthermore, study [31] utilized KNN to analyze election-related data, achieving an average accuracy of 92.19%.

Moreover, several studies have explored the use of deep learning and large language models for sentiment analysis in various domains. For example, in [32], the authors analyzed the US 2020 Presidential election using BERT and VADER, finding that VADER outperformed BERT. Additionally, in [33], Scikit-learn, NLTK, and VADER were used to analyze 121,594 tweets over two days about a candidate, with an SVM accuracy of 0.99. Furthermore, study [34] employed Textblob, OpLexicon (a Portuguese sentiment lexicon), and Sentilex (a Portuguese sentiment lexicon) to analyze 158,279 tweets over 16 days about a candidate, with SVM accuracies of 0.93 and 0.98 for OpLexicon and Sentilex, respectively. Moreover, study [35] used Long Short-Term Memory (LSTM) to analyze 3,896 tweets over approximately three months, examining election trends and party and candidate sentiment analysis, yielding precision = 0.76, recall = 0.75, and F1-score = 0.74.

However, these models often struggle to capture the in-depth nature of political language and the social dynamics involved in election-related tweets. Recently, deep learning models such as Transformer-based models have shown remarkable performance in various natural language processing (NLP) tasks, including sentiment analysis. One of the most popular deep learning models for NLP tasks is BERT, which has achieved state-of-the-art performance on several benchmark datasets. However, fine-tuning BERT for specific domains, such as election-related tweets, can improve its performance and make it more effective for sentiment analysis.

Several studies have utilized large language models for domain-specific tasks. For instance, BERTweet is a pre-trained language model for English tweets, trained on 850 million tweets using the Robustly Optimized BERT Pretraining Approach (RoBERTa) pre-training procedure [36]. The BERTweet-COVID19 models were further pre-trained on a corpus of 23 million COVID-19 English tweets. BERTweet outperforms strong baselines on three tweet NLP tasks and achieves good results on several benchmarks: on SemEval2017 it achieved 0.732 AvgRec, 0.728 F1-score, and 0.72 accuracy, while on SemEval2018 it achieved 0.746 F1-score and 0.782 accuracy. On WNUT16 it achieved a 0.521 F1-score, and on WNUT17 a 0.551 F1-score.

Moreover, PoliBERTweet is a pre-trained language model trained on over 83M US 2020 election-related English tweets [37]. The model is specifically designed to address the nuances of political language and can be used for a variety of downstream tasks such as political misinformation analysis and election public opinion analysis. The authors used a stance detection dataset to check the performance of PoliBERTweet [1]. For the BIDEN target, the F1-scores are: RoBERTa (RB) 0.600, RB/P-M 0.663, TweetEval (TE) 0.624, TE/P-M 0.653, BERTweet (BT) 0.650, BT/P-M 0.673, PoliBERTweet 0.708, sentiment knowledge-enhanced pre-training (SKEP) 0.746, and knowledge-enhanced masked language modeling (KEMLM) 0.758. For the TRUMP target, the F1-scores are: RB 0.771, RB/P-M 0.779, TE 0.809, TE/P-M 0.811, BT 0.828, BT/P-M 0.831, PoliBERTweet 0.848, SKEP 0.772, and KEMLM 0.788. Here, P-M indicates Poli-Medium.

Numerous other studies have utilized large language models for different tasks. BioBERT is a pre-trained BERT model that has been specifically trained on biomedical text and can be fine-tuned for various biomedical NLP tasks [38]. FinBERT is a pre-trained BERT model specifically designed for financial text, which can be fine-tuned for various financial NLP tasks [39]. AraBERT is another pre-trained BERT model, trained specifically on Arabic text, that can be fine-tuned for various Arabic NLP tasks [19]. ABioNER is a BERT-based model for identifying disease and treatment named entities in Arabic biomedical text [40]. MEDBERT.de is a pre-trained German BERT model designed specifically for the medical domain; trained on a large corpus of 4.7 million German medical documents, it achieves state-of-the-art performance on eight different medical benchmarks [41]. MolRoPE-BERT is an end-to-end deep learning framework for molecular property prediction that efficiently encodes the position information of SMILES sequences using Rotary Position Embedding (RoPE). The framework combines a pre-trained BERT model with RoPE to extract potential molecular substructure information. The model is trained on four million unlabeled drug SMILES strings and evaluated on four datasets, demonstrating comparable or superior performance to conventional and state-of-the-art baselines [42].

However, to the best of our knowledge, no study has explored the use of BERT specifically for sentiment analysis on election-related tweets, accounting for the complexities of political language and social dynamics. Therefore, this study proposes ElecBERT, a fine-tuned BERT model tailored for sentiment analysis on election-related tweets.

    3 Materials and Methods

This section presents the methodology employed in this study, including the ElecSent dataset and the ElecBERT sentiment analysis model.

    3.1 Dataset

Twitter, through its Application Programming Interfaces (APIs), allows for data collection. These APIs provide developers with access to the platform's vast collection of public data, including tweets, user profiles, and search results. Numerous tools can be employed to collect tweets, for instance, Twitter-Tap, Tweepy, TWurl, twarc, streamR, TweetMapper, Twitonomy, NodeXL, and Twython. This study utilized the Twitter Search API, which enables the retrieval of tweets based on specific criteria, such as keywords, hashtags, and dates. To collect tweets, the Tweepy Python library was applied, which is a widely used tool for interacting with the Twitter Search API. A substantial number of tweets were collected in JSON (JavaScript Object Notation) format, a lightweight data-interchange format that is both human-readable and machine-parseable. Each tweet contained various attributes, including user_id (the unique identifier of the user who posted the tweet), lang (the language of the tweet), id (the unique identifier of the tweet), created_at (the date and time that the tweet was posted), text (the text of the tweet), coordinates (the geographic coordinates of the tweet, if available), and others.
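The paper does not publish its collection scripts, so the following is only a minimal sketch of this collection step using Tweepy (v4.x here); the credentials, query terms, and item count are placeholder assumptions, not the study's actual settings.

```python
# Minimal sketch of election-tweet collection with Tweepy (v4.x); credentials,
# query, and item count are placeholders, not the paper's actual settings.
import json
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

# Example query mirroring the paper's election-related hashtags.
query = "#USElection2020 OR #Democratic OR #Republican"

with open("tweets.jsonl", "w", encoding="utf-8") as f:
    # Cursor pages through Search API results; each status carries the raw
    # JSON payload with user_id, lang, id, created_at, text, coordinates, etc.
    for status in tweepy.Cursor(api.search_tweets, q=query,
                                tweet_mode="extended").items(1000):
        f.write(json.dumps(status._json) + "\n")
```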

    3.1.1 UPETD Dataset

The UPETD dataset comprised nearly 5.31 million tweets related to Joe Biden, Donald Trump, Democratic, Republican, and USElection2020, collected during Timeline 1 (T1), 5 December 2019 to 30 November 2020, and Timeline 2 (T2), 1 August 2020 to 30 November 2020. Table 1 shows the statistics of the UPETD dataset.

Table 1: Statistics for UPETD

    3.1.2 ElecSent Dataset

The ElecSent dataset is based on the UPETD dataset. Sentiments were assigned to each tweet in the UPETD dataset. Several studies have used VADER to classify tweets and later applied other machine learning approaches for sentiment analysis [43–45]. VADER is a lexicon and rule-based sentiment analysis tool specifically designed for social media text. It uses a sentiment lexicon that contains a list of words with their associated sentiment scores (positive, negative, or neutral), along with a set of grammatical rules and heuristics to analyze the sentiment of a given text. This study also employed VADER to assign sentiments to each tweet in the UPETD dataset; the labeled dataset was subsequently named the ElecSent dataset. The ElecSent dataset is presented in two forms: (i) the ElecSent-Multi-Language dataset and (ii) ElecSent-English. ElecSent-Multi-Language contains tweets in multiple languages, including English, totaling 5.31 million tweets. ElecSent-English includes only English-language tweets, totaling 4.753 million tweets, which is almost 89.5% of the full dataset.
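To illustrate this labeling step, the sketch below applies VADER's compound score with the commonly used ±0.05 thresholds; the paper does not state its exact cutoffs, so the thresholds are an assumption.

```python
# VADER labeling sketch; the ±0.05 compound-score thresholds are the
# conventional defaults, assumed here because the paper does not state them.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def label_tweet(text: str) -> str:
    """Map a tweet to positive/negative/neutral via VADER's compound score."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(label_tweet("Great turnout at the rally today!"))  # -> positive
```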

To analyze the distribution of sentiments in the dataset, the percentages of positive, negative, and neutral tweets were calculated. Fig. 1 shows that both ElecSent-Multi-Language and ElecSent-English have a similar distribution, with positive tweets being the most frequent, followed by negative and neutral tweets. ElecSent-Multi-Language has 41% positive, 32% negative, and 27% neutral tweets, while ElecSent-English has 43% positive, 31% negative, and 26% neutral tweets. ElecSent can further be used in machine learning election prediction models based on the public's sentiment towards various candidates and political parties.

The ElecSent dataset has an imbalanced distribution of labels in both its ElecSent-Multi-Languages and ElecSent-English versions, with the majority of samples being Positive, followed by Negative and Neutral. This imbalance can lead to overfitting issues in models trained on this dataset. To address this problem, this study applied the Synthetic Minority Over-sampling Technique (SMOTE), which creates synthetic samples for the minority classes, yielding an equal distribution of samples among all three classes [46–48]. Specifically, Fig. 2 shows the balanced dataset, which contains 34% Positive, 33% Negative, and 33% Neutral samples.
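The paper does not specify the feature representation on which SMOTE was applied. The sketch below assumes a TF-IDF vectorization of the tweets and uses the imbalanced-learn implementation on a toy corpus.

```python
# SMOTE balancing sketch; the TF-IDF features and toy corpus are assumptions,
# since the paper does not detail its resampling pipeline.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from imblearn.over_sampling import SMOTE

tweets = [
    "great candidate", "love this speech", "fantastic debate", "strong win",
    "terrible policy", "worst debate ever",
    "polls open at 8 am", "results expected tonight",
]
labels = ["positive"] * 4 + ["negative"] * 2 + ["neutral"] * 2

X = TfidfVectorizer().fit_transform(tweets)  # sparse feature matrix
X_bal, y_bal = SMOTE(k_neighbors=1, random_state=42).fit_resample(X, labels)

print(Counter(y_bal))  # roughly equal counts per class after resampling
```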

Figure 1: Sentiment distribution of the ElecSent dataset (Multi-Languages & English-only)

Figure 2: The balanced ElecSent dataset

    3.2 Building ElecBERT:Architecture and Fine-Tuning Approach

BERT is a pre-trained language model trained on a large corpus of text data, and it is very effective for various natural language processing tasks, including sentiment analysis. The BERT model (bert-base-uncased) was fine-tuned on the ElecSent dataset, represented as D, with labels L = {positive, negative, neutral}, to obtain a new model called "ElecBERT". Fig. 3 shows the process of fine-tuning the ElecBERT model. Fine-tuning was performed by minimizing the loss function ℒ(D, θ), where θ denotes the parameters of BERT, and the output of the fine-tuned model for an input x is denoted as f(x; θ*). The setup can be defined as follows:

Let D denote the "ElecSent" dataset, which consists of a set of tweets labeled as either positive, negative, or neutral. Let X denote the set of input features and Y the corresponding set of labels in D. The dataset is split into two parts, a training set and a validation set, with a ratio of 80:20.

Then, the dataset D can be represented as:

D = {(x1, y1), (x2, y2), …, (xN, yN)}

where each xi is a feature vector and each yi is a label in L.

Before feeding the text data to BERT, the tweets in the "ElecSent" dataset were pre-processed. The BERT tokenizer is utilized to tokenize the pre-processed text data. The tokenizer adds special tokens like [CLS] and [SEP] to the start and end of each sentence, truncates/pads the sentences to a maximum length of 64 tokens, maps the tokens to their IDs, and creates an attention mask. To prepare the data for training, the training set and validation set are concatenated, and the tokenizer is employed to encode the concatenated data.
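A minimal sketch of this tokenization step with the Hugging Face transformers tokenizer for bert-base-uncased; the example tweets are placeholders.

```python
# Tokenization sketch matching the description above: [CLS]/[SEP] added,
# truncation/padding to 64 tokens, token-to-ID mapping, and attention mask.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

tweets = ["Vote counting continues in key states.", "What a great rally!"]

encodings = tokenizer(
    tweets,
    padding="max_length",
    truncation=True,
    max_length=64,
    return_tensors="pt",
)
print(encodings["input_ids"].shape)        # torch.Size([2, 64])
print(encodings["attention_mask"][0][:8])  # 1s over tokens, 0s over padding
```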

The BERT model (bert-base-uncased) is then fine-tuned on the ElecSent dataset using cross-entropy loss and the AdamW optimizer. The loss function ℒ(D, θ) is defined as:

ℒ(D, θ) = (1/N) Σ_{i=1}^{N} l(f(xi; θ), yi)

where θ denotes the parameters of BERT, f(x; θ) is the output of the BERT model for input x with parameters θ, and l is the cross-entropy loss function that measures the discrepancy between the predicted output and the ground-truth label.

The TensorDataset and DataLoader classes from the PyTorch library are utilized to create data loaders for the training set and the validation set. This study uses the RandomSampler class to sample the data randomly during training and the SequentialSampler class to sample the data sequentially during validation. During training, the training loss, the validation loss, and the F1-score for each label (positive, negative, and neutral) are monitored using the validation set.

The model undergoes training for 6 epochs with a batch size of 32 and a learning rate of 2e-5. After each epoch, the model is evaluated on the validation set, and metrics such as the validation loss, accuracy, precision, recall, and the F1-score are computed. The resulting model is named "ElecBERT", which is denoted as:

ElecBERT(x) = f(x; θ*)

where θ* denotes the optimal parameters obtained after fine-tuning.
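Putting these pieces together, the following is a condensed sketch of the fine-tuning loop (6 epochs, batch size 32, learning rate 2e-5, AdamW, cross-entropy); the toy tensors stand in for the encoded ElecSent training split, and evaluation and device handling are simplified relative to a full training script.

```python
# Condensed fine-tuning sketch; toy tensors replace the real encoded dataset.
import torch
from torch.optim import AdamW
from torch.utils.data import TensorDataset, DataLoader, RandomSampler
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # positive / negative / neutral
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Stand-ins for the tokenizer outputs and VADER-derived labels (0/1/2).
input_ids = torch.randint(0, 30522, (8, 64))
attention_mask = torch.ones(8, 64, dtype=torch.long)
labels = torch.randint(0, 3, (8,))

train_set = TensorDataset(input_ids, attention_mask, labels)
loader = DataLoader(train_set, sampler=RandomSampler(train_set), batch_size=32)
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(6):
    for batch in loader:
        ids, mask, y = (t.to(device) for t in batch)
        optimizer.zero_grad()
        # The model applies cross-entropy internally when labels are passed.
        loss = model(input_ids=ids, attention_mask=mask, labels=y).loss
        loss.backward()
        optimizer.step()
```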

    4 Experiments and Results

The experiments for SVM, NB, and XGBoost were conducted in Google Colab (Python 3.7). Furthermore, for ElecBERT, an NVIDIA TITAN Xp GPU (12 GB) with 128 GB of RAM was used.

The proposed ElecBERT model was evaluated through extensive experimentation on two datasets: ElecSent-Multi-Language and ElecSent-English. The former comprises 5.3 million election-related tweets in various languages, while the latter contains 4.753 million English-language tweets. The datasets were split into training and validation sets in an 80:20 ratio. The bert-base-uncased pre-trained model was utilized as the initial BERT model. The ElecBERT model was fine-tuned for six epochs with a batch size of 32 and a learning rate of 2e-5. The experiments used the AdamW optimizer with default parameters and the cross-entropy loss function.

The performance metrics for the proposed ElecBERT model are impressive. Specifically, the ElecBERT-Multi-Languages model achieved an accuracy of 0.9905, precision of 0.9813, recall of 0.9819, and an F1-score of 0.9816 during its 5th and 6th epochs. The validation loss at the 6th epoch was 0.140. Comparatively, ElecBERT-English performed better, with an accuracy of 0.9930, precision of 0.9906, recall of 0.9893, and an F1-score of 0.9899 during the 6th epoch. Figs. 4a and 4b represent the training metrics of both models. Furthermore, Fig. 5 shows the validation loss for both models. ElecBERT-English may have outperformed ElecBERT-Multi-Languages due to the use of VADER to label the ElecSent dataset: VADER is known to perform better in English than in other languages, which could have led to higher-quality labeled data for the ElecBERT-English model to train on, resulting in its better performance on the English dataset. These metrics indicate that ElecBERT has achieved excellent performance on the ElecSent dataset.

Figure 4: (a) ElecBERT-Multi-Languages | Evaluation Metrics. (b) ElecBERT-English | Evaluation Metrics

Figure 5: Validation loss for ElecBERT-Multi-Languages and ElecBERT-English

In addition, experiments with XGBoost, SVM, and NB were conducted using the ElecSent dataset. Interestingly, SVM and NB achieved lower F1-scores of 0.802 and 0.826, respectively, while XGBoost achieved an F1-score of 0.8964. Fig. 6 shows the evaluation metric (F1-score) for the ElecBERT models as well as XGBoost, SVM, and NB. This suggests that ElecBERT was able to capture the nuances of sentiment in political tweets better than the traditional machine learning models, leading to superior performance on this task.

Figure 6: F1-score | ElecBERT-Multi-Lang, ElecBERT-English, SVM, XGBoost, and NB

    4.1 Leveraging ElecBERT on the 2020 US Presidential Election

This section presents a case study using ElecBERT to analyze sentiment in election-related tweets during the US 2020 Presidential Election and predict election outcomes. The study aims to explore the effectiveness of ElecBERT in predicting public sentiment and election outcomes. The results were compared with the actual election outcomes to evaluate the performance of the model.

    4.1.1 Data

The data in this study was gathered from December 2019 to November 2020 using hashtags related to the Democratic and Republican Parties (#Democratic, #TheDemocrats, #GOP, and #Republican). The dataset consists of 1,637,150 tweets about the Republican Party and 245,757 tweets about the Democratic Party.

    4.1.2 Analyzing the Elections

Fig. 7 displays sentiment analysis results for tweets about the Democratic and Republican Parties using three different language models: BERTweet, ElecBERT-Multi-Lang, and ElecBERT-English. The majority of tweets in all categories are classified as neutral towards both Democratic and Republican politicians. This is likely because political tweets often contain objective statements of fact or news updates, which may not express a clear sentiment towards a particular politician. However, the results also indicate that both ElecBERT-Multi-Lang and ElecBERT-English are more effective than BERTweet in identifying positive sentiment towards both Democratic and Republican politicians. For example, ElecBERT-Multi-Lang had the highest percentage of positive sentiment classification for Republican politicians at 42.36%, which is significantly higher than BERTweet's 7.94% positive sentiment classification. Similarly, ElecBERT-English had the highest percentage of positive sentiment classification for Democratic politicians at 39.62%, which is also significantly higher than BERTweet's 13.43% positive sentiment classification. On the other hand, BERTweet had the highest percentage of negative sentiment classification for both Democratic and Republican politicians. For instance, BERTweet classified 44.67% of Republican tweets as negative, considerably higher than ElecBERT-Multi-Lang's 33.36% negative sentiment classification.

Figure 7: Sentiment analysis for the Democratic and Republican Parties using BERTweet, ElecBERT-Multi-Lang, and ElecBERT-English
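To illustrate how such per-party sentiment tallies could be produced with a fine-tuned model, the sketch below runs batch inference and counts predicted labels. The checkpoint path "elecbert-english" and the label-index mapping are hypothetical, since neither is specified in the text.

```python
# Inference sketch; "elecbert-english" is a hypothetical local checkpoint path
# and id2label an assumed label order, not published identifiers.
import torch
from collections import Counter
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("elecbert-english")
model = BertForSequenceClassification.from_pretrained("elecbert-english")
model.eval()

id2label = {0: "positive", 1: "negative", 2: "neutral"}  # assumed mapping

tweets = ["#GOP rally drew a huge crowd", "#TheDemocrats plan looks weak"]
enc = tokenizer(tweets, padding=True, truncation=True, max_length=64,
                return_tensors="pt")
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)

print(Counter(id2label[int(p)] for p in preds))
```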

In addition, two equations (Eqs. (1) and (2)) were utilized in this study to forecast the vote share for each political party. The approach assumes that positive sentiments expressed towards the Democratic Party and negative sentiments expressed towards the Republican Party represent support for the Democratic Party, and conversely for the Republican Party.
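The bodies of Eqs. (1) and (2) did not survive extraction. A plausible reconstruction, consistent with the stated assumption that positive-towards-Democratic and negative-towards-Republican tweets count as Democratic support (and conversely), is:

```latex
% Plausible reconstruction of Eqs. (1)-(2); the exact original form is unknown.
\begin{align*}
\text{VoteShare}_{\mathrm{Dem}} &=
  \frac{\mathrm{Pos}_{\mathrm{Dem}} + \mathrm{Neg}_{\mathrm{Rep}}}
       {\mathrm{Pos}_{\mathrm{Dem}} + \mathrm{Neg}_{\mathrm{Dem}}
        + \mathrm{Pos}_{\mathrm{Rep}} + \mathrm{Neg}_{\mathrm{Rep}}}
  \times 100 \tag{1}\\
\text{VoteShare}_{\mathrm{Rep}} &=
  \frac{\mathrm{Pos}_{\mathrm{Rep}} + \mathrm{Neg}_{\mathrm{Dem}}}
       {\mathrm{Pos}_{\mathrm{Dem}} + \mathrm{Neg}_{\mathrm{Dem}}
        + \mathrm{Pos}_{\mathrm{Rep}} + \mathrm{Neg}_{\mathrm{Rep}}}
  \times 100 \tag{2}
\end{align*}
```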

Table 2 presents the vote share percentages for BERTweet, ElecBERT-Multi-Lang, and ElecBERT-English, as well as the actual US Election 2020 results. Additionally, the table includes the normalized Democratic and Republican results. The reason for providing the normalized results is that the study only focused on the Republican and Democratic parties and excluded tweets about other political parties. Therefore, the sum of the vote share percentages for the two parties in the actual US Election 2020 results does not add up to 100. To address this, the actual results were normalized to add up to 100, which facilitates comparison with the results obtained using the three language models.

Table 2: Vote shares for BERTweet, ElecBERT-Multi-Lang, ElecBERT-English, and actual US Election 2020 results, along with normalized results

The results show that BERTweet predicted a significantly higher vote share for the Democratic Party (78.25%) than for the Republican Party (21.75%). In contrast, both ElecBERT-Multi-Lang and ElecBERT-English predicted a higher vote share for the Republican Party (54.02% and 53.88%, respectively) than for the Democratic Party (45.98% and 46.12%, respectively). These results indicate that the two ElecBERT models were more accurate in predicting the actual vote share distribution between the two parties. The actual US Presidential Election 2020 results indicate that the Democratic Party received 51.4% of the vote share, while the Republican Party received 46.9%. When normalized to 100, the Democratic and Republican results are 52.28% and 47.72%, respectively, which facilitates comparison with the results obtained using the three language models.

The Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were used to compare the predicted vote shares with the actual results of the US Presidential Election 2020. MAE measures the average absolute difference between the predicted and actual values, while RMSE measures the square root of the average squared difference between the predicted and actual values. The lower the MAE and RMSE values, the closer the predicted results are to the actual election results. The MAE and RMSE values for BERTweet, ElecBERT-Multi-Lang, and ElecBERT-English were calculated using Eqs. (3) and (4), respectively.
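The bodies of Eqs. (3) and (4) were likewise lost in extraction; the standard definitions matching the description above, with y_i the actual vote share and ŷ_i the predicted one, are:

```latex
% Standard MAE and RMSE definitions, matching the surrounding description.
\begin{align*}
\mathrm{MAE}  &= \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \tag{3}\\
\mathrm{RMSE} &= \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}} \tag{4}
\end{align*}
```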

In Table 3, both ElecBERT-Multi-Languages and ElecBERT-English outperform BERTweet in terms of MAE and RMSE. This suggests that the two models are better at predicting the vote share for the two major parties in the US election. In particular, the MAE for the Democratic and Republican parties for both ElecBERT-Multi-Languages and ElecBERT-English is significantly lower than for BERTweet. For instance, the MAEs for Democratic Party prediction using ElecBERT-Multi-Languages and ElecBERT-English are 5.42 and 5.28, respectively, while the MAE for BERTweet is 26.85. Similarly, the MAEs for Republican Party prediction using ElecBERT-Multi-Languages and ElecBERT-English are 7.12 and 6.98, respectively, while the MAE for BERTweet is 25.15.

Table 3: MAE and RMSE for BERTweet, ElecBERT-Multi-Languages, and ElecBERT-English

Moreover, the RMSE values for both ElecBERT-Multi-Languages and ElecBERT-English are also significantly lower than those of BERTweet for both the Democratic and Republican parties, indicating that the predicted vote shares are closer to the actual vote shares. Finally, the normalized MAE values in the table show the average difference between the predicted and the normalized actual vote shares, which sum to 100 across the two parties. Here too, the ElecBERT-English model performs better than ElecBERT-Multi-Languages and BERTweet, with lower MAE values.

On the whole, the results suggest that the ElecBERT models, both ElecBERT-Multi-Lang and ElecBERT-English, can perform well in analyzing election-related tweets and predicting election outcomes. These models outperformed BERTweet in terms of sentiment analysis and vote share prediction. Additionally, the MAE and RMSE values indicate that the ElecBERT models have a lower prediction error than BERTweet, especially for the Democratic Party's vote share prediction. Therefore, it is reasonable to assume that ElecBERT models can help analyze and predict future elections by analyzing large volumes of social media data.

    4.2 Practical Usage of the ElecBERT

The proposed ElecBERT model has several practical applications in the field of natural language processing (NLP) and sentiment analysis. Some potential practical usages of ElecBERT are:

1. Sentiment Analysis: ElecBERT can be utilized for sentiment analysis tasks related to election-related tweets. By leveraging its fine-tuned knowledge of a large corpus of election tweets, ElecBERT can effectively classify the sentiment of new, unseen election-related tweets as positive, negative, or neutral. This can provide valuable insights into public opinion, sentiment trends, and the overall sentiment surrounding political candidates and election events.

2. Election Monitoring: With its ability to analyze sentiment, ElecBERT can be used for real-time monitoring of elections. By processing a stream of tweets in real time, ElecBERT can help gauge the sentiment of the public towards candidates, parties, or specific election issues. This can be valuable for political campaigns, media outlets, and researchers seeking to understand public sentiment and adjust their strategies accordingly.

3. Social Media Analytics: ElecBERT can contribute to social media analytics by providing a deep understanding of election-related conversations happening on platforms like Twitter. By applying ElecBERT to large volumes of election tweets, analysts can identify emerging topics, detect patterns, and gain insights into voter behavior, public opinion, and the sentiment dynamics throughout an election campaign.

4. Opinion Mining: ElecBERT can assist in extracting and analyzing opinions expressed in election tweets. By leveraging its fine-tuned language understanding capabilities, ElecBERT can help identify and categorize different aspects of political discourse, such as policy issues, candidate attributes, or sentiment towards specific campaign promises. This can support opinion-mining tasks and provide a nuanced understanding of voter opinions.

5. Election Prediction: With its fine-tuned knowledge of election-related tweets, ElecBERT can potentially contribute to election outcome prediction models. By analyzing sentiment patterns, trends, and public opinion expressed in tweets, ElecBERT can provide additional insights to complement traditional polling methods, enabling more accurate predictions of election results.

6. Social Listening and Crisis Management: During elections, social media can be a breeding ground for misinformation, rumors, and crises. ElecBERT can be used as a tool for social listening and crisis management by monitoring election-related conversations on platforms like Twitter. It can help identify potentially problematic content, detect the spread of misinformation, and provide real-time sentiment analysis to assist in managing and addressing crises effectively.

    5 Conclusion and Future Work

This paper presented ElecBERT, a new model for sentiment analysis in the context of election-related tweets. The model was fine-tuned on two datasets: ElecSent-Multi-Languages, containing 5.31 million labeled tweets in multiple languages, and ElecSent-English, containing 4.75 million labeled tweets in English. The ElecSent dataset is labeled (positive, negative, and neutral) using VADER. Notably, ElecBERT showcased superior performance when compared to SVM, NB, and XGBoost, achieving an accuracy of 0.9905 and an F1-score of 0.9816 on ElecSent-Multi-Languages, as well as an accuracy of 0.9930 and an F1-score of 0.9899 on ElecSent-English. Furthermore, this study conducted a comprehensive analysis of the 2020 US Presidential Election as a case study, comparing the performance of different models. Among them, both the ElecBERT-English and ElecBERT-Multi-Languages models outperformed BERTweet, with the ElecBERT-English model achieving an MAE of 6.13. This paper presents a valuable contribution to sentiment analysis in the context of election-related tweets, with potential applications in political analysis, social media management, and policymaking.

The ElecBERT model was trained on the 2020 US Presidential Election data only, and its performance on other elections or political events may vary. Additionally, the sentiment labels for the ElecSent dataset were generated using VADER, an automated tool, without manual human review and verification. In the future, more data from other elections should be added to make the model more robust and generalizable. Moreover, using other pre-trained models like PoliBERTweet can be explored to further improve the accuracy of sentiment analysis on election-related tweets. Finally, expanding the model to incorporate more complex features of political language, such as sarcasm and irony, could lead to a more nuanced understanding of election-related sentiment on social media.

Acknowledgement: The authors would like to express their sincere gratitude to the Beijing Municipal Natural Science Foundation and the Foundation Enhancement Program for their generous financial support. The authors are deeply appreciative of the support and resources provided by these organizations.

Funding Statement: The research work was funded by the Beijing Municipal Natural Science Foundation (Grant No. 4212026) and the Foundation Enhancement Program (Grant No. 2021-JCJQ-JJ-0059).

Author Contributions: The authors confirm contribution to the paper as follows: Conceptualization: A.K., N.B.; methodology: A.K., N.B., and H.Z.; software: A.K. and N.B.; validation: A.K. and N.B.; formal analysis: H.Z., N.B., A.A., and M.K.; investigation: A.K., N.B., A.A., and M.K.; resources: A.K., H.Z., and N.B.; data curation: A.K. and N.B.; writing—original draft preparation: A.K.; writing—review and editing: A.K., H.Z., N.B., A.A., and M.K.; visualization: A.K. and N.B.; supervision: H.Z.; project administration: H.Z.; funding acquisition: H.Z. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data used in this study is available at https://doi.org/10.57967/hf/0813.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
