
AMDnet: An Academic Misconduct Detection Method for Authors' Behaviors

Computers, Materials & Continua, 2022, Issue 6

Shihao Zhou, Ziyuan Xu, Jin Han, Xingming Sun and Yi Cao

1Nanjing University of Information Science & Technology, Nanjing, 210044, China

2Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing, 201144, China

3Nanjing University (Suzhou) High and New Technology Research Institute, Suzhou, 215123, China

4Jiangsu Union Technical Institute, Wuxi, 214145, China

5Department of Electrical and Computer Engineering, University of Windsor, ON, N9B 3P4, Canada

Abstract: In recent years, academic misconduct has been frequently exposed by the media, with serious impacts on the academic community. Current research on academic misconduct focuses mainly on detecting plagiarism in article content through the application of character-based and non-text element detection techniques over the entirety of a manuscript. For the most part, these techniques can only detect cases of textual plagiarism, which means that potential culprits can easily avoid discovery through clever editing and alterations of text content. In this paper, we propose an academic misconduct detection method based on scholars' submission behaviors. The model can effectively capture the atypical behavioral approach and operation of the author. As such, it is able to detect various types of misconduct, thereby improving the accuracy of detection when combined with a text content analysis. The model learns by forming a dual network group that processes text features and user behavior features to detect potential academic misconduct. First, the effect of scholars' behavioral features on the model is considered and analyzed. Second, the Synthetic Minority Oversampling Technique (SMOTE) is applied to address the problem of imbalanced samples of positive and negative classes among contributing scholars. Finally, the text features of the papers are combined with the scholars' behavioral data to improve recognition precision. Experimental results on the imbalanced dataset demonstrate that our model has a highly satisfactory performance in terms of accuracy and recall.

Keywords: Academic misconduct; neural network; imbalanced dataset

    1 Introduction

In the last few years, scientific research around the world has become increasingly open, and the emergence of excellent academic papers has contributed positively to social development. However, due to the low-risk and high-profit incentives of academic misconduct, an increasing number of violations of academic regulations are being exposed, drawing heightened attention from governments and the public. As cited in [1], the Office of Science and Technology Policy of the United States defines academic misconduct as "the fabrication, tampering with, or plagiarizing when recommending, conducting, or reporting research, or when seriously deviating from the accepted rules of the scientific community, excluding honest errors or discrepancies in data interpretation or assessment." This definition has been universally recognized and accepted by scholars. In May 2019, the China Press and Publication Administration issued the "Academic Publishing Standards: Definition of Academic Misconduct in Journals" [2]. This standard provides a detailed classification and definition of many different forms of academic misconduct, further regulating academic production activities.

Despite these regulations, many studies have found that academic misconduct is still on the rise. Since 1975, the number of papers withdrawn due to some kind of misconduct has increased by nearly 10 times as a percentage of published articles. Grieneisen et al. [3] analyzed databases of multiple disciplines and collected 4449 retracted articles in 4 disciplines. Their research confirmed that the trend of withdrawals is on the rise, with most withdrawals due to misconduct. Although the number of retracted articles still accounts for a small proportion of the total number of publications, many researchers believe that the academic fraud detected so far is only the tip of the iceberg. Therefore, how to prevent and detect academic misconduct in time has become an increasingly important topic.

Fig. 1 depicts the number of article retractions in different countries counted on Retraction Watch from 2015 to 2020. It can be observed that although most of the 9 countries experienced a decline in the number of retracted manuscripts over this period, the overall number of manuscripts withdrawn is still on the rise. 2020 is a special year, as the outbreak of Corona Virus Disease 2019 (COVID-19) may be the main reason for the decline. While papers related to COVID-19 have increased, such as the telemedicine system proposed by Abdulkareem et al. [4] and the use of machine learning models to predict confirmed cases by Antor et al. [5], the overall trend in 2020 is still a decline.

Figure 1: Trends of paper withdrawals on Retraction Watch in different countries from 2015 to 2020

The current mainstream techniques in character-based or non-text element detection only consider the content of papers, and can only detect cases of a single plagiarism type. At the same time, these methods cannot examine dynamic features from the outside. In this paper, academic misconduct is captured not only through paper content, but also through the user's external behavior. The model learns by forming a dual network group that processes text features and user behavior features to detect potential academic misconduct. This method is proven experimentally to improve accuracy and recall, and the model's performance is further improved after combining text features.

    In summary, this study proposes an academic misconduct detection network (AMDnet) which includes the following innovative features:

1) The academic misconduct detection method adopts a fusion model combining user behaviors and text content, in which the user behavior module captures abnormal operations and the text content analysis module calculates the probability of paper plagiarism. The proposed model overcomes the limitation that only a single plagiarism type can be detected.

2) For the first time, we use real submission data for testing, and filter a large amount of data to obtain effective learning features.

    3) The AMDnet framework learns the representation of cross-domain features by employing a network group.

The rest of this article is structured as follows: Section 2 introduces past work on detecting academic misconduct; Section 3 presents a neural network algorithm for detecting academic misconduct; Section 4 describes the processing of the dataset, the experimental schemes, and the results; and Section 5 draws a conclusion.

    2 Related Work

Analyzing data from the Retraction Watch Database, Wu et al. [6] found that papers retracted due to plagiarism accounted for the majority of retractions, reaching nearly 61%. Hence, research on plagiarism, one type of academic misconduct, has occupied the mainstream, while the discovery and detection of other types of academic misconduct have mainly relied on human experience.

As shown in Fig. 2, from a technical perspective, plagiarism detection techniques are classified into two main categories: external plagiarism detection and internal plagiarism detection. External plagiarism detection methods compare suspicious documents with a collection of documents assumed to be genuine (the reference collection), and retrieve all documents showing similarities that exceed a threshold as potential sources [7]. Generally speaking, the document collection used by this method is very large, so it is computationally infeasible to compare the input article with all the documents in the collection. To improve detection efficiency, most external plagiarism detection methods are divided into two stages: candidate retrieval and detailed analysis [8]. The main task of the candidate retrieval stage is to retrieve documents that share content with the input so as to reduce the amount of computation in the next stage. In the detailed analysis stage, a careful document comparison is completed to identify parts of the input document that are similar to the source document.

The concept of internal plagiarism detection was first proposed by Eissen et al. [9]. This method assumes that each author has their own writing style and uses this to identify articles by different authors. The internal plagiarism detection method includes two tasks [10]: style vulnerability detection, which detects paragraphs with different styles; and author identification, which identifies the author of a document or paragraph. The main difference between internal plagiarism detection and external plagiarism detection is that the internal method does not require any reference documents.

Figure 2: Plagiarism detection technology classification system

In the past few decades, scholars have proposed many specific techniques to realize external or internal plagiarism detection. Character-based detection is a frequently used technique and is most suitable for identifying copy-and-paste plagiarism. Grozea et al. [11] used 16-grams to match sequences of 16 consecutive entities and detect similar content. Tschuggnall et al. [12] detect suspicious places in a text document by analyzing the grammar of its sentences. Elhadi et al. [13] use syntactic location tags to represent text structure as the basis for analysis, in which documents containing the same location tag features are used to identify the source of plagiarism. Semantic-based detection methods are currently more prominent. These methods can find the relevance between words, between sentences, and between paragraphs, so as to detect the similarity of papers at the semantic level, thereby improving detection accuracy. AlSallal et al. [14] proposed a new weighting method and used Latent Semantic Analysis (LSA) as the style feature for internal plagiarism detection. Resnik et al. [15] used the WordNet model to calculate semantic similarity. Salvador et al. [16] improved the weighting process by using skip-grams, and then applied graph similarity measures to generate semantic similarity scores for documents.

All of the above detection and recognition methods are based on analyses of the text content. Foltynek et al. [17] refer to techniques that analyze non-text elements to identify academic misconduct as idea-based methods. This type of method is an important supplement to and expansion of text-based analysis methods, and enriches the technical ideas for detecting various types of academic misconduct. Gipp et al. [18] proposed citation-based plagiarism detection and analyzed the citation patterns in academic literature, checking, for instance, whether the same citations appear in a similar order in two documents. Meuschke et al. [19] proposed a detection method based on mathematical expressions, and showed through experiments that mathematical expressions are effective features independent of the text. Acuna et al. [20] analyzed the graphic elements in the literature, and used image similarity detection algorithms to find many cases of image reuse and plagiarism.

Current research focuses mainly on the detection of plagiarism-type academic misconduct, and systematic research on other types of misconduct is insufficient. However, research has been carried out on the detection of various abnormal user behaviors in social networks, such as building neural network models to detect deceptive comments [21] and detecting malicious social bots through graph networks [22]. Taking those approaches into account, this paper analyzes user information and behavior data in the Tech Science Press (TSP) online submission system, and finds that some users display repeated submissions, submission of one manuscript to multiple venues, and other problems. Thus, we propose an academic misconduct user classification model. This research method provides a new idea for academic misconduct detection and further supplements idea-based methods.

    3 Method

The main purpose of this paper is to identify scholars who have a high probability of academic misconduct by building a neural network that can analyze scholars' non-text behavioral data. The classification model for academic misconduct has three main tasks: data preprocessing, in which behavioral data and text data are processed respectively; data sampling, which samples imbalanced and unevenly distributed data to improve classification accuracy; and result output, where the final result is calculated by a multilayer perceptron. The flow chart of the model is shown in Fig. 3.

Figure 3: Flowchart of the author classification model for academic misconduct

    3.1 Data Sampling

The small proportion of true misconduct among all scholar users means that the data we use for experiments is an imbalanced dataset, reflected in a 1:13 ratio of positive to negative cases (see Section 4.1 below). The disparity between the positive and negative class sample sizes will affect the performance of the classifier model to some extent, so it is necessary to use resampling techniques to overcome this problem.

The random oversampling technique is a method of increasing the number of minority class samples. SMOTE [23] is an improved random oversampling method. We use this technique to enrich our positive class samples instead of simply replicating them. The data processing flow is as follows (a code sketch is given after these steps).

    · Calculation of the K nearest neighbors of each minority sample

First, set the oversampling magnification to determine the number of synthesized samples; then select a sample X arbitrarily from the positive sample set, calculate the Euclidean distance from this sample to all other positive samples, and sort the distances from smallest to largest to obtain the top K nearest neighbors of sample X. Here, K is a hyperparameter.

    · Linear interpolation

For each positive sample X, randomly select M samples from its K nearest neighbors, then interpolate on the straight line between X and each of the M samples; that is, a new sample is synthesized at a random point on the line. The interpolation formula is as follows:

$$x_{new} = x + \mathrm{rand}(0,1) \times (\tilde{x} - x)$$

Here, $\mathrm{rand}(0,1)$ means randomly selecting a value in the range (0, 1), and $\tilde{x}$ means randomly selecting any one of the K neighbors of sample X.

    · Generation of new dataset

The positive samples generated in this way are not copies of the original samples, but can be regarded as new samples that are similar in feature space. We merge the original positive samples with the new samples to form a new positive class sample set, expanding the ratio of positive to negative samples from the original 1:13 to 1:1.
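The interpolation above can be sketched in a few lines of NumPy. This is a minimal illustration of the described steps rather than the authors' implementation; the function name `smote_oversample` and its parameters are illustrative, and in practice the `SMOTE` class from the imbalanced-learn package offers an equivalent, more complete routine.

```python
import numpy as np

def smote_oversample(X_pos, n_new, k=5, seed=0):
    """Synthesize n_new minority-class samples by SMOTE-style interpolation.

    X_pos : (n_pos, n_features) array of positive (minority) samples.
    """
    rng = np.random.default_rng(seed)
    n_pos = X_pos.shape[0]
    # Pairwise Euclidean distances between positive samples.
    dists = np.linalg.norm(X_pos[:, None, :] - X_pos[None, :, :], axis=-1)
    # For each sample, indices of its k nearest neighbors (excluding itself).
    neighbors = np.argsort(dists, axis=1)[:, 1:k + 1]

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n_pos)          # pick a positive sample x
        j = rng.choice(neighbors[i])     # pick one of its k neighbors x~
        gap = rng.random()               # rand(0, 1)
        # x_new = x + rand(0,1) * (x~ - x): a point on the segment between them.
        synthetic.append(X_pos[i] + gap * (X_pos[j] - X_pos[i]))
    return np.vstack(synthetic)

# Example: balance a 1:13 dataset to roughly 1:1.
# X_pos, X_neg = ...  (behavioral feature matrices)
# X_new = smote_oversample(X_pos, n_new=len(X_neg) - len(X_pos))
# X_balanced = np.vstack([X_pos, X_new, X_neg])
```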

    3.2 Neural Network Method

The proposed model needs to process two different types of data, so a neural network model based on word vectors and a multilayer perceptron is used to build the whole framework. The initial word vector is converted by the Word2vec model into a low-dimensional vector for subsequent semantic analysis. The multilayer perceptron takes the computed text data and the preprocessed behavioral data as input and obtains the output vector through multiple layers of neural units. The overall structure of the model is shown in Fig. 4.

Figure 4: Neural network framework combining scholar behavior and text features

    3.2.1 Word Vector Model Module

In observing and analyzing the dataset, we found that some academic users modify the title and abstract of a paper to 'skin the paper' (i.e., superficially repackage it) for the purpose of misconduct. The related findings are as follows:

1) Authors will submit similar papers to different journals or sections repeatedly to get a higher probability of acceptance. Generally, these papers have previously been rejected for publication by editors.

    2) Authors sometimes submit a slightly revised paper to a different journal or section as a completely new paper in order to deceive editors.

In response to these phenomena, we decided to use the text data from the authors' submissions for analysis. By calculating the semantic similarity of the text data and analyzing the repetition ratio, similar papers can be identified quickly. The hierarchical structure of the model based on word vectors is shown in Fig. 5.

Figure 5: Hierarchy of text semantic analysis module

First, paper titles and abstracts are stored in pairs and tokenized in the data preprocessing stage; they are then fed into the model as input. Subsequently, each word is weighted with Term Frequency–Inverse Document Frequency (TF-IDF) weights so that keywords in the text are given different levels of attention to improve similarity matching accuracy. The weighted word vectors are then fed into the pre-trained Word2vec model. The tokens are mapped by the Word2vec model into another low-dimensional word vector space and are represented by a set of new vectors. In this space, the distance between words with similar semantics becomes shorter, whereas the distance between words with more distant relationships becomes longer, which produces a natural clustering effect. Finally, we average the word embedding vectors of the tokens to obtain the representation of the entire text in vector space, and then use the cosine formula to calculate the cosine of the angle between the vectors to obtain the similarity between the texts.
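The pipeline described above (tokenize, weight with TF-IDF, embed with Word2vec, average, compare with cosine similarity) can be sketched as follows. This is an assumption-laden illustration rather than the paper's code: the use of gensim's Word2Vec and scikit-learn's TfidfVectorizer, the corpus strings, and the helper names are all hypothetical.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_weighted_embedding(text, w2v, vectorizer, idf):
    """Average the Word2vec vectors of a text's tokens, weighted by their IDF scores."""
    tokens = vectorizer.build_analyzer()(text)
    vecs, weights = [], []
    for tok in tokens:
        if tok in w2v.wv and tok in idf:
            vecs.append(w2v.wv[tok])
            weights.append(idf[tok])
    if not vecs:
        return np.zeros(w2v.vector_size)
    return np.average(vecs, axis=0, weights=weights)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Illustrative corpus of (title + abstract) texts from the submission system.
corpus = ["deep learning for image plagiarism detection ...",
          "a deep learning approach to detect image plagiarism ..."]

vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)
idf = dict(zip(vectorizer.get_feature_names_out(), vectorizer.idf_))

# Train (or load) a Word2vec model on the tokenized corpus.
tokenized = [vectorizer.build_analyzer()(t) for t in corpus]
w2v = Word2Vec(sentences=tokenized, vector_size=100, min_count=1, seed=0)

v1 = tfidf_weighted_embedding(corpus[0], w2v, vectorizer, idf)
v2 = tfidf_weighted_embedding(corpus[1], w2v, vectorizer, idf)
print("title/abstract similarity:", cosine_similarity(v1, v2))
```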

    3.2.2 Multilayer Perceptron Module

This paper introduces the innovation of using data on scholars' submission behavior to assess the probability of misconduct, and constructs a multilayered neural network which performs calculations on two different kinds of data to classify scholars. The processing in Section 3.2.1 produces data coded into a computable form for the neural network. The two kinds of data are combined through the aggregation layer to generate the final input set, and the output vector is finally returned after the multilayer perceptron calculation. The structure of the neural network model incorporating multiple features is shown in Fig. 6.

Figure 6: Hierarchy of neural network fused with multiple features

The multilayer perceptron network consists of three layers. The first layer is the input layer and is composed of six neural units. Each neuron processes different feature data, such as the number of submissions, the frequency of submissions, and title similarity. After testing, five neurons are set in the hidden layer: too many neurons raise the possibility of overfitting, while too few neurons lead to a decrease in prediction accuracy. To keep the weights updated continuously and to accelerate convergence, we use a Parametric Rectified Linear Unit (PReLU) [24] as the activation function in this layer. The specific formula is as follows:

$$\mathrm{PReLU}(y_i) = \begin{cases} y_i, & y_i > 0 \\ a_i y_i, & y_i \le 0 \end{cases}$$

where $y_i$ is the input to the activation on the $i$-th channel and $a_i$ is a learnable coefficient.

For the binary classification problem, only one output unit needs to be set in the output layer of the neural network. The SoftMax activation function is then used to obtain the output vector.
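A minimal PyTorch sketch of the described perceptron (6 input units, 5 hidden units with PReLU, a SoftMax output) is given below. Since a single softmax unit would be degenerate, the sketch assumes the common two-logit softmax formulation of the binary output; the class name `AMDnetMLP` is illustrative and not from the paper.

```python
import torch
import torch.nn as nn

class AMDnetMLP(nn.Module):
    """Illustrative behavior-feature classifier: 6 inputs -> 5 hidden (PReLU) -> softmax."""
    def __init__(self, n_features=6, n_hidden=5, n_classes=2):
        super().__init__()
        self.hidden = nn.Linear(n_features, n_hidden)
        self.act = nn.PReLU()                  # learnable negative slope, as in [24]
        self.out = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        h = self.act(self.hidden(x))
        # Softmax over the two class logits gives the misconduct probability.
        return torch.softmax(self.out(h), dim=-1)

# Example: score one scholar's behavioral feature vector
# (submission count, submission frequency, title similarity, ...).
model = AMDnetMLP()
features = torch.rand(1, 6)
print(model(features))   # e.g. tensor([[0.48, 0.52]])
```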

    4 Experiments

In this section, we present the details of the experiments and propose different experimental schemes based on the experimental data. Experimental results are then analyzed and discussed.

The relevant parameters of the model in this study were determined after many tests. The parameters stipulate that text lengths are measured in words: the title shall not exceed 50 words, the abstract 500 words, and the keywords 30 words. To facilitate calculations, the difference between paper numbers is used as a measure of the interval between paper submissions. The activation function of the hidden layer in the neural network is set to the PReLU function, and the activation function of the output layer is set to the SoftMax function. The model is trained with the Adam optimizer [25], with the learning rate set to 0.01, the batch size to 200, and the epoch count to 100.
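Under the stated hyperparameters (Adam, learning rate 0.01, batch size 200, 100 epochs), a training loop might look like the following sketch. The loss function is not specified in the paper, so the negative log-likelihood used here is an assumption, as are the random tensors standing in for the real features.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Random tensors stand in for the real 6-dimensional feature vectors and labels.
X, y = torch.rand(2000, 6), torch.randint(0, 2, (2000,))
loader = DataLoader(TensorDataset(X, y), batch_size=200, shuffle=True)

# Same shape as the perceptron sketched in Section 3.2.2.
model = torch.nn.Sequential(
    torch.nn.Linear(6, 5), torch.nn.PReLU(),
    torch.nn.Linear(5, 2), torch.nn.Softmax(dim=-1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)   # learning rate 0.01
loss_fn = torch.nn.NLLLoss()                                # assumed loss choice

for epoch in range(100):                                    # 100 epochs
    for xb, yb in loader:                                   # batch size 200
        optimizer.zero_grad()
        probs = model(xb)
        loss = loss_fn(torch.log(probs + 1e-12), yb)        # NLL on log-probabilities
        loss.backward()
        optimizer.step()
```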

    4.1 Dataset Introduction

The experimental data in this study come from the submission system used by TSP, a publisher with which we cooperate. While the publisher publishes a large number of academic papers every year, it has also found many malicious submissions, such as multiple submissions and duplicate submissions. We collected 25,238 records of behavioral data, basic scholar information, and paper summary data from 1,823 users between 2020 and 2021, including paper titles and submission behavior data.

Many records had little or no relevance to this experiment because of low correlation with misconduct or because they were too evenly distributed (for instance, scholar username, scholar registration time, etc.). Therefore, we removed these records from the data, counted contributors who submitted more than twice, and constructed the dataset for this experiment after re-screening (see the filtering sketch after Tab. 1). As shown in Tab. 1, the scholar users are divided into positive and negative cases, and the ratio of positive to negative cases is about 1:13.

    Table 1: Scholar submission dataset
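A pandas sketch of this filtering step is shown below. The file name and column names (`user_id`, `submission_id`, `username`, `registration_time`) are hypothetical, since the structure of the TSP export is not described in the paper.

```python
import pandas as pd

# Hypothetical export of the submission log; column names are illustrative.
subs = pd.read_csv("tsp_submissions.csv")   # user_id, submission_id, journal, title, ...

# Drop fields with little predictive value (e.g. username, registration time).
subs = subs.drop(columns=["username", "registration_time"], errors="ignore")

# Keep only contributors with more than two submissions, as described above.
counts = subs.groupby("user_id")["submission_id"].count()
active_users = counts[counts > 2].index
dataset = subs[subs["user_id"].isin(active_users)]
print(len(active_users), "users retained,", len(dataset), "submission records")
```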

    4.2 Evaluation Metrics

In order to test the performance of the model, we use accuracy, precision and recall as the evaluation metrics, as expressed in the formulae below. False Positives (FP) denotes the number of true negative samples predicted as positive, True Positives (TP) the number of true positive samples predicted as positive, True Negatives (TN) the number of true negative samples predicted as negative, and False Negatives (FN) the number of true positive samples predicted as negative.

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN}$$

Due to the imbalance of the sample set, the Receiver Operating Characteristic (ROC) curve is used to reflect the sensitivity and accuracy of the model under different thresholds. The Area Under the Curve (AUC) [26] represents the area under the ROC curve, which can be used to measure the generalization performance of the model and gives a more intuitive indication of classification effectiveness. The True Positive Rate (TPR), shown in Eq. (6), is used as the vertical coordinate of the ROC curve, the False Positive Rate (FPR), shown in Eq. (7), as the horizontal coordinate, and AUC is calculated by the formula shown in Eq. (8):

$$\mathrm{TPR} = \frac{TP}{TP + FN} \tag{6}$$

$$\mathrm{FPR} = \frac{FP}{FP + TN} \tag{7}$$

$$\mathrm{AUC} = \frac{\sum_{s_i \in \text{positive}} \mathrm{Rank}_{s_i} - \frac{M(M+1)}{2}}{M \times N} \tag{8}$$

In the above, $M$ and $N$ are the numbers of positive and negative samples respectively, $s_i$ represents the serial number of the $i$-th sample, $\mathrm{Rank}_{s_i}$ denotes the rank of the score obtained by the $i$-th sample, and $s_i \in \text{positive}$ refers to the serial numbers of the positive samples.
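The metrics and the rank-based AUC of Eq. (8) can be computed directly, as in the NumPy sketch below; the label and score arrays are hypothetical and tie handling is omitted for brevity.

```python
import numpy as np

def basic_metrics(y_true, y_pred):
    """Accuracy, precision and recall from binary labels and predictions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

def rank_auc(y_true, scores):
    """AUC via the rank formula: (sum of positive ranks - M(M+1)/2) / (M*N)."""
    order = np.argsort(scores)               # ascending scores
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    m = np.sum(y_true == 1)                  # number of positive samples M
    n = np.sum(y_true == 0)                  # number of negative samples N
    return (ranks[y_true == 1].sum() - m * (m + 1) / 2) / (m * n)

# Illustrative usage with hypothetical model scores.
y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1])
scores = np.array([0.9, 0.2, 0.4, 0.7, 0.1, 0.3, 0.6, 0.8])
print(basic_metrics(y_true, (scores > 0.5).astype(int)))
print("AUC:", rank_auc(y_true, scores))
```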

    4.3 Experimental Results

    4.3.1 Experimental Scheme Using Behavioral Data

Here, we describe the innovative attempt to use scholar behavior data to analyze the probability of academic misconduct.

To better test the effect of the data on the classification results, we specified and limited the range of data values. An excessive number of submitted papers or an excessive submission frequency is considered an abnormal tendency; paper acceptance and rejection rates measure the scientific level of the authors; and the number of different journals or special issues to which papers are submitted is used as a feature to calculate the probability of duplicate submissions and multiple submissions. In this experiment, 70% of the samples are used as the training set and the remaining 30% as the test set. The experimental data are shown in Tab. 2.

    Table 2: Scholar behavior data

Fig. 7 shows the results of the model evaluation when using scholar behavior data only. The results show that scholars with academic misconduct can be effectively detected by using behavioral data as features. The upper left figure shows the loss value of the model during training and testing. As the epochs increase, the loss values gradually stabilize at around 0.38 and 0.3 respectively. The upper right figure evaluates the accuracy of the model. It can be seen that after 100 iterations, the accuracy of the test set is close to the accuracy level of the training set, reaching about 84%. Because the dataset is imbalanced, containing far fewer positive samples than negative samples, accuracy alone is not a good reflection of the model's performance. The recall rate in the lower left figure measures the proportion of positive samples identified by the classifier; the average recall of the model reaches about 88%. The lower right figure shows the ROC curve. The dashed black line represents a random classifier, and the solid red curve measures the model's ability. The AUC value of 0.9 shows objectively that the model performs well.

Figure 7: Model performance using behavioral features as input (loss, accuracy, recall, and ROC curve)

    4.3.2 Experimental Scheme Combining Behavioral and Text Data

The text data used in mainstream plagiarism detection methods is now added to our behavioral data to test the combined effect of the two data types on the classification results. Here again, the text data length is restricted to filter noise, as shown in Tab. 3.

    Table 3: Text data of authors’papers

To test whether detection performance can be further improved by adding text features, the experiment shown in Fig. 7 is repeated with text data included, and the results are shown in Fig. 8. In terms of loss, both the test set and the training set show better performance, indicating that the model is better trained. As for accuracy, both gain a large improvement, from an original peak below 90% to a level that stabilizes at about 90%. On the other hand, the recall rate of the model has not changed significantly and is basically maintained at the same level as before, indicating that adding text data does not have a significant effect on improving recall. The AUC value of the ROC curve, however, is 0.925, larger than the 0.901 in Fig. 7, which shows that the combination of text features and behavioral features enables the model to perform better.


Figure 8: Model performance combining behavioral and text features (loss, accuracy, recall, and ROC curve)

The analysis and mining of text data can detect plagiarism effectively, but has difficulty finding other forms of academic misconduct. The experimental results here show that, by using website data on scholars' behavior in the submission process, we can detect multiple submissions and duplicate submissions effectively, while the identification rate can be further improved by adding text data features.

    5 Conclusion

With academic misconduct becoming a growing problem, it is far from sufficient to detect academic misconduct merely by analyzing article contents. Many problems of academic misconduct are revealed by the behaviors of authors and related stakeholders. Just as He et al. [27] use abnormal trading behavior to predict results, we integrate behavioral features to assist detection. This paper builds a neural network model to automatically update the weights of behavioral features and to classify scholars based on the probability of their behavioral misconduct. Experimental results prove that using author behaviors is an effective method of detecting problems such as multiple submissions and duplicate submissions. Moreover, the organic combination of behavioral data and text data can bring a considerable improvement in the accuracy and recognition rate of the model, indicating that the comprehensive analysis of different types of data can help to improve the model's performance.

Funding Statement: This work is supported by the National Key R&D Program of China under grant 2018YFB1003205; by the National Natural Science Foundation of China under grants U1836208 and U1836110; by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; and by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China.

Conflicts of Interest: The authors declare no conflicts of interest regarding the present study.
