
    AMDnet: An Academic Misconduct Detection Method for Authors' Behaviors

    Computers, Materials & Continua, 2022, Issue 6

    Shihao Zhou, Ziyuan Xu, Jin Han, Xingming Sun and Yi Cao

    1 Nanjing University of Information Science & Technology, Nanjing, 210044, China

    2 Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing, 201144, China

    3 Nanjing University (Suzhou) High and New Technology Research Institute, Suzhou, 215123, China

    4 Jiangsu Union Technical Institute, Wuxi, 214145, China

    5 Department of Electrical and Computer Engineering, University of Windsor, ON, N9B 3P4, Canada

    Abstract: In recent years, academic misconduct has been frequently exposed by the media, with serious impacts on the academic community. Current research on academic misconduct focuses mainly on detecting plagiarism in article content through the application of character-based and non-text element detection techniques over the entirety of a manuscript. For the most part, these techniques can only detect cases of textual plagiarism, which means that potential culprits can easily avoid discovery through clever editing and alteration of text content. In this paper, we propose an academic misconduct detection method based on scholars' submission behaviors. The model can effectively capture atypical behaviors and operations of an author. As such, it is able to detect various types of misconduct, thereby improving the accuracy of detection when combined with a text content analysis. The model learns by forming a dual network group that processes text features and user behavior features to detect potential academic misconduct. First, the effect of scholars' behavioral features on the model is considered and analyzed. Second, the Synthetic Minority Oversampling Technique (SMOTE) is applied to address the problem of imbalanced samples of positive and negative classes among contributing scholars. Finally, the text features of the papers are combined with the scholars' behavioral data to improve recognition precision. Experimental results on the imbalanced dataset demonstrate that our model has a highly satisfactory performance in terms of accuracy and recall.

    Keywords: Academic misconduct; neural network; imbalanced dataset

    1 Introduction

    In the last few years, scientific research around the world has become increasingly open, and the emergence of excellent academic papers has contributed positively to social development. However, due to the low-risk, high-reward incentives of academic misconduct, an increasing number of violations of academic regulations are being exposed, raising heightened attention from governments and the public. As cited in [1], the Office of Science and Technology Policy of the United States defines academic misconduct as "the fabrication, tampering with, or plagiarizing when recommending, conducting, or reporting research, or when seriously deviating from the accepted rules of the scientific community, excluding honest errors or discrepancies in data interpretation or assessment." This definition has been widely recognized and accepted by scholars. In May 2019, the China Press and Publication Administration issued the "Academic Publishing Standards: Definition of Academic Misconduct in Journals" [2]. This standard provides a detailed classification and definition of many different forms of academic misconduct, further regulating academic production activities.

    Despite these regulations, many studies have found that academic misconduct is still on the rise. Since 1975, the number of papers withdrawn due to some kind of misconduct has increased by nearly 10 times as a percentage of published articles. Grieneisen et al. [3] analyzed databases covering multiple disciplines and collected 4449 retracted articles in 4 disciplines. Their research confirmed that the number of withdrawals is on the rise, with most withdrawals due to misconduct. Although retracted articles still account for a small proportion of the total number of publications, many researchers believe that the currently known cases of academic fraud are only the tip of the iceberg. Therefore, how to prevent and detect academic misconduct in time has become an increasingly important topic.

    Fig. 1 depicts the number of article retractions in different countries recorded on Retraction Watch from 2015 to 2020. It can be observed that although most of the 9 countries experienced a decline in the number of retracted manuscripts during this period, the overall number of manuscripts withdrawn is still on the rise. 2020 is a special year, as the outbreak of Corona Virus Disease 2019 (COVID-19) may be the main reason for the decline. While papers related to COVID-19 have increased, such as the telemedicine system proposed by Abdulkareem et al. [4] and the use of machine learning models to predict confirmed cases by Antor et al. [5], the overall trend is still a decline in 2020.

    Figure 1: Trends of paper withdrawals on Retraction Watch in different countries from 2015 to 2020

    The current mainstream techniques of character-based or non-text element detection only consider the content of papers and can only detect cases of a single plagiarism type. At the same time, these methods cannot take external, dynamic features into account. In this paper, academic misconduct is captured not only through paper content, but also through the user's external behavior. The model learns by forming a dual network group that processes text features and user behavior features to detect potential academic misconduct. This method is shown experimentally to improve accuracy and recall, and the model's performance is further improved after combining text features.

    In summary, this study proposes an academic misconduct detection network (AMDnet) which includes the following innovative features:

    1) The academic misconduct detection method adopts a fusion model combining user behaviors and text content, in which the user behavior module captures abnormal operations and the text content analysis module calculates the probability of paper plagiarism. The proposed model overcomes the limitation that only a single plagiarism type can be detected.

    2) For the first time, we use real submission data for testing, and filter a large amount of data to obtain effective learning features.

    3) The AMDnet framework learns the representation of cross-domain features by employing a network group.

    The rest of this article is structured as follows: Section 2 introduces past work on detecting academic misconduct; Section 3 presents a neural network algorithm for detecting academic misconduct; Section 4 describes the dataset processing, the experimental schemes, and the evaluation of the model's performance; and Section 5 draws a conclusion.

    2 Related Work

    Analyzing data from the Retraction Watch Database, Wu et al. [6] found that papers retracted due to plagiarism accounted for the majority, reaching nearly 61%. Hence, research on plagiarism, one type of academic misconduct, has occupied the mainstream, while the discovery and detection of other types of academic misconduct have mainly relied on human experience.

    As shown in Fig. 2, from a technical perspective, plagiarism detection techniques are classified into two main categories: external plagiarism detection and internal plagiarism detection. External plagiarism detection methods compare suspicious documents with a collection of documents assumed to be genuine (the reference collection), and retrieve all documents showing similarities that exceed a threshold as potential sources [7]. Generally speaking, the reference collection used by this method is very large, so it is computationally infeasible to compare the input article with all the documents in the collection. To improve detection efficiency, most external plagiarism detection methods are divided into two stages: candidate retrieval and detailed analysis [8]. The main task of the candidate retrieval stage is to retrieve documents that share content with the input, so as to reduce the amount of computation in the next stage. In the detailed analysis stage, a careful document comparison is performed to identify parts of the input document that are similar to the source document.

    The concept of internal plagiarism detection was first proposed by Eissen et al. [9]. This method assumes that each author has their own writing style and uses this to identify articles by different authors. The internal plagiarism detection method includes two tasks [10]: style vulnerability detection, which detects paragraphs with different styles; and author identification, which identifies the author of a document or paragraph. The main difference between internal plagiarism detection and external plagiarism detection is that the internal method does not require any reference documents.

    Figure 2: Plagiarism detection technology classification system

    In the past few decades, scholars have proposed many specific techniques to realize external or internal plagiarism detection. Character-based detection is a frequently used approach, most suitable for identifying copy-and-paste plagiarism. Grozea et al. [11] used 16-grams to match sequences of 16 consecutive entities and detect similar content. Tschuggnall et al. [12] detect suspicious places in a text document by analyzing the grammar of its sentences. Elhadi et al. [13] use syntactic location tags to represent text structure and as the basis for analysis, in which documents containing the same location tag features are used to identify the source of plagiarism. Semantic-based detection methods are currently gaining importance. These methods can find relationships between words, between sentences, and between paragraphs, so as to detect the similarity of papers at the semantic level and thereby improve detection accuracy. AlSallal et al. [14] proposed a new weighting method and used Latent Semantic Analysis (LSA) as the style feature for internal plagiarism detection. Resnik et al. [15] used the WordNet model to calculate semantic similarity. Salvador et al. [16] improved the weighting process by using skip-grams, and then applied graph similarity measures to generate semantic similarity scores for documents.
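
    To make the character-based idea concrete, the following is a minimal sketch of n-gram containment between two documents; it is a generic illustration of fingerprint-style matching rather than the exact procedure of [11], and the function names and the default n = 16 are assumptions made for the example.

    from typing import List, Set, Tuple

    def ngrams(tokens: List[str], n: int = 16) -> Set[Tuple[str, ...]]:
        """Return the set of contiguous n-grams of a token sequence."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def ngram_containment(doc_a: str, doc_b: str, n: int = 16) -> float:
        """Fraction of doc_a's n-grams that also occur in doc_b; high values suggest copied passages."""
        a, b = ngrams(doc_a.split(), n), ngrams(doc_b.split(), n)
        return len(a & b) / len(a) if a else 0.0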

    All of the above detection and recognition methods are based on analyses of the text content. Foltynek et al. [17] referred to techniques that analyze non-text elements to identify academic misconduct as idea-based methods. This type of method is an important supplement to and expansion of text-based analysis methods, and enriches the technical ideas for detecting various types of academic misconduct. Gipp et al. [18] proposed citation-based plagiarism detection and analyzed the citation patterns in academic literature, checking, for instance, whether the same citations appear in a similar order in two documents. Meuschke et al. [19] proposed a detection method based on mathematical expressions, and showed through experiments that mathematical expressions are effective features independent of the text. Acuna et al. [20] analyzed the graphic elements in the literature, and used image similarity detection algorithms to find a large number of instances of image reuse and plagiarism.

    Current research focuses mainly on the detection of plagiarism-type academic misconduct; systematic research on other types of misconduct is insufficient. However, research has been carried out on the detection of various abnormal user behaviors in social networks, such as building neural network models to detect deceptive comments [21] and detecting malicious social bots through graph networks [22]. Taking those approaches into account, this paper analyzes user information and behavior data in the Tech Science Press (TSP) online submission system, and finds that some users display repeated submissions, multiple submissions on one site, and other problems. Thus, we propose an academic misconduct user classification model. This research method provides a new idea for academic misconduct detection and further supplements idea-based methods.

    3 Method

    The main purpose of this paper is to identify scholars who have a high probability of academic misconduct by building a neural network that can analyze scholars' non-text behavioral data. The classification model for academic misconduct has three main tasks: data preprocessing, in which behavioral data and text data are processed respectively; data sampling, which resamples imbalanced and unevenly distributed data to improve classification accuracy; and result output, where the final result is calculated by a multilayer perceptron. The flow chart of the model is shown in Fig. 3.

    Figure 3: Flowchart of the author classification model for academic misconduct

    3.1 Data Sampling

    The small proportion of true misconduct among all scholar users means that the data we use for experiments is an imbalanced dataset, reflected in a 1:13 ratio of positive to negative cases (see Section 4.1 below). The disparity between the positive class sample size and the negative class sample size will affect the performance of the classifier model to some extent, so it is necessary to use resampling techniques to overcome this problem.

    The random oversampling technique is a method of increasing the number of minority class samples. SMOTE [23] is an improved method of random oversampling. We use this technique to enrich our positive class sample instead of simply replicating it. The data processing flow is as follows.

    · Calculation of the K nearest neighbors of each minority sample

    First, set the oversampling magnification to determine the number of synthesized samples. Then select a sample X arbitrarily from the positive sample set, calculate the Euclidean distance from this sample to all other positive samples, and sort the distances from smallest to largest to obtain the top K nearest neighbors of sample X. Here, K is a hyperparameter.

    · Linear interpolation

    For each positive sample X, randomly select M samples from its K nearest neighbors, then interpolate on the straight line between X and each of the M samples; that is, a new sample is synthesized at a random point on the connecting segment. The interpolation formula is as follows:

    X_new = X + rand(0,1) × (X̂ − X)

    Here, rand(0,1) means a value selected randomly in the range (0,1), and X̂ means a sample selected randomly from among the K nearest neighbors of sample X.

    · Generation of new dataset

    The positive samples generated by this procedure are not copies of the original samples, but can be regarded as new samples that are similar in feature space, and we merge the original positive samples with the new samples to form a new positive class sample set. The ratio of positive to negative samples is expanded from the original 1:13 to 1:1. A minimal code sketch of this procedure is given below.
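
    The following is a minimal NumPy sketch of the oversampling steps above, assuming the positive (minority) samples are already encoded as a feature matrix; the function name, the default K, and the seed handling are illustrative choices, not the paper's implementation. An equivalent result can be obtained with the SMOTE class from the imbalanced-learn library.

    import numpy as np

    def smote_oversample(X_pos: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
        """Synthesize n_new positive samples by interpolating between each picked
        sample and one of its K nearest positive neighbours (SMOTE-style)."""
        rng = np.random.default_rng(seed)
        synthetic = []
        for _ in range(n_new):
            i = rng.integers(len(X_pos))                      # pick a positive sample X
            dist = np.linalg.norm(X_pos - X_pos[i], axis=1)   # Euclidean distance to all positives
            neighbors = np.argsort(dist)[1:k + 1]             # its K nearest neighbours (skip itself)
            j = rng.choice(neighbors)                         # choose one neighbour X_hat
            gap = rng.random()                                 # rand(0, 1)
            synthetic.append(X_pos[i] + gap * (X_pos[j] - X_pos[i]))  # X_new = X + rand(0,1) * (X_hat - X)
        return np.vstack([X_pos, np.array(synthetic)])        # merged positive class sample set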

    3.2 Neural Network Method

    The proposed model needs to process two different types of data, so a neural network model based on word vectors and a multilayer perceptron is used to build the whole framework. The initial word vector is converted by the Word2vec model into a low-dimensional vector for subsequent semantic analysis. The multilayer perceptron takes the computed text data and the preprocessed behavioral data as input and obtains the output vector through multilayered neural units. The overall structure of the model is shown in Fig. 4.

    Figure 4: Neural network framework combining scholar behavior and text features

    3.2.1 Word Vector Model Module

    In observing and analyzing the dataset, we found that some academic users usually modify the title and abstract of a paper to 'skin the paper' for the purpose of misconduct. The related findings are as follows:

    1) Authors will submit similar papers to different journals or sections repeatedly to get a higher probability of acceptance. Generally, these papers have previously been rejected for publication by editors.

    2) Authors sometimes submit a slightly revised paper to a different journal or section as a completely new paper in order to deceive editors.

    In response to these phenomena, we decided to use the text data from the authors' submissions for analysis. By calculating the semantic similarity of the text data and analyzing the repetition ratio, similar papers can be identified quickly. The hierarchical structure of the model based on word vectors is shown in Fig. 5.

    Figure 5: Hierarchy of text semantic analysis module

    First, paper titles and abstracts are stored in pairs and tokenized into their component words in the data preprocessing stage; they are then fed into the model as input. Subsequently, each word is weighted with Term Frequency–Inverse Document Frequency (TF-IDF) weights so that keywords in the text receive different levels of attention, which improves similarity matching accuracy. The weighted word vectors are then fed into the pre-trained Word2vec model. The tokens are mapped by the Word2vec model into a low-dimensional word vector space and represented by a set of new vectors. In this space, the distance between words with similar semantics becomes shorter, whereas the distance between words with more distant relationships becomes longer, which produces a natural clustering effect. Finally, we average the word embedding vectors of all tokens to obtain the representation of the entire text in vector space, and then use the cosine formula to calculate the cosine of the angle between two text vectors to obtain the similarity between the texts.
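
    As an illustration, the following is a minimal sketch of this pipeline using gensim's Word2Vec and scikit-learn's TfidfVectorizer; the toy documents, the vector size, and the use of IDF values as token weights are assumptions made for the example rather than the paper's exact configuration.

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.feature_extraction.text import TfidfVectorizer

    def text_vector(tokens, w2v, idf):
        """TF-IDF-weighted average of word embeddings for one tokenized text."""
        vecs, weights = [], []
        for t in tokens:
            if t in w2v.wv and t in idf:
                vecs.append(w2v.wv[t])
                weights.append(idf[t])
        if not vecs:
            return np.zeros(w2v.vector_size)
        return np.average(vecs, axis=0, weights=weights)

    def cosine(a, b):
        """Cosine of the angle between two text vectors."""
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    # Toy usage with two hypothetical title/abstract strings.
    docs = ["deep learning for misconduct detection", "detecting misconduct with deep learning"]
    tokenized = [d.split() for d in docs]
    w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=1)   # stand-in for the pre-trained model
    tfidf = TfidfVectorizer().fit(docs)
    idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))
    similarity = cosine(text_vector(tokenized[0], w2v, idf), text_vector(tokenized[1], w2v, idf))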

    3.2.2 Multilayer Perceptron Module

    This paper introduces the innovation of using data on scholars' submission behavior to assess the probability of misconduct, and constructs a multilayer neural network which performs calculations on two different kinds of data to classify scholars. The processing in Section 3.2.1 produces data encoded into a computable form for the neural network. The two kinds of data are combined through the aggregation layer to generate the final input set, and the output vector is returned after the multilayer perceptron calculation. The structure of the neural network model incorporating multiple features is shown in Fig. 6.

    Figure 6: Hierarchy of neural network fused with multiple features

    The multilayer perceptron network consists of three layers. The first layer is the input layer and is composed of six neural units. Each neuron processes different feature data, such as the number of submissions, the frequency of submissions, and title similarity. After testing, five neurons are set in the hidden layer: too many neurons raise the possibility of overfitting, while too few neurons lead to a decrease in prediction accuracy. To keep the weights updated continuously and to accelerate convergence, we use a Parametric Rectified Linear Unit (PReLU) [24] as the activation function in this layer. The specific formula is as follows:

    PReLU(x) = x, if x > 0;  PReLU(x) = a·x, if x ≤ 0,

    where a is a learnable coefficient.

    For the binary classification problem, only one output unit needs to be set in the output layer of the neural network. The SoftMax activation function is then used to obtain the output vector.
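
    A minimal Keras sketch of this architecture is shown below, assuming six input features and five PReLU hidden units as described; since a SoftMax over a single unit is degenerate, the sketch uses a sigmoid output for the single-unit binary case, which is a substitution on our part rather than the authors' stated choice.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_amd_mlp(n_features: int = 6) -> tf.keras.Model:
        """Three-layer perceptron: 6 input features -> 5 PReLU hidden units -> 1 output unit."""
        return models.Sequential([
            layers.Input(shape=(n_features,)),      # one unit per behavioral/text feature
            layers.Dense(5),                        # hidden layer with 5 units (chosen after testing)
            layers.PReLU(),                         # parametric ReLU with a learnable negative slope
            layers.Dense(1, activation="sigmoid"),  # probability of academic misconduct
        ])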

    4 Experiments

    In this section, we present the details of the experiments and propose different experimental schemes based on the experimental data. The experimental results are then analyzed and discussed.

    The relevant parameters of the model in this study were determined after many tests. Text lengths are measured in words: the title must not exceed 50 words, the abstract 500 words, and the keywords 30 words. To facilitate calculations, the difference between paper numbers is used as a measure of the interval between paper submissions. The activation function of the hidden layer in the neural network is the PReLU function, and the activation function of the output layer is the SoftMax function. The model is trained with the Adam optimizer [25], with the learning rate set to 0.01, the batch size to 200, and the epoch count to 100.
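
    Continuing the model sketch from Section 3.2.2, the snippet below mirrors the stated training configuration (Adam, learning rate 0.01, batch size 200, 100 epochs); the loss function, metric choices, and the X_train/y_train placeholders are assumptions made for illustration.

    import tensorflow as tf

    model = build_amd_mlp()  # the MLP sketch defined in Section 3.2.2
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),            # learning rate 0.01
        loss="binary_crossentropy",                                        # assumed loss for the binary task
        metrics=["accuracy", tf.keras.metrics.Recall(), tf.keras.metrics.AUC()],
    )
    # X_train / y_train: SMOTE-resampled behavioral (+ text-similarity) features and labels.
    # history = model.fit(X_train, y_train, batch_size=200, epochs=100, validation_split=0.3)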

    4.1 Dataset Introduction

    The experimental data in this study come from the submission system used by TSP, a publisher with which we cooperate. While the publisher publishes a large number of academic papers every year, it has also found many malicious submissions, such as multiple submissions and duplicate submissions. We collected a series of 25,238 items of behavioral data, basic information on scholars, and summary data of papers from 1,823 users from 2020 to 2021, including paper titles and submission behavior data.

    Many records had little or no relevance to this experiment because of low correlation with the classification target or because they were too evenly distributed (for instance, scholar username, scholar registration time, etc.). Therefore, we removed these records from the data, counted contributors who submitted more than twice, and constructed the dataset for this experiment after re-screening. As shown in Tab. 1, the scholar users are divided into positive and negative cases, and the ratio of positive to negative cases is about 1:13.

    Table 1: Scholar submission dataset

    4.2 Evaluation Metrics

    In order to test the performance of the model, we use accuracy, precision and recall as the evaluation metrics, as expressed in the formulae below. False Positives (FP) denotes the number of true negative samples predicted as positive, True Positives (TP) the number of true positive samples predicted as positive, True Negatives (TN) the number of true negative samples predicted as negative, and False Negatives (FN) the number of true positive samples predicted as negative.

    Accuracy = (TP + TN) / (TP + TN + FP + FN)

    Precision = TP / (TP + FP)

    Recall = TP / (TP + FN)

    Due to the imbalance of the sample set, the Receiver Operating Characteristic (ROC) curve is used to reflect the sensitivity and accuracy of the model under different thresholds. The Area Under the Curve (AUC) [26] represents the area under the ROC curve, which can be used to measure the generalization performance of the model and gives a more intuitive indication of classification effectiveness. The True Positive Rate (TPR), shown in Eq. (6), is used as the vertical coordinate of the ROC curve, the False Positive Rate (FPR), shown in Eq. (7), as the horizontal coordinate, and AUC is calculated by the formula shown in Eq. (8).

    TPR = TP / (TP + FN)   (6)

    FPR = FP / (FP + TN)   (7)

    AUC = (Σ_{s_i ∈ positive} Rank_{s_i} − M(M + 1)/2) / (M × N)   (8)

    In the above, M and N are the numbers of positive and negative samples respectively, s_i represents the serial number of the i-th sample, Rank_{s_i} denotes the rank of the score obtained by the i-th sample, and s_i ∈ positive refers to the serial numbers of the positive samples.
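
    As a sketch, the functions below compute the basic metrics and the rank-based AUC estimate of Eq. (8) with NumPy; variable names are illustrative, and ties between scores are ignored for simplicity.

    import numpy as np

    def basic_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
        """Accuracy, precision and recall from binary labels and binary predictions."""
        tp = np.sum((y_true == 1) & (y_pred == 1))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        return {
            "accuracy": (tp + tn) / len(y_true),
            "precision": tp / (tp + fp) if (tp + fp) else 0.0,
            "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        }

    def rank_auc(y_true: np.ndarray, scores: np.ndarray) -> float:
        """AUC = (sum of positive-sample ranks - M(M+1)/2) / (M * N), ranks counted from 1 by ascending score."""
        order = np.argsort(scores)
        ranks = np.empty(len(scores), dtype=float)
        ranks[order] = np.arange(1, len(scores) + 1)
        m = int(np.sum(y_true == 1))   # number of positive samples M
        n = int(np.sum(y_true == 0))   # number of negative samples N
        return (ranks[y_true == 1].sum() - m * (m + 1) / 2) / (m * n)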

    4.3 Experimental Results

    4.3.1 Experimental Scheme Using Behavioral Data

    Here, we describe the innovative attempt to use scholar behavior data to analyze the probability of academic misconduct.

    To better test the effect of the data on the classification results, we specified and limited the range of data values. An excessively high number of submitted papers or submission frequency is treated as indicating an abnormal tendency; paper acceptance and rejection rates measure the scientific level of the authors; and the number of different journals or special issues to which papers are submitted is used as a feature to calculate the probability of duplicate submissions and multiple submissions. In this experiment, 70% of the samples are used as the training set, and the remaining 30% as the test set. The experimental data are shown in Tab. 2.

    Table 2: Scholar behavior data

    Fig. 7 shows the results of the model evaluation after using scholar behavior data only. The results show that scholars with academic misconduct can be effectively detected by using behavioral data as features. The upper left panel shows the loss values of the model during training and testing. As the epochs increase, the loss values gradually stabilize and are maintained at around 0.38 and 0.3 respectively. The upper right panel evaluates the accuracy of the model. It can be seen that after 100 iterations, the accuracy of the test set is close to the accuracy level of the training set, reaching about 84%. Because the dataset is imbalanced, containing far fewer positive samples than negative samples, accuracy alone is not a good reflection of the model's performance. The recall rate in the lower left panel measures the proportion of positive samples identified by the classifier; the average recall of the model reaches a good level of about 88%. The lower right panel shows the ROC curve, where the dashed black line represents the random classifier and the solid red curve measures the model's ability. The AUC value of 0.9 shows objectively that the model has a good performance.

    Figure 7: Model performance using behavioral features as input (loss, accuracy, recall, and ROC curve)

    4.3.2 Experimental Scheme Combining Behavioral and Text Data

    The text data used in mainstream plagiarism detection methods are now added to our behavioral data to test the combined effect of the two different data types on the classification results. Here again, the text data length is restricted to filter noise, as shown in Tab. 3.

    Table 3: Text data of authors' papers

    To test whether the detection performance can be further improved by adding text features, the experiment shown in Fig. 7 is continued with text data, and the results are shown in Fig. 8. In terms of loss, both the test set and the training set show better performance, indicating that the model is better trained. As for accuracy, both gain a large improvement, from an original highest point of less than 90% to a basically stable level of 90%. On the other hand, the recall rate of the model does not change significantly and is basically maintained at the original level, indicating that adding text data does not have a significant effect on improving recall. The AUC value of the ROC curve, however, is 0.925, which is larger than the 0.901 in Fig. 7 and shows that the combination of text features and behavioral features enables the model to perform better.


    Figure 8: Model performance combining behavioral and text features (loss, accuracy, recall, and ROC curve)

    The analysis and mining of text data can detect plagiarism effectively, but has difficulty finding other forms of academic misconduct. The experimental results here show that, by using website data on scholars' behavior in the submission process, we can detect multiple submissions and duplicate submissions effectively, while the identification rate can be further improved by adding text data features.

    5 Conclusion

    With academic misconduct becoming a growing problem, it is far from sufficient to detect academic misconduct merely by analyzing article content. Many problems of academic misconduct manifest in the behaviors of authors and related stakeholders. Just as He et al. [27] use abnormal trading behavior to predict outcomes, we integrate behavioral features to assist detection. This paper builds a neural network model that automatically updates the weights of behavioral features and classifies scholars based on the probability of their behavioral misconduct. Experimental results prove that using author behaviors is an effective method of detecting problems such as multiple submissions and duplicate submissions. Moreover, the organic combination of behavioral data and text data brings a considerable improvement in the accuracy and recognition rate of the model, indicating that the comprehensive analysis of different types of data can help to improve the model's performance.

    Funding Statement: This work is supported by the National Key R&D Program of China under grant 2018YFB1003205; by the National Natural Science Foundation of China under grants U1836208 and U1836110; by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; and by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China.

    Conflicts of Interest: The authors declare no conflicts of interest regarding the present study.
