
    Detection of Behavioral Patterns Employing a Hybrid Approach of Computational Techniques

    Computers, Materials & Continua, 2022, Issue 7

    Rohit Raja, Chetan Swarup, Abhishek Kumar, Kamred Udham Singh, Teekam Singh, Dinesh Gupta, Neeraj Varshney and Swati Jain

    1Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, 495009, India

    2Department of Basic Science, College of Science & Theoretical Studies, Saudi Electronic University, 13316, Saudi Arabia

    3Department of Computer Science & IT, JAIN (Deemed to be University), Bangalore, 560069, India

    4Computer Science and Information Science, Cheng Kung University, 621301, Taiwan

    5School of Computer Science, University of Petroleum and Energy Studies, Dehradun, 248007, India

    6Department of CSE, I K Gujral Punjab Technical University, Jalandhar, 144603, India

    7Department of Computer Engineering and Applications, GLA University, Mathura, 281406, India

    8Department of Computer Science, Government J Yoganandam Chhattisgarh College, Raipur, 492001, India

    Abstract: As far as the present state of detecting the behavioral pattern of humans (subjects) using morphological image processing is concerned, a considerable portion of prior study has been conducted using frontal-vision data of human faces. The present research work uses side-vision human-face data to develop a theoretical framework via a hybrid analytical model approach. Here, hybridization combines an artificial neural network (ANN) with a genetic algorithm (GA). We studied the geometrical properties extracted from side-vision human-face data. An additional study was conducted to determine the ideal number of geometrical characteristics to pick while clustering. Minimum-distance measurements in the close vicinity of these clusters are mapped for proper classification and for the decision process of the behavioral pattern. Support vector machines and artificial neural networks are utilized to identify the acquired data. A method known as adaptive unidirectional associative memory (AUTAM) was used to map one side of a human face to the other side of the same subject. The behavioral pattern has been detected based on a two-class classification problem, and the decision process has been carried out using a genetic algorithm with best-fit measurements. The algorithm developed in the present work has been tested on a dataset of 100 subjects as well as on standard databases such as FERET, Multi-PIE, the Yale Face database, RTR, and CASIA. The complexity measures have also been calculated under worst-case and best-case situations.

    Keywords: Adaptive-unidirectional-associative-memory technique; artificial neural network; genetic algorithm; hybrid approach

    1 Introduction

    Detecting the behavioral pattern of any subject (human) is a most challenging task, especially in the defense field. The current study examines this challenge using a side-view perspective of human-face data. According to the literature, only a few researchers have used side visions of human faces to identify behavioral traits. Most research has been conducted using frontal-vision data of human faces, either for face recognition or for biometric characteristic assessment. Until now, very few studies have been carried out to detect behavioral patterns. Several significant improvements have been made in identifying human faces from the side (parallel to the picture plane) using a five-degree switching mechanism in a regressive or decreasing-step method. Bouzas et al. [1] used a similar method for dimensional-space reduction, with the switching amount based on the mutual information required between the altered data and their associated class labels. Later on, [2] enhanced this performance by using descriptors to describe human-face pictures and a clustering algorithm to choose and classify variables for human-face recognition.

    Furthermore, Chatrath et al. [3] used facial emotion, employing human-face front-vision, to enable interaction between people and robots. Zhang et al. [4] also carried out frontal-vision related work in which the target is at a distance. Many researchers suggested better regression-analysis classification techniques using the upfront perspective of human-face data. Zhao et al. [5] showed that learning representations to predict the placement and shape of face images may boost emotion detection from human images. Similarly, Wang et al. [6,7] suggested a technique of interactive frontal processing and segmentation for human-face recognition. The literature analysis indicated that relatively few scholars carried out work to discover behavioral patterns from human-face data. Moreover, most of the study was conducted using statistical methodologies and classic mathematical techniques. Earlier, some artificial neural network components and other statistical approaches achieved significant and satisfactory results [8-10].

    A subsequent study was conducted to recognize the subject when the human face is aligned parallel to the picture plane, using a hybrid methodology [11,12]. The current research study was also conducted employing a hybrid computing approach.

    In the same year, algorithms were proposed for secured photography using a dual camera. This method helps identify issues such as authentication, forgery detection, and ownership management. The algorithm was developed for Android phones having dual cameras, for security purposes [13].

    A fuzzy logic-based facial expression recognition system was introduced that identifies seven basic facial expressions: happiness, anger, sadness, neutrality, fear, surprise, and disgust. This type of system is used for the intelligent selection of areas in a facial expression recognition system [14]. An algorithm was proposed for use in a video-based face recognition system. This algorithm can compare a still image with a video and match videos with videos. For optimizing the rank list across video frames, a three-stage approach is used for effective matching of videos and still images [15]. Another method explored facial asymmetry using optical flow. In terms of shape and texture, the human face is not bilaterally symmetric, and the attractiveness of human facial images can be increased by artificial reconstruction and facial beautification; using optical flow, the image can be reconstructed according to the required symmetry [16]. An effective, efficient, and robust method for face recognition based on image sets (FRIS), known as Locally Grassmannian Discriminant Analysis (LGDA), has also been proposed. For finding the optimal set of local linear bases, a novel accelerated proximal-gradient-based learning algorithm is used. LGDA is combined with the clustering technique linearity-constrained nearest neighborhood (LCNN) to express the manifold by a collection of local linear models (LLMs).

    An algorithm was proposed to change the orientation of the subject's face from parallel (zero degrees) to the picture plane to diagonal (45 degrees) to the image plane. In this research, artificial neural networks (ANN) and genetic algorithms (GA) were used. The detailed research is divided into two parts: in the first part, features are obtained from the frontal face and a database is built; in the second part, a test face image with all feasible alignments is developed, and a hybridized forward-computing approach is performed for proper identification of the subject's face. A perfectly matched classification-decision procedure must be performed utilizing the datasets generated in the current research activity. Other datasets, such as FERET, were also examined for an acceptable optimization method. An algorithm was designed to identify cognitive qualities and the subject's physiological attributes to support the biometric safety system. To support the biometric security system, specific case analyses must also be conducted. Development has relied on widely used datasets, and a suitable comparison methodology has been analyzed [16]. These studies reveal various features with varying performance [17]. The work was structured as a biometric study. Using a deep CNN with genetic segmentation, one study proposes a method for the autonomous detection and recognition of animals. Standard recognition methods such as SU, DS, MDF, LEGS, DRFI, MR, and GC are compared to the suggested work. A database containing 100 different subjects, two classes, and ten photos is produced for training and examining the suggested task [18]. The CBIR algorithm examined visual image characteristics such as colour, texture, and shape; non-visual aspects also play a key role in image retrieval. The image is extracted using a neural network, which enables the computation to be improved using the Corel dataset [19,20]. Another paper presents a new age-function modelling technique based on the fusion of local features. Image normalization is initially performed, followed by a feature-extraction process. An Extreme Learning Machine (ELM) classifier is used to evaluate output pictures for the respective input images [21]. The proposed algorithm has a higher recall value and accuracy and a lower error rate than previous algorithms. A new 5-layer SegNet-based encoder enhances the accuracy on various dataset benchmarks; the detection rate was up to 97 percent, and the runtime is reduced to one second per image [22].

    Modeling of Datasets

    In the present work, how the modeling of datasets is done is described briefly. The complete work has been carried out in two phases: the modeling phase and the understanding phase. In the first phase, a knowledge-based model, called the RTR database model, has been formed as a corpus over human-face images. The strategies applied for the formation of the corpus are the image warping technique (IWT) and an artificial neural network (ANN). The model has been formed after capturing the human-face image through a digital camera or by scanning the human-face image (refer to Appendix B). Also, human images have been collected from different standard databases (refer to Appendix A). How the human-face images have been captured in the present work is depicted in Fig. 1 below.

    Figure 1: Functional block diagram for capturing the human face image

    From Fig. 1, a known human-face image is captured through hardware, meaning a camera or a scanner. During capture, a feedback control mechanism is applied manually, and adjustments for two factors are made: resolution and distance. A fixed resolution is kept while capturing a known human-face image, and the distance between face and camera is fixed at 1 meter. Any remaining variation in the second factor is overcome by proper scaling and rectification of the image. This process is jointly called the image warping technique (IWT). After proper adjustment, the image is stored in a file with the .jpg (Joint Photographic Experts Group) extension.
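
    A minimal sketch of this IWT storage step is given below, assuming OpenCV is available. The file names, the 640 x 480 target resolution, and the JPEG quality value are illustrative assumptions, not values taken from the paper.

        import cv2

        def warp_and_store(src_path, dst_path, size=(640, 480), quality=95):
            img = cv2.imread(src_path)                       # captured face image
            if img is None:
                raise FileNotFoundError(src_path)
            # fixed-resolution rectification (scaling step of the IWT)
            rectified = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
            # store the adjusted image in .jpg format
            cv2.imwrite(dst_path, rectified, [cv2.IMWRITE_JPEG_QUALITY, quality])
            return rectified

        # warp_and_store("subject01_left.png", "subject01_left.jpg")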

    The objectives and highlights of the research work are represented by the following steps:

    • An enhanced and compressed image (human-face image) has to be obtained.

    • Segmentation of the face image has to be done.

    • Relevant features have to be extracted from the face image.

    • Modeling of face features using the artificial neural network (ANN) technique, wavelet transformation, fuzzy c-means and k-means clustering techniques, and forward-backward dynamic programming (a minimal clustering sketch follows this list).

    • Development of an algorithm for the formation of the above model.

    • Understanding of the above-framed model for automatic human face recognition (AHFR) using a genetic algorithm method and classification using fuzzy set rules or theory.

    • Development of an algorithm for the understanding of the human face model for AHFR.
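
    As a minimal illustration of the clustering step referenced in the list above, the sketch below groups extracted geometrical face features with k-means. The random feature matrix, the four-feature dimensionality, and the choice of two clusters are illustrative assumptions rather than values from the paper.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        features = rng.random((100, 4))      # 100 face samples x 4 geometrical features (dummy data)

        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
        print(kmeans.labels_[:10])           # cluster assignment of the first ten samples
        print(kmeans.cluster_centers_)       # one centroid per cluster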

    The remainder of the paper is structured as follows: Section 2 provides the solution methodology with mathematical formulations, Section 3 discusses the actual results, Section 4 concludes with remarks and directions for further study, and the references are listed at the end of the paper.

    2 Solution Methodology with a Mathematical Formulation


    The mathematical formulations of the current study and their practical execution are described in the following subsections.

    As far as the present situation in the field of morphological image processing is concerned, a great deal of human-face identification research has been carried out using a 90-degree orientation of the subject in the imaging plane. The majority of this work was done using statistical methods and traditional mathematical approaches, and a number of soft-computing components and other statistical methods have yielded remarkable results. Little effort has been made to recognize the subject when the subject's human face is parallel to the image plane using hybrid approaches of soft-computing technology. A soft-computing hybrid means a combination of soft-computing tools. The current research has also been done using advanced computing techniques; high-end technology involves combining soft computing and symbolic computing. Soft computing includes artificial neural networks, fuzzy set theory, and genetic algorithms. Symbolic computing operates on a special type of data known as symbolic objects; such computing enables researchers to perform mathematical operations without numerical value calculation. Analytical symbolic calculations include differentiation, partial differentiation, definite and indefinite integration, and the evaluation of limits, as well as transformations over symbolic objects containing symbolic variables, symbolic numbers, symbolic expressions, and matrices.

    Very few previous contributions have been made using a neuro-genetic approach to recognize the human face of a side-view subject (parallel to the image plane). The mechanism was applied as a fully 2-D transformation method with a five-degree switch in reducing-step or regressive strategies. An algorithm was developed for the recognition process in which the orientation of the subject's human face was moved from zero degrees to the diagonal position of the image plane (45 degrees). The techniques used are artificial neural networks (ANN), genetic algorithms (GA), and some useful computing concepts to identify a human face from the side view (parallel to the image plane).

    The research was done in two phases: the modelling phase and the understanding phase. In the first phase, only the frontal image of the human face is studied to extract relevant features for the development of a corpus called the human-face model. In the second phase, a test face image with all possible orientations is captured, and the advanced computing approach is applied with high-end computing in order to properly recognize the subject. A correct matching-classification-decision process using the datasets created during the present research has been carried out. Other datasets such as FERET and CASIA were also tested for acceptable performance measures. Furthermore, the computation of polynomial complexities with proper transmission capability has been studied with adequate justification, rather than only spatial and time complexities, for improving the performance of secured systems and the promotion of global cyber safety. An algorithm has been developed to support the overall identification of the subject's behavioral and physiological features in order to justify the biometric security system. A case-based study has also taken into account various datasets to justify the biometric safety system, and a proper comparison model with different characteristics and performance variations is shown. The comparative study and final observations were experimentally tested with at least 10 images of 100 subjects of different ages. The complexity of the developed algorithms and their performance measurements were also compared with other databases such as FERET, Multi-PIE, Yale Face, and RTR. The corpus and algorithms developed in this work have been found to be satisfactory. The complete flow diagram of the work is given in Fig. 2.

    2.1 Behavioral Pattern Detection

    Let the side-vision human-face data be gathered under the situations stated below:

    • The subject is either standing or sitting idle

    • The outfit of the subject is actual

    • The subject is talking with someone face to face

    Let "ZLnFL1", "ZLnFL2", "ZLnFL3", "ZLnFL4", and "ZLnFL5" be the left-side-vision human-face data at five different time intervals with minimum time lag for the subject 'ZLn'. Similarly, let "ZRnFR1", "ZRnFR2", "ZRnFR3", "ZRnFR4", and "ZRnFR5" be the right-side-vision human-face data at five different time intervals with minimum time lag for the subject 'ZRn', where 'n' is the subject number in the range 1 ≤ n < ∞.
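
    As a minimal sketch of how this notation could be organized in code, the snippet below stores, for each subject n, five left-side frames ZLnFL1 to ZLnFL5 and five right-side frames ZRnFR1 to ZRnFR5 captured with a small time lag. The dictionary layout, frame size, and placeholder capture function are illustrative assumptions.

        import numpy as np

        def capture_frame():
            return np.zeros((480, 640), dtype=np.uint8)   # placeholder for a real capture

        subjects = {}
        for n in range(1, 4):                             # subjects Z1..Z3 (n >= 1)
            subjects[n] = {
                "left":  [capture_frame() for m in range(1, 6)],   # ZLnFLm, m = 1..5
                "right": [capture_frame() for m in range(1, 6)],   # ZRnFRm, m = 1..5
            }

        print(len(subjects[1]["left"]), len(subjects[1]["right"]))  # 5 5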

    Normally, the left-side-vision pattern of the human face is "ZLnFLm" and that for the right-side-vision of the human face is "ZRnFRm".

    So, the frontal-vision human-face data "ZRnVLRm" will yield to,

    (A) Clustering of geometrical features from left-side-vision of human-face data

    Clusters have even and odd elements. To distinguish the clustering of the left-side data, ZLnFLm, into even and odd components, consider 'FZT' training datasets, where 'F' represents human-face data and 'T' represents the total training datasets. The even component and the odd component are 'FZE' and 'FZO', respectively, for a left-side vision 'L1'. Hence it yields to,

    Figure 2: Complete flow diagram of work

    So, the total training image set 'T' is given by the sum of the even training sample images 'E' and the odd training sample images 'O'. Combining these linearly, Eq. (2) gives,

    Thus, the equation for highly interconnected and poorly interconnected human-face data is represented by ZLnFLm, and it becomes,

    where ρT is the convolution operator and the accompanying term is the linearity factor for the total training datasets,

    Now, for the even cluster We, the mean is μe for the even human-face sample Ne; for the odd cluster Wo, the mean is μo for the odd human-face sample No; and the overall sample mean is represented by μT,

    The dispersion of the projected means over the odd and even training human-face sample images is represented by,

    Let LFin = {LF1, LF2, ..., LFn} and DFout = {DF1, DF2, ..., DFm} be the input left-side-vision data and the output codewords, respectively, both of maximum size. The test data set BFtest_data_set = {BF1, BF2, ..., BFu} has been matched against the trained data set AFtrained_data_set = {AF1, AF2, ..., AFq} under the linearity-index condition DFout = DFin,

    Mathematically the relation is,

    The previous metrics provide the closest representation of the human face's left-side image with its tightly coupled crucial elements. Thus, the system's experience-and-understanding database analyzed each extracted feature in the data-processing stream, performed matching, and picked the best codeword as the one with the minimum average distance. If the unknown vector is not close to any known vector, this condition is considered an OOL (out-of-limit) issue. Attributing values to all database codewords has reduced the OOL issue. The highest vector values thus yield to,

    In Eq. (10), CDIFF is the absolute difference for the cropped pattern, and it yields,

    Dividing Eq. (8) by Eq. (10) yields CCMR,

    (B) Clustering of geometrical features from right-side-vision of human-face data

    Similarly, to distinguish the clusters of the right-side-vision human-face data, ZRnFRm, into even and odd components, consider 'FZT'. The mathematical formulations for this part follow Eqs. (2) through (11).
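
    A minimal sketch of the matching step described in this subsection is given below: each test feature vector is compared with the trained codewords, the codeword with the minimum distance is selected, and a vector farther than a threshold from every codeword is flagged as out-of-limit (OOL). The Euclidean distance metric, the threshold value, and the toy vectors are illustrative assumptions.

        import numpy as np

        def match_codeword(test_vec, codewords, ool_threshold=2.5):
            dists = np.linalg.norm(codewords - test_vec, axis=1)   # distance to each codeword
            best = int(np.argmin(dists))
            if dists[best] > ool_threshold:
                return None, float(dists[best])                    # OOL: no acceptable match
            return best, float(dists[best])

        codewords = np.array([[0.2, 0.4, 0.1], [0.9, 0.8, 0.7]])   # trained codewords (AF)
        test_vec = np.array([0.25, 0.35, 0.15])                    # test feature vector (BF)
        idx, dist = match_codeword(test_vec, codewords)
        print(idx, round(dist, 3))                                 # closest codeword and its distance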

    2.2 Gradient or Slope of Human-Face Data with Strongly Connected Components

    Let the slope of the human-face pattern be x∇SCF, where SCF means the strongly connected features "shape", "size", "effort", and "momentum", and the superscript 'x' is the number of strongly connected features. Let x∇LSCF and x∇RSCF be the slopes (gradients) of the left-side-vision and right-side-vision human-face data, respectively,
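
    A minimal sketch of this gradient computation follows: the slope of each strongly connected feature (shape, size, effort, momentum) is computed over a short sequence of frames for the left- and right-side-vision data. The sampled feature curves are illustrative assumptions.

        import numpy as np

        left_scf = np.array([[1.0, 1.2, 1.5, 1.9],    # "shape" over four frames
                             [0.8, 0.9, 1.1, 1.4],    # "size"
                             [0.5, 0.7, 0.6, 0.9],    # "effort"
                             [0.3, 0.4, 0.6, 0.7]])   # "momentum"
        right_scf = left_scf + 0.05                   # placeholder right-side data

        grad_left = np.gradient(left_scf, axis=1)     # slope of each feature curve
        grad_right = np.gradient(right_scf, axis=1)
        print(grad_left[0])                           # gradient of the "shape" feature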

    3 Results and Discussions

    Practical investigations and discussions regarding identifying behavioral patterns have been conducted following the image pre-processing activities. The data were initially processed using the schematic diagram and standardized image processing methods. Signal processing has been achieved utilizing the discrete cosine transform technique, as it has been shown to work flawlessly for real-coded vectored analysis. The segmentation procedure was then carried out by statistical analysis. Boundary detection is achieved using morphological methods from digital image processing, such as erosion and dilation. The image warping approach was employed by first picking the region of interest (ROI) and hence the objects of interest (OOI). Cropping the object and subsequent image rectification are performed to ensure that the system's efficiency does not suffer. Cropping the picture results in the storage of the cropped picture in a separate file and the extraction of crucial geometrical information. The clusters of these obtained features are shown in Fig. 3.
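
    A minimal sketch of this pre-processing chain, assuming OpenCV is available: discrete cosine transform of the grayscale face, erosion and dilation to obtain a rough boundary map, and cropping of the region of interest. The input file name, the 128 x 128 working size, the 5 x 5 structuring element, and the fixed ROI box are illustrative assumptions.

        import cv2
        import numpy as np

        gray = cv2.imread("subject01_left.jpg", cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError("subject01_left.jpg")
        gray = cv2.resize(gray, (128, 128))                  # even size, as required by cv2.dct

        coeffs = cv2.dct(np.float32(gray) / 255.0)           # discrete cosine transform

        kernel = np.ones((5, 5), np.uint8)
        eroded = cv2.erode(gray, kernel, iterations=1)       # morphological erosion
        dilated = cv2.dilate(gray, kernel, iterations=1)     # morphological dilation
        boundary = cv2.subtract(dilated, eroded)             # rough boundary map

        x, y, w, h = 32, 32, 64, 64                          # assumed ROI box
        roi = gray[y:y + h, x:x + w]                         # cropped object of interest
        cv2.imwrite("subject01_left_roi.jpg", roi)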

    Figure 3: Clusters of features extracted from test data of human-face

    As seen in Fig. 4, only a few factors exhibit uniformity. As a result, additional analysis was conducted by switching the data set with five degrees of freedom between regressive and advanced modes. The results were graphed and are shown in Fig. 4.

    Figure 4: Comparison of the different testing frames with different switching patterns

    As seen in Fig. 5, the pattern's typical behavior is quite uniform. This indicates that the curve's behavior is not highly diverse in origin. The test data set has been subjected further to descriptive statistics. From Fig. 5, it has been observed that the clusters of the trained and test data sets of the human face are almost close to linearity. Further, the cumulative distribution of the above test data sets has been computed, as shown in Fig. 5.

    Figure 5: Normal distribution of different data sets of human-face

    As seen in Fig. 6, the boundary values of both the test and training data sets are exceptionally near to fitness. Therefore, the most acceptable test was done utilizing the genetic algorithm methodology, adjusted for the most vital measurements. If compatibility fails, another subject sample is taken. As a result, further analysis is performed to determine the better measures. This was accomplished by gradually or regressively switching human faces with a five- to ten-degree displacement. Subsequently, it was discovered that most variables follow the regular pattern of a corpus's training samples.
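
    A minimal sketch of this best-fit selection step: a small genetic algorithm that searches for the subset of geometrical features with the highest fitness. The placeholder fitness weights, population size, mutation rate, and generation count are illustrative assumptions standing in for the paper's actual fitness measure.

        import numpy as np

        rng = np.random.default_rng(1)
        N_FEATURES = 7

        def fitness(mask):
            # placeholder fitness: reward informative features, lightly penalize large subsets
            weights = np.array([0.9, 0.2, 0.8, 0.1, 0.7, 0.3, 0.6])
            return float(mask @ weights) - 0.05 * float(mask.sum())

        pop = rng.integers(0, 2, size=(20, N_FEATURES))      # random binary chromosomes
        for _ in range(50):                                   # generations
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-10:]]           # keep the fittest half
            children = []
            for _ in range(10):
                a = parents[rng.integers(10)]
                b = parents[rng.integers(10)]
                cut = int(rng.integers(1, N_FEATURES))
                child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
                flip = rng.random(N_FEATURES) < 0.1           # bit-flip mutation
                child = np.where(flip, 1 - child, child)
                children.append(child)
            pop = np.vstack([parents, np.array(children)])

        best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
        print("best-fit feature mask:", best)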

    Figure 6: Boundary of face code detection using unidirectional temporary associative memory (UTAM)

    As a consequence, best-fit measurements were chosen, and further segmentation and detection analyses utilizing the soft-computing genetic algorithm approach were done. Using face-code formation, one-to-one mappings were performed throughout this technique. Fig. 6 shows the boundary for detecting the face code using unidirectional temporary associative memory. Associative memories are neural network designs that store and retrieve correlated patterns by their essential elements when prompted by associated matches; in other words, associative memory is the storage of related patterns in an acceptable form. Fig. 7 shows the graphical behavioral pattern matching of the test data sets. "Over act" is labeled as "abnormal behavior" while "normal act" is labeled as "normal behavior." Whenever the behavioral characteristic curve shows no interruptions, the behavior is as expected. When the behavioral characteristic curve has a large number of interruptions, it is considered under-act behavior; when the characteristic curve has a smaller number of disruptions, it is considered over-act behavior.
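
    A minimal sketch of a unidirectional (hetero-)associative memory of the kind used here to map a left-side face code to the corresponding right-side code: a Hebbian weight matrix built from outer products of bipolar code pairs. The 8-element code length and the stored pairs are illustrative assumptions.

        import numpy as np

        def store(pairs):
            n, m = len(pairs[0][0]), len(pairs[0][1])
            W = np.zeros((m, n))
            for x, y in pairs:                     # x: left-side code, y: right-side code
                W += np.outer(y, x)                # Hebbian outer-product learning
            return W

        def recall(W, x):
            return np.sign(W @ x)                  # one-shot unidirectional recall

        left1 = np.array([1, -1, 1, 1, -1, -1, 1, -1])
        right1 = np.array([1, 1, -1, 1, -1, 1, -1, -1])
        left2 = np.array([-1, 1, 1, -1, 1, -1, -1, 1])
        right2 = np.array([-1, -1, 1, -1, 1, 1, 1, -1])

        W = store([(left1, right1), (left2, right2)])
        print(np.array_equal(recall(W, left1), right1))   # True if the stored pair is recovered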

    The whole behavioral detection system's performance is proportional to the size of the corpus or database. If the unknown pattern is not matched against any known pattern during the detection procedure, an OOC (out-of-corpus) error occurs. A substantial corpus has been generated in this study to prevent such issues, thereby resolving the OOC issue. Numerous common databases such as FERET, Multi-PIE, and the Yale Face database were also examined in the current study for automated facial recognition. Tab. 1 below illustrates the comparison.

    Figure 7: Applying a distinct subject's face image to the overacting, regular-act, and underacting moods

    Table 1: Performance measure comparison for automatic face recognition using the FERET, Multi-PIE, Yale Face, and RTR face databases

    The obtained RTR corpus database was evaluated in combination with FERET, Multi-PIE, and the Yale Face database, and the results were determined to be almost adequate. Fig. 8 illustrates the comparison graphically.

    As seen in Fig. 8, the corpus created in this research endeavor produces findings that are quite similar to those found in the FERET database. Additionally, it has been shown that after selecting two features, the behavioral detection system's performance increases, and the whole evaluation procedure stays positive with seven features picked at the highest difficulty levels. Additionally, Tab. 2 illustrates the overall performance for behavioral detection when the maximum number of characteristics is included.

    Fig. 9 represents the full behavioral detection system results graphically, with an average detection rate of 93 percent for the normal behavioral pattern.

    Fig. 10 illustrates the developed algorithms' behavioral performance metrics and their comparability to other algorithms that use human-face patterns. In addition to the findings and comments in the current study, Fig. 10 depicts the general behavioral pattern for the training and testing datasets.

    Figure 8: Comparative study and analysis as per complexity measures

    Table 2: Performance measures of behavioral detection

    Figure 9: Graphical representations of performance measures

    Figure 10: Overall behavioral pattern of trained and test data sets

    As seen in Fig. 11, when the appropriate set of attributes is found using the genetic algorithm methodology, the behavior of the training and test data sets shows a similar pattern. For the same dataset, the actual result was 93 percent recognition accuracy for normal behavior. Fig. 11 shows the result.

    The algorithm used to obtain the reported findings is given below, along with its complexity.

    Figure 11: Outcome of normal behavioral pattern of the test data set

    Developed Algorithm HCBABD (Hybrid Computing based Automatic Behavior Detection):

    Algorithm 1: The developed algorithm, called HCBABD (Hybrid Computing based Automatic Behavior Detection), is depicted below.

    Main program {starts}
    Step 1. GEN1: Initial input data1 stream X(o) = (X1, X2, ..., XM) and Y(o, p) = (Y1, Y2, ..., YN)
    Step 2. READ1: CorpusRTR (Max_Size Z) and counter q is set to 0
    Step 3. DO WHILE (b ≤ Z)
        GEN1: An input data1 Xr(b) & Yr(b)
        GENNEXT1: Next input data1 X(b+1) and Y(b+1) for Xr(b) & Yr(b)
        COMPUTE1: WEIGHTn condition for Linearity Index, INPUTn == OUTPUTn
        COMPUTE1: Gradient_n or Slope_n for {geometrical feature: HFM}
        INR: Increment the size or length, b = b + 1, and the counter, q = q + 1
        FITNESS_TEST1: f(Xi) & f(Yi) of each data stream Xi & Yi
        MAPPING: AUTAM (left_side_shape, right_side_shape)
        IF (TRUE), THEN
            {BEHAVIOUR1: Display "Normal and Acceptable", placed in category "NOT BOL"}
        ELSE
            {BEHAVIOUR1: Display "Abnormal and Not Acceptable", placed in category "BOL", and Display "Data is OOL (Out-Of-Limit)"}
    Main program {ends}

    Complexity measures for the developed system: In the worst-case assumption, let 'p' denote the total number of training data. The complexity is proportional to the number of loop executions divided by the total number of events. In the worst-case scenario, the loop will execute 'p + 4' times; thus, the worst-case complexity measure is '(p + 4)/p'. Similarly, in the best case, the smallest number of features necessary for the mapping procedure is one, which minimizes the execution time; thus, the best-case complexity measure is '(p + 1)/p'. Current automatic emotion recognizers typically assign category labels to emotional states, such as "angry" or "sad," relying on signal processing and pattern recognition techniques. Efforts involving human emotion recognition have mostly relied on mapping cues such as speech acoustics (for example, energy and pitch) and/or facial expressions to some target emotion category or representation. The comparative analysis of algorithms is shown in Tab. 3, and the time complexity is represented in Tab. 4.
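
    As a small worked example of these complexity measures, assume a training set of p = 100 samples; then the worst-case ratio is (p + 4)/p = 1.04 and the best-case ratio is (p + 1)/p = 1.01.

        p = 100
        print((p + 4) / p)   # worst-case complexity measure: 1.04
        print((p + 1) / p)   # best-case complexity measure:  1.01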

    Table 3: Comparative analysis of some recent algorithms with each dataset


    Table 4: Comparison of time complexity and memory consumption for different datasets

    4 Conclusion and Further Work

    In the present study, two behavioral patterns, namely normal and aberrant, have been categorized. The categorization is based on four geometrical characteristics taken from left- and right-side-vision human-face data. To make decisions about detecting behavioral patterns, these characteristics were clustered, and a correct mapping method was applied. The gradient of each of the human-face features extracted from the left- and right-side visions has been computed, and when plotted, a uniformity-index attribute feature is generated. The dispersion of the gradients has been calculated, providing either positive or negative values: for normal behavior the decision is favorable, and for aberrant behavior the decision is unfavorable. The efficiency of the proposed approach has been determined. In the worst-case scenario, the complexity of the suggested method is "(p + 4)/p"; in the best-case scenario, it is "(p + 1)/p," where 'p' is the total frequency of occurrence. The current work might be extended to incorporate identification and comprehension of human-brain signals and human-face-speech patterns to establish a trimodal biometric security system. The work might also be broadened to include diagnosing various health concerns related to breathing, speaking, brain function, and heart function. Furthermore, this technology might be utilized to further the development of a global multi-modal biometric network security system.

    Acknowledgement:The authors extend their appreciation to the Saudi Electronic University for funding this research work.

    Funding Statement: The authors thank the Saudi Electronic University for financial support to complete the research.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
