
    Human Pose Estimation and Object Interaction for Sports Behaviour

    2022-08-24 12:56:16
    Computers, Materials & Continua, 2022, Issue 7

    Ayesha Arif, Yazeed Yasin Ghadi, Mohammed Alarfaj, Ahmad Jalal, Shaharyar Kamal and Dong-Seong Kim

    1Department of Computer Science, Air University, Islamabad, 44000, Pakistan

    2Department of Computer Science and Software Engineering, Al Ain University, Al Ain, 15551, UAE

    3Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia

    4Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Korea

    Abstract: In the new era of technology, daily human activities are becoming more challenging to monitor in complex scenes and backgrounds. To understand scenes and activities from human life logs, human-object interaction (HOI) is important for visual relationship detection and human pose estimation. This work addresses activity understanding and interaction recognition between humans and objects, together with pose estimation and interaction modeling. Existing algorithms and feature extraction procedures struggle with accurate detection of rare human postures and occluded regions, and with unsatisfactory detection of objects, especially small-sized ones. Existing HOI detection techniques are instance-centric (object-based), predicting interactions between all candidate pairs; such estimation depends on appearance features and spatial information. We therefore propose a novel approach demonstrating that appearance features alone are not sufficient to predict HOI. We detect human body parts using a Gaussian Mixture Model (GMM), followed by object detection using YOLO. We predict interaction points that directly classify the interaction and pair them with densely predicted HOI vectors via an interaction algorithm. The interactions are then linked with the human and the object to predict actions. Experiments on two benchmark HOI datasets demonstrate the proposed approach.

    Keywords: Human object interaction; human pose estimation; object detection; sports estimation; sports prediction

    1 Introduction

    In the digital era, technology is the most significant tool for easing daily human life. Artificial Intelligence (AI) is a vast field used in various research developments of expert systems and computer vision. Automated systems have progressed significantly over the last couple of decades towards computerizing human tasks in several applications [1]. Human-object interaction is a vast domain with many complexities for artificially intelligent systems. Moreover, in a recent study, psychophysicists noted that understanding an image or video in a single glimpse is not easy even for humans [2]. Social events can be categorized into different fields, and each field involves different circumstances. This variety of events requires different kinds of classification between human, object, scene, and background. Therefore, a great deal of past research has examined humans and objects to understand events. To detect the human, authors have typically considered the human first, differentiated it from the background, and estimated the human's pose. After this, they employ object detection and classification techniques.

    Event classification and human-object interaction have been used in many applications such as surveillance systems, railway platforms, airports, and seaports, where detection of normal and abnormal events on real-time data is critical [3]. However, there are massive challenges in improving the accuracy of human-object interaction for sports and for security agencies that need to identify daily activities [4], office work, gym activities, smart homes and smart hospitals, and activities in educational institutions. All human-activity-based smart systems need to understand the event and make decisions to arrange activities in a well-organized manner.

    In this article, we propose a unique method for HOI recognition using object detection via a Gaussian Mixture Model (GMM) and human pose estimation (HPE). We developed a hybrid approach for pre-processing images, including salient maps, skin detection, HSV plus RGB detection, and extraction of geometrical features. We designed a system combining the GMM and k-means to detect the human skeleton, draw ellipsoids over human body parts, and detect objects by combining k-means with YOLO. A combination of scikit-learn and HOG was used for classification and activity recognition. We used two publicly available benchmark datasets for our model and fully validated our results against other state-of-the-art models. The proposed technique processes the data in four stages: elimination of unwanted components from the image, extraction of hierarchical features, detection of objects based on several factors, and classification. The proposed methodology has been applied to two publicly available datasets, PAMI'09 and UIUC Sports, and obtained a significant improvement in activity recognition rate over other state-of-the-art techniques.

    The rest of the paper is organized as follows. Section 2 reviews related work, Section 3 presents the architecture of the proposed model, and Section 4 describes the performance evaluation of the proposed work. Section 5 concludes the paper and provides some future directions.

    2 Related Works

    In this research article, we discuss human-object interactions using interaction algorithms over sports datasets.

    2.1 Human Pose Estimation

    A variety of approaches has been explored to improve scene labeling, since mislabeling causes false recognition [5]. The authors of [5] designed object recognition and representation methods that compare the overall pixels to understand the status of the image, then match a kernel to understand the object properly. By combining two techniques, the MRG technique and the segmentation tree, they show the contextual relationship with respect to edge detection by following connected components [6]. In [7], the adopted approach uses depth maps for CRF modeling and system development for scene understanding with low-brightness images and a simple background. Classification is done on the basis of kernel features [8]. Using depth images, detection and localization of objects in 3D and foreground segmentation of RGB-D images are performed in [9].

    2.2 Action Recognition via Inertial Sensor

    In [10], the authors extended a semantic manifold model by manually combining local contextual relationships and semantic relationships to classify events. Many researchers have approached human-object interaction detection through object detection and human body detection. In such approaches, human and object attention maps are constructed using contextual appearance features and local encoding. However, only detection of the object takes place instead of identification, and the approach is complex in terms of time and computationally expensive because of the additional neural network inference [11]. Similarly, the hierarchical segmentation proposed by [12] performed contour detection and boundary detection on RGB images. After segmentation, a histogram of oriented gradients (HOG) is obtained in combination with a deformable part model (DPM) for object detection.

    3 Material and Methods

    The proposed system is based on pre-processing, segmentation, and object detection as initial steps on the input images. After detecting an object in the image, the human is detected using salient maps and the pose of the human body is estimated using GMM [13]. After detecting the human and the object, we compute geometrical features from the segmented images, including the object centroid, length, width, and acquired area. We take the extreme points of the object (extreme left, extreme right, topmost, and bottom-most) relative to the object centroid. Next, we apply naive Bayes to the features from both techniques. From the human body, we detect skeleton (SK) features and full-body features from ellipsoids. Then, to symbolize the reduced features, we optimize the features and find the human-object co-occurrence. Finally, the last step is the classification of the event. An overview of the proposed system is shown in Fig. 1.

    Figure 1: The system architecture of the proposed model of Human Object Interaction (HOI)

    3.1 Preprocessing of the Data

    First of all, we need segmented noticeable regions to detect the human-object interaction. For this, we perform foreground extraction [14] and smart image resizing. We use the salient-maps method to extract the salient regions and detect salient objects, as shown in Fig. 2. Low-rank and LSDM models extract the saliency maps and also capture tree-structured sparsity with the norm. For efficient results, we partition the images into non-overlapping patches. The input image is divided into N patches {P_i}, i = 1, ..., N.

    Figure 2: Steps of human silhouette detection.(a) Result of skin tone detection (b) result of salient region detection and (c) result of smoothing filters and bounding box

    We extract D-dimensional features for each patch P_i and represent them as a vector F_i ∈ R^D. The feature matrix F = {f_1, f_2, ..., f_N} ∈ R^(D×N) is extracted from the matrix representation of the input image.

    Therefore, we have designed an algorithm to decompose the feature matrix as F = L + S, where L holds the redundant low-rank information and S the structured salient maps, and Ω(S) is a structured sparsity-inducing norm on S. To regularize and preserve the structures, we use the relevant and latent structures and their relationships.
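    The decomposition F = L + S above can be sketched with a simple alternating singular-value/soft-thresholding scheme. This is an RPCA-style approximation; the exact LSDM solver is not specified in the text, so the update rule and the weight λ below are assumptions for illustration.

```python
import numpy as np

def svd_shrink(M, tau):
    """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose_saliency(F, lam=None, n_iter=100):
    """Split a feature matrix F into a redundant low-rank part L and a sparse
    salient part S so that F ≈ L + S (a generic alternating scheme, not the
    paper's exact LSDM solver)."""
    D, N = F.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(D, N))   # standard RPCA sparsity weight
    L = np.zeros_like(F)
    S = np.zeros_like(F)
    for _ in range(n_iter):
        L = svd_shrink(F - S, 1.0)       # low-rank update
        S = soft_threshold(F - L, lam)   # sparse update
    return L, S
```

Because the loop ends with the sparse update, the entrywise residual |F − L − S| is bounded by λ by construction.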

    3.1.1 Foreground Extraction

    To enhance the silhouette extracted by the saliency maps, we perform segmentation via skin tone detection using a color-space transformation approach [15] with heuristic thresholds. We extract skin tone regions using the YCbCr model. The threshold values are specified as R = 0.299, G = 0.287, and B = 0.11.

    Through heuristic thresholds on the color-space transformation, enhanced regions are extracted. For chrominance segmentation, we precisely classify the skin and non-skin regions into different parts.
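    The chrominance-based skin segmentation described above can be sketched as follows. The RGB-to-YCbCr conversion uses the standard BT.601 coefficients; the Cb/Cr ranges are commonly cited heuristic skin values and are assumptions, not necessarily the exact thresholds used in the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 uint8 RGB image to YCbCr (ITU-R BT.601 coefficients)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask via heuristic chrominance thresholds (illustrative
    ranges; skin classification ignores luma Y, keeping only Cb/Cr)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Discarding Y makes the mask fairly robust to illumination changes, which is the usual motivation for working in chrominance space.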

    3.1.2 Smoothing Threshold

    We applied filters and some manual thresholds to the foreground to make the resultant image accurate. A manual threshold fills the holes, and a region connector joins the small regions. Pixel values above the threshold of 30 are set to 255 and values below 30 to 0.

    These are the steps for detecting the human silhouette from images with two different techniques; in Fig. 2c both techniques have been merged.
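    A minimal sketch of the thresholding and hole-filling steps above, assuming a simple border flood fill as a stand-in for the unspecified region connector:

```python
import numpy as np
from collections import deque

def binarize(gray, thresh=30):
    """Pixels above `thresh` become 255, the rest 0 (the manual threshold from the text)."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

def fill_holes(mask):
    """Fill interior holes of a binary mask (255 = foreground) by flood-filling the
    background from the image border; background not reached is an interior hole."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    q = deque()
    for r in range(h):               # seed the flood fill from all border pixels
        for c in (0, w - 1):
            if mask[r, c] == 0 and not outside[r, c]:
                outside[r, c] = True
                q.append((r, c))
    for c in range(w):
        for r in (0, h - 1):
            if mask[r, c] == 0 and not outside[r, c]:
                outside[r, c] = True
                q.append((r, c))
    while q:                         # 4-connected BFS over background pixels
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] == 0 and not outside[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    filled = mask.copy()
    filled[~outside] = 255           # holes = background not connected to the border
    return filled
```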

    3.2 Human Detection

    The segmentation of the human body leads to the extraction of active regions between the human and the object. We extract the centroid, obtain the boundary, and highlight the boundary points and eight peak points [16].

    3.2.1 Centroid

    After detecting a single human body in an image, spatial feature detection is performed and the centroid of the human body, which is the torso point, is found (Fig. 3a). This helps in detecting the human posture, leading towards finding the action.

    Figure 3: Detection of human body (a) Centroid (b) Boundary (c) Boundary points (d) peak points

    3.2.2 Boundary

    We performed boundary extraction algorithms to find the boundary and highlight spatial points on it (Fig. 3c). This helps estimate the human body posture.

    3.2.3 Peak Points

    After boundary extraction, we find the peak points on the boundary (Fig. 3d) to estimate the human posture.
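    The centroid, boundary, and peak-point extraction described above can be sketched as follows; the four axis-extreme points below are a simplified stand-in for the eight peak points mentioned in the text.

```python
import numpy as np

def silhouette_features(mask):
    """Centroid, boundary pixels, and peak points of a binary silhouette (nonzero = body)."""
    ys, xs = np.nonzero(mask)
    centroid = (xs.mean(), ys.mean())                  # torso-point approximation
    # boundary: foreground pixels with at least one 4-connected background neighbour
    padded = np.pad(mask > 0, 1)
    interior = (padded[1:-1, :-2] & padded[1:-1, 2:] &
                padded[:-2, 1:-1] & padded[2:, 1:-1])
    boundary = (mask > 0) & ~interior
    # peak points: extreme silhouette pixels in four directions (a simplified
    # stand-in for the paper's eight peak points)
    peaks = {
        "top":    (int(xs[ys.argmin()]), int(ys.min())),
        "bottom": (int(xs[ys.argmax()]), int(ys.max())),
        "left":   (int(xs.min()), int(ys[xs.argmin()])),
        "right":  (int(xs.max()), int(ys[xs.argmax()])),
    }
    return centroid, boundary, peaks
```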

    3.3 Object Segmentation

    The segmentation of objects leads to active regions between humans and objects [17]. We detected the objects using YOLO. Image classification along with localization on each grid cell is performed to predict bounding boxes and object probabilities, from which geometrical features are extracted.

    In object segmentation, to extract the spatial features, we extract four extreme points and four region features: length, width, centroid, and area. These extracted points and features are used in further processing, i.e., the perimeter and area of the bounding rectangle (see Fig. 4). To measure the distance between the x and y extreme points, we use the Euclidean distance as

    Figure 4: Detection of objects by using K-mean and geometrical features extraction

    where ||d|| represents the Euclidean distance between two points, A_x and A'_x represent the x-coordinates, while B_y and B'_y represent the y-coordinates.
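    The extreme-point and geometric-feature extraction above, together with the Euclidean distance, can be sketched as follows; the feature names are ours, chosen for illustration.

```python
import numpy as np

def object_geometry(mask):
    """Geometric features of a segmented object: centroid, length, width, area,
    and the four extreme points used as spatial features."""
    ys, xs = np.nonzero(mask)
    return {
        "centroid": (xs.mean(), ys.mean()),
        "length":   int(ys.max() - ys.min() + 1),
        "width":    int(xs.max() - xs.min() + 1),
        "area":     int(len(xs)),                       # pixels actually covered
        "left":     (int(xs.min()), int(ys[xs.argmin()])),
        "right":    (int(xs.max()), int(ys[xs.argmax()])),
        "top":      (int(xs[ys.argmin()]), int(ys.min())),
        "bottom":   (int(xs[ys.argmax()]), int(ys.max())),
    }

def euclidean(p, q):
    """||d|| between two points, as in the distance equation of this section."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))
```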

    3.4 Human Body Parts Detection

    The skeleton model has been used to detect the human silhouette, and circles have been drawn on the human body using k-means clustering. The GMM has been used for ellipse fitting on human body poses, using a fixed number K of ellipses for performance prediction. The GMM ellipse fitting algorithm draws the K ellipses E for the best coverage α(E) of the targeted region.

    We use a skeleton tree by implementing a compact representation of all the skeleton branches, and we draw all the possible circles using the tangent from the central branch. The compact, lossless representation uses the medial axis transform (MAT) to denote each centroid and radius, which represent a maximal inscribed circle and its coloring. V and W form the graph G = (V, W), where V is the set of endpoint nodes and W the set of skeleton-segment edges; each segment is part of L_i ∈ S, with S ∈ {1, ..., |W|} denoting the number of edges.

    where P_ij is the j-th bin of the histogram of the i-th node. The term log|S| represents the overall information of the skeleton, as shown in Fig. 5.

    Figure 5: Estimation of skeleton for pose estimation by using skeleton model.(a) 2D image, (b) Medial Axis Transform and (c) Centroid of a circle that is tangent from the boundary
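    The medial axis transform used above can be approximated as follows. This sketch uses a two-pass city-block distance transform and a crude ridge heuristic rather than an exact Euclidean MAT, so it is an illustration of the idea (circle centres with their radii), not the paper's implementation.

```python
import numpy as np

def chamfer_distance(mask):
    """Two-pass city-block distance transform: distance of each foreground
    pixel to the nearest background pixel."""
    h, w = mask.shape
    INF = h + w
    d = np.where(mask > 0, INF, 0).astype(np.int64)
    for r in range(h):               # forward pass (top-left to bottom-right)
        for c in range(w):
            if r > 0:
                d[r, c] = min(d[r, c], d[r - 1, c] + 1)
            if c > 0:
                d[r, c] = min(d[r, c], d[r, c - 1] + 1)
    for r in range(h - 1, -1, -1):   # backward pass (bottom-right to top-left)
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                d[r, c] = min(d[r, c], d[r + 1, c] + 1)
            if c < w - 1:
                d[r, c] = min(d[r, c], d[r, c + 1] + 1)
    return d

def approximate_medial_axis(mask):
    """Approximate the medial axis as ridge pixels of the distance transform:
    centres of maximal inscribed circles, with d giving each circle's radius."""
    d = chamfer_distance(mask)
    padded = np.pad(d, 1)
    ridge = ((d >= padded[1:-1, :-2]) & (d >= padded[1:-1, 2:]) |
             (d >= padded[:-2, 1:-1]) & (d >= padded[2:, 1:-1]))
    return (mask > 0) & ridge & (d > 0), d
```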

    3.5 Object Detection

    We used two approaches, Fuzzy C-Means and random forest, for super-pixel and object segmentation, respectively.

    3.5.1 Clustering

    First, the cluster centroids for each object class are randomly initialized. Feature vectors determine the dimensions of the centroids, and Euclidean distance is used to assign each object to a cluster [18]. Pixels are assigned to the cluster with the minimum distance from its centroid. These clusters are assigned to object classes, and the object mean is recalculated to check the difference from the previous value; the process continues until the value is stable. Fig. 6 illustrates the clustering. The feature vectors of the 6 classes of PAMI'09 and the 8 activities of the UIUC dataset are divided into 6 and 8 clusters, respectively, obtained in a few primary steps. To calculate the area A of a rectangle, we use the extreme connected points and connect them to form a rectangle for object segmentation. For this rectangle, the area A is calculated as:

    Figure 6: Similar-pixels clustering for object detection

    where (Q − P) is one side of the rectangle and (m − n) is the other side; A symbolizes the area of the rectangle.
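    The clustering step described above can be sketched as plain k-means over feature vectors; the random initialization and stopping rule below are generic assumptions rather than the paper's exact procedure.

```python
import numpy as np

def kmeans(features, k, n_iter=50, seed=0):
    """Plain k-means: cluster N feature vectors (e.g., per-pixel colours) into k
    groups, assigning each vector to its nearest centroid by Euclidean distance."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assignment step: nearest centroid per feature vector
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: recompute each centroid as the mean of its members
        new = np.array([features[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):   # stop once the centroids are stable
            break
        centroids = new
    return labels, centroids
```

With k set to 6 for PAMI'09 or 8 for UIUC, the same routine yields one cluster per class, matching the description above.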

    3.5.2 Features Computation Using HOG

    We performed segmentation on the images using edge detection, preserved the edges, and used HOG to improve the accuracy.
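    A bare-bones HOG computation, shown for illustration without the block normalisation used in full HOG pipelines:

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients: per-cell histograms of gradient
    orientation (0-180 degrees, unsigned), weighted by gradient magnitude."""
    gray = gray.astype(np.float64)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]        # central differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    h, w = gray.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist[i, j] = np.bincount(bin_idx[sl].ravel(),
                                     weights=mag[sl].ravel(), minlength=bins)
    return hist.ravel()
```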

    3.6 Features of Human Pose Using GMM

    To extract features, we draw ellipses on the human body using a fixed number of ellipses [19]. GMM is used to compute the parameters and the best coverage for the ellipses. We do this with a two-step method: drawing the ellipses using k-means, and fixing the maximum number K.

    3.6.1 Ellipses Using K-Means

    The central skeleton is the medial axis, which is used for drawing tangents on the continuously changing boundary. A 16-bin histogram is used for each circle, and the radius of each circle is computed. The circular shape is defined using MAT-based histograms.

    3.6.2 Fixing the Maximum Number K

    A GMM is used to draw the ellipsoids on the different body poses with a fixed number K of ellipses E per image and to predict the performed activity accordingly [20]. The GMM-EM algorithm computes the parameters for a fixed number K of ellipses E that achieve the best coverage α(E) in two steps. Ellipses are derived using the GMM ellipse fitting model. We calculate the parameters of the ellipses E, fixing their number up to K in Algorithm 1, for the best coverage of the silhouette region.

    Algorithm 1: Ellipse Fitting Algorithm (EFA)
    Input: EHS: Extracted Human Silhouettes; binary image I
    Output: Set of ellipses E with the lowest IC
    [S, R] = ShapeSkeleton(I)
    C = ShapeComplexity(S, R)
    CC = InitializeEllipse(S, R)
    K = 1; IC* = ∞
    Repeat
        SCC = SelectHypothesis(K, CC)
        E = GMM-EM(I, SCC, K)
        IC = ComputeIC(I, E, C)
        ICmin = C · log(1 − 0.99) + 2K
        if IC < IC* then
            IC* = IC
            E* = E
        end if
        K = K + 1
    Until K == 12

    P ∈ F × G is the probability of pixels belonging to the ellipse E_i in the model. C_i is the origin of E_i, and M_i is a positive-definite 2×2 matrix representing the eccentric orientation of E_i. The Gaussian amplitude is set to A_i = 1, so the probability P_i(p) on the boundary is the same for all ellipses; the likelihood that a point belongs to an ellipse E_i does not depend on the ellipse's size. We verified this on both datasets by fixing the number of ellipses through the value K = 12. This threshold on K is applied only to obtain a fixed number of ellipses, as represented in Fig. 7.

    Figure 7: Detection of human body parts by using skeleton model and GMM (a) Ellipsoids from the circles on same centroids by covering all possible pixels (b) Sub-regions of GMM and (c) Limit the number of ellipses k = 12
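    The GMM-EM ellipse fitting above can be sketched as generic EM over foreground pixel coordinates, where each fitted component's mean and covariance define one body-part ellipse. The farthest-point initialisation and the ellipse parameterisation (axis lengths as 2σ from the covariance eigendecomposition) are our assumptions for illustration, not the paper's exact solver.

```python
import numpy as np

def fit_gmm_ellipses(points, k, n_iter=100, seed=0):
    """EM for a 2-D Gaussian mixture over pixel coordinates; each component's
    mean/covariance defines one ellipse (centre, axis lengths, orientation)."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    n = len(points)
    # farthest-point initialisation so components start spread out
    means = [points[rng.integers(n)]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - m, axis=1) for m in means], axis=0)
        means.append(points[d.argmax()])
    means = np.array(means)
    covs = np.stack([np.cov(points.T) + np.eye(2)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = np.zeros((n, k))
        for j in range(k):
            diff = points - means[j]
            quad = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(covs[j]), diff)
            resp[:, j] = weights[j] * np.exp(-0.5 * quad) / (
                2 * np.pi * np.sqrt(np.linalg.det(covs[j])))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ points) / nk[:, None]
        for j in range(k):
            diff = points - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(2)
    # turn each component into an ellipse
    ellipses = []
    for j in range(k):
        vals, vecs = np.linalg.eigh(covs[j])
        ellipses.append({"center": means[j],
                         "axes": 2.0 * np.sqrt(vals),          # minor, major
                         "angle": float(np.arctan2(vecs[1, 1], vecs[0, 1]))})
    return ellipses
```

Run with k up to 12, this mirrors the fixed-K ellipse budget used in Algorithm 1.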

    3.7 Human Object Co-Occurrence

    We apply a 3×3 convolutional layer to produce the two-channel interaction vector map V. At inference, we extract the four possible location points for the human body center based on the interaction points and interaction vectors.

    During training, the interaction points (IP) and the related human body and object centroids have a fixed geometric structure. At the inference stage, the generated interaction points need to be grouped with the object detection results (bounding boxes of human body and object). The interaction point p, the human center h, and the object center o imply a condition on the model: h ≈ p + v and o ≈ p − v.

    Fig. 8 illustrates interaction-point grouping. It has three different inputs: the human-body/object bounding boxes (green and red), the interaction points (red points) extracted from the interaction heat maps, and the interaction vectors (IV, red arrows) at the locations of the interaction points (IP). The four corners of the interaction boxes are obtained from the given interaction points and the unsigned interaction vectors, as shown in Eqs. (9)-(11).

    Figure 8: 3×3 convolutional layer is used to examine human-object interaction

    Fig. 8 represents the procedure for finding the interaction between human and object. The three inputs are the human-body/object bounding boxes from the object detection branch, the interaction points, and the interaction vector predicted by the interaction vector branch [21]. The current human-body/object bounding boxes and their interaction points are regarded as true-positive human-object pairs.

    Here, H_box and O_box are the boxes of humans and objects obtained from human and object detection. i_box is the interaction box, generated by combining the interaction points and the corresponding interaction vectors. d_tl, d_tr, d_bl, and d_br are four vectors giving the corner offsets between the interaction box i_box and the reference box r_box. d_τ is the vector-length threshold used to filter out negative human-object interaction pairs. The interaction grouping scheme is presented in Algorithm 2.

    To predict the interaction vectors, we compare the ground-truth point heat maps P with the predicted heat maps P̂ of all interaction points, each splatted with a Gaussian kernel. We use a modified focal loss, as proposed in prior work, to balance the positive and negative samples, where N_p represents the number of interaction points IP in the image and α and β are hyper-parameters controlling the contribution of each point. For interaction-vector map prediction, we use the unsigned vector V'_k = (|V_x|_k, |V_y|_k) at each interaction point p as the ground truth. An L1 loss is then applied at the corresponding interaction points, where V_p,k represents the predicted vector at that point.

    Algorithm 2: Co-Occurrence Human-Object Interaction (HOI)
    Input: Human/object detections H, O; interaction points and vectors P, V;
           human, object, and interaction thresholds Hτ, Oτ, Iτ
    Output: Final human-object interaction box and score Sf
    // Interaction points p make up the point set P
    // Interaction vectors v make up the vector set V
    for Hbox ∈ H, Obox ∈ O, p ∈ P do
        if Hscore > Hτ and Oscore > Oτ and pscore > pτ then
            // generate interaction box ibox
            // calculate reference box rbox from Hbox and Obox
            if Hbox, Obox, ibox, rbox satisfy condition (2) then
                Sf ← Hscore · Oscore · pscore   // output the current HOI score
            end if
        end if
    end for
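    The grouping idea behind Algorithm 2 can be rephrased as the following sketch, pairing detections whose centres match p + v and p − v; the threshold values below are illustrative assumptions.

```python
import numpy as np

def group_interactions(humans, objects, points, vectors, scores,
                       h_tau=0.4, o_tau=0.1, p_tau=0.5, d_tau=8.0):
    """Pair detected humans and objects through interaction points: a point p with
    vector v predicts the human centre near p + v and the object centre near p - v.
    Boxes are (x1, y1, x2, y2); scores is a list of (h_score, o_score, p_score)."""
    results = []
    for p, v, (hs, os_, ps) in zip(points, vectors, scores):
        if hs <= h_tau or os_ <= o_tau or ps <= p_tau:
            continue                     # filter low-confidence detections
        pred_h = p + v                   # expected human centre
        pred_o = p - v                   # expected object centre
        for hbox in humans:
            hc = np.array([(hbox[0] + hbox[2]) / 2, (hbox[1] + hbox[3]) / 2])
            for obox in objects:
                oc = np.array([(obox[0] + obox[2]) / 2, (obox[1] + obox[3]) / 2])
                # accept the pair when both predicted centres match within d_tau
                if (np.linalg.norm(pred_h - hc) < d_tau and
                        np.linalg.norm(pred_o - oc) < d_tau):
                    results.append((tuple(hbox), tuple(obox), hs * os_ * ps))
    return results
```

The triple loop makes the O(N_h · N_o · N_i) complexity of the grouping stage explicit.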

    Meanwhile, we use the predicted point heat map P and the ground-truth heat map P', both defined with a Gaussian kernel.

    In Eq. (14), N_p represents the number of interaction points, and α and β are the parameters that control the contribution of every point. For the interaction, we use the vector maps V, taking the value of the interaction vector at the interaction point IP as the ground truth. The interaction vector is v_i = (|v_x|_i, |v_y|_i), and the L1 loss is applied to all the corresponding interaction points.

    where V_p,i represents the interaction vector V at the interaction point IP. The loss function is:

    Here, λ_m is a weight for all vector loss terms; we simply set λ_m = 0.1 in all the experiments.
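    The two losses described above can be sketched as follows, assuming the CornerNet-style penalty-reduced focal loss that the α/β description matches; this is an illustrative reading, not the paper's verbatim formulation.

```python
import numpy as np

def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0):
    """Penalty-reduced focal loss over interaction-point heat maps;
    gt is Gaussian-splatted with peaks equal to 1."""
    eps = 1e-12
    pos = gt == 1.0                                        # exact peak locations
    pos_loss = ((1 - pred[pos]) ** alpha) * np.log(pred[pos] + eps)
    neg_loss = ((1 - gt[~pos]) ** beta) * (pred[~pos] ** alpha) * \
               np.log(1 - pred[~pos] + eps)
    n_p = max(pos.sum(), 1)                                # number of interaction points
    return -(pos_loss.sum() + neg_loss.sum()) / n_p

def vector_l1_loss(pred_v, gt_v, peak_mask, lam=0.1):
    """L1 loss on the unsigned interaction-vector maps, evaluated only at the
    interaction points and weighted by lam (0.1 as in the text)."""
    diff = np.abs(pred_v - gt_v)[peak_mask]
    return lam * diff.sum() / max(peak_mask.sum(), 1)
```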

    4 Experiments and Results

    This section is organized into five sub-sections. First, the two benchmark datasets are described in detail. Second, the results evaluation is discussed. Third, human pose estimation is discussed. Fourth, the estimation of human-object interaction is explained. Fifth, our proposed work is compared with other state-of-the-art deep learning techniques.

    4.1 Datasets Description

    To evaluate the performance of the proposed system, we used image-based benchmark datasets, namely the PAMI'09 sports dataset and the UIUC sports dataset, which contain a vast range of backgrounds and a variety of sports. These datasets are divided into training and testing sets for the experiments. They have been used to detect human bodies and objects and to find the interactions between them. Both datasets are further classified into classes of different sports and activities to recognize the different outdoor and indoor activities.

    4.1.1 PAMI’09 Dataset

    The PAMI'09 dataset contains six classes with 480 images in total and few annotations [22]. Each class has 80 images: 30 for training, 30 for ground truth, and 20 for testing. Each picture is annotated with 12 ellipsoids.

    4.1.2 UIUC Dataset

    The UIUC sports dataset consists of eight sports activities, with 100-240 images per class. The dataset comprises 2000 images, mainly of sportsmen and sportswomen [23].

    4.2 Results Evaluations

    For efficient results, the dataset was fed to the Gaussian mixture models in batches of classes. To minimize reconstruction errors, we set the number of training samples according to cross-validation.

    4.2.1 Experiment I: Human Pose Estimation

    The accuracy of human pose estimation is measured using the Euclidean distance from the dataset's ground truth [22], as given in Eq. (15).

    where the ground truth of the dataset, X, is the position of the human body parts. D' is the threshold, set to 12, used to measure the accuracy between the ground truth and our model.
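    The accuracy measure of Eq. (15) can be sketched as a PCK-style fraction of predictions falling within the D' = 12 threshold:

```python
import numpy as np

def keypoint_accuracy(pred, gt, d_thresh=12.0):
    """Fraction of predicted body-part points within Euclidean distance
    d_thresh of the ground truth (threshold D' = 12 as in the text)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    dists = np.linalg.norm(pred - gt, axis=1)
    return float((dists <= d_thresh).mean())
```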

    In Tab. 1, columns 2 and 4 report the distances from the dataset's ground truth, whereas columns 3 and 5 show the human body-part recognition accuracies over the PAMI'09 and UIUC sports datasets, respectively.

    Table 1: Human body key-point detection accuracy

    Tabs.2 and 3 represent the mean accuracy of both the datasets respectively.

    Table 2: Mean recognition accuracy of PAMI’09 sports dataset

    Table 3: Mean recognition accuracy of UIUC sports dataset

    4.2.2 Experiment II: HOI

    For human-object interaction (HOI) detection and prediction, we use an Hourglass network as the pre-trained feature extractor. We randomly initialize the network branches that generate the interaction points and vectors. During training, we resize the input images to a resolution of 512×512. Standard data augmentation techniques are employed, and an Adam optimizer is used to optimize the loss function. In the testing phase, we apply flip augmentation to obtain the final detections and predictions. Moreover, we use a batch size of 30 and a learning rate of 2.5.

    For the detection branch, we follow previously proposed HOI estimation methods and employ the Faster R-CNN method with a ResNet-50-FPN backbone, pre-trained on the UIUC training dataset. To acquire the bounding boxes at inference, we set the score threshold to greater than 0.4 for the human and 0.1 for the object. The interaction box is then generated from our interaction points and vectors; generating the interactions takes about 7 s. Our interaction grouping has complexity O(N_h · N_o · N_i), where N_h, N_o, and N_i are the numbers of humans, objects, and interaction points, respectively. In testing, our grouping scheme is time-efficient and takes less than 2 s (<20% of the total time).

    4.2.3 Experiment III: Classification of HOI

    Following the standard evaluation and testing protocol of [24], results are reported as role mean average precision (mAP_role). Under mAP_role, an HOI triplet is counted as a true positive if and only if both bounding boxes have an intersection-over-union (IoU) of at least 0.5 with the ground truth [25] and the linked interaction class is correctly classified. First, we compare our proposed technique with other state-of-the-art techniques in the literature. Tab. 4 presents the comparison on the PAMI'09 and UIUC datasets. The existing approaches utilize human and object features in multi-stream architectures.

    Table 4: State-of-the-art comparison (in terms of mAP_role) on the PAMI'09 and UIUC datasets; our approach combines HOI and interaction grouping (IG), reaching an mAP_role of 53.6

    The work denoted in Tab. 4 as DCA introduces an interactive network for non-interaction suppression and reports an mAP_role of 48.3. Our technique achieves state-of-the-art performance compared with existing techniques, with an mAP_role of 53.4. Fig. 9 shows that our results improve further (mAP_role of 53.6) when we first pre-train our model on the PAMI'09 and UIUC datasets and then fine-tune it on both.

    Figure 9: Some results by using Human Object Interaction (HOI) model

    4.2.4 Experiment IV: Qualitative Analysis of our Proposed System

    Finally, the classification and recognition of human life-log activities are performed in this phase. Tab. 5 shows the accuracy of the different classes as a confusion matrix for the PAMI'09 dataset, with a mean accuracy of 90.0%. This shows the significant improvement and better results of the proposed methodology.

    Table 5: Confusion matrix table on PAMI’09 sports dataset

    After that, the classification and recognition of human activities are performed over the UIUC sports dataset. Tab. 6 shows the accuracy of the different classes as a confusion matrix for the UIUC sports dataset, with a mean accuracy of 87.71%, which shows the significant improvement and better results of the proposed methodology.

    Table 6: Confusion matrix table on UIUC sports dataset

    5 Conclusion

    We proposed a novel approach to estimate HOI in images. Our approach treats HOI estimation as a fundamental research problem in which we perform pose estimation using a skeleton model and GMM. We then detect objects by combining the features of k-means clustering and YOLO. Moreover, we generate interaction points and interaction vectors using key-point detection and pair them with the human and object via their bounding boxes, grouping interactions with the HOI interaction grouping method. Through reference boxes and reference vectors, we estimate the interaction. Our experiments were performed on two HOI benchmark sports datasets, PAMI'09 and UIUC. Our approach outperforms state-of-the-art methods on both datasets with accuracies of 90.0% and 87.71%, respectively.

    In the future, we will extend the interaction-vector concept by using multiple vectors from the interaction point to the human body and object to improve our model's results. We also aim to apply this model in other applications and on indoor HOI datasets.

    Funding Statement:This research work was supported by Priority Research Centers Program through NRF funded by MEST (2018R1A6A1A03024003) and the Grand Information Technology Research Center support program IITP-2020-2020-0-01612 supervised by the IITP by MSIT, Korea.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.

国产成人av激情在线播放| 他把我摸到了高潮在线观看| 高清在线国产一区| 亚洲视频免费观看视频| 久久影院123| av视频免费观看在线观看| 高清av免费在线| 国产精品久久久av美女十八| 中出人妻视频一区二区| 巨乳人妻的诱惑在线观看| 国产亚洲欧美98| 日韩欧美国产一区二区入口| 日本一区二区免费在线视频| 嫩草影视91久久| 99在线视频只有这里精品首页| 热re99久久精品国产66热6| 日韩高清综合在线| 天堂俺去俺来也www色官网| 18美女黄网站色大片免费观看| 成年版毛片免费区| 老鸭窝网址在线观看| 欧美激情久久久久久爽电影 | 老司机午夜福利在线观看视频| 最好的美女福利视频网| 欧美丝袜亚洲另类 | 亚洲av日韩精品久久久久久密| av国产精品久久久久影院| 日本精品一区二区三区蜜桃| 免费在线观看日本一区| av有码第一页| 亚洲午夜理论影院| 淫秽高清视频在线观看| 日日爽夜夜爽网站| 午夜福利免费观看在线| 国产熟女xx| 99精国产麻豆久久婷婷| 搡老熟女国产l中国老女人| 久久久精品欧美日韩精品| 免费一级毛片在线播放高清视频 | 亚洲国产欧美一区二区综合| 国产精品一区二区精品视频观看| 又大又爽又粗| 超碰97精品在线观看| 成年人黄色毛片网站| 999久久久精品免费观看国产| 午夜免费激情av| 黄网站色视频无遮挡免费观看| 高清毛片免费观看视频网站 | 大型av网站在线播放| 亚洲国产精品合色在线| 一级,二级,三级黄色视频| 亚洲精品一卡2卡三卡4卡5卡| 夜夜爽天天搞| 9热在线视频观看99| 大型av网站在线播放| 美女国产高潮福利片在线看| 18美女黄网站色大片免费观看| 侵犯人妻中文字幕一二三四区| 1024视频免费在线观看| 日本黄色视频三级网站网址| 久9热在线精品视频| 女人被躁到高潮嗷嗷叫费观| 精品无人区乱码1区二区| 久久久久亚洲av毛片大全| 日本vs欧美在线观看视频| 国产亚洲精品第一综合不卡| 婷婷丁香在线五月| 国产免费av片在线观看野外av| 丰满人妻熟妇乱又伦精品不卡| 国产欧美日韩一区二区三| 制服人妻中文乱码| 99在线人妻在线中文字幕| 一级毛片精品| 99精国产麻豆久久婷婷| 色婷婷久久久亚洲欧美| 18美女黄网站色大片免费观看| 在线观看免费视频日本深夜| 波多野结衣一区麻豆| 国产精品综合久久久久久久免费 | 美女扒开内裤让男人捅视频| 午夜免费观看网址| 脱女人内裤的视频| 天天添夜夜摸| 久久久久国产精品人妻aⅴ院| 免费人成视频x8x8入口观看| 久久国产精品影院| 免费av毛片视频| 国产片内射在线| 免费在线观看完整版高清| 人人妻人人添人人爽欧美一区卜| 视频区欧美日本亚洲| 欧美乱码精品一区二区三区| 亚洲一码二码三码区别大吗| 国产亚洲av高清不卡| 欧美成狂野欧美在线观看| 少妇粗大呻吟视频| 美女 人体艺术 gogo| 久久久久亚洲av毛片大全| 极品人妻少妇av视频| 亚洲美女黄片视频| 夫妻午夜视频| 日本精品一区二区三区蜜桃| 色综合婷婷激情| 欧美日韩乱码在线| 一级a爱视频在线免费观看| 超色免费av| 身体一侧抽搐| 97超级碰碰碰精品色视频在线观看| 亚洲国产精品999在线| 亚洲国产中文字幕在线视频| 国产91精品成人一区二区三区| 夜夜躁狠狠躁天天躁| 十八禁网站免费在线| 精品高清国产在线一区| 狠狠狠狠99中文字幕| 亚洲专区国产一区二区| 久9热在线精品视频| 神马国产精品三级电影在线观看 | 国产一卡二卡三卡精品| 啦啦啦在线免费观看视频4| 久久久精品欧美日韩精品| 超碰成人久久| 18禁裸乳无遮挡免费网站照片 | 国产精品野战在线观看 | 一边摸一边做爽爽视频免费| 日韩有码中文字幕| 美女大奶头视频| 国内久久婷婷六月综合欲色啪| 深夜精品福利| netflix在线观看网站| 欧美成人性av电影在线观看| 久久午夜亚洲精品久久| 老熟妇仑乱视频hdxx| 亚洲免费av在线视频| av在线播放免费不卡| av天堂久久9| 国产一区二区在线av高清观看| 香蕉久久夜色| 18美女黄网站色大片免费观看| 亚洲一区二区三区色噜噜 | 久久婷婷成人综合色麻豆| 久久这里只有精品19| 99re在线观看精品视频| 黄片播放在线免费| 成在线人永久免费视频| 九色亚洲精品在线播放| 99香蕉大伊视频| 色在线成人网| 正在播放国产对白刺激| 亚洲av日韩精品久久久久久密| 免费在线观看视频国产中文字幕亚洲| a级毛片在线看网站| 麻豆久久精品国产亚洲av | 亚洲激情在线av| 日日夜夜操网爽| 免费观看人在逋| 少妇 在线观看| 欧美人与性动交α欧美精品济南到| 
亚洲av成人不卡在线观看播放网| 自拍欧美九色日韩亚洲蝌蚪91| 久久中文看片网| 国产1区2区3区精品| 免费女性裸体啪啪无遮挡网站| 又紧又爽又黄一区二区| 黄色片一级片一级黄色片| 999久久久国产精品视频| 久久久国产成人免费| 天天躁夜夜躁狠狠躁躁| 高清av免费在线| 欧美黑人精品巨大| 黑人猛操日本美女一级片| 国产蜜桃级精品一区二区三区| 精品国产乱子伦一区二区三区| 亚洲九九香蕉| 午夜两性在线视频| aaaaa片日本免费| 精品人妻在线不人妻| 免费在线观看日本一区| 日本a在线网址| 亚洲 欧美 日韩 在线 免费| e午夜精品久久久久久久| 国产蜜桃级精品一区二区三区| 一级毛片精品| 免费人成视频x8x8入口观看| 午夜视频精品福利| av网站在线播放免费| 99国产精品免费福利视频| 国产熟女xx| 丝袜人妻中文字幕| 午夜免费成人在线视频| 亚洲欧洲精品一区二区精品久久久| 美女高潮到喷水免费观看| 岛国在线观看网站| tocl精华| 国产xxxxx性猛交| 高潮久久久久久久久久久不卡| 精品国产乱码久久久久久男人| 亚洲人成网站在线播放欧美日韩| 日本撒尿小便嘘嘘汇集6| 男女做爰动态图高潮gif福利片 | 亚洲午夜精品一区,二区,三区| 波多野结衣一区麻豆| 搡老岳熟女国产| 午夜免费鲁丝| 日日干狠狠操夜夜爽| 色哟哟哟哟哟哟| 久久久久国产一级毛片高清牌| 午夜福利,免费看| 日韩免费高清中文字幕av| 男人的好看免费观看在线视频 | 国内久久婷婷六月综合欲色啪| 国产亚洲欧美在线一区二区| 九色亚洲精品在线播放| 精品国产一区二区久久| 他把我摸到了高潮在线观看| 欧美日韩国产mv在线观看视频| 这个男人来自地球电影免费观看| 日本五十路高清| 制服人妻中文乱码| 日韩欧美免费精品| 久久精品91蜜桃| 国产成人欧美在线观看| 99国产精品免费福利视频| 热99国产精品久久久久久7| 国产欧美日韩一区二区精品| 亚洲国产看品久久| 看片在线看免费视频| 亚洲五月天丁香| 亚洲伊人色综图| 国产精品一区二区在线不卡| 日韩欧美免费精品| 身体一侧抽搐| 校园春色视频在线观看| 成人免费观看视频高清| 亚洲五月色婷婷综合| 两人在一起打扑克的视频| 在线免费观看的www视频| 欧美乱色亚洲激情| 成人黄色视频免费在线看| 国产又色又爽无遮挡免费看| 在线观看免费高清a一片| 午夜福利在线观看吧| 女人高潮潮喷娇喘18禁视频| 人人澡人人妻人| 最近最新中文字幕大全免费视频| 日韩免费av在线播放| www.熟女人妻精品国产| 叶爱在线成人免费视频播放| 亚洲一区二区三区色噜噜 | 欧美日本亚洲视频在线播放| 91老司机精品| 欧美在线一区亚洲| 亚洲av成人不卡在线观看播放网| 最新美女视频免费是黄的| 中文字幕人妻丝袜制服| 色哟哟哟哟哟哟| 亚洲国产中文字幕在线视频| 国产精品爽爽va在线观看网站 | 淫妇啪啪啪对白视频| 亚洲一码二码三码区别大吗| 嫩草影视91久久| 最近最新中文字幕大全电影3 | 一区福利在线观看| 在线国产一区二区在线| 久久精品亚洲av国产电影网| 99re在线观看精品视频| 中文字幕高清在线视频| 一区二区三区精品91| 精品国产一区二区三区四区第35| 久久国产亚洲av麻豆专区| 又大又爽又粗| 在线观看一区二区三区激情| 日日爽夜夜爽网站| 丝袜人妻中文字幕| 国产精品久久久av美女十八| 久久国产亚洲av麻豆专区| 高清毛片免费观看视频网站 | 日本撒尿小便嘘嘘汇集6| 亚洲成av片中文字幕在线观看| 手机成人av网站| av片东京热男人的天堂| 久久精品影院6| 久热爱精品视频在线9| 最新在线观看一区二区三区| 成年人免费黄色播放视频| 亚洲欧美日韩另类电影网站| 可以免费在线观看a视频的电影网站| 久久香蕉国产精品| 久久香蕉精品热| 成熟少妇高潮喷水视频| 久久久精品欧美日韩精品| 老汉色∧v一级毛片| www.熟女人妻精品国产| 男女之事视频高清在线观看| 伊人久久大香线蕉亚洲五| 9色porny在线观看|