
    Human Pose Estimation and Object Interaction for Sports Behaviour

    Computers, Materials & Continua, 2022, Issue 7

    Ayesha Arif, Yazeed Yasin Ghadi, Mohammed Alarfaj, Ahmad Jalal, Shaharyar Kamal and Dong-Seong Kim

    1Department of Computer Science, Air University, Islamabad, 44000, Pakistan

    2Department of Computer Science and Software Engineering, Al Ain University, Al Ain, 15551, UAE

    3Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia

    4Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Korea

    Abstract: In the new era of technology, monitoring daily human activities has become more challenging because of complex scenes and backgrounds. To understand scenes and activities from human life logs, human-object interaction (HOI) is important for visual relationship detection and human pose estimation. This paper addresses activity understanding and interaction recognition between humans and objects, together with pose estimation and interaction modeling. Existing algorithms and feature-extraction procedures struggle with the accurate detection of rare human postures and occluded regions, and with unsatisfactory detection of objects, especially small-sized ones. Existing HOI detection techniques are instance-centric (object-based): interaction is predicted between all human-object pairs, and the estimation depends on appearance features and spatial information. We therefore propose a novel approach demonstrating that appearance features alone are not sufficient to predict HOI. We detect human body parts using a Gaussian Mixture Model (GMM), followed by object detection using YOLO. We predict interaction points that directly classify the interaction and pair them with densely predicted HOI vectors using an interaction algorithm. The interactions are then linked with the human and the object to predict actions. Experiments on two benchmark HOI datasets demonstrate the proposed approach.

    Keywords: Human object interaction; human pose estimation; object detection; sports estimation; sports prediction

    1 Introduction

    In the digital era, technology is the most significant tool for easing daily human life. Artificial Intelligence (AI) is a vast field of technology used in research on expert systems and computer vision. Automated systems have progressed significantly over the last couple of decades toward computerizing human tasks in several applications [1]. Human-object interaction is a broad domain with many complexities for artificially intelligent systems. Moreover, in a recent study, psychophysicists found that understanding an image or video in a single glimpse is not easy even for humans [2]. Social events fall into different categories, and each category presents different circumstances; this variety of events requires different kinds of classification among human, object, scene, and background. Much research has therefore been done on humans and objects for event understanding. To detect the human, previous authors first locate the person, separate them from the background, and estimate their pose; they then apply object detection and classification techniques.

    Event classification and human-object interaction are used in many applications such as surveillance systems, railway platforms, airports, and seaports, where detecting normal and abnormal events in real-time data is critical [3]. However, there are massive challenges in improving the accuracy of human-object interaction for sports and for security agencies that need to identify daily activities [4], office work, gym activities, smart homes and smart hospitals, and activities in educational institutions. All smart, activity-based systems need to understand the event and take decisions to arrange activities in a well-organized manner.

    In this article, we propose a unique method for HOI recognition using object detection via a Gaussian Mixture Model (GMM) and human pose estimation (HPE). We developed a hybrid approach for pre-processing images, including salient maps, skin detection, HSV plus RGB detection, and extraction of geometrical features. We designed a system that combines the GMM and k-means to detect the human skeleton, draw ellipsoids over human body parts, and detect objects using k-means together with YOLO. A combination of scikit-learn and HOG was used for classification and activity recognition. We used two publicly available benchmark datasets for our model and fully validated our results against other state-of-the-art models. The proposed technique processes the data in four stages: eliminating unwanted components from the image, extracting hierarchical features, detecting objects, and classification. Applied to the publicly available PAMI'09 and UIUC Sports datasets, the proposed methodology obtains a significant improvement in activity recognition rate over other state-of-the-art techniques.

    The rest of the paper is organized as follows. Section 2 reviews related work, Section 3 presents the architecture of the proposed model, Section 4 describes the performance evaluation of the proposed work, and Section 5 concludes the paper and provides some future directions.

    2 Related Works

    This section reviews prior work related to human-object interaction over sports datasets, covering human pose estimation and action recognition.

    2.1 Human Pose Estimation

    Various studies have used different approaches to improve scene labeling, since mislabeling causes false recognition [5]. The authors of [5] designed object recognition and representation methods that compare overall pixels to assess the status of the image and then match kernels to identify objects properly. By combining two different techniques, the MRG technique and a segmentation tree, [6] captures contextual relationships with respect to edge detection by following connected components. In [7], the adopted approach uses depth maps for CRF modeling and builds a scene-understanding system for low-brightness images with simple backgrounds. Classification is done on the basis of kernel features [8]. Using depth images, [9] performs 3D object detection and localization along with foreground segmentation of RGB-D images.

    2.2 Action Recognition via Inertial Sensor

    In [10], the authors extended a semantic-manifold model by manually combining local contextual relationships with semantic relationships to classify events. Many researchers have approached human-object interaction detection through object detection and human-body detection. In such approaches, human and object attention maps are constructed using contextual appearance features and local encoding; however, only detection of the object takes place instead of identification, and the additional neural-network inference makes the method complex in time and computationally expensive [11]. Similarly, the hierarchical segmentation proposed by [12] performs contour detection and boundary detection on RGB images. After segmentation, a histogram of oriented gradients (HOG) is combined with a deformable part model (DPM) for object detection.

    3 Material and Methods

    The proposed system applies pre-processing, segmentation, and object detection as initial steps on the input images. After object detection, the human is detected using salient maps, and the pose of the human body is estimated using the GMM [13]. After detecting the human and the object, we compute geometrical features from the segmented images, including the object centroid, length, width, and acquired area. We take the extreme points of the object (extreme left, extreme right, topmost, and bottom-most) relative to its centroid. Next, we apply naive Bayes to combine the features from both techniques. From the human body, we detect skeleton features and full-body features from the ellipsoids. We then optimize the reduced features and find human-object co-occurrence. Finally, the last step is the classification of the event. An overview of the proposed system is shown in Fig. 1.

    Figure 1: The system architecture of the proposed model of Human Object Interaction (HOI)

    3.1 Preprocessing of the Data

    First of all, we need segmented noticeable regions to detect the human-object interaction. For this, we perform foreground extraction [14] and smart image resizing. We use the salient-maps method to extract salient regions and detect salient objects, as shown in Fig. 2. Low-rank and LSDM models extract the saliency maps and also capture tree-structured sparsity with the norm. For efficient results, we partition each image into non-overlapping patches: the input image is divided into N patches {P_i}, i = 1, ..., N.

    Figure 2: Steps of human silhouette detection.(a) Result of skin tone detection (b) result of salient region detection and (c) result of smoothing filters and bounding box

    We extract D-dimensional features for each patch P_i and represent them as a vector F_i ∈ R^D. The feature matrix F = {f_1, f_2, f_3, ..., f_N} ∈ R^(D×N) is extracted from the matrix representation of the input image.

    Therefore, we have designed an algorithm to decompose the feature matrix as F = L + S, where L holds the redundant information and S the structured salient map, and Ω(S) is the structured-sparsity-inducing norm on S. To regularize and preserve structure, we use the relevant and latent structures and their relationships.
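The decomposition above can be sketched with a simple alternating scheme. This is only an illustrative low-rank-plus-sparse split: the paper's structured-sparsity norm Ω(S) is replaced here by a plain l1 penalty, and the function names and λ value are our own assumptions.

```python
import numpy as np

def shrink(M, tau):
    # Soft-thresholding (proximal operator of the l1 norm).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose_low_rank_sparse(F, lam=0.1, iters=50):
    """Split a patch-feature matrix F into a low-rank part L (redundant
    background information) and a sparse part S (salient structure), F ≈ L + S."""
    L = np.zeros_like(F)
    S = np.zeros_like(F)
    for _ in range(iters):
        # Low-rank update: singular-value thresholding of F - S.
        U, sig, Vt = np.linalg.svd(F - S, full_matrices=False)
        L = (U * np.maximum(sig - lam, 0.0)) @ Vt
        # Sparse update: entrywise soft threshold of the residual.
        S = shrink(F - L, lam)
    return L, S
```

By construction the residual F − L − S is bounded entrywise by λ after the final sparse update, so the split reconstructs the feature matrix up to the shrinkage tolerance.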

    3.1.1 Foreground Extraction

    To enhance the silhouette extracted via saliency maps, we perform segmentation through skin-tone detection using a color-space transformation approach [15] with heuristic thresholds. We extract skin-tone regions using the YCbCr model. The threshold values are specified as R = 0.299, G = 0.287, and B = 0.11.

    Through heuristic thresholds on the transformed color space, enhanced regions are extracted. For chrominance segmentation, we precisely separate the skin regions from the non-skin regions.
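As an illustration of the skin-tone step, the following sketch converts RGB to YCbCr with the standard BT.601 matrix and thresholds the chrominance channels. The Cb/Cr ranges shown are commonly used illustrative values, not the paper's exact thresholds.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to YCbCr (ITU-R BT.601 coefficients)."""
    rgb = rgb.astype(np.float64)
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask from heuristic chrominance thresholds (illustrative values)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Thresholding only the chrominance planes makes the mask fairly robust to illumination, since brightness is carried by the luma channel Y.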

    3.1.2 Smoothing Threshold

    We applied smoothing filters and manual thresholds to the foreground to make the resultant image accurate: a manual threshold fills the holes, and a region connector joins small regions. Pixel values above the threshold of 30 are set to 255 and values below 30 to 0.

    These are the human-silhouette detection steps using the two different techniques; Fig. 2c shows the two techniques merged.
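The smoothing threshold itself is a simple binarization; a minimal sketch using the stated cut-off of 30:

```python
import numpy as np

def binarize(gray, t=30):
    # Pixels above the threshold become 255 (foreground), the rest 0.
    return np.where(gray > t, 255, 0).astype(np.uint8)
```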

    3.2 Human Detection

    The segmentation of the human body leads to the extraction of active regions between human and object.We extract the centroid, obtain the boundary, and highlight the boundary points and eight peak points [16].

    3.2.1 Centroid

    After detecting a single human body in an image, spatial feature detection is performed and the centroid of the human body, the torso point, is found (Fig. 3a). This helps in detecting the human posture, leading toward finding the action.

    Figure 3: Detection of human body (a) Centroid (b) Boundary (c) Boundary points (d) peak points
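The torso-point computation reduces to the centroid of the binary silhouette mask; a minimal sketch (the function name is our own):

```python
import numpy as np

def silhouette_centroid(mask):
    """Centroid (row, col) of a binary human-silhouette mask.

    The centroid of the silhouette approximates the torso point used
    downstream for posture estimation."""
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())
```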

    3.2.2 Boundary

    We apply boundary-extraction algorithms to find the boundary and highlight spatial points on it (Fig. 3c). This leads to estimating the human body posture.

    3.2.3 Peak Points

    After boundary extraction, we find the peak points on the boundary (Fig. 3d) to estimate the human posture.

    3.3 Object Segmentation

    The segmentation of objects leads to active regions between humans and objects [17]. We detect the objects using YOLO: image classification with localization is performed on each grid cell to predict bounding boxes and object probabilities, from which geometrical features are extracted.

    For spatial features in object segmentation, we extract four extreme points along with four features: the length, width, centroid, and area of the region. These points and features are used in further processing, i.e., the perimeter and area of the enclosing rectangle (see Fig. 4). To measure the distance between the x and y extreme points, we use the Euclidean distance

    Figure 4: Detection of objects by using K-mean and geometrical features extraction

    ‖d‖ = √((Ax − A′x)² + (By − B′y)²)

    where ‖d‖ is the Euclidean distance between the two points, Ax and A′x are the x-coordinates, and By and B′y are the y-coordinates.
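The extreme points, geometric features, and Euclidean distance can be sketched as follows; the exact feature layout beyond the quantities named in the text is an assumption.

```python
import numpy as np

def object_geometry(mask):
    """Extreme points and basic geometric features of a binary object mask."""
    ys, xs = np.nonzero(mask)
    return {
        "left":     (int(ys[xs.argmin()]), int(xs.min())),   # extreme left point
        "right":    (int(ys[xs.argmax()]), int(xs.max())),   # extreme right point
        "top":      (int(ys.min()), int(xs[ys.argmin()])),   # topmost point
        "bottom":   (int(ys.max()), int(xs[ys.argmax()])),   # bottom-most point
        "centroid": (float(ys.mean()), float(xs.mean())),
        "length":   int(ys.max() - ys.min() + 1),
        "width":    int(xs.max() - xs.min() + 1),
        "area":     int(mask.sum()),
    }

def euclidean(p, q):
    # ||d|| between two (row, col) points.
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))
```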

    3.4 Human Body Parts Detection

    A skeleton model is used to detect the human silhouette, and k-means clustering is used to draw circles on the human body. The GMM is used for ellipse fitting on human body poses with a fixed number K of ellipses for performance prediction. The GMM ellipse-fitting algorithm draws the K ellipses E that best cover the targeted region α(E).

    We use a skeleton tree as a compact representation of all skeleton branches and draw all possible circles using tangents from the central branch. This compact, lossless representation uses the medial axis transform (MAT) to denote each centroid and radius, which represent a maximal inscribed circle and its coloring. V and W are the nodes and edges of the graph G = (V, W), where V is the set of endpoint nodes and W the set of skeleton segments; each segment is part of L_i ∈ S, with S ∈ {1, ..., |W|} denoting the number of edges.

    where P_ij is the jth histogram bin of the ith node. The term log|S| represents the overall information of the skeleton, as shown in Fig. 5.

    Figure 5: Estimation of skeleton for pose estimation by using skeleton model.(a) 2D image, (b) Medial Axis Transform and (c) Centroid of a circle that is tangent from the boundary

    3.5 Object Detection

    We used two approaches, Fuzzy C-Means and Random Forest, for super-pixel segmentation and object segmentation, respectively.

    3.5.1 Clustering

    First, cluster centroids for each object class are randomly initialized. Feature vectors determine the dimensions of the centroids, and Euclidean distance is used to assign each object to a cluster [18]. Pixels are assigned to the cluster whose centroid is at minimum distance; the clusters are assigned to object classes, and each cluster mean is recomputed and compared with its previous value until the values become constant. Fig. 6 illustrates the clustering. The feature vectors of the 6 classes of PAMI'09 and the 8 activities of the UIUC dataset are divided into 6 and 8 clusters, respectively, obtained through these primary steps. To compute the area of a rectangle, we take the extreme connected points and join them to form the rectangle used for object segmentation. For this rectangle, the area is calculated as:

    Figure 6: Similar-pixels clustering for object detection

    where (Q − P) is one side of the rectangle and (m − n) is the other; the perimeter uses one side of each, and the area is the product of the two sides, (Q − P)(m − n).
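The clustering step described above can be sketched as plain k-means with Euclidean assignment; the initialization details here are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each feature vector to the nearest centroid,
    then recompute centroids until they stop moving."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance of every point to every centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # converged: means stopped changing
            break
        centroids = new
    return labels, centroids
```

With k = 6 for PAMI'09 or k = 8 for UIUC, each cluster then corresponds to one object/activity class.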

    3.5.2 Features Computation Using HOG

    We segmented the images using edge detection, preserved the edges, and used HOG features to improve the accuracy.

    3.6 Features of Human Pose Using GMM

    For feature extraction, we draw ellipses on the human body using a fixed number of ellipses [19]. The GMM computes the parameters and the best coverage for the ellipses. We do this in two steps: drawing the ellipses using k-means and fixing the maximum number K.

    3.6.1 Ellipses Using K-Means

    The central skeleton branch is the medial axis used for drawing tangents to the boundary, and it changes continuously. A 16-bin histogram is used for each circle, and the radius of each circle is computed; the circular shape is defined using MAT-based histograms.

    3.6.2 Fixing the Maximum Number K

    A GMM is used to draw the ellipsoids on different body poses with a fixed number K of ellipses E per image and to predict the performed activity accordingly [20]. The GMM-EM algorithm computes the parameters of the K ellipses E that achieve the best coverage α(E) in two steps. The ellipses are developed using the GMM ellipse-fitting model; we calculate the parameters of the ellipses E, fixing their number at K as in Algorithm 1, for the best coverage of the silhouette region.

    Algorithm 1: Ellipse Fitting Algorithm (EFA)
    Input: EHS: extracted human silhouettes; binary image I
    Output: set of ellipses E with the lowest IC
    [S, R] = ShapeSkeleton(I)
    C = ShapeComplexity(S, R)
    CC = InitializeEllipse(S, R)
    K = 1; IC* = ∞
    Repeat
        SCC = SelectHypothesis(K, CC)
        E = GMM-EM(I, SCC, K)
        IC = ComputeIC(I, E, C)
        ICmin = C · log(1 − 0.99) + 2K
        if IC < IC* then
            IC* = IC; E* = E
        end if
        K = K + 1
    Until K == 12

    P ∈ F × G is the probability of a pixel belonging to ellipse E_i in the model. C_i is the origin of E_i, and M_i is a positive-definite 2×2 matrix representing the eccentric orientation of E_i. The Gaussian amplitude is set to A_i = 1, so the probability P_i(p) on the boundary is the same for all ellipses; whether a point belongs to an ellipse E_i does not depend on the size of the ellipse. We verified this on both datasets by fixing the number of ellipses, setting K to a maximum of 12; this threshold on K is applied only to obtain a fixed number of ellipses, as represented in Fig. 7.

    Figure 7: Detection of human body parts by using skeleton model and GMM (a) Ellipsoids from the circles on same centroids by covering all possible pixels (b) Sub-regions of GMM and (c) Limit the number of ellipses k = 12
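Given a fitted Gaussian component, the corresponding body-part ellipse follows from an eigendecomposition of its 2×2 covariance M_i; a sketch (the n_std scaling factor is our assumption, since the text does not specify how component spread maps to ellipse size):

```python
import numpy as np

def covariance_to_ellipse(mean, cov, n_std=2.0):
    """Ellipse parameters (center, semi-axes, orientation) from one 2-D
    Gaussian component, as used to draw a body-part ellipsoid."""
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = vals.argsort()[::-1]                # reorder: major axis first
    vals, vecs = vals[order], vecs[:, order]
    semi_axes = n_std * np.sqrt(vals)           # major, minor semi-axis lengths
    angle = np.arctan2(vecs[1, 0], vecs[0, 0])  # orientation of the major axis
    return np.asarray(mean), semi_axes, angle
```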

    3.7 Human Object Co-Occurrence

    We apply a 3×3 convolutional layer to produce the interaction vector V of size 2. At inference, we extract the four possible location points for the human body center based on the interaction points and interaction vectors.

    During training, the interaction points (IP) and the related human-body and object centroids have fixed geometric structure. At the inference stage, the generated interaction points (IP) must be grouped with the object-detection results (bounding boxes of human body and object). The interaction point p, the human center h, and the object center o satisfy the model's condition: h ≈ p + v and o ≈ p − v.

    Fig. 8 illustrates the interaction-point grouping. It has three different inputs: the human-body/object bounding boxes (green and red), the interaction points (red points) extracted from the interaction heat maps, and the interaction vectors (IV, red arrows) at the locations of the interaction points (IP). The four corners of the interaction boxes are obtained from the given interaction points and the unsigned interaction vectors, as shown in Eqs. (9)-(11).

    Figure 8: 3×3 convolutional layer is used to examine human-object interaction

    Fig. 8 represents the procedure for finding the interaction between human and object. The three inputs are the human-body/object bounding boxes from the object-detection branch, the interaction points, and the interaction vectors predicted by the interaction-vector branch [21]. The current human-body/object bounding boxes and their interaction points are regarded as true-positive human-object pairs.

    Here, Hbox and Obox are the human and object boxes obtained from the human and object detection. ibox is the interaction box, generated by combining the interaction points with the corresponding interaction vectors. dtl, dtr, dbl, and dbr are the four corner-distance vectors between the interaction box ibox and the reference box rbox. dτ is the vector-length threshold set for filtering out negative human-object interaction pairs. The interaction grouping scheme is presented in Algorithm 2.

    To predict the interaction points, we compare the ground-truth point heat maps P with the predicted heat maps P̂ of all interaction points; all of these points are smoothed with a Gaussian kernel. We use a modified focal loss, proposed in prior work, to balance the positive and negative samples, where Np represents the number of interaction points IP in the image and α and β are hyper-parameters controlling the contribution of each point. For the interaction-vector maps V, we use the unsigned vector V′_k = (|Vx|_k, |Vy|_k) at each interaction point p and index k as the ground truth. The L1 loss is then applied at the corresponding interaction points, where |Vp|_k represents the vector predicted by the loss function l.

    Algorithm 2: Co-Occurrence Human Object Interaction (HOI)
    Input: human/object detections H, O; interaction points and vectors P, V;
           human, object and interaction thresholds Hτ, Oτ, Iτ
    Output: final human-object interaction box If
    // each interaction point p belongs to the set P; each vector v to the set V
    for Hbox ∈ H, Obox ∈ O, p ∈ P do
        if Hscore > Hτ and Oscore > Oτ and pscore > Iτ then
            // build the interaction box ibox
            // compute the reference box rbox from Hbox and Obox
            if Hbox, Obox, ibox, rbox satisfy condition 2 then
                Sf ← Hscore · Oscore · pscore   // output the current HOI score Sf
            end if
        end if
    end for
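Algorithm 2's grouping condition (h ≈ p + v, o ≈ p − v, filtered by the threshold dτ) can be sketched as a brute-force search over all humans, objects, and points, matching the stated O(Nh·No·Ni) complexity; detection-score filtering is omitted for brevity.

```python
import numpy as np

def group_interactions(humans, objects, points, vectors, d_tau=10.0):
    """Pair detected humans and objects through predicted interaction
    points/vectors: keep a (human, object, point) triplet when the human
    centre is near p + v and the object centre is near p - v."""
    triplets = []
    for p, v in zip(points, vectors):
        for hi, h in enumerate(humans):
            for oi, o in enumerate(objects):
                if (np.linalg.norm(h - (p + v)) < d_tau and
                        np.linalg.norm(o - (p - v)) < d_tau):
                    triplets.append((hi, oi, tuple(p)))
    return triplets
```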

    Meanwhile, we use the predicted point heat map P and the ground-truth heat map P′, both defined with a Gaussian kernel.

    In Eq. (14), Np represents the number of interaction points, and α and β are the parameters that control the contribution of every point. For the interaction vectors, we use the vector maps V, taking the value of the interaction vector at each interaction point IP as the ground truth. The interaction vector is v_i = (|vx|_i, |vy|_i), and the L1 loss is applied at all the corresponding interaction points.

    where Vp_i represents the interaction vector V at the interaction point IP. The loss function is:

    Here, the weight applied to all vector-loss terms is simply set to 0.1 for all the experiments.
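The two losses can be sketched as follows. The focal-loss form used here is the common CornerNet-style variant and is an assumption, since the text does not give the exact equation; the vector loss uses the stated weight of 0.1.

```python
import numpy as np

def point_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """Modified focal loss over an interaction-point heat map, balancing
    the few positive locations against the many negatives (assumed form)."""
    pos = gt == 1.0
    n_pos = max(int(pos.sum()), 1)
    # Positive locations: down-weight easy, well-predicted points.
    pos_term = ((1 - pred[pos]) ** alpha * np.log(pred[pos] + eps)).sum()
    # Negative locations: down-weight points near a ground-truth peak.
    neg_term = ((1 - gt[~pos]) ** beta * pred[~pos] ** alpha *
                np.log(1 - pred[~pos] + eps)).sum()
    return -(pos_term + neg_term) / n_pos

def vector_l1_loss(pred_v, gt_v, weight=0.1):
    """Weighted L1 loss between predicted and ground-truth interaction
    vectors at the interaction points (weight 0.1, as in the text)."""
    return weight * np.abs(pred_v - gt_v).sum()
```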

    4 Experiments and Results

    This section is organized into five sub-sections.First, two benchmark datasets are described in detail.Second, results evaluation is discussed.Third, human pose estimation is discussed.Fourth,estimation of human-object interaction is explained, and fifth, our proposed work is compared with other state-of-the-art advanced deep learning techniques.

    4.1 Datasets Description

    To evaluate the performance of the proposed system, we used two image-based benchmark datasets, the PAMI'09 sports dataset and the UIUC sports dataset, which contain a vast range of backgrounds and a variety of sports. The datasets are divided into training and testing sets for the experiments. They are used to detect human bodies and objects and to find the interactions between them, and they are further divided into classes of different sports and activities to recognize the different outdoor and indoor activities.

    4.1.1 PAMI’09 Dataset

    The PAMI'09 dataset contains 480 images across six classes, with ellipse annotations [22]. Each class has 80 images: 30 for training, 30 for ground truth, and 20 for testing. Each image is annotated with 12 ellipsoids.

    4.1.2 UIUC Dataset

    The UIUC sports dataset consists of eight sports activities with 100-240 images per class. The dataset comprises 2000 images, mainly of sportsmen and sportswomen [23].

    4.2 Results Evaluations

    For efficient results, the dataset is provided to the Gaussian mixture models in batches of classes. To minimize reconstruction error, we set the number of training samples according to cross-validation.

    4.2.1 Experiment I: Human Pose Estimation

    The accuracy of human pose estimation is measured using the Euclidean distance from the dataset ground truth [22], as given in Eq. (15),

    where X is the ground-truth position of the human body parts in the dataset and D′ is the threshold, set to 12, used to measure the accuracy between the ground truth and our model.
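Eq. (15)'s accuracy measure can be sketched as the fraction of predicted key points whose Euclidean distance to the ground truth falls within the threshold (a PCK-style metric; the function name is our own):

```python
import numpy as np

def keypoint_accuracy(pred, gt, d_thresh=12.0):
    """Fraction of predicted body-part points within d_thresh pixels
    (Euclidean distance) of the dataset ground truth."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float((dists <= d_thresh).mean())
```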

    In Tab. 1, columns 2 and 4 show the distances from the dataset ground truth, whereas columns 3 and 5 show the human-body-part recognition accuracies over the PAMI'09 and UIUC sports datasets, respectively.

    Table 1: Human body key-point detection accuracy

    Tabs.2 and 3 represent the mean accuracy of both the datasets respectively.

    Table 2: Mean recognition accuracy of PAMI’09 sports dataset

    Table 3: Mean recognition accuracy of UIUC sports dataset

    4.2.2 Experiment II: HOI

    For human-object interaction (HOI) detection and prediction, we use an Hourglass network as the pre-trained feature extractor. We randomly initialize the branches that generate the interaction points and vectors. During training, input images are resized to 512×512. Standard data-augmentation techniques are employed, and the Adam optimizer is used to optimize the loss function. During testing, we apply flip augmentation to obtain the final detections and predictions. Moreover, we use a batch size of 30 and a learning rate of 2.5.

    For the detection branch, we follow previously proposed HOI estimation methods and employ Faster R-CNN with ResNet-50-FPN, pre-trained on the UIUC training set. To acquire the bounding boxes at inference, we set the score threshold above 0.4 for the human and 0.1 for the object. The interaction box is then generated from our interaction points and vectors; generating the interactions takes about 7 s. The interaction grouping has complexity O(Nh No Ni), where Nh, No, and Ni are the numbers of humans, objects, and interaction points, respectively. In testing, our grouping scheme is time-efficient, taking less than 2 s (<20% of the total time).

    4.2.3 Experiment III: Classification of HOI

    Following the standard evaluation and testing protocol of [24], results are reported as role mean average precision (mAProle). Under mAProle, an HOI triplet counts as a true positive if and only if both bounding boxes have an intersection-over-union (IoU) of at least 0.5 with the labeled ground truth [25] and the linked interaction class is correctly classified. We first compare our proposed technique with other state-of-the-art techniques in the literature. Tab. 4 presents the comparison on the PAMI'09 and UIUC datasets. The existing approaches utilize human and object features in multi-stream architectures.
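The IoU test underlying mAProle can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes; an HOI triplet
    counts as a true positive only when both human and object boxes reach
    IoU >= 0.5 against the ground truth."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```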

    Table 4: State-of-the-art comparison (in terms of mAProle) on the PAMI'09 and UIUC datasets; our approach, combining HOI and IG, achieves an mAProle of 53.6

    The work denoted as DCA in Tab. 4 introduces an interactive network to apply non-interaction suppression and reports an mAProle of 48.3. Our technique achieves state-of-the-art performance compared to existing techniques, with an mAProle of 53.4. Fig. 9 shows that our results improve further (mAProle of 53.6) when the model is first pre-trained on the PAMI'09 and UIUC datasets and then fine-tuned on both.

    Figure 9: Some results by using Human Object Interaction (HOI) model

    4.2.4 Experiment IV: Qualitative Analysis of our Proposed System

    Finally, the classification and recognition of human activities are performed in this phase. Tab. 5 shows the per-class accuracy as a confusion matrix for the PAMI'09 dataset, with a mean accuracy of 90.0%. This shows a significant improvement and better results from the proposed methodology.

    Table 5: Confusion matrix table on PAMI’09 sports dataset

    After that, the classification and recognition of human activities are performed over the UIUC sports dataset. Tab. 6 shows the per-class accuracy as a confusion matrix for the UIUC sports dataset, with a mean accuracy of 87.71%, which likewise shows significant improvement over the compared methods.

    Table 6: Confusion matrix table on UIUC sports dataset

    5 Conclusion

    We proposed a novel approach to estimate HOI in images. Our approach treats HOI estimation as a fundamental research problem in which we perform pose estimation using a skeleton model and the GMM. We then detect the object by combining k-means clustering features with YOLO. Moreover, we generate interaction points and interaction vectors via key-point detection and pair them with the human and object through their bounding boxes; the interaction itself is estimated with the HOI interaction-grouping method, using reference boxes and reference vectors. Our experiments are performed on two HOI benchmark sports datasets, PAMI'09 and UIUC. Our approach outperforms state-of-the-art methods on both datasets, with accuracies of 90.0% and 87.71%, respectively.

    In the future, we will extend the interaction vector concept by using multiple vectors from the interaction point to the human body and object to improve the results of our model.We also aim to implement this model in other applications and indoor HOI datasets.

    Funding Statement:This research work was supported by Priority Research Centers Program through NRF funded by MEST (2018R1A6A1A03024003) and the Grand Information Technology Research Center support program IITP-2020-2020-0-01612 supervised by the IITP by MSIT, Korea.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.

亚洲精品国产一区二区精华液| 午夜福利18| 亚洲欧美激情综合另类| 亚洲精品一区av在线观看| 黄色片一级片一级黄色片| 12—13女人毛片做爰片一| 18禁裸乳无遮挡免费网站照片 | 嫩草影视91久久| 国产亚洲精品久久久久5区| 亚洲avbb在线观看| 一边摸一边做爽爽视频免费| 久久香蕉国产精品| 啦啦啦韩国在线观看视频| 久久久久久久久久黄片| 国产成人av激情在线播放| 18禁国产床啪视频网站| 久久久久久久久免费视频了| 亚洲成av片中文字幕在线观看| 国产片内射在线| 一区福利在线观看| 国产精品永久免费网站| 国产欧美日韩一区二区三| √禁漫天堂资源中文www| 精品国产一区二区三区四区第35| 欧美成人免费av一区二区三区| 波多野结衣av一区二区av| 99久久精品国产亚洲精品| 亚洲av成人不卡在线观看播放网| aaaaa片日本免费| 欧美又色又爽又黄视频| 欧美色欧美亚洲另类二区| 午夜福利在线在线| 欧美丝袜亚洲另类 | 国产一级毛片七仙女欲春2 | 无遮挡黄片免费观看| 亚洲成a人片在线一区二区| 人人妻,人人澡人人爽秒播| 久久这里只有精品19| 成人国产一区最新在线观看| 99久久精品国产亚洲精品| 日本五十路高清| 亚洲欧美激情综合另类| 波多野结衣av一区二区av| 女人被狂操c到高潮| 这个男人来自地球电影免费观看| 日本黄色视频三级网站网址| 香蕉国产在线看| 精品福利观看| 国产精品 国内视频| 美女国产高潮福利片在线看| 中亚洲国语对白在线视频| 国产黄片美女视频| 99国产综合亚洲精品| 在线观看舔阴道视频| 欧美性猛交黑人性爽| 久久久久久久久中文| 淫秽高清视频在线观看| 亚洲一区二区三区不卡视频| av视频在线观看入口| 国产精品乱码一区二三区的特点| 久久亚洲真实| 熟女电影av网| 午夜精品久久久久久毛片777| 每晚都被弄得嗷嗷叫到高潮| 欧美日韩福利视频一区二区| 男人舔奶头视频| 日韩一卡2卡3卡4卡2021年| 亚洲av成人不卡在线观看播放网| 首页视频小说图片口味搜索| 久久国产精品男人的天堂亚洲| 婷婷丁香在线五月| 亚洲片人在线观看| 九色国产91popny在线| 国产精品久久久久久精品电影 | 亚洲 国产 在线| 中亚洲国语对白在线视频| 女人爽到高潮嗷嗷叫在线视频| 国产成年人精品一区二区| 老司机午夜十八禁免费视频| 国产午夜福利久久久久久| 大型av网站在线播放| 色综合婷婷激情| 国产精品,欧美在线| 黄色片一级片一级黄色片| www日本在线高清视频| 国产精品av久久久久免费| 香蕉久久夜色| 日本a在线网址| 白带黄色成豆腐渣| 一区二区三区精品91| 身体一侧抽搐| 久久久精品国产亚洲av高清涩受| 天天躁狠狠躁夜夜躁狠狠躁| 国产久久久一区二区三区| 国产一卡二卡三卡精品| 脱女人内裤的视频| av在线天堂中文字幕| 天天一区二区日本电影三级| 午夜福利在线观看吧| 精品国产亚洲在线| 三级毛片av免费| 91成人精品电影| 成人亚洲精品av一区二区| 久久99热这里只有精品18| 久久午夜综合久久蜜桃| 国产精品九九99| 中文资源天堂在线| 成人午夜高清在线视频 | 成年人黄色毛片网站| 18禁观看日本| 精品久久蜜臀av无| 香蕉av资源在线| 精品国产乱子伦一区二区三区| 女生性感内裤真人,穿戴方法视频| 9191精品国产免费久久| cao死你这个sao货| 亚洲成人免费电影在线观看| 一进一出抽搐动态| 日韩成人在线观看一区二区三区| 亚洲第一电影网av| 在线天堂中文资源库| 91在线观看av| 色播亚洲综合网| 日韩中文字幕欧美一区二区| 日本一区二区免费在线视频| 国产精品久久久av美女十八| 一本精品99久久精品77| 在线天堂中文资源库| 欧美乱码精品一区二区三区| 一个人观看的视频www高清免费观看 | 亚洲自偷自拍图片 自拍| 国产一区二区三区在线臀色熟女| 两性午夜刺激爽爽歪歪视频在线观看 | 一区二区三区精品91| 国产精品综合久久久久久久免费| 欧美黄色淫秽网站| 他把我摸到了高潮在线观看| 亚洲精品av麻豆狂野| 国内毛片毛片毛片毛片毛片| 少妇 在线观看| 久久午夜亚洲精品久久| 最好的美女福利视频网| 亚洲国产中文字幕在线视频| 亚洲av成人av| 国产免费男女视频| 高清毛片免费观看视频网站| 此物有八面人人有两片| www.精华液| 亚洲国产欧洲综合997久久, | 亚洲精品粉嫩美女一区| 色综合站精品国产| 欧美丝袜亚洲另类 | 午夜福利高清视频| 久久午夜亚洲精品久久| 男女午夜视频在线观看| 黄色丝袜av网址大全| 黄色视频不卡| 亚洲最大成人中文| 午夜精品在线福利| 
亚洲成人精品中文字幕电影| 中文在线观看免费www的网站 | xxxwww97欧美| 亚洲专区中文字幕在线| 十八禁人妻一区二区| 亚洲国产日韩欧美精品在线观看 | 嫁个100分男人电影在线观看| 丝袜在线中文字幕| 无人区码免费观看不卡| 成人国产一区最新在线观看| 给我免费播放毛片高清在线观看| 日韩国内少妇激情av| 久久天堂一区二区三区四区| 国产单亲对白刺激| www日本黄色视频网| 亚洲天堂国产精品一区在线| 亚洲人成77777在线视频| 成年版毛片免费区| 欧美日韩一级在线毛片| 精品国产一区二区三区四区第35| 精品国产乱子伦一区二区三区| 欧美日韩亚洲综合一区二区三区_| 少妇 在线观看| 国产国语露脸激情在线看| 中文字幕人成人乱码亚洲影| 免费观看精品视频网站| 亚洲七黄色美女视频| 国产99白浆流出| 欧美丝袜亚洲另类 | 美女高潮喷水抽搐中文字幕| 狂野欧美激情性xxxx| 日韩欧美 国产精品| 亚洲av熟女| 一本大道久久a久久精品| www.999成人在线观看| 国产精品永久免费网站| 国产一区在线观看成人免费| 成人手机av| 亚洲专区国产一区二区| 香蕉丝袜av| 国产精品野战在线观看| 午夜福利免费观看在线| 婷婷精品国产亚洲av| 欧美日韩福利视频一区二区| avwww免费| 中文字幕av电影在线播放| 99在线人妻在线中文字幕| 人妻丰满熟妇av一区二区三区| 一本精品99久久精品77| 999久久久国产精品视频| 国产av又大| 亚洲精品在线美女| 一区二区三区精品91| 国产精品亚洲一级av第二区| 亚洲精华国产精华精| а√天堂www在线а√下载| 不卡一级毛片| 精品午夜福利视频在线观看一区| 国产av不卡久久| 岛国在线观看网站| 国产真实乱freesex| 亚洲国产看品久久| 成熟少妇高潮喷水视频| 亚洲全国av大片| 91麻豆精品激情在线观看国产| 欧美另类亚洲清纯唯美| 99re在线观看精品视频| 一卡2卡三卡四卡精品乱码亚洲| 婷婷六月久久综合丁香| 精品第一国产精品| 亚洲av成人av| 日日摸夜夜添夜夜添小说| 色综合站精品国产| 欧美乱码精品一区二区三区| 男人的好看免费观看在线视频 | 久久天堂一区二区三区四区| 香蕉久久夜色| 真人一进一出gif抽搐免费| 国产真人三级小视频在线观看| 啦啦啦韩国在线观看视频| 国产精品久久久久久人妻精品电影| 亚洲 国产 在线| 国产片内射在线| 久久中文字幕一级| 亚洲一区二区三区色噜噜| 老司机午夜十八禁免费视频| 久久人人精品亚洲av| 亚洲免费av在线视频| 色在线成人网| 婷婷丁香在线五月| 国产又爽黄色视频| 午夜成年电影在线免费观看| 久久精品国产清高在天天线| 搡老熟女国产l中国老女人| 成人手机av| 午夜福利欧美成人| 黄色毛片三级朝国网站| 丝袜人妻中文字幕| 国产野战对白在线观看| 人人澡人人妻人| 亚洲人成77777在线视频| 久久久久国产精品人妻aⅴ院| 白带黄色成豆腐渣| 18禁美女被吸乳视频| 曰老女人黄片| 又大又爽又粗| 日韩免费av在线播放| 国产成人av教育| 大型黄色视频在线免费观看| 亚洲成a人片在线一区二区| 国产又色又爽无遮挡免费看| 在线观看免费日韩欧美大片| 天堂影院成人在线观看| 亚洲精品美女久久av网站| 国产亚洲精品综合一区在线观看 | 亚洲av日韩精品久久久久久密| 精品少妇一区二区三区视频日本电影| 亚洲成国产人片在线观看| 国产一区二区在线av高清观看| 一级黄色大片毛片| 免费看日本二区| 国产亚洲欧美在线一区二区| 黄网站色视频无遮挡免费观看| 91麻豆av在线| 淫秽高清视频在线观看| 久久久久久免费高清国产稀缺| 亚洲国产中文字幕在线视频| 丝袜美腿诱惑在线| 老汉色∧v一级毛片| 国产精品永久免费网站| 免费女性裸体啪啪无遮挡网站| 国内少妇人妻偷人精品xxx网站 | 亚洲第一电影网av| 日本免费a在线| 久久精品人妻少妇| xxx96com|