
Robust camera pose estimation by viewpoint classification using deep learning

Yoshikatsu Nakajima¹, Hideo Saito¹

Computational Visual Media, 2017, No. 2

Camera pose estimation with respect to target scenes is an important technology for superimposing virtual information in augmented reality (AR). However, it is difficult to estimate the camera pose for all possible view angles because feature descriptors such as SIFT are not completely invariant to every perspective. We propose a novel method of robust camera pose estimation using multiple feature descriptor databases generated for each partitioned viewpoint, in which the feature descriptor of each keypoint is almost invariant. Our method estimates the viewpoint class for each input image using deep learning based on a set of training images prepared for each viewpoint class. We give two ways to prepare these images for deep learning and generating databases. In the first method, images are generated using a projection matrix to ensure robust learning in a range of environments with changing backgrounds. The second method uses real images to learn a given environment around a planar pattern. Our evaluation results confirm that our approach increases the number of correct matches and the accuracy of camera pose estimation compared to the conventional method.

Keywords: pose estimation; augmented reality (AR); deep learning; convolutional neural network

    1 Introduction

Since the augmented reality (AR) toolkit [1] introduced the superimposition of virtual information onto planar patterns in images by real-time estimation of camera pose, markerless camera-tracking technologies have become mainstream [2, 3]. Markerless tracking needs to find correspondences between the input image and the planar pattern for any camera pose.

Lowe's SIFT [4] is one of the most famous algorithms in computer vision for detecting keypoints and describing local features in images. SIFT detects keypoints using differences of Gaussians to approximate a Laplacian-of-Gaussian filter and describes each of them with a 128-dimensional feature vector. Keypoint correspondences are then obtained using Euclidean distances between feature vectors. Although SIFT is robust to scaling and rotation [5], when the input image is distorted by projection of the planar pattern, we cannot find keypoint correspondences. Randomized trees (RT) [6] alleviate this problem by training a variety of descriptors for each keypoint using affine transformations and generating a tree structure [7] based on the resulting brightness values, for real-time recognition of keypoint identity. Viewpoint generative learning (VGL), developed by Yoshida et al. [8], extends this idea: it trains various descriptors for every keypoint by generating images as if they were taken from various viewpoints using a projection transformation, and builds a database of keypoints and features from these images.
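As a concrete illustration of this matching pipeline, the following sketch detects SIFT keypoints and matches them by Euclidean distance using OpenCV; the file names are placeholders, and cv2.SIFT_create requires OpenCV 4.4 or later.

```python
# A minimal sketch of SIFT detection and Euclidean-distance matching with
# OpenCV; the file names are placeholders for a pattern and an input frame.
import cv2

pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(pattern, None)  # 128-D descriptors
kp2, des2 = sift.detectAndCompute(frame, None)

# Brute-force matching by L2 (Euclidean) distance between descriptors
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "tentative correspondences")
```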

However, methods based on training feature descriptors of keypoints, such as RT and VGL, trade the robustness afforded by the various descriptors against computation time when searching for matched keypoints. For example, VGL compresses its database of training descriptors using k-means clustering [9] for fast search, but this sometimes results in wrong keypoint matching, especially when the camera angle is shallow. Because feature descriptors of keypoints change significantly at a shallow angle, weak compression of the database is required to support such shallow camera angles, but this increases the computation for keypoint search.

In this paper, we propose a novel method for camera pose estimation based on two-stage keypoint matching to solve this trade-off problem. The first stage is viewpoint classification using a convolutional neural network (CNN) [10–12], so that the feature descriptors of every keypoint are similar within the classified viewpoint. The second stage is camera pose estimation based on accurate keypoint matching, achieved within the classified viewpoint using a nearest-neighbor (NN) search over the descriptors. To enable this two-stage camera pose estimation, in pre-processing our method generates uncompressed descriptor databases of a planar pattern for each partitioned viewpoint, including shallow angles, and trains a CNN to classify the viewpoint of the input image.

A CNN can classify stably in the face of variations of a property within the same class by learning from a large amount of data exhibiting those variations for each class. For instance, object recognition that is stable under viewpoint changes can be achieved by learning from many images taken from various viewpoints for each object class [13]. This stability against viewpoint change is not achieved by the structure of the CNN alone, but through the CNN's capacity to learn from variable data for each class. For example, Agrawal et al. [14] applied a CNN to estimate egomotion by constructing a network model with two inputs comprising two images whose viewpoints differ slightly. In this paper, we apply a CNN to viewpoint classification for a single object.

Additional reasons for using a CNN for viewpoint classification are as follows. Firstly, a CNN is robust to occlusion. This is very important, as it widens the range of applications. Secondly, computation time is unchanged as the number of viewpoint classes increases, enabling us to easily analyze the trade-off between accuracy and database size.

We introduce two methods for generating databases and preparing images for deep learning, under the assumption that these methods will be used in different ways. The first is robust across a range of environments and is suited to, e.g., initializing camera pose estimation. The second learns the entire environment around the planar pattern and is used in that learned environment.

The NN search in the second stage is not time-consuming, because little variety remains among the descriptors within the viewpoint classified in the first stage. The camera pose of the input image is then computed from the correspondences between matched keypoints.

    2 Method

Figure 1 shows the flow of our proposed method, which consists of three parts. The first part generates databases of features for every viewpoint class; the classes partition the entire range of viewing angles (−90° < θ < 90°, −180° < φ < 180°) with respect to a target planar pattern, as shown in Fig. 2. The second part trains the CNN to classify the viewpoint of the input image. The last part estimates the camera pose of the input image. We now explain each part in detail.

In particular, during database generation (Section 2.1) and CNN training (Section 2.2), we use two methods to prepare images for database generation and deep learning by the CNN, with the assumption that these methods are to be used in different ways.

Fig. 1 Flow of the proposed method. Top: database generation; middle: deep learning by the CNN; bottom: camera pose estimation.

Fig. 2 Generating databases: viewpoint class, virtual camera, and angle definitions.

The first method uses only one image of the planar pattern and generates many images by means of projection matrices (P matrices). This reduces the learning cost, because it only needs a single image and the P matrices. Moreover, it makes the CNN robust to changes in the background of the input image, because we can vary the backgrounds of the images generated for deep learning. However, viewpoint class estimation will not be extremely accurate, because the CNN uses only the appearance of the planar pattern in the input image, so this first method is not suitable for movies. On the other hand, it copes with shallow angles better than the conventional method, so it is useful for initialization of the camera pose and similar tasks. From now on, we call this method learning based on generated viewpoints.

The second method uses real images, obtained by fixing the planar pattern within the environment and taking pictures with a camera. The CNN can learn not only the appearance of the planar pattern but also the environment around it, including the background, the lighting, and so on. Therefore, the viewpoint class of the input image can be estimated with almost perfect precision, so this method is suitable for movies. However, the CNN can only be used in the environment in which the planar pattern was fixed when the images for deep learning were taken. In contrast to the first method, we call this method learning based on example viewpoints.

    2.1 Database generation

In this part, we generate one feature database per viewpoint class. Each database is generated from one image, because features sampled from a certain viewpoint are almost identical within the viewpoint class, so one image is enough. As mentioned in the introduction, we use two methods for preparing images for database generation. Firstly, we explain the method using one image and P matrices, which is robust in various environments. Secondly, we explain the method using real images taken by a camera, which is more robust in the particular environment in which the pre-processing is performed. The flow is shown in the upper part of Fig. 1.

    2.1.1 Learning based on generated viewpoints

Firstly, we partition the entire range of viewing angles of the camera's viewpoint with respect to a target pattern. We call each partitioned viewpoint a viewpoint class (see Fig. 2). Secondly, using Eq. (1), we compute the projection matrices that transform the frontal image into images which appear to have been taken from the center of each viewpoint class. From now on, we denote the number of viewpoint classes by N, the viewpoint classes by V_i (i = 1, ..., N), and the projection matrix for each viewpoint class V_i by P_i. In Eq. (1), the intrinsic parameters of the virtual camera, the rotation matrix for viewpoint class V_i, and the translation vector are denoted A, R_i, and t, respectively. The matrix R_i is given by Eq. (2), using θ, φ, and ψ defined as in Fig. 2.
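Equations (1) and (2) did not survive extraction, so the following is only a hedged sketch of how P_i might be assembled from A, R_i, and t: the rotation order R_z(ψ)R_x(θ)R_z(φ), the intrinsics A, and the file name are assumptions made for illustration, not the authors' exact formulation.

```python
# Hedged sketch of generating the view I_i for a viewpoint class V_i.
import numpy as np
import cv2

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def projection_P(A, theta, phi, psi, t):
    """P_i mapping the frontal plane (z = 0) into the virtual camera of V_i."""
    R = rot_z(psi) @ rot_x(theta) @ rot_z(phi)  # assumed composition for Eq. (2)
    # For points on the plane z = 0, A [R | t] collapses to the 3x3 matrix
    # built from the first two columns of R and the translation t.
    P = A @ np.column_stack((R[:, 0], R[:, 1], t))
    return P / P[2, 2]

A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_i = projection_P(A, np.deg2rad(60), np.deg2rad(45), 0.0, np.array([0.0, 0.0, 2.0]))
# Illustrative warp; in practice the frontal pixel grid must first be mapped
# to plane coordinates, which is omitted here for brevity.
I_i = cv2.warpPerspective(cv2.imread("pattern.png"), P_i, (640, 480))
```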

Using the projection matrix P_i, we obtain an image I_i for each viewpoint class V_i. Next, we detect keypoints and describe their local features for each image I_i, using the appropriate algorithm. We denote the number of detected keypoints by M_i, each keypoint by p_ij, and each feature by d_ij (j = 1, ..., M_i). Then we compute a homography matrix H_i that transforms the image I_i, which represents the viewpoint class V_i, to the frontal image. We then generate the database in which the described features d_ij and their coordinates p̃_ij in the frontal image are stored. The coordinates p̃_ij are found by transforming the coordinates of each detected keypoint p_ij to the frontal image using the equation p̃_ij = H_i p_ij. By performing this process on all images that represent each viewpoint class, we obtain one uncompressed descriptor database per viewpoint class.
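A minimal sketch of this database-building step, assuming OpenCV's SIFT and the view-to-frontal homography H_i defined above (for the synthetic views, H_i is the inverse of the warp that produced I_i):

```python
# Build one uncompressed database per viewpoint class: detect SIFT features
# in the generated view I_i, then map every keypoint back to frontal-image
# coordinates with H_i.
import numpy as np
import cv2

def build_database(image_i, H_i):
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image_i, None)
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    frontal = cv2.perspectiveTransform(pts, H_i)   # p~_ij = H_i p_ij
    # Store the (frontal coordinate, descriptor) pairs without compression
    return list(zip(frontal.reshape(-1, 2), descriptors))
```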

    2.1.2 Learning based on example viewpoints

For this method, we first use the camera to take images of the planar pattern, fixed in the environment, from multiple viewpoints. Next, for each image I_i, we compute a homography matrix H_i that transforms the image I_i to the frontal image. In this computation, we use four points whose coordinates in the frontal image are easily determined, such as corners. Equation (3) can be used to compute the homography matrix H_i:

λ (x̃, ỹ, 1)^T = H_i (x, y, 1)^T, for some scale λ      (3)

Here, we denote the homogeneous coordinates in the frontal image of the planar pattern by (x̃, ỹ, 1)^T and the coordinates in the taken image by (x, y, 1)^T. Then, we detect keypoints and describe their local features in the image I_i using the appropriate algorithm. We denote the number of detected keypoints by M_i, each keypoint by p_ij, and each feature by d_ij (j = 1, ..., M_i). The keypoint p_ij can be projected into p̃_ij, which represents its coordinates in the frontal image, using p̃_ij = H_i p_ij. Finally, we generate the database for each image; multiple sets of p̃_ij and d_ij are stored. By judging whether the coordinates p̃_ij lie on the planar pattern or not, we can eliminate features belonging to the environment, so that only features belonging to the planar pattern are stored in the database.
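A small sketch of this four-corner computation with OpenCV; the pixel coordinates of the corners are placeholder values chosen for illustration:

```python
# Compute H_i from four corner correspondences, as described above.
import numpy as np
import cv2

# Corners of the pattern located in the taken image I_i ...
corners_taken = np.float32([[412, 108], [958, 164], [901, 688], [365, 611]])
# ... and their known positions in the frontal image of the pattern
corners_frontal = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

# Four correspondences exactly determine the 3x3 homography of Eq. (3)
H_i = cv2.getPerspectiveTransform(corners_taken, corners_frontal)
```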

    2.2 Deep learning by the CNN

We train a CNN for the purpose of classifying the viewpoint of the input image. A CNN is a deep neural network mainly used for object recognition; we apply it to viewpoint classification of a single planar pattern. In this step we use only images, not features, for deep learning, because we employ a CNN that accepts only images as input. As with database generation, we explain the two methods of preparing images for deep learning; note that deep learning should use the same method as was used in the database generation step. This process is illustrated in the middle row of Fig. 1.

    2.2.1 Learning based on generated viewpoints

Firstly, we generate multiple images for each viewpoint class V_i using Eq. (1). Then we randomly change the background of every image and the position and scale of the planar pattern. Using these images for deep learning reduces the weight given to the background, so the CNN can classify the viewpoint robustly. We employ a softmax function as the activation function of the output layer and make its number of units coincide with the number of viewpoint classes; this is the CNN design recommended for classification problems. Finally, we perform deep learning by teaching the CNN the correct viewpoint class for each generated image, using back-propagation [15], pre-training [16], and drop-out [17]. Preparing training images is in general a burden for deep learning, but our method synthesizes them from a single planar pattern, enabling us to reduce the learning cost.
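A minimal training sketch for this classifier follows. The paper trains an NIN model [23] in Chainer [22]; this stand-in uses PyTorch and a deliberately tiny CNN, keeping only the essentials named above: one output unit per viewpoint class, a softmax output (implied here by the cross-entropy loss), and training by back-propagation.

```python
# Tiny stand-in viewpoint classifier (not the authors' NIN/Chainer model).
import torch
import torch.nn as nn

N_CLASSES = 36  # e.g., 4 + 8 + 12 + 12 viewpoint classes (Section 3.1.1)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, N_CLASSES),
)
criterion = nn.CrossEntropyLoss()  # log-softmax + NLL = softmax output layer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(images, labels):
    """One back-propagation step on a batch of labelled training images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random data:
# train_step(torch.randn(8, 3, 64, 64), torch.randint(0, N_CLASSES, (8,)))
```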

    2.2.2 Learning based on example viewpoints

For each image I_i, we take multiple images of the planar pattern for deep learning from the same viewpoint as that used for image I_i. We then vary the scale and the rotation to help ensure that the CNN is robust. Deep learning is performed as in Section 2.2.1, i.e., we employ a softmax function in the output layer, we make its number of units coincide with the number of viewpoint classes, and we teach the CNN the correct viewpoint class for every image.

    2.3 Camera pose estimation

In this section, we explain the details of camera pose estimation given the input image. This process is shown at the bottom of Fig. 1. We detect keypoints and describe their local features in the image using the same algorithm as that used to generate the databases. Next, we input the image to the CNN, which has been tuned by deep learning. Because the activation function of the output layer is a softmax function, the output percentages inform us which viewpoint class the image belongs to (see Fig. 3). We select the viewpoint class with the highest percentage and compare keypoints in the database for that viewpoint class with keypoints in the input image in terms of the Euclidean distance of their feature descriptors. Then we search for the nearest keypoint and the next nearest, as suggested by Mikolajczyk et al. [18], so that we can use their ratio to reduce mismatches between keypoints. A match is declared only when the Euclidean distance to the nearest keypoint is sufficiently smaller than the Euclidean distance to the second nearest. Thus, D_A and D_B are matched only when Eq. (4) is satisfied:

||D_A − D_B|| < t ||D_A − D_C||      (4)

Fig. 3 Viewpoint class estimation using the CNN.

Here, D_A, D_B, and D_C represent a feature descriptor from the input image, the feature descriptor of its nearest keypoint in the database, and the feature descriptor of the second nearest, respectively. If we set the threshold t large, the number of matches increases, but so does the number of mismatches; conversely, if we set the threshold t small, the number of matches decreases, as does the number of mismatches.
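A compact sketch of this ratio test, with the threshold value t = 0.7 chosen arbitrarily for illustration:

```python
# Ratio test of Eq. (4): accept a match only when the nearest database
# descriptor D_B is sufficiently closer to D_A than the second nearest D_C.
import numpy as np

def ratio_test_matches(query_desc, db_desc, t=0.7):
    matches = []
    for a, d_a in enumerate(query_desc):
        dists = np.linalg.norm(db_desc - d_a, axis=1)  # Euclidean distances
        b, c = np.argsort(dists)[:2]                   # nearest, second nearest
        if dists[b] < t * dists[c]:                    # Eq. (4)
            matches.append((a, b))
    return matches
```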

By matching keypoints between the database and the input image, we obtain corresponding points in the input image and the frontal image, since feature descriptors and their coordinates in the frontal image are stored in each database. After mismatches are removed by RANSAC [19], we estimate the camera pose of the input image by computing the homography that transforms the frontal image to the input image from the coordinates of those corresponding points.
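A sketch of this final step using OpenCV's RANSAC homography estimator; `frontal_pts`/`input_pts` stand for the matched coordinate lists and `pattern_size` for the frontal image size in pixels, both assumptions of this illustration:

```python
# RANSAC-filtered homography from matched frontal/input coordinates, then
# re-projection of the four frontal corners (the red lines in Fig. 4).
import numpy as np
import cv2

def estimate_pose(frontal_pts, input_pts, pattern_size):
    src = np.float32(frontal_pts).reshape(-1, 1, 2)
    dst = np.float32(input_pts).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC [19]
    w, h = pattern_size
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return H, cv2.perspectiveTransform(corners, H)
```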

    3 Experimental evaluation

In this section, we demonstrate the validity of our method through experiments. In Section 2, we introduced two methods of preparing images for database generation and deep learning. Because those methods have different uses, we evaluate them with different datasets. We use VGL [8] as the basis for comparison. Conventional methods of camera pose estimation with a CNN are typified by PoseNet, as described by Kendall et al.; however, such methods do not use SIFT-like point-based features, while VGL does use point-based matching. Furthermore, VGL is more robust than other conventional methods that use point-based matching, such as ASIFT [20] and random ferns [21]. Thus, we compare our method to VGL.

    3.1 Experimental setup

The evaluation environment was as follows. CPU: Intel Core i7-4770K, 3.5 GHz; GPU: GeForce GTX 760; RAM: 16 GB. The definition of the viewpoint classes and the datasets differ between the two methods, and are explained separately below. The deep learning framework used in this evaluation was Chainer [22].

    3.1.1 Learning based on generated viewpoints

For this method, we defined the viewpoint classes V_i by splitting the viewpoints for observing the planar pattern as shown in Table 1. As features change more at a shallow angle, we subdivided the viewpoint more finely as the angle θ increased. Thus, the number of viewpoint classes was 4 + 8 + 12 + 12 = 36 in this experiment.

As for ψ, because we use rotation-invariant features such as SIFT, we obtained many keypoint matches between the input image and the database for every value of the camera pose angle ψ of the input image.

Next, we generated images I_i to represent each viewpoint class V_i using Eq. (1), with angles θ and φ at the center of each viewpoint class V_i. We used SIFT to detect keypoints p_ij and describe their local features d_ij. We used network-in-network (NIN) [23] as the CNN architecture. NIN is useful for reducing classification time by reducing the number of parameters while maintaining high accuracy. To tune the parameters of the CNN by deep learning, we generated about three thousand images for each viewpoint class V_i. The background images were prepared by capturing each frame from a movie taken indoors. Furthermore, we randomly changed the radius of the sphere (see Fig. 2) and the angle ψ when we generated the images for deep learning. Doing so allows the trained CNN to estimate the viewpoint class even if the camera distance and the camera orientation of the input image change.
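A sketch of how one such training image might be synthesized, assuming a randomly perturbed matrix `P_i_random` inside class V_i; the compositing scheme below is an illustration, not the authors' exact procedure:

```python
# Warp the pattern with a randomly perturbed P matrix for class V_i and
# paste it over a random background frame.
import numpy as np
import cv2

def synth_training_image(pattern, background, P_i_random):
    h, w = background.shape[:2]
    warped = cv2.warpPerspective(pattern, P_i_random, (w, h))
    # Warp an all-white mask the same way to find where the pattern landed
    mask = cv2.warpPerspective(
        np.full(pattern.shape[:2], 255, np.uint8), P_i_random, (w, h))
    out = background.copy()
    out[mask > 0] = warped[mask > 0]  # composite pattern over background
    return out
```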

    Table 1 Viewpoint class definition

Fig. 4 Viewpoint class and camera pose estimation results using our method and VGL [8], on evaluation images. Left: estimated viewpoint class; center: our method; right: VGL [8].

Fig. 5 Number of keypoint matches.

We prepared 71 images of the planar pattern, including some taken from a shallow angle and some in which the planar pattern was occluded. Using those images, we compared the accuracy of camera pose estimation, the number of correct matches, and the processing time with the corresponding values for VGL. For VGL, we generated a database using the same images as in our method and set the number of clusters to five and the number of stable keypoints to 2000.

    3.1.2 Learning based on example viewpoints

In this method, we took 22 images I_i (N = 22) of a planar pattern from multiple viewpoints after fixing the planar pattern onto a desk. These images define the viewpoint classes V_i (see Fig. 2), so there were 22 viewpoint classes in this experiment. Using those images I_i, we generated 22 feature databases containing the coordinates p̃_ij of all detected keypoints and their local features d_ij. We employed SIFT as the keypoint detector and feature descriptor, and used the coordinates of four corners to compute the H_i used to transform coordinates p_ij to coordinates p̃_ij. We again employed NIN as the network model for the CNN. Next, we generated about 600 images for each viewpoint class V_i by clipping every frame of movies that we took from around the viewpoint of each of the 22 images I_i. By teaching the CNN the correct viewpoint class for every prepared image using deep learning, the CNN became able to estimate the viewpoint class of each input image. Again in this method, we randomly changed the camera distance and the angle ψ when we prepared the images for deep learning, to make the CNN robust to changes in scale and rotation.

For the evaluation experiment, we prepared a movie of the fixed planar pattern, including frames taken from a shallow angle, in the same environment as the one used for database generation and image preparation for deep learning.

In this experiment, we evaluated the estimated camera pose from the re-projection error of the corners of the planar pattern. Denoting the coordinates of the corners observed in the test image by P_k, and the coordinates of the corners re-projected using the estimated homography H by Q_k, the re-projection error E is given by the following equation:

E = (1/4) Σ_{k=1}^{4} ||P_k − Q_k||      (5)

E represents the average Euclidean distance between the ground-truth coordinates and the estimated coordinates of the four corners; a smaller E indicates a more accurate camera pose estimate.
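A direct transcription of Eq. (5) as a small helper, assuming the corners are given as 2-D pixel coordinates:

```python
# Mean Euclidean distance between the hand-labelled corner positions P_k and
# the corners Q_k re-projected by the estimated homography H.
import numpy as np
import cv2

def reprojection_error(H, frontal_corners, observed_corners):
    pts = np.float32(frontal_corners).reshape(-1, 1, 2)
    Q = cv2.perspectiveTransform(pts, H).reshape(-1, 2)   # Q_k
    P = np.float32(observed_corners)                      # P_k
    return float(np.mean(np.linalg.norm(Q - P, axis=1)))  # E of Eq. (5)
```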

To compare VGL with this method, we generated a database with the same 22 images used for our method and set the number of clusters to five and the number of stable keypoints to 2000.

    3.2 Results

    We now describe the results of the experimental evaluation of each method.

    3.2.1 Learning based on generated viewpoints

Figure 4 shows the results of viewpoint class estimation by the CNN, and camera pose estimation using our method and VGL. The left image indicates the viewpoint class estimated by the CNN for the input image, the center image shows the result of camera pose estimation by our method, and the right image shows the result from VGL. We visualize camera pose estimation by re-projecting the coordinates of the four corners of the frontal image using the computed homography and connecting them with red lines. Images without red lines lacked sufficient matches to compute the homography. Figure 5 gives the number of keypoint matches used to compute the homography for each of the 71 images.

As Fig. 4 shows, our method estimated camera pose more robustly at shallow angles than the conventional method. Figure 5 shows that the number of matches was higher for our method than for the conventional method for almost all images. Because our method matches keypoints between the input image and a database generated from an image similar to the input image, matching was more accurate with our method. Although the planar pattern is occluded in some images in Fig. 4, our method estimated the viewpoint class and camera pose accurately. Because deep CNNs give robust results in the presence of occlusion [13], and the uncompressed descriptor databases of the planar pattern are generated for each viewpoint class, our method was robust to occlusion.

Regarding the accuracy of viewpoint classification, 7 of the 71 images were incorrectly classified. However, 4 of these 7 images were classified into an adjacent viewpoint class, so keypoint matching still worked well enough and the camera pose was estimated reasonably precisely. Features in an adjacent viewpoint class are similar to features in the correct one, since its database is generated from an image taken from a neighboring viewpoint. This gives the second, accurate localization step a chance to correct such errors, making the algorithm robust. In contrast, 3 of the 7 images were classified into a completely different viewpoint class, so camera pose estimation failed. Overall, viewpoint class estimation was about 90% accurate, because the CNN uses only the appearance of the planar pattern in the input image. Therefore, this method is not suitable for movies. On the other hand, it copes with shallow angles (see Fig. 4), so it is useful for initialization of the camera pose and similar tasks. Furthermore, when using our method in applications, we can easily combine it with a conventional tracking method; by doing so, we can estimate camera pose continuously while coping with shallow angles.

Next, we consider processing time. Table 2 shows the average processing time over all images for our method and VGL, for each stage of camera pose estimation and in total.

The overhead for viewpoint class estimation in our method is small. Detecting keypoints and describing features using SIFT accounts for most of the processing time. Because our method generates uncompressed descriptor databases, we could easily apply it to a binary algorithm like AKAZE [24]; this would reduce the time spent on detecting keypoints and describing features.

Table 2 Average time spent on each processing stage (unit: ms)

    3.2.2 Learning based on example viewpoints

Figure 6 shows some results of camera pose estimation using our method and VGL, for a movie that we prepared for this evaluation. The camera pose was estimated by the method described in Section 3.2.1. Figure 7 shows the re-projection error computed with Eq. (5) for each frame of the movie; the ground-truth coordinates of the corners were identified manually. Figure 8 shows the number of keypoint matches between the input image and the selected database that were used for computation of the homography.

As shown in Fig. 6, this method also estimated camera pose at shallow angles more robustly than the conventional method. In Fig. 8, the number of matches fluctuates because the database used for keypoint matching was switched by the CNN every few frames. As shown by Figs. 6–8, the accuracy of camera pose estimation using VGL decreased at shallow angles, where the features change drastically, because VGL compresses features using k-means for fast computation. In contrast, our method estimates the camera pose more robustly because a database containing all features, sampled from images similar to the input image, is appropriately selected by the CNN.

Fig. 6 Results of camera pose estimation using our method and VGL [8] on some evaluation frames. Left: our method; right: VGL [8].

Fig. 7 Re-projection errors.

Fig. 8 Number of matches.

With this method, viewpoint class estimation accuracy is almost 100% (see Fig. 7), because the CNN can learn not only the appearance of the planar pattern but also the environment around it: the background, the lighting, and so on. However, the CNN can only be used in the same environment as the one in which the planar pattern was placed and deep learning was performed.

We next discuss processing time. Figure 9 shows the processing frame rate. Again, the overhead for viewpoint class estimation in the proposed method is sufficiently small.

    3.2.3 Number of viewpoint classes

The number of viewpoint classes affects both the accuracy of camera pose estimation and the size of the databases. We therefore generated 100 test images with homographies and evaluated how the number of viewpoint classes affected the results for the method of learning based on generated viewpoints. Table 3 shows the re-projection error calculated by Eq. (5) and the database size as the number of viewpoint classes varies.

The re-projection error decreases as the number of viewpoint classes increases, since splitting the viewpoint more finely brings the input image and the matching image in the database closer together. However, the total database size also grows, as one database is generated per viewpoint class. Thus, accuracy and size must be traded off according to the particular application.

Fig. 9 Frame rate.

Table 3 Re-projection error and database size with respect to the number of viewpoint classes

4 Conclusions

We have proposed a method for robust camera pose estimation using uncompressed descriptor databases generated for each viewpoint class. Our method classifies the viewpoint of each input image using a CNN trained by deep learning, so that keypoints of the input image can be matched almost perfectly against the database. We gave two ways of generating these databases and preparing the images for deep learning; these methods have different applications. The first is robust in changing environments, while the second allows the CNN to learn the entire environment around the planar pattern. We have experimentally confirmed that the number of keypoint matches was higher, and the accuracy of camera pose estimation better, than with a conventional method.

The application of our method to three-dimensional objects is future work.

[1] Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In: Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality, 85–94, 1999.

[2] Lee, T.; Hollerer, T. Hybrid feature tracking and user interaction for markerless augmented reality. In: Proceedings of IEEE Virtual Reality Conference, 145–152, 2008.

[3] Maidi, M.; Preda, M.; Le, V. H. Markerless tracking for mobile augmented reality. In: Proceedings of IEEE International Conference on Signal and Image Processing Applications, 301–306, 2011.

[4] Lowe, D. G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision Vol. 60, No. 2, 91–110, 2004.

[5] Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 27, No. 10, 1615–1630, 2005.

[6] Lepetit, V.; Fua, P. Keypoint recognition using randomized trees. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 28, No. 9, 1465–1479, 2006.

[7] Breiman, L. Random forests. Machine Learning Vol. 45, No. 1, 5–32, 2001.

[8] Yoshida, T.; Saito, H.; Shimizu, M.; Taguchi, A. Stable keypoint recognition using viewpoint generative learning. In: Proceedings of the International Conference on Computer Vision Theory and Applications, Vol. 2, 310–315, 2013.

[9] Hartigan, J. A.; Wong, M. A. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics) Vol. 28, No. 1, 100–108, 1979.

[10] Fukushima, K.; Miyake, S. Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position. Pattern Recognition Vol. 15, No. 6, 455–469, 1982.

[11] Hubel, D. H.; Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology Vol. 160, No. 1, 106–154, 1962.

[12] LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. Backpropagation applied to handwritten zip code recognition. Neural Computation Vol. 1, No. 4, 541–551, 1989.

[13] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; Fei-Fei, L. ImageNet large scale visual recognition challenge. International Journal of Computer Vision Vol. 115, No. 3, 211–252, 2015.

[14] Agrawal, P.; Carreira, J.; Malik, J. Learning to see by moving. In: Proceedings of IEEE International Conference on Computer Vision, 37–45, 2015.

[15] Rumelhart, D. E.; Hinton, G. E.; Williams, R. J. Learning representations by back-propagating errors. Nature Vol. 323, 533–536, 1986.

[16] Hinton, G. E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

[17] Krizhevsky, A.; Sutskever, I.; Hinton, G. E. ImageNet classification with deep convolutional neural networks. In: Proceedings of Advances in Neural Information Processing Systems, 1097–1105, 2012.

[18] Mikolajczyk, K.; Tuytelaars, T.; Schmid, C.; Zisserman, A.; Matas, J.; Schaffalitzky, F.; Kadir, T.; Van Gool, L. A comparison of affine region detectors. International Journal of Computer Vision Vol. 65, No. 1, 43–72, 2005.

[19] Fischler, M. A.; Bolles, R. C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM Vol. 24, No. 6, 381–395, 1981.

[20] Yu, G.; Morel, J.-M. ASIFT: An algorithm for fully affine invariant comparison. Image Processing On Line Vol. 1, 1–28, 2011.

[21] Ozuysal, M.; Calonder, M.; Lepetit, V.; Fua, P. Fast keypoint recognition using random ferns. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 32, No. 3, 448–461, 2009.

[22] Tokui, S.; Oono, K.; Hido, S.; Clayton, J. Chainer: A next-generation open source framework for deep learning. In: Proceedings of Workshop on Machine Learning Systems (LearningSys) in the 29th Annual Conference on Neural Information Processing Systems, 2015.

[23] Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv preprint arXiv:1312.4400, 2013.

[24] Alcantarilla, P. F.; Nuevo, J.; Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In: Proceedings of British Machine Vision Conference, 13.1–13.11, 2013.

Hideo Saito received his Ph.D. degree in electrical engineering from Keio University, Japan, in 1992. Since then, he has been on the Faculty of Science and Technology, Keio University. From 1997 to 1999, he joined the Virtualized Reality Project in the Robotics Institute, Carnegie Mellon University, as a visiting researcher. Since 2006, he has been a full professor in the Department of Information and Computer Science, Keio University. His recent activities for academic conferences include being Program Chair of ACCV 2014, a General Chair of ISMAR 2015, and a Program Chair of ISMAR 2016. His research interests include computer vision and pattern recognition, and their applications to augmented reality, virtual reality, and human-robot interaction.

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

Yoshikatsu Nakajima received his B.E. degree in information and computer science from Keio University, Japan, in 2016. Since 2016, he has been a master student in the Department of Science and Technology at Keio University, Japan. His research interests include augmented reality, SLAM, object recognition, and computer vision.

1 Department of Science and Technology, Keio University, Japan. E-mail: Y. Nakajima, nakajima@hvrl.ics.keio.ac.jp; H. Saito, saito@hvrl.ics.keio.ac.jp.

Manuscript received: 2016-07-25; accepted: 2016-11-13
