
    Technology and Application of Intelligent Driving Based on Visual Perception

2017-10-10 03:29:41 Xinyu Zhang, Hongbo Gao, Guotao Xie, Buyun Gao, and Deyi Li


    Xinyu Zhang, Hongbo Gao*, Guotao Xie, Buyun Gao, and Deyi Li

The camera is one of the key sensors for realizing intelligent driving environment perception. It enables lane detection and tracking, obstacle detection, traffic sign detection and recognition, and visual SLAM. The model, number, and mounting positions of visual sensors differ across intelligent driving hardware platforms, as do the visual information processing modules; consequently, the software modules and interfaces of intelligent driving systems also differ. In this paper, a software architecture for autonomous vehicles based on the driving brain is used to adapt to different types of visual sensors. The target region is extracted by an image segmentation algorithm, and the region of interest is then segmented a second time. Based on the computed input features, obstacles are searched for within the secondary segmentation region, and the accessible road area is output. As long as the driving information remains complete, adding or removing visual sensors, or changing their model or mounting position, no longer directly affects intelligent driving decisions; in this way, multi-vision sensing is adapted to the requirements of different intelligent driving hardware test platforms.

Keywords: driving brain; intelligent driving; visual perception

    1 Introduction

Autonomous vehicle research in the United States dates back to the 1950s: in 1950, Barrett Electronics developed the world's first self-guided vehicle[1]. US autonomous vehicle research originated with the Defense Advanced Research Projects Agency (DARPA), and its research level led the world. European countries began developing autonomous driving technology in the 1980s, treating the autonomous vehicle as an independent agent mixed into normal traffic flow[2]. In 1987, the University of the Federal Armed Forces Munich, Daimler-Benz, BMW, Peugeot, Jaguar, and other well-known R&D institutions and automotive companies participated in the PROMETHEUS plan (Program for a European Traffic of Highest Efficiency and Unprecedented Safety), which had a significant impact worldwide[3]. Since the 1990s, Japan's Advanced Cruise-Assist Highway System Research Association (AHSRA) has set up an Advanced Safety Vehicle (ASV) project every five years to carry out autonomous driving research[4]. Chinese autonomous driving research began in the late 1980s, supported by the national "863" plan and related research programs of the National Defense Science and Technology Commission[5]. Since 2008, with the support of the National Natural Science Foundation of China, China has held the Intelligent Vehicle Future Challenge; the number of teams, the difficulty of the competition, and the number of teams completing the course have all increased year by year, and car companies' enthusiasm for participation has gradually strengthened, laying a solid foundation for the introduction and rapid progress of autonomous driving technology in China[6].

Intelligent driving perception technology mainly uses sensors to perceive the autonomous vehicle's internal and external environment. Sensors include visual sensors, radar sensors, and navigation and positioning sensors. Among them, the visual sensor is mainly used for lane detection and tracking, obstacle detection, traffic sign recognition (including traffic sign detection, traffic sign identification, and traffic light recognition), and visual SLAM.

Zhang et al. proposed a method of locally improving GNSS (Global Navigation Satellite System) positioning accuracy by using binocular vision with landmark information[7]. Ziegler et al. proposed a method of detecting lane lines using visual techniques[8]. Wolcott et al. proposed a map-based visual positioning method[9]. Duan studied front-end composition, closed-loop detection, and back-end optimization of graph-based SLAM[10]. Fang applied invariant moment theory, studying image properties that are invariant to tilt, rotation, and scaling[11]. Fernando et al. proposed a method based on adaptive color histograms[12]. Chen proposed two traffic sign detection methods: the first is based on color and shape information, and the second is based on color information and Adaboost[13]. Hu proposed a traffic sign detection method based on a parallel strategy and contour geometry analysis[14]. Smorawa et al. used the Canny edge detection operator to mark the position of road signs[15]. Kaempchen et al. used cameras and laser scanners to detect and track the position, velocity, orientation, and size of other moving objects, and to assess the motion of each object[16]. Li proposed a V-disparity algorithm[17]. Zhu et al. used re-projection transformation to achieve real-time monitoring of road obstacles[18].

Compared with other sensors, the visual sensor has the advantages of a large amount of detection information and a relatively low price. However, in complex environments, extracting the detection target from the background suffers from large computational complexity and difficult algorithm realization. At the same time, the visual hardware devices and the visual processing algorithms determine the performance of the intelligent vehicle's visual perception system. The sensor configuration of an intelligent vehicle is the foundation for perceiving the surrounding environment and obtaining its own state; it is also the most expensive part of the test platform and the one that differs most across platforms. As the mechanical structures of different intelligent vehicle platforms differ, sensor selection and installation positions also differ, and there is no unified scheme. Some research teams tend to rely on visual sensors to complete environment perception, typically represented by the smart car team of the VisLab laboratory at the University of Parma, Italy. Others tend to rely on radar sensors to obtain environmental information, typically represented by Google's autonomous vehicles. These teams mainly focus on improving the algorithms and technology themselves, rather than on how to reduce the impact of changes in sensor number, type, and installation position on driving decisions, so that the technical architecture can adapt to intelligent vehicle platforms with different sensor configurations and decouple intelligent decision making from sensor information. As long as the driving information remains complete, adding or removing one or more visual sensors, or changing the visual sensor model or installation position, should no longer directly affect intelligent driving decisions.
This paper puts forward a technical architecture for autonomous vehicles with the driving brain as the core, achieving convenient portability and robustness for visual sensor perception technology and its application.

In this paper, visual perception technology and its application are described within the technical framework based on the driving brain. The second part introduces the classification of visual sensors. The third part introduces the role of visual sensors in intelligent driving environment perception. The fourth part introduces the visual sensor architecture and environment perception based on the driving brain. The fifth part is the conclusion.

    2 Intelligent Driving Visual Sensor Classification

2.1 Monocular vision sensor

Domestic brands of industrial cameras mainly include Microvision, Aftvision, and Daheng Image; foreign brands include Germany's AVT and Canada's Point Grey. Visual research in the field of intelligent driving mainly uses the AVT Guppy FireWire and Point Grey cameras. The parameters of the Point Grey camera and the AVT Guppy FireWire camera are shown in Table 1.

    2.2 Binocular vision sensor

Binocular vision is currently the most mature and most widely used stereoscopic vision technology. It is mainly applied in four areas: robot navigation, parameter detection in micro-operating systems, three-dimensional measurement, and virtual reality. At present, limitations in detection distance and precision keep binocular vision relatively weak for intelligent vehicle obstacle detection, which mostly remains at the stage of theoretical research and experimental verification. Binocular vision products are shown in Fig.1, including the MEGA-DCS Megapixel Digital Stereo Head, nDepth Stereo Vision Core, TYZX Stereo Vision Camera, Bumblebee2 Stereo Vision Camera, Hydra Stereo Webcam, and Minoru 3D Webcam.

    Table 1 Camera parameters.

    Fig.1 The binocular vision products.

2.3 Panoramic vision technology

Panoramic vision technology is divided into single-camera 360° rotary imaging, multi-camera splicing imaging, fisheye camera imaging, and folding reflective panoramic imaging, as shown in Fig.2. Among them, single-camera 360° rotary imaging needs to rotate the camera constantly, requiring a stable mechanical structure and placing high demands on system reliability. Multi-camera splicing imaging requires stitching images from multiple cameras; its system calibration is more complex, it must achieve multi-camera synchronization and seamless splicing, and its cost is higher. For autonomous driving platform applications, fisheye camera imaging and folding reflective panoramic imaging technology are used.


Fig.2 From left to right, the fisheye lens imaging, folding reflective panoramic imaging, and multi-camera imaging, respectively.

    3 The Role of Visual Sensor in the Intelligent Driving Environment Perception

To achieve autonomous driving in an unknown environment, a variety of sensors are needed to obtain real-time, reliable external and internal information. In the development of intelligent driving technology at home and abroad, one of the most useful environment perception sensors is the visual sensor. From the American DARPA Challenge to the China Intelligent Vehicle Future Challenge, in practical research and application development, visual perception is mainly used in lane departure warning systems, traffic sign safety assistance systems, pedestrian detection safety systems, the monocular vision advanced driver assistance system (ADAS), and binocular vision sensing systems. Among them, the key technology of Mobileye's monocular ADAS is based on the EyeQ and EyeQ2 processors, using only one camera and bundling multiple applications together. The functions include Lane Departure Warning (LDW), vehicle detection based on radar-vision fusion, Forward Collision Warning (FCW), Headway Monitoring and Warning (HMW), pedestrian detection, Intelligent Headlight Control (IHC), Traffic Sign Recognition (TSR), visual Adaptive Cruise Control (ACC), etc., as shown in Fig.3. Its technology has been used in BMW, GM, Volvo, Hyundai, Renault, and other vehicles.

Google's autonomous vehicle uses a Point Grey Grasshopper 5-megapixel color camera to detect the traffic light status, operating on a selected 2040×1080 region of interest with a horizontal viewing angle of 30 degrees; the farthest detection distance is 150 meters, as shown in Fig.4. Precise positioning technology is applied to assist the detection: the position and height of each traffic light are marked on a pre-acquired map, and the autonomous vehicle relies on its own precise positioning to calculate the relative position between the current traffic light and the camera. According to the projection relationship of the image, the location of each traffic light in the image is bounded, which greatly reduces interference and improves the efficiency of the algorithm; the accuracy reaches 0.99, as shown in Fig.5.
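The projection-based region limiting described above can be sketched with a simple pinhole-camera model. The function name, the intrinsics, the ROI half-size, and the light's camera-frame position below are illustrative assumptions, not Google's actual calibration:

```python
# Hedged sketch: project a pre-mapped traffic light into the image to bound
# its search region via the pinhole model. All numeric values are assumed.
def project_to_roi(p_cam, fx=1400.0, fy=1400.0, cx=1020.0, cy=540.0, half=40):
    """p_cam: (X, Y, Z) of the light in camera coordinates, Z forward, metres.
    Returns an integer (u_min, v_min, u_max, v_max) pixel ROI."""
    X, Y, Z = p_cam
    u = fx * X / Z + cx   # standard pinhole projection
    v = fy * Y / Z + cy
    return (int(u - half), int(v - half), int(u + half), int(v + half))

roi = project_to_roi((2.0, -3.0, 60.0))   # a light 60 m ahead, 3 m above axis
```

Searching only inside the returned rectangle is what keeps the detector fast and suppresses interference from other bright objects in the scene.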

Fig.3 The ADAS system and pedestrian detection diagram.

Fig.4 Google autonomous car camera layout.

Fig.5 Google's traffic light limit processing area.

The Braive smart car from the University of Parma, Italy, is equipped with 10 cameras for lane line, traffic sign, and obstacle detection. Four cameras installed at the rearview mirror form long- and short-baseline binocular vision systems, while a single camera also detects the road and traffic signs. Cameras at the exterior mirrors and on both sides of the front are used for detecting vehicles waiting at lanes or crossroads. The rear of the vehicle is fitted with a stereoscopic vision system consisting of two cameras to detect obstacles approaching from behind, as shown in Fig.6.

Fig.6 "Braive" camera layout and field of view.

The Porter smart car from the University of Parma, Italy, has binocular stereoscopic vision systems, each composed of two cameras, at the front and the rear of the car. Behind the rearview mirror, three cameras with 65° horizontal fields of view compose a 180° panoramic image in front of the vehicle, making vehicle detection and tracking possible in any direction, as shown in Fig.7.

Fig.7 Porter intelligent car camera layout.

The Oxford University Bowler Wildcat intelligent car used a Point Grey Bumblebee2 stereo camera as its stereo vision system. Road images collected in advance were used to generate maps; by comparison with these historical images, the current vehicle location is determined for navigation while obstacles are detected. The horizontal field of view is 65 degrees and the image resolution is 512×384, as shown in Fig.8.

Fig.8 Bowler Wildcat intelligent car camera layout.

The Carnegie Mellon University BOSS intelligent car uses two Point Grey Firefly color cameras for road detection, assessing road trends and detecting static obstacles on the road, as shown in Fig.9.

Fig.9 CMU BOSS intelligent car camera layout.

Binocular vision technology is rarely used in intelligent vehicle obstacle detection and mostly remains at the stage of theoretical research and experimental verification. In 2004 and 2005, the GOLD (Generic Obstacle and Lane Detection) system of the University of Parma in Italy used the V-disparity algorithm, relying on three cameras forming two binocular vision systems with variable baseline lengths, to achieve obstacle detection on unstructured roads. The Oxford University Bowler Wildcat intelligent vehicle used a Point Grey Bumblebee2 stereo vision system together with a reflective panoramic imaging algorithm to achieve visual SLAM.

    3.1 Lane information extraction technology

Lane information extraction mainly determines the lateral position of the vehicle in the lane, the direction of the vehicle relative to the lane centerline, and the lane geometry, such as lane width and curvature. It generally includes two steps: lane detection and lane tracking. Without the limits of prior knowledge, lane detection determines the lane boundary in a single image. Lane tracking determines the position of the lane in an image sequence, where the current frame's lane position can be constrained by the lane boundary position in the previous frame. The main algorithms are the Kalman filter, the extended Kalman filter, and the particle filter.
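As a concrete illustration of the tracking step, the sketch below runs a minimal linear Kalman filter over noisy per-frame lateral-offset measurements. The state, matrices, and noise values are illustrative assumptions, not any specific production tracker:

```python
import numpy as np

# Minimal lane-tracking sketch: state = [lateral offset, offset rate];
# each frame's lane detection supplies a noisy offset measurement z.
def kalman_track(measurements, dt=0.1, q=1e-3, r=0.05):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # only the offset is measured
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y                        # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

est = kalman_track([0.50, 0.52, 0.49, 0.51, 0.50])
```

The previous-frame constraint the text describes is exactly the predict step: the filtered state bounds where the lane can appear in the next frame.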

    3.2 Obstacle detection technology

Obstacle detection is an important guarantee for the safe driving of intelligent vehicles. Obstacles appear unpredictably and cannot be avoided using a pre-set electronic map; they can only be discovered while driving and handled immediately. Obstacle detection based on monocular vision is mainly used to detect vehicles and pedestrians. The main algorithms are feature-based obstacle detection and obstacle detection based on the optical flow field.

    3.3 Identification of traffic signs

The identification technology of traffic signs mainly includes two basic technical aspects. Firstly, the traffic signs are detected, including traffic sign localization and preprocessing. Secondly, the traffic signs are identified, including feature extraction and classification. The main recognition algorithm is traffic sign detection and template matching based on the color image; the discrimination algorithm is mainly the traffic sign difference method.
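The template-matching step named above can be sketched with plain normalized cross-correlation. The function `match_template` and the toy 8×8 image are illustrative; a real system would first restrict the search window using the color-based detection stage:

```python
import numpy as np

# Sliding-window normalized cross-correlation between a grayscale image
# and a sign template; the best-scoring position is the match.
def match_template(image, template):
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

img = np.zeros((8, 8))
img[2:5, 3:6] = np.arange(9).reshape(3, 3)   # embed a known 3x3 pattern
pos, score = match_template(img, np.arange(9).reshape(3, 3).astype(float))
```

Normalization makes the score invariant to brightness and contrast changes, which is why this simple measure survives outdoor lighting variation at all.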

    3.4 Visual SLAM technology

Visual Simultaneous Localization And Mapping (SLAM) has high accuracy in direction measurement and low computational load, and is flexible and inexpensive, but depth information recovery is difficult. The feature map is the main environmental representation in visual SLAM methods, with corner features being the primary characteristic. Harris, KLT, and SIFT features are the point-feature detection methods commonly used in visual SLAM. Monocular visual SLAM is mostly based on the extended Kalman filter. Oxford University achieved positioning and navigation by using a visual SLAM method and applied it in real-time road tests.
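A minimal sketch of the Harris detector mentioned above, using simple central differences and a 3×3 box window in place of Gaussian smoothing; the constant k = 0.04 is the usual empirical choice:

```python
import numpy as np

# Harris corner response: build the structure tensor from image gradients
# and score det(M) - k * trace(M)^2; high response marks corners.
def harris_response(img, k=0.04):
    Ix = np.zeros_like(img); Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):   # 3x3 box window sum for the structure tensor
        out = np.zeros_like(a)
        out[1:-1, 1:-1] = sum(a[1 + dr:a.shape[0] - 1 + dr,
                                1 + dc:a.shape[1] - 1 + dc]
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

img = np.zeros((10, 10)); img[5:, 5:] = 1.0        # a single step corner
R = harris_response(img)
corner = np.unravel_index(np.argmax(R), R.shape)   # strongest response
```

Edges give a large trace but near-zero determinant, so only true two-directional corners score positively, which is what makes these points stable landmarks for SLAM.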

    4 Vision Sensor Architecture and Environmental Awareness and Processing

Intelligent vehicle environment perception covers road information, traffic sign information, pedestrian behavior information, and surrounding vehicle information. This information is obtained through a variety of heterogeneous on-board sensors: various cameras, radars (laser radar, millimeter-wave radar, ultrasonic radar, infrared radar, etc.), GPS receivers, inertial navigation, and so on. When a human driver performs a complex driving operation, such as changing lanes, attention is focused on different areas to ensure lane safety. Radar sensors come in many types; multiple heterogeneous radar sensors can obtain real-time information through effective extraction, learning from the characteristics of human cognition. Different intelligent driving test platforms differ in sensor model, quantity, and installation position, and in their sensor information processing modules, as shown in Fig.10. The information provided by different driving maps has no fixed standard of granularity, and the number and interfaces of the autonomous system's software modules differ. With the driving brain as the core, forming driving perception and using the driving cognition formal language, a common autonomous driving software architecture can be designed. In this architecture, the intelligent decision module is not directly coupled with the sensor information. Through the sensor information and the prior information of the map, a comprehensive driving situation is formed to complete intelligent decision making; an image segmentation algorithm extracts the target area, which is then segmented a second time. According to the computed input features, obstacles are searched for in the secondary segmentation area and the travelable road area is output, completing the information sensing method.
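The decoupling idea can be sketched as follows: each sensor adapter normalizes its raw output into one shared driving-situation record, so the decision layer never touches sensor models or mounting details. The class and field names are illustrative assumptions, not the paper's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class DrivingSituation:
    # fused, sensor-agnostic picture the decision layer consumes
    drivable_region: list = field(default_factory=list)  # metres, vehicle frame
    obstacles: list = field(default_factory=list)

class CameraAdapter:
    def update(self, situation, segmentation_result):
        # segmentation output already mapped to road-plane metres
        situation.drivable_region = segmentation_result["region"]

class LidarAdapter:
    def update(self, situation, clusters):
        # keep only clusters inside the 10 m attention window
        situation.obstacles = [c for c in clusters if c["range_m"] <= 10.0]

def decide(situation):
    # the decision layer sees only the fused situation, never raw sensors
    return "GO" if not situation.obstacles else "BRAKE"

s = DrivingSituation()
CameraAdapter().update(s, {"region": [(-1.5, 0.0), (1.5, 10.0)]})
LidarAdapter().update(s, [{"range_m": 25.0}])   # beyond the window, filtered out
```

Swapping a camera model or adding a radar then means writing or reconfiguring an adapter, while `decide` and everything above it stay unchanged, which is the portability the architecture aims for.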

Fig.10 Intelligent car hardware architecture platform.

A human driver coordinates sensory memory, working memory, long-term memory, the computing center, and learning across different areas of the brain to complete driving activities. The intelligent driving system simulates these human functions and likewise requires a series of the above key functions working together, as shown in Fig.11. These key functions include sensor information acquisition and processing, driving map calibration and mapping, a driving cognition formal language, and intelligent decision making[19].

Sensor information acquisition includes information from the camera, laser radar, millimeter-wave radar, the combined positioning system, and others; the visual sensor completes lane marking recognition, traffic signal recognition, and traffic sign recognition. In the field of image processing, this involves object classification and recognition, using image features such as SIFT and HOG, and machine learning algorithms such as SVM to complete the classification and identification of objects[20-22]. For intelligent driving tests, applying the prior knowledge of the driving map can improve object recognition performance. The Google smart car records the precise positions of traffic lights, traffic signs, and other static traffic elements in advance; during object recognition it uses the vehicle's precise position to map the collected traffic light and sign positions onto the image, quickly determining the region of interest, which greatly improves the speed and accuracy of recognition[23]. In addition, simultaneous localization and mapping (SLAM) can be accomplished using image information, which helps the vehicle acquire its own precise location[24]. However, since visual sensors image the light emitted or reflected by perceived objects, they are greatly affected by external lighting conditions, which limits the scale of their application.

    Fig.11 Driving brain and human brain function technical structure.

According to the road segmentation algorithm based on inverse perspective transformation, the working area of the camera can be analyzed; Fig.12 shows the effective working areas of the visual sensor and the radar sensor, and their intersection is the common area of the camera and the radar.

    Fig.12 Schematic diagram of the sensor operating area.

An uneven unpaved road surface causes the body to bump, so the maximum speed of the intelligent vehicle is set at 20 km/h (5.6 m/s), and its maximum deceleration is 6 m/s2. From detecting the road to coming to a stop, at least 5 m of distance is needed. Braking force builds gradually; taking into account the running time of the algorithm and the response time of the braking system, the safety distance is set to 10 m, and the algorithm is only concerned with the road area within 10 m in front of the vehicle. From the perspective of human cognition, the target area within 10 m of the vehicle is judged visually first: if there is no obstacle in the target area, it is considered a travelable area; otherwise, the part of the target area in front of the obstacle is considered travelable. Referring to the principles of human cognition, the cognition-model-based algorithm only needs to consider areas that may pose a potential threat to the autonomous vehicle and can ignore farther areas, which saves computing resources and helps meet the real-time requirement. Finally, the algorithm should satisfy a robustness criterion, so that the decision result passed to the decision layer resists interference as much as possible. In summary, the flow of the visual sensor information sensing and processing algorithm is shown in Fig.13.
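The distances above can be checked with elementary kinematics: at 5.6 m/s and 6 m/s2 the pure braking distance v2/(2a) is about 2.6 m, and the quoted 5 m figure is consistent with adding a few tenths of a second of perception and actuation delay. The 0.4 s delay below is an assumed illustration, not a value given in the paper:

```python
# Pure braking distance plus a reaction/latency allowance:
# d = v * t_delay + v^2 / (2 * a)
def stopping_distance(v_mps, decel_mps2, delay_s=0.4):
    return v_mps * delay_s + v_mps ** 2 / (2.0 * decel_mps2)

d = stopping_distance(5.6, 6.0)   # about 4.9 m, well inside the 10 m window
```

The 10 m attention window therefore roughly doubles the worst-case stopping distance, which is the safety margin the paragraph argues for.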

Fig.13 Graphic sensor information fusion algorithm flowchart.

4.1 Target area and obstacle information extraction

When driving (especially during autonomous lane changes), smart cars must detect surrounding vehicles, pedestrians, and other obstacles in real time and determine the travelable area to ensure safety. Obstacle detection is a key difficulty of environment perception technology. As an active sensor, the radar has good directionality, high precision, and reliable data. The aim of this paper is to use the radar for road identification and for a second decision on obstacles within the region of interest, so as to obtain a confirmed safe path. Firstly, a target area is determined from the image data obtained by the visual sensor. Then, the target area is searched for obstacles and the safe, travelable road area is obtained.

4.2 Secondary processing of the target area

The original image obtained by the visual sensor is transformed into the road plane, and the initial target road area is obtained by image segmentation. To meet the real-time criterion, the algorithm focuses on the road area within 10 m of the intelligent vehicle, and to ensure robustness, the judgment result is kept simple and practical. Therefore, the initial target road area needs further extraction and processing; Fig.14 shows the extraction process.

Fig.14 Process diagram of the further extraction of the initial road area.

The red areas in Fig.14(b) and (c) show the road area obtained by the image segmentation algorithm. From the calibrated internal and external camera parameters, the correspondence between pixels and actual distance can be calculated, so the length, area, average width, and other properties of the extracted target area can be obtained. This paper intercepts the area of concern within 10 m for processing. Since the smart car does not necessarily travel in the middle of the road, the road area is divided into two parts at the centerline of the car's head, as shown in Fig.15; the area of each part is calculated, and from the relationship between pixel size and actual distance, the average road width on the left and right sides is obtained.

Let the computed average road widths of the left and right sides be l and r; then the final target area is a rectangular area, as shown in Fig.16.

    Fig.15 The area of attention.

    Fig.16 The target area after secondary processing.

4.3 The second judgment of obstacles

The LIDAR-based obstacle detection algorithm outputs the obstacle ID and number of all clustering results in the radar field of view, along with the coordinates of each obstacle's center point. Likewise, only the clustering results within the 10 m range are of concern; detections at longer range serve only as a basis for tracking comparison. Fig.17 shows the direct output of the obstacle detection algorithm, and Fig.18 shows the output of obstacles within 10 m after the second decision. The red sector shows the 10 m field of view of the radar, and obstacles within it are marked with a red line.

    4.4 Accessible road area output

Finally, the target area must be searched for obstacles. If there is no obstacle in the target area, the entire target area is considered a travelable road area; otherwise, the part of the target area before the obstacle is considered travelable. The specific output process of the road area is as follows:

    Fig.17 Initial barrier detection results.

    Fig.18 Obstacle detection after secondary judgment.

First of all, let the road widths on the two sides output by the camera be l and r. In the Oxy coordinate system of the vehicle plane, the four straight lines x = -l, x = r, y = 0, and y = 10 are drawn so that they intersect to enclose a closed rectangular space. This rectangular space is the target area.

Secondly, search for obstacle objects. If there is an obstacle object in the target area, extract the y values of all obstacles in this area; by comparison, obtain the minimum value ymin, draw the line y = ymin, and discard the line y = 10 to obtain a new rectangular area. The new rectangular area is the accessible road area. The travelable road area obtained by the algorithm is shown in Fig.19. Since the left and right edges of the road area obtained from the visual information contain no obstacle, the length of the accessible road area is 10 m or ymin m, and its width is (l + r) m, which meets the requirement for the intelligent vehicle's accessible area output.
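The two steps above can be sketched directly; `accessible_area` is an illustrative name, and obstacle cluster centres are taken as (x, y) points in the vehicle-plane Oxy frame:

```python
# The target rectangle spans x in [-l, r], y in [0, horizon]; if any
# obstacle centre falls inside it, the area is cut at y = y_min.
def accessible_area(l, r, obstacles, horizon=10.0):
    """obstacles: list of (x, y) cluster centres in metres.
    Returns (width, length) of the accessible rectangle."""
    inside = [y for (x, y) in obstacles
              if -l <= x <= r and 0.0 <= y <= horizon]
    length = min(inside) if inside else horizon   # discard y = horizon line
    return (l + r, length)

# one obstacle inside the corridor, one outside it laterally
area = accessible_area(2.0, 2.0, [(0.5, 6.0), (5.0, 3.0)])
```

The obstacle at x = 5.0 lies outside the corridor and is ignored, so only the one at y = 6.0 shortens the accessible length, exactly as in the y = ymin construction above.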

    Fig.19 Map travel road area diagram.

    5 Conclusion

Since a machine vision system can quickly obtain a large amount of information and easily integrates design information and process-control information, it is widely used in modern automated production for monitoring, product testing, quality control, and other fields. In intelligent driving vehicles, since the camera is cheaper than LIDAR and can obtain rich information, it is a research focus of intelligent vehicle technology with broad application prospects. It is expected that with the maturation and development of machine vision technology, the intelligence of the algorithms will improve and they will be ever more widely used in intelligent vehicles; future intelligent vehicles will mainly rely on machine vision to accomplish the environment perception task.

    Acknowledgment

    This work was supported by National Natural Science Foundation of China under Grant No.61035004, No.61273213, No.61300006, No.61305055, No.90920305, No.61203366, No.91420202, No.61571045, No.61372148, the National High Technology Research and Development Program (“863” Program) of China under Grant No.2015AA015401, and the National High Technology Research and Development Program (“973” Program) of China under Grant No. 2016YFB0100903, and the Junior Fellowships for Advanced Innovation Think-tank Program of China Association for Science and Technology under Grant No.DXB-ZKQN-2017-035, and the Beijing Municipal Science and Technology Commission special major under Grant No. D171100005017002.

[1] D. W. Gage, UGV History 101: A brief history of unmanned ground vehicle (UGV) development efforts, Unmanned Systems, vol. 13, pp. 9-32, 1995.

    [2]T.Kanade and C.Thorpe,CMUstrategiccomputingvisionprojectreport: 1984to1985.Carnegie-Mellon University, The Robotics Institute, pp.10-90, 1986.

    [3]M.Williams, PROMETHEUS-The European research programme for optimising the road transport system in Europe, inProceedingsofIEEEColloquiumonDriverInformation, 1988, pp.1-9.

    [4]S.Tsugawa, M.Aoki, A.Hosaka, and K.Seki, A survey of present IVHS activities in Japan,ControlEngineeringPractice, vol.5, no.11, pp.1591-1597, 1997.

    [5]M.Yang, Overview and prospects of the study on driverless vehicles,JournalofHarbinInstituteofTechnology, vol.38, no.8, pp.1259-1262, 2006.

    [6]H.B.Gao, X.Y.Zhang, T.L.Zhang, Y.C.Liu, and D.Y.Li, Research of intelligent vehicle variable granularity evaluation based on cloud model,ActaElectronicaSinica, vol.44, no.2, pp.365-374, 2016.

    [7]Y.R.Zhang, C.J.Guo, and R.Z.Niu, Research on stereo-vision aided GNSS localization for intelligent vehicles,ComputerEngineeringandApplications, vol.52, no.17, pp.192-197, 2016.

    [8]J.Ziegler, H.Lategahn, M.Schreiber, C.Keller, C.Knoppel, J.Hipp, M.Haueis, and C.Stiller, Video based localization for bertha, inProceedingsof2014IEEEInternationalConferenceonIntelligentVehiclesSymposium, 2014, pp.1231-1238.

    [9] R. W. Wolcott and R. M. Eustice, Visual localization within LiDAR maps for automated urban driving, in Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 176-183.

    [10] H. X. Duan, Outdoor Vision Localization and Map Building for the Unmanned Vehicle Based on Binocular Vision, Dalian University of Technology, 2015.

    [11] Q. L. Fang, Research on Traffic Mark Recognition Method Based on Unmanned Vehicle Assisted Navigation, Anhui University, 2012.

    [12] F. Bernuy, D. Solar, I. Parra, and P. Vallejos, Adaptive and real-time unpaved road segmentation using color histograms and RANSAC, in Proceedings of the 9th IEEE International Conference on Control and Automation, 2011, pp. 136-141.

    [13] Z. X. Chen, Detection and Identification of Road Traffic Signs in Urban Areas, University of Science & Technology of China, 2012.

    [14] J. C. Hu, Research on Traffic Sign Detection and Recognition Based on Stable Feature, Hunan University, 2012.

    [15] D. Smorawa and M. Kubanek, Analysis of advanced techniques of image processing based on automatic detection system and road signs recognition, Journal of Applied Mathematics and Computational Mechanics, no. 1, p. 13, 2014.

    [16] N. Kaempchen, B. Schiele, and K. Dietmayer, Situation assessment of an automated emergency brake for arbitrary vehicle-to-vehicle collision scenarios, IEEE Transactions on Intelligent Transportation Systems, vol. 10, pp. 678-687, 2009.

    [17] Y. Li, Research of Obstacle Recognition Based on Binocular Vision, Wuhan University of Technology, 2007.

    [18] Z. G. Zhu, X. Y. Lin, and D. J. Shi, A real-time visual obstacle detection system based on reprojection transformation, Journal of Computer Research and Development, no. 1, pp. 77-84, 1999.

    [19] D. G. Lowe, Object recognition from local scale-invariant features, in Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, vol. 2, pp. 1150-1157.

    [20] D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

    [21] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, vol. 1, pp. 886-893.

    [22] X. Y. Zhang, H. B. Gao, M. Guo, G. P. Li, Y. C. Liu, and D. Y. Li, A study on key technologies of unmanned driving, CAAI Transactions on Intelligence Technology, vol. 1, no. 1, pp. 4-43, 2016.

    [23] J. Levinson, J. Askeland, J. Dolson, and S. Thrun, Traffic light mapping, localization, and state detection for autonomous vehicles, in Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 5784-5791.

    [24] N. Engelhard, F. Endres, J. Hess, J. Sturm, and W. Burgard, Real-time 3D visual SLAM with a hand-held RGB-D camera, in Proceedings of the RGB-D Workshop on 3D Perception in Robotics at the European Robotics Forum, 2011, p. 180.

    • Xinyu Zhang and Buyun Gao are with the Information Technology Center, Tsinghua University, Beijing 100083, China.

    • Hongbo Gao and Guotao Xie are with the State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100083, China. Email: ghb48@mail.tsinghua.edu.cn.

    • Buyun Gao is also with the School of Software, Beijing Institute of Technology, Beijing 100081, China.

    • Deyi Li is with the Institute of Electronic Engineering of China, Beijing 100039, China.

    * To whom correspondence should be addressed.

    Manuscript received: 2017-08-06; accepted: 2017-09-18
