
    Video Conference System in Mixed Reality Using a Hololens


Baolin Sun, Xuesong Gao, Weiqiang Chen, Qihao Sun, Xiaoxiao Cui, Hao Guo, Cishahayo Remesha Kevin, Shuaishuai Liu and Zhi Liu★

1 School of Information Science and Engineering, Shandong University, Qingdao, 266000, China

2 State Key Laboratory of Digital Multi-Media Technology, Hisense Co., Ltd., Qingdao, 266000, China

ABSTRACT The mixed reality conference system proposed in this paper is a robust, real-time video conference application that compensates for the limited interaction and the lack of immersion and realism of traditional video conferences, realizing the entire holographic video conference pipeline from client to cloud and back to client. This paper focuses on the design and implementation of a video conference system based on AI segmentation technology and mixed reality. Several components of the mixed reality conference system are discussed, including data collection, data transmission, data processing, and mixed reality presentation. The data layer is mainly used for data collection, integration, and video and audio codecs. The network layer uses WebRTC to realize peer-to-peer data communication. The data processing layer is the core of the system, responsible for human video matting and human-computer interaction, which are the keys to realizing mixed reality conferences and improving the interactive experience. The presentation layer comprises the login interface of the mixed reality conference system, the presentation of real-time matting of human subjects, and the presentation objects. With the mixed reality conference system, conference participants in different places can see each other in real time in their own mixed reality scenes and share presentation content and 3D models based on mixed reality technology, giving a more interactive and immersive experience.

KEYWORDS Mixed reality; AI segmentation; hologram; video conference; WebRTC

    1 Introduction

Image matting and compositing are image processing technologies that extract targets of interest from the background of a static image or a video sequence and, at the same time, composite the separated targets with other background images [1]. Image matting is a popular research direction in computer vision. With improved hardware computing power, image matting algorithms based on deep learning are widely used to extract target objects in film and television special effects, intelligent advertising design, and other fields [2].

Network video conferences have come into wide use with the development of network technology and 5G communication and the further improvement of infrastructure. In particular, triggered by the necessity of social distancing during the recent pandemic, people increasingly need video conference technology for activities such as study and work [3]. Through network video conference systems, people can easily communicate remotely, hold instant meetings, and learn online. Admittedly, network video conference technology has matured and is widely used in many fields of social life. However, traditional network video conference systems fall short in applications that require strong three-dimensional sensory effects or flexible manipulation of 3D models during a conference. At the same time, improving the authenticity of video conferencing while protecting privacy has become a problem that video conference research needs to address.

Mixed Reality (MR) [4] is a technology that combines the physical and digital worlds so that they coexist, and it offers greater user interaction with the real world than other similar technologies [5]. Mixed reality, along with augmented reality and virtual reality, has developed rapidly and achieved many practical applications. These technologies are widely used in scientific research, teaching, entertainment, and other areas of social life, improving research conditions and enriching people's daily cultural lives [6].

Generally speaking, traditional commercial video conference systems mainly refer to 2D video conferences, in which participants communicate with each other and share presentation documents through video, using products such as Zoom, Cisco WebEx Teams, and Google Meet [7]. These provide videotelephony and online chat services through cloud-based peer-to-peer software platforms and are used for teleconferencing, telecommuting, distance education, and social relations [8]. Furthermore, more immersive video conference systems based on telepresence technology increase the natural perception of the environment through communication media. For telepresence technology [9], some mature commercial products have emerged, such as Polycom RealPresence Immersive Studio [10] and Cisco Immersive TelePresence [11]. They generally require a purpose-built conference room with relatively large space and adopt an integrated conference table with hidden microphones and speakers, so that participants do not need to deliberately move close to a microphone and can maintain a natural way of communicating; they also provide excellent listening and position recognition to create a sense of space.

Commercial teleconference systems already have highly immersive frameworks and essential interactive functions. However, the presentation dimension is still two-dimensional, far from actual participation in a conference: the two-dimensional scene captured by the camera is not displayed on the screen with a three-dimensional sense. With the development of 3D reconstruction, audio and video transmission, and 3D presentation technology, some video conference systems have been upgraded from 2D to 3D based on virtual reality and mixed reality technology. MeetinVR [12], a business meeting and VR collaboration software, makes human interaction more intuitive and effective than a real scene by creating a new reality optimized for collaboration. Holoportation [13] presented an end-to-end system for augmented and virtual reality telepresence that demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture, and objects, using a set of new depth cameras, and allows users wearing virtual or augmented reality displays to see, hear, and interact with remote participants in 3D. Mixed Reality Video Calling [14] for Microsoft Teams Together mode uses AI segmentation technology to digitally place participants in a shared background, making it feel as if they are sitting in the same room with everyone else in the meeting. Still, Microsoft Teams Together mode is a two-dimensional display on a PC or smartphone. A low-cost mixed reality telepresence system presented by Joachimczak et al. [15] performs real-time 3D reconstruction of a person or an object and wirelessly transmits the reconstructions to Microsoft's Hololens.

At present, there is no real network video conference system that combines computer-vision-based AI segmentation technology with MR or AR head-mounted display devices. Aiming at the immersion, interactivity, and authenticity lacking in traditional network video conference systems, we propose a novel network video conference system based on MR display devices and AI segmentation technology. We use instance segmentation and image matting to extract remote meeting participants from the background and interact with them in real time in a mixed reality or augmented reality head-mounted display. Our conference system is implemented with mixed reality technology and AI segmentation, hence its name, MRCS.

    2 System Requirements Analysis

To ensure that the video conference system based on mixed reality and portrait matting described in this paper can be used in practice and, at the same time, offers advantages over existing video conference systems, we need to clarify the system requirements before development [16]. Requirement analysis is a detailed analysis of the functions the system needs to realize, the interface presentation, and the interactive performance. After research and analysis, the requirements to be met by the system are confirmed, and then the design and development work can be carried out step by step. System requirements are divided into functional requirements and performance requirements [17]. Functional requirements ensure the integrity of system functions and guide the division of system modules; performance requirements ensure that the system can be implemented and used in practical applications.

    2.1 System Function Requirements Analysis

The mixed reality conference system mainly serves users in different places. Through this conference system, they can appear in the same mixed reality conference room simultaneously; that is, each user can observe the human portraits of the other users. In addition to supporting traditional two-dimensional video conferences, the system also supports manipulating 3D models in virtual scenes. Based on the above problems to be solved, we divide the functions of the mixed reality conference system into five parts:

(1) Mixed reality display function: When users use this conference system, they can see a person who has been separated from the remote environment and placed into the local holographic space. Therefore, this system should provide a remote mixed reality display function.

(2) Real-time portrait matting function: As a conference system based on mixed reality, this system should provide real-time portrait matting, which extracts users' portraits in real time and registers them in the holographic space of the mixed reality conference; human image compositing then allows users to achieve an immersive experience.

(3) Real-time audio and video transmission function: Real-time audio and video transmission is the most basic functional requirement for a video conference system. At the same time, it must support both LAN and WAN networks.

(4) Multi-person online function: To simulate a real meeting room scene, the system must support at least three people online simultaneously.

(5) Conference presentation function: As an improvement over real conferences, this system needs to provide essential conference functions, such as PowerPoint presentation and video playback.

    2.2 System Performance Requirements Analysis

To be genuinely usable in practical applications, the system must achieve the above functions and meet specific performance requirements.

(1) Real-time performance: For a video conference system, real-time performance is the essential requirement to ensure the regular progress of the meeting. Low delay guarantees meeting quality and improves the participant experience. Therefore, the modules of the system need to be closely connected, no module may introduce too much delay, and the total delay must not exceed 1000 ms.

(2) Image matting frame rate: The mixed reality conference system allows users to merge their body images with the remote meeting scene after removing the background. The performance of the portrait matting algorithm directly affects the system's fluency, and a smooth portrait display significantly improves the user experience; delays, freezes, or loss of portraits substantially reduce the user's immersion. Therefore, the frame rate of the system's image matting algorithm must be no less than 60 FPS so that the portrait in the remote mixed reality scene can reproduce user actions and expressions smoothly.

(3) Scalability: The system will be expanded and upgraded as needed in practical applications. Therefore, the system needs a modular structure that makes it convenient to add new functional modules and upgrade existing ones.

(4) Security: A meeting room requires a set of keys to establish a session, and different sessions are independent of each other and do not interfere.

    3 Design of System Framework

The system framework and software interfaces determine the skeleton and skin of the entire system software. The data and control flows can be clarified through the system framework design, and the whole system can be divided into modules to facilitate design and implementation.

    3.1 System Overview

According to the workflow of the mixed reality conference system, the system is organized into four parts: the data layer, network layer, processing layer, and presentation layer; data flows from bottom to top. The data layer contains the underlying physical equipment and is mainly used for the input and integration of the original video and audio information. The network layer includes client nodes and cloud server programs. It primarily receives audio, video, and interaction information from the data layer and sends the processed audio and video data to the presentation layer. The processing layer splits the received data: the video flows to the image matting module, and the interactive instructions flow to the conference presentation object module. The presentation layer is used for the mixed reality display of the entire system, including the user interface, mixed reality scenes, human images, and conference presentation objects. Fig. 1 shows the workflow of our MRCS system, and the system architecture diagram is shown in Fig. 2.

Figure 1: Workflow of the mixed reality conference system

Figure 2: MRCS system architecture diagram

    3.2 Design of Data Layer

The data layer is mainly responsible for data collection and integration. Video is collected through the camera in RGB format, and audio is collected through the microphone in PCM format. The video stream is encoded from RGB to H.264 [18], and the audio is encoded from PCM to AAC [19]. Optionally, screen recording information can be collected and encoded in the same way. The encoded video and audio are then packed into a multimedia container format with streaming media characteristics. Furthermore, the system is based on the WebRTC framework and pushes the audio and video streams to the cloud server for subsequent processing.
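As a concrete illustration of this encode-and-mux step, the sketch below drives the ffmpeg CLI from Python to encode raw RGB frames to H.264 and raw PCM samples to AAC, packing both into one container. The file names, resolution, and sample rate are placeholder assumptions; the actual MRCS pipeline streams through WebRTC rather than through files.

```python
import subprocess

# Minimal sketch of the data layer's encode step: raw RGB video -> H.264,
# raw PCM audio -> AAC, muxed into a single MP4 container.
# All file names and format parameters below are illustrative assumptions.
subprocess.run([
    "ffmpeg",
    "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", "1920x1080", "-r", "30",
    "-i", "frames.rgb",                        # raw RGB frames from the camera
    "-f", "s16le", "-ar", "48000", "-ac", "2",
    "-i", "audio.pcm",                         # raw PCM samples from the microphone
    "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
    "-c:a", "aac",
    "meeting_stream.mp4",
], check=True)
```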

    3.3 Design of Network Layer

This system selects WebRTC [20] as the network framework for application development. WebRTC (Web Real-Time Communication) is a plugin-free real-time communication technology introduced by Google. It does not require any other plug-ins or applications for audio and video streaming and data sharing, allowing browsers to exchange media directly with other browsers in a peer-to-peer manner while providing higher security than various other streaming media systems. WebRTC provides the core technology of video conferencing and currently supports the VP8 and H.264 video codec standards. At the same time, WebRTC already implements audio and video codec transmission, NAT (Network Address Translation) traversal, and other functions, so we can focus on the transmitted content under the WebRTC framework.

Because WebRTC is based on P2P connections, the conference network topology of the system is a Mixer structure, in which every client is fully connected to a central node. The design takes the cloud server as the center and establishes peer-to-peer (P2P) data transmission with each client; clients do not interfere with each other, which increases system stability. The WebRTC Mixer structure is shown in Fig. 3: through the central cloud server, clients do not need to establish P2P connections with other clients directly but only with the central cloud server. After receiving the audio and video streams from a client, the server processes and forwards them to the other clients. Before transmission, it can also mix or transcode the audio and video streams of multiple users. Through the WebRTC Mixer structure, the system can easily implement different customization requirements on the server side. However, the Mixer structure may introduce additional delay and some loss of video and audio quality because of decoding and re-encoding on the server.

Figure 3: The Mixer structure based on WebRTC
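To make the forwarding behavior of the Mixer structure concrete, here is a toy relay server in Python: every client connects only to the center, and whatever one client sends is forwarded to all the others. Plain TCP sockets stand in for the real WebRTC transport, and no media processing is done, so this only illustrates the star topology, not the actual MRCS server.

```python
import asyncio

clients: set[asyncio.StreamWriter] = set()

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Each client holds a single connection to the central server; the server
    # forwards every chunk it receives to all other connected clients.
    clients.add(writer)
    try:
        while data := await reader.read(4096):
            for peer in clients:
                if peer is not writer:
                    peer.write(data)
                    await peer.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```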

The system is developed on Unity3D [21]. We select the WebRTC Video Chat plugin from the Unity Asset Store as the framework and extend it. The plugin allows users to stream audio and video between two programs, send text messages, and includes a complete signaling server program.

Furthermore, during audio and video transmission, we protect the privacy of MRCS participants with E2EE (End-to-End Encryption) [22], one of the most reliable methods of securing data exchange in information security. We implement the application encryption class and encryption functions based on the WebRTC API. We generate two encryption keys for each user, a public key and a private key, both produced with the PGP (Pretty Good Privacy) [23] encryption algorithm, which has not been cracked since its release in 1991. The public key encrypts the information exchanged between user applications along with the audio and video transmission. The private key decrypts the data encrypted with the public key; it is stored locally and never sent from the user's device. Since the server does not participate in key generation, what the server receives and forwards are user-encrypted messages. Therefore, even if user information is leaked on the server side, it cannot be parsed.
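The sketch below illustrates the per-user keypair idea using RSA-OAEP from the Python `cryptography` package as a stand-in for PGP: data is encrypted with the recipient's public key and can only be decrypted with the private key, which never leaves the device. In practice one would encrypt a symmetric session key this way rather than the media itself; the payload here is a placeholder.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Per-user keypair: the private key stays on the device, the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The server only ever sees this ciphertext and cannot read it.
ciphertext = public_key.encrypt(b"meeting session key", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)  # possible only on the device
assert plaintext == b"meeting session key"
```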

    3.4 Design of Processing Layer

The processing layer is the main part of the system and is divided into two modules: the portrait matting module and the human-computer interaction module. The portrait matting module first disassembles the mixed frames from the client and then extracts the human body image from the background using the image matting algorithm. The human-computer interaction module is mainly used to operate presentation objects and 3D models in holographic scenes. The system pulls the multimedia stream pushed by the data layer to the cloud server, demultiplexes the multimedia container into video data (H.264) and audio data (AAC), and performs GPU hardware decoding of the H.264 video data into an RGB video stream [24]. Portrait matting and hand posture estimation [25] are subsequently performed on the parsed video stream.
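A minimal sketch of this demux/decode step using the PyAV bindings to FFmpeg: the container is split into its streams and each video frame is decoded to an RGB array for the matting models. The source name is a placeholder, and the real system decodes an H.264 stream pulled over WebRTC with GPU acceleration rather than a local file.

```python
import av  # PyAV: Python bindings to FFmpeg (pip install av)

# Demux the container and decode the first video stream to RGB frames.
container = av.open("meeting_stream.mp4")  # placeholder source
for frame in container.decode(video=0):
    rgb = frame.to_ndarray(format="rgb24")  # H x W x 3 uint8 array
    # ... hand `rgb` to the portrait matting / instance segmentation models
container.close()
```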

We use AI segmentation technology (an image segmentation algorithm and a human matting method) for human target extraction and background replacement in the processing layer. With image segmentation, each annotated pixel in an image belongs to a single class; this is often used to label images for applications requiring high accuracy. Annotation is manually intensive because it requires pixel-level accuracy. The output of image segmentation is a mask that outlines the object's shape in the image. Image segmentation annotations come in several types, such as semantic segmentation, instance segmentation, and panoptic segmentation; the practice of image segmentation generally requires annotating every pixel of the image with a class [26]. Instance segmentation derives from object detection and semantic segmentation. Object detection or localization provides the classes and locations of the objects in an image. Semantic segmentation gives fine-grained inference by predicting a label for every pixel in the input image, with each pixel labeled according to the object class enclosing it. Furthering this evolution, instance segmentation gives different labels to separate objects belonging to the same class [27]. Hence, we use an instance segmentation algorithm to simultaneously solve human object detection and semantic segmentation. Our instance segmentation algorithm is based on SOLO [28] with a lightweight implementation. The model architecture diagram is shown in Fig. 4.

Figure 4: Our instance segmentation model architecture diagram

Our instance segmentation model draws on the design concepts of the MMDetection [29] framework and is divided into three parts: backbone, neck, and head. The model takes ResNet-50 as the backbone network for feature extraction, and the neck uses an FPN [30] to generate feature maps of different sizes with a fixed 64 channels at each level. Each FPN level is followed by a head component that predicts the object category and mask. The head is divided into two separate branches: the semantic category branch and the instance mask branch. The semantic category branch grids the FPN feature map to S×S×64, dividing the feature map into a grid of S×S cells, and then extracts features through four 3×3 convolution layers. Next, a fixed number of frames are fed into ConvGRU [31] layers to take temporal information into account and improve video instance segmentation quality. Finally, the features are output as S×S×C through a 3×3 convolution layer, where C is the number of object classes. The instance mask branch first performs CoordConv on the highest-level features of the FPN. It then extracts features through four 3×3 convolution layers and ConvGRU layers to take inter-frame correlation into account. Finally, the features are output as H×W×S² through convolution layers. In our lightweight instance segmentation model based on SOLO, we adopt ConvGRU to aggregate temporal information. The GRU network is a recurrent neural network that is a variant of LSTM (Long Short-Term Memory) [32], and ConvLSTM [33] replaces the fully connected layers in LSTM with convolution kernels; the ConvGRU network is more parameter-efficient than ConvLSTM. Formally, ConvGRU is defined as:

$$
\begin{aligned}
z_t &= \sigma\left(w_{zx} * x_t + w_{zh} * h_{t-1} + b_z\right)\\
r_t &= \sigma\left(w_{rx} * x_t + w_{rh} * h_{t-1} + b_r\right)\\
\tilde{o}_t &= \tanh\left(w_{ox} * x_t + w_{oh} * \left(r_t \odot h_{t-1}\right) + b_o\right)\\
h_t &= z_t \odot h_{t-1} + \left(1 - z_t\right) \odot \tilde{o}_t
\end{aligned}
$$

where $*$ denotes convolution, $\odot$ denotes element-wise multiplication, $\sigma$ is the sigmoid function, and $w$ and $b$ are the convolution kernels and bias terms. $h_t$ is the hidden state, which is used both as the output and as the recurrent state $h_{t-1}$ of the next time step.
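A minimal PyTorch sketch of a ConvGRU cell implementing the update equations above. The channel count and kernel size are illustrative assumptions; the paper does not spell out the exact layer configuration.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """ConvGRU cell following the update equations above (sketch)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # One fused convolution produces both the update (z) and reset (r) gates.
        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=padding)
        # Convolution producing the candidate state.
        self.candidate = nn.Conv2d(2 * channels, channels, kernel_size, padding=padding)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)
        o = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return z * h + (1 - z) * o  # h_t = z (*) h_{t-1} + (1 - z) (*) candidate

# One step over a 64-channel feature map, matching the 64-channel FPN neck above.
cell = ConvGRUCell(64)
x = torch.randn(1, 64, 48, 48)
h = torch.zeros_like(x)
h = cell(x, h)
```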

The training loss function is defined as follows:

$$L = L_{cate} + \lambda L_{mask}$$

where $L_{cate}$ is the conventional focal loss for semantic category classification and $L_{mask}$ is the dice loss for mask prediction:

$$L_{mask} = \frac{1}{N_{pos}} \sum_{k} \mathbb{1}_{\{p^{*}_{k} > 0\}}\, d_{mask}\left(m_k, m^{*}_{k}\right)$$

$N_{pos}$ is the number of positive samples, $k$ is the index of the grid cells, and $p^{*}_{k}$ and $m^{*}_{k}$ represent the category and mask targets, respectively. $d_{mask}(\cdot,\cdot)$ is the dice loss, which is defined as:

$$d_{mask}(p, q) = 1 - \frac{2 \sum_{x,y} p_{x,y}\, q_{x,y}}{\sum_{x,y} p_{x,y}^{2} + \sum_{x,y} q_{x,y}^{2}}$$

where $p_{x,y}$ and $q_{x,y}$ refer to the value of the pixel located at $(x, y)$ in the predicted soft mask $p$ and the ground-truth mask $q$.
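The dice loss above can be written compactly in PyTorch; the shapes and the epsilon guard against empty masks are illustrative assumptions.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """d_mask(p, q) = 1 - 2*sum(p*q) / (sum(p^2) + sum(q^2)), averaged over the batch.

    pred: predicted soft masks in [0, 1], shape (N, H, W).
    target: binary ground-truth masks of the same shape.
    """
    p = pred.flatten(1)
    q = target.flatten(1)
    dice = 2 * (p * q).sum(dim=1) / (p.pow(2).sum(dim=1) + q.pow(2).sum(dim=1) + eps)
    return (1 - dice).mean()

# Example: a batch of two 64x64 soft masks against random binary targets.
pred = torch.rand(2, 64, 64)
target = (torch.rand(2, 64, 64) > 0.5).float()
print(dice_loss(pred, target))
```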

Our instance segmentation model is first trained on the COCO and VOC datasets, and the pre-trained model is then trained on the portrait segmentation dataset of Hisense. The COCO dataset provides 80 categories, and we only use the human samples to train our model; similarly, we only use the human samples in the VOC dataset. The portrait segmentation dataset of Hisense has 50,000 pictures in the form of video frames. The human subjects are captured at a distance of 1–5 meters from the camera, the annotated contours of the human objects are large and clear, and the pictures are 4K (3840×2160) resolution. The scenes are living rooms, bedrooms, study rooms, and offices; these scene types are similar and applicable to the video conference portrait segmentation model designed in this paper. We divide the dataset into 40,000/5,000/5,000 clips for the train/val/test splits. Using our instance segmentation model based on SOLO, we achieve 67 FPS at HD (1920×1080) and 59 FPS at 4K (3840×2160) on an Nvidia Tesla V100, while the accuracy of the model decreases to an extent that remains within the acceptable range.
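For reference, throughput numbers like these can be estimated with a simple timing loop; the sketch below assumes a CUDA device and a placeholder model, and the actual measurement details (batching, I/O, precision) are not specified in the paper.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model: torch.nn.Module, height: int = 1080, width: int = 1920,
                iters: int = 100, device: str = "cuda") -> float:
    """Rough single-stream throughput in frames per second (illustrative sketch)."""
    model = model.to(device).eval()
    x = torch.randn(1, 3, height, width, device=device)
    for _ in range(10):                 # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)

# Example with a trivial stand-in model (requires a CUDA device):
# print(measure_fps(torch.nn.Conv2d(3, 1, 3, padding=1)))
```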

We also use a human video matting method to extract human objects from the image background; unlike instance segmentation, this method cannot assign different labels to separate instances of objects belonging to the same class. Image matting is the process of predicting the alpha matte and foreground color from an input frame [34]. Formally, a frame $I$ can be viewed as the linear combination of a foreground $F$ and a background $B$ through an $\alpha$ coefficient:

$$I = \alpha F + (1 - \alpha) B$$

By extracting $\alpha$ and $F$, we can composite the foreground object onto a new background, achieving the background replacement effect.
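Given the predicted alpha matte and foreground, the compositing equation above is one line of array arithmetic; the array layouts and value ranges below are illustrative assumptions.

```python
import numpy as np

def composite(fgr: np.ndarray, alpha: np.ndarray, bgr: np.ndarray) -> np.ndarray:
    """I = alpha * F + (1 - alpha) * B.

    fgr, bgr: H x W x 3 float arrays in [0, 1]; alpha: H x W x 1 matte.
    """
    return alpha * fgr + (1.0 - alpha) * bgr

# Example: place an extracted participant over a solid green background.
h, w = 720, 1280
fgr = np.random.rand(h, w, 3).astype(np.float32)    # stand-in predicted foreground F
alpha = np.random.rand(h, w, 1).astype(np.float32)  # stand-in predicted alpha matte
green = np.zeros((h, w, 3), np.float32)
green[..., 1] = 1.0
frame = composite(fgr, alpha, green)
```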

Portrait matting aims to predict a precise alpha matte that can be used to extract people from a given image or video [35]. In the MRCS system, we use RVM [36] (Robust Video Matting) as the image matting algorithm. RVM is a robust, real-time, high-resolution human video matting method that achieves new state-of-the-art performance. We do not modify the network structure of the model, because simplifying the model structure does not improve the model's speed but does reduce its accuracy. Based on the RVM authors' pre-trained model, we train and fine-tune the model on our dataset (the portrait segmentation dataset of Hisense); since our dataset has large human targets, it is well suited to human subjects in a meeting room scene. Using the RVM algorithm trained on the portrait segmentation dataset of Hisense, we achieve 154 FPS at HD (1920×1080) and 117 FPS at 4K (3840×2160) on an Nvidia Tesla V100, which is considered real-time for a video conference and meets the matting frame rate requirement. To simulate the indoor scene of a video conference, our test results for indoor multi-target matting are shown in Fig. 5b, and Fig. 5a is the original input image.

Figure 5: The matting result of an indoor multi-target scene. (a) The original input image from the camera; (b) the matting result of the original input image
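The public RobustVideoMatting repository exposes a torch.hub entry point; the sketch below follows the usage documented there (the entry point name and call signature come from that repository and may change), with a random tensor standing in for a real HD frame.

```python
import torch

# Load the published RVM model via torch.hub, per the RobustVideoMatting repo.
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").eval()

rec = [None] * 4                       # recurrent states carried across frames
src = torch.rand(1, 3, 1080, 1920)     # placeholder HD frame, RGB in [0, 1]
with torch.no_grad():
    fgr, pha, *rec = model(src, *rec, downsample_ratio=0.25)
# fgr is the predicted foreground F and pha the alpha matte, which are
# composited onto a new background as in the equation above.
```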

Table 1 compares our modified lightweight method based on SOLO and our RVM model trained on the portrait segmentation dataset of Hisense against the original RVM at low resolution (1920×1080) and high resolution (3840×2160).

Table 1: The comparison between our modified models and the original model at low resolution and high resolution

Method        Resolution   mIOU   FPS
RVM           HD           86%    154
RVM           4K           84%    117
SOLO (Ours)   HD           89%    67
SOLO (Ours)   4K           90%    59
RVM (Ours)    HD           92%    151
RVM (Ours)    4K           90%    112

Human-computer interaction uses voice and gestures as interactive commands to control PowerPoint presentations and video playback. Since we use the commercial mixed reality device Microsoft Hololens2 [37] to implement the system presentation layer, speech recognition in human-computer interaction is implemented with the Microsoft Azure Natural Language Understanding Model (LUIS) [38]. By adding custom command keywords to LUIS applications, we build application modules that understand natural language, identify user intentions, extract key information from conversational phrases, and invoke the corresponding methods to perform the requested operations. Since we use Unity3D as the development and design tool, gesture recognition is implemented with Unity's high-level composite gesture API (GestureRecognizer) to recognize the user's spatial input gestures [39]. Only a few steps are required to capture gestures with a Unity3D gesture recognizer: create a new GestureRecognizer object, specify which gestures to watch for, subscribe to the events for those gestures, and start capturing gestures. In addition to the grab, click, and gaze gestures that come with the Hololens2 system, we have designed operation gestures including left swipe, right swipe, pull up, pull down, and so on.

    3.5 Design of Presentation Layer

The presentation layer is the critical part of the MRCS system; it directly affects interactivity and immersion and determines the user experience. We use mixed reality technology to provide users with an immersive experience during the meeting. The mixed reality device used in the MRCS system is the Microsoft Hololens2, a pair of mixed reality smartglasses developed and manufactured by Microsoft and widely used in manufacturing, engineering, healthcare, and education [40]. The Hololens2 combines waveguide and laser-based stereoscopic full-color display technology [41] and features an inertial measurement unit (IMU) (which includes an accelerometer, gyroscope, and magnetometer), four "environment understanding" sensors for spatial tracking, four visible light cameras for head tracking, two infrared (IR) cameras for eye tracking, a 1-MP time-of-flight (ToF) depth sensor, an 8-megapixel photographic video camera, a four-microphone array, and an ambient light sensor [42]. Moreover, the Hololens2 has a diagonal field of view (FOV) of 52 degrees, improving on the 34-degree field of view of the first edition of the Hololens. In addition to the Qualcomm Snapdragon 850 SoC containing the CPU and GPU, the Hololens2 features a custom-made Microsoft Holographic Processing Unit (HPU) [43]. There are other mature commercial AR or MR smartglasses, such as Google Glass [44], HTC VIVE [45], and Meta2 [46]. Google Glass is a lightweight pair of smart glasses (an optical head-mounted display shaped like a pair of glasses) that can only display digital information in a small area. Meta2 and HTC VIVE are similar to the Hololens, but the Hololens far exceeds them in combining real and virtual information, and it is more convenient for engineers to develop for. Taking all this into consideration, we chose the Microsoft Hololens2 as the MRCS system's mixed reality hardware device, mainly for two reasons: on the one hand, the Hololens is one of the best mixed reality smartglasses available at present; on the other hand, it runs the Windows Mixed Reality platform on the Windows 10 operating system and can be developed with Unity3D and Visual Studio, so it is well suited to developing our MRCS system. The top and side views of the Microsoft Hololens2 are shown in Fig. 6.

Figure 6: Top and side views of the Microsoft Hololens2

According to the MRCS system's functional requirements, before establishing a meeting room, the user must enter the unique number of the meeting room, so that only users who enter the same number can enter the same mixed reality scene. The software interface is divided into a two-dimensional interface, which implements the user login function and system input device selection, and a three-dimensional interface, which is used for the mixed reality presentation after entering the meeting room. The interface is designed following the principles of conciseness, complete functionality, and ease of operation and learning, which makes it convenient for users to get started.

The primary function of the initial two-dimensional interface is to guide users to create and join a mixed reality conference room, as shown in Fig. 7. This interface is displayed on the user's computer screen and has two states: the initial state and the conference state. In the initial state, the user has not yet entered the meeting room, so the user needs to enter the room number and server address on this interface and then click the "Join" button to enter the designated meeting room. At the same time, the device drop-down button selects the devices used to collect video and audio data. After entering the meeting room, the interface switches to the conference state.

    Figure 7:The two-dimensional interface of the conference system

At the same time, as shown in Fig. 8, we have also implemented the mixed reality login window interface, which allows users to join an MRCS mixed reality meeting completely away from their personal computers. The mixed reality interface has all the same functions as the two-dimensional interface in Fig. 7.

The mixed reality device used in the MRCS system is the Microsoft Hololens2, and its application is developed with Unity3D and MRTK [47] (Microsoft's Mixed Reality Toolkit). Mixed Reality WebRTC is used to realize peer-to-peer real-time audio and video communication, and the MRCS application on the PC is developed with Qt5.

After the user successfully creates or joins a room, he enters the working interface and can put on the mixed reality device. In this interface, users can experience the conference room functions of the MRCS system.

    Figure 8:Mixed reality interface of the conference system

In the remote meeting scene, as shown in Fig. 2, the camera collects a real-time video stream of the remote meeting participants, encodes the video stream from RGB to H.264, and pushes it to the network layer server. The cloud server pulls the video stream and decodes it, and the video is processed by RVM or the instance segmentation model. The MRCS system performs portrait matting to obtain the alpha matte and the foreground image, uses the alpha values and the foreground image to produce an accurate matted video stream of the remote participants' portraits, and encodes the video stream from RGB to H.264 on the cloud server again.

The mixed reality device pulls the matted video stream from the network layer server in real time and synchronously displays the human image constructed by the local mixed reality device in the conference scene. The final display of the MRCS system is shown in Fig. 10. The mixed reality design has three main parts: the display of human subjects in the mixed reality device, the live sharing of 3D models in the mixed reality device, and human-computer interaction in the mixed reality device.

As shown in Fig. 3, each mixed reality device (Microsoft Hololens2) can pull the video and audio streams from the server in real time through the WebRTC Mixer structure. First, the Hololens2 obtains video and audio streams through local media devices; second, it establishes a WebRTC P2P connection with the remote cloud server; then it starts or closes a session with signaling communication; finally, the Hololens2 and the cloud server exchange video and audio streams. Note that the video stream the Hololens2 pulls from the cloud server contains only the final predicted foreground, that is, the human subjects, as shown in Fig. 9d. We add a green background to the final prediction result for ease of display in the paper. The system then takes the ground and the Hololens2 as the coordinate origin to establish the world coordinate system and displays the human subject video stream at fixed coordinates, by default 2 meters in front of the Hololens. Moreover, the location and size of the human subjects can easily be changed through voice and gesture commands by the user wearing the Hololens.

Figure 9: (a) The input frame collected from the camera. Through portrait matting, the foreground color and alpha matte are obtained, shown in (b) and (c). (d) The final prediction result with a green background

Figure 10: The final display result of the MRCS conference system; the person in the image is the remote conference participant, and the blue background window is the conference presentation object

For the live sharing of 3D models, we build a multi-user experience using Photon Unity Networking [48] (PUN), one of several networking options available to mixed reality developers for creating shared experiences [49]. First, we set up Photon Unity Networking to get a PUN App ID; second, by setting the same PUN App ID, we connect the Unity project (the Hololens project) to the PUN application to connect multiple devices; third, we configure PUN to instantiate the 3D models that need to be shared live. Finally, we integrate Azure Spatial Anchors into the shared experience for multi-device alignment. All participants of the MRCS conference system can collaborate, view each other's interactions, and see the 3D models move when other users move them.

Human-computer interaction uses voice and gestures as interactive commands to control PowerPoint presentations, 3D models, and video playback. We use the Microsoft Azure Natural Language Understanding Model (LUIS) to train and recognize speech commands. Adding custom command keywords to LUIS applications builds application modules that understand natural language, identify user intentions, extract key information from conversational phrases, and invoke the corresponding methods to perform the requested operations. Meanwhile, based on the Hololens system gestures, we customize MRCS system gestures through Unity's high-level composite gesture API.

    4 System Evaluation

This section tests and evaluates the MRCS conference system based on mixed reality technology. The system performance evaluation primarily focuses on real-time performance, image quality, and the render frame rate while the MRCS system is running. The hardware devices used in the MRCS system consist of a monocular camera, an audio input device, a client host, a server host, a mixed reality device, and network devices. The specific devices and parameters are shown in Table 2.

Table 2: Hardware configuration of the MRCS system

Through the analysis of the system, the time delay of MRCS mainly consists of five parts: processing delay, algorithm delay, network delay, transmission delay, and buffer delay. Processing delay refers to the time required by the video and audio codec algorithms, which equals the complexity of the codec algorithm divided by the execution speed of the hardware encoder and multiplied by the video length. Algorithm delay refers to the time consumed by the segmentation of human image instances in the video, which was discussed in Section 3.4. Network delay is the time taken for the signal to reach its destination through the physical transmission medium, which depends on the quality of the network connection and the communication distance. Transmission delay refers to the time required to transmit the smallest decodable unit of the video signal. Buffer delay is caused by the storage of real-time data and the unpredictability of audio and video arrival times. Because of the WebRTC framework used in the system, it is not convenient to measure the five sub-delays separately, so we measure the overall time consumption of the MRCS system. First, Hololens A sends data to Hololens B and records the sending time $t_1$. Then Hololens B returns the data after receiving it, and Hololens A records the receiving time $t_2$ after receiving the complete returned data. The overall transmission delay $RTT$ can then be obtained with the following formula:

$$RTT = t_2 - t_1$$

We repeat the experiment 1000 times and obtain an average system time delay ($RTT$) of 537 milliseconds.
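A minimal sketch of this measurement against a simple UDP echo endpoint; the host, port, and payload are placeholders, and MRCS actually measures the round trip over its WebRTC path between two Hololens devices.

```python
import socket
import time

def measure_rtt(host: str, port: int, payload: bytes = b"ping",
                timeout: float = 2.0) -> float:
    """Round-trip time in milliseconds via an echo endpoint: RTT = t2 - t1."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        t1 = time.perf_counter()
        sock.sendto(payload, (host, port))
        sock.recvfrom(4096)            # wait for the echoed data
        t2 = time.perf_counter()
    finally:
        sock.close()
    return (t2 - t1) * 1000.0

# Averaging over 1000 trials, as in the evaluation:
# rtts = [measure_rtt("server.example", 9999) for _ in range(1000)]
# print(sum(rtts) / len(rtts))
```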

Image quality includes two parts: the quality of the portrait instance segmentation superimposed on the physical environment background, and the quality of the PowerPoint slides or 3D models to be displayed. As shown in Figs. 9 and 10, the portrait result of instance segmentation is almost the same as the original except at the edge of the portrait; because of the instance segmentation algorithm, the edge of the portrait is slightly blurred or unclear. Because the PowerPoint slides and 3D models are built with Photon Unity Networking (PUN), they are no different from the originals.

For the render frame rate, we designed a program to monitor it in real time. When the program is running, there is no perceptible delay for the user wearing the HoloLens. As shown in Fig. 11, the render frame rate of the system is stable at 60 fps while MRCS is running; there is a noticeable drop when 3D models are manipulated, but the drop remains within the acceptable range. Fig. 12 shows the field of view of the Hololens while MRCS is running, and the left and right views are consistent with the head movement.

Figure 11: The MRCS system frames per second

Figure 12: Field of view test for the Hololens while MRCS is running

    5 Conclusions

In this paper, a network video conference system based on mixed reality and portrait matting is designed and implemented to improve the authenticity and immersion of traditional video conferences and make up for their shortcomings. The system realizes the functions of a traditional video conference from a higher dimension, allows users to participate in a conference in a new way, and is a breakthrough in the application of mixed reality technology. With the development of 5G communication technology, the information carried on the network will inevitably be upgraded to a higher dimension, not only simple audio and video but also integrated three-dimensional information, which will finally be presented to people through mixed reality technology. Starting from the technical implementation of a mixed reality application, this paper discusses the design of a mixed reality video conference based on portrait matting, audio and video transmission based on traditional video coding and decoding technology, and human-computer interaction in mixed reality, and finally realizes the mixed reality remote video conference system MRCS.

The MRCS system implements a remote video conference system in a mixed reality scene based on the Microsoft Hololens2, realizes the integration of virtual and real scenes, and significantly improves immersion and interactivity during the conference. The system combines the AI instance segmentation algorithm with mixed reality technology and "moves" the remote conference participants into the current physical environment, which significantly improves the authenticity and immersion of the conference. The MRCS system also builds a multi-user live shared experience using Photon Unity Networking (PUN) on the Hololens2, allowing users to share the movements of 3D models so that all MRCS conference participants can collaborate and view each other's interactions. However, the MRCS system has four drawbacks due to hardware and software limitations. First, there is a system time delay in the MRCS system due to network transmission, video and audio codecs, and portrait instance segmentation. Second, the camera used in the system is monocular, so the system cannot carry out an effective 3D reconstruction of the human subjects; MRCS can later be improved by using a binocular camera and a depth sensor. Third, when the human body moves quickly, the edges of the human subjects obtained by the image matting algorithm are blurred because of the portrait instance segmentation algorithm. Finally, the mixed reality device used in the system, the Microsoft Hololens2, is relatively bulky and heavy for long wearing sessions, and its field of view (FOV) of only 52 degrees may affect the user experience; but since the Hololens2 is the most advanced mixed reality smartglasses available, there is currently no better choice of mixed reality hardware. For future work, we will develop instance segmentation algorithms that improve the quality of human image matting, use a binocular camera and depth sensor to obtain depth information for three-dimensional reconstruction, and explore mixed reality hardware devices that improve the user experience.

Data Availability: The data and codes to support the findings of this study are available from the corresponding author upon request.

Funding Statement: The authors would like to acknowledge the support of the State Key Laboratory of Digital Multi-Media Technology of Hisense and the School of Information Science and Engineering of Shandong University. This work was supported in part by the Major Fundamental Research of Natural Science Foundation of Shandong Province under Grant ZR2019ZD05; the Joint Fund for Smart Computing of Shandong Natural Science Foundation under Grant ZR2020LZH013; the Open Project of the State Key Laboratory of Computer Architecture under Grant CARCHA202002; and the Human Video Matting Project of Hisense Co., Ltd. under Grant QD1170020023.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
