
Advancing Wound Filling Extraction on 3D Faces: An Auto-Segmentation and Wound Face Regeneration Approach


Duong Q. Nguyen, Thinh D. Le, Phuong D. Nguyen, Nga T.K. Le and H. Nguyen-Xuan*

1 Department of Mathematics and Statistics, Quy Nhon University, Quy Nhon City, 55100, Viet Nam

2 Applied Research Institute for Science and Technology, Quy Nhon University, Quy Nhon City, 55100, Viet Nam

3 CIRTECH Institute, HUTECH University, Ho Chi Minh City, 72308, Viet Nam

ABSTRACT Facial wound segmentation plays a crucial role in preoperative planning and optimizing patient outcomes in various medical applications. In this paper, we propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR dataset and addresses the challenge of data imbalance through extensive experimentation with different loss functions. To achieve accurate segmentation, we conducted thorough experiments and selected a high-performing model from the trained models. The selected model demonstrates exceptional segmentation performance for complex 3D facial wounds. Furthermore, based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers and compare it to the results of the previous study. Our method achieved a remarkable accuracy of 0.9999993 on the test suite, surpassing the performance of the previous method. From this result, we use 3D printing technology to illustrate the shape of the wound filling. The outcomes of this study have significant implications for physicians involved in preoperative planning and intervention design. By automating facial wound segmentation and improving the accuracy of wound-filling extraction, our approach can assist in carefully assessing and optimizing interventions, leading to enhanced patient outcomes. Additionally, it contributes to advancing facial reconstruction techniques by utilizing machine learning and 3D bioprinting for printing skin tissue implants. Our source code is available at https://github.com/SIMOGroup/WoundFilling3D.

KEYWORDS 3D printing technology; face reconstruction; 3D segmentation; 3D printed model

    1 Introduction

Nowadays, people are injured by traffic accidents, occupational accidents, birth defects, and diseases that cause them to lose a part of their body. Among these, defects of the head and face account for a relatively high rate [1]. Wound regeneration is an important aspect of medical care, aimed at restoring damaged tissues and promoting wound healing in patients with complex wounds [2]. However, the treatment of craniofacial and facial defects can be challenging due to the many specific requirements of the tissue and the complexity of the anatomical structure of that region [3]. Traditional methods used for wound reconstruction often involve grafting techniques using autologous grafts (from the patient's own body) or allogeneic grafts (from a donor) [4]. However, these methods have limitations such as availability, donor morbidity, and potential for rejection. In recent years, the development of additive manufacturing technology has promoted the creation of advanced techniques in several healthcare industries [5–7]. The implementation of 3D printing technology in the preoperative phase enables clinicians to establish a meticulous surgical strategy by generating an anatomical model that accurately reflects the patient's unique anatomy. This approach facilitates the development of customized drilling and cutting instructions, precisely tailored to the patient's specific anatomical features, thereby accommodating the potential incorporation of a pre-formed implant [8]. Moreover, the integration of 3D printing technology and biomaterials plays a crucial role in advancing remedies within the field of regenerative medicine, addressing the pressing demand for novel therapeutic modalities [9–12]. The significance of wound reconstruction using 3D bioprinting in the domain of regenerative medicine is underscored by several key highlights, as outlined below:

- Customization and Precision: 3D bioprinting allows for the creation of patient-specific constructs, tailored to match the individual's wound geometry and requirements. This level of customization ensures a better fit and promotes improved healing outcomes.

- Tissue Regeneration: The ability to fabricate living tissues using 3D bioprinting holds great promise for wound reconstruction. The technique enables the deposition of cells and growth factors in a controlled manner, facilitating tissue regeneration and functional restoration [13,14].

- Reduced Donor Dependency: The scarcity of donor tissues and the associated risks of graft rejection are significant challenges in traditional wound reconstruction methods. 3D bioprinting can alleviate these limitations by providing an alternative approach that relies on the patient's own cells or bioinks derived from natural or synthetic sources [15].

- Complex Wound Healing: Certain wounds, such as large burns, chronic ulcers, or extensive tissue loss, pose significant challenges to conventional wound reconstruction methods. 3D bioprinting offers the potential to address these complex wound scenarios by creating intricate tissue architectures that closely resemble native tissues.

- Accelerated Healing: By precisely designing the structural and cellular components of the printed constructs, 3D bioprinting can potentially enhance the healing process. This technology can incorporate growth factors, bioactive molecules, and other therapeutic agents, creating an environment that stimulates tissue regeneration and accelerates wound healing [16].

Consequently, 3D bioprinting technology presents a promising avenue for enhancing craniofacial reconstruction modalities in individuals afflicted by head trauma.

Wound dimensions, including length, width, and depth, are crucial parameters for assessing wound healing progress and guiding appropriate treatment interventions [17]. For effective facial reconstruction, measuring the dimensions of a wound accurately can pose significant challenges in clinical and scientific settings [18]. Firstly, wound irregularity presents a common obstacle. Wounds rarely exhibit regular shapes; they are often characterized by uneven edges, irregular contours, or irregular surfaces. Such irregularity complicates defining clear boundaries and determining consistent reference points for measurement. Secondly, wound depth measurement proves challenging due to undermined tissue or tunnels. These features, commonly found in chronic or complex wounds, can extend beneath the surface, making it difficult to assess the wound's true depth accurately. Furthermore, the presence of necrotic tissue or excessive exudate can obscure the wound bed, further hindering depth measurement. Additionally, wound moisture and fluid dynamics pose significant difficulties. Wound exudate, which may vary in viscosity and volume, can accumulate and distort measurements. Excessive moisture or the presence of dressing materials can alter the wound's appearance, potentially leading to inaccurate measurements. Moreover, the lack of standardization in wound measurement techniques and tools adds to the complexity.

Currently, deep learning has emerged as a predominant technique for wound image segmentation, as well as for various other applications in medical imaging and computer vision [19–21]. Based on the characteristics of the input data [22,23], three deep learning methods are used for segmentation and wound measurement, as shown in Fig. 1. The study of Anisuzzaman et al. [23] presented case studies of these three methods. The methods used to segment the wound based on the characteristics of the input data are as follows:

- 2D image segmentation: Deep learning methods in 2D for wound segmentation offer several advantages. Firstly, they are a well-established and widely used technique in the field. Additionally, large annotated 2D wound segmentation datasets are available, facilitating model training and evaluation. These methods exhibit efficient computational processing compared to their 3D counterparts, enabling faster inference times and improved scalability. Furthermore, deep learning architectures, such as convolutional neural networks, can be leveraged for effective feature extraction, enhancing the accuracy of segmentation results. However, certain disadvantages are associated with deep learning methods in 2D for wound segmentation. One limitation is the lack of depth information, which can restrict segmentation accuracy, particularly for complex wounds with intricate shapes and depth variations. Additionally, capturing the wound's full spatial context and shape information can be challenging in 2D, as depth cues are not explicitly available. Furthermore, these methods are susceptible to variations in lighting conditions, image quality, and perspectives, which can introduce noise and affect the segmentation performance.

- 2D to 3D reconstruction: By incorporating depth information, the conversion to 3D enables a better capture of wounds' shape and spatial characteristics, facilitating a more comprehensive analysis. Moreover, there is potential for improved segmentation accuracy compared to 2D methods, as the additional dimension can provide richer information for delineating complex wound boundaries. Nevertheless, certain disadvantages are associated with converting from 2D to 3D for wound segmentation. The conversion process itself may introduce artifacts and distortions in the resulting 3D representation, which can impact the accuracy of the segmentation. Additionally, this approach necessitates additional computational resources and time due to the complexity of converting 2D data into a 3D representation [24]. Furthermore, the converted 3D method may not completely overcome the limitations of the 2D method.

- 3D mesh or point cloud segmentation: Directly extracting wound segmentation from 3D data (mesh/point cloud) offers several advantages. One notable advantage is the retention of complete 3D information on the wound, enabling accurate and precise segmentation. By working directly with the 3D data, this method effectively captures the wound's intricate shape, volume, and depth details, surpassing the capabilities of both 2D approaches and converted 3D methods. Furthermore, the direct utilization of 3D data allows for a comprehensive analysis of the wound's spatial characteristics, facilitating a deeper understanding of its structure and morphology.

Figure 1: Methods of using deep learning in wound measurement by segmentation

Hence, employing a 3D (mesh or point cloud) segmentation method on specialized 3D data, such as data obtained from 3D scanners or depth sensors, can significantly improve accuracy compared to the other two methods. The use of specialized 3D imaging technologies enables the capture of shape, volume, and depth details with higher fidelity and accuracy [25]. Consequently, the segmentation results obtained from this method are expected to provide a more precise delineation of wound boundaries and a more accurate assessment of wound characteristics. Therefore, this method can enhance wound segmentation accuracy and advance wound assessment techniques.

Besides, facial wounds and defects present unique challenges in reconstructive surgery, requiring accurate localization of the wound and precise estimation of the defect area [26]. The advent of 3D imaging technologies has revolutionized the field, enabling detailed capture of facial structures. However, reconstructing a complete face from a 3D model with a wound remains a complex task that demands advanced computational methods. Accurately reconstructing facial defects is crucial for surgical planning, as it provides essential information for appropriate interventions and enhances patient outcomes [27]. Among prominent studies, Sutradhar et al. [28] utilized a unique approach based on topology optimization to create patient-specific craniofacial implants using 3D printing technology, and Nuseir et al. [29] proposed direct 3D printing for the fabrication of a pliable nasal prosthesis, accompanied by an optimized digital workflow spanning from the scanning process to the achievement of an appropriate fit; other prominent work is presented in surveys such as [30,31]. However, these methods often require a lot of manual intervention and are prone to subjectivity and variability. To solve this problem, the method proposed in [32,33] leverages the power of modeling [34] to automate the process of 3D facial reconstruction with wounds, minimizing human error and improving efficiency. To extract the filling for the wound, the study [32] proposed a method using the reconstructed 3D face and the 3D face of the patient without the wound. This method is called "outlier extraction" by the authors. These advancements can be leveraged to expedite surgical procedures, enhance precision, and augment patient outcomes, thereby propelling the progression of technology-driven studies on facial tissue reconstruction, particularly in bio 3D printing. However, this method still has some limitations, as follows:

- The method of extracting filler for the wound after 3D facial reconstruction has not yet reached high accuracy.

- In order to extract the wound filling, the method proposed by [32] necessitated the availability of the patient's pre-injury 3D facial ground truth. This requirement represents a significant limitation of the proposed wound filling extraction approach, as obtaining the patient's pre-injury 3D facial data is challenging in real-world clinical settings.

To overcome these limitations, the present study aims to address the following objectives:

- Train an automatic 3D facial wound segmentation model using a variety of appropriate loss functions to solve the data imbalance problem.

- Propose an efficient approach to extract the 3D facial wound filling by leveraging the face regeneration model of the study [32] combined with the wound segmentation model.

- Evaluate the experimental results of our proposed method and the method described in the study by Nguyen et al. [32]. One case study will be selected to be illustrated through 3D printing.

    2 Methodology

Research reported by Nguyen et al. [32] has proposed a method to extract the filling for the wound for 3D face reconstruction. However, as we analyzed in Section 1, study [32] still has certain limitations. To address those limitations, we propose a unique approach to 3D face reconstruction that combines it with segmentation on injured 3D face data. This section introduces the structure of the 3D segmentation model and presents our proposed method.

    2.1 Architecture of Two-Stream Graph Convolutional Network

Recent years have witnessed remarkable advancements in deep learning research within the domain of 3D shape analysis, as highlighted by Ioannidou et al. [35]. This progress has catalyzed the investigation of translation-invariant geometric attributes extracted from mesh data, facilitating the precise labeling of vertices or cells on 3D surfaces. Along with the development of 3D shape analysis, the field of 3D segmentation has advanced tremendously and brought about many applications across various fields, including computer vision and medical imaging [36]. Geometrically grounded approaches typically leverage pre-defined geometric attributes, such as 3D coordinates, normal vectors, and curvatures, to differentiate between distinct mesh cells. Several noteworthy models have emerged, including PointNet [37], PointNet++ [38], PointCNN [39], MeshSegNet [40], and DGCNN [41]. While these methods have demonstrated efficiency, they often employ a straightforward strategy of concatenating diverse raw attributes into an input vector for training a single segmentation network. Consequently, this strategy can generate isolated erroneous predictions. The root cause lies in the inherent dissimilarity between various raw attributes, such as cell spatial positions (coordinates) and cell morphological structures (normal vectors), which leads to confusion when they are merged as input. Therefore, the seamless fusion of their complementary insights for acquiring comprehensive high-level multi-view representations faces hindrance. Furthermore, the use of low-level predetermined attributes in these geometry-centric techniques is susceptible to significant variations. To address this challenge, the two-stream graph convolutional network (TSGCNet) [42] for 3D segmentation emerges as an exceptional technique, showcasing outstanding performance and potential in the field. This network harnesses the powerful geometric features available in the mesh to execute segmentation tasks. Consequently, in this study, we have selected this model as the focal point to investigate its applicability and effectiveness in the context of our research objectives. In [42], the proposed methodology employs two parallel streams, namely the C-stream and the N-stream. TSGCNet incorporates input-specific graph-learning layers to extract high-level geometric representations from the coordinates and normal vectors. Subsequently, the features obtained from these two complementary streams are fused in the feature-fusion branch to facilitate the acquisition of discriminative multi-view representations, specifically for segmentation purposes. An overview of the architecture of the two-stream graph convolutional network is shown in Fig. 2.

The C-stream is designed to capture the essential topological characteristics derived from the coordinates of all vertices of a mesh. The C-stream receives an input denoted as $\mathbf{F}_c^0$, an $M \times 12$ matrix representing the coordinates ($M$ is the number of mesh cells). Each row of this matrix represents a cell, and the columns correspond to its coordinates in three-dimensional space. This stream incorporates an input-transformer module to align the input data with a canonical space. This module comprises shared Multilayer Perceptrons (MLPs) across nodes, as previously described by Charles et al. [37]. The C-stream progressively integrates a consecutive set of graph-attention layers along the forward path to systematically exploit multi-scale geometric attributes derived from the coordinates of the mesh. While the C-stream can capture general geometric information, it lacks the sensitivity to distinguish subtle boundaries between adjacent nodes of different classes (e.g., the boundary between injured and non-injured areas). To overcome this limitation, the N-stream is designed to extract boundary representations based on the normal vectors associated with the nodes. Unlike the C-stream, the N-stream uses graph max-pooling layers instead of graph-attention layers: because the normal vectors carry purely geometric orientation information, distinct from the coordinates of the nodes, max-pooling is the preferred aggregation in this stream.
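To make the stream inputs concrete, the following is a minimal sketch of how the per-cell inputs could be assembled from a mesh, assuming the trimesh library. The 12-dimensional layout (three vertex positions plus the cell centroid, and the matching normals) is our reading of the TSGCNet convention rather than something stated in this paper, and the file path is illustrative.

```python
# Sketch only: builds per-cell C-stream and N-stream inputs from a mesh.
# Assumes trimesh; "face.obj" and the 12-dim layout are illustrative.
import numpy as np
import trimesh

mesh = trimesh.load("face.obj")                 # hypothetical mesh file
tri = mesh.triangles                            # (M, 3, 3) vertex coordinates
centroid = tri.mean(axis=1, keepdims=True)      # (M, 1, 3) cell centroids
f_c0 = np.concatenate([tri, centroid], axis=1).reshape(-1, 12)   # M x 12, C-stream

vert_normals = mesh.vertex_normals[mesh.faces]  # (M, 3, 3) per-vertex normals
cell_normals = mesh.face_normals[:, None, :]    # (M, 1, 3) per-cell normal
f_n0 = np.concatenate([vert_normals, cell_normals], axis=1).reshape(-1, 12)  # N-stream
```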

Figure 2: Architectural overview of the TSGCNet model for segmentation on injured 3D face data

The TSGCNet model employs three layers in each stream to extract features. Subsequently, the multi-scale features from these layers are concatenated and combined as follows:

$$\mathbf{F}_c = \mathrm{MLP}\left(\left[\mathbf{F}_c^1, \mathbf{F}_c^2, \mathbf{F}_c^3\right]\right), \qquad \mathbf{F}_n = \mathrm{MLP}\left(\left[\mathbf{F}_n^1, \mathbf{F}_n^2, \mathbf{F}_n^3\right]\right)$$

In order for the model to comprehensively understand the 3D mesh structure, Zhang et al. [42] combined $\mathbf{F}_c$ and $\mathbf{F}_n$ in the feature-fusion branch, which can be expressed as:

$$\mathbf{P} = \mathrm{softmax}\left(\mathrm{MLP}\left(\left[\mathbf{F}_c, \mathbf{F}_n\right]\right)\right)$$

where $\mathbf{P}$ represents the $M \times C$ probability matrix. Each row denotes the probabilities of a specific cell belonging to the $C$ different classes.
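The fusion branch can be pictured with a short PyTorch sketch; the layer widths and two-class output below are illustrative assumptions, not the exact TSGCNet configuration.

```python
# Sketch of the feature-fusion head: concatenate the two stream features
# cell-wise, pass them through a shared MLP, and apply softmax to obtain P.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, c_dim: int, n_dim: int, num_classes: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(c_dim + n_dim, 256), nn.ReLU(),  # widths are illustrative
            nn.Linear(256, num_classes),
        )

    def forward(self, f_c: torch.Tensor, f_n: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([f_c, f_n], dim=-1)          # (M, c_dim + n_dim)
        return torch.softmax(self.mlp(fused), dim=-1)  # (M, C) probabilities P
```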

    2.2 Filling Extraction

We utilize the TSGCNet model, as presented in study [42], to perform segmentation of the wound area on the patient's 3D face. This model demonstrates a remarkable capacity for accurately discriminating boundaries between regions harboring distinct classes. Our dataset comprises two distinct classes, namely facial abnormalities and normal regions. Due to the significantly smaller proportion of facial wounds compared to the normal area, an appropriate training strategy is necessary to address the data imbalance phenomenon effectively. To address this challenge, we utilize loss functions that effectively handle data imbalance within the semantic segmentation task: focal loss [43], dice loss [44], cross-entropy loss, and weighted cross-entropy loss [45].

1) Focal loss is defined as:

$$L_{FL} = -\alpha_t \left(1 - p_t\right)^{\gamma} \log\left(p_t\right)$$

where $p_t$ represents the predicted probability of the true class; $\alpha_t$ is the balancing factor that assigns different weights to different classes; $\gamma$ is the focusing parameter that modulates the rate at which easy and hard examples are emphasized. Focal loss effectively reduces the loss contribution from well-classified examples and focuses on samples that are difficult to classify correctly. This helps handle class imbalance and improves the model's performance on minority classes.

2) Dice loss, also known as the Sørensen-Dice coefficient, is defined as:

$$L_{Dice} = 1 - \frac{2\sum_{i=1}^{N} p_i y_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} y_i + \epsilon}$$

where $p$ represents the predicted probability or output of the model; $y$ is the ground truth or target labels; $N$ is the number of elements in the predicted and ground truth vectors; $\epsilon$ is a small constant added to the denominator to avoid division by zero.

3) Cross-entropy segmentation loss is defined as:

$$L_{CE} = -\frac{1}{M}\sum_{i=1}^{M}\sum_{c=1}^{C} y_{ic} \log\left(p_{ic}\right)$$

where $y_{ic}$ denotes the ground truth label for the $i$-th sample and $c$-th class; $p_{ic}$ represents the predicted probability for the $i$-th sample and $c$-th class; $M$ is the total number of samples; $C$ is the number of classes.

4) Weighted cross-entropy loss is as follows:

$$L_{WCE} = -\frac{1}{M}\sum_{i=1}^{M}\sum_{c=1}^{C} w_c\, y_{ic} \log\left(p_{ic}\right)$$

where $w_c$ represents the weight assigned to each point based on its class.
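As a concrete reference, the following PyTorch sketch implements the four losses above for per-cell logits. It is a minimal rendering of the formulas, not the authors' training code; in particular, the scalar `alpha` simplifies the per-class balancing factor $\alpha_t$.

```python
# Sketch of the four losses for (M, C) logits and (M,) integer labels.
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    ce = F.cross_entropy(logits, target, reduction="none")  # -log p_t per cell
    p_t = torch.exp(-ce)
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()       # scalar alpha ~ alpha_t

def dice_loss(logits, target, eps=1e-6):
    p = torch.softmax(logits, dim=-1)
    y = F.one_hot(target, num_classes=logits.shape[-1]).float()
    return 1.0 - (2.0 * (p * y).sum()) / (p.sum() + y.sum() + eps)

def cross_entropy(logits, target):
    return F.cross_entropy(logits, target)

def weighted_cross_entropy(logits, target, class_weights):
    # class_weights: (C,) tensor, e.g. inverse class frequencies for imbalance.
    return F.cross_entropy(logits, target, weight=class_weights)
```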

After identifying the optimal wound segmentation model, we proceed to extract the mesh containing the area that needs filling on the 3D face. Let $v(x, y, z) \in \mathcal{M}(V, F)$ denote a vertex of the mesh containing the wound, where $V$ and $F$ are the sets of vertices and faces of the mesh, respectively. By leveraging the 3D facial regeneration model for wound treatment trained in [32] ($G$), along with a 3D facial wound segmentation model ($S$), we can extract a mesh denoted as $M_{extracted}$, which contains the region to be filled for face reconstruction. Specifically, we utilize the results of 3D facial wound segmentation to extract the coordinates and face indices of the damaged area on the mesh. Subsequently, we create a mesh ($M_{seg}$) that encompasses the injured area on the 3D face based on the mesh segmentation. Concurrently, we extract the surface of the damaged area on the 3D face ($M_{surface}$), reconstructed from the model presented in the study [32]. Finally, we obtain the wound filling mesh on the 3D face by combining the meshes $M_{seg}$ and $M_{surface}$ into a single watertight mesh, denoted as $M_{extracted}$. Our proposal is described in detail in Algorithm 2 and is illustrated in Figs. 3 and 4.
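The combination step can be sketched with the trimesh library as follows; the function name, the wound-label convention, and the hole-filling call are our assumptions about one way to realize Algorithm 2, not the authors' exact implementation.

```python
# Sketch of combining M_seg and M_surface into one watertight filling mesh.
# Assumes trimesh; `cell_labels` is a hypothetical per-cell segmentation output.
import numpy as np
import trimesh

def extract_filling(wound_mesh: trimesh.Trimesh,
                    recon_mesh: trimesh.Trimesh,
                    cell_labels: np.ndarray) -> trimesh.Trimesh:
    wound_faces = np.where(cell_labels == 1)[0]   # cells predicted as "wound"

    # M_seg: the injured region cut out of the patient's (wounded) face mesh.
    m_seg = wound_mesh.submesh([wound_faces], append=True)

    # M_surface: the same region on the face regenerated by the model of [32].
    # Cir3D-FaIR meshes share a consistent vertex order, so the same face
    # indices select the corresponding surface patch.
    m_surface = recon_mesh.submesh([wound_faces], append=True)

    # M_extracted: merge the two open patches and stitch them into a single
    # watertight mesh representing the volume to be filled / 3D printed.
    m_extracted = trimesh.util.concatenate([m_seg, m_surface])
    trimesh.repair.fill_holes(m_extracted)
    return m_extracted
```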

Figure 3: Several filling extraction results

Figure 4: An illustration of the wound-filling extraction algorithm

    3 Experimental Results

    3.1 Dataset Description

We utilize a dataset of 3D faces with craniofacial injuries called Cir3D-FaIR [32]. The dataset used in this study is generated through simulations within a virtual environment, replicating realistic facial wound locations. A set of 3,678 3D mesh representations of uninjured human faces is employed to simulate facial wounds. Specifically, each face in the dataset is simulated with ten distinct wound locations. Consequently, the dataset comprises 40,458 human head meshes, encompassing uninjured faces and wounds in various positions. In practice, the acquired data undergoes mesh processing that reduces each sample to 15,000 mesh cells, eliminating redundant information while preserving the original topology. Each 3D face mesh thus consists of 15,000 cells and is labeled according to the location of wounds on the face, specifically indicating the presence of the wounds. This simulation dataset has been evaluated by expert physicians to assess the complexity associated with the injuries. Fig. 5 showcases several illustrative examples of typical cases from the dataset. The dataset is randomly partitioned into distinct subsets, with 80% of the data assigned to training and 20% designated for validation. The objective is to perform automated segmentation of the 3D facial wound region and integrate it with the findings of Nguyen et al. [32] regarding defect face reconstruction to extract the wound-filling part specific to the analyzed face.
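A minimal sketch of this preprocessing and split, assuming trimesh for decimation and scikit-learn for the random partition; the file layout, decimation backend, and random seed are illustrative.

```python
# Sketch: decimate each head mesh to 15,000 cells and make an 80/20 split.
import glob
import trimesh
from sklearn.model_selection import train_test_split

TARGET_CELLS = 15_000

def preprocess(path: str) -> trimesh.Trimesh:
    mesh = trimesh.load(path)
    # Quadric decimation reduces the cell count while preserving topology.
    return mesh.simplify_quadric_decimation(face_count=TARGET_CELLS)

paths = sorted(glob.glob("Cir3D-FaIR/**/*.obj", recursive=True))  # hypothetical layout
train_paths, val_paths = train_test_split(paths, test_size=0.2, random_state=0)
```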

Figure 5: Illustrations of the face dataset with wounds

    3.2 Experimental Setup

The wound area segmentation model on the patient's 3D face is trained through experiments with different loss functions to select the most effective model, as outlined in Algorithm 1. The training process was conducted using a single NVIDIA Quadro RTX 6000 GPU over the course of 50 epochs. The Adam optimizer is employed in conjunction with a mini-batch size of 4. The initial learning rate was set at 1e-3, and it underwent a decay of 0.5 every 20 epochs.
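The stated optimization setup maps directly onto standard PyTorch components, as in the sketch below; `TSGCNet`, `train_set`, and `criterion` are hypothetical stand-ins for the model, dataset, and chosen loss rather than the authors' code.

```python
# Sketch of the stated training setup: Adam, lr 1e-3 halved every 20 epochs,
# mini-batch 4, 50 epochs. Model/dataset/loss names are hypothetical.
import torch

model = TSGCNet()                                   # hypothetical constructor
loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(50):
    for cells, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(cells), labels)      # e.g., cross-entropy loss
        loss.backward()
        optimizer.step()
    scheduler.step()                                # decay of 0.5 every 20 epochs
```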

In this study, the quantitative evaluation of segmentation performance on a 3D mesh is accomplished through two metrics: (1) Overall Accuracy (OA), which is obtained by dividing the number of correctly segmented cells by the total number of cells; and (2) the Intersection-over-Union (IoU) for each class, followed by the calculation of the mean Intersection-over-Union (mIoU). The IoU is a vital metric used in 3D segmentation to assess the accuracy and quality of segmentation results. It quantifies the degree of overlap between the segmented region and the ground truth, providing insights into the model's ability to accurately delineate objects or regions of interest within a 3D space. Training 3D models is always associated with challenges related to hardware requirements, processing speed, and cost. Processing and analyzing 3D data is more computationally intensive than 2D data. The hardware requirements for 3D segmentation are typically higher, including more powerful CPUs or GPUs, more RAM, and potentially specialized hardware for accelerated processing. In particular, performing segmentation on 3D data takes more time due to the increased complexity. In essence, the right loss function can lead to faster convergence, better model performance, and improved interpretability. Therefore, experimentation and thorough evaluation are crucial to determining which loss function works best for the data. The model was trained on the dataset using four iterations of experiments, wherein different loss functions were employed. The outcomes of these experiments are presented in Table 1. The utilized loss functions demonstrate excellent performance in the training phase, yielding highly satisfactory outcomes on large-scale unbalanced datasets. Specifically, we observe that the model integrated with cross-entropy segmentation loss exhibits rapid convergence, requiring only 16 epochs to achieve highly favorable outcomes. As outlined in Section 2.2, the model exhibiting the most favorable outcomes, as determined by the cross-entropy segmentation loss function, was selected for the segmentation task. This particular model achieved an impressive mIoU score of 0.9999986. Some illustrations of the segmentation results on a 3D face are shown in Fig. 3.
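For reference, both metrics follow directly from per-cell predictions, as in this NumPy sketch.

```python
# Sketch of the two reported metrics, Overall Accuracy and mIoU, computed
# from per-cell predicted and ground-truth labels.
import numpy as np

def overall_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    return float((pred == gt).mean())      # correctly segmented cells / all cells

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```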

    Table 1: Results of training the model with the corresponding loss functions

Furthermore, in the context of limited 3D data for training segmentation models in dentistry, Zhang et al. [42] showcased the remarkable efficacy of the TSGCNet model. Their training approach involved the utilization of 80 dental data meshes, culminating in an impressive performance outcome of 95.25%. To investigate the effectiveness of the TSGCNet model with a small amount of face data with injuries, we trained the TSGCNet model on 100 meshes for training and 20 meshes for testing. The TSGCNet model was trained for 50 epochs, employing a cross-entropy segmentation loss function. This approach achieved an overall accuracy of 97.69%. This result underlines the effectiveness of the two-stream graph convolutional network in accurately segmenting complex and minor wounds, demonstrating its ability to capture geometric feature information from the 3D data. However, training the model with a substantial dataset is crucial to ensure a comprehensive understanding of facial features and achieve a high level of accuracy. Consequently, we selected the model that achieved an mIoU index of 0.9999986, as depicted in Table 1, to accurately segment facial injuries.

From the above segmentation result, our primary objective is to conduct a comparative analysis between our proposed wound fill extraction method and a method with similar objectives discussed in the studies by Nguyen et al. [32,33]. A notable characteristic of the Cir3D-FaIR dataset is that all meshes possess a consistent vertex order. This enables us to streamline the extraction process of the wound filler. Utilizing the test dataset, we employ the model trained in the study by Nguyen et al. [32] for the reconstruction of the 3D face. Subsequently, we apply our proposed method to extract the wound fill from the reconstructed 3D face. As previously stated, we introduce a methodology for the extraction of wound filling. The details of this methodology are explained in Algorithm 2 and Fig. 4.

For the purpose of notational convenience, we designate the filling extraction method presented in the study by Nguyen et al. [32] as the "old proposal". We conduct a performance evaluation of both our proposed method and the old proposal on a dataset consisting of 8,090 meshes, which corresponds to 20% of the total dataset. A comprehensive description of the process for comparing the two methods is provided in Algorithm 3. The results show that our proposal has an average accuracy of 0.9999993, while the method in the old proposal achieves 0.9715684. The accuracy of the fill extraction method has thus been improved, which is very practical for the medical reconstruction problem. After that, we randomly extracted method outputs from the test set, depicted in Fig. 3. We have used 3D printing technology to illustrate the results on a physical model, which is significantly improved compared to the old method, as shown in Fig. 6, and we illustrate a 3D-printed model of the extracted wound filling in Fig. 7.
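A sketch of the per-mesh accuracy check underlying this comparison, exploiting the dataset's consistent vertex order; the distance tolerance is an illustrative parameter, as Algorithm 3's exact criterion is not reproduced in this text.

```python
# Sketch: fraction of corresponding vertices whose extracted position lies
# within `tolerance` of the ground truth, averaged over the 8,090 test meshes.
import numpy as np

def filling_accuracy(extracted: np.ndarray, ground_truth: np.ndarray,
                     tolerance: float = 1e-3) -> float:
    # extracted, ground_truth: (V, 3) vertex arrays in the same vertex order.
    dists = np.linalg.norm(extracted - ground_truth, axis=1)
    return float((dists < tolerance).mean())
```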


Figure 6: Filling extraction results with 3D printing

Figure 7: A 3D-printed pattern to fill a wound

The results of this study emphasize the potential of utilizing appropriate 3D printing technology for facial reconstruction in patients. This can involve prosthetic soft tissue reconstruction or 3D printing of facial biological tissue [46]. 3D bioprinting for skin tissue implants requires specialized materials and methods to create customized skin constructs for a range of applications, including wound healing and reconstructive surgery. The choice of materials and fusion methods may vary based on the specific site (e.g., face or body) and the desired characteristics of the skin tissue implant. In the realm of 3D printing for biological soft tissue engineering, a diverse array of materials is strategically employed to emulate the intricate structures and properties inherent to native soft tissues. Hydrogels, such as alginate, gelatin, and fibrin, stand out as popular choices, primarily owing to their high water content and excellent biocompatibility. Alginate, derived from seaweed, exhibits favorable characteristics such as good printability and high cell viability, making it an attractive option. Gelatin, a denatured form of collagen, closely replicates the extracellular matrix, providing a biomimetic environment conducive to cellular growth. Fibrin, a key protein in blood clotting, offers a natural scaffold for cell attachment and proliferation. Additionally, synthetic polymers like polycaprolactone (PCL) and poly(lactic-co-glycolic acid) (PLGA) provide the benefit of customizable mechanical properties and degradation rates. Studies [47–49] have presented detailed surveys of practical applications of many types of materials for 3D printing of biological tissue. Our research is limited to proposing an efficient wound-filling extraction method with high accuracy. In the future, we will consider implementing the application of this research in conjunction with physician experts at hospitals in Vietnam. By harnessing 3D printing technology, as illustrated in Fig. 8, healthcare professionals can craft highly tailored and precise facial prosthetics, considering each patient's unique anatomy and needs. This high level of customization contributes to achieving a more natural appearance and better functional outcomes, addressing both aesthetic and functional aspects of facial reconstruction [14]. This approach holds significant promise for enhancing facial reconstruction procedures and improving the overall quality of life for patients who have undergone facial trauma or have congenital facial abnormalities. Moreover, high-quality 3D facial scanning applications on phones are becoming popular. Our proposal could therefore be integrated into smartphones to support sketching the reconstruction process on an injured face. This matter will be further considered in our forthcoming research endeavors.

    3.3 Limitations

Although our 3D facial wound reconstruction method achieves high performance, it still has certain limitations. Real-world facial data remains limited due to ethical constraints in medical research. Therefore, we amalgamate scarce MRI data from patients who consented to share their personal data with the data generated from the MICA model to create a dataset. Our proposal primarily focuses on automatically extracting the region to be filled in a 3D face, addressing a domain similar to practical scenarios. We intend to address these limitations in future studies when we have access to a more realistic volume of 3D facial data from patients.

Furthermore, challenges related to unwanted artifacts, obstructions, and limited contrast in biomedical 3D scanning need to be considered. To tackle these challenges, we utilize cutting-edge 3D scanning technology equipped with enhanced hardware and software capabilities. This enables us to effectively mitigate artifacts and obstructions during data collection. We implement rigorous quality assurance protocols throughout the 3D scanning process, ensuring the highest standards of image quality. Additionally, we pay careful attention to patient positioning and provide guidance to minimize motion artifacts. Moreover, we employ advanced 3D scanning techniques, such as multi-modal imaging that combines imaging modalities like CT and MRI. This approach significantly enhances image quality and improves contrast, which is essential for accurate medical image interpretation.

    4 Conclusions

This study explored the benefits of using TSGCNet to automatically segment 3D facial trauma defects. Furthermore, we have proposed an improved method to extract the wound filling for the face. The results show the most prominent features as follows:

- An auto-segmentation model was trained to ascertain the precise location and shape of 3D facial wounds. We experimented with different loss functions to obtain the most effective model in the case of data imbalance. The results show that the model works well for complex wounds on the Cir3D-FaIR face dataset, with an accuracy of 0.9999993.

- Concurrently, we have proposed a methodology to enhance wound-filling extraction performance by leveraging both a segmentation model and a 3D face reconstruction model. By employing this approach, we achieve higher accuracy than previous studies on the same problem. Additionally, this method obviates the necessity of possessing a pre-injury 3D model of the patient's face. Instead, it enables the precise determination of the wound's position, shape, and complexity, facilitating the rapid extraction of the filling material.

- This research aims to contribute to advancing facial reconstruction techniques using AI and 3D bioprinting technology to print skin tissue implants. Printing skin tissue for transplants has the potential to revolutionize facial reconstruction procedures by providing personalized, functional, and readily available solutions. By harnessing the power of 3D bioprinting technology, facial defects can be effectively addressed, enhancing both cosmetic and functional patient outcomes.

- From this research direction, our proposed approach offers a promising avenue for advancing surgical support systems and enhancing patient outcomes by addressing the challenges associated with facial defect reconstruction. Combining machine learning, 3D imaging, and segmentation techniques provides a comprehensive solution that empowers surgeons with precise information and facilitates personalized interventions in treating facial wounds.

Acknowledgement: We would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for hospitality during our visit in 2023, when we started to work on this paper.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Duong Q. Nguyen, H. Nguyen-Xuan, Nga T.K. Le; data collection: Thinh D. Le; analysis and interpretation of results: Duong Q. Nguyen, Thinh D. Le, Phuong D. Nguyen; draft manuscript preparation: Duong Q. Nguyen, Thinh D. Le, H. Nguyen-Xuan. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Our source code and data can be accessed at https://github.com/SIMOGroup/WoundFilling3D.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
