
MIDNet: Deblurring Network for Material Microstructure Images

Computers Materials & Continua, 2024, Issue 4

Jiaxiang Wang, Zhengyi Li, Peng Shi, Hongying Yu and Dongbai Sun

1 National Center for Materials Service Safety, University of Science and Technology Beijing, Beijing 100083, China

2 School of Materials, Sun Yat-Sen University, Shenzhen 518107, China

3 School of Materials Science and Engineering, Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Sun Yat-Sen University, Guangzhou 510006, China

ABSTRACT Scanning electron microscopy (SEM) is a crucial tool in the field of materials science, providing valuable insights into the microstructural characteristics of materials. Unfortunately, SEM images often suffer from blurriness caused by improper hardware calibration or imaging automation errors, which presents challenges in analyzing and interpreting material characteristics. Consequently, rectifying the blurring of these images is of paramount importance for subsequent analysis. To address this issue, we introduce a Material Images Deblurring Network (MIDNet) built upon the foundation of the Nonlinear Activation Free Network (NAFNet). MIDNet is meticulously tailored to address the blurring in images capturing the microstructure of materials. The key contributions include enhancing the NAFNet architecture for better feature extraction and representation, integrating a novel soft attention mechanism to uncover important correlations between encoder and decoder, and introducing new multi-loss functions to improve training effectiveness and overall model performance. We conduct a comprehensive set of experiments on the material blurry dataset and compare MIDNet to several state-of-the-art deblurring methods. The experimental results demonstrate the applicability and effectiveness of MIDNet in the domain of deblurring material microstructure images, with a PSNR (Peak Signal-to-Noise Ratio) reaching 35.26 dB and an SSIM (Structural Similarity) of 0.946. Our dataset is available at: https://github.com/woshigui/MIDNet.

KEYWORDS Image deblurring; material microstructure; attention mechanism; deep learning

    1 Introduction

In the era of advanced imaging technology, modern material scientists delve into the microscopic realm, exploring and analyzing intricate phenomena. Among the array of available methodologies, scanning electron microscopy (SEM) emerges as a powerful tool for characterizing materials and uncovering their morphologies, crystal structures, and chemical compositions [1]. However, SEM images are susceptible to distortion arising from instrument settings or operator inexperience, leading to blurred or defocused depictions that hinder research progress. When SEM images are blurry, the microstructural information of the material, such as crystal morphology, particle size, and pore structure, becomes less distinct, posing challenges to the accurate analysis of the material's structural features. Additionally, the quantitative analysis of surface morphology, such as studying material texture and roughness, is also limited by the quality of SEM images. In the case of composite or multiphase materials, SEM images can reveal interface features between different phases; when the images are blurry, the interface structure may not be visible, thereby affecting the analysis and understanding of interface characteristics. SEM images are also employed for detecting defects in materials, such as cracks, voids, and particle non-uniformity. If the images are blurry, these defects may not be displayed, making defect detection and analysis difficult. The quest for effective deblurring techniques thus becomes paramount in ensuring the integrity of subsequent image analyses, particularly when grappling with suboptimal image quality. Our research is motivated by the substantial adverse impact that blurry images have on the precise analysis of materials, and we emphasize the pressing need for innovative deblurring solutions to address this issue effectively.

Traditional image restoration techniques often lean on deconvolution methods that presuppose specific blur kernels, thereby crafting filters such as local linear, nonlinear, non-local self-similarity, and Bayesian image restoration filters [2,3]. However, their application in practical contexts remains challenging due to the prerequisite knowledge of blur kernels. The advent of deep learning has revolutionized image restoration, harnessing deep neural networks to learn nonlinear mappings between degraded and sharp images, obviating the reliance on manually designed filters or blur kernels [4,5]. Deep learning methods excel at preserving finer details, such as texture, edges, and structures, during the image reconstruction process [6]. Furthermore, these methods demonstrate versatility in handling different levels of degradation and types of noise, allowing for image recovery across various scales [7]. Their application has further extended to microscopic systems for enhancing image quality, encompassing optical microscopy [8], electromagnetic imaging, and scanning electron microscopy [9].

While previous research has improved the quality of microscopic images, further investigation is warranted to explore the integration of deep learning for deblurring low-quality material microstructures. This inquiry begets key questions: (1) Can existing deblurring methods, which are applicable in real-world scenarios, be directly extended to address material data with blurred attributes using pre-trained weights? (2) Can retraining networks with material-specific blurry datasets lead to improvements in deblurring efficacy? (3) How can novel algorithms be developed to maximize their potential in enhancing the clarity of material microstructure images?

In pursuit of these goals, we present a deep learning-based approach that combines soft attention mechanisms with multifaceted loss functions, aiming to enhance image quality while preserving intricate details. Our methodology tackles the challenge of image blurring in SEM images arising from inaccurate hardware calibration or automation glitches. With our approach, researchers can efficiently rectify subpar images, saving significant time and resources that would otherwise be required for rescanning. This is particularly relevant for research projects that have limited budgets and require rapid processing of numerous material samples within tight timeframes. In such circumstances, where image blurring continues to pose a recurring obstacle, our approach becomes especially crucial. In the dynamic field of high-throughput materials research, our innovation has the potential to enhance image quality and data fidelity, thereby accelerating the discovery and optimization of novel materials. In light of this, our approach emerges as a pivotal contribution, poised to catalyze diverse applications across materials science research. The main contributions of this paper are as follows:

    (1) We propose a Material Images Deblurring Network (MIDNet) that specifically sharpens blurred images of material microstructures and outperforms current SOTA deblurring networks.

(2) We introduce an attention mechanism that effectively mitigates the problem of inconsistent feature distributions by attending to the most informative features in both the encoder and decoder. This attention mechanism not only addresses the issue but also strengthens the interplay between components, enhancing overall performance.

(3) We propose a novel multi-loss function that enhances the supervisory signal, thereby preserving intricate details and texture features more effectively.

(4) Our MIDNet model's superiority is thoroughly validated through rigorous experiments, both quantitatively and qualitatively. Through ablation experiments, we reveal the impact of the different loss functions proposed in this paper on the model and demonstrate the effectiveness of constructing multi-loss functions.

    2 Related Work

    2.1 Image Deblurring

Several studies have combined computer science and materials science, with a particular emphasis on utilizing image processing methods for analyzing the microstructure images of materials. Varde [10] proposed a computational estimation method called AutoDomainMine, based on graph data mining. By integrating clustering and classification techniques, this method discovered knowledge from existing experimental data and utilized it for estimation. The main objective of this framework was to estimate the graphical results of experiments based on input conditions. Similar graph data mining methods can be employed for image deblurring tasks, analyzing and extracting patterns and features from image data. Pan et al. [11] reviewed the evolution and impact of material microstructures during cutting processes, presenting a thermal-force-microstructure coupled modeling framework. They analyzed microstructural changes such as white layer formation, phase transformation, and dynamic recrystallization under different materials and cutting conditions, as well as the effects of these changes on cutting forces and surface integrity. Vibration of cutting tools or materials can cause motion in image acquisition devices (such as cameras) during capture, resulting in image blurring. Therefore, studying the deblurring of material microstructures holds significant importance.

Many traditional image enhancement methods employ regularization and manually crafted image priors for blur kernel estimation [12]. Subsequent iterative optimization is used to gradually recover a clear image. However, this conventional approach involves intricate blur kernel estimation, leading to laborious sharpening, subpar real-time performance, and algorithmic limitations. To enhance the quality of image deblurring, many methods based on convolutional neural networks (CNNs) have been proposed [13–15]. Chakrabarti [13] designed a neural network to generate a global blur kernel for non-blind deconvolution. Song et al. [14] proposed a method using a neural network for reliable detection of motion blur kernels to detect image forgeries. Wang et al. [15] proposed a network-based framework that learned to remove raindrops by learning motion blur kernels. Sun et al. [16] predicted the probability distribution of non-uniform motion blur using CNNs. However, most neural-network-based methods still rely on blur models to solve for the blur kernel, limiting their performance.

In recent years, with the development of deep learning, a series of deep learning-based methods have been applied to image deblurring [17,18]. Zhang et al. [19] proposed DMPHN, the first multi-scale network based on the multi-patch method for single-image deblurring. Chen et al. [20] proposed HINet, a deep image restoration network based on the HIN block. Fanous et al. [5] presented GANscan, a method for restoring sharp images from motion-blurred videos, which was applied to reconstruct tissue sections under the microscope. Liang et al. [21] directly deblurred raw images using deep learning-based image-to-image blind deblurring. DID-ANet [4] was designed specifically for single-image blur removal caused by camera defocus. MedDeblur [18] was developed to remove blur in medical images caused by patient movement or breathing. Xu et al. [22] proposed a deep-learning-based knowledge-enhanced image deblurring method for quality inspection in yarn production. Restormer [23] is an efficient transformer model that can be utilized for image restoration tasks at high resolutions. Chen et al. [7] found that nonlinear activation functions are not necessary and can be replaced or omitted, and developed NAFNet for both image denoising and deblurring. Owing to the impressive performance of NAFNet in deblurring tasks, we build our modifications upon its architecture.

    2.2 Attention-Based Deblurring Model

In recent years, attention mechanisms have proven to be highly effective in various computer vision tasks [24,25]. As a result, attention-based methods have gradually been adopted for the task of image deblurring [26,27]. MSAN [28] is an attention-based convolutional neural network architecture that efficiently and effectively generalizes motion deblurring. D3-Net [26] can be used for deblurring, dehazing, and object detection, with the addition of a classification attention feature loss to improve deblurring and dehazing performance. Cui et al. [27] proposed a dual-domain attention mechanism that enhances feature expression in both the spatial and frequency domains. Ma et al. [29] proposed an attention-based dehazing algorithm for deblurring to improve defect detection in inspection image pipelines. Shen et al. [30] introduced a supervised human-perception attention mechanism model, which performs exceptionally well in end-to-end motion deblurring. MALNET [31] is a lightweight attention-based network that also performs well in image deblurring. Zhang et al. [32] proposed an attention-based inter-frame compensation scheme for video deblurring. In this work, we also incorporate attention mechanisms into our image deblurring network to improve its deblurring capability.

    3 Method

    3.1 Architecture

The network structure of this paper is shown in Fig. 1. It follows a classical U-shaped structure, improved from NAFNet [7]. The structure comprises an encoder and a decoder, both built from MID-Blocks. An attention mechanism is introduced between the blocks to improve the image restoration quality of the network.

    3.2 MID-Block

MID-Block is the basic building block of MIDNet. To avoid high intra-block complexity, MID-Block does not use any nonlinear activation functions such as ReLU, GELU, or Softmax. We construct the MID-Block by analogy with NAFNet blocks, as illustrated in Fig. 2.

To stabilize the training process, the input is first passed through Layer Normalization. Next, it undergoes convolution operations and is then processed by SimpleGate (SG) [7], a variant of Gated Linear Units (GLU) [33]. The GLU formula is as follows:

GLU(X) = f(X) ⊙ σ(g(X)) (1)

Figure 1: Overview of MIDNet. The overall architecture of the network resembles a U-shaped design, composed of MID-Blocks and attention blocks

    Figure 2: Architecture of MID-block

In Eq. (1), X represents the feature map, f and g are linear transforms, σ represents a nonlinear activation function such as Sigmoid, and ⊙ represents element-wise multiplication.
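As a concrete illustration, the GLU of Eq. (1) can be sketched in PyTorch. Implementing f and g as 1×1 convolutions and σ as Sigmoid is an assumption of this sketch, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class GLU(nn.Module):
    """Gated Linear Unit, Eq. (1): GLU(X) = f(X) * sigma(g(X))."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 1)  # linear transform f
        self.g = nn.Conv2d(channels, channels, 1)  # linear transform g
        self.sigma = nn.Sigmoid()                  # nonlinear activation

    def forward(self, x):
        return self.f(x) * self.sigma(self.g(x))   # element-wise product

x = torch.randn(1, 8, 16, 16)
y = GLU(8)(x)  # output has the same shape as the input
```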

The GLU increases the intra-block complexity, which is not desirable. To remedy this issue, we reconsider the activation function in the block, specifically GELU [34], which is expressed as:

GELU(x) = xφ(x) (2)

where φ represents the cumulative distribution function of the standard normal distribution. According to reference [34], the Gaussian Error Linear Unit (GELU) activation function can be effectively approximated as:

GELU(x) ≈ 0.5x(1 + tanh[√(2/π)(x + 0.044715x³)]) (3)
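The exact GELU of Eq. (2) and the tanh approximation of Eq. (3) can be compared numerically with a small standard-library sketch:

```python
import math

def gelu_exact(x):
    # Eq. (2): GELU(x) = x * phi(x), with phi the standard normal CDF
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Eq. (3): tanh-based approximation from reference [34]
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))
```

For moderate inputs the two agree to roughly three decimal places, which is why the approximation is considered an effective stand-in.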

As Eqs. (1) and (2) show, GELU is a special case of GLU in which the functions f and g are identity functions and σ is substituted with φ. The GLU incorporates nonlinearity and is not reliant on σ: even in the absence of σ, the expression Gate(X) = f(X) ⊙ g(X) retains its nonlinearity. Following reference [7], we adopt a simple adjustment to GLU: split the feature map into two parts along the channel dimension and multiply them element-wise, as represented by Eq. (4).

SimpleGate(X, Y) = X ⊙ Y (4)

In Eq. (4), X and Y represent feature maps of equal size.

The gating unit SG is a neural network component, illustrated in Fig. 3, used in the processing of feature maps. It operates by splitting the feature map into two parts along the channel dimension, which are then multiplied to generate the final output. By splitting the feature map in this manner, SG can selectively emphasize or de-emphasize specific channels, which is useful for enhancing certain features or suppressing noise in the signal. This process is often referred to as channel-wise gating.
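The channel-wise gating described above amounts to only a split and a multiplication; a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class SimpleGate(nn.Module):
    """SimpleGate (Eq. (4)): split along channels, multiply the halves."""
    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)  # channel-wise split into two halves
        return x1 * x2              # element-wise multiplication

x = torch.randn(2, 16, 8, 8)
y = SimpleGate()(x)  # the channel count is halved
```

Note that, unlike GLU, no learned weights or activation functions are involved, which is what keeps the block free of nonlinear activations.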

Figure 3: Simple gate as represented by Eq. (4). ⊙: Element-wise multiplication

We also adopt Simplified Channel Attention (SCA) [7], a component that utilizes channel-wise attention to enhance relevant features in the data. Compared to other approaches, SCA has a simpler structure that is easy to implement. Additionally, it adds minimal computational overhead, enhancing the efficiency of our approach. Please refer to Fig. 4 for an illustration of SCA.

Figure 4: Simplified channel attention (SCA). ∗: Channel-wise multiplication

SCA determines channel attention by computing the average of the feature map along the spatial dimensions and applying a fully connected layer to generate a channel-wise attention vector. This attention vector is then multiplied with the original feature map to selectively amplify important channels while suppressing irrelevant or noisy ones.

Our experiments demonstrate that incorporating SCA into a standard convolutional neural network yields improved performance, highlighting the efficacy of enhancing feature representation with channel attention. SCA can be easily integrated into existing neural network architectures and is a useful tool for improving the performance of deep learning models in a variety of applications.
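The pool-then-project structure described above can be sketched in a few lines of PyTorch; modelling the fully connected layer as a 1×1 convolution is an implementation convention borrowed from NAFNet, assumed here:

```python
import torch
import torch.nn as nn

class SimplifiedChannelAttention(nn.Module):
    """SCA sketch: spatial average pooling, one fully connected (1x1 conv)
    layer, then channel-wise multiplication with the input."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)         # average over H x W
        self.fc = nn.Conv2d(channels, channels, 1)  # fully connected layer

    def forward(self, x):
        return x * self.fc(self.pool(x))            # amplify/suppress channels

x = torch.randn(1, 8, 16, 16)
y = SimplifiedChannelAttention(8)(x)
```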

SCA is derived from Channel Attention (CA) [35], which can be expressed as Eq. (5):

CA(X) = X ∗ σ(W2 max(0, W1 pool(X))) (5)

In Eq. (5), X denotes the feature map, pool denotes the global average pooling operation, σ denotes an activation function such as Sigmoid, W1 and W2 denote fully connected layers, and ∗ denotes the channel-wise multiplication operation. By simplifying Eq. (5), we finally obtain SCA, as shown in Eq. (6):

SCA(X) = X ∗ W pool(X) (6)

    3.3 Attention Mechanism

With the advancement of deep learning techniques, significant progress has been made in image restoration. The NAFNet model, in particular, has shown strong performance in various applications. However, a limitation of NAFNet is that the skip connections used for feature aggregation between the encoder and decoder have the potential to disrupt the feature distribution, resulting in inconsistencies between these components. Another shortcoming of NAFNet is that it only employs an intra-block attention mechanism and ignores attention-based skip connections.

To address these challenges, we introduce a soft attention mechanism to capture the latent relationship between the encoder and decoder more adaptively. We refer to the proposed soft attention mechanism as ATT; its architecture is shown in Fig. 5. Specifically, the proposed attention gate aggregates features from different blocks using a weighting scheme based on their relevance to the current image restoration task, instead of the simple element-wise addition used in conventional skip connections. This allows the model to selectively focus on the most informative features while suppressing irrelevant ones.

    Figure 5: The architecture of ATT

Moreover, our attention mechanism enables us to incorporate attention-based skip connections, which further enhance the feature aggregation process. By attending to the most informative features in the encoder and decoder, the model can effectively alleviate issues related to feature distribution inconsistency and strengthen the correlation between these components. The soft attention mechanism can be expressed as follows:

q = ψ(σ1(Wa Xa + Wb Xb + bf)) + bψ (7)

X̂ = Xa ⊙ σ2(q) (8)

where Xa and Xb denote the feature maps entering the gate from the encoder and decoder sides of the skip connection.

In Eqs. (7) and (8), σ1 and σ2 denote activation functions. The attention gate is parameterized by the linear transformations Wa, Wb, and ψ, and the biases bf and bψ. The linear transformations are implemented as convolution operations on the input tensors. The output of the attention gate is the product of the input feature map and the attention coefficient.
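A sketch of such an additive attention gate in PyTorch follows. The choices of 1×1 convolutions for Wa, Wb, and ψ, ReLU for σ1, and Sigmoid for σ2 are assumptions for illustration; the paper states only that σ1 and σ2 are activation functions:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Sketch of the soft attention gate ATT (Eqs. (7)-(8))."""
    def __init__(self, channels, inter_channels):
        super().__init__()
        self.Wa = nn.Conv2d(channels, inter_channels, 1)  # transform Wa
        self.Wb = nn.Conv2d(channels, inter_channels, 1)  # transform Wb
        self.psi = nn.Conv2d(inter_channels, 1, 1)        # transform psi
        self.sigma1 = nn.ReLU()                           # assumed sigma1
        self.sigma2 = nn.Sigmoid()                        # assumed sigma2

    def forward(self, enc, dec):
        q = self.psi(self.sigma1(self.Wa(enc) + self.Wb(dec)))
        alpha = self.sigma2(q)   # attention coefficient in (0, 1)
        return enc * alpha       # weight the skip-connection feature

enc = torch.randn(1, 8, 16, 16)
dec = torch.randn(1, 8, 16, 16)
out = AttentionGate(8, 4)(enc, dec)
```

Because the coefficient lies in (0, 1), the gate can only attenuate features, which is what lets it suppress irrelevant encoder activations before aggregation.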

    3.4 Multi-Loss Function

This paper utilizes a multi-loss function, as shown in Eq. (9), which comprises the deblurring loss, the edge loss, and the FFT loss:

L = Ldeblur + λ1 Ledge + λ2 Lfft (9)

The hyperparameters λ1 and λ2 are assigned the values 0.05 and 0.01, respectively.
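The weighted combination of Eq. (9) reduces to a one-line helper. Pairing λ1 with the edge loss and λ2 with the FFT loss is assumed from the order in which the terms are listed:

```python
def total_loss(l_deblur, l_edge, l_fft, lam1=0.05, lam2=0.01):
    """Multi-loss of Eq. (9) with the paper's lambda1=0.05, lambda2=0.01."""
    return l_deblur + lam1 * l_edge + lam2 * l_fft
```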

    3.4.1 Deblurring Function

The deblurred image is compared with its ground truth in the spatial domain using the standard l1 loss, as shown in Eq. (10):

Ldeblur = ‖Ir − Igt‖1 (10)

We do not use the l2 loss because it sometimes over-penalizes errors and leads to poor deblurring performance.

    3.4.2 Edge Function

To restore the high-frequency details of the image, we introduce an edge loss function. It focuses on the gradient information of the image and enhances the edge texture features. The edge loss function of this paper is as follows:

In Eq. (11), Ir represents the reconstructed image, Igt represents the clear ground truth image, and Δ denotes the Laplacian operator.
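An edge loss of this kind can be sketched as follows. The 3×3 discretization of the Laplacian and the use of an l1 distance between the filtered images are assumptions of this sketch; the paper specifies only that Eq. (11) compares the Laplacians of Ir and Igt:

```python
import torch
import torch.nn.functional as F

# 3x3 discrete Laplacian kernel, single channel (assumed discretization)
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def edge_loss(i_r, i_gt):
    """Distance between the Laplacians of Ir and Igt (sketch of Eq. (11))."""
    return torch.mean(torch.abs(F.conv2d(i_r, LAPLACIAN, padding=1)
                                - F.conv2d(i_gt, LAPLACIAN, padding=1)))

sharp = torch.rand(1, 1, 32, 32)
blur = torch.rand(1, 1, 32, 32)
loss = edge_loss(blur, sharp)
```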

    3.4.3 FFT Loss

The FFT loss is a loss function based on the Fourier transform that is used for image restoration tasks. It penalizes the discrepancy between the reconstructed image and the ground truth image in the frequency domain. The FFT loss is represented as follows:

In Eq. (12), the variables W and H refer to the width and height of the image being analyzed, the function F represents the Fourier transform of the image, a mathematical technique used to analyze its frequency components, and wi,j represents the weight corresponding to each Fourier coefficient.

Specifically, the FFT loss is calculated as the weighted sum of the squared Euclidean distances between the discrete Fourier transform coefficients of the reconstructed image and those of the ground truth image. The weight factors, corresponding to different Fourier coefficients, emphasize the importance of different frequencies in the loss function, allowing it to focus on the crucial parts of the reconstructed image spectrum. In the Fourier domain, high-frequency information such as edges and textures has a more significant impact on the visual quality of the reconstructed image. Incorporating the FFT loss therefore helps the network better preserve these details, ultimately improving image quality.
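A minimal sketch of such a frequency-domain penalty, using uniform weights in place of the per-coefficient weights wi,j of Eq. (12):

```python
import torch

def fft_loss(i_r, i_gt):
    """Squared distance between the 2D DFT coefficients of Ir and Igt
    (sketch of Eq. (12)); uniform weights are assumed here."""
    diff = torch.fft.fft2(i_r) - torch.fft.fft2(i_gt)
    return torch.mean(torch.abs(diff) ** 2)

x = torch.rand(1, 1, 32, 32)
```

Non-uniform weighting would multiply `torch.abs(diff) ** 2` by a weight map of the same spatial shape before averaging.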

    4 Experiments

    4.1 Dataset

We utilize a dataset containing 120 paired low- and high-quality images to investigate material microstructure blurring. Specifically, the low-quality images in this dataset are directly obtained from SEM observations rather than artificially blurred using blur kernels or algorithms. This approach replicates real-world scenarios more accurately while presenting greater challenges for deblurring. When low-quality images are captured in practice, operators take repeated images until high-quality ones are achieved. Consequently, we meticulously selected 120 matching low- and high-quality image pairs that met stringent criteria. All images are subsequently resized to 256 × 256 pixels. Several cropped images are displayed in Fig. 6. The dataset is randomly divided into a training set comprising 108 image pairs and a test set containing 12 image pairs.
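The random 108/12 split of the 120 pairs can be sketched with the standard library; the file names below are placeholders, not the dataset's actual naming scheme:

```python
import random

def split_pairs(pairs, n_train=108, seed=0):
    """Randomly split paired (blurry, sharp) images into train/test sets,
    reproducing the 108/12 split of 120 pairs described in the paper."""
    rng = random.Random(seed)               # fixed seed for reproducibility
    idx = list(range(len(pairs)))
    rng.shuffle(idx)
    train = [pairs[i] for i in idx[:n_train]]
    test = [pairs[i] for i in idx[n_train:]]
    return train, test

pairs = [(f"blurry_{i:03d}.png", f"sharp_{i:03d}.png") for i in range(120)]
train_set, test_set = split_pairs(pairs)
```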

    4.2 Experiment Parameters

We optimize the model using Adam (β1 = 0.9, β2 = 0.999) for 200K iterations with a cosine annealing schedule that decreases the learning rate from 10^-3 to 10^-7. We crop the images to a size of 256 × 256 pixels and apply rotation and flipping as data augmentation. We employ the skip-init method to ensure stable training and implement our code in the PyTorch framework. We evaluate our model using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. All experiments are conducted on an NVIDIA Tesla V100 GPU.
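In PyTorch, the stated optimizer and schedule correspond to the following configuration (the single convolution stands in for the full MIDNet model):

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for MIDNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999))
# Cosine annealing from 1e-3 down to 1e-7 over the 200K training iterations
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=200_000, eta_min=1e-7)
```

In a training loop, `scheduler.step()` would be called once per iteration so that the decay spans the full 200K steps.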

    4.3 Experiments on SOTA Algorithms

PSNR and SSIM are employed as quantitative evaluation metrics, with larger values indicating superior image quality. They are calculated according to Eqs. (13) and (14):

PSNR = 10 · log10(MAX² / MSE) (13)

SSIM(x, y) = ((2μxμy + C1)(2σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)) (14)

Figure 6: A few sample images from our dataset. Column 1 shows the low-quality images, whereas Column 2 shows the high-quality images

In Eq. (13), MAX represents the maximum pixel value of the image, typically 255 when each pixel is represented by 8 bits. MSE (Mean Squared Error) is the mean squared error between the blurred image and the clear image. In Eq. (14), x and y denote the original image and the deblurred image, respectively. μx and μy represent the mean pixel values of images x and y, σx and σy represent the standard deviations of pixel values in images x and y, and σxy is the covariance between the pixel values of the two images. C1 and C2 are constants introduced to prevent division by zero in the denominator.
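Both metrics can be computed directly from these definitions. The sketch below evaluates SSIM globally over whole images rather than with the sliding window used by common SSIM implementations, and assumes the conventional constants C1 = (0.01·MAX)² and C2 = (0.03·MAX)²:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Eq. (13): PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Eq. (14) evaluated globally over whole images (no sliding window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```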

To assess the generalizability of models trained on natural images to blurry material microstructure data, we conduct a series of relevant studies. Specifically, we employ pre-trained weights from the original papers of the DMPHN, HINet, Restormer, and NAFNet methods to conduct inference on blurry material images. The deblurred images are displayed in Fig. 7, while the corresponding PSNR and SSIM values are summarized in Table 1.

    Table 1: Results of image deblurring by using pre-trained weights

As observed in Fig. 7, these methods apply a certain amount of processing to the blurry images. However, their ability to achieve satisfactory deblurring outcomes remains limited, with minimal improvement over the initial blurry images. In Table 1, the PSNR and SSIM values of the original blurry images against the clear images are provided in the input row. Notably, these methods yield relatively low PSNR and SSIM scores, with some deblurred images performing worse than their initial states.

    Figure 7: Image deblurring performance on the material blurry dataset is evaluated using several SOTA algorithms with pre-trained weights

Interestingly, these methods have demonstrated proficiency on the GoPro dataset and have exhibited effective deblurring on real-world blurry images. Consequently, we postulate that their subpar performance on material images may be attributed to external factors rather than inherent limitations of the methods themselves.

Upon meticulous scrutiny of the GoPro dataset, a notable distinction emerges: the PSNR of its blurry images averages approximately 23 dB. In contrast, the blurry images from our material microstructure dataset exhibit a lower average PSNR of approximately 21 dB. Building upon these observations, we hypothesize that the relatively lower quality of material images, with their reduced information content, poses a heightened challenge for the deblurring process. This challenge could potentially contribute to network degradation and the suboptimal performance observed.

Furthermore, an additional factor potentially influencing the subpar deblurring results is the unique visual characteristics inherent to material microstructures, which set them apart from real-world blurry images. This disparity in appearance might reduce the reliability of the neural networks when confronted with blurry material microstructure data. To address this challenge, we advocate a proactive solution: retraining and fine-tuning these methods using blurry material images. Our approach involves freezing the majority of the model layers and selectively unfreezing a small subset for training. We apply data augmentation techniques, such as flipping and rotation, to the dataset during training. Hyperparameters, including the learning rate, batch size, and number of iterations, are adjusted per model to achieve optimal performance. Additionally, appropriate regularization strategies are employed to mitigate overfitting. Such an approach holds the promise of enhancing the networks' capability to restore blurry images of materials. In line with this recommendation, we retrained and fine-tuned these methods. To gauge the efficacy of this intervention, we present the deblurring outcomes in Fig. 8.
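The freeze-then-selectively-unfreeze step can be sketched in PyTorch. The tiny three-layer model, the choice of which layers to unfreeze, and the fine-tuning learning rate are all placeholders for illustration:

```python
import torch

model = torch.nn.Sequential(            # stand-in for a pre-trained model
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.Conv2d(8, 8, 3, padding=1),
    torch.nn.Conv2d(8, 3, 3, padding=1),
)
for p in model.parameters():
    p.requires_grad = False             # freeze the majority of layers
for p in model[-1].parameters():
    p.requires_grad = True              # selectively unfreeze a small subset
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)  # fine-tuning rate (assumed)
```

Only the unfrozen parameters are handed to the optimizer, so gradient updates touch just the selected subset while the frozen backbone retains its pre-trained weights.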

    Figure 8: The outcomes of deblurring upon the retraining and fine-tuning of these methods with our blurry dataset

This study utilizes the material blurry image dataset to conduct a detailed analysis of the deblurring capability of the original methods compared to their retrained and fine-tuned counterparts. The comparison reveals a significant enhancement in deblurring quality for material images through retraining and fine-tuning, surpassing the performance of the no-training scenario and yielding satisfactory results. Notably, retraining and fine-tuning contribute to the restoration of intricate features within material images, underscoring the pivotal role of material-specific data in optimizing deblurring effectiveness. These findings offer fresh insights into the efficacy of retraining and fine-tuning strategies in addressing the deblurring challenges posed by material images, and provide valuable guidance for the future development of more potent deblurring methodologies within materials science and engineering. Importantly, this study also demonstrates the potential of deep learning techniques in enhancing the quality of visual data across a wide spectrum of scientific and industrial applications.

    4.4 Comparative Experiment

    4.4.1 Qualitative Results

We undertake a comparative evaluation of MIDNet alongside several SOTA deblurring methods that have undergone retraining and fine-tuning, as discussed in the previous section. The deblurring outcomes produced by each method are depicted in Fig. 9. Among the tested approaches, Restormer's results exhibit residual blurriness accompanied by unclear edges, implying a limited restorative effect. The HINet method, employing a patch-based testing strategy, manifests noticeable stripe artifacts, possibly attributable to boundary discontinuities. The DMPHN approach, although improved, still retains a degree of blurriness that hampers significant image enhancement. The NAFNet method, while competent, sacrifices certain fine image details. In stark contrast, our proposed MIDNet achieves a further elevation in image quality, restoring additional structural details without introducing artifacts or related issues. Our method exhibits significantly clearer microstructural contours than the other approaches, as indicated by the red arrow in Fig. 9. This enhanced clarity allows for a more accurate analysis of the material's surface morphology and structural features based on these finer details.

    Figure 9: Qualitative comparison of image deblurring methods on the dataset

The comparison between the original image and the deblurred image obtained with the model proposed in this study is illustrated in Fig. 10. Fig. 10a presents the original image, while Fig. 10b depicts the image after processing by the model. Visual observation shows that the proposed model exhibits excellent deblurring performance. These outcomes highlight the exceptional capability of MIDNet to recover intricate structures and details within material images, showcasing its potential as a promising solution for tackling deblurring problems in materials science and engineering.

    4.4.2 Quantitative Results

    Table 2 outlines the quantitative results of several deblurring techniques applied to material microstructure images. Our evaluation of image quality relies on two objective metrics, PSNR and SSIM, where higher values denote better performance. Substantial PSNR improvements are observed for HINet, Restormer, DMPHN, and NAFNet after retraining and fine-tuning, with respective gains of 7.89, 9.43, 10.13, and 13.53 dB. These results underscore the considerable potential of deep learning in addressing the challenges of deblurring material microstructure images, and this progress lays the foundation for practical applications within this domain.
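    PSNR, one of the two metrics used here, is defined as 10·log10(MAX²/MSE), where MAX is the peak pixel value and MSE is the mean squared error between the reference and the restored image. The sketch below illustrates the standard formula for 8-bit images; it is a generic implementation, not the evaluation code used in the paper.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 120.0)
blurry = ref + 5.0  # uniform error of 5 gray levels -> MSE = 25
print(round(psnr(ref, blurry), 2))  # -> 34.15
```

Because the scale is logarithmic, the 1.45 dB gain reported later corresponds to a sizable reduction in reconstruction error.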

    Table 2: Quantitative comparison of our proposed network with previous methods

    The insights provided by Table 2 highlight the substantial advancement brought by MIDNet over NAFNet under both the PSNR and SSIM metrics: MIDNet achieves an improvement of 1.45 dB in PSNR and 0.01 in SSIM. This indicates that our proposed method has an advantage in image deblurring. The efficacy of MIDNet in the deblurring task can be attributed to its integrative use of an attention mechanism and a combination of diverse loss functions.
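    The second metric, SSIM, compares luminance, contrast, and structure rather than raw pixel error. As a rough illustration, the sketch below computes a single-window (global) SSIM with the standard stabilizing constants; practical evaluations such as the one in this paper use a sliding (typically Gaussian) window and average the local scores.

```python
import numpy as np

def global_ssim(x, y, max_val=255.0):
    """Single-window SSIM; real evaluations average SSIM over local windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.arange(64, dtype=np.float64).reshape(8, 8)
print(global_ssim(img, img))          # identical images score ~1.0
print(global_ssim(img, img + 40.0))   # a brightness-shifted image scores lower
```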

    Figure 10: Comparison between original images and deblurred images

    The attention mechanism significantly enhances the network's ability to focus on pivotal features, leading to improved deblurring performance. Our experimental results confirm that the simultaneous use of multiple loss functions provides the network with stronger image-reconstruction supervision, thereby elevating image quality and improving fine-detail preservation.

    4.4.3 Ablation Experiment

    To validate the efficacy of the newly introduced edge loss and FFT loss within the training process, we conduct ablation experiments. The outcomes are presented in Table 3, which lists the PSNR and SSIM values for each experimental configuration. The objective of these ablation studies is to discern the impact and contribution of the individual loss functions toward image restoration. To achieve this, we train our model under different scenarios, each characterized by a distinct combination of loss functions. This systematic approach provides insight into the relative importance and effectiveness of each loss function in improving image quality.

    Table 3: Ablation experiments: We train our model using different combinations of loss functions to understand the importance of individual losses for image restoration

    In this study, we undertake a series of ablation experiments to examine the impact of integrating various loss functions during the training phase. To maintain consistency, the Ld loss, which plays a pivotal role in image restoration, is kept constant across all experiments. The outcomes of these ablation studies are summarized in Table 3. We observe that the inclusion of the Le loss results in noticeable improvements in both PSNR and SSIM, suggesting that the network retains more intricate edge details through this loss function. Furthermore, the inclusion of the Lf loss further improves image quality by providing more structural guidance to the network solution, as observed in row 3. Notably, combining all the loss functions during training yields the best performance. These findings highlight the importance of the proposed multi-loss functions in enhancing image restoration capabilities and offer valuable insights for the advancement of effective image restoration methods.
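    The way a pixel-level term, an edge term, and a frequency (FFT) term combine into one training objective can be sketched as follows. This is only an illustration of the multi-loss idea: the Laplacian edge extractor and the weights w_e and w_f are placeholder assumptions, not the paper's Ld/Le/Lf definitions or coefficients.

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def edge_term(x):
    # Discrete Laplacian with periodic boundaries as a simple edge extractor
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def fft_term(x):
    # Magnitude spectrum of the image (penalizes lost high-frequency detail)
    return np.abs(np.fft.fft2(x))

def total_loss(pred, target, w_e=0.05, w_f=0.01):
    # w_e and w_f are assumed weights, not the paper's values
    return (l1(pred, target)
            + w_e * l1(edge_term(pred), edge_term(target))
            + w_f * l1(fft_term(pred), fft_term(target)))

target = np.arange(64, dtype=np.float64).reshape(8, 8)
print(total_loss(target, target))               # perfect restoration -> 0.0
print(total_loss(target, target + 3.0) > 0.0)   # any error -> positive loss
```

Each term supervises a different aspect of the reconstruction, which matches the ablation finding that the combined objective performs best.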

    5 Conclusions and Future Work

    In this study, we propose a method named MIDNet to address the issue of blurry images of material microstructures. MIDNet is an end-to-end deblurring network that enhances the clarity of blurry material microstructure images by incorporating an attention mechanism and introducing multiple loss functions. Thorough qualitative and quantitative analysis indicates that MIDNet surpasses other approaches in the quality of reconstructed images, marked by enhanced clarity and texture richness. Ablation experiments also showcase the effectiveness of the different loss functions within the network. Our work has the potential to encourage the wider use of deep learning within materials science and to promote the mutually beneficial partnership between computer science and materials science.

    The dataset utilized in this study comprises actual experimental material microstructure images. However, we acknowledge that the dataset size is relatively limited, which may affect the accuracy of image deblurring when extrapolating our method to diverse materials. To address this limitation, our future research will emphasize the collection of SEM images covering a broader range of alloy materials, thereby expanding the dataset. Through these efforts, we aim to enhance the performance and adaptability of our model for deblurring microstructural images across various materials. Our future work will also focus on developing a video deblurring method tailored to the demands of materials science applications. Given the unique challenges posed by the complex and dynamic nature of material structures, a robust and effective video deblurring method would be of great value in enabling researchers to visualize and analyze material properties more accurately.

    Acknowledgement: The authors especially acknowledge Prof. Liwu Jiang of the National Center for Materials Service Safety.

    Funding Statement: The current work was supported by the National Key R&D Program of China (Grant Nos. 2021YFA1601104, 2022YFA16038004 and 2022YFA16038002) and the National Science and Technology Major Project of China (No. J2019-VI-0004-0117).

    Author Contributions: Study conception and design: J.X. Wang, H.Y. Yu and D.B. Sun; data collection: J.X. Wang, Z.Y. Li and P. Shi; analysis and interpretation of results: J.X. Wang, P. Shi. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The dataset for the experiments is uploaded to the author's GitHub repository: https://github.com/woshigui/MIDNet.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
