
    Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation

    2024-03-25 09:30:16
    Chinese Physics B, 2024, No. 3

    Xiaoyu Cheng(程曉昱), Chenxue Xie(解晨雪), Yulun Liu(劉宇倫), Ruixue Bai(白瑞雪), Nanhai Xiao(肖南海), Yanbo Ren(任琰博), Xilin Zhang(張喜林), Hui Ma(馬惠), and Chongyun Jiang(蔣崇云)

    1 College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China

    2 School of Physical Science and Technology, Tiangong University, Tianjin 300387, China

    Keywords: two-dimensional materials, deep learning, data augmentation, generative adversarial networks

    1. Introduction

    Atomically thin two-dimensional materials exhibit intriguing physical properties such as the valley degree of freedom, single-photon emission, and strong excitonic effects, which open up a roadmap for next-generation information devices.[1-4] At present, conventional methods for preparing single- and few-layer two-dimensional materials include mechanical cleavage, liquid-phase exfoliation, and gas-phase synthesis,[5-7] with mechanical cleavage being widely adopted for its simple preparation procedure and high crystal quality. However, the obtained flakes are random in size, location, shape and thickness due to the uncontrollable interactions between the adhesive tapes and the layered crystals.[8-10] Under an optical microscope, single-layer, few-layer and bulk flakes can be distinguished by their optical contrast, which has become a preliminary method for fabricating two-dimensional material devices. However, determining the thickness of two-dimensional materials with the naked eye is inefficient and requires repetitive work by expert operators, limiting the development of two-dimensional devices. To save manual effort and improve efficiency, computer vision is being investigated as a substitute for recognizing flakes of different thicknesses.

    Traditional rule-based image processing methods, such as edge detection, color contrast analysis, and threshold segmentation, can be cost-effective provided that multiple adjustable parameters reach their thresholds simultaneously to obtain the best-contrast images, which makes them inefficient for recognizing thousands of images.[11-16] Meanwhile, deep learning allows computers to distinguish between thin and thick layers automatically and is insensitive to environmental changes, which is important for automation applications. Many contemporary deep learning algorithms, including object detection, semantic segmentation and instance segmentation, can be employed to recognize two-dimensional material flakes.[17-19] However, a major challenge with deep learning-based approaches is that good performance depends heavily on the high quality and vast quantity of the dataset used to train the model.[20,21] Meanwhile, a large number of raw images are difficult to obtain due to the low yield of flakes by mechanical cleavage.[22] Sharing microscopic images may extend the training dataset, but datasets acquired under different microscope and camera conditions cannot be merged directly, making collaborative efforts much less efficient.[23] Furthermore, the single- and few-layer targets account for a small portion of the total pixels compared with the bulk and the substrate background. This inter-category imbalance reduces the identification accuracy for thin layers, which also calls for a large dataset.[24]

    In this work, we address the issue of data scarcity by training a StyleGAN3 network to generate synthetic images and expand the dataset. Flakes of different thicknesses are identified by training a DeepLabv3Plus network. During training, a semi-supervised mechanism is introduced to generate pseudo-labels, considerably reducing the cost of manual labeling. We enhance the model recognition accuracy to more than 90% using only 128 real images plus synthetic images supplemented to the dataset, demonstrating that adding synthetic images reduces overfitting and improves recognition accuracy. Our work eases the limitations imposed by the scarcity of training data for two-dimensional material recognition while improving recognition accuracy, which could help in further exploring the exotic properties of two-dimensional materials and speed up the manufacture of layered-material devices at low cost.

    2. Methods

    2.1. Principle and process

    We undertake the work of recognizing two-dimensional materials by machine learning with dataset augmentation in three steps (Fig. 1). With WSe2 microscopic images as an empirical instance, the first step entails the annotation of raw data. Firstly, a total of 161 microscopic images are collected, manually labeled and divided into two groups of 128 (group A) and 33 (group B) images, respectively. Color space transformation and edge detection are employed as preprocessing for more accurate image classification. The 128 images in group A are then used to train the generative and segmentation networks to produce virtual images and pseudo-labels, whereas the remaining 33 images in group B are used to evaluate the accuracy of the trained model. Using 128 images to train the network balances two factors: the difficulty of collection and the quality of generated images needed to improve segmentation accuracy. In the second step, StyleGAN3 is employed as the generative network, which is trained to learn the distributional features of the raw images in group A and generates virtual images. Under optimized generation conditions, the created virtual and real images are indistinguishable to the naked eye. In the third step, DeepLabv3Plus operates as the segmentation network (see the rationale and structure of the model in the supporting information, Sections 1 and 2),[27,28] which is trained on the 128 manually labeled images in group A and then serves to recognize the images generated in the second step and create corresponding pseudo-labels. The synthetic images and pseudo-labels are sorted by edge detection, with the visually realistic synthetic images and the sharp-edged pseudo-labels being used to expand the dataset. In this step, a semi-supervised mechanism[25,26] is used to reduce the labor cost of pixel-level annotation. To improve the recognition accuracy, we iteratively train the segmentation model by adding 128 virtual synthetic images at a time. Meanwhile, the 33 images in group B are recognized by the network trained at each iteration, and the intersection over union (IoU)[27] between the recognition results and the previously labeled results is calculated to estimate the recognition accuracy.

    Fig. 1. Methodology and procedure. Three steps are included in the whole work, depicted by the enclosed blue dashed frames. Step 1: a total of 161 microscopic images of the mechanically cleaved WSe2 flakes are collected and manually annotated, then divided into groups A and B. Step 2: the 128 images in group A are randomly selected as input to StyleGAN3 and used to produce synthetic images. Step 3: the 128 original images together with their corresponding labels in group A are used to train a preliminary DeepLabv3Plus segmentation network. Subsequently, the segmentation network is employed to predict the synthetic images and obtain pseudo-labels. The pseudo-labels and synthetic images are then filtered and used to supplement the dataset for retraining the segmentation network. Finally, the 33 images in group B are recognized and the IoU is calculated to evaluate the recognition accuracy.

    As the capacity of the dataset gradually increases from 128 to 640 images, the IoU increases from 88.59% to 90.38%. This improvement can be observed in the segmentation results, which display sharper edges and fewer misclassifications of contamination. Our work demonstrates that training StyleGAN3 models can effectively generate visually realistic virtual images of two-dimensional materials and that using virtual images for data augmentation improves recognition accuracy in segmentation. Furthermore, unambiguous boundaries and improved recognition accuracy will minimize misclassification and allow precise edge alignment during material stacking at scale, both of which are crucial for device performance.[28-31] The next sections describe each stage in further detail.

    2.2. Acquisition and annotation of datasets

    To give the dataset initially used for training more accurate labels, we use color space transformation and edge detection techniques. Figure 2 shows the process and results of acquisition, preprocessing and labeling of the dataset. We collect 161 microscopic images of WSe2 thin-layer flakes under different lighting conditions to improve the generalization ability of the model. The magnification of the microscope is 50×. Figure 2(a) illustrates the process of manually annotating the 161 images assisted by traditional image processing, which improves the accuracy of thin-layer edge labeling and reduces the manual cost of pixel-level labeling of random shapes. However, thin layers in the images are not easily identified by the naked eye for manually labeling positions and edges (e.g., Fig. 2(c)). Therefore, we preprocess the images using color space transformation and edge detection before annotation. Figures 2(b)-2(d) show the collected WSe2 microscopic images. When the images are transformed from RGB into HSV space, the single- and few-layers become more obvious (Figs. 2(e)-2(g)). We denote 1-10 layers as thin (few) layers, and more than 10 layers as thick layers. To label accurately, we grayscale the original images, perform edge detection using the Canny operator, and subsequently apply median filtering (Figs. 2(h)-2(j)).[32,33] Labeling assisted by traditional image processing benefits the entire work, since it helps to recognize images more accurately.
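The preprocessing chain above (grayscale conversion, edge detection, median filtering) can be sketched with plain NumPy. This is a minimal illustration, not the paper's implementation: a gradient-magnitude detector stands in for the Canny operator (which is typically called via a library such as OpenCV), and the threshold value is an assumed parameter.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def sobel_edges(gray, threshold=0.2):
    """Gradient-magnitude edge map: a simplified stand-in for the
    Canny operator used in the paper. `threshold` is illustrative."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # central differences, x
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # central differences, y
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()

def median_filter3(img):
    """3x3 median filter to suppress speckle noise, as in Figs. 2(h)-2(j)."""
    n, m = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    windows = [padded[i:i + n, j:j + m] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)
```

A sharp step in intensity (such as a flake boundary against the substrate) produces a band of `True` pixels in the edge map, which is what makes manual outlining easier.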

    Fig. 2. Data acquisition and preprocessing. (a) Schematic diagram of the image preprocessing pipeline. (b)-(d) Original microscopic images of WSe2 flakes. (e)-(g) Images after the RGB to HSV color space conversion. (h)-(j) Images after grayscale processing, edge detection with the Canny operator and median filtering. Few layers are outlined manually in red. (k)-(m) Labels for few layers (red), bulk (green) and background (black).

    3. Results and discussion

    3.1. Generating synthetic images by StyleGAN3 training

    Sufficient data is crucial for improving model generalization and reducing the risk of overfitting in machine learning. We choose the StyleGAN3 model to generate virtual images given that it can effectively control the features of the synthetic images and produce high-quality images. To evaluate the quality of the synthetic images, the Fréchet inception distance (FID) is adopted to quantitatively evaluate the similarity between the real and synthetic images,

    FID = ||μA − μB||² + Tr(ΣA + ΣB − 2(ΣAΣB)^(1/2)),  (2)

    where μA (μB) represents the mean of the feature vectors extracted from the real (generated) image set using the inception network, and ΣA (ΣB) represents the covariance matrix of those feature vectors. Equation (2) indicates that the smaller the FID value, the more similar the distributions of the generated and real images.[34] We compute the FID every 40 iterations. Figure 3(a) illustrates that the FID gradually decreases as training progresses and stabilizes at 46.17 after 1720 iterations. Additionally, synthetic images at iterations 160, 400, and 840 are shown with FID scores of 140.25, 94.25, and 70.49, respectively. At these four points, we can visually observe that the synthetic images become more similar to the real images as the FID decreases. Further examples of synthetic images that demonstrate the gradual approximation of the synthetic images to the real ones are presented in Fig. S3. This finding suggests that StyleGAN3 training is an effective approach for generating high-quality images of 2D materials from limited original images. The FID score almost stops declining after 1720 iterations, implying potential overfitting of the model. The FID is not particularly low due to the limited raw data; however, the synthetic images are sufficient to train the segmentation model. Even though not all synthetic images are realistic enough, we can obtain enough sample data by utilizing simple edge detection and a little manual sorting.
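As a concrete reference, the FID of Eq. (2) can be computed from two sets of feature vectors with NumPy alone. The eigendecomposition-based matrix square root below is a minimal sketch of the formula; production pipelines normally use `scipy.linalg.sqrtm` and Inception-v3 feature extraction, which are assumed rather than shown here.

```python
import numpy as np

def psd_sqrt(mat):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(mat)
    w = np.clip(w, 0.0, None)          # guard tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def fid(feats_a, feats_b):
    """Frechet inception distance between two feature sets (rows = samples):
    FID = ||mu_A - mu_B||^2 + Tr(S_A + S_B - 2 (S_A S_B)^(1/2))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    s_a = np.cov(feats_a, rowvar=False)
    s_b = np.cov(feats_b, rowvar=False)
    # Tr((S_A S_B)^(1/2)) computed through the symmetric product
    # S_A^(1/2) S_B S_A^(1/2), which has the same eigenvalues.
    root_a = psd_sqrt(s_a)
    cross = psd_sqrt(root_a @ s_b @ root_a)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(s_a + s_b - 2.0 * cross))
```

Two identical feature sets give an FID of zero, and the score grows as the generated-feature distribution drifts from the real one, which is why a falling FID curve in Fig. 3(a) signals increasingly realistic synthetic images.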

    To demonstrate the advantages of StyleGAN3 in generating high-quality images, we compare it with traditional data augmentation. Figures 3(b)-3(d) show that when the input is rotated, the output of StyleGAN3 rotates as well, and elements previously outside the frame are drawn in as it rotates. The resulting images resemble rotating the sample under a microscope. In contrast, traditional data augmentation can only reduce the size (Fig. 3(e)) or crop the image (Fig. 3(f)) to keep the image size constant when rotating it, which leads to information loss (see supporting information Section 5 for details). In comparison, StyleGAN3 results provide more information for training recognition models and, to some extent, improve recognition accuracy. Subsequent experiments provide unambiguous evidence for this.
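The information loss of the rotate-then-crop strategy can be quantified with a back-of-envelope geometric argument (our own illustration, not a figure from the paper): after rotating a unit-square image by θ, the largest axis-aligned square containing no empty corners has side 1/(cos θ + sin θ), so rotating by 45° and cropping discards half of the pixels.

```python
import numpy as np

def retained_area_fraction(theta_deg):
    """Fraction of image area kept when a square image is rotated by
    theta and then center-cropped to the largest axis-aligned square
    with no empty corners (the 'rotate then crop' strategy of Fig. 3(f))."""
    t = np.radians(theta_deg % 90.0)
    side = 1.0 / (np.cos(t) + np.sin(t))   # inscribed-square side, unit input
    return side ** 2
```

This loss is exactly what GAN-based augmentation avoids: the generator synthesizes plausible content for the regions a crop would discard.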

    Fig. 3. Training process and results of StyleGAN3. (a) Evaluation of synthetic images on StyleGAN3. The red dots denote the FID at iterations 160, 400, 840, and 1720, indicating the quality of the current synthetic images. (b)-(d) Changes in the output images of StyleGAN3 when the input is rotated. (e) Traditional data augmentation of rotated images: rotation followed by size reduction. (f) Traditional data augmentation of rotated images: rotation followed by cropping.

    3.2. Recognizing images by DeepLabv3Plus training

    In addition to images, training the segmentation network also requires corresponding labels. Due to the huge amount of synthetic image data, a semi-supervised approach plays an important role in reducing the labor cost of label production. Therefore, we first train the segmentation network to recognize the images and generate pseudo-labels. Then, the synthetic images and pseudo-labels of poor quality are filtered out by sorting, and the remaining high-quality images and pseudo-labels are added to the dataset. The network is trained with images in group A, and its recognition accuracy is then evaluated with images in group B. The recognition accuracy is estimated by the IoU, which is expressed by

    IoU = |A ∩ B| / |A ∪ B|,

    where A and B represent the annotated target region and the predicted target region of the segmentation model, respectively. It can be seen in Fig. 4(a) that the recognition accuracy of the background and thick layers is always higher than that of the thin layers. We extract the IoU for each category independently and use the IoU of the thin layers as the model assessment indicator, rather than the average IoU over all categories, because thin layers are widely used in optoelectronic devices. The best recognition accuracy for thin layers trained on the 128 images in group A reaches only 88.59% (Fig. 4(a)). The generated images and pseudo-labels are shown in Fig. 4(b). The left column shows the synthetic images generated by a model with an FID of 46.17, while the middle column presents the pseudo-labels, i.e., the predicted segmentation of the synthetic images from the initial training of the segmentation model. Given that the synthetic images are random and the accuracy of the segmentation model is not high, we perform edge detection on the synthetic images and illustrate the results in the right column. It can be observed that the three columns of images coincide well, and thus the synthetic images and pseudo-labels are added to the dataset for further training. Since the input is a set of noise vectors, we can theoretically generate an unlimited number of images, dramatically reducing the cost of real-image acquisition. Together with edge detection and sorting for labeling, which is much more efficient than manual pixel-level labeling, this method can easily exclude images with poor generation or segmentation results.
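The per-category IoU used as the evaluation metric follows directly from its definition. In this sketch, label and prediction are integer class maps; the encoding (0 = background, 1 = thin layer, 2 = thick layer) is an assumption for illustration, matching the three label colors of Fig. 2.

```python
import numpy as np

def per_class_iou(label, pred, num_classes):
    """IoU = |A ∩ B| / |A ∪ B| per category, where A is the annotated
    region and B the predicted region for that class."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(label == c, pred == c).sum()
        union = np.logical_or(label == c, pred == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious
```

Reporting the thin-layer entry of this list separately, as the paper does, prevents the large background and bulk regions from masking errors on the few-layer pixels that matter for devices.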

    To boost its performance, we repeatedly train the DeepLabv3Plus network by expanding the dataset in increments of 128 images. Since the Adam optimizer is employed, the training process is somewhat stochastic. We therefore train on each dataset three times from scratch, using the same training parameters, and take the optimal IoU value from each run. As shown in Fig. 4(c), the optimal IoU reaches 90.38% with dataset expansion, which is around 1.79% higher than that of the model trained only on real data. As the dataset capacity increases from 256 to 640, the IoU increases; the corresponding recognition results are presented in Fig. 4(d). A good overlap between the segmented mask and the real images of 2D flakes can be seen from the correctly colored thin layers (red) and thick layers (green). The detection process effectively removes pollutants such as bubbles and tape residues, classifying them as background with minimal misjudgment (red arrows in Fig. 4(d)). As the dataset is gradually expanded by the adversarial synthetic images, the recognition of complex layered structures becomes more precise, the segmentation of thin-layer boundaries becomes clearer, and the frequency of misjudgments gradually decreases (white box in Fig. 4(d)). This finding demonstrates the effectiveness of the adversarial network for dataset expansion and recognition-accuracy improvement with limited real data. The improvement in recognition is mainly at the edges of few-layer samples, which account for a low percentage of pixels and are thus not remarkable in numerical terms. However, regions in Fig. 4(d) that were previously misclassified are assigned to the correct class with this improvement. This improvement in the recognition of few-layer details is more valuable than that for thick layers. In the automated production of two-dimensional material devices at large scale, this reduction in misclassification will be significant for stacking high-quality devices and reducing production waste.
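The retraining protocol above (grow the dataset in steps of 128 up to 640 images, retrain three times from scratch at each size because Adam makes each run stochastic, and keep the best thin-layer IoU per size) can be sketched as a generic loop. Here `train_and_eval` is a hypothetical callable standing in for a full DeepLabv3Plus training-plus-evaluation run; it is an assumed interface, not the paper's code.

```python
def best_iou_per_size(train_and_eval, base_size=128, step=128,
                      max_size=640, runs=3):
    """For each dataset size, retrain from scratch `runs` times and keep
    the best thin-layer IoU, mirroring the three-run protocol of Fig. 4(c)."""
    results = {}
    for size in range(base_size, max_size + step, step):
        results[size] = max(train_and_eval(size) for _ in range(runs))
    return results
```

Taking the maximum over repeated runs separates the effect of dataset size from run-to-run optimizer noise, which is why Fig. 4(c) reports the optimal trend line over the three histograms.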

    Figure 4(c) demonstrates that our method consistently improves recognition accuracy during the initial stages of data augmentation. The training results under the same conditions using the traditional method of data augmentation are given in Fig. S7. In contrast, the traditional method initially shows some improvement but lacks stability and eventually drops below the previous level as the dataset expands. We attribute this observation to the fact that the traditional method loses image information during the augmentation process, while our approach generates new images by mimicking the existing distribution, providing more information and effectively enhancing model performance. It should be noted that even with image generation-based augmentation, segmentation accuracy may decrease beyond 640 images. Synthetic images can reflect most of the information in real images, but not all of it. Therefore, as the proportion of synthetic images becomes too high, the recognition accuracy on real images decreases. Consequently, it is not possible to keep improving accuracy with this method indefinitely. Nevertheless, this approach allows for a stable increase in recognition accuracy during the early stages without incurring the physical costs associated with material preparation and collection. It is highly practical for researchers aiming to develop machine learning algorithms tailored to their experimental environments and to improve device fabrication efficiency. We demonstrate that the trained network weights can be used directly for the recognition of other two-dimensional materials; the recognition results can be viewed in the supplementary materials (Fig. S9). We suggest fine-tuning the model on a small amount of the pending data before applying the pre-trained weights, since this might yield better recognition results.

    Fig. 4. DeepLabv3Plus training results. (a) IoU versus training epoch in the first training session for different thicknesses of two-dimensional materials and the background. (b) Generated images, pseudo-labels, and edge detection results. (c) IoU as a function of the expanded dataset size. The dashed line shows the optimal changing trend after three rounds of training (blue, orange and green histograms) with the same data volume. (d) From left to right: 1st column, images to be recognized in group B; 2nd-4th columns, recognition results with different accuracies for GAN-augmented dataset sizes of 256, 384, 512, and 640, respectively; 5th column, masks from the 640-image training model overlaid on the original images. Bubbles and contaminants indicated by the red arrows are classified as background in the predicted outcomes. White boxes show improvements in recognition details.

    4. Conclusion

    In conclusion, we demonstrate the feasibility of StyleGAN3 for generating synthetic images of two-dimensional materials and expanding the dataset. We confirm that employing synthetic images for data augmentation aids the recognition of two-dimensional materials and improves the recognition accuracy of the DeepLabv3Plus segmentation network. The proposed data augmentation approach is applicable to a wide range of two-dimensional materials. Our feasible and reliable method, prompted by the demand for scalable production of atomically thin materials, could help explore the intriguing properties of layered materials and enable the rapid, low-cost manufacturing of layered information devices.

    Acknowledgments

    Project supported by the National Key Research and Development Program of China (Grant No. 2022YFB2803900), the National Natural Science Foundation of China (Grant Nos. 61974075 and 61704121), the Natural Science Foundation of Tianjin Municipality (Grant Nos. 22JCZDJC00460 and 19JCQNJC00700), the Tianjin Municipal Education Commission (Grant No. 2019KJ028), and the Fundamental Research Funds for the Central Universities (Grant No. 22JCZDJC00460). C. Y. J. acknowledges the Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin and the Engineering Research Center of Thin Film Optoelectronics Technology, Ministry of Education of China.
