
    Diffractive Deep Neural Networks at Visible Wavelengths

Engineering, 2021, Issue 10

Hang Chen, Jianan Feng, Minwei Jiang, Yiqun Wang, Jie Lin c,*, Jiubin Tan, Peng Jin c,*

    a Center of Ultra-precision Optoelectronic Instrument, Harbin Institute of Technology, Harbin 150001, China

    b Nanofabrication Facility, Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences, Suzhou 215123, China

    c Key Laboratory of Micro-Systems and Micro-Structures Manufacturing, Ministry of Education, Harbin Institute of Technology, Harbin 150001, China

Keywords: Optical computation; Optical neural networks; Deep learning; Optical machine learning; Diffractive deep neural networks

ABSTRACT Optical deep learning based on diffractive optical elements offers unique advantages for parallel processing, computational speed, and power efficiency. One landmark method is the diffractive deep neural network (D2NN) based on three-dimensional printing technology operated in the terahertz spectral range. Since the terahertz bandwidth suffers from limited interparticle coupling and material losses, this paper extends the D2NN to visible wavelengths. A general theory including a revised formula is proposed to resolve the contradictions between wavelength, neuron size, and fabrication limitations. A novel visible light D2NN classifier is used to recognize unchanged targets (handwritten digits ranging from 0 to 9) and targets that have been changed (i.e., targets that have been covered or altered) at a visible wavelength of 632.8 nm. The obtained experimental classification accuracy (84%) and numerical classification accuracy (91.57%) quantify the match between the theoretical design and the fabricated system performance. The presented framework can be used to apply a D2NN to various practical applications and to design other new applications.

    1. Introduction

Deep learning is a type of machine learning method that can be used to achieve data representation, abstraction, and advanced tasks by simulating a multi-layer artificial neural network in a computer [1]. Recent advances in deep learning have attracted much attention. In some fields, deep learning performance has been shown to be superior to that of human experts. Deep learning has revolutionized the fields of artificial intelligence (AI) and computer science, and great advances have been made in these fields. Deep learning has been widely applied to computer vision [2], voice/image recognition [3–5], robotics [6], and other applications [7–9]. However, electronic implementations of deep learning are strongly limited by the von Neumann architecture in terms of processing time and energy consumption [10,11]. In the last several decades, optical information processing, which implements the operations of convolution, correlation, and Fourier transformation in an optical system, has been found to exhibit unique advantages for parallel processing and has been widely investigated [12–17]. Computer-based deep learning has been achieved in optical systems by use of diffractive optical elements, and optical deep learning based on diffractive optical elements has been validated using image classification [18–21]. The terahertz diffractive deep neural network (D2NN) based on three-dimensional (3D) printing technology is the landmark method for optical deep learning [18].

Based on deep learning-based design and error back-propagation methods, a D2NN is trained using a computer. During the training period, each pixel in a diffractive optical element layer acts as a neuron. The transmission coefficients (or complex amplitude distribution) of these pixels are optimized by the computer to control the diffraction of light from the input plane to the output plane so that the required task is performed. When this training phase is complete, the passive diffractive optical element layers can be physically fabricated. By stacking these layers together to form an all-optical network, the diffractive layers can execute the trained function without consuming energy, except for the coherent illumination light that is used to encode the input object's information and the output detectors. Both nonlinear functions, such as a Fourier-space D2NN in which the optical nonlinearity is introduced using ferroelectric thin films [17], and linear functions, such as handwritten digit and fashion product classifiers [18–20] without the use of nonlinear optoelectronic materials, have been performed using a D2NN [18–21].

A 3D-printed terahertz D2NN has proven to be a great success. The promising performance and validity of the D2NN on terahertz platforms have been verified in many studies [18–20]. Despite these advantages, the terahertz band suffers from some well-known limitations in practical applications, such as material losses [22] and limited interparticle coupling [22]. Therefore, there is a growing requirement to revise the terahertz scheme for use with visible or near-infrared [23] wavelengths. Extending the working wavelength from the terahertz band to visible light has the potential to offer novel perspectives [24–29]. However, some restrictions and shortcomings must be addressed to adapt a D2NN from a long wavelength to a short wavelength. Contradictions exist between the working wavelength, neuron size, and fabrication limitations. A shorter wavelength requires a smaller neuron size, which makes fabrication more difficult, and the traditional maximal half-cone diffraction angle theory cannot be used to overcome this contradiction [18–20].

In this work, a new general formula, together with a detailed analysis of the D2NN framework and the different parameters of the design space, is proposed to overcome the abovementioned contradiction. The new formula is obtained by introducing the related variables into the traditional maximum half-cone diffraction angle theory. It provides a quantitative analysis of how to extend the D2NN to visible light sources such as a helium–neon (He–Ne) laser. A series of simulation analyses is designed to verify the proposed formula. As an example, a D2NN classifier is used with a subset of handwritten digits from the Mixed National Institute of Standards and Technology (MNIST) training dataset to reduce the training time and training complexity. The digital classifier is used to recognize handwritten digits from 0 to 9. In this situation, a phase-only, five-layer D2NN with a total of 32 000 neurons is used to prevent overfitting. The numerically blind testing accuracy for different cases on a new test dataset is used to verify the proposed formula.

Based on the proposed general formula, a novel visible light D2NN classifier was designed and experimentally verified. The working light was a He–Ne laser with a wavelength of 632.8 nm. In contrast with existing D2NN classifiers [18–20], the proposed visible light D2NN classifier can classify handwritten digits from 0 to 9 even for a changing target, such as digits that have been covered or altered, which are two common cases. The identification capability of the visible light D2NN was improved by extending the training dataset. For this test, the 55 000 handwritten digits in the MNIST training dataset were extended to a new training dataset of 80 000 images of handwritten digits. The additional 25 000 handwritten digits, which were covered or altered, formed the deformed MNIST training dataset. Using the five-layer, phase-only D2NN with five million neurons, a numerically blind testing accuracy of 91.57% was achieved on the new test dataset of 11 000 images. The visible light D2NN was fabricated using a multi-step photolithography-etching process on a silicon dioxide (SiO2) substrate. The inputs for the experiment were 50 transmissive digital objects that were fabricated using micro-fabrication technology. A blind testing accuracy of 84% was achieved. The experimental classification accuracy (84%) and the numerical classification accuracy (91.57%) quantify the match between the theoretical design and the fabricated system performance. The relatively small reduction in the performance of the experimental network compared with the numerical testing proves the validity of the design theory for the visible light D2NN.

By using these systematic advances for designing a D2NN, the reported method and its improvements set the state of the art for visible light D2NNs. Deep neural networks are sometimes called black boxes [30], as the hidden layers can be difficult to interpret to explain how data are processed. The proposed D2NN may provide some insight into this issue. Additionally, understanding the interaction between biological neurons in the human brain is of fundamental interest when building deep neural networks [31]. The proposed D2NN may also provide insight into the current understanding of the interactions between biological neurons. The presented framework and theory can be used to apply D2NNs to various practical applications for human–computer interaction equipment and AI interaction devices and can promote progress in biology and computational science.

    2. Material and methods

    2.1. Deep learning and D2NN architecture

The proposed visible light D2NN follows optical diffraction theory and a deep learning structure [32–35]. Compared with traditional deep learning systems, the D2NN system has some differences: ① the D2NN system obeys optical diffraction theories, such as Huygens' principle and Rayleigh–Sommerfeld diffraction [36], for the forward-propagation model; ② the training loss function used in this system (i.e., the softmax-cross-entropy (SCE) loss function) is based on the light power incident on the different detector regions in the output plane. Therefore, based on the error calculated with respect to the target output and according to the desired loss function, the network structure and its neuron phase values can be optimized using an error back-propagation algorithm.


    Generally, in deep learning, the phase values for the neurons in each layer are iteratively adjusted (or trained) to perform a specific function by feeding training data to the input layer followed by computing the network’s output through optical diffraction. Before a specific analysis of the forward-propagation model within a D2NN, some definitions are provided for the lth layer as follows (where l represents the layer number of the network):

where ∘ denotes the Hadamard (element-wise) product operation.

After the secondary diffractive wave W^(l+1) propagates from diffractive layer l to diffractive layer l + 1, a phase shift is introduced with the corresponding phase factor as follows:
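The forward-propagation equations themselves did not survive extraction. The following minimal NumPy sketch illustrates the model described above under stated assumptions: angular-spectrum free-space propagation (equivalent to Rayleigh–Sommerfeld diffraction for the propagating spectrum) between layers, a phase-only modulation at each layer applied as a Hadamard product with exp(jφ), and a uniform layer spacing. Function and variable names are illustrative and are not taken from the authors' code.

import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    """Propagate a complex field over `distance` with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)                 # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
    H = np.exp(1j * kz * distance)                       # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(input_amplitude, phase_masks, wavelength, pixel_size, distance):
    """Pass an amplitude-encoded object through phase-only diffractive layers."""
    field = input_amplitude.astype(np.complex128)
    for phase in phase_masks:                            # one trained phase mask per layer
        field = angular_spectrum_propagate(field, wavelength, pixel_size, distance)
        field = field * np.exp(1j * phase)               # Hadamard product with the phase factor
    field = angular_spectrum_propagate(field, wavelength, pixel_size, distance)
    return np.abs(field) ** 2                            # intensity at the output plane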

    2.2. TensorFlow-based design for a D2NN and processing flow

The proposed visible light D2NN was realized using Python (v3.7) and the TensorFlow (v1.12.0, Google Inc., USA) framework. The proposed D2NN system was trained for 20 epochs using a desktop computer with a GeForce GTX 1080 Ti graphical processing unit (GPU), an Intel® Core™ E5-2650 central processing unit (CPU) @ 2.00 GHz, and 128 GB of random access memory (RAM), running the Windows 10 operating system (Microsoft Corporation, USA).

The trainable parameters in a diffractive neural network are the modulation values for each layer, which, here, were optimized using the back-propagation method of the adaptive moment estimation (Adam) optimizer with a learning rate of 10⁻³. Furthermore, the number of network layers and the axial distance between these layers are also design parameters. The training time for a five-layer visible light D2NN to classify both unchanged and changed (covered or altered) handwritten digits was approximately 20 h.
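As a concrete illustration of this training setup, the sketch below wires trainable phase masks, the detector-region SCE loss from Section 2.1, and the Adam optimizer (learning rate 10⁻³) together in TensorFlow 1.x style, matching the v1.12.0 environment described above. The array sizes, layer spacing, and detector layout are assumed placeholder values, not the authors' exact parameters.

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API, matching the authors' v1.12.0 environment

N, NUM_LAYERS, NUM_CLASSES = 200, 5, 11             # illustrative sizes only
WAVELENGTH, PIXEL, DISTANCE = 632.8e-9, 1e-6, 4e-2  # assumed geometry for this sketch

def transfer_function(distance):
    """Angular-spectrum transfer function, precomputed as a complex constant."""
    fx = np.fft.fftfreq(N, d=PIXEL)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz = 2 * np.pi / WAVELENGTH * np.sqrt(np.maximum(
        1.0 - (WAVELENGTH * FX) ** 2 - (WAVELENGTH * FY) ** 2, 0.0))
    return tf.constant(np.exp(1j * kz * distance).astype(np.complex64))

H = transfer_function(DISTANCE)
input_amplitude = tf.placeholder(tf.complex64, [N, N])             # amplitude-encoded digit
labels = tf.placeholder(tf.float32, [NUM_CLASSES])                 # one-hot target class
detector_masks = tf.placeholder(tf.float32, [NUM_CLASSES, N, N])   # binary detector regions

field = input_amplitude
for i in range(NUM_LAYERS):
    # Trainable phase values, squashed to [0, pi] with a sigmoid (see Section 2.2).
    phi = np.pi * tf.sigmoid(tf.get_variable("phi_%d" % i, shape=[N, N]))
    field = tf.ifft2d(tf.fft2d(field) * H)                         # propagate to layer i
    field = field * tf.exp(tf.complex(tf.zeros_like(phi), phi))    # phase-only modulation
field = tf.ifft2d(tf.fft2d(field) * H)                             # propagate to output plane
intensity = tf.abs(field) ** 2

# Power collected by each detector region serves as the class logit for the SCE loss.
logits = tf.reduce_sum(intensity[None, :, :] * detector_masks, axis=[1, 2])
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels[None, :], logits=logits[None, :])
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)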

The input digit objects were encoded into the D2NN via the input amplitude and were fabricated by laser direct writing (LDW). The target objects were fabricated on a soda glass substrate. The glass substrate was first cleaned using acetone and isopropyl alcohol. The clean substrate was coated with a layer of chromium (Cr) a few hundred nanometres thick using electron beam evaporation. After spin-coating positive photoresist and a prebake process, the handwritten digit patterns were exposed using LDW technology. The exposed resist was stripped using a developer, and the uncovered Cr was removed using a chromium etchant. Finally, any remnant resist was cleaned using acetone and isopropyl alcohol.

To facilitate fabrication of the visible wavelength D2NN, a sigmoid function was applied to limit the phase value of each neuron to 0–π, which enabled the neurons to be easily fabricated using a traditional multi-step photolithography-etching method. Before processing, the neuron phase values Φ need to be converted into a relative height map Δh (Δh = λΦ/(2πΔn), where Δn is the refractive index difference between the fabricated substrate and air). The D2NN layers were fabricated on a SiO2 substrate using a cleaning process similar to that described above. After cleaning, the SiO2 substrate was pre-treated with hexamethyldisilazane to change its surface activity and enhance the adhesion between the photoresist and the substrate. After spin-coating photoresist and exposure, the exposed resist was stripped using a developer. Then, after an oxygen plasma sizing treatment, magnetic neutral loop discharge etching was used. This process was repeated until the D2NN layer structure was achieved. More processing details can be found in Figs. S1 and S2 in Appendix A.
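A small sketch of this phase-to-height conversion follows, assuming fused SiO2 with a refractive index of about 1.457 at 632.8 nm and an eight-level multi-step etch; both values are illustrative assumptions rather than the authors' process parameters.

import numpy as np

def phase_to_height(phi, wavelength=632.8e-9, n_substrate=1.457, n_air=1.0):
    """Convert neuron phase values (radians) to a relative etch-depth map,
    Delta_h = lambda * Phi / (2 * pi * Delta_n), as stated in the text."""
    delta_n = n_substrate - n_air
    return wavelength * phi / (2 * np.pi * delta_n)

def quantize_heights(height_map, num_levels=8):
    """Round the continuous height map to the discrete depths reachable by a
    multi-step photolithography-etching process (num_levels is an assumption)."""
    step = height_map.max() / (num_levels - 1)
    return np.round(height_map / step) * step

phi = np.pi * np.random.rand(200, 200)          # phases limited to [0, pi] by the sigmoid
heights = quantize_heights(phase_to_height(phi))
print(heights.max())                            # about 0.69 um for a full pi phase step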

    3. Results

By using a multi-step photolithography-etching process on a SiO2 substrate, the D2NN was fabricated as five layers of diffractive optical elements, which were mounted as shown in Fig. 1. The identification capability of the visible light D2NN was improved by extending the target dataset for the training period. For a changing target, for example, a target that has been covered or altered, existing deep learning systems will falsely identify the target [18–20], even with sufficient object recognition accuracy improvements [20]. Therefore, our D2NN was trained as a digit classifier to perform automated classification of handwritten digits. The designed D2NN can classify unchanged number targets from 0 to 9 as well as targets that have been changed (by being covered or altered), as shown in Fig. 1(a). For these tasks, a phase-only transmission five-layer D2NN was designed by training on 80 000 images, comprising 55 000 unchanged handwritten digits obtained from the MNIST training dataset and 25 000 changed handwritten digits (i.e., covered or altered digits) derived from the deformed MNIST training dataset. The input digits were encoded into the D2NN based on the input amplitude. The diffractive neural network was trained to map the input digits to eleven detector regions, which were marked by different numbers, as shown in Fig. 1(a). The unchanged input digits from 0 to 9 were mapped to the No. 0 to No. 9 detector regions, respectively. The changed input digits were all mapped to the No. X detector region. These detectors are also shown in Fig. 1(a). The classification criterion was to find the detector with the maximum optical signal.
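The classification criterion described above (pick the detector region that collects the most optical power) can be written compactly as follows; the detector-mask layout in this snippet is a stand-in, not the exact geometry of Fig. 1(a).

import numpy as np

CLASS_NAMES = [str(d) for d in range(10)] + ["X"]   # detectors No. 0 ... No. 9 and No. X

def classify(output_intensity, detector_masks):
    """Return the predicted class: the detector region with the maximum integrated signal.
    output_intensity: (N, N) intensity at the D2NN output plane.
    detector_masks:   (11, N, N) binary masks marking the detector regions."""
    signals = (detector_masks * output_intensity[None, :, :]).sum(axis=(1, 2))
    return CLASS_NAMES[int(np.argmax(signals))], signals

# Usage with stand-in data (the real masks follow the layout in Fig. 1(a)):
rng = np.random.default_rng(0)
intensity = rng.random((64, 64))
masks = np.zeros((11, 64, 64))
for k in range(11):
    masks[k, 5 * k:5 * k + 4, 10:20] = 1.0          # illustrative, non-overlapping regions
label, signals = classify(intensity, masks)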

Once the training was completed, the improved D2NN digit classifier was numerically tested using 11 000 additional images, which were not used in the training image sets and comprised 10 000 unchanged handwritten digits obtained from the MNIST test dataset and 1000 deformed handwritten digits (i.e., covered or altered digits) derived from the deformed MNIST test dataset. The improved system achieved a blind classification accuracy of 91.57%.

Fig. 1. Schematic diagram and experimental setup of the visible light D2NN. (a) Schematic diagram of the classifier used for the unchanged handwritten digit targets from 0 to 9 and the changed handwritten digit targets, such as the covered digit 7 and the altered digit 5. The spatial distribution of the detectors is also shown. (b) Numerical phase values for the neurons of the five layers L1, L2, L3, L4, and L5. (c) The experimental schematic. (d, e) The experimental setup. CCD: charge coupled device.

For the numerical testing of the 11 000 test images, the classification accuracy of the designed five-layer D2NN was determined to be 91.57%. The confusion matrix is given in Fig. 2(a) and shows the details and distribution of the correctly identified examples and the incorrectly identified examples. For the 50 digital objects fabricated by LDW, the experimental blind testing accuracy was found to be 84%. The relatively small reduction in the performance of the experimental network compared with the numerical testing indicates that the design theory is correct. The confusion matrix in Fig. 2(b) shows the experimental details for examples of correct and incorrect identification. A CCD with a specially designed light barrier was applied to each illuminated input object to obtain the D2NN output. The transmission regions of the light barrier correspond to the detector positions from No. 0 to No. X, respectively; the remaining regions are opaque. The first step in this test was to assess the recognition capability for the unchanged and changed handwritten test numbers. A handwritten 3, an altered handwritten 3, a handwritten 4, and a covered handwritten 4 were chosen as the input objects, as shown in Fig. 3(a). The simulated results and the experimental results in Fig. 3(b) indicate that the visible light D2NN can easily classify the deformed object inputs. As shown in Fig. 3(c), the energy distribution shows that the system can identify the maximum optical signal at the correct detector. The second step in this test was to use four different forms of the handwritten digit 6 as input objects, as shown in Fig. 4. The simulated results and the experimental results in Fig. 4(c) demonstrate that the fabricated diffractive neural network and its inference capability are valid. The average intensity distribution at the output plane of the visible light D2NN converges the maximum input energy to the detector assigned to the corresponding digit.

Fig. 2. Confusion matrices for the simulated and experimental results. (a) Confusion matrix for the simulated results. Numerical testing of the five-layer D2NN design achieves a classification accuracy of 91.57% over approximately 11 000 different test images. (b) Confusion matrix for the experimental results obtained for 50 different handwritten digits prepared by LDW. The classification accuracy is approximately 84%.

Fig. 3. Handwritten digit classifier for the visible light D2NN. (a) Objects under an optical microscope, including a handwritten 3, an altered handwritten 3, a handwritten 4, and a covered handwritten 4. Amp: amplitude. (b) Simulated results and experimental results showing that the handwritten digit classifier D2NN successfully classifies handwritten input digits based on 11 different detector regions at the output plane of the network, each corresponding to one digit. As an example, the outputs for the handwritten inputs 3 and 4 are focused onto the No. 3 and No. 4 detectors, as indicated by the white arrows. The altered and covered handwritten inputs of 3 and 4 are all indicated by the No. X detector. Max: maximal. (c) Energy distribution percentage for our experimental results and simulated results, which demonstrates the success of the fabricated diffractive neural network and its inference capability.

In summary, the proposed D2NN illuminated by a He–Ne laser was demonstrated to successfully recognize unchanged targets (from 0 to 9) and changed targets (i.e., targets that are covered or altered) at a visible wavelength of 632.8 nm. Additionally, the proposed visible D2NN system was shown to have a transfer learning ability, as shown in Fig. 5. When the laser is passed directly into the D2NN system without passing through any handwritten digit, an existing D2NN system [18–20] will still diffract light to a digital detector, meaning that the incident light is misjudged to be a number. When the laser is directly incident onto the proposed D2NN system, as shown in Fig. 5(a), the proposed visible light D2NN system focuses the incident light onto the No. X detector, which indicates that the input is not a valid member of the classification set. The experimental results shown in Fig. 5(b) show strong agreement with the simulated results.

    The demonstrated D2NN can be used to address the contradictions that occur when adapting from a long wavelength to a visible light source. The quantitative analysis performed here demonstrates the building of a visible light D2NN and addresses the existing contradictions between wavelength, neuron size, and processing difficulty.

Connectivity between layers is a dominant factor that directly influences the diffraction of neurons and therefore determines the information transfer and the inference performance of the D2NN. A fully connected D2NN can achieve sufficient information transfer and optical interconnection between neurons. A fully connected network requires that the diffraction angle of every neuron be large enough to optically cover the diffractive optical element in the next layer. The maximal half-cone diffraction angle of a neuron (φmax), governed by the wavelength and the neuron size, can be qualitatively described for a fully connected structure as follows:

where df is the neuron size.
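The equation referred to above was lost in extraction. A plausible reconstruction, consistent with the numerical example given below (λ = 0.75 mm and df ≈ 0.4 mm yield φmax ≈ 70°), is the standard relation

\[ \sin\varphi_{\max} = \frac{\lambda}{2 d_f}, \qquad \varphi_{\max} = \arcsin\!\left(\frac{\lambda}{2 d_f}\right), \]

which should be read as an assumption rather than the authors' stated formula.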

Fig. 4. Handwritten digit classifier for the visible light D2NN. (a) Objects under an optical microscope, including four different forms of the handwritten 6. (b) Simulated results and experimental results showing that the handwritten digit classifier D2NN successfully classifies different types of handwritten input digits. Four different forms of 6 were all focused onto the No. 6 detector, as indicated by the white arrows. (c) Energy distribution percentage for our experimental results and simulated results, which demonstrates the success of the fabricated diffractive neural network and its inference capability.

Fig. 5. Verification of the transfer learning ability of the proposed D2NN. (a) Simulated results for the light-field distribution in the output plane when a plane wave passes directly into our system without passing through any handwritten digits. Most of the light is concentrated in the No. X region, which indicates that an incorrect number or incorrect case has been identified. (b) Experimental results for the light-field distribution in the output plane. The experimental results are in good agreement with the simulated results.

To obtain a large diffraction angle, it is necessary to have a small neuron size and a long wavelength. In previous work [18], a terahertz source wavelength of 0.75 mm, a neuron size of approximately 0.4 mm, and a maximal half-cone diffraction angle of approximately 70° were used. However, for visible light, the wavelength of the He–Ne laser used here is 632.8 nm, which is approximately 1200 times smaller than the terahertz wavelength. To obtain a 70° half-cone diffraction angle, the neuron size should be less than 330 nm, which is also 1200 times smaller than the neuron size for the terahertz bandwidth. The maximal unit size of 330 nm requires a complicated fabrication technique, which results in a contradiction between wavelength, neuron size, and processing difficulty. Therefore, a general method should be applied when designing visible light D2NNs.
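Under the reconstructed relation above (again, an assumption), the 330 nm figure can be checked directly:

\[ d_f \le \frac{\lambda}{2\sin\varphi_{\max}} = \frac{632.8\ \text{nm}}{2\sin 70^{\circ}} \approx 337\ \text{nm} \approx 330\ \text{nm}, \]

and the wavelength ratio 0.75 mm / 632.8 nm ≈ 1185 reproduces the quoted factor of roughly 1200.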

    For a propagation distance D between two adjacent diffractive layers, the radius R of the diffraction spot of each neuron can be expressed as follows:

The improved formula can be used to quantitatively analyse the D2NN connectivity. A fully connected D2NN has better inference performance when its parameters satisfy Eq. (9). This formula indicates that the connectivity of a D2NN is affected by the wavelength, neuron size, number of neurons, and distance between layers. The contradiction between wavelength, neuron size, and processing difficulty can therefore be alleviated by adjusting the number of neurons and the layer spacing rather than by resorting to a longer wavelength. This formula provides a general case for the application of a D2NN at any wavelength.

The experimental parameters in Figs. 2–5 were chosen using Eq. (9), and the accuracy of this new formula was confirmed by the experimental results. To further verify the proposed formula, a series of simulation analyses was performed. To reduce the training time and complexity, a phase-only, five-layer D2NN was trained as a digital classifier to recognize only handwritten digits from 0 to 9 using a subset of the MNIST training image dataset, as shown in Fig. 6(a). The training set contained 10 000 handwritten digit images (from 0 to 9); there were approximately 1000 randomly selected images of each handwritten digit. These 10 000 input digits were encoded into the amplitude of the input field of the D2NN. The diffractive network was trained to map the input digits to ten detector regions, with one region for each digit. The classification criterion was to find the detector with the maximum optical signal. After training, the D2NN digit classifier design was numerically tested using 500 images, which were also randomly selected from the MNIST test dataset and not contained within the training or validation image sets. The blind testing accuracy for this test set was used to verify the newly proposed theory.

A quantitative analysis of the D2NN connectivity was performed, and the fitting curve for Eq. (9) is given in Fig. 6(b). For example, to prevent overfitting, the number of neurons in each layer was set to 6400 (80 × 80) based on previous experience. The connectivity space for the D2NN was divided using Fig. 6(b), taking the relationship between the wavelength, distance, and neuron size into consideration. Once the parameters are on or above the fitting curve, as indicated by the green arrow, the D2NN realizes full connectivity and perfect inference is guaranteed. For the case marked by the red arrow, the D2NN cannot achieve full connectivity. In Fig. 6(b), cases 1 and 2 are on the fitting curve, case 3 is above the fitting curve, and case 4 is below the fitting curve. The blind testing accuracies for cases 1 to 3 are all above 90%, while the accuracy for case 4 is approximately 0.1%. The confusion matrices for cases 1–4 are shown in detail in Figs. 6(d)–(g), respectively. These results prove the accuracy of the improved connectivity theory. The improved theory offers a quantitative analysis for building a D2NN and demonstrates the performance advantages of a fully connected D2NN. The simulated results shown in Fig. 6(c) are consistent with previous studies [18–20]. By comparing cases 1 and 2, it can be seen that the proposed fully connected theory can overcome the contradiction between neuron size and processing difficulty by adjusting the distance. The D2NN performance over a long distance D (5.7 × 10³λ) with a large neuron size df (6λ) is consistent with that over a short distance D (15λ) with a small neuron size df (0.53λ), since the neuron size df is adjusted based on the distance D, which reduces the processing difficulty. We also analysed the influence of alignment errors between diffractive layers and of the phase depth error of the diffractive layers. Further details can be found in Figs. S4 and S5 in Appendix A.
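Eq. (9) itself did not survive extraction, so the sketch below should be read as one plausible full-connectivity test rather than the authors' exact expression: it assumes that the diffraction spot of radius R = D·tan φmax from each neuron must cover at least half the width of the next layer of N × N neurons, and it reproduces full connectivity for the two parameter pairs quoted above.

import numpy as np

def fully_connected(wavelength, neuron_size, distance, neurons_per_side):
    """Assumed full-connectivity test: each neuron's diffraction cone, with half-angle
    arcsin(lambda / (2 d_f)), must spread over half the width of the next layer."""
    s = wavelength / (2.0 * neuron_size)
    if s >= 1.0:                                   # diffraction already fills the half-space
        return True
    spot_radius = distance * np.tan(np.arcsin(s))  # R = D * tan(phi_max)
    layer_half_width = neurons_per_side * neuron_size / 2.0
    return spot_radius >= layer_half_width

lam = 632.8e-9                                     # He-Ne wavelength
n_side = 80                                        # 80 x 80 = 6400 neurons per layer (Fig. 6)
print(fully_connected(lam, 6.0 * lam, 5.7e3 * lam, n_side))   # long distance, large neurons -> True
print(fully_connected(lam, 0.53 * lam, 15.0 * lam, n_side))   # short distance, small neurons -> True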

    4. Discussion and conclusions

In this work, a general model for a D2NN at visible wavelengths was proposed. A visible wavelength D2NN can be used to overcome some of the drawbacks of the terahertz bandwidth and has many potential practical applications [25–29]. However, there are some restrictions and shortcomings that make it challenging to change the bandwidth from terahertz to the visible light region. The first difficulty is the contradiction between the working wavelength, neuron size, and fabrication limitations. Shorter wavelengths require smaller neuron sizes, which make the processing more complex. A general theory that includes a revised formula was proposed to overcome these contradictions. A series of simulation analyses was designed and successfully verified the proposed formula. Based on this theory, a novel visible light D2NN classifier was used to recognize unchanged targets (handwritten images of digits ranging from 0 to 9) as well as changed targets (i.e., covered or altered targets) at a visible wavelength of 632.8 nm. A numerical classification accuracy of 91.57% was obtained and is well matched with an experimental classification accuracy of 84%, proving that both the theoretical analysis and the designed system can be used successfully.

Although there has been some recent success in implementing deep neural networks on optical platforms [18–22,24], an all-optical design has not yet been fully demonstrated and realized. For example, computers are still required for the training process, and the advantages of low energy consumption and high speed offered by optical information processing have not yet been fully realized. Additionally, applications for optical deep learning techniques are still emerging, and many early attempts [18–20] use standard machine learning models, which may not be the best choice for an optical deep learning design. Other learning paradigms, such as unsupervised learning [37], generative adversarial networks [38], and reinforcement learning [39,40], should also be integrated into optical neural networks. It is expected that faster and more accurate optical deep learning frameworks will be proposed in the future, which may be able to offer capabilities that go beyond even current human knowledge.

Fig. 6. Schematic diagram of the D2NN classifier used to verify the improved Eq. (9). (a) Schematic diagram of the D2NN classifier used to reduce the training time for the unchanged handwritten digit targets from 0 to 9. The location of each detector is displayed. (b) The fitting curve. (c) Four cases. The blind testing accuracies for the different parameters in (c) were investigated, and the confusion matrices are shown in (d) case 1, (e) case 2, (f) case 3, and (g) case 4, which demonstrate the success of the revised theory.

    Acknowledgements

This research was supported in part by the National Natural Science Foundation of China (61675056 and 61875048).

    Compliance with ethics guidelines

    Hang Chen, Jianan Feng, Minwei Jiang, Yiqun Wang, Jie Lin,Jiubin Tan, and Peng Jin declare that they have no conflict of interest or financial conflicts to disclose.

    Appendix A. Supplementary data

    Supplementary data to this article can be found online at https://doi.org/10.1016/j.eng.2020.07.032.
