
    Two-Stream Architecture as a Defense against Adversarial Example


    Hao Ge | Xiao-Guang Tu | Mei Xie | Zheng Ma

    Abstract—The performance of deep learning on many tasks has been impressive. However, recent studies have shown that deep learning systems are vulnerable to small, specifically crafted perturbations that are imperceptible to humans. Images with such perturbations are called adversarial examples. They have been proven to be an indisputable threat to applications based on deep neural networks (DNNs), yet DNNs have not been fully elucidated, which has prevented the development of efficient defenses against adversarial examples. This study proposes a two-stream architecture to protect convolutional neural networks (CNNs) from attacks by adversarial examples. Our model applies the idea of “two streams” used in the security field. It successfully defends against different kinds of attack methods because the “high-resolution” and “low-resolution” networks differ in how they extract features. This study experimentally demonstrates that our two-stream architecture is difficult to defeat with state-of-the-art attacks, and that it is robust to adversarial examples built by currently known attacking algorithms.

    Index Terms—Adversarial example, deep learning, neural network.

    1. Introduction

    With the development of convolutional neural networks (CNNs), computers can handle many tasks, such as object classification, face recognition, and license plate recognition. In some tasks, their performance is even better than that of humans, so computers now take over a growing number of such tasks. However, just as hackers attack computer systems and threaten their security, unscrupulous people seek to profit from security holes in security-sensitive fields, posing safety risks to CNN-based systems.

    As for the security of neural networks, for almost every classification network, adversarial examples [1] can be generated to mislead its classification results by simply adding small perturbations to original images [2]. Such adversarial examples are potential threats to a wide range of applications. For example, a “No passing” sign can be detected as a “No parking” sign by a self-driving car because of small perturbations that humans do not notice [3]. Therefore, a robust defensive method against adversarial attacks should be developed.

    Existing defense methods can be roughly divided into four categories: 1) hiding the information of a target model to increase the difficulty of generating adversarial examples, e.g., defensive distillation [3]-[9]; 2) training a classifier with adversarial examples to improve its precision [2]; 3) removing adversarial perturbations by training a denoising autoencoder [4], [7]; and 4) training a classifier to distinguish real images from adversarial examples [6].

    However, all these methods have disadvantages. For the first category, defensive distillation does not significantly increase the robustness of neural networks [10]. For categories 2) and 3), adversarial examples are needed to train the defenses, so these defenses are effective only against the kinds of adversarial examples they were trained on. For the last category, Carlini and Wagner [8] showed that these adversarial detection methods cannot withstand their C&W attack after slight changes to its loss function.

    Even some powerful defense methods, such as MagNet [6] and HGR [4], were found by Athalye and Carlini to be ineffective only days after their publication [11]. Considering these challenges, we instead aim to build a defense system within a smaller scope to avoid being easily cracked.

    In this paper, we propose an efficient and effective defense method against adversarial examples. Our method is independent of the generation process of adversarial examples because it requires only real images for training. We discuss the working mechanism of our “two-stream” method and explain why it cannot be easily attacked.

    2. Motivation

    In Fig. 1, we use Lucid to analyze the kind of errors the neural network makes during information transmission when the input image is an adversarial example, which causes the final misclassification. The images from left to right in Fig. 1 correspond to the input images and the visualization results of the neurons in the 4A, 4D, and 5A layers of GoogleNet. Fig. 1 shows that, when classifying a real image, the neurons in the “high-resolution” network can accurately classify the categories of local areas according to textural information within their receptive fields. These correct features are delivered layer by layer, and consequently the classification result is correct. However, as shown in the second row, the low-layer neurons cannot accurately extract local features in the presence of adversarial perturbations; as a result, the final classification is affected. Thus, adversarial examples influence “high-resolution” neural networks.

    A set of comparative experiments explains, from a human perspective, the causes of the errors made by the neural network. Fig. 2 is obtained by dividing the two images in Fig. 1 into 10×10 small squares and then shuffling the arrangement of the squares. The size of each square in Fig. 2 is approximately equal to the size of the receptive field of the 4A layer in GoogleNet; in other words, all the information available to each neuron in the 4A layer is contained in one square. The disordered arrangement prevents humans from judging the category based on the context of each square and forces them to look at each square independently; thus, the view of each neuron in the 4A layer is simulated. For the picture on the left, humans can classify most of the squares as “dog” without the outline information. However, for the picture on the right, they are unable to accurately classify the squares as “dog”: The perturbations destroy the texture features, so the squares in the right picture cannot be classified accurately using texture features similar to those in the left picture.
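
    The shuffling procedure behind Fig. 2 is easy to reproduce. Below is a minimal sketch (our own illustration, not the authors' code) that splits an image into a 10×10 grid of tiles and shuffles them; the function name and fixed seed are our choices.

    ```python
    import numpy as np

    def shuffle_squares(img, grid=10, seed=0):
        """Split img (H, W, C) into grid x grid tiles and shuffle their order.
        H and W must be divisible by `grid`."""
        h, w = img.shape[0] // grid, img.shape[1] // grid
        # Cut the image into grid*grid tiles of size (h, w).
        tiles = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
                 for i in range(grid) for j in range(grid)]
        np.random.default_rng(seed).shuffle(tiles)
        # Reassemble the shuffled tiles row by row.
        rows = [np.concatenate(tiles[r * grid:(r + 1) * grid], axis=1)
                for r in range(grid)]
        return np.concatenate(rows, axis=0)
    ```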

    Fig. 1. Feature visualization of the Inc-v3 network by Lucid [12]: (a) real image, (b) MIXED4A of the real image, (c) MIXED4D of the real image, (d) MIXED5A of the real image, (e) attacked image, (f) 4A of the attacked image, (g) 4D of the attacked image, and (h) 5A of the attacked image.

    The small squares in Fig. 2 are sized according to the receptive field of the 4A layer in GoogleNet. As such, our neural network encounters the same problem when facing adversarial examples: The change in texture features destroys the feature expression of each neuron, and these errors lead to a wrong classification result after they propagate through the layers. As for the disordered real image, the image on the left in Fig. 2 can still be classified as “dog” with more than 90% confidence by GoogleNet. Therefore, the classification logic of GoogleNet, a high-resolution network, differs from that of humans, who rely more on global features for image classification.

    Baker et al. [13] elaborated that neural networks do not rely on contour information in an article entitled “Deep convolutional networks do not classify based on global object shape”. In particular, [13] combined the texture of object A with the contour of object B to test which feature the neural network depends on more. Their findings support our study.

    Fig. 2. Disordered real image and an adversarial example.

    2.1. Problems That Lead to the Failure of Defense Methods

    As mentioned above, many defense methods are likely to fail against adversarial examples: They can neither easily detect adversarial examples [8] nor correctly classify them. However, these failures have yet to be explained. In this study, we assume that they are caused by an insufficient amount of data. From the perspective of information theory, every classification problem requires a certain amount of information to support its classification results, and this information originates from the data involved in training. Adversarial perturbations increase the entropy of pictures, so the amount of information in such pictures is reduced, thereby decreasing the amount of information available for correct classification. This decrease leads to the failure of the defense methods. Adversarial perturbations degrade the textures of images, so if we want to classify such images with the right label, the defense methods should not depend on texture features. To achieve this goal, we should enlarge the receptive field of the CNN; a simple way to do so is to resize the image to a smaller size. In our experiments on ImageNet, the training images were resized to 32×32 pixels to avoid interference from adversarial perturbations; as a result, the testing accuracy was less than 10%. Indeed, the amount of information in ImageNet is insufficient to support classification into 1000 categories without texture features. According to information theory, a coding algorithm with insufficient information is bound to fail. Similarly, a classification task with insufficient information resembles making a dress out of a handkerchief: Whatever results is bound to have several holes.

    In practical work, we cannot obtain more data to offset the lack of information, so we instead build the defense system within a smaller scope to avoid being easily attacked. We therefore propose a defense method that is extremely difficult to break under the following constraints: 1) The size of the input images should be 299×299, which is the input size of GoogleNet, and 2) the input images should belong to the 10 categories of CIFAR-10.

    2.2. Why Is the “Two-Stream” Concept Chosen?

    The “two-stream” concept has been widely used in security-sensitive fields. For example, in communications protocols, a “checksum” is transmitted along with the “body” to check for errors during transmission. Another example is a safe deposit box, which needs the keys of both a banker and a customer to open. Likewise, important experiments must be successfully replicated in different laboratories to be recognized.

    Moreover, our research shows that the transferability of adversarial examples is good when the target classifier is GoogleNet, Inc-v3, Inc-v4, ResNet, or a network derived from them. However, the fooling ratio is much lower when the target classifier is CapsNet [14]. This phenomenon occurs because the extraction of low-level features is strongly affected by the size of the receptive fields of low-layer neurons. The low-layer neurons of advanced classification algorithms have similar receptive fields, so the low-level features they extract are similar, which yields the transferability of adversarial examples across these neural networks. However, the low-layer neurons of CapsNet have a larger receptive field; consequently, CapsNet is more robust to adversarial perturbations generated by other networks.

    In our “two-stream” architecture, the “low-resolution” network can be considered a network whose low-layer neurons have a large receptive field for dealing with high-resolution images. As such, the transferability of adversarial examples between the “high-resolution” and “low-resolution” networks is poor, which is why our method is effective.

    3. Our Method

    Similar to the workflows of SafetyNet [15] and MagNet [6], the workflow of our “two-stream” architecture consists of two parts: a detector that rejects adversarial examples and a classifier that assigns the remaining images the right label. In Fig. 3, the classification results of the “high-resolution” and “low-resolution” networks are fed to the comparison algorithm, which acts as both detector and classifier. The specific comparison algorithm is shown in Algorithm 1. The mapping table maps the labels of ImageNet to the labels of CIFAR-10, e.g., n02123045, n02124075, ··· → “Cat”; n02110063, n02110806, ··· → “Dog”.

    Fig. 3. Framework of our two-stream network.

    This workflow is a generic backbone, and the networks used in this framework can be replaced with others. For example, Inc-v3 in the “high-resolution” network can be replaced with VGG16, ResNet-152, or other networks trained on ImageNet, and ResNet-32 in the “low-resolution” network can be replaced with NiN, AllConv, or other networks trained on CIFAR-10. This greatly increases the flexibility of the framework, and attackers experience difficulty in implementing white-box attacks.
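
    To illustrate this flexibility, the following sketch (our own illustration, assuming PyTorch/torchvision) shows how the “high-resolution” stream can be instantiated from interchangeable ImageNet backbones; the “low-resolution” CIFAR-10 model would be swapped in the same way.

    ```python
    import torchvision.models as models

    # Interchangeable ImageNet backbones for the "high-resolution" stream.
    HIGH_RES_BACKBONES = {
        "inception_v3": models.inception_v3,
        "vgg16": models.vgg16,
        "resnet152": models.resnet152,
    }

    def build_high_res_stream(name="inception_v3"):
        # Any classifier pretrained on ImageNet can serve as this stream;
        # the detector only consumes its top-5 labels and probabilities.
        return HIGH_RES_BACKBONES[name](weights="DEFAULT").eval()
    ```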

    Algorithm 1 is the comparison algorithm, where p1 and p2 are hyperparameters that serve as thresholds, set to 10% and 20% in our experiments, respectively. Y indicates the labels, and P refers to the probabilities of these labels. Y_high and P_high denote the labels and corresponding probabilities of the top-5 classification results of the “high-resolution” network; Y_low and P_low represent the same for the “low-resolution” network.

    Algorithm 1. Classification

    Input: X_n

    Output: Classification result Y_n
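
    The pseudocode body of Algorithm 1 is not reproduced here, so the following Python sketch is only a plausible reconstruction from the description above: It maps the high-resolution network's top-5 ImageNet labels to CIFAR-10 labels and accepts the image only when the two streams agree. The exact acceptance rule and the roles assigned to p1 and p2 are our assumptions.

    ```python
    P1, P2 = 0.10, 0.20  # thresholds p1 and p2 from the text

    # Excerpt of the mapping table (ImageNet synset -> CIFAR-10 label);
    # unmapped synsets fall into the "other" category.
    MAPPING = {"n02123045": "cat", "n02124075": "cat",
               "n02110063": "dog", "n02110806": "dog"}

    def classify(y_high, p_high, y_low, p_low):
        """y_high, p_high: top-5 labels/probabilities of the high-resolution
        network; y_low, p_low: top-1 label/probability of the low-resolution
        network. Returns a CIFAR-10 label, or "REJECT" for a suspected
        adversarial example."""
        if p_low < P2:            # assumed rule: low-res net must be confident
            return "REJECT"
        for label, prob in zip(y_high, p_high):
            if prob >= P1 and MAPPING.get(label, "other") == y_low:
                return y_low      # the two streams agree on the same category
        return "REJECT"           # disagreement between the streams
    ```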

    To verify the practicality of our proposed method, we build a network of 10000 user nodes and 1 server node to simulate a real network environment. The user nodes consist of 9000 normal user nodes and 1000 adversarial user nodes. Each normal user node periodically sends a real picture to the server to request its classification result, while each adversarial user node periodically sends an adversarial example. The server node should find these adversarial user nodes and add them to the blacklist to prevent them from accessing the server, while returning correct classification results to the normal user nodes. To achieve this goal, we run Algorithm 2 on the server node to decide whether a user node should be blacklisted. We record the source IP of each image, maintain a confidence coefficient CC[IP] ∈ [0, 15] for each IP and a blacklist Bl[], and record the detection result of our “two-stream” network, where “1” denotes a real image and “0” indicates that it is not a real image.

    Algorithm 2. Real or fake

    Input: Images received by the server node, Img_n, and the results of our “two-stream” classifier for Img_n, Y_n

    Output: The result sent back to the user node for Img_n: Cls_n
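
    Only the interface of Algorithm 2 is given above, so the sketch below is a hedged reconstruction of one plausible update rule: CC[IP] is decreased when the two-stream detector flags an image as fake, restored (up to 15) when the image is real, and the IP is blacklisted once its coefficient is exhausted. The exact increments are our assumptions.

    ```python
    from collections import defaultdict

    CC_MAX = 15                        # upper bound of CC[IP] from the text
    cc = defaultdict(lambda: CC_MAX)   # confidence coefficient per source IP
    blacklist = set()                  # the blacklist Bl[]

    def handle_image(ip, is_real, cls_result):
        """is_real: 1 if the two-stream detector accepted the image, 0 otherwise.
        Returns the classification result, or None when no result is sent back."""
        if ip in blacklist:
            return None                           # blacklisted IPs get no service
        if is_real:
            cc[ip] = min(CC_MAX, cc[ip] + 1)      # assumed reward for real images
            return cls_result
        cc[ip] -= 1                               # assumed penalty for fakes
        if cc[ip] <= 0:
            blacklist.add(ip)                     # confidence exhausted
        return None
    ```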

    4. Attacking Methods

    In this study, we divide the attack methods into two categories, types I and II. Type I and type II attacks aim to fool the high-resolution and low-resolution networks, respectively. We evaluate our defense against four popular attacks: universal adversarial perturbations as a type I attack, the one-pixel attack and the Carlini attack as type II attacks, and the fast gradient sign method (FGSM) as both a type I and a type II attack. These attacks are explained as follows.

    1) FGSM. Goodfellow et al. [2] introduced this adversarial attack algorithm, which generates an adversarial example by solving x′ = x + ε·sign(∇_x Loss(x, l_x)). This attack is simple yet effective. Kurakin et al. [16] described an iterative version of FGSM: Each iteration applies FGSM with a small step size α and then clips the updated result so that the image remains in the ε-neighborhood of the original image. However, this adversarial attack can hardly fool a black-box model. To address this issue, Dong et al. [17] proposed the momentum iterative fast gradient sign method (MI-FGSM) to boost adversarial attacks.
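
    As a concrete reference, here is a minimal PyTorch sketch of FGSM and its iterative variant as described above. It assumes a differentiable classifier `model` and inputs normalized to [0, 1]; it is our illustration, not the original implementations.

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps):
        """One-step FGSM: x' = x + eps * sign(grad_x Loss(x, l_x))."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        grad, = torch.autograd.grad(loss, x)
        return (x + eps * grad.sign()).clamp(0, 1).detach()

    def iterative_fgsm(model, x, label, eps, alpha, steps):
        """Iterative FGSM: small steps of size alpha, clipped after each step
        so the result stays in the eps-neighborhood of the original image."""
        x_orig = x.clone().detach()
        x_adv = x_orig.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), label)
            grad, = torch.autograd.grad(loss, x_adv)
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = (x_orig + (x_adv - x_orig).clamp(-eps, eps)).clamp(0, 1)
        return x_adv
    ```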

    2) Universal adversarial perturbations. Following their previous work [18], Moosavi-Dezfooli et al. [19] proposed this universal adversarial attack. Unlike other methods that compute a perturbation to fool a network on a single image, this method fools a network on all images. Moreover, they showed that the universal perturbations generalize well across different neural networks.

    3) One-pixel attack. Su et al. [20] introduced this adversarial attack algorithm, which generates adversarial examples by modifying only one pixel. They claimed to have successfully fooled three common deep neural networks on about 70% of the tested images. This attack generates adversarial examples without any information about the parameter values or gradients of the network. In our experiments, we use the “one-pixel” and “three-pixel” versions to test our method.

    4) Carlini attack. Carlini and Wagner [10] introduced this attack method for CIFAR-10 and MNIST. It is the most powerful type II attack we found.

    5. Evaluation

    We evaluate the properties of our “two-stream” architecture on three datasets: Car196 [21], FGVC-Aircraft [22], and ImageNet [23]. Car196 [21] and FGVC-Aircraft [22] are fine-grained datasets containing 16185 images of 196 classes of cars and 10200 images of 102 kinds of aircraft, respectively. In this study, we use these two databases to examine the defensive performance of our architecture for the “automobile”, “truck”, and “airplane” categories. We apply a subset of ImageNet [23] composed of CIFAR-10-related categories selected from the original ImageNet database, e.g., n01582220, n01601694, ··· → “bird” and n01644373, n01644900, ··· → “frog”. Of the 1000 categories in ImageNet, 217 can be mapped to the 10 categories of CIFAR-10, and the remaining 783 are labeled “other”.

    The classification results of the “high-resolution” and “low-resolution” networks are directly used to determine whether an image is an adversarial example, so the existence of an attack method that affects both networks would be disastrous for our framework. An experiment is performed to test the performance of advanced attack algorithms against both networks. The experimental results are shown in Table 1, where “Non-attack data” is the control group.

    For the type I attack on CIFAR-10, we resize the images from 32×32 to 299×299 so that they become susceptible to type I attacks, just like high-resolution datasets. The process shown in Fig. 4 is applied to achieve type II attacks on high-resolution datasets, in three steps: We resize the image to 32×32, expose the resized image to a type II attack, and compute the difference between the obtained adversarial example and the 32×32 original image. We then zoom in the difference map and overlay it onto the original image.
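
    A minimal sketch of this three-step pipeline follows, assuming NumPy/SciPy and a `type2_attack` function (e.g., a one-pixel or Carlini attack on 32×32 inputs) supplied by the reader; the interpolation orders are our choice.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def type2_on_highres(img_299, type2_attack, label):
        """img_299: float array of shape (299, 299, 3) with values in [0, 1]."""
        # 1) Resize the image down to the low-resolution network's input size.
        small = zoom(img_299, (32 / 299, 32 / 299, 1), order=1)
        # 2) Expose the resized image to a type II attack.
        adv_small = type2_attack(small, label)
        # 3) Difference between the adversarial example and the 32x32 original.
        diff = adv_small - small
        # 4) Zoom the difference map up and overlay it onto the original image.
        diff_big = zoom(diff, (299 / 32, 299 / 32, 1), order=0)
        return np.clip(img_299 + diff_big, 0.0, 1.0)
    ```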

    In Table 1, H-Net gives the top-5 accuracy of the “high-resolution” network, and L-Net gives the top-1 accuracy of the “low-resolution” network. Table 1 shows that the type I attack affects only the classification results of H-Net, and the type II attack affects only the classification results of L-Net. In other words, neither attack type is effective against both networks, so their misclassification results are uncorrelated. Therefore, comparing their classification results can determine whether an input image is an adversarial example. In addition, the attacking methods are not the cause of the decrease in the accuracy of L-Net when CIFAR-10 is attacked with type I attacks: In a comparative experiment, we resized the images in CIFAR-10 to 299×299 and then back to 32×32, and 88.9% of these images were still classified correctly. Therefore, it is the resizing, not the attack methods, that reduces the accuracy.

    Table 1: Classification accuracy of the “high-resolution” and “low-resolution” networks on adversarial examples generated by different attack methods

    Fig. 4. Flowchart of implementing type II attacks on high-resolution images: (a) original image, (b) resized image, (c) attacked image, and (d) result image.

    Table 2 shows the detection and classification results of our “two-stream” architecture. “Reject” indicates the rate at which images are detected as adversarial examples and rejected by our “two-stream” architecture; “Right” denotes the rate at which images are not rejected and receive the right label; and “Wrong” indicates the rate at which images are not rejected and receive a wrong label. Table 2 further reveals that almost all images are either detected as adversarial examples or classified with the right label. In other words, it is difficult to produce an example that is both mislabeled and not detected as adversarial by the “two-stream” architecture. Lu et al. [15] proposed this standard for evaluating the quality of a defense method.

    Table 2: Summary of the reactions of our “two-stream” architecture to various attacks

    Fig. 5 illustrates the experimental results of simulating a real-world network environment. Each polyline represents the proportion of one class of blacklisted user nodes: The horizontal axis shows the number of images sent by a user node, and the vertical axis shows the proportion of blacklisted user nodes.

    Adversarial user nodes with strong perturbations (Inc-v3 and Universal) are rapidly blacklisted, while nodes with weak perturbations are blacklisted with lower probability. As Table 2 indicates, the classification results returned to these unshielded nodes are usually correct. Hence, it is not that our defense algorithm is not strong enough; rather, these attack algorithms (DeepFool and three-pixel) cannot change all of the classification results, so not all of these nodes need to be blacklisted. As for normal user nodes, the proportion is almost zero: Across the 50 runs of the experiment, only 17 normal user nodes were blacklisted. Therefore, our defense algorithm performs efficiently on single images and in simulated real-world network environments.

    Fig. 5. Proportion of user nodes blacklisted by the server node.

    6. Conclusion

    We propose a “two-stream” architecture to defend against adversarial examples. Our “two-stream” framework compares two kinds of networks, not two specific networks, and we analyze the effect of adversarial perturbations on neural networks to determine why the “two-stream” concept works. The comparison of the classification results of the “high-resolution” and “low-resolution” networks shows that our “two-stream” framework can detect adversarial examples without requiring either adversarial examples or knowledge of their generation process. The results show that 1) the framework can be further enhanced by new datasets and new backbones and 2) an attacker has difficulty implementing a white-box attack. Experiments also show that producing an example that is both mislabeled and not detected as adversarial by the “two-stream” architecture is difficult. Our “two-stream” architecture provides a research direction for researchers in the field of adversarial examples.

    Disclosures

    The authors declare no conflicts of interest.
