
    When Does Sora Show: The Beginning of TAO to Imaginative Intelligence and Scenarios Engineering

    IEEE/CAA Journal of Automatica Sinica, 2024, Issue 4

    By Fei-Yue Wang, Qinghai Miao, Lingxi Li, Qinghua Ni, Xuan Li, Juanjuan Li, Lili Fan, Yonglin Tian, and Qing-Long Han

    DURING our discussions at workshops for writing “What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence” [1], we had expected that the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movies/theater technology that could be used for conducting new “Artificiofactual Experiments” [2] to replace conventional “Counterfactual Experiments” in scientific research and technical development for both natural and social studies [2]–[6]. Now we have OpenAI’s Sora, so soon; yet this is not the final destination, which is actually still far away, and it is just the beginning.

    As illustrated in [1], [7], there are three levels of intelligence, i.e., Algorithmic Intelligence, Linguistic Intelligence, and Imaginative Intelligence, and according to “The Generalized Gödel Theorem” [1], they are bounded by the following relationship:

    AI ⊂ LI ⊂ II.

    Here, AlphaGo was the first milestone of Algorithmic Intelligence, while ChatGPT was that of Linguistic Intelligence. Now, with Sora emerging as the first milestone of Imaginative Intelligence, the triad forms the initial technical version of the decision-making process outlined in the Chinese classic I Ching (or Book of Changes, see Fig. 1): Hexagrams (Rule and Composition), Judgements and Lines (Hexagram Statements and Line Statements, or Question and Answer), and Ten Wings (Commentaries, or Imagination and Illustration).

    Fig. 1. I Ching: The Book of Changes for Decision Intelligence.

    What should we expect for the next milestone in intelligent science and technology? What are their impacts on our life and society? Based on our previous reports in [8], [9] and recent developments in Blockchain and Smart Contracts based DeSci and DAO for decentralized autonomous organizations and operations [10], [11], several workshops [12]–[16] have been organized to address those important issues. The main results have been summarized in this perspective.

    Historic Perspective

    Text-to-Image (T2I) and Text-to-Video (T2V) are two of the most representative applications of Imaginative Intelligence (II). In terms of T2I, traditional methods such as VAE and GAN were unsatisfactory, prompting OpenAI to explore new avenues with the release of DALL-E in early 2021. DALL-E draws inspiration from the success of language models in the NLP field, treating T2I generation as a sequence-to-sequence translation problem using a discrete variational auto-encoder (VQVAE) and a Transformer. By the end of 2021, OpenAI’s GLIDE introduced Denoising Diffusion Probabilistic Models (DDPMs) into T2I generation, proposing classifier-free guidance to improve text faithfulness and image quality. The diffusion model, with its advantages in high resolution and fidelity, began to dominate the field of image generation. In April 2022, the release of DALL-E 2 showcased stunning image generation performance globally, a giant leap made possible by the capabilities of the diffusion model. Subsequently, the T2I field saw a surge, with a series of T2I models developed, such as Google’s Imagen in May, Parti in June, Midjourney in July, and Stable Diffusion in August, all beginning to commercialize and forming a scalable market.
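
    Classifier-free guidance has a compact form: the same denoiser is run once with and once without the text condition, and the two noise predictions are blended, with a guidance scale above 1 trading diversity for text faithfulness. A minimal PyTorch sketch (the denoiser call and embeddings in the usage comment are hypothetical placeholders):

```python
import torch

def classifier_free_guidance(eps_cond: torch.Tensor,
                             eps_uncond: torch.Tensor,
                             w: float) -> torch.Tensor:
    """Blend conditional and unconditional noise predictions:
    eps_hat = eps_uncond + w * (eps_cond - eps_uncond).
    w = 1 recovers the conditional model; w > 1 strengthens the text
    condition at the cost of sample diversity."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Hypothetical use inside a sampling loop:
#   eps_c = denoiser(x_t, t, text_emb)    # conditioned pass
#   eps_u = denoiser(x_t, t, null_emb)    # unconditioned pass
#   eps_hat = classifier_free_guidance(eps_c, eps_u, w=7.5)
```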

    Compared to T2I, T2V is a more important but more challenging task. On one hand, it is considered important because the model needs to learn the structure and patterns hidden in the video, similar to how humans understand the world through their eyes. Therefore, video generation is a task close to human intelligence and is considered a key path toward achieving general artificial intelligence. On the other hand, it is considered difficult because video generation not only needs to learn the appearance and spatial distribution of objects but also the dynamic evolution of the world in the temporal domain. In addition, the lack of high-quality video data (especially text-video paired data) and the huge demand for computing power pose great challenges. Therefore, compared to the success of T2I, progress in T2V has been slower. Similar to early T2I, T2V in its initial stages was also based on methods such as GAN and VAE, resulting in low-resolution, short-duration, and minimally dynamic videos that did not reach practical levels.

    Nevertheless, the field of video generation has rapidly evolved during the last two years, especially since late 2023, when a large number of new methods emerged. As shown in Fig. 2, these models can be classified according to their underlying backbones. The breakthrough began with language models (Transformer), which fully utilize the attention mechanism and scalability of Transformers; later, the diffusion model family flourished, with high definition and controllability as its advantages. Recently, the strengths of both Transformer and diffusion models have been combined to form the backbone of DiT [17].

    Fig. 2. Brief history of video generation and representative models. Sora marks the beginning of the new era of Imaginative Intelligence.

    The families based on language models are shown on the left side of Fig. 2. VideoGPT [18] utilizes VQVAE to learn discrete latent representations of raw videos, employing 3D convolutions and axial self-attention; a GPT-like architecture then models these latents with spatiotemporal position encodings. NUWA [19], an autoregressive encoder-decoder Transformer, introduces 3DNA to reduce computational complexity, addressing the characteristics of visual data. CogVideo [20] features a dual-channel attention Transformer backbone with a multi-frame-rate hierarchical training strategy to better align text and video clips. MaskViT [21] shows that good video prediction models can be created by pre-training transformers via Masked Visual Modeling (MVM); it introduces both spatial and spatiotemporal window attention, as well as a variable token-masking ratio. TATS [22] focuses on generating longer videos; based on 3D-VQGAN and transformers, it introduces a technique that extends generation to videos of thousands of frames. Phenaki [23] is a bidirectional masked transformer conditioned on pre-computed text tokens; it also introduces a tokenizer for learning video representations, which compresses the video into discrete tokens, and by using causal attention in time it can work with variable-length videos. MAGVIT [24] proposes an efficient video generation model through masked token modeling and multi-task learning: it first learns a 3D Vector-Quantized (VQ) autoencoder to quantize videos into discrete tokens, and then learns a video transformer through multi-task masked token modeling. MAGVIT-v2 is a video tokenizer designed to generate concise and expressive tokens for both video and image generation using a universal approach; with this new tokenizer, the authors demonstrated that an LLM outperforms diffusion models on standard image and video generation benchmarks, including ImageNet and Kinetics. VideoPoet [25] adopts a multi-modal Transformer architecture with a decoder-only structure; it uses the MAGVIT-v2 tokenizer to convert images and videos of arbitrary length into tokens, along with audio tokens and text embeddings, unifying all modalities in the token space. Subsequent operations are carried out in the token space, enabling the generation of coherent, high-action videos up to 10 seconds in length at once.
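
    To make the masked-token recipe shared by MaskViT and MAGVIT concrete, the sketch below shows one training step: discrete video tokens from a (3D-)VQ autoencoder are masked at a variable ratio, and the transformer is trained to recover them. The `transformer` argument is a placeholder module assumed to map token ids to per-position logits:

```python
import torch
import torch.nn.functional as F

def masked_video_modeling_step(tokens, transformer, mask_token_id,
                               min_ratio=0.5, max_ratio=1.0):
    """One MVM step: mask a variable fraction of discrete video tokens
    and train the transformer to reconstruct the originals.

    tokens: (B, N) token ids produced by a (3D-)VQ video tokenizer.
    """
    B, N = tokens.shape
    ratio = torch.empty(B, 1).uniform_(min_ratio, max_ratio)  # per-sample ratio
    mask = torch.rand(B, N) < ratio                            # True = masked
    inputs = tokens.masked_fill(mask, mask_token_id)
    logits = transformer(inputs)                               # (B, N, vocab)
    return F.cross_entropy(logits[mask], tokens[mask])
```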

    The families based on diffusion models are shown on the right side of Fig. 2. Video Diffusion Models (VDM) presents the first results on video generation using diffusion models by extending the image diffusion architecture. VDM employs a space-time factorized U-Net, jointly training on image and video data, and introduces a conditional sampling technique for extending videos spatially and temporally to higher resolutions and longer durations. Make-A-Video extends a T2I model to T2V with a spatiotemporally factorized diffusion model, removing the need for text-video pairs; it fine-tunes the T2I model for video generation, benefiting from effective model weight adaptation and improved temporal information fusion compared to VDM. Imagen Video [26] is a text-conditional video generation system that uses a cascade of video diffusion models; it incorporates fully convolutional temporal and spatial super-resolution models and a v-parameterization of diffusion models, enabling the generation of high-fidelity videos with a high degree of controllability and world knowledge. Runway Gen-1 [27] extends latent diffusion models to video generation by introducing temporal layers into a pre-trained image model and training jointly on images and videos. PYoCo [28] explores fine-tuning a pre-trained image diffusion model with video data, achieving substantially better photorealism and temporal consistency. VideoCrafter [29], [30] introduces two diffusion models: the T2V model generates realistic and cinematic-quality videos, while the I2V model transforms an image into a video clip while preserving content constraints. EMU VIDEO [31] first generates an image conditioned on the text and then generates a video conditioned on both the text and the generated image, using adjusted noise schedules and multi-stage training for high-quality, high-resolution video generation without a deep cascade of models. Stable Video Diffusion [32] is a latent video diffusion model that emphasizes the importance of a well-curated pre-training dataset and provides a strong multi-view 3D prior for fine-tuning multi-view diffusion models that generate multiple views of objects. Lumiere [33] is a T2V diffusion model with a Space-Time U-Net architecture that generates the entire temporal duration of a video at once, leveraging spatial and temporal down- and up-sampling and a pre-trained text-to-image diffusion model to generate full-frame-rate, low-resolution videos at multiple space-time scales.
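
    The v-parameterization mentioned for Imagen Video has a compact closed form. With a variance-preserving noising process x_t = α_t·x₀ + σ_t·ε and α_t² + σ_t² = 1, the network predicts v = α_t·ε − σ_t·x₀ instead of ε, which keeps the prediction target well-scaled across timesteps; a minimal sketch:

```python
import torch

def v_target(x0, eps, alpha_t, sigma_t):
    """v-parameterization target: v = alpha_t * eps - sigma_t * x0."""
    return alpha_t * eps - sigma_t * x0

def x0_from_v(x_t, v, alpha_t, sigma_t):
    """Invert a v-prediction back to x0, assuming alpha_t^2 + sigma_t^2 = 1:
    x0 = alpha_t * x_t - sigma_t * v."""
    return alpha_t * x_t - sigma_t * v
```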

    The center of Fig. 2 shows the fusion of the language model and the diffusion model [17], which is believed to be the route leading T2V to the state of the art. The Video Diffusion Transformer (VDT) [34] pioneered the fusion of the transformer and the diffusion model, demonstrating its enormous potential in the field of video generation. VDT’s strength lies in its outstanding ability to capture temporal dependencies, enabling it to generate temporally coherent video frames, including simulating the physical dynamics of three-dimensional objects over time. Its unified spatiotemporal masking mechanism allows VDT to handle various video generation tasks, achieving wide applicability, and its flexible handling of conditional information, such as simple token-space concatenation, effectively unifies information of different lengths and modalities. Unlike the U-Net, which is primarily designed for images, the Transformer can better handle the time dimension by leveraging its powerful tokenization and attention mechanisms to capture long-term or irregular temporal dependencies. Only when a model learns (or memorizes) world knowledge, such as spatial-temporal relationships and physical laws, can it generate videos that match the real world. Therefore, model capacity becomes a key component of video diffusion, and the Transformer has proven to be highly scalable, making it more suitable than the 3D U-Net for addressing the challenges of video generation. In December 2023, Stanford and Google introduced W.A.L.T [35], a transformer-based approach for latent video diffusion models (LVDMs), featuring two main design choices: first, it employs a causal encoder to compress images and videos into a single latent space, facilitating cross-modality training and generation; second, it utilizes a window attention architecture specifically designed for joint spatial and spatiotemporal generative modeling. This study represents the first successful empirical validation of a transformer-based framework for concurrently training image and video latent diffusion models.
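
    As a rough illustration of what a DiT-style backbone looks like, the sketch below follows the published DiT design in spirit: transformer blocks whose layer norms are modulated by shift/scale/gate signals derived from the diffusion-timestep embedding. The dimensions and the single-linear modulation head are simplifications for illustration, not Sora’s or W.A.L.T’s actual configuration:

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Sketch of a DiT-style block: self-attention and an MLP, each
    modulated by shift/scale/gate signals derived from the diffusion
    timestep embedding (adaptive LayerNorm conditioning)."""

    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.adaln = nn.Linear(dim, 6 * dim)  # emits 2x (shift, scale, gate)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) spacetime tokens; t_emb: (B, D) timestep embedding.
        s1, b1, g1, s2, b2, g2 = self.adaln(t_emb).unsqueeze(1).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + s1) + b1
        x = x + g1 * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + s2) + b2
        return x + g2 * self.mlp(h)
```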

    Sora’s debut is just the beginning of a new era in video generation, and it is foreseeable that this track will become very crowded. IT giants including Google, Microsoft, Meta, and Baidu; startups like Runway, Pika, Midjourney, and Stability.ai; as well as universities such as Stanford, Berkeley, and Tsinghua are all powerful competitors.

    Fig. 3. Brief principle diagram of Sora.

    Looking into Sora: A Parallel Intelligence Viewpoint

    Upon its release, Sora sparked a huge wave of excitement, with its accompanying demos showcasing impressive results. Sora produces videos with high fidelity, rich details, significant object changes, and smooth transitions between multiple perspectives. While most video generation models can only produce videos lasting 3 to 5 seconds, Sora can create videos up to one minute in length while maintaining narrative coherence, consistency, and common sense. Sora represents a milestone advancement in AI following ChatGPT.

    What underpins Sora’s powerful video generation capabilities? From Sora’s technical report and the development history of video generation models, several key points can be summarized.

    The first is the model architecture. Sora adopts the Diffusion Transformer (DiT), as shown in the upper-left corner of Fig. 3. Transformers have demonstrated powerful capabilities in large language models, with their attention mechanism effectively modeling long-range dependencies in spatiotemporal sequential data. Unlike earlier methods that perform windowed attention calculations, or the Video Diffusion Transformer (VDT), which computes attention in the temporal and spatial dimensions separately, Sora merges the time and space dimensions and processes them through a single attention mechanism. Moreover, Transformers exhibit high computational efficiency and scalability, forming the basis for the scaling law of large models. The diffusion model, on the other hand, with its solid foundation in probability theory, offers high resolution and good generation quality, as well as flexibility and controllability in video generation processes conditioned on text or images. DiT combines the advantages of both the Transformer and the diffusion model.
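
    The difference between joint spacetime attention and the earlier factorized designs reduces to how the latent tensor is reshaped before attention. A minimal sketch, assuming a (B, T, H, W, C) latent layout (an assumption for illustration):

```python
import torch

def joint_tokens(z: torch.Tensor) -> torch.Tensor:
    """Joint attention input: merge time and space,
    (B, T, H, W, C) -> (B, T*H*W, C), so a single attention pass
    relates every spacetime position to every other."""
    B, T, H, W, C = z.shape
    return z.reshape(B, T * H * W, C)

def factorized_views(z: torch.Tensor):
    """Factorized alternative (VDT-like): two separate attention passes.
    Returns a temporal view (B*H*W, T, C) and a spatial view (B*T, H*W, C)."""
    B, T, H, W, C = z.shape
    temporal = z.permute(0, 2, 3, 1, 4).reshape(B * H * W, T, C)
    spatial = z.reshape(B * T, H * W, C)
    return temporal, spatial
```

    Joint attention costs O((T·H·W)²) per layer, which is part of why it only becomes practical on compressed latents with a highly scalable backbone.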

    The second is data processing. As shown on the right side of Fig. 3, Sora leverages existing tools, such as the captioner used in DALL-E 3, to generate high-quality captions for raw videos, addressing the lack of video-text pairs. Additionally, through GPT, it expands users’ short prompts to provide more precise conditions for video generation over long durations.
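
    The prompt-expansion step can be pictured as a thin wrapper around a language model. Everything below, including the `llm` callable and the instruction wording, is a hypothetical illustration, not OpenAI’s actual pipeline:

```python
def expand_prompt(short_prompt: str, llm) -> str:
    """Expand a terse user prompt into a detailed caption matching the
    style of descriptive training captions. `llm` is a placeholder
    callable (str -> str); the instruction text is illustrative only."""
    instruction = ("Rewrite the following video idea as a detailed caption "
                   "describing subjects, motion, camera work, lighting, "
                   "and visual style:\n")
    return llm(instruction + short_prompt)
```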

    The third is feature representation. During training, Sora first compresses videos into a low-dimensional latent space (shown in the dashed rectangle on the left of Fig. 3) in both the spatial and temporal dimensions. Corresponding to the tokenization of text, Sora patchifies the low-dimensional representation in the latent space into spacetime patches, which are input into DiT for processing and ultimately generating new videos. From the perspective of parallel intelligence [36]–[44], the original videos come from the real system, while the latent space is the virtual system; operating in the virtual system makes it more convenient to take advantage of the Transformer and the diffusion model.
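
    Patchification mirrors text tokenization: each small spacetime block of the latent becomes one token. A minimal sketch, assuming a (B, T, H, W, C) latent whose dimensions are divisible by the (illustrative, not Sora’s actual) patch sizes:

```python
import torch

def patchify(z: torch.Tensor, pt: int = 2, ph: int = 2, pw: int = 2):
    """Cut a latent video (B, T, H, W, C) into spacetime patches of size
    (pt, ph, pw) and flatten each patch into one token."""
    B, T, H, W, C = z.shape
    x = z.reshape(B, T // pt, pt, H // ph, ph, W // pw, pw, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7)       # (B, nT, nH, nW, pt, ph, pw, C)
    return x.reshape(B, -1, pt * ph * pw * C)   # (B, num_patches, token_dim)
```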

    Since OpenAI has not publicly disclosed the technical details of Sora, there may be other undisclosed technologies that have contributed to Sora’s breakthrough in video generation capabilities. It should be noted that Sora’s technical roadmap is far from mature. A large number of institutions are actively exploring and collaborating with each other; Microsoft, Google, Runway, Pika, Stanford, etc., have all iterated through multiple versions and are still moving forward. The era of Imaginative Intelligence is just beginning.

    Is Sora a World Model?

    Although the released video clips from Sora have attracted a lot of attention, OpenAI’s claim that Sora is essentially a World Simulator or a World Model has sparked considerable controversy. Among these critiques, LeCun’s is the most noteworthy.

    A world model is a system that comprehends the real world and its dynamics. By using various types of data, it can be trained in an unsupervised manner to learn a spatial and temporal representation of the environment, in which we can simulate a wide range of situations and interactions encountered in the real world. To create these models, researchers face several open challenges, such as maintaining consistent maps of the environment and the ability to navigate and interact within it. A world model must also capture not just the dynamics of the world but also the dynamics of its inhabitants, including machines and humans.
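
    This definition implies a minimal interface: encode observations into a state, roll the state forward under actions, and decode back to observations. The sketch below is our reading of that definition, not a standard API:

```python
from typing import Any, Protocol

class WorldModel(Protocol):
    """Minimal world-model interface implied above (illustrative only)."""

    def encode(self, observation: Any) -> Any:
        """Map raw observations (e.g., video frames) to a latent state."""
        ...

    def predict(self, state: Any, action: Any) -> Any:
        """Roll the latent state forward under an action (learned dynamics)."""
        ...

    def decode(self, state: Any) -> Any:
        """Map a latent state back to observations to simulate outcomes."""
        ...
```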

    Thus, can Sora be called a world model? We analyze this from two perspectives.

    Firstly, has Sora learned a world model? From the output results, most video clips are smooth and clear, without strange or jumpy scenes, and they align well with common sense. Sora can generate videos with dynamic camera movements: as the camera moves and rotates, characters and scene elements move consistently in a 3D environment. This implies that Sora already has the potential to understand and create in spatial-temporal space. Through these official demos, some have exclaimed that Sora has blurred the boundary between reality and the virtual for the first time in history. Therefore, we can say that Sora has learned some rules of real-world dynamics. However, upon closer observation of these videos, there are still some scenes that violate the laws of reality: for example, the process of a cup breaking, the incorrect direction of a treadmill, a puppy suddenly appearing and disappearing, ants having only four legs, etc. This indicates that Sora still has serious knowledge flaws regarding complex scenes, time scales, and so on. There is still a significant gap compared to a sophisticated physics engine.

    Secondly, does Sora represent the direction of world model development? From a technical perspective, Sora combines the advantages of large language models and diffusion models, representing the highest level of generative models. Scaling video generation models like Sora seems to be a promising approach to building a universal simulator for the physical world, which is a key step toward AGI. However, Yann LeCun has a different view. He believes that generative models need to learn the details of every pixel, making them too inefficient and doomed to fail. As an advocate for world models, he led Meta’s team to propose the Joint Embedding Predictive Architecture (JEPA) [45], believing that predictive learning in a joint embedding space is more efficient and closer to the way humans learn. The latest release of V-JEPA also demonstrates the preliminary results of this approach.
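
    To contrast the two camps in code: a generative model is trained to reconstruct pixels, whereas a JEPA-style objective predicts the embedding of a hidden target from the embedding of the visible context. A rough sketch of the latter, with all three modules as placeholders (the target encoder is typically an EMA copy and receives no gradients):

```python
import torch
import torch.nn.functional as F

def jepa_loss(context, target, context_encoder, target_encoder, predictor):
    """JEPA-style objective: predict the *embedding* of a hidden target
    region from the embedding of the visible context, rather than
    reconstructing pixels."""
    s_ctx = context_encoder(context)
    with torch.no_grad():
        s_tgt = target_encoder(target)      # representation target, no grads
    s_pred = predictor(s_ctx)
    return F.mse_loss(s_pred, s_tgt)        # loss lives in embedding space
```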

    In summary, Sora has gained a certain understanding of real-world dynamics. However, its functionality is still very limited, and it struggles with complex scenarios. Whether Sora ultimately succeeds or fails, it represents a meaningful attempt on the road to exploring world models; other diverse technical paths should also be encouraged.

    Impacts

    Sora and other video generation models have opened up new horizons for Imaginative Intelligence. PGC (Professional Generated Content) will widely adopt AI tools for production, while UGC (User Generated Content) will gradually be replaced by AI tools. The commercialization of AI-generated video tools will accelerate, profoundly impacting various social domains. In fields like advertising, social media, and short videos, AI-generated videos are expected to lower the barrier to short video creation and improve efficiency. Sora also has the potential to change traditional film production processes by reducing reliance on physical shooting, scene construction, and special effects, thereby lowering film production costs. Additionally, in the field of autonomous driving [46], [47], Sora’s video generation capabilities can provide training data, addressing issues such as long-tail data distributions and the difficulty of obtaining corner cases [12].

    On the other hand, Sora has also brought about social controversies. For example, Sora has raised concerns about the spread of false information. Its powerful image and video generation capabilities reach a level of realism that can deceive people, changing the traditional belief that “seeing is believing” and making it harder to verify the authenticity of video evidence. The use of AI to forge videos for fraud and to spread false information can challenge government regulation and lead to social unrest. Furthermore, Sora may lead to copyright disputes, as there could be potential infringement risks even in the materials used during the training process. Some also worry that generated videos could exacerbate religious and racial issues, intensifying conflicts between different religious groups, ethnicities, and social classes.

    TAO to the Future of Imaginative Intelligence

    Imaginative Intelligence. On the path to achieving imaginative intelligence, Sora represents a significant leap forward in AI’s ability to visualize human imagination on a plausible basis. Imaginative intelligence, the highest of the three levels of intelligence, goes beyond learning data, understanding texts, and reasoning; it deals with high-fidelity visual expressions and intuitive representations of imaginary worlds. After ChatGPT made advances in linguistic intelligence through superior text comprehension and logical reasoning, Sora excels at transforming potential creative thoughts into visualized scenes, giving AI the ability to understand and reproduce human imagination. This achievement not only provides individual creators with a quick way to visualize imaginary worlds, but also creates a conducive environment for collective creativity to collide and merge. It overcomes language barriers and makes it possible to merge ideas from different origins and cultures on a single canvas and ignite new creative inspirations. Sora has the potential to be a groundbreaking tool for humanity, allowing exploration of unknown territories and prediction of future trends in virtual environments. As technology continues to advance and its applications expand, the development of Sora and analogous technologies signals the beginning of a new era in which human and machine intelligence reinforce each other and explore the boundaries of the imaginary world together.

    Scenarios Engineering plays a crucial role in promoting the smooth and secure operation of artificial intelligence systems. It encompasses various processes aimed at optimizing the environment and conditions in which artificial intelligence operates, thereby maximizing its efficiency and safety [48]–[51]. With the emergence of advanced models like Sora, which specialize in converting text inputs into video outputs, not only are new pathways for generating dynamic visual content provided, but the capabilities of Scenarios Engineering are also significantly enhanced [52]–[54]. This, in turn, contributes to the improvement of intelligent algorithms through enhanced calibration, validation, analysis, and other fundamental tasks.

    Blockchain and Federated Intelligence. In its very essence, blockchain technology serves to underpin and uphold the “TRUE” characteristics, standing for trustable, reliable, usable, and effective/efficient [55]. Federated control is achieved based on blockchain technology, supporting federated security, federated consensus, federated incentives, and federated contracts [56]. Federated security comes from the security mechanisms of the blockchain, playing a crucial role in the encryption, transmission, and verification of federated data [57]. Federated consensus ensures distributed consensus among all federated nodes on strategies, states, and updates. Federated incentives are established in the federated blockchain for maintenance and management [58]; therefore, designing fast, stable, and positive incentives can balance the interests of federated nodes, stimulate their activity, and improve the efficiency of the federated control system. Federated contracts [59] are based on smart contract algorithms that automatically and securely implement federated control; they mainly function in access control, non-private federated data exchange, local and global data updates, and incident handling, as sketched below.
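
    As a toy illustration of the access-control and data-exchange duties listed above, the class below simulates a federated contract in plain Python; it is a conceptual sketch, not a real smart-contract framework or the cited systems’ actual design:

```python
class FederatedAccessContract:
    """Toy simulation of a federated contract's access-control and
    data-exchange duties (illustrative only)."""

    def __init__(self):
        self.nodes = set()   # registered federated nodes
        self.log = []        # append-only audit trail, mimicking a ledger

    def register(self, node_id: str) -> None:
        self.nodes.add(node_id)
        self.log.append(("register", node_id))

    def exchange(self, sender: str, receiver: str, payload: dict) -> dict:
        # Access control: only registered nodes may exchange data.
        if sender not in self.nodes or receiver not in self.nodes:
            raise PermissionError("unregistered federated node")
        self.log.append(("exchange", sender, receiver))
        return payload  # non-private federated data only; checks omitted
```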

    DeSci and DAO/TAO. The emergence of new ideas and technologies presents great opportunities for paradigm innovation. For example, the wave of decentralized science (DeSci) is changing the way scientific research is organized. As AI research enters rapid iteration, there are calls to establish new research mechanisms to overcome challenges such as the lack of transparency and trust in traditional scientific cooperation, and to achieve more efficient and effective scientific discoveries. DeSci aims to create a decentralized, transparent, and secure network for scientists to share data, information, and research findings. The decentralized nature of DeSci enables scientists to collaborate more fairly and democratically. DAO, as a means of implementing DeSci, provides a new organizational form for AI innovation and application [60], [61]. A DAO is a digitally-native entity that autonomously executes its operations and governance on a blockchain network via smart contracts, operating independently without reliance on any centralized authority or external intervention [62]–[64]. The attributes of decentralization, transparency, and autonomy inherent in DAOs provide an ideal ecosystemic foundation for developing imaginative intelligence. However, practical implementation has also shed light on certain inherent limitations of DAOs, such as power concentration, high decision-making barriers, and the instability of value systems [65]. As such, TRUE autonomous organizations and operations (TAO) were proposed to address these issues by highlighting their fundamental essence of being “TRUE” instead of emphasizing the decentralized attribute of DAOs [66]. Within the TAO framework, decision-making processes hinge upon community consensus, and resource allocation follows transparent and equitable rules, thereby encouraging multidisciplinary experts and developers to actively engage in complex and cutting-edge AI development. Supported by blockchain intelligence [67], TAO stimulates worldwide interest in, and sustained investment in, intelligent technologies by devising innovative incentive mechanisms, reducing collaboration costs, and enhancing the flexibility and responsiveness of community management. As such, TAO provides an ideal ecosystem for nurturing, maturing, and scaling up the development of groundbreaking technologies of imaginative intelligence.

    When will Sora or Sora-like AI technology show us the real road, or TAO, to Imaginative Intelligence that can be practically used for constructing a sustainable and smart society with intelligent industries for better humanity? We are still waiting, but more enthusiastically now.

    ACKNOWLEDGMENT

    This work was partially supported by the National Natural Science Foundation of China (62271485, 61903363, U1811463, 62103411, 62203250) and the Science and Technology Development Fund of Macau SAR (0093/2023/RIA2, 0050/2020/A1).
