
    Scalability and efficiency challenges for the exascale supercomputing system: practice of a parallel supporting environment on the Sunway exascale prototype system*

    2023-02-06

    Xiaobin HE, Xin CHEN, Heng GUO, Xin LIU, Dexun CHEN, Yuling YANG, Jie GAO, Yunlong FENG, Longde CHEN, Xiaona DIAO, Zuoning CHEN

    National Research Center of Parallel Computer Engineering and Technology, Beijing 100190, China

    Abstract: With the continuous improvement of supercomputer performance and the integration of artificial intelligence with traditional scientific computing, the scale of applications is gradually increasing, from millions to tens of millions of computing cores, which raises great challenges to achieving high scalability and efficiency of parallel applications on super-large-scale systems. Taking the Sunway exascale prototype system as an example, in this paper we first analyze the challenges of high scalability and high efficiency for parallel applications in the exascale era. To overcome these challenges, the optimization technologies used in the parallel supporting environment software on the Sunway exascale prototype system are highlighted, including the parallel operating system, input/output (I/O) optimization technology, ultra-large-scale parallel debugging technology, 10-million-core parallel algorithm, and mixed-precision method. The parallel operating system and I/O optimization technology mainly support large-scale system scaling, while the ultra-large-scale parallel debugging technology, 10-million-core parallel algorithm, and mixed-precision method mainly enhance the efficiency of large-scale applications. Finally, the contributions to various applications running on the Sunway exascale prototype system are introduced, verifying the effectiveness of the parallel supporting environment design.

    Key words: Parallel computing; Sunway; Ultra-large-scale; Supercomputer

    1 Introduction

    China’s high-performance computing (HPC) is entering the exascale era. Compared with the existing Sunway TaihuLight, China’s strongest supercomputer system, the number of computing cores of the exascale system will be greatly increased, to well over 10 million. The growth in the scale of the exascale system will release huge computing power. Accounting for the scalability and efficiency of parallel applications at such a huge parallel scale raises great challenges for the design of the exascale system. In addition, with the increasing demand for computing power in artificial intelligence (AI) applications, the fusion of AI and HPC applications has become an important breakthrough direction for HPC applications (Kurth et al., 2018; Jia et al., 2020). It is also a challenge to support the high scalability and efficiency of AI applications in the exascale era of HPC.

    The major challenges are as follows:

    1.Scalable management to support parallel applications

    In recent years, the performance improvement of HPC systems has depended increasingly on growth in the number of processor cores. The number of computing cores of Sunway TaihuLight (Fu et al., 2016) has exceeded 10 million, and the number of computing cores of the exascale supercomputer will be far greater. Therefore, scalable management is required to flexibly support parallel applications, achieving the partitioning, startup, detection, low-power control, and stopping of parallel programs running on tens of millions of cores, to ensure the high efficiency of application management.

    2.High-concurrency input/output(I/O)to support parallel applications

    In the exascale era, due to the unprecedented application scale of HPC, the amount of data generated while an application runs also increases explosively. The number of processes performing parallel data access may reach tens of thousands or even tens of millions. It is difficult for a traditional single shared storage system to meet these application requirements. Therefore, it is necessary to design a data access technique with large capacity, high bandwidth, high concurrency, and low cost for exascale applications, which can flexibly support concurrent data access by parallel applications running on tens of millions of computing cores.

    3.Scalable debugging and tuning to support parallel applications

    In the exascale era, due to the huge application scale of HPC, it is common to encounter abnormal running of applications. In addition, to discover potential bottlenecks with respect to performance, scalability, and so on, it is necessary to collect massive amounts of application running information, and thus to conduct data sampling while the application runs (Lin et al., 2021). With the sharp increase in system scale and application complexity, the amount of debugging data also increases dramatically. Traditional methods are unable to meet the scalability and complexity requirements of parallel applications in the exascale era. Therefore, it is necessary to implement a debugging and tuning mechanism oriented to the characteristics of the exascale system, reducing the interference of data collection and analysis on applications and improving the efficiency of debugging and tuning in super-large-scale scenarios.

    4.Efficient parallel algorithms to support parallel applications running on tens of millions of computing cores

    In the exascale era, to maximize computing capability, HPC system developers often need to design complex heterogeneous on-chip storage and interconnection systems (Hluchy et al., 2020). Meanwhile, to support the super-large-scale system, it is necessary to build a complex network interconnection system. It is a great challenge to design parallel applications that can maximize the potential of the system in terms of computing and network interconnection. Therefore, it is necessary to refine the common requirements of HPC applications, provide a scientific computing parallel framework for specific supercomputing processors and interconnection network platforms, reduce programming complexity, and ensure the efficient running of applications.

    5.Efficient support for AI applications

    AI applications show a trend of integration with traditional HPC, which places higher requirements on HPC systems in the exascale era. The purpose is to adapt to the special configuration of precision and computing power in the field of AI, improve computing efficiency by reducing precision, and reduce the pressure of data on memory and communication. Therefore, providing an AI-oriented parallel application ecology to support the efficient running of large-scale AI applications on HPC systems is a necessity.

    As a representative in the field of supercomputing in China, the Sunway exascale prototype system (SEPS) is an exploration to verify the technology route of the next-generation Sunway supercomputing. The system, equipped with SW26010pro many-core processors and a self-developed high-speed network, is deployed at the National Supercomputing Center in Jinan. Considering the above five challenges, the system implements a collaborative design and optimization of the parallel supporting environment, including the parallel operating system, the storage system, the debugging and tuning system, the scientific and engineering parallel application framework, and the AI ecosystem, and a series of innovative technologies are proposed. The applicability of the above techniques to exascale supercomputers has been verified by the important application achievements recently acquired on SEPS.

    2 SEPS architecture

    SEPS is a small-scale verification system that is designed according to exascale application requirements and can be scaled up to the exascale level. The architecture of SEPS is similar to that of Sunway TaihuLight. As shown in Fig. 1, SEPS consists of the computing system, the computing network, the storage system, the management cluster, the application debugging server, and the application server. The computing system is built using SW26010pro processors, which are interconnected through the self-developed Sunway computing network. The storage system consists of the burst buffer and disk storage, provides global shared storage services for the computing nodes, and is interconnected with the whole system through the self-developed Sunway network. The management cluster provides management functions for the whole system. The application debugging server provides debugging and tuning services for applications. The application server provides services such as application compiling, submitting, and viewing for users.

    Fig. 1 Architecture of the Sunway exascale prototype system

    The system is equipped with the heterogeneous many-core processor SW26010pro, which contains 390 computing cores. The architecture of SW26010pro is shown in Fig. 2. All computing cores in SW26010pro are divided into six core groups (CGs), and each CG consists of one management processing element (MPE) and 64 computing processing elements (CPEs), providing powerful computing capabilities. The system achieves network interconnection through Sunway’s self-developed network, supports the remote direct memory access (RDMA) communication protocol, and has large network bandwidth, which supports high-speed message passing interface (MPI) communication and I/O for large-scale applications. Moreover, applications running on SEPS can be scaled up to 10 million computing cores.

    Fig. 2 Architecture of the many-core processor SW26010pro (CG: core group; CPE: computing processing element; DDR: double data rate; MPE: management processing element)

    3 Design of the parallel supporting environment

    The parallel supporting environment of SEPS provides a comprehensive basic parallel operating environment for parallel applications. As shown in Fig. 3, it is composed of the parallel operating system, the distributed storage system, the debugging and tuning system, the scientific computing parallel framework, and the AI ecosystem.

    Fig. 3 Components of the parallel supporting environment (DNN: deep neural network; ROFS: read-only file system; HADAFS: HADA file system)

    3.1 Parallel operating system

    The resource management for the whole SEPS system enables users to execute their jobs concurrently and achieves unified management, monitoring, and on-demand allocation of many-core computing nodes. The software keeps the system running even if some failure occurs, and helps control the energy consumption of applications. The structure of the parallel operating system is shown in Fig. 4.

    Fig. 4 Parallel operating system structure

    Job management adopts an adaptive multilevel parallel architecture to support the efficient management of user jobs in large-scale systems. It achieves efficient start-up and operation control of large-scale parallel jobs and provides users with rich functions, strong scalability, and a convenient job environment.

    Resource management provides efficient management and allocation of heterogeneous resources of different granularities for the large-scale system. It adopts a hierarchical parallel and interlayer pipeline control mode to replace the traditional large-scale one-to-many control with simpler, multiple parallel one-to-many controls. In this way, the control pressure on the master is reduced, and the scalability of the system is greatly improved.

    Usability management improves the fault-tolerant operation ability of the system and supports the reliable operation of large-scale applications through a fault-tolerance mechanism tailored to typical application characteristics.

    Power management adopts a job-driven power-capping control method to balance performance and energy issues effectively. With this method, the large-scale system can operate efficiently in terms of energy usage.

    3.2 Storage system

    SEPS carries a hybrid storage system consisting of a global file system (GFS), a burst buffer file system (HADAFS), and a read-only file system (ROFS). The storage architecture is shown in Fig. 5. The three file systems use different mount paths, so applications can choose different paths for storage as per their needs: GFS for general application data access, HADAFS for high-performance burst data storage, and ROFS for storage of AI application datasets or dynamic libraries.

    Fig. 5 Sunway storage architecture (CN: computing node; GFS: global file system; HADAFS: HADA file system; I/O: input/output; LWFS: lightweight file system; NVMe-SSD: nonvolatile memory express solid-state drive; ROFS: read-only file system)

    GFS is the most commonly used unit, adopting a forwarding architecture. The bottom-level GFS is built on dedicated I/O nodes and disk arrays based on the Lustre file system (Ma et al., 2012). The upper-level lightweight file system (LWFS) forwards I/O requests from computing nodes to the backend parallel file system and provides an interface compatible with portable operating system interface (POSIX) semantics. The LWFS server is deployed on the I/O forwarding node, and the client is deployed on the computing node. HADAFS is built on the nonvolatile memory express solid-state drives (NVMe-SSDs) of the I/O forwarding nodes to obtain high bandwidth, and provides a unified metadata view and a POSIX-like I/O interface. ROFS combines the SSDs of the I/O forwarding nodes with a read-only local virtual file system (VFS) on the computing node to support the large-scale burst-read requirements of AI software and applications.

    GFS: GFS is a globally unified storage system based on Lustre and compatible with the standard POSIX file system interfaces. Its client is mounted on the I/O forwarding node and provides global file-sharing services for computing nodes through the LWFS forwarding service. Lustre implements active-active hot standby at multiple levels of the network, equipment, and services, with strong reliability and availability and no single point of failure. The system implements a data distribution algorithm based on performance cognition (Yang et al., 2019), which can effectively avoid I/O interference between different applications and avoid faults or performance reduction of object storage target (OST) devices, ensuring that applications have better I/O performance. GFS supports quality-of-service (QoS) control of storage performance, and administrators can set performance limits for specific applications (Shi et al., 2017; Hua et al., 2019). This work is completely transparent to applications. The storage system also deploys the Beacon performance monitoring tool (Chen et al., 2020; Yang et al., 2022), which can monitor an application’s I/O mode and performance in real time and provide an important reference for application I/O optimization. Moreover, GFS supports dynamic allocation of storage resources according to application requirements (Ji et al., 2019).

    HADAFS: To meet the ultra-high-bandwidth data reading and writing requirements of some applications, SEPS deploys an SSD-based burst buffer and the self-developed burst buffer storage software HADAFS. HADAFS aggregates the NVMe-SSD storage space on multiple I/O forwarding nodes and provides it to users through resource groups. The format of the HADAFS interface is similar to that of the POSIX system call interface. HADAFS separates data management from data access; therefore, a set of data management tools named HADASH is designed to implement metadata query and data migration between Lustre and HADAFS. To make full use of the capacity, bandwidth, and performance of the NVMe-SSDs (Shi et al., 2017), HADAFS provides neither high-reliability capabilities nor a data redundancy mechanism. HADAFS supports laying data out on the nearest SSD of the I/O forwarding node and allocates resources to applications based on groups of I/O forwarding nodes. The administrator can dynamically reorganize the groups according to application needs, so as to achieve better resource utilization.

    ROFS: With the development of emerging applications such as AI and data analysis, some applications need to repeatedly read a large number of files and therefore require a high level of concurrent read performance. However, it is difficult to meet these requirements with a traditional parallel file system. In response to this problem, ROFS is designed and developed based on Internet Small Computer Systems Interface (iSCSI) technology. It directly maps part of the SSD space to the computing node, providing a remote local disk with POSIX-compatible interfaces for applications. For each computing node, the application’s reading process is equivalent to local disk access, and metadata performance is significantly improved. When running AI applications on the whole system, the data reading performance for the AI algorithm library and dataset is improved by more than 100 times compared with the original version. Since ROFS is exported in read-only mode, multiple computing nodes correspond to a fixed server, and the mapping between computing nodes and servers is static.

    3.3 Debugging and tuning subsystem

    The debugging and tuning subsystem provides developers with tools to develop large-scale applications in less time (Fig. 6). The debugger, the parallel debugging tool, and the large-scale lightweight debugging tool make up the debugging part; they support single-node applications, medium-scale applications, and large-scale applications, respectively. The tuning part includes a job-level performance monitoring tool, which can quickly obtain a performance overview of a job, and a performance monitoring library, which provides fine-grained analysis capabilities.

    Fig. 6 Overview of the debugging and tuning subsystem (API: application programming interface)

    The debugger provides a unified debugging view of the MPE and CPEs through thread abstraction, supporting source- and instruction-level program execution control. The parallel debugging tool is built on the debugger and can support fine-grained execution control for thousands of parallel processes. The large-scale lightweight debugging tool locates abnormal processes by clustering important process states and can analyze applications with tens of millions of computing cores in less than 1 min.

    The job-level performance monitoring tool is available to users in the form of a job submission option, without recompiling the application. The performance monitoring library provides rich application programming interfaces (APIs) for obtaining the running information of the application, and supports efficient queries for floating-point operations per second (FLOPS), instructions per cycle (IPC), cache miss ratio, and so on.
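    As a hypothetical illustration (the counter names and function below are our own, not the real library API), metrics such as IPC and cache miss ratio are simple ratios derived from raw hardware counter values:

    ```python
    def derive_metrics(counters):
        """Derive IPC and cache miss ratio from raw counter readings.
        Counter names here are illustrative placeholders."""
        ipc = counters["instructions"] / counters["cycles"]
        miss_ratio = counters["cache_misses"] / counters["cache_refs"]
        return round(ipc, 2), round(miss_ratio, 2)

    # A process that retired 3M instructions in 2M cycles, missing 5% of cache refs
    print(derive_metrics({"instructions": 3_000_000, "cycles": 2_000_000,
                          "cache_refs": 100_000, "cache_misses": 5_000}))  # → (1.5, 0.05)
    ```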

    3.4 Scientific computing parallel framework

    To reduce the programming complexity caused by the heterogeneous many-core architecture of SEPS, a scientific computing parallel framework is designed to meet the common requirements of science and engineering computing. The framework provides a template or paradigm of multilevel parallel programming for applications. As shown in Fig. 7, the framework is composed mainly of two parts. One part is a domain-oriented parallel programming framework, including the SW-Gromacs software for molecular dynamics simulation (Berendsen et al., 1995; Lindahl et al., 2001) and the SW-Vina software for molecular docking simulation (Trott and Olson, 2009). The other part is the kernel-level parallel programming framework for common kernels in polymorphic applications, including the tensor math library, the sparse math library, and the parallel algorithm library. The functions of the framework are as follows:

    Fig. 7 Scientific computing parallel framework

    1. The domain-oriented parallel programming framework enables users to easily and efficiently migrate applications developed in certain domains to the many-core architecture. For example, SW-Gromacs is developed based on the open-source software Gromacs (Berendsen et al., 1995; Lindahl et al., 2001). This software supports basic dynamics-related algorithms, including Newtonian mechanics, energy minimization, and regular pattern analysis. SW-Vina is designed based on the open-source molecular docking software Vina (Trott and Olson, 2009). This software supports the simultaneous docking of multiple ligands. In addition, it provides many-core optimized versions of multiple docking functions.

    2. The tensor math library supports the efficient many-core implementation of common tensor operations, such as general (dense) matrix multiply (GEMM) and tensor transpose, and provides common interfaces with the standard mathematical format.

    3. The sparse math library supports the efficient many-core implementation of commonly used sparse algebra operations, such as sparse matrix-vector multiplication (SpMV) (Merrill and Garland, 2017) and sparse general matrix-matrix multiplication (SpGEMM) (Buluc and Gilbert, 2012), and provides common interfaces with various sparse matrix storage formats, including compressed sparse row (CSR), compressed sparse column (CSC), and coordinate (COO) formats.
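    To make the CSR format concrete, here is a minimal, illustrative SpMV sketch in Python (the function name and layout are our own; the library itself is a many-core implementation, not this serial loop):

    ```python
    def csr_spmv(row_ptr, col_idx, vals, x):
        """y = A @ x for a matrix A stored in CSR form:
        row_ptr[i]..row_ptr[i+1] indexes the nonzeros of row i."""
        y = [0.0] * (len(row_ptr) - 1)
        for i in range(len(row_ptr) - 1):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += vals[k] * x[col_idx[k]]
        return y

    # The 2x2 matrix [[1, 2], [0, 3]] in CSR form
    row_ptr, col_idx, vals = [0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0]
    print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0]))  # → [3.0, 3.0]
    ```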

    4. The parallel algorithm library provides advanced parallel algorithms, such as the dynamic load-balancing algorithm and the discrete memory access optimization algorithm. Moreover, generic interfaces and usage examples for these algorithms are provided.

    The parallel computing framework provides a parallel programming model on SEPS for domain-related and common computing requirements. It hides the complexity of the underlying implementation on the SW26010pro processor and network, effectively reducing the difficulty of porting and optimizing code, accelerating heterogeneous mapping, and enabling the efficient operation of multifield applications on the many-core processor.

    3.5 SW AI ecosystem

    The SW AI ecosystem is built on massively scalable AI frameworks (SWMind and SWPyTorch) and SWDNN; its composition is shown in Fig. 8. Based on the system software and runtime, the AI ecosystem provides model-developing tools and an efficient, scalable running environment for AI applications.

    Fig. 8 The position of the AI ecosystem in Sunway software (AI: artificial intelligence; DNN: deep neural network; SW: Sunway; Sys.: system)

    1. SWDNN. SWDNN is an accelerated library for DNNs based on SEPS (Liu S et al., 2021). It provides highly optimized 32-bit and 16-bit floating-point (FP32 and FP16, respectively) function interfaces for frequently used operators in DNN applications, including convolution, normalization, pooling, and activation functions. By invoking the SWDNN library, users can focus on the construction, training, and application of NNs without spending time on performance optimization. Users need only replace the functions of the original program with the corresponding operators in the SWDNN library to achieve model acceleration. The following optimization techniques are developed in SWDNN:

    On-chip memory access optimization: Since the memory access bandwidth between CPEs in a CG is higher than that between a CPE and the main memory, we need to make full use of on-chip communication to obtain useful data. In many DNN operators, row and column broadcasts within a CG are effective ways to share data. In addition, the double buffering technique is widely used to overlap communication with computation. We also optimize malloc specifically to ensure that the assigned data addresses are aligned, achieving efficient memory access.
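    The double buffering idea can be sketched as follows. This is a synchronous Python analogy: the `load` and `compute` callbacks stand in for asynchronous DMA gets and CPE kernels, so no real overlap happens here; only the two-buffer rotation is shown.

    ```python
    def process_tiles(tiles, load, compute):
        """Double-buffered loop: while one buffer is computed on,
        the next tile is loaded into the other buffer."""
        if not tiles:
            return []
        results = []
        buf = [None, None]
        buf[0] = load(tiles[0])                        # prefetch the first tile
        for i in range(len(tiles)):
            if i + 1 < len(tiles):
                buf[(i + 1) % 2] = load(tiles[i + 1])  # would be an async DMA get
            results.append(compute(buf[i % 2]))        # compute on the current buffer
            # real CPE code would wait here for the async load to finish
        return results

    print(process_tiles([1, 2, 3], load=lambda t: t * 10, compute=lambda b: b + 1))  # → [11, 21, 31]
    ```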

    On-chip CG memory sharing: Typically, the optimized operators in SWDNN are designed based on a single CG. However, for some special operators, such as the “embedding” and “bmm” operators, the required memory far exceeds the capacity of a single CG. To solve this problem, we develop a CG memory sharing model; i.e., one MPE occupies the memory of the entire processor, and all CPEs on the whole processor are jointly scheduled. Assuming that N CGs are used, we need to divide the computation tasks into N×64 components to achieve high-efficiency parallelism.
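    A minimal sketch of that task division, assuming a near-even 1D decomposition of the work items (the real scheduler may partition along other dimensions):

    ```python
    def partition_tasks(total_work, n_cgs, cpes_per_cg=64):
        """Split total_work items into n_cgs * 64 near-equal chunks,
        one chunk per CPE across the jointly scheduled core groups."""
        n_chunks = n_cgs * cpes_per_cg
        base, rem = divmod(total_work, n_chunks)
        # the first `rem` chunks each take one extra item
        return [base + (1 if c < rem else 0) for c in range(n_chunks)]

    sizes = partition_tasks(1000, n_cgs=2)  # 2 CGs -> 128 CPE-sized chunks
    print(len(sizes), sum(sizes), max(sizes) - min(sizes))  # → 128 1000 1
    ```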

    2. SWMind. For simplicity, efficiency, and ease of use, we also design a lightweight deep learning framework, SWMind, which features simple interfaces, optimized communication, and a lightweight mode. Most importantly, the framework provides many-core accelerated operators and supports large-scale parallelism on SEPS. By invoking the designed efficient operators, users can easily construct their own NN models and write various custom operators. Compared with existing frameworks, such as TensorFlow and PyTorch, SWMind has higher performance due to the elimination of nonessential modules.

    3. SWPyTorch. Similarly, the main mission of SWPyTorch on SEPS is to achieve many-core acceleration of PyTorch and scalability on large-scale computing nodes. To achieve many-core acceleration, the original serial implementation is replaced by the corresponding operators in the SWDNN library. The backend supporting distributed data parallel (DDP) in SWPyTorch is MPI. With DDP support, both model parallelism and data parallelism are achieved. Moreover, we modify the implementation of Megatron (Shoeybi et al., 2019), a natural language processing (NLP) model, to enable it to run on the Sunway platform with SWPyTorch and achieve efficient, large-scale hybrid model- and data-parallel pretraining with mixed precision.

    4 Optimization technologies

    With the expansion of system scale in the exascale era, achieving high scalability and efficiency has become an important challenge for the efficient operation of applications. As a bridge between applications and the supercomputing system, the parallel supporting environment has become the key to overcoming this challenge. Therefore, the parallel supporting environment on SEPS provides comprehensive support for applications in terms of scalability and efficiency.

    4.1 Scalability optimizations of the exascale parallel supporting environment

    Scalability optimization includes the I/O optimization technology and the debugging technique for large-scale parallel applications. The goal of I/O optimization is to provide diversified storage options and solve the problem of concurrent data reading and writing for parallel applications running on tens of millions of computing cores. The debugging technique replaces traditional debugging tools, which cannot support parallel debugging of applications running on tens of millions of computing cores.

    4.1.1 I/O optimization

    The data forwarding software used by the Sunway series supercomputers is LWFS, whose server runs on the data forwarding node and whose client runs on the computing node. Using Filesystem in Userspace (FUSE) to support Linux-standard file system interfaces, LWFS has the advantage of high compatibility. In this design, I/O requests from applications running on the computing nodes must first enter the kernel FUSE module and then be transferred from kernel mode to user mode. In practice, the two switches between kernel mode and user mode, together with the memory copies, lead to a large software overhead. Therefore, we propose LIBIO, a user-mode direct data access library for LWFS users (Fig. 9).

    Fig. 9 Software architecture of LWFS LIBIO (FUSE: Filesystem in Userspace; HPC: high-performance computing; IO: input/output; LWFS: lightweight file system; VFS: virtual file system)

    LIBIO calls the functions of the LWFS client port in a library manner and includes an application request intercepting component and a standard LWFS client port component. When the application is running, the request intercepting component intercepts all I/O requests of the application, transfers them to the LWFS client port integrated into LIBIO, and then sends them to the server port for execution. When LIBIO is used, the user does not need to modify the code; the user needs only to link the LIBIO library at the compilation stage. Moreover, the access mode of the running application is exactly the same as that of the traditional kernel file system. Through the LIBIO library, the single-process bandwidth of a computing node is more than doubled, and the aggregate bandwidth of a single computing node is increased by more than five times.
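    Very loosely, the interception idea amounts to routing the application's I/O calls to a user-space client object instead of through the kernel. The sketch below is entirely hypothetical (class and method names are our own, and real LIBIO works by link-time interposition in C, not a Python object):

    ```python
    class UserModeIOClient:
        """Hypothetical stand-in for the LWFS client port linked into LIBIO:
        I/O calls land here directly, never entering the kernel FUSE path."""
        def __init__(self):
            self.store = {}  # path -> bytes, simulating the forwarding backend

        def write(self, path, data):
            self.store[path] = self.store.get(path, b"") + data
            return len(data)  # POSIX-style: number of bytes written

        def read(self, path):
            return self.store.get(path, b"")

    # The intercepting component would transparently redirect the app's
    # open/read/write calls to a client like this one.
    client = UserModeIOClient()
    n = client.write("/lwfs/out.dat", b"hello")
    print(n, client.read("/lwfs/out.dat"))  # → 5 b'hello'
    ```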

    Facing the ever-growing I/O requirements of applications, optimization schemes have been designed for different I/O modes in the new generation of the Sunway storage system. A new file system, HADAFS, is designed and developed. The software stack of HADAFS is loosely coupled with the traditional VFS, relaxing POSIX semantics and retaining only the necessary interfaces. The main features of HADAFS are as follows: (1) adopting a flat data organization method, which breaks the limitation of the traditional directory tree structure and achieves a high degree of scalability; (2) using a RocksDB-based key-value database to store metadata, which significantly improves metadata performance; (3) separating data access from data management and streamlining storage semantics, which yields better I/O performance; (4) loose coupling between the software stack and the traditional VFS, which guarantees a simple extension interface and strong flexibility in cache management and data management. Considering the hierarchical data format 5 (HDF5), network common data form (NetCDF), and other file formats widely used in scientific computing, we also adapt HADAFS to support reading and writing HDF5 and NetCDF files.

    For massive data read scenarios in AI applications, we develop ROFS. Data updates for ROFS are done by users with distributed dedicated tools. The mounting of ROFS is completed automatically by the job scheduling system: ROFS is mounted before the job starts and unmounted after the job completes. Users need only use a fixed mount point to access ROFS. With the help of ROFS, the load speed of the dynamic libraries of the AI ecosystem in large-scale applications is 100 times higher than that of the GFS scheme based on Lustre and LWFS.

    4.1.2 Scalable debugging technique for parallel applications

    As the scale of parallel applications grows, it becomes more difficult and time-consuming to locate errors that occur during runtime. Many attempts have been made to address this problem, but current advances in debugging techniques are far outpaced by the increasing need for complex debugging, and the gap between debugging tools and debugging requirements is growing ever wider.

    We propose a method that can quickly locate abnormal processes in very-large-scale applications (Fig. 10). Our approach includes mainly two aspects: (1) the important state data of the program are obtained through the baseboard management controller (BMC) interface; (2) abnormal processes are located using our analysis strategies. The BMC interface provides access to hardware registers and can obtain data on the entire system quickly through a hierarchical architecture. We can obtain more data by extending the interface to more registers, such as the power consumption, the performance monitoring unit (PMU), and the status of the memory components on the nodes where the process resides. By optimizing the set of registers we focus on, we are able to collect data from tens of millions of computing cores in a few seconds (Peng et al., 2022).

    Fig. 10 Main idea of the very-large-scale debugging method (BMC: baseboard management controller)

    The design of the analysis strategy depends on the metrics we choose. During the running of the application, there is a corresponding program execution space, which contains a large number of program execution states; this space can reflect program anomalies. Therefore, we can locate abnormal processes by analyzing anomalies in certain metrics. The program counter (PC) value, which indicates where the process is currently running, is the metric we choose. Our analysis strategy includes two parts: vertical and horizontal. In the vertical aspect, we analyze the abnormal PC change sequence; in the horizontal aspect, we cluster the PC values across different task processes to find the abnormal classes.
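    The horizontal (clustering) part can be illustrated with a toy sketch that groups processes by an exact sampled PC value and flags tiny clusters as suspects. The real tool clusters PC ranges and sequences, so this is a simplification under assumed names:

    ```python
    from collections import Counter

    def find_outlier_ranks(pc_samples, min_cluster=2):
        """pc_samples: {rank: sampled PC value}. Ranks whose PC cluster is
        smaller than min_cluster are flagged as abnormal suspects."""
        cluster_sizes = Counter(pc_samples.values())
        return sorted(rank for rank, pc in pc_samples.items()
                      if cluster_sizes[pc] < min_cluster)

    # Ranks 0-5 all sampled at the same PC; rank 6 is stuck elsewhere
    samples = {rank: 0x4005A0 for rank in range(6)}
    samples[6] = 0x401F30
    print(find_outlier_ranks(samples))  # → [6]
    ```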

Our method achieves software/hardware co-design by combining the hardware BMC system. The proposed method has good scalability, and its capabilities can be extended by collecting other program execution states and designing new analysis strategies (Hofer and Mössenböck, 2014).

    4.2 Parallel efficiency optimizations of the exascale parallel supporting environment

For an exascale-oriented new-generation supercomputer, efficiently mapping the computing tasks of a specific application onto 10-million-level computing cores is the key to running large-scale applications with high efficiency. In this subsection, we propose ultra-large-scale parallel algorithms for the heterogeneous many-core architecture to ensure efficient implementation of this mapping.

    4.2.1 Multilevel parallelization mode

To adapt to the hierarchical memory design of the heterogeneous many-core architecture, a corresponding multilevel parallelization mode is proposed for large-scale parallel computing (Fig. 11).

Fig. 11 Illustration of the multilevel parallelization mode (CG: core group; CPE: computing processing element; MPE: management processing element; MPI: message passing interface)

At the first level, the computing tasks of the specific application are divided into multiple independent tasks according to physical or geometric characteristics. In this stage, the whole processor pool is divided into multiple subgroups, and each subgroup is responsible for computing one independent task. Usually, no communication between subgroups is required at this level. At the second level, i.e., process-level (MPI) parallelism, the independent task within a subgroup is mapped to its MPI processes, which generally involves communication between MPI processes. The third level is thread-level (CPE) parallelism, where computing tasks are further subdivided onto the CPEs within each MPI process. In this way, the basic mapping of computing tasks from the application level to the core level is completed.
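The three levels can be sketched as a nested mapping. The snippet below is a simplified illustration under assumed sizes (the task, subgroup, and item counts are made up; 64 CPEs per core group follows the Sunway design):

```python
# Illustrative sketch (not the paper's code) of the three-level mapping:
# level 1: independent tasks -> subgroups of the processor pool;
# level 2: a subgroup's task -> its MPI ranks;
# level 3: a rank's share -> chunks for its CPEs (64 per core group on Sunway).
def multilevel_map(n_tasks, ranks_per_subgroup, items_per_rank, cpes_per_rank=64):
    plan = []
    for task in range(n_tasks):                       # level 1: one subgroup per task
        for local_rank in range(ranks_per_subgroup):  # level 2: ranks inside the subgroup
            rank = task * ranks_per_subgroup + local_rank
            # level 3: split this rank's items into CPE-sized chunks
            chunk = max(1, items_per_rank // cpes_per_rank)
            cpe_chunks = [(c, chunk) for c in range(cpes_per_rank)]
            plan.append((task, rank, cpe_chunks))
    return plan

plan = multilevel_map(n_tasks=2, ranks_per_subgroup=4, items_per_rank=256, cpes_per_rank=64)
print(len(plan), plan[0][1], plan[-1][1])  # 8 entries, global ranks 0 through 7
```

Because subgroups own disjoint tasks, level 1 requires no inter-subgroup communication, which is what makes the first cut cheap at scale.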

    4.2.2 Adaptive load-balancing algorithm

At the process level (MPI), the possibility of computational load imbalance is high, especially for dynamic applications in which computation and data movement change over time. To solve this problem, an adaptive load-balancing algorithm is proposed, and the particle-in-cell (PIC) simulation (Madduri et al., 2011; Derouillat et al., 2018) is used as an example to illustrate it in detail.

For the PIC method, computational load imbalance arises easily because of the unbalanced particle distribution. The proposed adaptive load-balancing algorithm is illustrated in Fig. 12. In the original spatial partitioning method (subfigure on the left), each process tracks and computes the particles in its own range. The improved load-balancing method involves particle sharing, which keeps the number of particles computed by each process as equal as possible (different colors indicate different processes in the figure). Finally, the particles a process is responsible for are further distributed onto the CPE array for parallel computation.

Fig. 12 Illustration of the adaptive load-balancing algorithm (CPE: computing processing element) (References to color refer to the online version of this figure)
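The particle-sharing step can be sketched as redistributing per-rank particle counts toward the mean. The code below is our hedged illustration (the rank counts and the transfer scheme are assumptions, not the paper's implementation):

```python
# Minimal sketch of particle sharing: overloaded ranks hand surplus particles
# to underloaded ranks until every rank holds roughly the average count.
def balance_particles(counts):
    """Return a transfer list [(src_rank, dst_rank, n_particles), ...]."""
    avg = sum(counts) // len(counts)
    surplus = [[r, c - avg] for r, c in enumerate(counts)]
    donors = [s for s in surplus if s[1] > 0]
    takers = [s for s in surplus if s[1] < 0]
    moves = []
    for d in donors:
        for t in takers:
            if d[1] == 0:
                break
            n = min(d[1], -t[1])      # move as much as both sides allow
            if n > 0:
                moves.append((d[0], t[0], n))
                d[1] -= n
                t[1] += n
    return moves

# Four ranks with skewed counts; after the moves each holds 25 particles.
print(balance_particles([70, 10, 10, 10]))  # → [(0, 1, 15), (0, 2, 15), (0, 3, 15)]
```

A real implementation must also account for the cost of moving field data alongside the particles, which the sketch ignores.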

    4.3 Mixed-precision optimization for AI applications

As the scale of applications gradually increases, program performance is usually limited by bandwidth and memory; thus, we adopt a mixed-precision method (Micikevicius et al., 2018) to reduce the demand for both. The mixed-precision method refers to a mixture of single-precision float32 and half-precision float16, using half precision to store data in less memory and thereby reducing memory and bandwidth pressure. The floating-point data formats used on the SW26010pro processor follow the IEEE-754 standard; FP32 and FP16 are shown in Fig. 13.

Fig. 13 FP16 and FP32 (16- and 32-bit floating points, respectively) schemes

Regarding the mixed-precision method, two innovations are proposed:

First, we design an adaptive scaling method that dynamically adjusts the data so that the representation of the main data always stays within the range of FP16, keeping the error at a level similar to that of using only single-precision floating-point numbers. When the error r is greater than the maximum, i.e., “max,” we reduce the data; when the error r is less than the minimum, i.e., “min,” we enlarge the data (Table 1). This method effectively prevents data underflow and achieves a better balance between accuracy and efficiency.

    Table 1 Adaptive precision scaling

Second, we design a filter whose main function is to filter out overflow during the calculation. In our method, the percentage of overflow is less than 2%, so only a small part of the overflowed results is discarded, which has little effect on the final result.
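A minimal sketch of the two ideas together follows, assuming illustrative thresholds and power-of-two scale factors (the actual bounds used in Table 1 may differ):

```python
# Hedged sketch of adaptive scaling plus the overflow filter: halve or double a
# global scale until the data peak fits the FP16 range, then zero out the rare
# entries that still overflow. Thresholds and factors here are illustrative.
FP16_MAX = 65504.0  # largest finite IEEE-754 half-precision value

def scale_for_fp16(values, scale=1.0):
    """Adjust the global scale so the bulk of the data fits FP16."""
    peak = max(abs(v) for v in values)
    while peak * scale > FP16_MAX:                 # too large: "reduce the data"
        scale *= 0.5
    while peak * scale < 1.0 and scale < 2.0**20:  # too small: "enlarge the data"
        scale *= 2.0
    scaled = [v * scale for v in values]
    # filter step: the rare entries still outside the FP16 range are dropped
    kept = [v if abs(v) <= FP16_MAX else 0.0 for v in scaled]
    return kept, scale

kept, s = scale_for_fp16([1e5, 2.0, -3e5])
print(s)  # → 0.125
```

The scale is a single global factor, so it is divided out once after the FP16 compute; that is what keeps the error near the FP32-only level while preventing underflow.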

    5 Contributions to various applications

    5.1 Scientific and engineering computing applications

Benefiting from the high performance of SEPS and the support of the proposed parallel supporting environment, hundreds of applications in various fields have been deployed on the system. These applications show great scalability potential on large-scale systems. Fig. 14 shows the expected scalability and efficiency of several outstanding applications running on SEPS, and deployment of these applications on a much larger similar system has proved that the expected scalability and efficiency are reached (Gu et al., 2021; Li F et al., 2021; Liu Y et al., 2021; Shang et al., 2021a, 2021b, 2021c; Xiao et al., 2021; Ye et al., 2022). Among them, the random quantum circuit (RQC) simulation, the Raman spectra simulation, and the tokamak plasmas simulation were all shortlisted for the 2021 Gordon Bell Prize owing to significant progress in their fields. Two of these applications are overviewed here, and the details of the quantum simulator are elaborated in Section 5.3.

Fig. 14 Expected scalability and efficiency of multiple outstanding applications on SEPS (CPE: computing processing element; KMC: kinetic Monte Carlo; MPE: management processing element; SEPS: Sunway exascale prototype system; SC: International Conference for High Performance Computing, Networking, Storage and Analysis). References to color refer to the online version of this figure

1. Whole-volume magnetic confinement toroidal plasmas simulation (Xiao et al., 2021)

A large-scale simulation of magnetically confined toroidal plasmas with 1.5 trillion particles is performed on SEPS. It is the first time that such an unprecedented high-resolution evolution of six-dimensional (6D) electromagnetic fully kinetic plasmas has been presented, and this work can also investigate edge micro-instabilities directly. The application requires highly concurrent data reads and writes at runtime. The HADAFS proposed in this paper is used to build a high-speed file system based on SSDs, which supports burst data output, ensures application scalability, and reduces the time cost of the I/O segment. In addition, the application uses the proposed scalable debugging technology, effectively supporting the expansion of the parallel scale. Meanwhile, the multilevel parallelism mode (process level and thread level) is used to improve parallel efficiency. Finally, the simulation is scaled to about 40 million computing cores, with a strong-scaling efficiency of up to 87.5% and a weak-scaling efficiency of up to 95.6%.

2. Extreme-scale ab initio quantum Raman spectra simulation (Shang et al., 2021a)

An accurate and massively parallel ab initio Raman spectra simulation of a real biological system (consisting of 3006 atoms) is performed on SEPS with excellent strong and weak scaling. This work also shows the possibility of using the quantum mechanical (QM) method for virtual drug screening. The main challenge of this application is parallelizing the perturbation method. The proposed multilevel parallelism mode is used to construct a three-level parallelism strategy, which achieves large-scale application expansion and ensures a parallel efficiency of more than 80%. Moreover, a similar adaptive load-balancing algorithm is used to reduce the overhead caused by the unbalanced batch distribution, further improving parallel efficiency. In the algorithm, the running time and space distribution of each batch are collected in the first iteration. Then, the distribution of all batches across all processes is recalculated according to the result of the first iteration. Finally, load balancing is achieved by keeping the distance between batches within a process as small as possible and the running times of the processes as close as possible. In both strong- and weak-scaling tests, the parallel efficiency exceeds 80% when the number of computing cores reaches about 20 million.
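One way to realize the recalculation step is a greedy longest-processing-time (LPT) assignment of measured batch times to processes. The sketch below is our hedged illustration (the batch times and process count are made up), not the authors' algorithm:

```python
# Hedged LPT sketch of the redistribution step: each next-largest batch (by the
# runtime measured in the first iteration) goes to the currently lightest
# process, keeping per-process totals as close as possible.
import heapq

def redistribute(batch_times, n_procs):
    """Return {proc: (total_time, [batch ids])} after greedy LPT assignment."""
    heap = [(0.0, p, []) for p in range(n_procs)]   # (load, proc, batches)
    heapq.heapify(heap)
    for b, t in sorted(enumerate(batch_times), key=lambda x: -x[1]):
        load, p, batches = heapq.heappop(heap)      # lightest process so far
        batches.append(b)
        heapq.heappush(heap, (load + t, p, batches))
    return {p: (load, batches) for load, p, batches in heap}

plan = redistribute([9.0, 5.0, 5.0, 4.0, 3.0, 2.0], n_procs=2)
print(sorted(round(load, 1) for load, _ in plan.values()))  # → [13.0, 15.0]
```

With a total of 28 time units over two processes, the ideal split is 14/14; the greedy plan reaches 13/15, close enough that the stragglers dominate far less than in the unbalanced case.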

    5.2 AI applications

1. Wu Dao 2.0

Wu Dao (meaning enlightenment in Chinese) 2.0, trained on the SW HPC system, was the world’s biggest natural language processing (NLP) model when it was released. With 1.75 trillion parameters, Wu Dao 2.0 is 10 times larger than generative pretrained transformer (GPT) 3, and it efficiently scales to tens of millions of cores on the whole HPC system. In addition, Wu Dao 2.0 explores the scalable boundary of pretrained models, and is close to passing the Turing test on multiple tasks such as image generation, machine Q&A, and image description. Wu Dao 2.0 is multimodal: it consists of WenLan, WenYuan, WenHui, and WenSu, and provides different skills, including text generation, image recognition, and image generation. WuDaoCorpora is a super-large-scale Chinese corpus for pretraining language models. Wu Dao 2.0 acquires skills covering both Chinese and English by studying 4.9 terabytes (TB) of images and texts, including 1.2 TB of Chinese and English texts. Wu Dao 2.0 can also learn from text and images together and tackle tasks that involve both types of data (something GPT-3 cannot do).

2. Fusion of AI and HPC (Shang et al., 2021c; Li MF et al., 2022)

The fusion of AI and HPC has made new progress. In a recent study, a deep convolutional neural network (CNN) was used to fit the quantum multi-body variational wave function, implementing the quantum multi-body simulation of the two-dimensional (2D) highly frustrated J1-J2 Heisenberg model. It is worth emphasizing that in this study the optimized ROFS supported in the parallel supporting environment improves the efficiency of loading TensorFlow dynamic libraries by approximately 15-100 times compared with the original version. Finally, this study investigates quantum systems with different lattice sizes (including 10×10, 16×16, and 24×24), as shown in Fig. 15. The parallel scalability is expected to extend to 31 850 000 computing cores, and the parallel efficiency of the CNN calculation reaches 92.2% with lattice size 24×24.

Fig. 15 Expected scalability test for CNN calculation (CNN: convolutional neural network; CPE: computing processing element; MPE: management processing element)

Another research team has proposed the TensorKMC method, which combines atomic kinetic Monte Carlo (AKMC) with a neural network potential (NNP, used for potential energy prediction), and has performed a dynamic simulation of 54 trillion atoms on SEPS. In multi-layer neural network computing, each layer reads its input from main memory and writes its output back once, so the overall speed is limited by the memory access speed. To solve this problem, a big-fusion operator is constructed using the proposed two-level parallelization mode. The tasks of multiple layers are merged into one kernel, so that data are read only at the first layer and written only at the last layer, and all computing tasks are mapped to CPEs. The big-fusion operator thus transforms the original memory-intensive task into a computation-intensive one. Finally, the application scales to more than 20 million computing cores with a good parallel efficiency of 82%.
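The effect of the big-fusion operator can be illustrated with a toy two-layer network: the fused version walks each element through all layers before producing output, so intermediates stay local. Everything below (the layer form, weights, inputs) is an assumption for illustration, not TensorKMC code:

```python
# Toy contrast between layer-by-layer evaluation (one main-memory round trip
# per layer) and a fused kernel (intermediates stay in the CPE's local memory).
def layer(x, w):
    return [max(0.0, xi * w) for xi in x]   # toy layer: scale + ReLU

def unfused(x, weights):
    for w in weights:
        x = layer(x, w)     # conceptually: load from / store to main memory each layer
    return x

def fused(x, weights):
    out = []
    for xi in x:            # one pass per element: intermediates never leave "LDM"
        for w in weights:
            xi = max(0.0, xi * w)
        out.append(xi)
    return out

print(fused([1.0, -2.0, 3.0], [2.0, 0.5]) == unfused([1.0, -2.0, 3.0], [2.0, 0.5]))  # → True
```

The two versions compute the same result; the fused form simply changes the memory traffic from once per layer to once per kernel, which is the memory-bound-to-compute-bound shift described above.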

    5.3 Quantum simulation exploration for new computing forms

SW_Qsim (Li F et al., 2021; Liu Y et al., 2021) is the first attempt of SW HPC in the field of quantum computing. It is a high-performance tensor-based simulator (Markov and Shi, 2008) for random quantum circuits (RQCs). It can simulate up to 10×10 (qubits) × (1+40+1) (depth) RQCs, the world’s largest RQC to be simulated on a classical supercomputer. In 2019, Google developed the quantum processor Sycamore (Arute et al., 2019) and claimed that it took about 200 s to sample a quantum circuit 1 million times on Sycamore, while the equivalent task would cost about 10 000 years on the world’s fastest supercomputer at that time, Summit. In 2018, the National Aeronautics and Space Administration (NASA) and Google implemented qFlex (Villalonga et al., 2020), a circuit simulator that was redesigned and reimplemented to efficiently use the graphics processing unit (GPU) accelerated Summit HPC architecture; it achieved a peak performance of 92% when simulating circuits of 7×7 (qubits) × (1+40+1) (depth). SW_Qsim uses the Projected Entangled-Pair States (PEPS) method (Guo et al., 2019, 2021) and three-dimensional contraction skills (Huang et al., 2020; Pan and Zhang, 2021) to perform quantum simulations and achieves a peak performance of >90% when simulating circuits of 10×10×(1+40+1), reducing the simulation time from 10 000 years to 304 s, which closes the “quantum supremacy” gap between quantum computers and classical supercomputers. The results show near-linear strong and weak scaling as the number of cores increases from 200 000 to 41 million under different circuits.

The realization of SW_Qsim on the new-generation Sunway supercomputer is due mainly to two innovations mentioned in the previous section, namely, the ultra-large-scale parallel algorithm and the mixed-precision method.

1. Ultra-large-scale parallel computing of tensor contraction

As the core of RQC simulation based on the PEPS method, contracting a closed quantum tensor network to a scalar is the bottleneck, and it poses the twin challenges of computing and storage. Taking the 10×10 (qubits) × (1+40+1) (depth) RQC simulation as an example, to solve the memory problem we adopt a trick in tensor contraction called “cut” (Villalonga et al., 2019): we cut the legs of the tensor network and divide the tensor contraction task into 3^26 embarrassingly parallel tasks, thus alleviating the storage pressure of the tensor network on the central processing unit (CPU). Second, we adopt thread-level parallelism on CPEs, assigning each task to a CPE for calculation, with an unprecedented parallel scale of 41 932 800 cores. The process is shown in Fig. 16.

Fig. 16 Three-level parallelism of quantum simulation (CPE: computing processing element; MPI: message passing interface)
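The “cut” trick can be illustrated on a toy contraction: fixing (slicing) a shared index turns one large contraction into independent per-slice tasks whose partial results sum to the full answer. The small matrices below stand in for the network's tensors (our sketch, not SW_Qsim code):

```python
# Toy demonstration of the "cut": slicing the contracted index s yields one
# independent task per index value; summing the partial results reproduces
# the full contraction, at a fraction of the per-task memory footprint.
def contract_full(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][s] * B[s][j] for s in range(k)) for j in range(m)]
            for i in range(n)]

def contract_with_cut(A, B):
    n, k, m = len(A), len(B), len(B[0])
    total = [[0.0] * m for _ in range(n)]
    for s in range(k):                      # each slice s is an independent task
        partial = [[A[i][s] * B[s][j] for j in range(m)] for i in range(n)]
        for i in range(n):                  # partials are summed (reduced, in parallel)
            for j in range(m):
                total[i][j] += partial[i][j]
    return total

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(contract_with_cut(A, B) == contract_full(A, B))  # → True
```

Cutting several legs multiplies the number of slices, which is how a single contraction becomes enough embarrassingly parallel tasks to occupy tens of millions of CPEs.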

2. Mixed-precision computation method in tensor multiplication

For each step of tensor contraction, we use the mixed-precision method via adaptive precision scaling (Fig. 17), which greatly reduces the calculation time. The mixed-precision method is used mainly in the core section, that is, the tensor multiplication. First, we convert the parts of the two input tensors that need to be multiplied into half precision and check whether they are within the range of FP16. If not, we scale the data and set a global variable to record the zoom factor. Second, we import the half-precision input data into the local data memory (LDM) through direct memory access (DMA), and use vectorization to accelerate the conversion of half-precision data to single precision for the calculations. Finally, the single-precision calculation result is used as the input for the next step to continue the loop.

Fig. 17 Flowchart of the mixed-precision method (DMA: direct memory access; FP32 and FP16: 32- and 16-bit floating points; SIMD: single instruction multiple data; CPE: computing processing element)
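The per-step flow can be modeled in a few lines with a scalar standing in for a tensor element (our hedged illustration; the DMA and SIMD stages are elided). The standard-library `struct` 'e' format provides the IEEE-754 binary16 round trip, and a zoom factor is recorded as described:

```python
# Hedged scalar model of the flow above: scale inputs into FP16 range
# (recording the zoom factor), store them half-precision, then widen back for
# the single-precision-style multiply and undo the scaling.
import struct

def to_half(x):
    """Round-trip x through IEEE-754 binary16 via struct's 'e' format."""
    return struct.unpack('e', struct.pack('e', x))[0]

def mixed_mul(a, b, fp16_max=65504.0):
    zoom = 1.0
    while max(abs(a), abs(b)) * zoom > fp16_max:
        zoom *= 0.5                                   # the "global" zoom factor
    ah, bh = to_half(a * zoom), to_half(b * zoom)     # half-precision storage
    return (ah * bh) / (zoom * zoom)                  # widened compute, unscaled

print(mixed_mul(2.0, 3.0))  # → 6.0
```

Storage and bandwidth are halved, while the small relative error from FP16 rounding is what the adaptive scaling keeps bounded.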

    6 Conclusions and future work

In the exascale era of HPC, scalability and efficiency are key to determining the application value of an exascale supercomputer, and the challenges grow rapidly with the expansion of the parallel system's scale. This paper introduces the design of the parallel application supporting environment of SEPS and analyzes the key software technologies that guarantee the scalability and efficiency of exascale applications. At present, various contributions have been made: nearly 30 applications are expected to scale efficiently to tens of millions of computing cores, among which three applications were shortlisted for the “2021 Gordon Bell Award,” the highest award in the field of HPC applications, five applications were accepted by SC 2021, and two applications were accepted by PPoPP 2022. Looking to the future, new complex applications reveal novel characteristics, such as computation and data movement varying with time, discreteness and sparseness caused by data non-locality, macro-scale complex task flow, micro-scale complex instruction flow, and mixed or variable precision. Given these new features, we will carry out research on novel parallel algorithms and on the design and optimization of parallel supporting software, to prosper application development in the field of HPC and improve the domestic supercomputer software ecosystem.

    Contributors

Xin LIU and Dexun CHEN designed the research. Heng GUO, Yuling YANG, and Jie GAO performed the simulations. Yunlong FENG and Longde CHEN analyzed the results. Xiaobin HE and Xin CHEN drafted the paper. Xiaona DIAO and Zuoning CHEN helped organize the paper. Xiaobin HE and Xin CHEN revised and finalized the paper.

    Compliance with ethics guidelines

Xiaobin HE, Xin CHEN, Heng GUO, Xin LIU, Dexun CHEN, Yuling YANG, Jie GAO, Yunlong FENG, Longde CHEN, Xiaona DIAO, and Zuoning CHEN declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are available from the corresponding authors upon reasonable request.
