
    Federated Learning Model for Auto Insurance Rate Setting Based on Tweedie Distribution

    2024-02-19 12:03:00

    Tao Yin, Changgen Peng, Weijie Tan, Dequan Xu and Hanlin Tang

    1 State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, China

    2 Guizhou Big Data Academy, Guizhou University, Guiyang, 550025, China

    3 Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, 550025, China

    4 College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China

    5 ChinaDataPay Company, Guiyang, 550025, China

    ABSTRACT

    In the assessment of car insurance claims, the claim rate presents a highly skewed probability distribution, which is typically modeled with the Tweedie distribution. The traditional approach to obtaining a Tweedie regression model involves training on a centralized dataset; when the data is provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos. The algorithm keeps sensitive data local and uses privacy-preserving techniques to compute the intersection of the two parties' data. After the shared entities have been determined, the participants train the model locally on the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption is introduced to exchange and update these intermediate parameters, collaboratively completing the joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of both parties' data without exchanging it. The assessment results of the scheme approach those of a Tweedie regression model learned from centralized data and outperform a Tweedie regression model learned independently by a single party.

    KEYWORDS

    Rate setting; Tweedie distribution; generalized linear models; federated learning; homomorphic encryption

    1 Introduction

    In recent years, there has been growing interest in the analysis of vehicle insurance data. Currently, many property and casualty insurance companies face a high combined cost ratio, with motor insurance accounting for a significant portion of the overall costs. In this context, usage-based insurance (UBI) for vehicles has emerged as a competitive product in the commercial vehicle insurance market. UBI premiums are determined based on specific vehicle usage behavior and the corresponding level of risk. Insurers collect data during the underwriting cycle to extract appropriate risk type parameters for the different driving behaviors and habits of insured vehicles. These parameters are then used to adjust the traditional commercial vehicle insurance premiums for the next cycle, ultimately determining differentiated premiums for the insured vehicles. However, there is currently no clear standard for the differentiated premium adjustment mechanism of vehicle UBI products. Existing products can only judge the risk type from a single driving parameter (e.g., mileage, driving speed), or combine multiple driving parameters to determine a comprehensive risk type [1].

    In the motor insurance industry, numerous individual risks must be classified according to their characteristics, with rates then determined for each risk category. The development of risk-based rate setting models for motor insurance can be divided into three stages: initial rate setting models, the popularity of generalized linear models (GLM), and the emergence of extended classes. Early actuarial models for motor insurance rate setting used additive and multiplicative models, with the former assuming an additive relationship between rate factors and the latter a multiplicative one. Since the late 20th century, GLMs [2] have become the industry standard for categorical rate setting in some countries, establishing a relationship between the mathematical expectations of response variables and predictor variables through a link function [3–5]. While GLMs have contributed to the development of non-life rate setting techniques, they have limitations when dealing with increasingly complex data with certain correlation structures, such as clustered, repeated, or stratified data, and when reflecting non-parametric effects of explanatory variables. Hastie et al. [6,7] proposed the generalized additive model (GAM) to analyse semi-parametric and non-parametric relationships between variables, which was further applied to the analysis of factors influencing the modelling of auto insurance claim frequency. For correlated structural data, random effects models based on GLMs have been introduced to improve data analysis accuracy and validity, with examples including linear mixed models (LME) and generalized linear mixed models (GLMM) [8–10].

    The applications of GLMs in the car insurance field include risk assessment, claims prediction, premium pricing, and loss fitting. These applications can help car insurance companies better manage and control risks and improve business efficiency and profitability. The development of GLMs in the car insurance field therefore provides more accurate and reliable modeling tools for insurance companies.

    Traditional motor insurance pricing is only related to fixed factors such as age, gender, mileage and the price of the vehicle. In practice, however, there are also dynamic data on users and vehicles that affect motor insurance pricing. In the auto insurance claims process, insurance companies have an urgent need for external data due to their limited understanding of personnel information and the low quality of information collection. Insurers are therefore beginning to work with external data vendors to fuse internal and external data and develop motor insurance risk control models using machine learning algorithms.

    Risk control models are statistical models used to estimate the risk associated with an event or situation. In the context of car insurance, risk control models can be used to predict the likelihood of a claim and determine an appropriate premium. These models are often based on factors such as driver age, driving record, vehicle make and model, and geographical location.

    In the auto insurance risk control scenario, joint modelling refers to a project in which an insurer and an external data vendor collaborate: the insurer provides samples with risk performance to the data vendor, the vendor matches its feature data to develop a model, and the insurer then accesses the model to apply a risk strategy. With the tightening of regulations on personal data privacy and the increasing reliance of insurers on external data, joint modelling is gaining importance.

    However, in recent years, countries around the world have attached increasing importance to data privacy protection, and laws and regulations for privacy protection have been introduced successively [11]. Original data from different institutions or individuals cannot be collected and used at will. The constraints of these laws and regulations have led to the emergence of data islands, where data sources cannot exchange data, making the traditional learning method of regression model training through data concentration impractical.

    To overcome the challenges brought by data privacy protection, many new technologies and algorithms have emerged, such as federated learning and homomorphic encryption. Federated learning (FL) [12] can perform model training between multiple data sources without leaking personal data, allowing different institutions to share and aggregate data without revealing sensitive data. Homomorphic encryption [13] allows certain specific calculations, such as addition and multiplication, to be performed while the data remains encrypted, making data sharing more secure.

    Federated learning is widely used in scenarios that require data privacy protection, such as healthcare, financial services, and military fields. In regression problems, federated learning can be used to predict numerical target variables, such as stock prices or disease incidence rates [14,15].

    To address the above issues, a Tweedie generalized linear regression-based joint modelling scheme for federated learning car insurance rate setting is proposed. The scheme performs joint modelling of car insurance rate setting while protecting the privacy of user and vehicle data. All sensitive data is stored in the local institution to which the data belongs, and encryption-based user ID alignment ensures that the participants align the common user samples without any flow of raw data. The experimental results show that the scheme performs well in the quantitative analysis of car insurance pricing variables and user risks.

    2 Preliminaries

    2.1 Federated Learning

    Federated learning is essentially a cryptographic distributed machine learning framework that enables data sharing and joint modelling on the basis of data privacy, security and legal compliance. The core idea is that when multiple data sources participate in model training, only the intermediate parameters of the model are exchanged for joint model training, without any flow of raw data; the raw data can be kept local. This approach achieves a balance between data privacy protection and data sharing and analysis, i.e., a "data available but not visible" data application model.

    Vertical federated learning, i.e., sample-aligned federated learning, is suitable for scenarios where there is a large overlap in user space between participants and little or no overlap in feature space, as shown in Fig. 1. The training process of vertical federated learning generally consists of two parts: first, aligning entities that have the same ID but are distributed across different participants, and then training a cryptographic model based on these aligned entities.
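    The entity alignment step can be illustrated with a deliberately simplified sketch, assuming both parties share a salt out of band and exchange only salted SHA-256 hashes of user IDs. This is a minimal stand-in: production systems such as FATE use stronger protocols (e.g., RSA blind-signature PSI), since plain salted hashing lets a party probe a small ID space.

```python
import hashlib

def hashed_ids(ids, salt):
    """Map raw user IDs to salted hashes so raw IDs never leave a party."""
    return {hashlib.sha256((salt + str(i)).encode()).hexdigest(): i for i in ids}

def align_entities(ids_a, ids_b, shared_salt):
    """Both parties exchange only hashes; each recovers the shared IDs locally."""
    ha, hb = hashed_ids(ids_a, shared_salt), hashed_ids(ids_b, shared_salt)
    common = set(ha) & set(hb)
    return sorted(ha[h] for h in common)

# Party A insures users 1..6; party B holds driving data for users 4..9.
print(align_entities(range(1, 7), range(4, 10), "demo-salt"))  # → [4, 5, 6]
```

    Neither party learns which of the other's users fall outside the intersection, which mirrors the guarantee described for the alignment phase.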

    Figure 1 :Vertical federated learning

    2.2 Federated Learning Framework

    The mainstream federated learning frameworks currently available include FATE (Federated AI Technology Enabler) by WeBank, PySyft by OpenMined, PaddleFL (Paddle Federated Learning) by Baidu, FedML by USC, and TFF (TensorFlow Federated) by Google [16–21].

    PySyft separates private data from model training using federated learning, differential privacy and cryptographic computation in major deep learning frameworks such as PyTorch and TensorFlow. PaddleFL is an open source federated learning framework based on PaddlePaddle, offering many federated learning strategies and their applications in computer vision, natural language processing and recommendation. FedML is an open research library and benchmark that facilitates the development of new federated learning algorithms and fair performance comparisons, supporting three computational paradigms (distributed training, mobile training and standalone simulation) for users to experiment in different system environments. TFF is mainly used for horizontal federated learning scenarios, especially on Android mobile devices. With TFF, developers are able to train shared global models across multiple participating clients.

    FATE is an open source project initiated by the AI division of WeBank and the world's first industrial-grade federated learning framework, providing a reliable and secure computing framework for the federated learning ecosystem. By the end of 2021, more than 1,000 companies and 200 research institutions had participated in the FATE open source ecosystem, with a large number of mainstream participants, contributors and major community contributors. The FATE project uses multi-party secure computation (MPC) [22] and homomorphic encryption technologies to build an underlying secure computing protocol that supports different types of secure machine learning. The FATE technical architecture is underpinned by TensorFlow/PyTorch (deep learning), EggRoll/Spark (distributed computing framework) and a multi-party federated communication network, with a federated security protocol on top and a library of federated learning algorithms built on top of the security protocol. Around practical scenarios, FATE has built federated blockchain, federated multi-cloud management, a federated model visualisation platform, federated modelling pipeline scheduling, and federated online inference at the top of the technical architecture.

    2.3 Tweedie Distribution

    Tweedie-type distributions were first introduced in 1984 by Tweedie, a statistician at the University of Liverpool, UK, and were later named by Smyth et al. [23]. In probability theory and statistics, the Tweedie distribution is a family of probability distributions that includes the purely continuous normal, gamma and inverse Gaussian distributions, the purely discrete scaled Poisson distribution, and the class of compound Poisson-gamma distributions that have positive mass at zero but are otherwise continuous. The Tweedie distribution is a special case of the exponential dispersion model and is often used as the distribution for generalized linear models.

    The Tweedie distribution is a special case of an exponential dispersion model (EDM) with a power parameter p, characterized by the following power relationship between the mean and variance of the distribution, where μ and φ are the mean and dispersion parameters, respectively:

    Var(Y) = φμ^p

    The power parameter p determines the subclass of distributions in the family. For example, p = 1 corresponds to the Poisson distribution, p = 2 to the Gamma distribution, p = 3 to the inverse Gaussian distribution, and 1 < p < 2 to the compound Poisson-Gamma distribution, as shown in Table 1.

    Table 1 : Common members and parameters of the Tweedie distribution family

    Explanation of parameters: p is the power parameter, V(μ) is the variance function, κ(θ) is the cumulant function, θ is the canonical parameter, φ is the dispersion, d(y, μ) is the deviance, α(y, φ) is the normalisation constant, S is the support, and Ω and Θ are the respective parameter spaces of the mean and the natural parameters.

    Given that it is a compound distribution, the random variable can be described as:

    X = C_1 + C_2 + ... + C_M,

    where M ~ Poisson(λ) and C_i ~ Gamma(n, ζ), with M independent of the C_i. The probability density function of X is:

    f(x; μ, φ, p) = α(x, φ, p) exp{ [ xμ^(1-p)/(1-p) − μ^(2-p)/(2-p) ] / φ },

    where α(x, φ, p) is a normalisation constant that ensures this is a valid probability density function.
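    The compound Poisson-Gamma representation can be checked numerically. The sketch below (with illustrative parameters only) draws M from a Poisson distribution, sums M Gamma variables, and confirms the point mass at zero, which for this construction equals e^{-λ}, alongside a continuous positive part:

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^{-lam}.
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k - 1

def sample_tweedie_cpg(lam, shape, scale, rng):
    """X = C_1 + ... + C_M with M ~ Poisson(lam) and C_i ~ Gamma(shape, scale)."""
    m = sample_poisson(lam, rng)
    return sum(rng.gammavariate(shape, scale) for _ in range(m))

rng = random.Random(0)
xs = [sample_tweedie_cpg(1.0, 2.0, 0.5, rng) for _ in range(20000)]
zero_frac = sum(x == 0.0 for x in xs) / len(xs)  # point mass at zero ≈ e^{-1} ≈ 0.368
mean = sum(xs) / len(xs)                          # E[X] = lam * shape * scale = 1.0
print(round(zero_frac, 2), round(mean, 2))
```

    The mix of exact zeros and skewed positive amounts is exactly the shape of claim-amount data that motivates the Tweedie model.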

    2.4 Generalized Linear Model

    The generalized linear model (GLM), first proposed by McCulloch [24] and Nelder et al. [25], is one of the most established models for car insurance pricing. It analyses the correlation between multiple rate factors and the explanatory variables with the help of an exponential family distribution through the introduction of a link function. As the GLM is not limited to normal distributions but extends to exponential family distributions, it is better suited to modelling data with special structures, such as skewed and dichotomous data. At the same time, the GLM relaxes the assumptions required by the traditional linear regression model, expanding the range of applications of the model. The model generally consists of three components: the stochastic component, the systematic component and the link function.

    Stochastic component: The probability distribution of the error term or dependent variable Y is known as the stochastic component. The samples of the dependent variable Y, y1, y2, ..., yn, are independent of each other and follow a distribution from the exponential family. These include the zero-truncated Poisson distribution, the normal distribution, the gamma distribution, the inverse Gaussian distribution, etc. The probability density of the exponential family of distributions is:

    f(y; θ, φ) = α(y, φ) exp{ (yθ − κ(θ)) / φ }

    Systematic component: The systematic component is a linear combination of the independent variables. The relationship between the systematic component and the independent variables is assumed to be linear, and it can be expressed as:

    η = β_0 + β_1 x_1 + β_2 x_2 + ... + β_n x_n

    Link function: A function that expresses the relationship between the stochastic component and the systematic component. In traditional linear regression models, the link function is the identity function. In generalized linear models, however, the link function is required to be strictly monotonic and differentiable, and is used to link the mean of the explanatory variable Y to the systematic component, i.e., g(μ) = η.
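    As a concrete (non-federated) illustration of the three components, the following minimal sketch fits a Tweedie GLM with a log link by plain gradient descent on the quasi-deviance; with the log link, the gradient of the loss with respect to the linear predictor is μ^(1-p)(μ − y). The synthetic data and hyperparameters are illustrative stand-ins, not the paper's experimental setup.

```python
import math, random

def tweedie_loss(X, y, w, p=1.8):
    """Tweedie quasi-deviance (up to constants) under a log link, for 1 < p < 2."""
    mu = [math.exp(sum(a * b for a, b in zip(x, w))) for x in X]
    return sum(-t * m ** (1 - p) / (1 - p) + m ** (2 - p) / (2 - p)
               for m, t in zip(mu, y)) / len(y)

def fit_tweedie_glm(X, y, p=1.8, lr=0.1, n_iter=400):
    """Full-batch gradient descent; d loss / d eta_i = mu_i**(1-p) * (mu_i - y_i)."""
    k = len(X[0])
    w = [0.0] * k
    for _ in range(n_iter):
        mu = [math.exp(sum(a * b for a, b in zip(x, w))) for x in X]
        grad = [sum(x[j] * m ** (1 - p) * (m - t)
                    for x, m, t in zip(X, mu, y)) / len(y) for j in range(k)]
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

rng = random.Random(0)
X = [[1.0, rng.gauss(0, 1)] for _ in range(400)]       # intercept + one feature
# Skewed, zero-inflated synthetic "claims": zero w.p. 0.4, else gamma-distributed.
y = [rng.gammavariate(2.0, math.exp(0.5 * x[1])) if rng.random() < 0.6 else 0.0
     for x in X]
w = fit_tweedie_glm(X, y)
print(tweedie_loss(X, y, w) < tweedie_loss(X, y, [0.0, 0.0]))  # → True
```

    The log link keeps μ strictly positive, which is why it is the standard choice for claim amounts.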

    2.5 Homomorphic Encryption

    Homomorphic encryption was first proposed by Rivest et al. [26]. It ensures that the result of algebraic operations on ciphertext equals the encryption of the result of the same algebraic operations on the plaintext. That is, for any valid operation f and plaintext m, the property f(Enc(m)) = Enc(f(m)) holds. This special property allows third parties to perform algebraic operations on ciphertext without any decryption throughout the process. According to the supported operations, homomorphic encryption can be classified into fully homomorphic encryption (FHE) [27,28], leveled fully homomorphic encryption (LFHE) [29], additive homomorphic encryption (AHE) [30] and multiplicative homomorphic encryption (MHE) [31].

    This work is concerned with additive semi-homomorphic encryption; the Paillier encryption algorithm, for example, is a classical additive semi-homomorphic scheme that is used in common federated learning algorithms. During the initialisation phase, the Paillier algorithm generates the key pair <pk, sk>. The public key pk is used for encryption and can be disclosed to the other participants; the private key sk is used for decryption and must not be disclosed. Given integers x, y, the Paillier encryption algorithm supports the following operations:

    · Encryption: Enc(x, pk) → [[x]].

    · Decryption: Dec([[x]], sk) → x.

    · Homomorphic addition: HAdd([[x]], [[y]]) → [[z]], where [[z]] satisfies Dec([[z]], sk) → x + y.

    · Scalar addition: SAdd([[x]], y) → [[z]], where [[z]] satisfies Dec([[z]], sk) → x + y.

    · Scalar multiplication: SMul([[x]], y) → [[z]], where [[z]] satisfies Dec([[z]], sk) → x * y.
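    The five operations can be demonstrated with a textbook Paillier implementation (g = n + 1 variant): homomorphic addition is ciphertext multiplication modulo n², and scalar multiplication is ciphertext exponentiation. The sketch below uses toy 48-bit primes purely for illustration; real deployments use 1024-bit or larger primes and a vetted library.

```python
import math, random

def is_probable_prime(n):
    """Deterministic Miller-Rabin for n < 3.4e15 (covers our toy 48-bit primes)."""
    for sp in (2, 3, 5, 7, 11, 13, 17):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def paillier_keygen(bits=48, seed=42):
    """Textbook Paillier with g = n + 1; toy key size, NOT secure parameters."""
    rng = random.Random(seed)
    def rand_prime():
        while True:
            c = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
            if is_probable_prime(c):
                return c
    p = rand_prime()
    q = rand_prime()
    while q == p:
        q = rand_prime()
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                   # valid because g = n + 1
    return n, (n, lam, mu)                 # pk = n, sk = (n, lam, mu)

def enc(m, n, rng):
    n2 = n * n
    r = rng.randrange(1, n)                # fresh randomness per ciphertext
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c, sk):
    n, lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

def hadd(c1, c2, n):    return c1 * c2 % (n * n)            # Dec → m1 + m2
def sadd(c, k, n, rng): return hadd(c, enc(k, n, rng), n)   # Dec → m + k
def smul(c, k, n):      return pow(c, k, n * n)             # Dec → m * k

rng = random.Random(0)
pk, sk = paillier_keygen()
c5, c7 = enc(5, pk, rng), enc(7, pk, rng)
print(dec(hadd(c5, c7, pk), sk),
      dec(sadd(c5, 10, pk, rng), sk),
      dec(smul(c7, 3, pk), sk))  # → 12 15 21
```

    Note that multiplying two ciphertexts is not supported: this is exactly the additive-only restriction that shapes which model updates can be computed under encryption in Section 3.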

    3 Car Insurance Rate Setting Federated Learning Modelling Scheme

    3.1 General Architecture

    Analysis of the data shows that this modelling task fits vertical federated learning, for which a vertical federated learning system was created between the insurance company (referred to as Company A) and the data company (referred to as Company B), with the system architecture shown in Fig. 2.

    Figure 2 :Vertical federated learning for car insurance rate setting

    The training process for vertical federated learning generally consists of two parts. The first part is cryptographic entity alignment: the data of Company A and Company B are stored in their respective systems and the original data are not exchanged. The system uses an encryption-based user ID alignment technique to ensure that Parties A and B can align common users without exposing their respective original data. During entity alignment, the system does not expose which users belong to only one company. The second part is the cryptographic model training phase: once the shared entities have been identified, the parties use the data of these shared entities to collaboratively train a machine learning model.

    3.2 Tweedie Distribution Generalised Linear Regression Federated Learning Model

    3.2.1SystemInitialisation

    The proposed model consists of two participants, A and B, and one collaborator, C, working together to train the machine learning model, with each participant having a sample size of n. The work consists of the following main components:

    1. Participant A holds a certain number of samples, each with a corresponding feature vector Xai = (xai1, xai2, ..., xain), Xai ∈ DA. Participant B holds a certain number of samples, each with a corresponding feature vector Xbi = (xbi1, xbi2, ..., xbin) and label Ybi, (Xbi, Ybi) ∈ DB. DA and DB have a partially overlapping sample set DC. This scheme assumes that A and B know the overlapping sample IDs in advance; otherwise, the sample IDs can be blinded using the RSA encryption mechanism and the samples then aligned. Assume that the learning rate is η and the regularization parameter is α. Additive homomorphic encryption is denoted by [[·]].

    2. A and B each have their own machine learning model training servers, S1 and S2, which are controlled by A and B respectively and cannot mount a collusion attack. These servers are only responsible for the computation of the machine learning models, such as eigenvalue computation, gradient computation and loss function computation.

    3.2.2CalculatingModelTrainingLossFunction

    Train1(WA, WB, DA, DB, DC) → L:

    According to Table 2, the training objective function can be obtained as:

    Table 2 : Training steps for vertical federated learning:Tweedie regression

    (a) A and B input each sample i into the model to calculate the eigenvalues: uA ← NetA(WA, DA), uB ← NetB(WB, DB). The sample eigenvalue matrices are uA and uB.

    (b) For the loss function of the Tweedie distribution generalized linear regression, according to Eq. (6), we have:

    (c) The servers S1, S2 compute the losses of A and B and use homomorphic encryption to obtain:

    (d) Server S2 receives the parameters from S1 and calculates the overall loss; then we have:

    Convergence is judged on L: if the model converges, training is finished and the relevant parameters WA, WB are output.

    3.2.3CalculatingModelTrainingGradients

    Assuming that the loss function value L does not converge, the corresponding gradient values need to be calculated according to Eq. (7):

    A and B jointly compute, via homomorphic encryption, their respective gradients, update them, and recalculate the loss function.

    The model training procedure can be summarized in four steps, as shown in Table 2.

    Step 1: The coordinator C creates the key pair and sends the public key to both Party A and Party B.

    Step 2: The intermediate results are encrypted and exchanged between Party A and Party B. The intermediate results are then used to help calculate the gradient and loss values.

    Step 3: Parties A and B calculate the encrypted gradients, each adds an additional mask, and both send the encrypted results to Party C.

    Step 4: Party C decrypts the gradient and loss information and sends the results back to Parties A and B. Parties A and B unmask the gradient information and update the model parameters based on it.
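    The four steps above can be illustrated with a plaintext simulation (encryption, masking and the coordinator omitted) of the key identity the protocol relies on: each party can form the gradient of its own weights from a shared residual term, and concatenating the two partial gradients reproduces exactly the centralized gradient on the pooled features. All data, weights and the power value p = 1.8 here are synthetic stand-ins:

```python
import math, random

rng = random.Random(1)
n, p_pow = 50, 1.8
# Vertically split data: A holds 2 features; B holds 1 feature plus the label y.
X_A = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(n)]
X_B = [[rng.gauss(0, 1)] for _ in range(n)]
y = [rng.gammavariate(2.0, 0.5) if rng.random() < 0.6 else 0.0 for _ in range(n)]
w_A, w_B = [0.1, -0.2], [0.3]

dot = lambda xs, w: sum(a * b for a, b in zip(xs, w))

# Step 2 analogue: each party computes its share of the linear predictor locally
# (in the real protocol these shares are exchanged under Paillier encryption).
u_A = [dot(x, w_A) for x in X_A]
u_B = [dot(x, w_B) for x in X_B]
mu = [math.exp(a + b) for a, b in zip(u_A, u_B)]             # log link
resid = [m ** (1 - p_pow) * (m - t) for m, t in zip(mu, y)]  # shared residual term

# Step 3 analogue: each party forms only the gradient of its own weights.
g_A = [sum(x[j] * r for x, r in zip(X_A, resid)) / n for j in range(2)]
g_B = [sum(x[0] * r for x, r in zip(X_B, resid)) / n]

# Centralized (NoFL) baseline on pooled features yields the same gradient.
X = [a + b for a, b in zip(X_A, X_B)]
w = w_A + w_B
mu_c = [math.exp(dot(x, w)) for x in X]
g = [sum(x[j] * m ** (1 - p_pow) * (m - t) for x, m, t in zip(X, mu_c, y)) / n
     for j in range(3)]
print(all(abs(a - b) < 1e-9 for a, b in zip(g_A + g_B, g)))  # → True
```

    Because only u_A, u_B and the residual cross the boundary (encrypted and masked in the real protocol), neither party ever sees the other's features or weights.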

    3.3 Security Analysis

    The training protocol shown in Table 2 does not reveal any information to C, because C is given only the parameters of the masked gradient, and the randomness and confidentiality of the masking matrix are guaranteed. In the above protocol, Party A learns its gradient at each step, but this is not sufficient for A to learn any information from B according to Eq. (9), since the security of the scalar product protocol rests on n equations with more than n unknowns, which cannot be solved [20,21]. Here, it is assumed that the number of samples NA is much larger than the number of features nA. Similarly, B cannot obtain any information from A.

    Proof of protocol security: This work assumes that both parties are semi-honest. If one party is malicious and tricks the system by falsifying its input, e.g., if Party A submits only one non-zero input with a single non-zero feature, it can determine the value of uB for that sample feature but has no way of knowing xB or WB; moreover, this bias will distort the results of the next iteration, alerting the other party, which will terminate the learning process. At the end of training, each party (A or B) has no knowledge of the other party's data structure and is only given the model parameters associated with its own features. During inference, both parties need to collaborate to compute the predictions through the steps shown in Table 3, which still does not lead to information leakage.

    Table 3 : Evaluation steps for vertical federated learning:Tweedie regression

    4 Experiment

    In this section, we experimentally evaluate the convergence value of our solution for different values of the power parameter and the time overhead for datasets of different sizes. We also compare the evaluation results of our solution with those of the stand-alone solution.

    4.1 Experimental Environments

    The experiments are executed in a LAN environment based on the FATE vertical federated learning framework, running on an AMD Ryzen 7 5800H 3.20 GHz CPU with 8 cores and 16 threads and 32 GB DDR4 RAM, in a 64-bit CentOS 7.3 environment with FATE version 1.8. The Tweedie regression model was trained using the Python language and the NumPy library.

    4.2 Experimental Datasets

    We evaluated the performance of the Tweedie regression federated learning model using two datasets from the financial insurance field.

    The freMTPL2freq dataset is a French automobile third-party liability claims dataset containing 677,991 samples of third-party liability insurance policies, each consisting of 10-dimensional attribute features and one label. The attribute features include policyholder characteristics (age, gender, etc.), vehicle characteristics (make, model, etc.), and claim-related information (time, location, etc.).

    The CarData dataset comes from a publicly available set of car insurance policy claims data in de Jong et al. [3]. This dataset provides 65,536 insurance samples from 2004–2005, each consisting of 7-dimensional attribute features and one label. The attribute features include policyholder characteristics (age, gender, etc.), vehicle characteristics (make, model, etc.), and other information related to the insurance policy. The label represents the total amount of claims made by the policyholder during the policy period. The dataset is widely used in machine learning research to develop models that predict the total claim amount from demographic and policy information.

    4.3 Experimental Result

    To verify the effectiveness of the proposed FL-TRM (Tweedie Regression Federated Learning Model) method, experimental comparisons are conducted with three other methods.

    The experimental settings for LocalA-TRM and LocalB-TRM involve training the Tweedie regression model only on the local data of participant A and participant B, respectively. The purpose is to test the effectiveness of the Tweedie regression model under non-federated settings and verify the effectiveness of federated learning. The NoFL-TRM setting involves training the model on the entire dataset after aggregating all the attribute features, which represents the traditional Tweedie regression method. The purpose is to compare its performance with the federated learning framework and evaluate the accuracy loss of the models trained under federated settings.

    The 10-dimensional attribute features of the freMTPL2freq dataset are split between participant A and participant B according to the ratios of 2:8, 3:7, 4:6, and 5:5. The label feature y is assigned to participant A, who serves as the active participant, while participant B serves as the collaborative participant. The FL-TRM model is trained using vertical federated learning with the joint participation of both participants A and B.

    The experiments are conducted with L1 regularization and a penalty factor of α = 0.1, using a batch size of 2000 for batch gradient descent, a learning rate of η = 0.1, and a power value of p = 1.8. The experimental results for different feature partition ratios are shown in Table 4.

    Table 4 : Experimental results under different feature partition ratios

    MAE (Mean Absolute Error) and RMSE (Root Mean Squared Error) are two evaluation metrics for regression models; lower values indicate better performance. Table 4 shows that for LocalA-TRM, as the number of features increases, both MAE and RMSE decrease, indicating an improvement in model performance. Conversely, for LocalB-TRM, as the number of features decreases, both MAE and RMSE increase, indicating a deterioration in model performance. Both are weaker than NoFL-TRM, which learns from all features, demonstrating that the more features used, the better the trained model performs. This also shows that, in the single-participant scenario, model performance improves with the number of features.
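    For reference, the two metrics are computed as follows (the claim amounts and predictions below are made up):

```python
import math

def mae(y, y_hat):
    """Mean Absolute Error: average of |y_i - y_hat_i|."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def rmse(y, y_hat):
    """Root Mean Squared Error: sqrt of the average squared residual."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

y_true = [0.0, 2.0, 1.0, 4.0]   # hypothetical claim amounts
y_pred = [0.5, 1.5, 1.0, 5.0]   # hypothetical model predictions
print(mae(y_true, y_pred))              # → 0.5
print(round(rmse(y_true, y_pred), 4))   # → 0.6124
```

    RMSE penalizes large residuals more heavily than MAE, which matters for heavy-tailed claim data.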

    From Table 4, it can be observed that the difference in the number of features between the participating parties affects the performance of FL-TRM. As the difference in the number of features between the two parties decreases, the performance of FL-TRM improves. However, when the feature partition ratio is 2:8, the performance of FL-TRM is worse than that of LocalB-TRM. This is because LocalB-TRM is trained by a single party holding 80% of the features, which makes it easier to find features that are beneficial for improving model performance.

    In general, models trained on more data tend to perform better than models trained on less data. However, the contribution of participants' models to evaluation results depends not only on the amount of data they have but also on many other factors, such as data quality, model and hyperparameter selection, and how well their data represents the overall sample.

    In the 2:8 setting, FL-TRM failed to learn effectively due to the extremely unbalanced feature partition ratio. This experiment also suggests that the difference in the number of features between the participating parties in federated learning should not be too large.

    On the CarData dataset, we conducted experiments with two participating parties. The feature split ratio was 4:3: for each sample, 4 of the 7 attributes were allocated to participating Party A as the collaborator, while the remaining 3 attributes and the label y were allocated to Party B as the active party. The FL-TRM model was trained through vertical federated learning with the joint participation of Parties A and B. The experimental results are shown in Table 5.

    Table 5 : Experimental results on the CarData dataset

    Based on Table 5, it can be seen that the performance of FL-TRM on the CarData dataset is better than that of LocalA-TRM and LocalB-TRM, indicating that the model obtained through federated learning is better than the models trained by a single party.

    In addition to evaluating the model using MAE and RMSE, the risk coefficient Ri = e^(Intercept + ui) is calculated for each sample vehicle based on the model parameters. The risk coefficients of all samples in the CarData dataset are then cut at the 0%, 10%, 20%, 30%, 45%, 65%, 80%, 85%, 90%, 95% and 100% quantiles to generate risk scores of "1 to 10". Finally, the mean sample size and payout rates under each score were counted, as shown in Table 6.

    Table 6 : Sample size and payout ratio means at different scores
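    A hypothetical sketch of the bucketing step follows, assuming a nearest-rank percentile rule (the paper does not specify the exact quantile method); `risk` stands in for the coefficients R_i, here simulated as lognormal values since R_i = e^(Intercept + ui):

```python
import random

def risk_scores(risk, cuts=(10, 20, 30, 45, 65, 80, 85, 90, 95)):
    """Assign a 1-10 score by the inter-quantile band each R_i falls into."""
    s = sorted(risk)
    def pct(q):  # nearest-rank percentile of the sample itself
        return s[min(len(s) - 1, int(q / 100 * len(s)))]
    edges = [pct(q) for q in cuts]
    return [1 + sum(r > e for e in edges) for r in risk]

rng = random.Random(3)
risk = [rng.lognormvariate(0, 1) for _ in range(1000)]  # stand-in for R_i
scores = risk_scores(risk)
print(min(scores), max(scores))  # → 1 10
```

    Note the bands are deliberately unequal in width (e.g., 45%-65% spans 20 points), concentrating resolution in the tails where premiums differ most.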

    Fig. 3 compares the grouped sample sizes obtained after risk assessment of the data samples under the NoFL-TRM and FL-TRM models. The differences are very small and the distribution patterns are consistent: the number of samples is highest for a risk score of 6, lowest for a score of 1, and second highest for scores of 5 and 7. Likewise, averaging the sample payout rates under each score, as shown in Fig. 4, the average payout rate is highest for a risk score of 10 and lowest for a risk score of 1 in both the stand-alone and federated learning environments, which is consistent with actual payout data from the insurers.

    Figure 3: Sample size at different scores

    Figure 4: Sample payout means at different scores

    Fig. 5 shows the relationship between the loss value and the number of iteration rounds during training of the FL-TRM model. Under the aforementioned hyperparameter settings, the proposed parameter update method for the federated Tweedie regression model stably updates the parameters in the direction of gradient descent, so the loss function decreases steadily. The model converges after approximately 200 iterations.

    Figure 5: Relationship between rounds and losses
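For intuition, the gradient descent behavior described above can be sketched in a centralized (non-federated) form using the Tweedie quasi-log-likelihood with a log link. The power p = 1.5, the learning rate, and the synthetic data are assumptions; in the paper's scheme the same gradient is assembled from encrypted per-party contributions rather than from pooled data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic design matrix and a zero-inflated non-negative target that
# mimics claim costs; p = 1.5 is a typical Tweedie power (1 < p < 2).
n, d, p = 500, 5, 1.5
X = rng.normal(size=(n, d))
y = rng.poisson(0.3, size=n) * rng.gamma(2.0, 1.0, size=n)

def tweedie_loss(w):
    """Negative Tweedie quasi-log-likelihood with log link (up to a constant)."""
    mu = np.exp(X @ w)
    return np.mean(-y * mu**(1 - p) / (1 - p) + mu**(2 - p) / (2 - p))

def tweedie_grad(w):
    mu = np.exp(X @ w)
    return X.T @ (mu**(2 - p) - y * mu**(1 - p)) / n

# Plain gradient descent on the pooled data; the loss decreases steadily
# toward convergence, matching the behavior reported for FL-TRM.
w = np.zeros(d)
losses = [tweedie_loss(w)]
for _ in range(200):
    w -= 0.1 * tweedie_grad(w)
    losses.append(tweedie_loss(w))
```

Plotting `losses` against the iteration index reproduces the qualitative shape of Fig. 5: a steady decline that flattens out near convergence.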

    Fig. 6 shows how the convergence time of the scheme varies with dataset size. With the feature dimension held constant, the time overhead grows linearly and steadily as the dataset size increases, indicating good performance stability. The federated learning model takes longer to train than traditional Tweedie regression. This overhead stems from the complexity of the federated learning algorithm itself and from the network transmission required in the distributed environment, especially the encryption and decryption operations of the homomorphic encryption algorithm.

    Figure 6: Time overhead at different data sizes
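The homomorphic-encryption cost mentioned above comes from operations like the following. This is a textbook Paillier sketch with toy, insecure parameters (not the paper's exact protocol), showing how encrypted per-party partial gradients can be summed without decryption; the modular exponentiations over n^2 are what dominate the runtime.

```python
import math
import random

# Toy Paillier keypair (demo primes only; real deployments use ~2048-bit keys).
p, q = 104723, 104729
n = p * q
n2, g = n * n, n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts, so an
# aggregator can sum the parties' fixed-point-scaled gradients while encrypted.
grad_a, grad_b = 1234, 5678  # hypothetical scaled partial gradients
agg = (encrypt(grad_a) * encrypt(grad_b)) % n2
print(decrypt(agg))  # 6912
```

Because every ciphertext operation involves big-integer exponentiation, this cost scales with the number of encrypted parameters exchanged per round, which explains the linear growth in Fig. 6.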

    5 Conclusion

    In this work, we propose a federated learning-based Tweedie regression algorithm for constructing a joint assessment model for multi-party auto insurance rate setting across data silos. The algorithm derives the log-likelihood of the vertical federated Tweedie regression model using an iterative method, constructs a gradient-based parameter update strategy from the loss function, and introduces a homomorphic encryption algorithm to fuse the parameter updates from all parties, yielding the federated Tweedie regression model. Experiments on two datasets demonstrate that federated learning can train a model on all parties' datasets while protecting data privacy. Furthermore, the model testing results show that the federated learning model performs better than the single-party trained models. On the auto insurance dataset whose label follows a Tweedie distribution, the proposed model achieves good results in setting auto insurance rates. Future work will extend the scheme to the analysis of data with correlation structure and improve the accuracy and validity of data analysis by introducing random effects into the GLM.

    Acknowledgement: I would like to express my heartfelt gratitude to all those who have contributed to the successful completion of this research work. First and foremost, I am deeply grateful to my supervisor, Professor Changgen Peng, whose guidance, support, and encouragement throughout the research process have been invaluable. Second, I would like to thank Professor Weijie Tan, whose expertise and insightful feedback have significantly improved the quality of this paper. I am also thankful to the members of my research committee at the State Key Laboratory of Public Big Data for their valuable suggestions and constructive criticism, which helped shape the direction of this study. I extend my appreciation to my colleagues and friends for their continuous support and for being a source of motivation during challenging times.

    Funding Statement: This research was funded by the National Natural Science Foundation of China (No. 62272124), the National Key Research and Development Program of China (No. 2022YFB2701401), the Guizhou Province Science and Technology Plan Project (Grant No. Qiankehe Platform Talent [2020]5017), the Research Project of Guizhou University for Talent Introduction (No. [2020]61), the Cultivation Project of Guizhou University (No. [2019]56), and the Open Fund of the Key Laboratory of Advanced Manufacturing Technology, Ministry of Education (GZUAMT2021KF[01]).

    Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Tao Yin, Changgen Peng; data collection: Tao Yin, Hanlin Tang; analysis and interpretation of results: Tao Yin, Weijie Tan; draft manuscript preparation: Dequan Xu. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The data and materials used in this study are available upon request. Researchers and interested parties can obtain access to the datasets and any supplementary materials by contacting the corresponding author at cgpeng@gzu.edu.cn. We are committed to promoting open science and transparency, and we will do our best to provide the necessary information to facilitate reproducibility and further research. Please note that certain datasets or materials might be subject to restrictions due to confidentiality or copyright considerations. In such cases, we will strive to provide relevant information or point to publicly available resources that align with the research findings. We encourage the scientific community to engage in collaboration and the exchange of ideas. If you have any inquiries or wish to access the data and materials for non-commercial research purposes, kindly reach out to us, and we will be glad to assist you.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
