
    Slicing-Based Enhanced Method for Privacy-Preserving in Publishing Big Data

    2022-08-24 07:02:16
    Computers, Materials & Continua, 2022, Issue 8

    Mohammed BinJubier, Mohd Arfian Ismail, Abdulghani Ali Ahmed and Ali Safaa Sadiq

    1 Faculty of Computing, Universiti Malaysia Pahang, Kuantan, Pahang, Malaysia

    2 School of Computer Science and Informatics, De Montfort University, Leicester, LE1 9BH, United Kingdom

    3 School of Engineering, Computing and Mathematical Sciences, University of Wolverhampton, Wulfruna Street, Wolverhampton, WV1 1LY, United Kingdom

    Abstract: Publishing big data and making it accessible to researchers is important for knowledge building, as it helps in applying highly efficient methods to plan, conduct, and assess scientific research. However, publishing and processing big data poses a privacy concern related to protecting individuals' sensitive information while maintaining the usability of the published data. Several anonymization methods, such as slicing and merging, have been designed as solutions to the privacy concerns for publishing big data. However, the major drawback of merging and slicing is the random permutation procedure, which does not always guarantee complete protection against attribute or membership disclosure. Moreover, merging procedures may generate many fake tuples, leading to a loss of data utility and subsequent erroneous knowledge extraction. This study therefore proposes a slicing-based enhanced method for privacy-preserving big data publishing while maintaining the data utility. In particular, the proposed method distributes the data into horizontal and vertical partitions. The lower and upper protection levels are then used to identify the unique and identical attributes' values. The unique and identical attributes are swapped to ensure the published big data is protected from disclosure risks. The outcome of the experiments demonstrates that the proposed method could maintain data utility and provide stronger privacy preservation.

    Keywords: Big data; big data privacy preservation; anonymization; data publishing

    1 Introduction

    The vast influence of emerging computing techniques has encouraged the generation of large data volumes in the past few years, leading to the trending concept known as "big data" [1,2]. Data publishing assists many research institutions in running big data analytic operations to reveal the embedded information and provides several opportunities with unprecedented benefits in many fields [3]. This process helps organizations improve their efficiency and future plans [1,4-6]. Analyzing big data and extracting new knowledge while protecting sensitive information is now considered one of the imperative needs [7]. Moreover, much attention has been paid to potential data privacy violations and data misuse; hence, the proper protection of released data must be ensured, because failure may lead to harmful situations impacting individuals and organizations [4]. Many establishments, such as educational institutes and healthcare centers, need to publish data in different formats to extract new knowledge [7].

    Data publication is the easiest method of data sharing; it helps research entities run data mining operations on published databases to extract knowledge from the published data. Such knowledge can represent, interpret, or discover interesting patterns [7,8]. However, the potential of publishing partial data derived from big data or a series of datasets is yet to be fully realized. Scholars face several problems during the knowledge extraction process from published data. One such challenge is the issue of data privacy, which can lead to the disclosure of individuals' identities. This issue threatens the secure propagation of private data over the web and has been a reason to limit the availability of large datasets to researchers [9]. One of the most common and widely used practices for providing privacy to individuals is the anonymization of data prior to its publication. Data anonymization aims to reduce the risk of disclosing individuals' information while preserving the possible utilization of the published data [10]. However, this approach still holds two main open questions: 1) Can anonymized data be effectively used for data mining operations? 2) What protection is needed to prevent private information disclosure while preserving data utility? [11].

    Two popular models have been proposed for data publication [12]: (1) Multiple publications from the same data publisher. Multiple data publications refer to a series of datasets at distinct timestamps that are all extensions in certain aspects (e.g., quarterly released data) [8,13]. When the datasets come from the same publisher, this implies that the publisher knows all the original data. (2) A single publication model from several data publishers. Several privacy approaches exist [14] for preserving data privacy. However, the majority of these approaches focus mainly on a single publication [12,15,16], where the publisher anonymizes the dataset without considering other datasets that have been published.

    In both models, there are two fundamental methods for releasing the published data. The first is an interactive setting, in which the data collector computes some function on the big data to answer the queries posed by the data analyzer. The second is the non-interactive setting, in which the big data is sanitized and then published [17]. It is worth noting that in our study, we consider the scenario of a single publication model in the non-interactive setting, where the big data are sanitized and independently published by many organizations (data collectors) that share several common individual records. The issue with this assumption is that in several cases, the information of an individual may be published by more than one organization [18], and an attacker may launch a composition attack [12,19] on the published data to violate their privacy.

    The attributes that more than one organization may publish and that can be used to create links, such as sex, age, and zip code, are called quasi-identifiers (QIs). A composition attack is a situation where an intruder tries to identify an individual by linking several available attributes (QIs) in the published data to an external database to exploit sensitive information [12,20-22]. Therefore, anonymization can only be achieved by altering these attributes to conceal the linkage between the individual and specific values, so as to avoid such attacks and preserve the possible utilization of the published data [12]. The common method of sanitizing a database while maintaining data utility is data anonymization, which is defined in [11] as a set of methods to reduce the risk of disclosing information on individuals, businesses, or other organizations. Most of the existing anonymization-based methods work by setting protection methods, such as perturbing [22,23], suppressing or generalizing variable values [13], or preserving privacy based on measures of correlation [12,24]. The main aim of these methods is to create some uncertainty in inferring an identity or a sensitive value [11]. In addition, these protection methods aim to weaken the linkage between the QI values and the sensitive attribute (SA) values such that an individual cannot be identified with his/her sensitive values.

    The single publication model has several correlated attributes rather than a single-column distribution, which can be exploited to achieve exceptional new knowledge results [3]. Suppressing or generalizing methods rearrange the data distributions to execute mining for privacy preservation, which involves analyzing each dimension separately and overlooking the correlations among various attributes (dimensions) [25]. Preserving privacy based on the perturbation method alters the original values of a dataset in its anonymized version, which leads to data utility problems depending on the amount and type of noise or on the specific properties of the data that are not preserved [7].

    A clever approach to resolving these problems is to measure correlation to improve protection and enrich data utility. The association is measured by a correlation coefficient, denoted by r, which plays a major role in data science techniques that measure the strength of association between variables; hence, the choice of a particular similarity measure can be a major cause of success or failure in some classification and clustering algorithms [26].

    The Pearson Correlation Coefficient (PCC) and the Mean Square Contingency Coefficient (MSCC) are the two most commonly used measures for identifying association [24,27,28]. PCC is used to determine the strength of a linear relationship between two continuous variables. The value of the coefficient r ranges over [-1, +1] [27]. When the value of r is -1 or +1, a perfect linear relationship exists between the considered variables. However, if the value is 0, no linear relationship exists between the pair of variables. The MSCC is a chi-square-based measure of the correlation between two categorical attributes. Unlike PCC, chi-square measures the extent of the significance of the relationship instead of measuring its strength.
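As an illustration of the PCC, the coefficient can be computed directly with NumPy; the attribute values below are invented toy data, not taken from the paper's datasets:

```python
import numpy as np

# Toy continuous attributes (hypothetical values, for illustration only)
age = np.array([23, 25, 31, 40, 52, 60], dtype=float)
income = np.array([20, 24, 35, 45, 58, 70], dtype=float)

# Pearson correlation coefficient r, always in [-1, +1]
r = np.corrcoef(age, income)[0, 1]
```

Values of r near +1 or -1 indicate a near-perfect linear relationship, while values near 0 indicate no linear relationship between the pair of variables.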

    The idea behind the measure of correlation is to keep data utility by grouping highly correlated attributes together in columns and preserving the correlations between these attributes. The correlation measure also protects privacy, as it breaks the associations between uncorrelated attributes in the other columns via anonymization approaches such as random permutation and generalization [12,24].

    In this study, ideas are pooled from [12,24] to propose an effective method of determining the level of data protection needed and knowing the optimal manner of achieving this protection level while preserving data utility. Both are achieved by using slicing in the anonymization approach for data publishing, using vertical partitioning (attribute grouping) and horizontal partitioning (tuple partition). The lower protection level (LPL) and the upper protection level (UPL) are used to overcome the unique attributes and the presence of identical data for data privacy protection while preserving data utility. LPL overcomes the unique attribute values, whereas UPL overcomes the highly identical attribute values. LPL and UPL define the level of protection around the attribute values and ensure that an attacker cannot obtain the sensitive information needed to identify the record owner within such an interval. This work also relies on value swapping to ensure a lower risk of attribute disclosure and l-diverse slicing. The proposed approach ensures that the published big data is protected from disclosure risks. The outcomes of the experiments show that the UL method could keep more data utility and provide stronger privacy preservation.

    This paper's major contribution is the proposed upper- and lower-level-based protection method (UL method) for data anonymization. The UL method better balances privacy, information loss, and utility. That is, the level of protection required and the optimal manner of achieving it are determined while preserving data utility using the lower and upper protection levels. This work also relies on rank swapping to guarantee a lower risk of attribute disclosure, achieve aggregate query support and l-diversity inside the table, and solve the problem of creating invalid tuples.

    The rest of this paper is organized as follows: Section 2 reviews the related work. Section 3 presents the UL method in detail. Section 4 discusses the experimental analysis. Finally, Section 5 concludes the paper and highlights the key findings.

    2 Related Work

    The most favourable approaches for preserving privacy based on the suppressing or generalizing method and on anonymization of the data include the k-anonymity approach [29], the l-diversity approach [30], and the t-closeness approach [15]. These approaches were proposed for privacy preservation in one-time data publishing. They take personal data and anonymise it, making it unattributable to any specific source or person by breaking the relations amongst the attribute values. High dimensionality renders these approaches ineffective because the identities of the primary record holders can be unmasked by merging the data with either public (composition attack) or background information [12,31]. Readers can refer to [5,7,32-34] for a more comprehensive understanding of these approaches.

    In the last decade, the probabilistic approach [35], the ε-differential privacy approach (ε-DP) [36], the hybrid approach [31], and composition [37], all preserving privacy based on the perturbation method, were proposed for multiple independent data publishing. Composition is the first privacy model to prevent composition attacks in multiple independent data publishing [12]. The approach proposed in [37] integrates two novel concepts: (ρ,α)-anonymization by sampling, and composition-based generalization for independent datasets to protect against composition attacks. The approach proposed in [31] combines sampling, generalisation, and perturbation by adding Laplacian noise to the count of every sensitive value in each equivalence class. The probabilistic approach suggests a new method called (d,α)-linkable. It tries to limit the likelihood of an adversary completing a composition attack by ensuring that the d sensitive values are associated with a quasi-identifying group with a probability of α, by exploring the correlation between the QI attributes and the SAs.

    Mohammed [36] proposed the first noninteractive approach, called ε-DP, based on the generalization method. The proposed solution produces a generalized contingency table and adds noise to the counts. ε-DP provides a strong privacy guarantee for statistical query answering and protection against composition attacks through differential-privacy-based data anonymization. However, [12,19,31,38,39] showed that using ε-DP to protect against composition attacks generates substantial data utility losses during anonymization.

    The most recent correlation-measure-based methods are slicing [24] and merging [12]. Slicing, considered a novel data anonymization approach, has received substantial attention for privacy-preserving data publishing. The authors presented a risk disclosure prevention concept that is devoid of generalization. Random slicing permutates the values of attributes in the bucket to annul the column-wise relationships. This method protects the privacy of the published records from attribute and membership disclosure risks. In addition, slicing is recommended for high-dimensional data anonymization because it keeps more data utility than generalization of attribute values. Therefore, slicing ensures data privacy and preserves data utility because the attribute values are not generalized. It uses vertical partitioning (attribute grouping) and horizontal partitioning (tuple partition), and its sliced table should be randomly permutated [24] (see Tab. 1).

    However, slicing can cause data utility and privacy-preserving problems: as slicing randomly permutates attribute values in each bucket, it creates invalid tuples that negatively affect the utility of the published microdata. The invalid tuples may easily result in several errors and incorrect results in subsequent processing. An attacker can rely on the analysis of the fake tuples in the published table to capture the concept of the deployed anonymization mechanism, gaining the chance to violate the privacy of the published data [5,7,40].

    For instance, in Tab. 1, tuple t1 has just one matching equivalence class that is linked with two sensitive values for zip code 130350. Here, any person may be linked with sensitive values with a probability of not more than 1/l via l-diverse slicing, because slicing has been shown to satisfy l-diverse slicing by being linked with the sensitive values by 1/2. If the QI attribute, namely the zip code, is revealed because it has highly identical attribute values (sufficient variety), and an adversary relies on background knowledge and knows (23, M), then the adversary can determine the SA of the individual. Moreover, if the slicing algorithm switches the sensitive value (randomly) between t1 and t2, then an incompatibility is created between the SA and the QI attribute values, as mentioned in [40].

    Hasan et al. [12] designed the merging approach to protect personal identity from disclosure. It is considered an extension of the slicing approach. The primary aim of the merging approach is privacy preservation in multiple independent data publications via cell generalization and random attribute value permutation to break the linkage between different columns. In terms of data utility and privacy risks, the merging approach preserves data utility with minor risks because it increases the false matches in the published datasets. However, the major drawback of merging is the random permutation procedure applied to attribute values to break the association between columns. Besides increasing the false matches for unique attributes in the published datasets, these procedures may generate a small fraction of fake tuples but result in many matching buckets (more than the original tuples). This eventually leads to a loss of data utility and can produce erroneous or infeasible extraction of knowledge through data mining operations [41,42]. Therefore, the primary reason for revealing people's identity is the existence of unique attributes in the table, or allowing several attributes in a row to match the attributes in other rows, leading to the possibility of accurately extracting the attributes of a person [7,12,24].

    Other studies [8,24] proved the importance of allowing a tuple to match multiple buckets to ensure protection against attribute and membership disclosure. This finding implies that mapping the records of an individual to more than one equivalence class results in the formation of a super equivalence class from the set of equivalence classes.

    In this study, the proposed UL method preserves the privacy of the published data while maintaining its utility. The UL method uses the upper level to overcome the highly identical values in every equivalence class, and it uses the lower level to overcome the unique attributes found in every equivalence class. It also uses swapping to break the linkage between the unique attributes and the attributes with highly identical values, to improve the diversity in our work and increase personal privacy. It is worth mentioning that attributes are generalised when they cannot be swapped, to avoid the associated issues. The primary goal of swapping or generalizing the attribute values is to obtain anonymized data.

    3 The Proposed Method

    This section presents the UL method, which is used as an enhanced protection method for data publishing while maintaining data utility. The proposed method reduces the risk of a composition attack when multiple organizations independently release anonymized data. The primary goal of this work is to achieve a specified level of privacy with minimum information loss for the intended data mining operations. The proposed UL method comprises four main stages, as illustrated in Fig. 1. The following four subsections describe these stages.

    Figure 1:General block diagram of the UL method

    3.1 Dataset Initialisation Stage

    A standard machine learning dataset known as the "Adult" dataset was used for the experiments. This dataset was assembled by Ronny Kohavi and Barry Becker and drawn from the 1994 United States Census Bureau data [43]. The dataset comprises 48,842 tuples with fifteen QI attribute values.

    3.2 Attribute Grouping Stage

    The utilized table T has attributes a_i, where i = 1, 2, ..., n. The highly correlated attributes are clustered into columns, and the uncorrelated attributes are placed in the other columns, such that each attribute a_i belongs to one subset. The columns {col_1, col_2, ..., col_n} contain all the attributes a_i. The grouping of the related attributes is based on the inter-attribute relationship measurement, which is ideal for privacy and utility. Regarding data utility, the grouping of highly correlated attributes ensures the preservation of their inter-attribute relationships. In terms of privacy, however, the identification risk is relatively higher for the association of uncorrelated attributes compared with the association of highly correlated attributes: because uncorrelated attribute values are associated less frequently, they are more identifiable. For privacy protection, breaking the linkages between the uncorrelated attributes is better [24]. The appropriate measure of association for this situation is MSCC because most of the attributes are categorical. Assume attribute a_1 with value domain {v_11, v_12, ..., v_1d1}, attribute a_2 with value domain {v_21, v_22, ..., v_2d2}, and their domain sizes d_1 and d_2, respectively. The MSCC between a_1 and a_2 is defined as follows:

    r²(a_1, a_2) = (1 / (min(d_1, d_2) - 1)) Σ_{i=1..d1} Σ_{j=1..d2} (f_ij - f_i. f_.j)² / (f_i. f_.j)

    where r²(a_1, a_2) is the MSCC between attributes a_1 and a_2; f_i. and f_.j are the fractions of occurrence of v_1i and v_2j in the data, respectively; and f_ij is the fraction of co-occurrence of v_1i and v_2j in the data. Therefore, f_i. and f_.j are the marginal totals of f_ij: f_i. = Σ_{j=1..d2} f_ij and f_.j = Σ_{i=1..d1} f_ij.
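A minimal sketch of this computation, assuming the co-occurrence counts of the two categorical attributes have already been tabulated into a contingency matrix (the example counts below are invented):

```python
import numpy as np

def mscc(counts):
    """Mean square contingency coefficient between two categorical
    attributes, computed from their co-occurrence (contingency) table."""
    f = counts / counts.sum()          # joint fractions f_ij
    fi = f.sum(axis=1, keepdims=True)  # marginal totals f_i.
    fj = f.sum(axis=0, keepdims=True)  # marginal totals f_.j
    phi2 = ((f - fi * fj) ** 2 / (fi * fj)).sum()
    d1, d2 = counts.shape
    return phi2 / (min(d1, d2) - 1)    # normalised to [0, 1]

# Perfectly associated attributes -> coefficient 1
perfect = np.array([[10, 0], [0, 10]])
# Independent attributes -> coefficient 0
independent = np.array([[5, 5], [5, 5]])
```

A coefficient near 1 indicates strongly associated attributes (candidates for the same column group), while a coefficient near 0 indicates independence.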

    3.3 Table Partition(Vertical and Horizontal)Stage

    Table 1: Published data by slicing

    Table 2: Example of partitions in table T

    Table 3: Five changes of swap rates for LPL to calculate the number of cells and tuples in each change

    Given the computation of the correlation (r) for each pair of attributes, the dataset is vertically and horizontally partitioned in the table. For the vertical partitions, the k-medoid clustering algorithm, also known as the partitioning around medoids algorithm [44], is used to arrange the related attributes into columns such that each attribute belongs to one column. This algorithm treats each attribute as a point in the cluster space, and the inter-attribute distance in the cluster space is given as d(a_1, a_2) = 1 - r²(a_1, a_2), which ranges from 0 to 1. When two attributes are strongly correlated, the distance between the related data points in the clustering space is smaller.
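As a sketch of this step (a toy exhaustive k-medoids, not the PAM implementation of [44]; the 4x4 matrix of 1 - r² distances is invented), correlated attributes end up assigned to the same medoid, i.e., the same column group:

```python
import itertools

def kmedoids_columns(dist, k):
    """Exhaustive k-medoids over a small attribute-distance matrix,
    where dist[i][j] = 1 - r^2 between attributes i and j, in [0, 1].
    Returns the medoid set with minimal total distance and the
    assignment of each attribute to its nearest medoid."""
    n = len(dist)
    best_cost, best_medoids = float("inf"), None
    for medoids in itertools.combinations(range(n), k):
        cost = sum(min(dist[i][m] for m in medoids) for i in range(n))
        if cost < best_cost:
            best_cost, best_medoids = cost, medoids
    assign = [min(best_medoids, key=lambda m: dist[i][m]) for i in range(n)]
    return best_medoids, assign

# Toy distances for four attributes: {0,1} correlated, {2,3} correlated
dist = [[0.0, 0.1, 0.9, 0.8],
        [0.1, 0.0, 0.85, 0.9],
        [0.9, 0.85, 0.0, 0.05],
        [0.8, 0.9, 0.05, 0.0]]
medoids, assign = kmedoids_columns(dist, 2)
```

The exhaustive search is only viable for a handful of attributes; a real implementation would use the iterative PAM swap heuristic instead.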

    Tab. 2 shows three partitions for the columns: (1) T* contains all columns with highly correlated attributes col*, where col* = {col_1*, col_2*, ..., col_i*}, and col* ∈ T*. (2) T** contains all columns with uncorrelated attributes col**, where col** = {col_1**, col_2**, ..., col_i**}, and col** ∈ T**. (3) T^c contains the columns with the SA col^c when a single SA exists, and the SA is placed in the last position for easy representation, where col^c ∈ T^c and (T* ∪ T** ∪ T^c) = T, (i = 1, 2, ..., n). If the data contain multiple SAs, one can either consider them separately or consider their joint distribution [45].

    In the horizontal partition, all tuples that contain identical values are grouped into buckets or equivalence classes. Each individual is linked with one distinct sensitive value, such that an attacker cannot access the person's sensitive values with a probability of over 1/l. The Mondrian [46] algorithm is used to group the tuples.
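As a simplified stand-in for this grouping step (not the Mondrian algorithm of [46]; the tuples and attribute layout are invented), the sketch below forms equivalence classes over identical QI values and keeps only those that are l-diverse:

```python
from collections import defaultdict

def ldiverse_buckets(tuples, qi_idx, sa_idx, l=2):
    """Group tuples that share the same QI values into equivalence
    classes, keeping only buckets whose sensitive attribute takes at
    least l distinct values, so that an attacker can link a person to
    a sensitive value with probability at most 1/l."""
    buckets = defaultdict(list)
    for t in tuples:
        key = tuple(t[i] for i in qi_idx)
        buckets[key].append(t)
    return {k: v for k, v in buckets.items()
            if len({t[sa_idx] for t in v}) >= l}

# Toy tuples: (age band, zip code, disease)
data = [("20-30", "13000", "flu"),
        ("20-30", "13000", "asthma"),
        ("30-40", "14000", "flu"),
        ("30-40", "14000", "flu")]
safe = ldiverse_buckets(data, qi_idx=(0, 1), sa_idx=2, l=2)
```

Buckets that fail the diversity check (here, the second group, whose only sensitive value is "flu") would need further splitting or merging before publication.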

    3.4 Protection Stage

    This stage explains the data protection method proposed in this study using the UL method. It is an opportunity to improve the protection level and resolve privacy issues while preserving data utility, via two steps:

    3.4.1 Creation of Protection Levels

    The key parameters used to improve the protection level in slicing include LPL and UPL. This study uses LPL and UPL to overcome the unique attributes and the attributes with highly identical values in every equivalence class. The two protection levels define the protection interval around the unique attribute values and the highly identical values that fall within this interval in T**, such that the attacker would find it difficult to deduce sensitive information and impossible to identify the record owner within such an interval. The lower level overcomes the unique attribute values, whereas the upper level overcomes the highly identical values, for the individual's privacy protection. Suppose the cells have high values of the correlation coefficient (r). In that case, the cells are probably in the same equivalence class, and by linking these cells with other cells in T*, the adversary gains high confidence about the SA, leading to a privacy breach. The rest of the cells are protected from attribute disclosure and membership disclosure because of their presence in more than one equivalence class. The proposed privacy goal further requires the range of the rest of the cell groups to be larger than a certain threshold (containing diversity that is at least ≥ 2 in each equivalence class; see Algorithm 1). The upper and lower protection levels (UPL and LPL) aim to find the set of unique cell values and highly identical cell values from T**, which are presumed known to any attacker:

    The attributes that fall within this interval, and which will be swapped, are called the swapping attributes, and the numbers of cells that fall within this interval are counted for each level. Values that have been initially marked to be swapped define the swap rate, denoted by Φ. Typically, Φ is of the order of 1%-10%; thus, the fraction of attributes swapped will be less than one.
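A minimal sketch of marking cells by the two protection levels, assuming hypothetical thresholds phi_l near 0 and phi_u near 1 (the cell names and r values below are invented):

```python
def classify_cells(cell_r, phi_l=0.1, phi_u=0.9):
    """Split cells into three groups by their correlation coefficient r:
    LPL candidates (unique values, r in (0, phi_l]), UPL candidates
    (highly identical values, r in [phi_u, 1)), and the rest, which stay
    untouched because they appear in multiple equivalence classes."""
    lpl, upl, rest = [], [], []
    for cell, r in cell_r.items():
        if 0.0 < r <= phi_l:
            lpl.append(cell)   # unique attribute values -> swap/generalise
        elif phi_u <= r < 1.0:
            upl.append(cell)   # highly identical values -> swap/generalise
        else:
            rest.append(cell)  # protected by multi-bucket membership
    return lpl, upl, rest

# Invented r values for four cells
cells = {"c1": 0.05, "c2": 0.95, "c3": 0.50, "c4": 0.02}
lpl, upl, rest = classify_cells(cells)
```

Only the LPL and UPL groups are passed on to the swapping or generalisation step; the remaining cells already satisfy the diversity requirement.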

    Definition 1 (Cell): A cell is a pair of attributes, such as (age, gender), where any cell C_col,E is identified by the column number Col_i and the equivalence class number E_j. For example, in Tab. 1, any cell in the column {(Age, Gender)} is identified by Col_i and E_j, where 1 ≤ i ≤ col and 1 ≤ j ≤ E, and the first equivalence class consists of the tuples t = {t1, t2, t3, t4}.

    Definition 2 (Matching Buckets): Let col** be the columns, where col** = {col_1**, col_2**, ..., col_n**}, and col** ∈ T**. Let t** be a tuple, and t**[col_i**] be the col_i** value of t**. Let E** be an equivalence class in the table T**, and E**[col_i**] be the multiset of col_i** values in the equivalence class E**. E** is a matching bucket of t** iff for all 1 ≤ i ≤ col**, t**[col_i**] ∈ E**[col_i**].

    Definition 3 (Lower and Upper Protection Level): LPL and UPL are correlation coefficient (r) values for each cell C_col,E in T**, where LPL, UPL ∈ r.

    Algorithm 1: Creation of Protection Level Attributes
    Input: Table T
    Output: A set of attributes a_i** whose values of r fall in the upper cell set C̄_col,E and the lower cell set C_col,E
    1. Attribute grouping (Stage 2)
    2. Table partition (Stage 3)
    3. for each equivalence class E in T** do
    4.   C̄_col,E: correlation coefficient (r) for attributes in Φ ≤ a_i** < 1.0
    5.   C_col,E: correlation coefficient (r) for attributes in 0.0 < a_i** ≤ Φ
    6.   the rest of the cells: correlation coefficient (r) for attributes in C_col,E < a_i** < C̄_col,E
    7.   Swap or generalise the attributes a_i** in C̄_col,E (Algorithm 2)
    8.   Swap or generalise the attributes a_i** in C_col,E (Algorithm 2)
    9.   Ensure the l-diversity of all equivalence classes to satisfy the privacy requirement as in [24].

    Given the computation of the correlation (r) for each pair of attributes, the attribute a_i** values are grouped into three groups: 1) The upper cell set contains all attribute values whose correlation coefficient (r) falls within the interval Φ ≤ a_i** < 1.0 (see Line 4). 2) The lower cell set contains all unique attribute values whose correlation coefficient (r) falls within the interval 0.0 < a_i** ≤ Φ (see Line 5). 3) The remaining cells hold an association value distant from both sets and fall between them (see Line 6). These cells are characterized by their probable presence in multiple equivalence classes, which prevents attribute disclosure. Line 9 is a check of the l-diversity privacy requirement as in slicing [24]. Moreover, these cells must contain diversity that is at least two (diversity ≥ 2) and be distributed in each equivalence class.

    3.4.2 Swapping or Generalisation of Attributes

    Swapping or generalization of attributes is the anonymization stage. Randomly permutated values in an equivalence class may not be protected from attribute or membership disclosure, because the permutation of these values increases the risk of attribute disclosure rather than ensuring privacy [40]. Therefore, the proposed algorithm in this study ensures the privacy requirement in each equivalence class. Rank swapping is used to break the linkage between the unique attributes and the cells with highly identical values, to improve the diversity in slicing and increase personal privacy. Attribute swapping alters the tuple data with unique attribute values or highly identical values by switching the values of attributes across pairs of records in a fraction of the original data. When attributes cannot be swapped, they have to be generalized. The primary goal of swapping or generalizing the attribute values is to obtain the anonymized table T, which would not have any nonsensical combinations in the records (invalid tuples) and would satisfy l-diverse slicing (see Algorithm 2).

    Algorithm 2: Swapping or Generalisation of Attributes
    Input: Table T
    Output: The anonymized table T*
    1. Check whether the swapped attributes are in the same rank group.
    2. Check that the tuple does not have any nonsensical combination.
    3. Swap the attribute values to satisfy k-anonymity.
    4. else
    5. Generalize the attribute values to satisfy k-anonymity.

    To ensure the integrity of attribute swapping, the values of an attribute a_i** are ranked in groups; for example, Level 0 in Fig. 2 has two groups: {Federal-gov, Local-gov, State-gov} and {Self-emp-inc, Self-emp-not-inc}. In line 3, the value is swapped between two attributes if the two attributes are in the same rank group and have no nonsensical combinations. If the two attributes are in different groups, or if the records have any nonsensical combination, the attribute values are generalized to satisfy k-anonymity (see line 5). Because the whole equivalence class is not generalized during attribute generalization, this provides an opportunity to improve data utility compared with full-table or column generalization, and it improves the utility of the published dataset. In addition, attribute swapping or generalization provides greater information veracity for decision-making. Veracity is the reliability of data and represents the meaningfulness of depending on such data for data mining operations [12,40].
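A hedged sketch of the rank-swapping check described above, using the two Level 0 rank groups from Fig. 2 (the record layout, pair selection, and helper names are assumptions of this sketch):

```python
def rank_swap(records, attr, rank_groups, pairs):
    """Swap the value of `attr` between each record pair only when both
    values belong to the same rank group; otherwise mark the pair for
    generalisation instead (Algorithm 2, lines 1-5)."""
    group_of = {v: g for g, vals in enumerate(rank_groups) for v in vals}
    swapped, to_generalise = [], []
    for i, j in pairs:
        vi, vj = records[i][attr], records[j][attr]
        if vi in group_of and group_of[vi] == group_of.get(vj):
            records[i][attr], records[j][attr] = vj, vi
            swapped.append((i, j))
        else:
            to_generalise.append((i, j))
    return swapped, to_generalise

# Work-class rank groups at Level 0 of Fig. 2
groups = [["Federal-gov", "Local-gov", "State-gov"],
          ["Self-emp-inc", "Self-emp-not-inc"]]
recs = [{"wc": "Federal-gov"}, {"wc": "State-gov"},
        {"wc": "Self-emp-inc"}, {"wc": "Local-gov"}]
swapped, to_generalise = rank_swap(recs, "wc", groups, [(0, 1), (2, 3)])
```

The first pair is swapped (both values are government work classes), while the second pair crosses rank groups and is handed to the generalisation step.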

    Figure 2:Example of domain (left) and value (right) generalization hierarchies for the work-class attributes

    Definition 4 (Attribute Generalization): Let T** be part of table T, and a_i** be a QI attribute set in T**. Generalisation replaces the QI attribute values with their generalised versions. Let d_i** and d_j** be two domains. If the values of d_j** are the generalisation of the values in domain d_i**, we denote d_i** < d_j** (a many-to-one value generalisation approach). Generalisation is based on a domain generalisation hierarchy, defined as a set of domains that is totally ordered by the relationship d_i** < d_j** (see Fig. 2).

    Fig. 2 (left) shows a domain generalization hierarchy for the work-class (WC) attributes. No generalization is applied at the bottom of the domain generalization hierarchy. However, the WC becomes increasingly more general at the higher hierarchy levels. The maximal domain level element is a singleton, which signifies the possibility of generalizing the values in each domain to a single value.
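A minimal sketch of walking a value up such a hierarchy; the intermediate labels "Government", "Self-employed", and the root "Any" are invented here, since Fig. 2's labels are not reproduced in the text:

```python
# Child -> parent links of a toy work-class generalisation hierarchy.
# The intermediate labels are assumptions, not taken from Fig. 2.
PARENT = {
    "Federal-gov": "Government", "Local-gov": "Government",
    "State-gov": "Government",
    "Self-emp-inc": "Self-employed", "Self-emp-not-inc": "Self-employed",
    "Government": "Any", "Self-employed": "Any",
}

def generalise(value, levels=1):
    """Replace a value with its ancestor `levels` steps up the
    hierarchy; the singleton root generalises to itself."""
    for _ in range(levels):
        value = PARENT.get(value, value)
    return value
```

Climbing zero, one, or two levels corresponds to publishing the value at the matching level of the domain generalisation hierarchy.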

    4 Experiment and Implementation

    The Adult dataset, a real dataset, was used [43]. The experiment was implemented in the Python language. To perform the experiments, independent datasets were needed to simulate an actual independent data publishing scenario. Five disjoint datasets of different sizes were pooled from the Adult dataset and extracted into two independent datasets, called the Education and Occupation datasets, with eight QI attribute values: age (continuous, 74), marital status (categorical, 7), sex (categorical, 2), work class (categorical, 8), salary (categorical, 2), relationship (categorical, 6), education (categorical, 16), and occupation (categorical, 14). The values in parentheses show the type of attribute and the number of distinct values for each attribute.

    Each dataset has 4K tuples that were randomly selected. The remaining 8K tuples were used to generate the overlapping tuple pool and to check for composition attacks. Five copies were made of each dataset by inserting 100, 200, 300, 400, and 500 tuples from the remaining tuple pool into the Education and Occupation datasets, generating datasets of sizes 4.1K, 4.2K, 4.3K, 4.4K, and 4.5K (where K = 1000) for the Education and Occupation datasets, respectively.

    The experiments on real datasets were presented in two parts. The first part measured the desired level of protection. In the second part, the proposed method was tested for effectiveness against composition attacks, and its effectiveness in preserving data utility and privacy was evaluated against other existing works. The experimental results showed that the proposed method provided privacy protection against the considered attacks while maintaining a good level of data utility.

    4.1 Measuring Protection Level

    The desired level of protection was determined by identifying the unique attributes and grouping the identical data (matching attribute values) into tables. As mentioned earlier, the correlation coefficient (r) plays a significant role in determining the strength of the relationship between attributes. The LPL identifies all cells with unique attribute values, whose values fall within the range 0.0 < LPL ≤ Φ; the value of r for unique attributes is always close to 0 but never equals 0. The UPL identifies all cells with many matching attribute values, which fall within Φ ≤ UPL < 1.0; the value of r for these matching attributes is always close to 1 but never equals 1. The remaining cells, whose association values are distant from both 1 and 0, are characterized by the probability of multiple equivalence classes, which prevents attribute disclosure. Moreover, these cells must contain at least two distinct sensitive values (diversity ≥ 2), distributed across each equivalence class.

    The attributes that fall within these intervals (LPL and UPL) and will be swapped are called swapping attributes, whereas the proportion of values marked for swapping, which serves as a measure of privacy, is called the swap rate and is denoted by Φ. The decision-maker must specify Φ based on the disclosure risk and data utility, guided by the measures of the strength of the relationship between attributes.
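The classification of cells by r and Φ described above can be sketched as follows; the threshold values and function name are hypothetical, chosen only to illustrate the LPL and UPL intervals:

```python
# Hedged sketch: classify a cell by the strength of association r of its
# attribute values, given decision-maker-chosen swap rates for LPL/UPL.
# Default thresholds (5% / 95%) are illustrative assumptions.
def classify_cell(r, phi_lpl=0.05, phi_upl=0.95):
    """Return which protection interval the cell falls into.

    LPL (0 < r <= phi_lpl): near-unique values, marked for swapping.
    UPL (phi_upl <= r < 1): highly matching values, marked for swapping.
    Otherwise the cell already offers diversity (>= 2 sensitive values
    per equivalence class) and is kept as-is.
    """
    if 0.0 < abs(r) <= phi_lpl:
        return "LPL"
    if phi_upl <= abs(r) < 1.0:
        return "UPL"
    return "keep"
```

A higher `phi_lpl` or lower `phi_upl` marks more cells for swapping, trading data utility for privacy, mirroring the decision-maker's choice of Φ.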

    Using the experiment datasets partitioned according to Tab. 2 and based on the Education dataset, five swap rates were applied to the partitions T* to find the number of cells and tuples in each LPL and UPL. Tab. 3 tabulates the number of cells in tuples that contain the swapping attributes; cells with unique or near-unique attribute values are potentially riskier than other elements. Tab. 4 lists the number of cells in tuples with matching attribute values and no variety. Cells with matching or near-matching attribute values are riskier than other elements because almost all such tuples fall into the same equivalence class, giving the adversary more confidence about the SAs by linking these attributes with highly correlated attributes or other datasets.

    [Tab. 4 (layout lost in extraction): number of cells and number of tuples per cell in the range Φ ≤ UPL < 1, for Φ ∈ {0.85, 0.90, 0.95, 0.98, 0.99, 1.0}, across dataset sizes 4.1K–4.5K]

    The strength of the association between attributes was used because the strength and variety of the data were known. The LPL and UPL were then used to find the specific attributes to swap, instead of a random approach, to break the correlations between the attribute values. This method provided more variety of data in each equivalence class. A higher swap rate (Φ) in Tab. 3 or a lower swap rate in Tab. 4 means higher privacy but decreased data utility.

    4.2 Comparison Evaluation

    Among the many data publishing models, the single-publication model, a non-interactive form of data publishing, was used in the experimental analysis, and the experiment was carried out in non-interactive privacy settings. Most work in differential privacy [47], however, assumes interactive settings: a user accesses the dataset through numerical queries, and the anonymization technique adds noise to the query answers. This setting is not always favorable because datasets are usually published publicly. As a result, the non-interactive setting, highlighted in [36], was chosen for the experiment on differential privacy.

    This section assesses the proposed work by measuring its efficiency against the hybrid [31], merging [12], e-DP [36], probabilistic [35], Mondrian [46], and composition [37] approaches in non-interactive privacy settings. The quasi-identifier equivalence class was given as k-anonymity [16] by the merging, probabilistic, e-DP, hybrid, composition, and Mondrian approaches. To create an equivalence class, k = 6 was chosen, and l-diversity was also set to 6; the main purpose of l-diversity is to preserve privacy by expanding the diversity of sensitive values. For differential privacy, Laplacian noise is added to the count of sensitive values in each equivalence class [35], with e = 0.3 as the e-differential privacy budget. The comparison rests on two factors, data utility and disclosure risk, which are discussed in the subsections below.
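The Laplace mechanism used by the differential-privacy baseline can be sketched as below. This is a generic illustration of adding Laplace noise with budget e = 0.3 to a sensitive-value count, not the compared implementation:

```python
import random

def laplace_noise(epsilon, sensitivity=1.0):
    """Laplace(0, b) noise with scale b = sensitivity / epsilon,
    sampled as the difference of two i.i.d. exponentials with mean b."""
    b = sensitivity / epsilon
    return random.expovariate(1.0 / b) - random.expovariate(1.0 / b)

def noisy_count(true_count, epsilon=0.3):
    # e-differentially private count of sensitive values in a class;
    # counting queries have sensitivity 1.
    return true_count + laplace_noise(epsilon)
```

A smaller epsilon widens the noise scale b, which strengthens privacy at the cost of query accuracy, the trade-off discussed in the utility comparison below.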

    4.2.1 Data Utility Comparison

    Privacy preservation is an essential issue in publishing table T; hence, data utility must also be considered. Data quality is measured using the distortion ratio (DR). The DR in published data can be measured with several methods [13] that quantify the effect of anonymization on the overall data distortion for data mining; the generalized distortion ratio (GDR) is one appropriate measure for calculating it [42]. The swap-and-generalize method is used to break the association of the attributes because most of the attributes are categorical. For any two categorical attributes (a_1, a_2 ∈ T*), where t is their taxonomy tree and a node p in t is used to swap or generalize the attributes, the DR with p is defined as follows:

    where |N| denotes the set of all the leaf nodes in t, and |Common(a_1, a_2)| is the set of leaf nodes in the subtree rooted at the lowest common ancestor of a_1 and a_2 in t.

    Fig. 2 denotes the taxonomy of the WC attributes. If the values of a_i and a_j are in the same rank group and have no nonsensical combinations, their swapped values are equivalent and the DR is 0. Conversely, if the values of a_i and a_j are not in the same rank group or have a nonsensical combination, they are generalized to the node p, their lowest common ancestor, and the DR is equal to |Common(a_i, a_j)| / |N|, where d_{j,k} denotes the distortion of attribute a_j of tuple t_k.

    The DR is proportional to the distortion of the generalized dataset over the distortion of the fully generalized dataset. Data utility can then be estimated by subtracting the DR, as expressed in Eq. (3) [13]:
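Under the assumption that Eq. (3) takes data utility as 1 − DR, the case analysis above can be sketched with the two-level work-class taxonomy of Fig. 2 (subtree and leaf names are illustrative):

```python
# Assumed two-level taxonomy t for the work-class attribute: two rank
# groups under a root "*". In this tree the LCA of a cross-group pair
# is the root, so Common(a1, a2) = N for generalized pairs.
SUBTREES = {
    "Government": {"Federal-gov", "Local-gov", "State-gov"},
    "Self-employed": {"Self-emp-inc", "Self-emp-not-inc"},
}
ALL_LEAVES = set().union(*SUBTREES.values())  # |N| = 5

def common_leaves(a1, a2):
    """Leaf set under the lowest common ancestor of a1 and a2 in t."""
    for leaves in SUBTREES.values():
        if a1 in leaves and a2 in leaves:
            return leaves
    return ALL_LEAVES  # LCA is the root

def distortion(a1, a2):
    # Same rank group: the values are swapped, so DR = 0.
    for leaves in SUBTREES.values():
        if a1 in leaves and a2 in leaves:
            return 0.0
    # Otherwise generalize to the LCA p: DR = |Common(a1, a2)| / |N|.
    return len(common_leaves(a1, a2)) / len(ALL_LEAVES)

def utility(a1, a2):
    # Assumed reading of Eq. (3): data utility = 1 - DR.
    return 1.0 - distortion(a1, a2)
```

With this flat taxonomy, a cross-group pair is generalized all the way to the root, so its DR is 1 and its utility is 0; deeper taxonomies yield intermediate values.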

    Figs. 3 and 4 show the results of the data utility experiments, measured via data loss on the Education dataset. In Fig. 3, the proposed work uses a swap rate (Φ) of 2% for the LPL and 98% for the UPL; in Fig. 4, it uses 5% for the LPL and 95% for the UPL. Decision-makers must select the swap rate that delivers the required protection level by examining how the number of cells changes with the swap rate (Φ) (see Tabs. 3 and 4). Increasing the swap rate (Φ) in the LPL or decreasing it in the UPL enhances privacy but lowers data utility. Comparison with the hybrid [31], merging [12], e-DP [36], probabilistic [35], Mondrian [46], and composition [37] approaches revealed that the data utility obtained by the UL method is higher than that of all these works. The merging approach, in contrast, adds N fake tuples with the same QI values as in the original table and assigns them sensitive values based on the sensitive-value distribution in the initial dataset; the proposed approach therefore incurs less data loss than merging. The UL method employs selective generalization within a cell only when essential to satisfy the privacy requirements; hence, more data utility is preserved.

    Figure 3: Data utility on the Education dataset (swap rate (Φ) of 2% using LPL and 98% using UPL)

    4.2.2 Measuring Risks

    A composition attack is a situation in which an intruder tries to identify an individual in table T by linking several available records in the microdata to an external database to exploit sensitive information, especially when the intruder has substantial background knowledge about the relationship between the QI and SAs [48]. Measuring disclosure risk is therefore essentially measuring the rareness of a cell in data publishing. This section discusses the methods employed for assessing disclosure risk in table T under a composition attack.

    Figure 4: Data utility on the Education dataset (swap rate (Φ) of 5% using LPL and 95% using UPL)

    Data publishers should strive to measure the disclosure risk of anonymization outputs to ensure privacy preservation; this step is key to defining the level of protection needed. Differentiating the disclosure risk measures is also important, because the quantity must not depend on how the data representation method is selected. Disclosure risk can be measured as the proportion of genuine matches to total matches, as expressed in Eq. (4).
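A minimal sketch of this measure, assuming Eq. (4) is the ratio of genuine matches to total matches (the function name is hypothetical):

```python
# Hedged sketch of the disclosure-risk measure: the share of genuine
# matches among all matches an adversary obtains by linking records.
def disclosure_risk(genuine_matches, total_matches):
    """Proportion of genuine matches to total matches (Eq. (4), assumed).

    Returns 0.0 when the adversary obtains no matches at all.
    """
    if total_matches == 0:
        return 0.0
    return genuine_matches / total_matches
```

For instance, if linking yields 12 candidate matches of which 3 are genuine, the risk is 0.25; a lower ratio means the adversary's matches are mostly spurious.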

    The experimental results for the Education datasets are shown in Fig. 5, and those for the Occupation datasets in Fig. 6. The results report the disclosure risk ratio (DRR), which defines the confidence level of an adversary and can be used to infer the sensitive values in the Education and Occupation datasets. Among the approaches, the e-DP approach [36] provided the lowest privacy risks for composition. The solution in [36] probabilistically generates a generalized contingency table and then adds noise to the counts; however, it reduces data utility, as discussed in Section 4.2.1 (Data Utility Comparison) and shown in Figs. 3 and 4.

    Figure 5: Privacy risk for the Education dataset (k = 6, l = 6)

    Figure 6: Privacy risk for the Occupation dataset (k = 6, l = 6)

    In addition, the hybrid [31] approach yielded a lower probability of inferring the user's private information than the probabilistic [35], composition [37], Mondrian [46], and merging [12] approaches. The merging approach reduced the probability of a composition attack on the published datasets compared with the probabilistic [35], composition [37], and Mondrian [46] approaches. The proposed work successfully reduced the probability of a composition attack on the published datasets by overcoming the unique attribute values and highly identical attribute values using the UPL and LPL, and by providing multiple matching cells in each equivalence class, which led to protection against identity disclosure.

    Fig. 7 summarizes the experimental results for the disclosure risk ratio (DRR) for the LPL and UPL when Φ = {(1%, 99%), (2%, 98%), (5%, 95%), (10%, 90%), (15%, 85%)}. As Fig. 7 and Tabs. 3 and 4 illustrate, increasing the swap rate (Φ) in the LPL or decreasing it in the UPL yields higher privacy but decreased data utility. In this study, the disclosure risk ratio under a composition attack was decreased by overcoming the unique attribute values and highly identical attribute values using the UPL and LPL, and by providing multiple matching cells, which confer protection from identity disclosure. Intuitively, a cell is at risk of disclosure if it can be singled out from the rest [49].

    4.2.3 Aggregate Query Error

    An aggregate query is a mathematical computation over a set of values that results in a single value expressing a property of the data; here, it is used to estimate data utility in the published datasets. Aggregate query operators such as 'COUNT', 'MAX', and 'AVERAGE' provide key numbers representing the estimated data utility and verify the effectiveness of the proposed work [50]. In the experiment, only the 'COUNT' operator was tested, and the query took the following form:

    SELECT COUNT(*)

    FROM Unknown-Table T

    WHERE v_i1 ∈ V_i1 AND ... AND v_id ∈ V_id AND s ∈ V_s

    where v_ij (1 ≤ j ≤ d) is the QI value for attribute a_ij, v_ij ∈ d_ij, and d_ij is the domain of attribute a_ij; s is the SA value, s ∈ d_s, and d_s is the domain of the SA. Predicate dimension d and query selectivity sel are two characteristics of a query predicate: d indicates the number of QI attributes in the predicate, and sel indicates the number of values in each V_ij, 1 ≤ j ≤ d. The size of V_ij, 1 ≤ j ≤ d, was chosen at random from the set {0, 1, ..., |d_ij|}. Each query was run on the original table as well as on the tables generated by the proposed work and the other existing works. The original and anonymized tables each yielded a count, denoted orgcount and anzcount respectively, where anzcount is computed for the proposed work and for each existing work. All queries were evaluated using the equation in [50] to determine the average relative error in the anonymized dataset:
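Assuming the relative error of a single query is |orgcount − anzcount| / orgcount and the reported figure averages this over all queries, the computation can be sketched as:

```python
# Hedged sketch of the average relative error for COUNT queries: each
# query runs on the original and anonymized tables, and the per-query
# error |org_count - anz_count| / org_count is averaged over all queries.
def relative_error(org_count, anz_count):
    """Relative error of one COUNT query (org_count assumed non-zero)."""
    return abs(org_count - anz_count) / org_count

def avg_relative_error(count_pairs):
    """Average relative error over (org_count, anz_count) pairs."""
    return sum(relative_error(o, a) for o, a in count_pairs) / len(count_pairs)
```

For example, counts of (100, 90) and (50, 60) give per-query errors of 0.1 and 0.2, averaging to 0.15; a lower average indicates better-preserved utility.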

    Figure 7: Experimental result for DRR for LPL and UPL when Φ = {(1%, 99%), (2%, 98%), (5%, 95%), (10%, 90%), (15%, 85%)}

    Based on the QI selection, the relative query error is plotted on the y-axis in Fig. 8. For the Mondrian, hybrid, e-DP, probabilistic, and composition approaches, k was set to 6; l-diversity was set to 6 for merging and the proposed work, with LPL = 5% and UPL = 95%. The relative query error was calculated on the anonymized tables created by the proposed work and the other existing works, with one, two, three, four, or five attributes chosen as the QI. Furthermore, for the 4.5K Occupation dataset, all possible variations of the query were created and executed across the anonymized tables. Fig. 8 depicts the relative query error, with the y-axis denoting the relative percentage error and the x-axis denoting different QI choices. The experimental results show that the swapping approach (the proposed work) consistently outperforms generalization in answering aggregate queries, whereas the competing approaches show a higher relative query error on anonymized datasets. The proposed work has only a slight relative error compared with all other approaches, because attributes are generalized only when they cannot be swapped.

    Figure 8: Aggregate query error

    5 Conclusions

    This study began by investigating problems of the slicing and merging approaches related to the random permutation of attribute values, which is used to break the association between different columns of a table. The UL method is therefore proposed against composition attacks: it confers protection by finding unique attribute values and highly identical attribute values and swapping them to decrease the attribute disclosure risk and to ensure l-diversity in the published table. The key idea is selecting a group of cells so as to enhance published data privacy while maintaining good data utility. The experimental results show that the UL method improves data utility and provides stronger privacy preservation. In terms of data utility, the UL method achieves approximately 92.47% data utility, higher than existing works, when the swap rate is 2% using the LPL and 98% using the UPL with an Education dataset size of 4.5K, and 92.19% when the swap rate is 5% using the LPL and 95% using the UPL with the same dataset size. Moreover, the UL method potentially reduces disclosure risk compared with other existing works. The achieved performance helps researchers, decision-makers, and technology experts benefit from published big data for extracting knowledge in many fields, such as education and healthcare. In the future, the proposed work could be extended in several promising directions, such as speeding up the UL method using parallel techniques. Moreover, since the effectiveness of the UL method has been tested only against composition attacks and on the Adult dataset, it is important to test its performance against different attacks and with different types of datasets.

    Funding Statement:This work was supported by Postgraduate Research Grants Scheme(PGRS)with Grant No.PGRS190360.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
