
    Deep image retrieval using artificial neural network interpolation and indexing based on similarity measurement

    2022-05-28 15:17:16

    Faiyaz Ahmad

    Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India

    Abstract  In content-based image retrieval (CBIR), primitive image signatures are critical because they represent an image's visual characteristics. Image signatures, which are algorithmically descriptive and accurately recognized visual components, are used to index images appropriately and retrieve comparable results. To differentiate an image within a category of qualifying candidates, feature vectors must carry image information such as colour, objects, shape and spatial viewpoints. Previous methods such as sketch-based image retrieval by salient contour (SBIR) and greedy learning of deep Boltzmann machines (GDBM) used spatial information to distinguish between image categories. This requires interest points, and feature analysis also gave rise to image detection problems. Thus, a model that overcomes these issues and predicts the repeating patterns and series of pixels that establish similarity became necessary. In this study, a technique called CBIR-similarity measure via artificial neural network interpolation (CBIR-SMANN) is presented. After collecting the dataset, the images are resized and subjected to Gaussian filtering in the pre-processing stage; interest points are then gathered by passing the images to the Hessian detector. Features based on skewness, mean, kurtosis and standard deviation are extracted and given to an ANN for interpolation. Interpolated results are stored in a database for retrieval. In the testing stage, the query image is input, subjected to pre-processing and feature extraction, and then fed to the similarity measurement function. Thus, the ANN helps retrieve similar images from the database. CBIR-SMANN was implemented in Python and evaluated for its performance. Results show that CBIR-SMANN exhibited a high recall of 78% with a minimum retrieval time of 980 ms, demonstrating the superiority of the proposed model over previous ones.

    KEYWORDS Gaussian filtering, Hessian detector, image retrieval, interpolation and similarity measurement, repeating pattern

    1|INTRODUCTION

    The demand for digital media keeps increasing because of its numerous applications. Further enhancements in digital image processing (DIP) are necessary for effective image searching over the massive volumes held in databases. In general, images are retrieved by three techniques: content-based image retrieval, text-tagged retrieval and semantic-based retrieval [1]. Indexing images matters because the growing demand for digital images requires specific data representations for retrieval, which in turn has promoted research in this field. This is why image retrieval and fetching play an effective and direct role when an image is searched across a wide range of databases. An important approach is content-based image retrieval (CBIR), which matches primitive image features depending on their visual properties. In a CBIR system, feature extraction from the image is treated as a pre-processing stage [2, 3]. Visual features fall into two categories: overall characteristics are termed global features, while the visual properties of image regions are termed local features. Most CBIR systems use local and global features consisting of texture information, edges, spatial coordinates and shapes, and also connect interest points in order to retrieve similar images. Texture features represent neighbourhood relationships, combining categorized spatial and pixel values into spatial as well as spectral features. Shape features are classified into contour-based and region-based; region-based methods apply colour features, with shape points extracted from the complete area of interest. Contour-based processes are sensitive to noise, since the shape-dependent anchors they rely on are taken from the edges and corners of an image [4, 5].

    However, colour histogram representations are scale- and rotation-invariant. A primary issue with global features is that they do not represent the spatial distribution of the colour channels, so they cannot minimize the semantic gap. Global features also do not capture all characteristics of an image, which is why they are not applicable for partial image matching in a retrieval system. Local features, in contrast, reduce the semantic gap. To overcome the disadvantages of global features, an interest point detector (IPD) is used to represent an image by its local features [6, 7]. Algorithms such as the scale-invariant algorithm, the affine-invariant algorithm, and the Harris and Hessian detectors depend on interest points. By recognizing objects both locally and globally, the combined representations contribute to a high-level description of image content. Retaining image quality across resolutions is important, since images are frequently printed or displayed at various resolutions on different output devices; thus, images cannot be prepared at a single resolution optimal for all networks, display environments and printers. Because digital pictures are sampled on a 2D lattice, jaggies are unavoidable. This issue cannot be overlooked when increasing the quality of an enlarged image, since edges have a significant impact on overall image quality [8].

    The sampling function is adjusted in this approach using the neighbouring pixel values. Image retrieval procedures are divided into two methods: content-based image retrieval (CBIR) and annotation-based image retrieval (ABIR). In ABIR, the image is represented as a feature vector and then categorized; finally, the semantics of the associated category are propagated to the supplied image [9, 10]. As a result, ABIR finds photos based on the keywords in their annotations. ABIR has various disadvantages despite its ability to give high retrieval performance; for example, when the content of a picture is highly abstract, it can be difficult to describe it using only a few terms. In CBIR, pictures are retrieved by their visual content: both the query and database images are converted into vectors of descriptors such as texture, colour and form, and their similarity is then compared. Finally, the retrieval result is an ordered collection of pictures sorted by similarity value. By not relying on keywords, CBIR can sidestep many of the issues that plague ABIR [11]. As a result, researchers have become increasingly interested in CBIR [12, 13]. A novel image retrieval model is proposed here to address the issues remaining in these existing models. The proposed model uses shape features and low-level features.

    1.1|Contribution of the paper

    • This system captures and checks image data consisting of object, colour, spatial information, texture and shape, which actively improves recovery rates and maximizes precision.

    • A description model with weightless feature detection is initiated that effectively retrieves appropriate outcomes from cluttered and complicated datasets.

    • A method is initiated to implement semantic variation with similarity measurement and colour mapping to highlight objects.

    • The provided technique exposes only the important information of the image from the anchor translation instead of iterating over the complete image.

    • A storage-, time- and computation-efficient retrieval system is initiated, which retrieves the output in seconds.

    The proposed model comprises all the above contributions. The study is arranged in the following manner: Section 2 reviews the existing work briefly, Section 3 explains the proposed methodology, Section 4 details the experimental outcomes, and Section 5 concludes the study.

    2|LITERATURE REVIEW

    In this section, some image retrieval methods used in previous studies are discussed, along with their drawbacks.

    Liu et al. [14] developed a visual attention system for content-related image retrieval. Initially, a visual cue named colour volume, which carries edge-related information, was combined and presented to predict saliency regions rather than relying on major visual features. However, this method has low discrimination values. Liu et al. [15] provided an algorithm called the micro-structure descriptor, which has low dimensionality and high indexing performance; there are only 72 dimensions for full images, giving very efficient retrieval time. However, it requires further extensions to reach lower dimensionality, and the edge detector it relies on was developed for grey-level images rather than for colour images built from colour channels. Zeng et al. [16] provided an image representation method that characterizes an image as a spatiogram, a general colour histogram whose colours are modelled with Gaussian mixture models, initially quantizing the colour space via expectation-maximization over a training set of images. However, the spatiogram itself is not quantized by this Gaussian mixture model. Liu et al. [17] provided an image feature representation called the colour difference histogram (CDH), used in the image retrieval process. This technique perceptually counts the colour difference between two points with various backgrounds, relating colour and edge orientation. However, it placed too much focus on edge orientation and colour, which does not favour feature representation; this was a drawback. Varish et al. [18] proposed a hierarchical method for retrieving images named the two-layer feed-forward architecture (FFD). Every layer narrows the search range by filtering out irrelevant images based on texture and colour features, with weighted texture and colour similarity values measuring the similarity distance. However, it failed to provide flexibility with respect to each user.

    Kumar et al. [19] presented an image saliency technique using a space model that utilized the colour distribution of images rather than serving the majority of visual feature representation values, combining local and global features in a content detection method. However, this comparison of local features requires more parameters to perform the simulation. Uma Maheswaran et al. [20] presented a composite micro-structure descriptor whose working principle is similar to other CBIR systems. By integrating the multi-Texton histogram and the micro-structure descriptor, multiscale features were explored and computed. However, integrating block values with minimum intensity left the image in its transformed state. Srivastava et al. [21] introduced a CBIR technique that uses the multiscale local binary pattern rather than other neighbourhood protocols, combining eight neighbourhoods calculated at multiple scales. It captures a large set of dominant features with certain textures inside a single scale, which creates a limitation of multiscale techniques. Hua et al. [22] introduced a visual descriptor called the colour volume histogram, used in a CBIR system. Colour volume histograms, which have descriptive power for spatial features, shape and texture, significantly outperform the local binary pattern. However, factors like edge cue, colour affiliation and spatial layout consume a lot of computation time. Zhang et al. [23] presented a fine-grained image categorization algorithm that defined a picture as a spatiogram with a generic colour histogram, coloured using Gaussian mixture models, initially using expectation-maximization over a training set of pictures to quantize the colour space. Using this Gaussian mixture model, however, the spatiogram is not quantized.

    Hor et al. [24] presented a combination of local texture information derived from two different texture descriptors for image retrieval. Initially, this method separated the colour channels of the input images. Predefined pattern units and evaluated local binary patterns are the two descriptors used for extracting the texture information. Once the features were extracted, similarity matching was performed based on a distance criterion. Bani et al. [25] introduced an image retrieval method for extracting local and global texture and colour information in both the spatial and frequency domains. In this approach, images are filtered with a Gaussian filter, and statistical features are extracted based on co-occurrence matrices; in this way, noise-resistant local features are obtained. A quantized histogram is then generated to obtain global colour information in the spatial domain. Tuyet et al. [26] introduced content-based medical image retrieval based on salient regions combined with deep learning. This method contains two stages: the first is an offline task to extract local object features, and the second is an online task for content-based image retrieval from the database. In the first stage, local object features based on the shape, texture and intensity of medical images are extracted using deep-learning-based saliency of decomposition. In the second, online stage, the user enters a query image, and the system returns the top-n most comparable images by comparing similarity against the bag-of-codewords feature values obtained in the first stage.

    From the above-stated issues, image retrieval faces major concerns of flexibility and colour edge orientation. Thus, a method that overcomes these drawbacks and performs more appropriate retrieval was necessary.

    3|PROBLEM STATEMENT

    With the development of multimedia databases, the value of a huge amount of material is determined by how well it can be viewed and retrieved, and relevant knowledge extracted from it, while reducing the time spent searching. Multimedia information is critical in digital applications such as entertainment, education, e-commerce, health and aerospace. With the rapid growth of the Internet, consumers now have access to a massive amount of multimedia material. As a result, many digital images are generated in daily life, and when inspected regularly they offer a wealth of helpful information to consumers. The main goal of the CBIR approach is to find images with the highest similarity score by comparing the database against the query image. The CBIR approach uses a simple way of extracting local and global feature sets to make picture representation easier. The retrieval technique used here depends entirely on calculating the distance between the query and database pictures using feature vectors. As a result, the most relevant image, the one closest to the query image, is returned, yielding the best possible pair of images. A fixed number of images is retrieved by the colour-feature similarity measure, and the relevancy of images is determined by the texture and shape features. Furthermore, to obtain more accurate retrievals, region and global features are used. Experiments are performed on RGB-near-infrared databases, which consist of indoor, outdoor and ortho-imagery scene-category recognition tasks. Also, due to a lack of proper retrieval techniques, users are unable to extract the relevant information from the available databases. To overcome the drawbacks explained above, a technique named deep image retrieval using ANN interpolation and indexing based on similarity measurement is introduced.
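    The feature-distance ranking described above can be sketched minimally as follows. The Euclidean metric and the toy image ids and feature vectors are assumptions for illustration; the paper does not fix a specific distance function here.

```python
import math

def euclidean(a, b):
    # Distance between two feature vectors of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_vec, database, top_n=3):
    # database: list of (image_id, feature_vector) pairs.
    # Returns image ids sorted by ascending distance to the query,
    # i.e. the most relevant (closest) images first.
    ranked = sorted(database, key=lambda item: euclidean(query_vec, item[1]))
    return [image_id for image_id, _ in ranked[:top_n]]

db = [("img_a", [0.2, 0.5, 0.1]),
      ("img_b", [0.9, 0.1, 0.4]),
      ("img_c", [0.25, 0.45, 0.15])]
print(retrieve([0.22, 0.48, 0.12], db, top_n=2))  # -> ['img_a', 'img_c']
```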

    4|PROPOSED METHODOLOGY

    In a CBIR system, the first step is to convert the query image to grey level. The proposed CBIR-SMANN model converts the colour image to grey level (black-and-white, or monochrome). Grey-level conversion is important because each pixel then carries intensity information as a shade of grey varying from black to white over the values 0 to 255. To maintain the luminance value while removing hue and saturation, the RGB coefficients are converted to monochrome (grey level). A Gaussian window is used to average and smooth point neighbourhoods. The eigenvalues depict gradient signal changes aligned in orthogonal directions: eigenvalues of the same level indicate a corner, and eigenvalues of opposite levels indicate an edge. Points of interest in the images are spotted by the corner detector in the proposed model. This method detects the important points needed to slice the image into regions, enabling shape formation and texture analysis. The benefit of this technique is that it discovers the repeating pattern and disruptions in the sequence of pixels, which conclusively establish the dissimilarities and similarities. A Hessian matrix calculation is used to detect the interest points, and to find the higher determinant values a blob scheme is embedded. These outcomes give perfect scale selection under image transformation, finer than the Laplacian operator.

    From Figure 1, the conversion of a query image to grey level is the initial stage in every CBIR system. Because each pixel in a greyscale image carries intensity information, the suggested technique transforms the colour image to greyscale. These images, also called monochrome or black-and-white images, contain grey hues (black to white) with values ranging from 0 to 255. The RGB factors are transformed to monochrome (grey level) by maintaining the luminance and removing hue and saturation. The Gaussian window is used to average and smooth point neighbourhoods. The eigenvalues, which are aligned in orthogonal directions, indicate the gradient signal variations: equivalent eigenvalues point to a corner, and opposite levels of eigenvalues exhibit an edge. The corner detector is used in the proposed technique to locate the points of interest in the pictures. This approach is useful because it combines texture analysis with the identification of prominent spots to segment picture areas for possible shape creation. The benefit of employing this method is that it allows repeated patterns and disruptions in a sequence of pixels to be discovered, so that similarities and differences can be determined.
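    The Gaussian-window smoothing step above can be sketched as follows. This is a minimal illustration; the kernel size and sigma are assumed values, not parameters stated in the paper.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    # Normalized 2-D Gaussian window used to average and smooth
    # pixel neighbourhoods.
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def smooth(image, kernel):
    # Convolve a greyscale image with the kernel (valid region only,
    # so the output shrinks by kernel_size - 1 in each dimension).
    ks = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - ks + 1):
        row = []
        for j in range(w - ks + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(ks) for b in range(ks)))
        out.append(row)
    return out
```

Smoothing a constant image leaves it unchanged, which is a quick sanity check that the window is properly normalized.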

    4.1|Query image to grey level image

    At first, CBIR transforms the query image to monochrome (greyscale), as greyscale images carry intensity information in each pixel. These greyscale images, termed monochrome or black-and-white, contain grey shades (black to white) whose values range from 0 to 255. Grey-level statistical feature extraction is one of the popular image texture classification strategies. Texture is observed as a result of the spatial variations of grey level in the image: when small cells are observed, they produce more frequent grey-level changes, inducing a finer texture than large cells. A granulometric texture analysis can be obtained by applying mathematical morphology operations directly to grey-level images. The proposed CBIR method converts RGB factors to monochrome by removing hue and saturation while maintaining the luminance value of the image. Interest points are detected using a corner detector, a method developed to acquire image regions in the texture of salient image attributes. The distribution of gradient values in the neighbourhoods is shown in Equation (1).
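    As a sketch of the grey-level conversion, one common choice of luminance weights is the ITU-R BT.601 set; the paper does not state which coefficients its conversion uses, so these are an assumption.

```python
def rgb_to_grey(r, g, b):
    # Weighted luminance of an RGB pixel (BT.601 weights: 0.299, 0.587,
    # 0.114), rounded to an integer grey level in the 0-255 range.
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# White stays white, black stays black, and green contributes the most.
print(rgb_to_grey(255, 255, 255))  # -> 255
print(rgb_to_grey(0, 128, 0))
```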

    In Equation (1), δI, δD and Dz denote the derivative, integration scale and differentiation scale, respectively. The directions of the derivative are given as q and r, and the Gaussian kernel derivatives are computed using δD. Each image block is divided into four sub-blocks. The average monochrome value of each sub-block of the (i, j)th image block is denoted a0(i, j), a1(i, j), a2(i, j) and a3(i, j). The filter factors for the vertical, horizontal, 45-degree diagonal, 135-degree diagonal and non-directional edges are labelled fv(k), fh(k), fd45(k), fd135(k) and fn(k), respectively, where k = 0, …, 3 is the location of the sub-block. From these, the respective edge magnitudes mv(i, j), mh(i, j), md-45(i, j), md-135(i, j) and mnd(i, j) for the (i, j)th image block can be obtained.

    Then the maximum of these magnitudes is calculated as

    FIGURE 1 Overall architecture of the proposed image retrieval methodology

    and all edge magnitudes m are normalised:

    The respective edge magnitudes mv, mh, md-45, md-135 and mnd are obtained for each image block and mapped to histogram bins: mv to EdgeHisto(2), mh to EdgeHisto(3), md-135 to EdgeHisto(4), md-45 to EdgeHisto(5) and mnd to EdgeHisto(0). The output of the unit that exports the texture information from each image block is given in Equations (2)-(4) as a six-bin histogram, where each bin relates to a class as follows: EdgeHisto(0) non-directional edge, EdgeHisto(1) non-edge, EdgeHisto(2) vertical edge, EdgeHisto(3) horizontal edge, EdgeHisto(4) 135-degree diagonal edge and EdgeHisto(5) 45-degree diagonal edge. The classification of an image block proceeds as follows: first, the model verifies whether the maximum value exceeds the given threshold; this threshold defines whether the image block is sorted as a non-texture block or a texture block.
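    The block classification above can be sketched as follows. The filter coefficients are those commonly used in the MPEG-7 edge histogram descriptor, and the threshold is an assumed value; the paper does not print either, so both are illustrative.

```python
import math

# Filter coefficients applied to the four sub-block averages a0..a3
# (a0 top-left, a1 top-right, a2 bottom-left, a3 bottom-right).
# These are the standard MPEG-7 edge-histogram values; the paper
# names the filters fv, fh, fd45, fd135, fn without listing numbers.
FILTERS = {
    "vertical":        [1, -1, 1, -1],
    "horizontal":      [1, 1, -1, -1],
    "diag_45":         [math.sqrt(2), 0, 0, -math.sqrt(2)],
    "diag_135":        [0, math.sqrt(2), -math.sqrt(2), 0],
    "non_directional": [2, -2, -2, 2],
}

def classify_block(a, threshold=11.0):
    # a = [a0, a1, a2, a3]: average intensities of the four sub-blocks.
    # The strongest filter response wins; if even that response falls
    # below the threshold, the block is declared non-edge.
    mags = {name: abs(sum(f * v for f, v in zip(coef, a)))
            for name, coef in FILTERS.items()}
    name, m = max(mags.items(), key=lambda kv: kv[1])
    return name if m >= threshold else "non_edge"

# Bright left half, dark right half -> a vertical edge.
print(classify_block([200, 40, 200, 40]))  # -> vertical
```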

    4.2|Derivatives computation and smoothing

    A Gaussian window is utilized to smooth and average point neighbourhoods. Eigenvalues depict the gradient signal changes and are determined in orthogonal directions: opposite levels of eigenvalues mark edges, while equal levels indicate corners. Subsequently, the value intensities generate different edges, denoted as edge points. A square block of 20 pixels covers each detected interest point, and the corner detector is used to find the points of interest in the images. The spotted points of interest are used to find the potential regions of interest in the images; texture analysis within the set of important points segments the image divisions for possible shape generation. Differences between corner scores are calculated for the directions presented in this method. This mechanism finds the repeating pattern within the disturbance for a series of pixels and computes the similarity value. To detect interest points, an approximation of the Hessian matrix is used, and the similarity measurement leads to the formation of objects by comparing the colour and brightness properties of the covered regions and by using integral images arranged as boxlets. For prompt calculation, integral images are used with square convolution filters. Taking an input image IG, the aggregate of all pixels is represented as I∑(k) at a point k = (x, y)T within a rectangular region, as given in Equation (5):
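    The integral image of Equation (5) (not reproduced here) can be sketched as a summed-area table; `box_sum` shows why any rectangular pixel sum then costs only four lookups, which is what makes the square box filters fast.

```python
def integral_image(img):
    # ii[y][x] = sum of all pixels in the rectangle from (0, 0) to (x, y).
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0  # running sum of the current row
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    # Sum over the inclusive rectangle (x0, y0)-(x1, y1) using at most
    # four table lookups, independent of the rectangle's size.
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

ii = integral_image([[1, 2], [3, 4]])
print(box_sum(ii, 0, 0, 1, 1))  # -> 10
```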

    The proposed design uses the Hessian matrix to obtain accuracy: for any point k = (a, b), the Hessian matrix H(k, λ) at k for scale λ is given in Equation (6):

    In Equation (6), Gaussian-based convolution takes place with the second derivatives given as Cxx(k, λ), Cxy(k, λ) and Cyy(k, λ). These derivatives are termed Laplacians of Gaussians. A box filter provides the Hessian matrix approximation, which is used to check the computational cost of approximating the Gaussian second-order derivatives. Interest points are needed at different scales, so scale spacing is treated in the form of pyramids. The Gaussian image is repeatedly smoothed and sub-sampled so that the corners reach the maximum level. Gaussian smoothing is used for image enhancement and can be achieved at many different levels. Choosing few samples with respect to the kernel size reduces the computational cost while the final feature vectors remain efficient. This variation of linear filtering over the spatial domain results in the addition of box filters to the image. By averaging each pixel's neighbours and generating sharp-edged data, box filtering with its equal weights of attributes offers a further advantage over conventional patterns: a simple accumulation is produced significantly faster than a sliding-window algorithm, so Gaussian smoothing becomes faster and more suitable. Scale spacing is monitored by enlarging the filter size rather than decreasing the image size in the following steps. By approximating the Gaussian derivatives, nine squared filters give the output at scale λ. Blob values are calculated at the lowest level using the nine squared filters. To obtain the output layers, the images are filtered with progressively larger masks.
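    As an illustration of the determinant-of-Hessian interest measure of Equation (6), the sketch below uses plain finite differences for the second derivatives rather than the paper's box-filter approximation; a blob-like point gives a large positive determinant, while a straight edge gives a determinant near zero.

```python
def hessian_determinant(img, x, y):
    # Second-order central differences approximate Cxx, Cyy and Cxy at
    # pixel (x, y); det(H) = Cxx * Cyy - Cxy^2. Blob centres respond
    # strongly, edges respond weakly, which is why the determinant is
    # used to select interest points.
    cxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    cyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    cxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return cxx * cyy - cxy * cxy

blob = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]   # isolated bright spot
edge = [[0, 0, 0], [10, 10, 10], [0, 0, 0]]  # horizontal bright line
print(hessian_determinant(blob, 1, 1))  # -> 400.0
print(hessian_determinant(edge, 1, 1))  # -> 0.0
```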

    4.3|Filtering method and Hessian matrix detector

    The parametric smoothing kernels with the lowest return values are applied. Scale specification is used to generate the final stage of scaling with high image data, and this can be applied because it is computed from a minimum set of axioms. Convolution is applied by highlighting a sequence of filter actions. As integral images have various attributes, a small difference in scale for the partial second-order derivative is generated in the direction of derivation. The start and finish of the Hessian response determine the interpolation. The Hessian matrix therefore determines the interpolation applied for image spacing and scaling from the heavy responses; by implementing interpolation, the lesser obtained scale is found to be λ. For each new entry, the filter size increases and is doubled. Additionally, by raising the accuracy, entries are calculated in a similar way. To calculate the optimal first entry for a particular image and filter size, improved sample rates are applied; the Hessian matrix maxima determine the interpolated scaling in space width. Interpolation of the scale space is particularly important because it is similarly high between the starting layers of entries. In the process of scale-space division, the values of some pixels are missed and need to be assessed. Interpolation is induced by approximating the missing pixel intensities from their neighbours.
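    The neighbour-based estimation of missing pixel intensities can be sketched as below. Taking the mean of the 4-connected neighbours is an assumption for illustration; the paper does not specify which neighbourhood or weighting its interpolation uses.

```python
def interpolate_missing(img, missing):
    # Estimate each missing pixel as the mean of its available
    # 4-connected neighbours, skipping neighbours that are themselves
    # missing or outside the image.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for (y, x) in missing:
        nbrs = [img[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in missing]
        out[y][x] = sum(nbrs) / len(nbrs) if nbrs else 0
    return out

grid = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
print(interpolate_missing(grid, {(1, 1)}))  # centre filled with 1.0
```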

    The scale space, in the form of actual entries for accurate feature-vector generation, reaches better precision by calculating the missing points from the known data. At this level, the proposed model generates a small number of reflective image signatures that discard the sample. The issues occurring at sub-sampling, reduction and truncation of the images are approximated using the interpolated results. The feature vector is reduced in size by principal component analysis (PCA), applied in cyclic steps, with the eigen-coefficients computing the main components. An orthogonal conversion with uncorrelated factors is produced from the associated parameters; these calculated, inter-related factors are defined as principal components. Maximum variance is found in the initial component, after which it is minimized sequentially; each variable is orthogonal to its predecessor, posing fewer reflections. Usually, a physical model presents a thick sample of material that is reflected within the related apparatus. These are estimated by the function given in Equation (7):

    In this equation, υ denotes the dependency on angles, and the wavelength is represented by σ. Surface reflection and body reflection are denoted as S and B. Image features are represented by the RGB channels, which carry the primary colours. The proposed model efficiently collects colour-channel factors along with the monochrome intensities by performing spatial mapping; these colours exhibit deep image content by coupling their information with the monochrome values, producing a maximal representation of the image data. This colour data describes the regular material, and its position carries spatial adjustments, which solves the expressive correspondence.
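    The PCA reduction described above can be illustrated for two-dimensional feature pairs, where the eigenvalues of the covariance matrix have a closed form. This is a sketch of the principle (maximum variance in the first component, the remainder in the orthogonal one); real feature vectors are higher-dimensional and would use a full eigendecomposition.

```python
import math

def pca_2d(points):
    # Eigenvalues of the 2x2 covariance matrix of (x, y) feature pairs.
    # The larger eigenvalue is the variance along the first principal
    # component; the smaller is the variance along the orthogonal one.
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc  # (largest, smallest)

# Perfectly correlated data: all variance lies on one component.
print(pca_2d([(0, 0), (1, 1), (2, 2)]))
```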

    4.4|ANN interpolation of image pixels

    Interpolation is done using an ANN; it requires little computation and can be employed on moving pictures. With the help of a high-resolution image, optical image interpolation and network training are attained. The ANN is presented in this subsection, which follows the execution of image expansion using an ANN. An ANN is an artificial network that resembles the neurons of a living body and guarantees approximation under different types and architectures of ANN. A neuron is the fundamental unit of an ANN; the input signals are given as x1, x2, …, xN and the weights of a neuron as (w1, w2, …, wN). After summing the weighted inputs, S is termed the signal and the neuron's output signal is termed y. A neuron in an ANN is described as given in Equation (8):

    In this equation, θ represents the threshold value, and the activation function is given as f. The activation function is nonlinear, typically the sigmoid function shown in Equation (10):

    The main target of training such a neural network is to adjust the synaptic weights so that the final output is as close as possible to the target for all training patterns. The backpropagation algorithm is used to train the multilayer feed-forward network; it updates the synaptic weights each time a training pattern is completed. The transformation ratios for interpolated pixels are determined from neighbouring sampled points with the help of the starfish function. The network has 64 (8 × 8) sampled input points. These 64 pixels, drawn as grey circles, are chosen as the network inputs. The correct values compared against the outputs are chosen from the original image as a group of 64 pixels and four pixels forming the training set. Next, the training data are produced in the same way as the training set and saved in TS file format. The ANN architecture consists of three layers: an input layer, a hidden layer and an output layer. The size of the area is varied to affect the interpolated points: the input layer contains 16 (4 × 4), 36 (6 × 6) or 64 (8 × 8) neurons, and in each case a bias is added as an extra input. There are four neurons in the output layer, and the number of neurons in the hidden layer varies among 40, 45 and 50. The sigmoid function is used as the activation function in both the hidden and output layers (Figure 2).
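    The neuron of Equation (8) and the three-layer forward pass can be sketched as follows. The layer sizes and weights shown are illustrative only; in the paper the weights come from backpropagation training, which is not reproduced here.

```python
import math

def sigmoid(s):
    # Logistic sigmoid activation, mapping any real s into (0, 1).
    return 1.0 / (1.0 + math.exp(-s))

def neuron(inputs, weights, theta):
    # y = f(sum_i(w_i * x_i) - theta): the unit described in Equation (8),
    # with theta acting as the threshold.
    s = sum(w * x for w, x in zip(weights, inputs)) - theta
    return sigmoid(s)

def mlp_forward(x, hidden_w, hidden_t, out_w, out_t):
    # Three-layer feed-forward pass: sigmoid activations in both the
    # hidden layer and the output layer, as in the architecture above.
    h = [neuron(x, w, t) for w, t in zip(hidden_w, hidden_t)]
    return [neuron(h, w, t) for w, t in zip(out_w, out_t)]

# Toy network: 2 inputs, 2 hidden neurons, 1 output neuron.
hidden_w, hidden_t = [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.0]
out_w, out_t = [[1.0, 1.0]], [0.0]
print(mlp_forward([1.0, 2.0], hidden_w, hidden_t, out_w, out_t))
```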

    The sigmoid function normally refers specifically to the logistic function, also called the logistic sigmoid function. All sigmoid functions map the entire number line into a small range, such as between 0 and 1 or -1 and 1, so one use of a sigmoid function is to convert a real value into one that can be interpreted as a probability. Sigmoid functions became popular in deep learning because they can be used as activation functions in an artificial neural network; they were inspired by the activation potential in biological neural networks. Sigmoid functions are also useful for many machine learning applications where a real number needs to be converted to a probability: placed as the last layer of a machine learning model, a sigmoid can convert the model's output into a probability score, which is easier to work with and interpret. This process uses the TS file produced in the previous step as the training set. The synaptic weights are saved as a WT file after training. The pixels of the enlarged image are then estimated using the trained network in the interpolation step. Here, the image is divided into red, green and blue planes, and each plane is estimated individually using the same trained network. Finally, a 24-bit bitmap image is formed by integrating the RGB planes and saved as an IMG file. As a result, the image is enlarged twofold.

    4.5|Extraction of image features

    Colour is one of the most important image features.Colour features are defined with respect to a particular colour space or model;once the colour space is specified,colour features can be extracted from images or regions.Texture is a very useful characterization for a wide range of images,and human visual systems are generally believed to use texture for recognition and interpretation.In general,colour is a pixel property,while texture can only be measured over a group of pixels.A large number of techniques have been proposed to extract texture features;based on the domain from which the feature is extracted,they can be broadly classified into spatial and spectral texture feature extraction methods.Shape is an important cue for human beings to identify and recognize real-world objects,and shape features encode simple geometrical forms such as straight lines in different directions.Colour is the most fundamental and stable image feature: an image may change in noise,orientation,resolution and size,but colour remains very robust.In this paper,the colour features of an image are derived from the colour moments in RGB space,which represent the colour distribution of each image.Colour moments have major advantages,and the lower-order moments mainly capture the distribution of colours.Four moments are computed for each colour channel (t = R, G, B): standard deviation (ri),skewness (hi),mean (li) and kurtosis (ci).The four features are extracted from each of the R,G and B planes of the RGB colour space.The formulas for these moments are shown in Equations (11) and (12).
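Since Equations (11) and (12) are not reproduced here, the sketch below uses the standard central-moment definitions of the four per-channel features: the mean, the standard deviation, and root-of-central-moment forms for skewness and kurtosis, a common colour-moment convention. The exact formulas in the paper may differ.

```python
import math

def colour_moments(channel):
    """Four moments of one colour channel (flat list of pixel values):
    mean (l), standard deviation (r), skewness (h) and kurtosis (c)."""
    n = len(channel)
    mean = sum(channel) / n
    m2 = sum((p - mean) ** 2 for p in channel) / n
    m3 = sum((p - mean) ** 3 for p in channel) / n
    m4 = sum((p - mean) ** 4 for p in channel) / n
    std = math.sqrt(m2)
    # Cube/fourth roots of the central moments (common colour-moment form)
    skew = math.copysign(abs(m3) ** (1 / 3), m3)
    kurt = m4 ** 0.25
    return mean, std, skew, kurt

def rgb_feature_vector(pixels):
    """Concatenate the four moments of the R, G and B planes,
    giving a 12-value colour feature vector per image."""
    feats = []
    for c in range(3):
        plane = [px[c] for row in pixels for px in row]
        feats.extend(colour_moments(plane))
    return feats
```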

    FIGURE 2 Process of interpolation using artificial neural network

    where Pij is the value of the colour channel at the jth image pixel and M × N is the total number of pixels per image.When retrieving images,the thickness and density of different images can differ considerably.The texture feature is an effective global feature that describes the surface behaviour of a region with respect to the goal.Regional characteristics have clear advantages in pattern-matching problems: they are not influenced by local deviations and usually show strong robustness and rotational invariance to noise.The most primitive approach to extracting texture features uses statistical models,which require a lot of computational time and storage.Therefore,the discrete cosine transform (DCT) is used for image texture feature extraction [25].First,the query image is converted to a monochrome version and divided into 8 × 8 blocks,and the DCT is applied to each block individually.The texture feature vector is obtained from selected DCT coefficients.The DCT calculation for each pixel is shown in Equation (13).

    In Equation (13),f(x,y) is the pixel value at coordinates (x,y) within the 8 × 8 block,F(u,v) is the DCT-domain representation of f(x,y),and u and v denote the vertical and horizontal frequencies,respectively.
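The block-wise transform can be implemented directly from this definition. The sketch below is a naive transcription of the standard 2-D DCT-II for one 8 × 8 block, with C(0) = 1/√2 and C(k) = 1 otherwise; fast DCT variants exist, but this form keeps the correspondence with Equation (13) explicit.

```python
import math

def dct_8x8(block):
    """2-D DCT-II of one 8x8 block:
    F(u,v) = (1/4) C(u) C(v) sum_x sum_y f(x,y)
             * cos((2x+1) u pi / 16) * cos((2y+1) v pi / 16),
    with C(0) = 1/sqrt(2) and C(k) = 1 otherwise."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0

    F = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            F[u][v] = 0.25 * c(u) * c(v) * s
    return F
```

For a constant block the energy collapses into the DC coefficient F(0,0), which is how the low-frequency coefficients come to summarise the block's texture.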

    4.6|Extract salient region

    A salient area that roughly captures an image's main content is a central concern for the proposed model.After segmenting the image with c-means clustering based on its colour and texture features,the image is divided into many regions.A simple approach is used to extract the salient region.The weight Wi of the ith region is given in Equation (14):

    The weight of the ith region is denoted Wi,and the region with the highest Wi is taken as the salient region.ω1, ω2 and ω3 are the weights,while Areai,CDi and Brightnessi denote the area,centre degree (CD) and brightness of the ith region,respectively.For calculating the similarity between salient regions,shape is an important feature,and many image retrieval systems nowadays work with shape features.However,most systems' performance is not satisfactory,for three main reasons.

    First,shape features for image retrieval usually lack an exact mathematical model.Second,the performance of an image retrieval model is unstable over time because of target deformation.Third,the shape features represent the target's shape information,which is not fully consistent with human visual recognition,owing to the gap between similarity in feature space and similarity as perceived by the human visual system.The proposed model is therefore limited if it relies only on shape features.Moreover,people are not sensitive to scaling and variation of a target's shape,so the shape feature must be made robust to transformation,scaling and rotation,which makes similarity calculation difficult.
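Returning to the salient-region step: since the exact form of Equation (14) is not reproduced here, the sketch below assumes Wi is a weighted linear combination of the three region properties. The weight values and the dictionary layout are illustrative assumptions.

```python
def salient_region(regions, w1=0.4, w2=0.3, w3=0.3):
    """Return the index of the salient region, i.e. the region with the
    highest weight. Assumed form (Equation (14) is not reproduced):
        W_i = w1 * Area_i + w2 * CD_i + w3 * Brightness_i
    regions: list of dicts with 'area', 'cd' (centre degree) and
    'brightness', each normalised to [0, 1]."""
    def weight(r):
        return w1 * r['area'] + w2 * r['cd'] + w3 * r['brightness']
    return max(range(len(regions)), key=lambda i: weight(regions[i]))
```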

    4.7|Calculate similarity between the salient regions

    Step 1: Let the principal-axis lengths of Q and M be LQ and LM,respectively.To check the similarity of Q and M,the feature values of M are scaled according to the size ratio of Q and M.With the scaling factor λ = LQ/LM,Q's shape feature value is calculated proportionally as given in Equation (15) [27]:
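The scaling step can be sketched as follows, assuming the normalisation simply multiplies M's feature values by the principal-axis ratio λ = LQ/LM before the distance is computed; the exact form of Equation (15) may differ, and the Euclidean distance here is an illustrative choice.

```python
def scale_shape_features(features_m, l_q, l_m):
    """Scale M's shape feature values by the principal-axis ratio
    lam = L_Q / L_M before comparing with Q (assumed normalisation)."""
    lam = l_q / l_m
    return [f * lam for f in features_m]

def shape_distance(features_q, features_m, l_q, l_m):
    """Euclidean distance between Q's features and M's scaled features."""
    scaled = scale_shape_features(features_m, l_q, l_m)
    return sum((a - b) ** 2 for a, b in zip(features_q, scaled)) ** 0.5
```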

    The proposed model not only uses low-level features such as texture and colour to segment an image into regions and extract the salient area,but also acquires shape features from the salient regions (Figure 3).The shape-based similarity calculation can then be applied to CBIR.The entire working procedure of the proposed work is shown in Figure 4.Steps of the research work:

    1.The proposed work will capture and examine the overall image data,which contains colour,object,texture,spatial information and shape that generates the maximum recall and precision rates.

    2. Introduction of a lightweight feature description and detection system that effectively retrieves the related outcomes from cluttered and complicated datasets.

    3. An innovative image fusion method is introduced that merges spatial coordinates with earlier candidate results.

    4.This newly invented model performs scaling,interpolation and suppression together for obtaining the details of the vastly improved image data.

    FIGURE 3 Colour vector

    5. To acquire proper discrimination,a novel system with spatial colour mapping is produced to highlight the salient content.

    6. A novel method is successfully exhibited that performs exceptionally on small objects,objects against complex backgrounds,similar textures,enlarged/resized images,cluttered patterns,colour-dominant arrangements,mimicked and ambiguous overlaid objects,and cropped or occluded objects.

    7.The potential of the proposed model is used to unveil the related image information from anchor translation instead of entire image repetition.

    8. An ideal technique that operates on colour and grey-level channels simultaneously to arrive at a uniform data representation.

    9. A computation-,storage- and time-efficient retrieval system is produced,which retrieves the outcomes in a minimal amount of time.

    10. A novel approach that harnesses the power of normally scaled features with a large vocabulary in the architecture is used to support classification and indexing.

    FIGURE 4 Flow chart for the proposed image retrieval system

    FIGURE 5 Images from dataset

    FIGURE 6 Output images from Gaussian filter

    FIGURE 7 Resized images

    5|RESULTS

    The CBIR-SMANN method first converts the query image to grayscale and performs Gaussian filtering;the Hessian detector is then used to detect corners,and an interpolation value is found using the ANN.These values are set up as a class,compared with database images using similarity measurement,and the matches are retrieved from the database.For appropriate image retrieval,choosing a dataset is a critical task,since precision depends on image attributes such as colour,occlusion,size,quality,location and cluttering.Experiments were performed on a public-access dataset containing 1000 images in 15 classes.Images in this dataset are of dissimilar pixel sizes and in JPEG format.The implementation is done in Python 3.7 on an Intel i5 processor with 16 GB RAM and a 4 GB graphics processor.For GPU analysis,the input images are tested on an i3 processor with 4 GB RAM and an NVIDIA GeForce GTX 1650.The proposed method is validated,and the tested outcomes are compared with existing techniques such as IRFSM [28],Sketch-based Image Retrieval by Salient Contour (SBIR) [29],Image Retrieval Scheme using Quantized Bins of Colour Image Components and Adaptive Tetrolet Transform (IR-QBCI) [30] and Greedy Learning of Deep Boltzmann Machine's Variance and Search Algorithm for Efficient Image Retrieval (GDBM-IR) [31],considering metrics like accuracy,precision,recall,F-measure and retrieval time.Some sample images from the dataset are displayed below (Figure 5).

    Images from the public dataset are then passed through the Gaussian filter,and the filter outputs are presented in Figures 6 and 7.

    FIGURE 8 Hessian detector matrix

    Images are then resized to perform classification,and the classified ANN values are saved to perform similarity measurement.
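The similarity-measurement stage can be sketched as a nearest-neighbour search over the stored feature vectors. A plain Euclidean distance stands in for the paper's similarity function, and the database layout (a list of name/vector pairs) is an assumption.

```python
def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(query_vec, database, k=3):
    """Rank database entries by feature-vector distance to the query
    and return the names of the k nearest images.
    database: list of (name, feature_vector) pairs."""
    ranked = sorted(database, key=lambda item: euclidean(query_vec, item[1]))
    return [name for name, _ in ranked[:k]]
```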

    Hessian matrix detectors are used to detect interest points in the images.These images are then compared with the query image to find the relevant output images (Figure 8).
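A minimal determinant-of-Hessian detector over a grayscale image (a 2-D list of floats) can be sketched with central second differences. The fixed threshold and the absence of Gaussian scale-space smoothing are simplifications relative to a full Hessian detector.

```python
def hessian_response(img, y, x):
    """Determinant-of-Hessian response at one interior pixel, using
    central second differences for Ixx, Iyy and Ixy."""
    ixx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    iyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    ixy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return ixx * iyy - ixy * ixy

def interest_points(img, threshold):
    """Interior pixels whose Hessian determinant exceeds the threshold,
    a bare-bones stand-in for the Hessian detector in the pipeline."""
    h, w = len(img), len(img[0])
    return [(y, x) for y in range(1, h - 1) for x in range(1, w - 1)
            if hessian_response(img, y, x) > threshold]
```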

    At the final stage,the query image that has been entered gives a customized,relevant image as output.Query image and its respective relevant images retrieved are shown in Figure 9.

    Figure 9 shows the experimental result for each query image.The accuracy of the proposed CBIR-SMANN has been calculated by applying different measures.

    FIGURE 9 Content-based image retrievalsimilarity measure via artificial neural network retrieval images

    TABLE 1 Comparison table

    5.1|Evaluation metrics

    The efficiency of the proposed image retrieval system is evaluated based on the performance attained by feature extraction,classification rate and similarity measurement.In this sub-section,major evaluation metrics such as accuracy,precision,recall and F-measure are adopted,not only to validate the effectiveness of the proposed methodology but also to show the stability of the results.When the accuracy metric is close to 100%,the retrieval system achieves its best performance and qualifies as an ideal technique for recovering a relevant image collection,as assessed by the number of images retrieved correctly.
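These metrics follow the standard confusion-matrix definitions and can be computed as below; whether the counts are per query or aggregated across queries depends on the evaluation scheme.

```python
def retrieval_metrics(tp, fp, fn, tn):
    """Standard CBIR evaluation metrics from retrieval counts:
    tp = relevant images retrieved, fp = irrelevant images retrieved,
    fn = relevant images missed, tn = irrelevant images correctly
    left out."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {'precision': precision, 'recall': recall,
            'accuracy': accuracy, 'f1': f1}
```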

    Table 1 illustrates the comparative analysis of performance metrics between the proposed model and the existing models.The proposed CBIR-SMANN technique is compared with IRFSM,SBIR,IR-QBCI and GDBM-IR.The retrieval time is calculated as the time taken to retrieve relevant images for a query from the database.It is measured in seconds,and the obtained values for retrieval time are plotted in the graph below.When a retrieval system takes less time,its performance is correspondingly better,and the proposed model performs well by achieving the smallest retrieval time.The proposed model attained a retrieval time of 980 ms,while the IRFSM,SBIR,IR-QBCI and GDBM-IR models attained retrieval times of 2000 ms,1800 ms,1150 ms and 1100 ms,respectively.

    FIGURE 10 Accuracy level of retrieval

    FIGURE 11 F1_score level comparison with previous techniques

    FIGURE 12 Error value comparison graph

    FIGURE 13 Precision value comparison with previous techniques

    FIGURE 14 Recall value comparison with previous techniques

    From Figure 10,it can be seen that the proposed CBIR-SMANN gives higher accuracy than the previous techniques;GDBM-IR,IR-QBCI,SBIR and IRFSM all give accuracy levels below the proposed model.CBIR-SMANN achieved 88% accuracy owing to its use of the ANN-based interpolation technique,whereas GDBM-IR gives 82%,IR-QBCI attained 81.5%,SBIR attained 79% and IRFSM attained 70%,the latter due to its low image retrieval capacity.The F-measure obtained is also higher for the proposed approach,showing a better outcome for the proposed retrieval system.Likewise,the calculated loss function is provided in the subsequent section.

    From Figure 11,it can be seen that the proposed CBIR-SMANN gives a high F1 score of about 71%,higher than the previous techniques: GDBM-IR attained 70%,IR-QBCI attained 65%,SBIR attained 62% and IRFSM attained 53%.Thus the proposed CBIR-SMANN performed better than the other methods.

    Comparing the error values shows that the proposed CBIR-SMANN gives the minimum error during execution (Figure 12).Among the previous methods,GDBM-IR gives an error value of 16% and IR-QBCI gives 17%,while SBIR gives a high 23% and IRFSM a very high 30%.The proposed model gives an efficient error value of 11%.

    Precision is the ratio of retrieved relevant images to the total number of images retrieved for a query.Precision measurements are compared with previous methods in Figure 13.

    FIGURE 15 False Negative rate comparison with previous techniques

    FIGURE 16 False positive rate comparison with previous techniques

    From Figure 13,it can clearly be seen that the proposed CBIR-SMANN gives a higher precision value than the other methods.GDBM-IR gives a precision of 81%,while IR-QBCI and SBIR both give 70%,and IRFSM shows the minimum precision of 60%.Overall,CBIR-SMANN gives the highest precision value of 82%.

    In image retrieval,recall is defined as the proportion of retrieved relevant images to all relevant images in the database.Recall values show the efficiency of the system: the proposed CBIR-SMANN gives a recall of 78%,GDBM-IR retrieves images with a recall of 68%,SBIR gives a recall of 62%,and the least recall,53%,is given by the IRFSM technique.The proposed CBIR-SMANN thus succeeds with a high recall value,as shown in Figure 14.

    A false negative is a test result that indicates a condition is absent when it is actually present.From Figure 15,it is clearly visible that CBIR-SMANN attains a false negative rate of 0.050,GDBM-IR shows 0.075,IR-QBCI slightly above 0.100,SBIR a larger 0.150 and IRFSM the largest value,around 0.200.Thus,the proposed methodology gives the minimum false negative rate.A false positive occurs when a test result mistakenly indicates the presence of a condition,such as an image being relevant to the query when it is not,as shown in Figure 16.

    FIGURE 17 Negative predictive value compared with previous techniques

    FIGURE 18 Specificity value comparison with previous techniques

    The average results over all queries are used to measure the performance,carried out under four different schemes to guarantee a fair comparison.The negative predictive value is the probability that images with a negative test result are indeed irrelevant,as shown in Figure 17,where the proposed CBIR approach once again outperforms the other approaches.

    From Figure 17,it can be seen that the negative predictive value of the proposed model is higher than that of the previous techniques.CBIR-SMANN gives a negative predictive value of 75%,better than the other methods: GDBM-IR and IR-QBCI give 70% and 62%,SBIR gives 61%,and IRFSM shows 55%.

    Specificity is the proportion of irrelevant images that are correctly identified as irrelevant by the test.Figure 18 shows that CBIR-SMANN gives a specificity of 91%,higher than GDBM-IR,SBIR,IRFSM and IR-QBCI.Thus the proposed CBIR-SMANN succeeds in producing high specificity values.Finally,the overall performance evaluation shows that the proposed CBIR-SMANN has better performance than the other conventional methods.

    6|CONCLUSION

    CBIR-similarity measure through artificial neural network interpolation (CBIR-SMANN) is the technique proposed in this study.The images are resized and then subjected to Gaussian filtering in the pre-processing step,and the interesting points are gathered by passing them through a Hessian detector.The skewness,mean,kurtosis and standard deviation features were extracted and then passed to the ANN for interpolation.The interpolated results are saved in a database and may be retrieved at any time.During the testing step,a query image is provided,which is pre-processed and feature-extracted before being supplied to the similarity measurement function;the ANN then aids in the retrieval of comparable images from the database.According to the results,CBIR-SMANN achieved a high recall value of 78% and a minimal retrieval time of 980 ms,along with a specificity of 91% and a negative predictive value of 75%,higher than the previous methods.The experimental outcomes meet highly acclaimed benchmarks,showing that the proposed method yields a great performance when compared with benchmark descriptors and research approaches.The fused spatial colour and shape features can specifically retrieve images from colour,shape,object and texture datasets.A future extension of this work will integrate a convolutional network to acquire further enhanced outcomes.

    CONFLICT OF INTEREST

    There is no conflict of interest between the authors regarding the manuscript preparation and submission.

    DATA AVAILABILITY STATEMENT

    Data sharing is not applicable to this article as no new data were created or analysed in this study.

    ORCID

    Faiyaz Ahmadhttps://orcid.org/0000-0002-2222-3307
