
    Pervasive Attentive Neural Network for Intelligent Image Classification Based on N-CDE’s

Computers, Materials & Continua, 2024, Issue 4

    Anas W.Abulfaraj

    Department of Information Systems,Faculty of Computing and Information Technology,King Abdulaziz University,Rabigh,21911,Saudi Arabia

ABSTRACT The utilization of visual attention enhances the performance of image classification tasks. Previous attention-based models have demonstrated notable performance, but many of them exhibit reduced accuracy when confronted with inter-class and intra-class similarities and differences. Neural Controlled Differential Equations (N-CDE's) and Neural Ordinary Differential Equations (NODE's) are extensively utilized in this context. N-CDE's possess the capacity to represent both inter-class and intra-class similarities and differences with enhanced clarity. To this end, an attentive neural network is proposed to generate attention maps, which uses two different types of N-CDE's: one for adopting hidden layers and the other for generating attention values. Two distinct attention techniques are implemented: time-wise attention, also referred to as bottom N-CDE's, and element-wise attention, called top N-CDE's. Additionally, a training methodology is proposed to guarantee that the training problem is well posed. Two classification tasks, fine-grained visual classification and multi-label classification, are utilized to evaluate the proposed model. The proposed methodology is employed on five publicly available datasets: CUB-200-2011, ImageNet-1K, PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. The obtained visualizations demonstrate that N-CDE's are better suited to attention-based tasks than conventional NODE's.

KEYWORDS Differential equations; neural-controlled DE; image classification; attention maps; N-CDE's

    1 Introduction

Image recognition encompasses several techniques for automatically assigning one or more labels to an image, depending on its visual contents. This task, which can be categorized into multi-label and single-label image classification, is both fundamental and applicable in practice. Convolutional Neural Networks (CNNs) have achieved tremendous success in recent times [1–3]. Recently, researchers have employed CNNs in human action recognition [4,5], document classification [6], blockchain security [7] and superhero classification [8]. Nevertheless, the efficacy of CNNs remains relatively constrained when confronted with demanding image recognition tasks. Illustrated in Figs. 1a and 1b are representative images and their respective classes, extracted from the MS COCO [9] and PASCAL VOC [10] datasets, serving as instances of Fine-Grained Visual Categorization (FGVC) and image classification.

    Figure 1: A selection of representative images obtained from various datasets[9–11]

A form of differential equation that includes a control input or a decision-making mechanism is referred to as a Controlled Differential Equation (CDE). These equations delineate the temporal evolution of a system's state within the framework of dynamic systems and control theory. They account for both the inherent dynamics of the system and the impact of external controls. To define the specific terminology: a) Differential Equation (DE): this type of equation involves derivatives of functions, which in the context of CDE's represent the rates of change of particular variables; and b) Controlled: the term "controlled" denotes the circumstance in which a control input influences or directs the behavior of the system. The control input in question is commonly a programmable function that can be altered or tailored to accomplish intended system operations. Consequently, a CDE delineates the temporal evolution of a system's state, incorporating not only the intrinsic dynamics of the system but also the influence of a control input. In mathematical notation, a CDE may be represented as dx/dt = f(x, u, t), where x denotes the system's state variables, t signifies time, u signifies the control input, and f represents a function describing the system's natural evolution.
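To make the notation concrete, the following minimal sketch integrates a toy controlled system of the form dx/dt = f(x, u, t) with an explicit Euler scheme; the dynamics f, the control signal u, and the helper euler_cde are illustrative assumptions, not part of the proposed model.

```python
import numpy as np

def euler_cde(f, x0, u, t0, t1, steps=1000):
    """Integrate dx/dt = f(x, u(t), t) with the explicit Euler method."""
    h = (t1 - t0) / steps
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(steps):
        x = x + h * f(x, u(t), t)
        t += h
    return x

# Toy example: the state decays toward the external control signal u(t).
f = lambda x, u, t: -x + u      # natural evolution plus control influence
u = lambda t: np.sin(t)         # control input u(t)
print(euler_cde(f, x0=[1.0], u=u, t0=0.0, t1=5.0))
```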

The presence of several factors, such as viewpoint, occlusion, illumination, scale and appearance, contributes to the substantial intra-class variations observed in image recognition. These factors, along with the interplay between different object categories, pose considerable challenges and render image classification a more complex task. Additionally, Fig. 1c depicts a collection of bird photos and their respective classes sourced from the CUB-200-2011 dataset [11], which is recognized as a demanding dataset comprising 200 distinct bird species. The significant intra-class variances resulting from factors such as pose, scale, and position, along with the small changes between classes, contribute to the challenging nature of FGVC. One may pose the question: is it possible to develop a methodology that possesses the capacity to augment the efficacy of representation?

It is most likely possible to relate the observed performance disparities between N-CDE's and Neural Ordinary Differential Equations (NODE's) to certain architectural variations and innate traits that are ingrained in the models. Different architectural features, such as attention processes, network depth, skip connections, and complexity, may be responsible for different information-capturing and information-using capacities. Image analysis has been the subject of substantial research in previous studies, since it has been recognized as an efficient method for enhancing the representation capabilities of machine learning in multiple domains such as object recognition [12–14], image denoising [15], detection of human movement [16], CPU scheduling [17], target detection [18], person identification [19] and spoof detection [20]. Furthermore, training choices such as data augmentation, learning-rate tactics, regularization approaches, and parameter initialization can have a significant impact on how well the models generalize and learn. The details of how these components are implemented within the architectures, in addition to the difficulty of the task and the structure of the dataset, are important factors affecting the observed differences in model performance. Referring to the original research papers or documentation related to N-CDE's and NODE's is crucial for a thorough understanding, because these sources usually offer in-depth explanations of the experimental conditions, hyperparameter settings, and architectural decisions that lead to the differences that are observed [21].

When compared to N-CDE's, the use of NODE's in attention-based models may have certain drawbacks. Due to the complexity of attention systems, one significant obstacle is the potential difficulty in comprehending the decisions made by the model. Understanding the logic behind the model's concentration on particular input regions can be intricate, impeding the model's clarity and comprehensibility. Furthermore, the scalability of the model may be impacted by the computational complexity brought about by attention processes, particularly if they are widespread or complicated, resulting in longer training durations and higher resource requirements. Attention-based models are at risk of overfitting, especially without good regularization techniques, and they might not generalize well to new, unseen data. Potential drawbacks include these models' sensitivity to hyperparameter decisions as well as their reliance on the variety and distribution of the training set. Standardization issues in attention mechanisms, such as NODE's, might make it difficult to compare them across various architectures, and their efficacy in tasks requiring more comprehensive contextual awareness may be restricted by their inability to capture long-range dependencies. Furthermore, the resilience of attention-based models in practical applications may be questioned due to their susceptibility to adversarial attacks. It is crucial to remember that the specific shortcomings of NODE's, and how they compare to N-CDE's, will vary depending on how well each architecture is implemented [22].

An advanced method to improve neural network modeling is the combination of N-CDE's and attentive neural networks to generate attentive N-CDE's. The underlying idea is to combine N-CDE's, which are well known for their capacity to simulate dynamics in continuous time, with attention mechanisms that allow selective focus on important aspects. This combination enables dynamic adaptation to important features at different time points and enables the modeling of temporal dynamics using differential equations in attentive N-CDE's. This is particularly useful for time-series data, where it is essential to capture changing patterns over time. Moreover, the inclusion of attention mechanisms makes it easier to create attention maps, which offer insight into the temporal events that affect the model's predictions. The promise of this combined method in managing complex and dynamic data structures is demonstrated by the synergy between N-CDE's and attention processes, which not only increases interpretability in time-series analysis but also strengthens the model's robustness to noisy or irregular temporal patterns [23].

Over the course of time, persistent initiatives have been undertaken to tackle these concerns. A new methodology of arranging feature information with class-specific weights, along with an extra approach to improve the impact of the feature-information arrangement, was introduced to comprehensively handle classification and localization misalignment; the results showed MaxBoxAccV2 scores of 68.9% and 79.5% on the CUB-200-2011 and ImageNet-1K datasets, respectively. A clustering-based approach, Class RE-Activation Mapping (CREAM), was applied with class-specific background context embeddings as cluster centers, and contextual embeddings were learned during training by a CAM-guided momentum-preservation approach; CREAM performed well on the OpenImages, ILSVRC and CUB benchmark datasets [9]. A pipeline for DA-WSOL was devised with the aim of incorporating domain adaptation (DA) methodologies into WSOL by utilizing a target sampling strategy to choose various sorts of target samples, and experiments showed better results than SOTA methods on multiple benchmarks.

The Class-agnostic Activation Map (C2AM), a contrastive learning approach, utilized unlabeled image data without relying on image-level supervision and was reported to successfully extract object bounding boxes [24]. A CNN in conjunction with a Recurrent Neural Network (RNN) was utilized for defining the image-label relationship and the semantic label dependence; the experimental results of RNN-CNN outperformed other multi-label classification models [25]. The regional latent dependencies model was developed, which comprises a fully convolutional localization model to locate regions that are then forwarded to an RNN for characterization of dependencies at the regional level; the authors claimed the best performance of the model for predicting small objects [26].

The evaluation of the depth of convolutional networks was conducted using an architecture that employed compact (3 × 3) convolution filters, which revealed that by increasing the depth to 16–19 weight layers a notable enhancement in performance was attained compared to previous configurations [27]. A framework for residual learning was developed, which obtained good generalization performance on recognition tasks by explicitly reformulating layers as learning residual functions with reference to the layer inputs, as opposed to learning unreferenced functions [28]. Multi-label image recognition was achieved by proposing a recurrent memorized-attention module, consisting of an LSTM and a transformer-layer subnetwork, with better results reported for both accuracy and efficiency on the PASCAL VOC 07 and MS COCO datasets [29].

Multi-object recognition was performed by extracting object proposals using selective search, which yielded two distinct types of extracted features. The LMNN CNN was provided with a low-dimensional feature to generate the label view, while the normal CNN feature was employed as the feature view, and these two views were then fused; the results validated the discriminative effect and the generalization capability of the model [30]. A novel attention framework utilizing reinforcement learning was devised to address the problem of redundant computation cost by iteratively identifying a series of attentional and informative regions associated with semantic objects. On MS COCO and PASCAL VOC, this technique excelled in efficiency and region-specific image labelling [31].

A reinforcement learning approach to multi-class image classification, which seeks to replicate human behavior by assigning labels to images from simple to complex, was utilized to sequentially predict labels [32]. An RNN model with an attention layer as well as an LSTM layer was used for multi-label image recognition to jointly learn the labels of interest, and the results proved to be effective on the MS COCO and NUS-WIDE datasets [33]. A unique deep learning architecture was constructed that integrates knowledge graphs to represent the connections among multiple labels and learns information from semantic label representations; the proposed methodology exhibited enhanced performance in the context of multi-label recognition and multi-label zero-shot learning (ML-ZSL) [34].

Attention maps were generated from the Spatial Regularization Network (SRN), and the results obtained from the regularized network were merged with the original outcomes of a ResNet-101 model; the SRN model demonstrated improved classification performance for both spatial and semantic relationships of labels [35]. An effective attention module called the Convolutional Block Attention Module (CBAM) was developed with the ability to integrate with CNN architectures, resulting in minimal computational overhead [36]. A novel model called Squeeze-and-Excitation (SE) was designed, which adaptively recalibrates channel-wise feature responses by explicitly modeling the interdependencies among channels. The SENet architecture was constructed by stacking multiple SE blocks, resulting in a significant reduction of the top-5 error to a value of 2.251% [37].

Although N-CDE's exhibit potential in representing dynamic dependencies in neural network models, significant research gaps still need to be filled. First, more research is needed to determine how well N-CDE's scale and perform when handling huge datasets or intricate model architectures; real-world applications require an understanding of the computational demands and potential obstacles. Furthermore, studies might explore N-CDE interpretability in further detail, focusing on how reliable and understandable these models are, particularly when used for challenging tasks. Moreover, evaluating the adaptation of N-CDE's to different real-world settings requires examining their generalization abilities over a range of datasets and domains. Another area that needs attention is the creation of strong training procedures, regularization approaches, and methodologies for dealing with problems like overfitting or underfitting. Finally, comparisons with other dynamic modeling techniques can shed light on the advantages and disadvantages of N-CDE's, leading to a more thorough comprehension of their suitability in various situations. Filling in these research voids will help N-CDE's mature and become more widely used in dynamic modeling applications.

    2 Attentive N-CDE’s

In NODE's, a multivariate time-series vector w(t1) at any time t1 can be computed from the initial vector w(t0) by w(t1) = w(t0) + ∫_{t0}^{t1} g(w(t), t; θ_g) dt, where the time points t_j ∈ [0, T]. Here, g is a neural network with parameters θ_g that models the time-dependent derivative of w. So it can be said that, once w is initialized, the fundamental evolutionary information of w lies in g. Many models involve such a w, including but not limited to heat-diffusion, climate, and epidemic models. Models that compute w directly, such as RNN's, are known as discrete, while NODE's are continuous with respect to time t. In the integral term the time variable can be chosen freely, e.g., t1 as shown above, with the help of which w can be found at any time t.

Compared with NODE's, N-CDE's involve a more elaborate initialization process, and the integral used in them is the Riemann-Stieltjes integral, i.e., w(t1) = w(t0) + ∫_{t0}^{t1} g(w(t); θ_g) dV(t). If the identity function with respect to the variable t, i.e., V(t) = t, is used, then N-CDE's reduce to NODE's. The controlling path of NODE's is "t", whereas N-CDE's are driven by a time series V(t). Therefore, N-CDE's can be regarded as a generalization of NODE's.
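As a small illustration of the difference between the two integrals, the sketch below approximates the NODE update with a Riemann sum over dt and the N-CDE update with a Riemann-Stieltjes sum over increments of a control path V(t); the scalar dynamics, the path, and the helper functions are toy assumptions chosen only to show that V(t) = t recovers the NODE case.

```python
import numpy as np

def node_integral(g, w0, ts):
    """NODE update: w(t1) = w(t0) + sum of g(w, t) * dt (Riemann sum)."""
    w = w0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        w = w + g(w, t0) * (t1 - t0)
    return w

def ncde_integral(g, w0, ts, V):
    """N-CDE update: w(t1) = w(t0) + sum of g(w) * dV(t) (Riemann-Stieltjes sum)."""
    w = w0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        w = w + g(w) * (V(t1) - V(t0))   # increments of the control path V drive the state
    return w

ts = np.linspace(0.0, 1.0, 101)
g_node = lambda w, t: -w
g_cde = lambda w: -w
V = lambda t: t                          # with V(t) = t the N-CDE collapses to the NODE
print(node_integral(g_node, 1.0, ts), ncde_integral(g_cde, 1.0, ts, V))
```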

Another important feature is that the matrix-vector multiplication g dV(t) is performed without requiring huge computational cost. Fig. 2 shows the general architecture of N-CDE's, whereas Fig. 3 shows that two types of N-CDE's are used in the proposed Attentive N-CDE's technique: one N-CDE generates the attention values and the other is used for the evolution of the hidden vector w(t). Two distinct attention techniques are used in the proposed scheme: one is known as "time-wise attention", called bottom N-CDE's, in which the attention value satisfies 0 ≤ α(t) ≤ 1, and the other is known as "element-wise attention", called top N-CDE's, whose attention value α(t) ∈ [0,1]^{dim(V(t))}. The path V and the attentions of the bottom N-CDE's are concatenated (denoted by the concatenation symbol in Fig. 3). In both cases, the second N-CDE is driven by the element-wise multiplication of V(t) and the attention, represented by Z(t); in this way, the input values of the top N-CDE's are selected by the first N-CDE's from the path V(t). A training technique is proposed that ensures the training problem is well posed.

    Figure 2: General architecture of N-CDE’s

    Figure 3: General architecture of proposed attentive N-CDE’s

    2.1 Neural ODE’s

NODE's are used to provide solutions of Initial Value Problems (IVP's) that contain an integral term for the calculation of w(t1) from w(t0):

w(t1) = w(t0) + ∫_{t0}^{t1} g(w(t), t; θ_g) dt,

where g (the ODE function, from Ordinary Differential Equations) is a neural network used for the approximation of w′(t), i.e., w′(t) = g(w(t), t; θ_g). For the solution of the integral term, NODE's commonly use ODE solvers, which mainly include the Euler method, modified Euler, Runge-Kutta methods (RK methods), and the Dormand-Prince (RKDP) method for higher-order accuracy. Fig. 4 depicts the conventional architecture of NODE's.

    Figure 4: General architecture of NODE’s

Generally, ODE solvers are utilized to discretize the time variable t and to convert the integral into a number of additional steps. For example, the explicit Euler technique for one step is written as follows:

w(t + h) = w(t) + h · g(w(t), t; θ_g),

where h represents the Euler method's step size. The RKDP technique utilizes a more advanced update of w(t + h) from w(t) and helps to control the step size h dynamically. However, ODE solvers may sometimes cause numerical instability; for example, the RKDP technique occasionally results in an underflow error because it keeps reducing the step size. Several other techniques have also been suggested for the prevention of these unexpected issues. The adjoint sensitivity approach, which is employed for its effectiveness and theoretical accuracy, is utilized to train NODE's in the context of backpropagation.
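The following sketch contrasts a fixed-step explicit Euler update with SciPy's adaptive RK45 solver (which is based on the Dormand-Prince pair) on a toy vector field; the damped-oscillator dynamics stand in for the neural field g and are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def g(t, w):
    """Toy stand-in for the neural field g(w, t; theta_g): a damped oscillator."""
    return np.array([w[1], -w[0] - 0.1 * w[1]])

w0 = np.array([1.0, 0.0])

# Fixed-step explicit Euler: w(t+h) = w(t) + h * g(w(t), t)
h, w, t = 0.01, w0.copy(), 0.0
while t < 5.0:
    w = w + h * g(t, w)
    t += h

# Adaptive Dormand-Prince (SciPy's RK45) controls the step size h automatically.
sol = solve_ivp(g, t_span=(0.0, 5.0), y0=w0, method="RK45", rtol=1e-6, atol=1e-8)
print("Euler:", w, " RK45:", sol.y[:, -1])
```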

Consider the optimization of a scalar-valued loss function L(·), whose input is derived from the result of the ODE solver:

L(w(t1)) = L( w(t0) + ∫_{t0}^{t1} g(w(t), t; θ_g) dt ).

For the optimization of L, the gradient with respect to θ_g is required. Consider the adjoint quantity a_w(t) = ∂L/∂w(t) for the loss function L; the gradient of the loss with respect to (w.r.t.) the parameters is then calculated in reverse mode through the integral

dL/dθ_g = −∫_{t1}^{t0} a_w(t)^T (∂g(w(t), t; θ_g)/∂θ_g) dt.

The gradient ∇_{w(t0)} L and the parameter gradients can then be propagated backwards to the parts of the network preceding the ODE. It is important to note that the memory complexity of the adjoint sensitivity method is O(1), while that of backpropagating directly through the solver is proportional to the number of RKDP steps. Both techniques have similar time complexities, but the adjoint sensitivity method is considerably more memory-efficient than standard backpropagation, which helps to train NODE's more efficiently.

    2.2 Neural CDEs

One drawback observed in NODE's is that, given θ_g, w(t1) is determined solely by w(t0), which raises concerns regarding the representation learning capability of NODE's. To address this limitation, N-CDE's use the given time-series data to introduce a supplementary path denoted as V(t). Consequently, w(t1) is now governed by both w(t0) and V(t).

The Initial Value Problem (IVP) for N-CDE's is expressed as follows:

w(t1) = w(t0) + ∫_{t0}^{t1} g(w(t); θ_g) dV(t),

where V(t) characterizes a natural cubic-spline path constructed from the underlying time-series data. It is noteworthy that this integral is a Riemann-Stieltjes integral, a departure from the traditional Riemann integral employed by NODE's. Furthermore, the CDE function g is introduced to approximate dw(t)/dV(t). While various methods can be employed for determining V(t), the natural cubic-spline method is preferred because of its advantageous characteristics, such as being twice differentiable, being computationally efficient, and ensuring the continuity of V(t) with respect to t after interpolation.
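A minimal sketch of the natural cubic-spline construction, using SciPy's CubicSpline with natural boundary conditions on a toy one-dimensional series; in practice each feature channel of the time series would be interpolated in the same way, and the data here are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Irregularly sampled time series (t_i, v_i); values between samples are unknown.
t = np.array([0.0, 0.4, 1.1, 1.9, 3.0])
v = np.array([0.2, 0.9, 0.3, 1.5, 1.1])

# Natural cubic spline: twice differentiable, cheap to evaluate, and V(t_i) = v_i.
V = CubicSpline(t, v, bc_type="natural")
dV = V.derivative()                 # dV(t)/dt, used when integrating g(w) dV(t)

t_dense = np.linspace(t[0], t[-1], 7)
print(np.round(V(t_dense), 3), np.round(dV(t_dense), 3))
```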

    2.3 Overall Workflow

Initially, a continuous path V(t) is constructed from the given time-series data, i.e., (v0, t0), (v1, t1), ..., using the natural cubic-spline technique. The bottom N-CDE is then initialized from V(t) to produce attention outputs at every time point t. The element-wise multiplication of the attention and V(t) is used to create the path values Z(t) in Eq. (15). Next, the top N-CDE is initialized to generate the final hidden vector. Eq. (19) contains additional classification layers which generate the outcome. Although the raw points {(v0, t0), (v1, t1), ..., (vm, tm)} are irregular and discrete, the path V(t) behaves continuously, with V(tm) = vm, where vm is the observation recorded at time tm. All other, non-observed points are interpolated by the cubic-spline technique using the nearest data points. Fig. 5 shows the overall design of the anticipated prototype.

    Figure 5: Overall design of the anticipated attentive N-CDE’s
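The sketch below strings the workflow together end to end on toy data: a natural cubic-spline path V(t), a bottom N-CDE producing a time-wise attention α(t), the gated path Z(t) = α(t)·V(t), a top N-CDE driven by dZ(t), and a classification head. All weights are random stand-ins for the learned vector fields g and q and the fully connected layers, so this only mirrors the structure of the model (cf. Eqs. (15) and (19)), not its trained behavior.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
softmax = lambda x: np.exp(x - x.max()) / np.exp(x - x.max()).sum()

# 1) Continuous path V(t) from discrete observations (v_i, t_i), one spline per feature.
t_obs, v_obs = np.linspace(0.0, 1.0, 6), rng.normal(size=(6, 4))
V = CubicSpline(t_obs, v_obs, bc_type="natural", axis=0)

# Random weights standing in for the learned fields g (bottom) and q (top),
# the attention head FC1, and the classifier.
Wg, Wq = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
Wfc1, Wcls = rng.normal(size=4), rng.normal(size=(4, 3))

g = lambda s: np.tanh(Wg * s[:, None])     # toy matrix-valued field of the bottom N-CDE
q = lambda w: np.tanh(Wq * w[:, None])     # toy matrix-valued field of the top N-CDE

ts = np.linspace(0.0, 1.0, 200)
s, w = V(0.0), V(0.0)                      # initial hidden states taken from the path
Z_prev = sigmoid(Wfc1 @ s) * V(ts[0])      # Z(t) = alpha(t) * V(t), time-wise attention
for t0, t1 in zip(ts[:-1], ts[1:]):
    s = s + g(s) @ (V(t1) - V(t0))         # 2) bottom N-CDE driven by dV(t)
    Z_now = sigmoid(Wfc1 @ s) * V(t1)      # 3) attention-gated path value Z(t)
    w = w + q(w) @ (Z_now - Z_prev)        # 4) top N-CDE driven by dZ(t)
    Z_prev = Z_now

print("class probabilities:", np.round(softmax(Wcls.T @ w), 3))   # 5) classification head
```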

    2.4 N-CDE’s Vs NODE’s


The existence of the path V(t) is the key point that distinguishes N-CDE's from NODE's. In N-CDE's, V(t) is computed by the cubic spline using upcoming data values {v_t′}, t′ > t, in combination with the current and previous observations, which is not possible in NODE's. Hence, N-CDE's are comparatively better than the NODE's technique. Also, N-CDE's reduce to NODE's when V(t) = t, which is commonly known as the identity function.

    2.4.1 Bottom N-CDE’s for Attention Values

The bottom N-CDE's is formulated as follows:

s(t1) = s(t0) + ∫_{t0}^{t1} g(s(t); θ_g) dV(t).

Here, s(t) represents the attention hidden vector at time t, from which the attention is derived. This article supports two attention concepts, i.e., attention depending on time, α(t) ∈ R, and on elements, α(t) ∈ R^{dim(V(t))}. In the first type, the output size of a fully connected layer FC1 is 1, so α(t) = σ(FC1(s(t))) is a scalar value. In the second type the roles are reversed: the output size of FC1 equals dim(V(t)), so α(t) = σ(FC1(s(t))) is a vector. Our observation indicates equivalence between the bottom N-CDE's and the original N-CDE setting.
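A short TensorFlow sketch of the two attention heads: a Dense layer with output size 1 for time-wise attention and one with output size dim(V(t)) for element-wise attention. The batch size, dimensions, and random tensors are illustrative assumptions.

```python
import tensorflow as tf

dim_s, dim_v = 65, 4                         # hidden size and path dimension (illustrative)
s_t = tf.random.normal([8, dim_s])           # batch of attention hidden vectors s(t)

# Time-wise attention: FC1 with output size 1 -> one scalar alpha(t) per example.
fc1_time = tf.keras.layers.Dense(1, activation="sigmoid")
alpha_time = fc1_time(s_t)                   # shape (8, 1), 0 <= alpha(t) <= 1

# Element-wise attention: output size dim(V(t)) -> a vector alpha(t) in [0,1]^dim(V(t)).
fc1_elem = tf.keras.layers.Dense(dim_v, activation="sigmoid")
alpha_elem = fc1_elem(s_t)                   # shape (8, dim_v)

v_t = tf.random.normal([8, dim_v])           # path values V(t)
z_time = alpha_time * v_t                    # Z(t) for time-wise attention (scalar broadcast)
z_elem = alpha_elem * v_t                    # Z(t) for element-wise attention
print(z_time.shape, z_elem.shape)
```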

These attention types are associated with different output sizes and activation functions. In this study, three variations of the sigmoid activation function σ are considered. The first two are soft attention and hard attention, both utilizing the original sigmoid; hard attention is additionally finished with a rounding function. The third variation is hard attention with sigmoid slope annealing, referred to as the straight-through estimator. Soft attention, which simply uses the original sigmoid, needs no further elaboration. The forward- and backward-path definitions for hard attention are given as follows.

For the forward path,

α(t) = round(σ(y(t))), with y(t) = FC1(s(t)),

and for the backward path the rounding is bypassed, so the gradient of the sigmoid is used:

∂α(t)/∂y(t) ≈ ∂σ(y(t))/∂y(t).

In the straight-through estimator, for the forward path,

α(t) = round(σ(T · y(t))),

and for the backward path,

∂α(t)/∂y(t) ≈ ∂σ(T · y(t))/∂y(t).

Notably, the temperature parameter T controls the slope of the sigmoid function, where T ≥ 1.0 is a scalar. For a significantly large T, the slope of the sigmoid approaches that of the rounding function. Hence, after initializing T to 1 at the start, it is uniformly increased by 0.12 every epoch. In soft attention, the attention distribution is combined with the features of the localized portion; hard attention relies on stochastic tools such as the Monte Carlo method and reinforcement learning, making it less popular; while the straight-through estimator (STE) integrates aspects of both soft and hard attention. Table 1 shows all six attention models retained by applying the three variations of σ (soft, hard and STE) to each of the two attention types.
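The following TensorFlow sketch implements hard attention with the straight-through estimator and slope annealing: the forward value is round(σ(T·y)) while gradients flow through σ(T·y) via tf.stop_gradient. The logits and the single annealing value are illustrative.

```python
import tensorflow as tf

def hard_attention_ste(logits, temperature):
    """Hard attention via the straight-through estimator with slope annealing.

    Forward pass: round(sigmoid(T * logits)) gives values in {0, 1}.
    Backward pass: gradients flow through sigmoid(T * logits) only.
    """
    soft = tf.sigmoid(temperature * logits)
    hard = tf.round(soft)
    # stop_gradient keeps the forward value hard while preserving the soft gradient.
    return soft + tf.stop_gradient(hard - soft)

logits = tf.Variable([[-2.0, 0.3, 1.5]])
temperature = 1.0                      # annealed: start at 1.0, add 0.12 every epoch
with tf.GradientTape() as tape:
    alpha = hard_attention_ste(logits, temperature)   # forward values are exactly 0 or 1
    loss = tf.reduce_sum(alpha)
print(alpha.numpy(), tape.gradient(loss, logits).numpy())
```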

    Table 1: Different combinations of attentions

Table 2: Architecture of function g in bottom N-CDE's. FCL denotes a fully connected layer, ReLU the Rectified Linear Unit activation, and Tanh the hyperbolic tangent activation

    Table 3: Architecture of function q in top N-CDE’s

    2.4.2 Top N-CDE’s for Classification

The top N-CDE's is expressed as follows:

w(t1) = w(t0) + ∫_{t0}^{t1} q(w(t); θ_q) dZ(t), with Z(t) = α(t) ⊗ V(t).

Here, Z(t) is the element-wise multiplication between the attention and V(t), and ⊗ represents the element-wise multiplication operation. Because the information selected by the bottom N-CDE's is stored in Z(t), the top N-CDE's is free to concern itself only with useful information, and consequently downstream Machine Learning (ML) tasks can be improved. In other words, Z(t) contains the information chosen by the bottom N-CDE's, so the top N-CDE's can focus exclusively on valuable data, leading to an enhancement in the performance of subsequent ML tasks.

Further derivation of the above equation into a tractable form gives, for time-wise attention,

dZ(t)/dt = (dα(t)/dt) V(t) + α(t) (dV(t)/dt),

and for element-wise attention,

dZ(t)/dt = (dα(t)/dt) ⊗ V(t) + α(t) ⊗ (dV(t)/dt).

We highlight that our derivations primarily assume soft attention but remain applicable to hard attention and the straight-through estimator. These attention mechanisms facilitate the selection of relevant values by the top N-CDE's, enhancing the execution of downstream ML tasks. The values generated by hard attention lie in the set {0, 1}, while soft attention produces values in the interval [0, 1]. The architectures of the function g in the bottom N-CDE's and q in the top N-CDE's are given in Tables 2 and 3, respectively.

Hence, it is noted that the hard-attention range is still valid. For example, if the attention is calculated time-wise with hard attention, the value of α(t) is either 0 or 1. Consequently, dZ(t)/dt = 0 or dZ(t)/dt = dV(t)/dt, which corresponds exactly to the proposed attention motivation, namely that the inputs of the top N-CDE's are chosen by the bottom N-CDE's. The straight-through estimator is taken as a hard-attention variant with temperature annealing, which also produces values that are either 0 or 1, so the equations apply to all three defined attention types.
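A tiny numeric check of this behavior on an assumed scalar path V(t) = sin(2πt) with a single hard-attention switch: away from the switch point, the finite-difference derivative of Z(t) is either zero or equal to dV(t)/dt.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
V = np.sin(2 * np.pi * t)                      # toy scalar path V(t)
dV = np.gradient(V, t)

alpha_hard = (t > 0.5).astype(float)           # hard attention: 0 before t=0.5, 1 after
Z = alpha_hard * V
dZ = np.gradient(Z, t)

# Away from the single switch point, dZ/dt is either 0 or exactly dV/dt.
off, on = t < 0.49, t > 0.51
print(np.allclose(dZ[off], 0.0, atol=1e-6), np.allclose(dZ[on], dV[on], atol=1e-2))
```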

    2.5 Training Algorithm and Well-Posedness

The backpropagation adjoint technique is used to train N-CDE's, whose required memory is O(G + K), where G = t1 − t0 is referred to as the integration time span and K denotes the size of the N-CDE's vector field. Thus, the adjoint method is used in the training of Algorithm 1, from Lines 4 to 6. However, the proposed algorithm requires two N-CDE's, which results in an increase in memory, i.e., O(2G + Kg + Kq), where Kg and Kq represent the vector field sizes of the bottom and top N-CDE's, respectively.

On a fixed path, the well-posedness of N-CDE's is already established under the mild condition of Lipschitz continuity, with a constant of 1 for all common activations including Softsign, ArcTan, Sigmoid, Tanh, SoftPlus, Leaky ReLU and ReLU. Other commonly used CNN layers, i.e., pooling, batch normalization and dropout, also have explicit Lipschitz constants. Thus, the continuity of g and q is achieved in the proposed model, as the attention values for the bottom N-CDE's are produced by keeping θ_q fixed (Line 5 of Algorithm 1) and the values for the top N-CDE's are produced by keeping θ_others fixed (Line 6 of Algorithm 1). During the experimental process, the classification task is solved by adopting the ordinary cross-entropy loss with the hidden vector w(t) and a classification output layer as:

Γ_o = φ(FCL(w(t1))).

Here, Γ_o is the predicted output label and φ denotes the softmax activation. The output size of the FCL is kept equal to the total number of classes in each dataset, and the standard cross-entropy loss function is adopted.
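Below is a minimal TensorFlow sketch of this alternating update pattern: a plain dense layer stands in for the network that produces w(t1), a fully connected layer of size equal to the number of classes produces the logits (softmax is folded into the cross-entropy loss), and each step updates only one parameter group while the other stays fixed. Layer choices, sizes, and the Adam optimizer are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf

num_classes, hidden, feat = 20, 65, 32
backbone = tf.keras.layers.Dense(hidden, activation="tanh")   # stand-in producing w(t1)
fcl = tf.keras.layers.Dense(num_classes)                      # output size = number of classes
backbone.build((None, feat)); fcl.build((None, hidden))       # create the weights

theta_q = fcl.trainable_variables                             # classification-head parameters
theta_others = backbone.trainable_variables                   # remaining parameters
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt_q = tf.keras.optimizers.Adam(0.5e-4)                      # one optimizer per parameter group
opt_others = tf.keras.optimizers.Adam(0.5e-4)

def train_step(x, y, variables, optimizer):
    """One alternating step: update only `variables`, keeping the other group fixed."""
    with tf.GradientTape() as tape:
        logits = fcl(backbone(x))      # Gamma_o = softmax(FCL(w(t1))); softmax folded into the loss
        loss = loss_fn(y, logits)
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

x = tf.random.normal([16, feat])
y = tf.random.uniform([16], maxval=num_classes, dtype=tf.int32)
print(float(train_step(x, y, theta_q, opt_q)), float(train_step(x, y, theta_others, opt_others)))
```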

    3 Experiments

    3.1 Datasets and Performance Measures

The proposed model is assessed on two classification tasks: fine-grained visual classification and multi-label classification. A total of five (5) publicly available datasets are utilized during the experiments: CUB-200-2011 (D1), ImageNet-1K (D2), PASCAL VOC 2007 (D3), PASCAL VOC 2012 (D4) and MS COCO (D5). D3, D4 and D5 are used for multi-label classification, whereas D1 and D2 are used for fine-grained visual classification. The D1 dataset contains 5994 training and 5794 testing images of bird species. The D2 dataset has 1.3 million training images and 50,000 testing images across 1000 classes. D3 contains 5011 training and 4952 testing images across 20 classes, whereas D4 has 11,540 training images, 10,991 testing images and a total of 20 classes. D5 contains 123,000 images and 80 classes, of which 82,783 are training images and 40,504 are testing images.

MaxBoxAccV2 is utilized to evaluate the model on D1 and D2. For D3 and D4, the widely used mean Average Precision (mAP) is reported, along with results for each of the 20 classes. Conventional performance measures such as Precision (P), Recall (R), mAP, Average Precision (AvP), Average Recall (AvR), Class-wise Average F1 score (AvF1C) and Overall Average F1 score (AvF1O) are used to evaluate the proposed model on D5.
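As a concrete illustration of these measures, the sketch below computes mAP (the mean of per-class average precision) and class-wise/overall F1 (macro/micro averaging) on randomly generated multi-label data with scikit-learn; the data, the 0.5 threshold, and the use of scikit-learn are assumptions for demonstration only.

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_fscore_support

rng = np.random.default_rng(0)
num_classes, n = 20, 200
y_true = rng.integers(0, 2, size=(n, num_classes))                          # multi-label ground truth
y_score = np.clip(y_true * 0.6 + rng.random((n, num_classes)) * 0.5, 0, 1)  # toy confidence scores
y_pred = (y_score >= 0.5).astype(int)

# mAP: average precision per class, then the mean over the 20 classes.
ap_per_class = [average_precision_score(y_true[:, c], y_score[:, c]) for c in range(num_classes)]
mAP = float(np.mean(ap_per_class))

# Class-wise (macro) and overall (micro) precision / recall / F1.
p_c, r_c, f1_c, _ = precision_recall_fscore_support(y_true, y_pred, average="macro", zero_division=0)
p_o, r_o, f1_o, _ = precision_recall_fscore_support(y_true, y_pred, average="micro", zero_division=0)
print(f"mAP={mAP:.3f}  AvF1C={f1_c:.3f}  AvF1O={f1_o:.3f}")
```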

    3.2 Implementation Details

All experiments are performed on Windows 11 with Python 3.12.0, CUDA 12.2, TensorFlow 2.14.0, Matplotlib 3.8, SciPy 1.11.3, NumPy 1.20.3, an i7 CPU and an NVIDIA RTX TITAN with Nvidia GeForce graphics driver 537.58. All experiments are repeated 3 times and the reported results are the mean accuracies. For testing the proposed model on all selected datasets, a total of 240 epochs with a batch size of 16 are executed, with the learning rate, hidden sizes selected from {50, 60, 65, 70, 75}, and N-CDE depths of {6, 7, 8, 9} layers tuned as hyperparameters. The best results are achieved by adopting a learning rate of 0.5 × 10^-4, a hidden size of 65, 7 layers in the bottom N-CDE's and 8 layers in the top N-CDE's. When evaluating attention-based models for intra- and inter-class similarities, it is necessary to evaluate the following metrics: F1 score, precision, recall, confusion matrices, and classification accuracy. Furthermore, examining attention maps, ROC curves, AUC, and feature embeddings offers insight into the interpretability and performance of the model. While domain-specific metrics take care of application-specific requirements, cross-validation guarantees generality. When these measures are combined with visualization tools, it becomes possible to gain thorough insight into how well the model handles both intra- and inter-class differences. Standard criteria such as classification accuracy, precision, recall, and F1 score are often used as evaluation metrics to examine how well models like N-CDE's and NODE's represent similarities and differences. These metrics evaluate the models' capacity to accurately categorize instances, distinguish across classes, and manage variations within a class. Furthermore, depending on the particulars of the work, researchers might use more specialized measures, including feature embedding analyses or domain-specific metrics. In order to achieve resilience, the evaluation procedure frequently takes into account the models' performance across different subsets of the data and makes use of cross-validation techniques.

    3.3 Comparisons with State-of-the-Art

    3.3.1 Results on CUB-200-2011 and ImageNet-1K Dataset

The proposed model is compared with 7 fine-grained image classification methods, including iCAM decomposition [21], CREAM [2], WSOL [23], BagCAMs [38], ViTOL [39], iMCL [40] and C2AM [24]. This comparison is presented in Table 4. BagCAMs is a plug-and-play technique developed for the localization task based on the regional localizer generation (RLG) technique, which involves defining a collection of regional localizers and subsequently deriving them from a well-trained classifier; the BagCAMs method was reported to achieve SOTA performance on three WSOL benchmarks [38]. Object localization was performed by employing vision transformers for self-attention (ViTOL), and a patch-based attention dropout layer (p-ADL) was included to enhance the coverage of the localization map; the results showed that on the ImageNet-1K and CUB datasets the MaxBoxAccV2 localization scores were 70.4% and 73.17%, respectively [39]. Enhancements were introduced to SimCLR by proposing iMCL, where improvements were made to the MoCo framework, accompanied by certain adjustments using an MLP projection head and additional data augmentation techniques; the authors established stronger baselines that outperformed SimCLR without requiring large training batches [40]. The proposed model exhibited superior performance compared to iMCL by a margin of 2.3% and outperformed BagCAMs by a margin of 7.3% on the D1 dataset. The performance of the proposed model on the D2 dataset surpasses that of ViTOL and BagCAMs by 5.3% and 5.8%, respectively.

    Table 4: Comparison of proposed methodology with state-of-the-art on CUB-200-2011 and ImageNet-1K

    3.3.2 Results on PASCAL VOC 2007 Dataset

For this dataset, the proposed model is compared with 12 models in terms of mAP, as shown in Table 5. A simple technique for multi-label classification was designed on the concept of simultaneously recognizing both labels and label correlations, utilizing a ConvNet and a common latent vector space, respectively; the results demonstrated exceptional performance on the MS COCO and PASCAL VOC benchmark datasets [41]. Deep Semantic Dictionary Learning (DSDL) was developed, in which an auto-encoder creates the semantic dictionary that is then utilized by a CNN with label embeddings, and an Alternately Parameters Update Strategy (APUS) was applied during training to optimize DSDL; experimental results showed promising performance on three benchmarks [42]. The proposed model attained a mAP of 97%, surpassing its nearest competitor by a margin of 3%.

    Table 5: Comparison of proposed methodology with state-of-the-art on PASCAL VOC 2007 (D3)dataset

    3.3.3 Results on PASCAL VOC 2012 Dataset

The proposed model is compared with 6 recent techniques for this dataset in terms of mAP, as shown in Table 6. A deep CNN framework referred to as Hypotheses-CNN-Pooling (HCP) performed classification based on hypothesis extraction, where each hypothesis is fed to a shared CNN and the resulting CNN outputs from different hypotheses are combined using max pooling; the results demonstrated the superiority of HCP, with mAP up to 90.5% [43]. Multi-label image identification employed an object-proposal-free framework, namely random crop pooling (RCP), which stochastically scales and crops images before delivering them to a CNN; this technique worked well for recognizing the complex contents of multi-label images on two datasets, i.e., PASCAL VOC 2012 and PASCAL VOC 2007 [44]. The performance of the proposed model on the D4 dataset surpasses its nearest competitor by a margin of 1%.

    Table 6: Comparison of proposed methodology with state-of-the-art on PASCAL VOC 2012 (D4)dataset

    3.3.4 Results on MS COCO Dataset

For this dataset, the proposed model is compared with 12 models, as shown in Table 7. A multi-label classification model was built on a graph convolutional network (GCN), where directed graphs were constructed to describe the relationships between object labels, with each label represented by word embeddings; the GCN was trained to transform this label graph into interdependent object classifiers and showed better performance on two datasets [45]. The Efficient Channel Attention (ECA) module achieved improved performance while utilizing a minimal number of parameters, with a reported performance boost of more than 2% in terms of Top-1 accuracy [46]. The proposed model performed better than these previous models.

    Table 7: Comparison of proposed methodology with state-of-the-art on MS COCO(D5)dataset

    3.4 Visualization of Attention Maps

To visually demonstrate the efficacy of the proposed model in an intuitive and qualitative manner, attention maps are depicted in Fig. 6. The proposed model generates attention maps that are rendered with different colors: dark red indicates the highest level of activation, while dark blue represents the lowest intensity. It is evident that the attention maps for each class effectively identify the object instances belonging to that class, regardless of the number of objects present in the photos, such as people, aircraft, and animals. Using the final image in the fourth row as a case study, the proposed model effectively demonstrates its ability to accurately identify the position of the penguin, even when the object in question is of diminutive size.

The resulting attention maps, which provide a thorough visual examination of the model's decision-making process, are produced by a model that makes use of Neural Controlled Differential Equations (N-CDE's). These maps shed light on the crucial areas that support the model's predictions by illustrating where the model focuses its attention within an input. Through close examination of these attention maps, one may identify the locations of relevant regions in the input data, which offers insight into the characteristics that draw the attention of the model. For tasks such as image classification, where certain regions or patterns are suggestive of different classes, this thorough attention analysis is especially helpful. Furthermore, attention maps aid in the recognition of discriminative characteristics, exposing the components that are essential in differentiating between various groups or classifications. This understanding is further strengthened by the contextual character of N-CDE's, which demonstrates how the model considers broader contextual information when making decisions. In brief, attention maps produced by N-CDE's are an effective instrument for transparent and comprehensible model analysis; they aid a better understanding of the inner workings of the model and enhance its reliability and performance. Attention-based neural networks using N-CDE's show potential for NLP, video and image analysis, and medical applications. They promote contextual awareness for better recognition in image analysis, and they capture subtle linguistic links in NLP for more precise predictions. In healthcare, N-CDE attention models support medical image analysis, providing the interpretability that is essential for reliable diagnosis. All things considered, N-CDE's strengthen the robustness and dependability of models in a variety of applications.

    Figure 6: Visualization of attention maps using proposed methodology

    4 Conclusion

Differential equations have been extensively employed in the context of attention-based classification tasks. Numerous concepts and variants have been presented since the inception of NODE's, all of which have been constructed upon their fundamental principles. The utilization of NODE's in CNNs has been infrequent, whereas the incorporation of N-CDE's has been exceedingly rare. This article presents a methodology for generating attention maps using an attentive neural network that utilizes N-CDE's. The proposed approach involves the use of two distinct types of N-CDE's: one for incorporating hidden layers and another for generating attention values. The bottom N-CDE's are employed to capture attention values, while the top N-CDE's are utilized for the classification task. The proposed approach is evaluated on five publicly available datasets, namely CUB-200-2011, ImageNet-1K, PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. As all selected datasets contain different types of images, the results indicate that the proposed model generalizes well. In the future, N-CDE's can be employed for tasks that necessitate supervised segmentation, particularly in the domains of semantic segmentation and instance segmentation.

Acknowledgement: The author would like to express sincere gratitude to the Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Saudi Arabia, for their invaluable support and guidance.

Funding Statement: This research work was funded by Institutional Fund Projects under Grant No. (IFPIP: 638-830-1443). The authors gratefully acknowledge technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Author Contributions: The authors confirm contribution to the paper as follows: study conception, design, data collection, analysis, interpretation of results and draft manuscript preparation: Anas W. Abulfaraj. The author reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data that support the findings of this study are available from the first and corresponding author upon reasonable request.

Conflicts of Interest: The author declares that they have no conflicts of interest to report regarding the present study.
