
    Intelligent Deep Learning Based Automated Fish Detection Model for UWSN

    2022-03-14 09:26:50 — Mesfer Al Duhayyim, Haya Mesfer Alshahrani, Fahd Al-Wesabi, Mohammed Alamgeer
    Computers, Materials & Continua, 2022, Issue 3

    Mesfer Al Duhayyim, Haya Mesfer Alshahrani, Fahd N. Al-Wesabi, Mohammed Alamgeer, Anwer Mustafa Hilal5,* and Manar Ahmed Hamza5

    1Department of Natural and Applied Sciences, College of Community-Aflaj, Prince Sattam bin Abdulaziz University, Saudi Arabia

    2Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Saudi Arabia

    3Department of Computer Science, King Khalid University, Muhayel Aseer, Saudi Arabia & Faculty of Computer and IT, Sana'a University, Sana'a, Yemen

    4Department of Information Systems, King Khalid University, Muhayel Aseer, Saudi Arabia

    5Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia

    Abstract: An exponential growth in advanced technologies has resulted in the exploration of ocean spaces. It has paved the way for new opportunities that can address questions relevant to the diversity, uniqueness, and difficulty of marine life. Underwater Wireless Sensor Networks (UWSNs) are widely used to leverage such opportunities; these networks include a set of vehicles and sensors that monitor environmental conditions. In this scenario, it is fascinating to design an automated fish detection technique with the help of underwater videos and computer vision techniques so as to estimate and monitor fish biomass in water bodies. Several models have been developed earlier for fish detection. However, they lack robustness to accommodate considerable differences in scenes owing to poor luminosity, fish orientation, structure of the seabed, aquatic plant movement in the background, and the distinctive shapes and textures of fishes from different genera. With this motivation, the current research article introduces an Intelligent Deep Learning based Automated Fish Detection model for UWSN, named the IDLAFD-UWSN model. The presented IDLAFD-UWSN model aims at automatic detection of fishes from underwater videos, particularly in blurred and crowded environments. The IDLAFD-UWSN model makes use of Mask Region Convolutional Neural Network (Mask RCNN) with Capsule Network as a baseline model for fish detection. Besides, in order to train the Mask RCNN, a background subtraction process using a Gaussian Mixture Model (GMM) is applied. This model makes use of the motion details of fishes in the video and integrates the outcome with the actual image for the generation of fish-dependent candidate regions. Finally, the Wavelet Kernel Extreme Learning Machine (WKELM) model is utilized as a classifier. The performance of the proposed IDLAFD-UWSN model was tested against a benchmark underwater video dataset, and the experimental results achieved by the IDLAFD-UWSN model were promising in comparison with other state-of-the-art methods under different aspects, with maximum accuracies of 98% and 97% on the applied blurred and crowded datasets respectively.

    Keywords: Aquaculture; background subtraction; deep learning; fish detection; marine surveillance; underwater sensor networks

    1 Introduction

    Water covers 75% of the earth’s surface in the form of different water bodies such as canals, oceans, rivers, and seas. Most valuable resources are present in these water bodies and should be investigated further. Technological advancements made in recent years have made it feasible to perform underwater exploration with the help of sensors at every level. Consequently, the Underwater Sensor Network (UWSN) is one such advanced technique that enables underwater exploration. Being a network of independent sensor nodes [1,2], a UWSN combines wireless techniques with minuscule micromechanical sensors that are equipped with smart computation, smart sensing, and communication capabilities. The sensor nodes in a UWSN are spatially distributed under water to capture information on water-related features such as pressure, quality, and temperature. The sensed data is then processed by different applications for human benefit.

    Underwater transmission is mostly performed by a group of nodes that transfers information to buoyant gateway nodes. These gateway nodes in turn transmit the information to nearby coastal monitor-and-control stations, otherwise known as remote stations [3]. In general, UWSNs employ acoustic transmitters, since acoustic waves can travel longer distances and can carry data across many kilometers. UWSNs are used in a broad range of applications: marine atmosphere observation for commercial research purposes; coastline security; underwater pollution observation for water-based disaster prevention; and support for water-sport personnel. UWSNs yield significant results for challenging applications [4]. Though UWSN applications are stimulating, they are demanding as well. A UWSN must endure the uncertain conditions of the water environment, which impose severe limitations on the deployment and design of these networks.

    In recent years, underwater target detection and tracking have become an attractive research field [5]. Tracking is a complex procedure that aims at determining the state (such as acceleration, position, and velocity) of one or more quickly-moving targets, close to their actual state, by utilizing measurements gathered from several sensors. This information is crucial in a war environment for two main reasons: first, to protect oneself from attackers; second, to destroy the adversary. To a certain extent, the accuracy of the collected data can decide the failure or success of a war. A substantial number of studies have examined the challenges faced in target tracking in terrestrial environments. In these studies, the system depends upon different kinds of sensors that can be applied for detecting and tracking the target.

    In the literature [6], acoustic sensors are used in detecting and tracking the target by determining whether the power of the received acoustic signal exceeds a predetermined threshold. Subsequently, vibration is utilized to distinguish targets of distinct weight and speed. The method in [7] utilizes seismic and passive infrared sensor features for the identification and classification of animals, vehicles, and humans. Magnetometers are utilized in the detection of metallic targets, as they achieve better accuracy. A target tracking method combining Radio Frequency Identification (RFID) and Wireless Sensor Networks (WSN) was developed in [8,9]. Correspondingly, the researchers in [10] proposed a person tracking technique based on a luminosity sensor; however, the target must be equipped with a light source, which is impossible in most cases. In contrast to the above-mentioned sensors, the study conducted in [11] utilized sensor-provided video images for target detection and tracking.

    The current research article designs an Intelligent Deep Learning (DL)-based Automated Fish Detection model for UWSN, named the IDLAFD-UWSN model. In the background subtraction phase of the presented model, a Gaussian Mixture Model (GMM) is utilized. Besides, the presented IDLAFD-UWSN model makes use of Mask Region Convolutional Neural Network (Mask RCNN) with Capsule Network as a baseline model for fish detection. At last, the Wavelet Kernel Extreme Learning Machine (WKELM) model is utilized as a classifier. The proposed IDLAFD-UWSN model was validated using a benchmark underwater video dataset, and the simulation outcomes were inspected under distinct dimensions.

    The remaining sections of the paper are organized as follows. Section 2 explains the processes involved in automated fish detection and tracking. Then, Section 3 reviews the existing fish detection methods, whereas the proposed IDLAFD-UWSN model is discussed in Section 4. The experimental validation process is detailed in Section 5, while the conclusion is drawn in Section 6.

    2 Background Information:Automated Fish Detection and Tracking

    In order to ensure effective marine monitoring, it is mandatory to estimate fish biomass and abundance through population sampling in water bodies such as rivers, oceans, and lakes, and to monitor how the behavior of distinct fish species changes under altering environmental conditions. This task gains significance particularly in those regions where specific fish species are on the verge of extinction or are threatened due to industrial pollution, habitat loss and alteration, commercial overfishing, deforestation, and climate change [12]. The manual process of capturing videos under water is expensive, labor-intensive, prone to fatigue error, and time-consuming. One of the major problems experienced in automated recognition of fish is the high variation in the underwater environment due to background confusion, water clarity, dynamic lighting conditions, etc.

    Generally, automated fish sampling is conducted through three main processes: (1) fish recognition, which distinguishes fish from non-fish objects (aquatic plants, coral reefs, sessile invertebrates, seagrass beds, and common background) in underwater videos; (2) classification of fish species, in which the species of every identified fish is recognized from a predefined pool of distinct species [13]; and (3) fish biomass measurement, which is performed by length-to-biomass regression techniques. Several techniques are in use to perform fish recognition and subsequently determine biomass by utilizing image and video processing techniques. Though DL-based fish species classifiers have attained high accuracy, vision-based automated fish recognition in unrestricted underwater videos is yet to be widely studied, because most earlier efforts relied on small datasets with restricted environmental variation. Thus, it is important to assess the strength and efficiency of a system using a large dataset that possesses a high number of environmental variations.

    3 Existing Automated Fish Detection Methods

    The current section reviews state-of-the-art automated fish detection techniques. Hsiao et al. [14] proposed a method that utilizes motion-based fish recognition in video. This technique encompasses background subtraction as well, modeling the background pixels in video frames with a GMM; the GMM is trained on only those frames of the video that lack fish samples. An equivalent method was presented based on covariance modeling of the foreground (fish samples) and background in video frames using texture and color features of the fish. DL methods have been utilized recently to resolve fish-related tasks. Sung et al. [15] presented fish detection in underwater images with the help of a CNN; the study considered a total of 93 images containing fish samples. The method was trained on raw fish images to exploit texture and color data for detection and localization of the fish samples in the image. In this method, a modified R-CNN was used for locating and detecting the fish samples in the image with a combined network architecture.

    Qin et al. [16] presented a new architecture based on a modest cascaded deep network to recognize the movements of live fish. Siddiqui et al. [17] presented a pre-trained CNN with linear SVM classification for the classification of fish species present in typical underwater video images. The researchers proposed a specific cross-layer pooling method that integrates the features from two distinct layers of a pre-trained CNN to improve discriminative capacity. The combined features were fed to a linear SVM for final classification. The cross-layer pooling pipeline, however, increased the computational load, which precluded real-time computation. With the involvement of another species, the study achieved a classification accuracy of 89.0%; the classification accuracy for 16 fish species was 94.3%. This value compares favorably with the outcomes of existing methods on fish species recognition tasks. The investigation recommended the use of a pre-trained network for the classification process with no external classifier. Kutlu et al. [18] employed a DBN for classification of three classes of the Triglidae family with a high accuracy rate. Morphometric features were initially extracted from 13 landmarks; the DBN was then utilized for the classification process. In spite of achieving high classification accuracy, the presented technique had a drawback: it demands the extraction of advanced morphometric features. In order to enhance the efficiency of this process, various studies have been conducted earlier.

    Sun et al. [19] employed a single-image super-resolution technique to create higher-resolution images from low-resolution images; a linear SVM was finally utilized for fish recognition. An unsupervised underwater fish detection method was presented by Zhang et al. [20]. This study utilized motion flow segmentation and selective search models to create combined proposal regions. A CNN was then utilized to classify every proposed instance and calculate the confidence. Additionally, Modified Non-Maximum Suppression (MNMS) was applied to find the unique regions per object and reduce false classifications in detection. The results showed that the proposed method helped detect fish from poor-quality underwater images with high accuracy. In addition, several classes of fishes have been studied in the areas of biology, medicine, biomedical research, genomics, and food technology. Among these, the Zebrafish (Danio rerio) is a significant vertebrate that suits biomedical investigations, thanks to its early-stage transparency, rapid growth, and short generation time. Ishaq et al. [21] utilized a pre-trained CNN method for precise, high-throughput classification of whole-body zebrafish deformations that occur as a result of drug-induced neuronal harm, e.g., from camptothecin. The research indicated that the DL method is effective in distinguishing different wild-type morphologies and phenotypes under drug treatment. Salman et al. [22] developed an integrated framework with an RCNN model, background subtraction, and optical flow to detect moving fishes in an open underwater environment.

    4 The Proposed Model

    The overall system architecture of the presented IDLAFD-UWSN model is shown in Fig. 1. According to the figure, the proposed IDLAFD-UWSN model involves three major processes, namely background subtraction, fish detection, and fish classification. At first, the GMM-based background subtraction technique is executed by modeling the still pixels of the video frames; these denote a set of pixel values relevant to a range of seabed features, aquatic plants, and coral reefs. The foreground object is segmented from the backdrop based on movement in the scene that does not match the background. Secondly, the Mask RCNN with CapsNet model is used to differentiate every candidate region in the video frames into fish and non-fish objects. Lastly, the WKELM model is applied to classify the objects in the underwater video into fish and non-fish classes.
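The three-stage flow above can be sketched as a minimal pipeline. This is an illustrative skeleton only: `detect_candidates` and `classify` are hypothetical stand-ins (not the paper's Mask RCNN or WKELM), and only the frame-differencing stage does real work.

```python
import numpy as np

def background_subtract(frame, background, threshold=30.0):
    """Stage 1 stand-in: flag pixels that deviate from the background model."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return diff.max(axis=-1) > threshold

def detect_candidates(mask):
    """Stage 2 stub: return one bounding box (x1, y1, x2, y2) around the mask."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return []
    return [(xs.min(), ys.min(), xs.max(), ys.max())]

def classify(region):
    """Stage 3 stub: label every candidate region as 'fish'."""
    return "fish"

def run_pipeline(frame, background):
    """Compose the three stages: subtraction -> detection -> classification."""
    mask = background_subtract(frame, background)
    return [(box, classify(box)) for box in detect_candidates(mask)]
```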

    4.1 Dataset Used

    The presented model was tested using the Fish4Knowledge with Complex Scenes (FCS) database. It is mainly derived from a huge fish dataset known as Fish4Knowledge. With more than 700,000 underwater videos captured in unrestricted conditions, the Fish4Knowledge database is the result of about five years of data collection intended to monitor the marine ecosystem of coral reefs in Taiwan [23], a well-known area of high fish biodiversity with no fewer than 3,000 fish species. The database encompasses seven sets of selected videos, captured in standard underwater conditions with complex variability in scenes. The resulting ecological differences pose significant challenges to fish identification, as listed herewith.

    • Blurred, including three poor-contrast blurred videos.

    • Complex background, including three videos with a rich seabed providing a maximum degree of backdrop confusion.

    • Crowded, in which a set of three videos is present with maximum density of fish movement in all video frames. This poses particular challenges to detecting fishes in the presence of occluding objects.

    • Dynamic background, where two videos are given with richly textured coral reef backdrops and moving plants.

    • Luminosity variation, including two videos with abrupt luminosity variations caused by surface wave action. This generates false positives during the identification process, owing to the movement of light beams.

    • Camouflage foreground, where two videos are selected which show the camouflage issue of fish detection against textured and colorful backdrops.

    • Hybrid, where a pair of videos is chosen to demonstrate combinations of the previously-defined conditions of variability.

    This database was primarily developed for fish-related tasks such as detection and classification. So, ground truth images exist for every moving fish on a frame-by-frame basis in every video. A set of 1,328 fish annotations is provided in the FCS database, as illustrated in Fig. 2.

    Figure 2: Sample test images from FCS database

    4.2 GMM-Based Background Subtraction

    GMM is one of the most common methods used for modeling the foreground and background states of a pixel. It has the capacity to approximate any density function, provided it possesses a sufficient number of mixture components. Here, $I_t$ represents the video frame at time $t$ and $p$ the pixel under consideration at coordinates $(i,j)$; $I_t(p)$ denotes its RGB value in frame $I_t$. The values of this specific pixel over time are then collected as:

    $\{X_1, \ldots, X_T\} = \{I_t(p) : 1 \le t \le T\}$

    where $T$ denotes the number of frames. The GMM associated with pixel $p$ in RGB color space at frame $t$ consists of $K$ weighted Gaussian components:

    $P(X_t) = \sum_{k=1}^{K} \omega_{k,t}\, \eta(X_t; \mu_{k,t}, \Sigma_{k,t})$

    where $\omega_{k,t}$, $\mu_{k,t}$, and $\Sigma_{k,t}$ are the weight, mean, and covariance of the $k$-th component. To simplify the estimation, the covariance matrix is always taken as diagonal:

    $\Sigma_{k,t} = \sigma_{k,t}^2\, I$

    where $I$ represents the 3×3 identity matrix. Thus, the R, G, B pixel channels are considered independent with equal variance. Though this might not be accurate, the assumption avoids a costly matrix inversion at little loss of precision.
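The mixture density above can be written out directly. The following sketch (function names are ours, not from the paper) evaluates the isotropic Gaussian $\eta(x; \mu, \sigma^2 I)$ and the mixture $P(X_t)$ for a single RGB pixel:

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Isotropic 3-D Gaussian density eta(x; mu, sigma^2 I)."""
    d = x - mu
    norm = (2.0 * np.pi * sigma**2) ** (-3.0 / 2.0)
    return norm * np.exp(-0.5 * np.dot(d, d) / sigma**2)

def mixture_density(x, weights, mus, sigmas):
    """P(X_t) = sum_k w_k * eta(x; mu_k, sigma_k^2 I)."""
    return sum(w * gaussian_density(x, mu, s)
               for w, mu, s in zip(weights, mus, sigmas))
```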

    4.2.1 GMM Initialization

    This is an optional phase in which the model employs the Expectation-Maximization (EM) technique on a portion of the video; alternatively, it can initialize an individual mode for each pixel (with weight 1) that begins from the pixel's level in the initial frame.

    4.2.2 Mode Labeling

    Every Gaussian mode is categorized as background or foreground. This crucial assignment follows a basic rule: the more precise and the more frequent a mode is, the more likely it models background colors [24]. Particularly, the $K$ modes are sorted in decreasing order of their priority $\omega_{k,t}/\sigma_{k,t}$, and the first $K_B$ modes are then considered as background. The value of $K_B$ is defined by a threshold $T_b \in [0,1]$:

    $K_B = \arg\min_b \left( \sum_{k=1}^{b} \omega_{k,t} > T_b \right)$
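Assuming the priority is the usual $\omega/\sigma$ ratio, selecting the $K_B$ background modes can be sketched as follows (`background_mode_count` is our illustrative helper, not from the paper):

```python
import numpy as np

def background_mode_count(weights, sigmas, t_b=0.7):
    """Sort modes by descending priority w/sigma; return K_B, the number of
    top-priority modes whose cumulative weight first exceeds T_b, plus the
    priority ordering itself."""
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(-(weights / np.asarray(sigmas, dtype=float)))
    cum = np.cumsum(weights[order])
    k_b = int(np.searchsorted(cum, t_b) + 1)  # smallest b with cum weight > T_b
    return k_b, order
```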

    4.2.3 Pixel Labeling

    This step classifies the pixels. In all the techniques, a pixel is allocated to the class of the nearest mode center, subject to the constraint

    $\| X_t - \mu_{k,t} \| < k_p\, \sigma_{k,t}$

    where $k_p$ represents a constant coefficient which must be adjusted for every video. When no mode fulfills this constraint, the lowest-priority mode is substituted by a new Gaussian centered on the current intensity, with a large initial variance and a low prior weight.
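The matching rule can be sketched directly; `match_mode` is a hypothetical helper that returns the first mode within $k_p$ standard deviations of the pixel value, or -1 when none qualifies:

```python
import numpy as np

def match_mode(x, mus, sigmas, k_p=2.5):
    """Return the index of the first mode whose center lies within
    k_p standard deviations of pixel value x, or -1 when none match."""
    x = np.asarray(x, dtype=float)
    for k, (mu, sigma) in enumerate(zip(mus, sigmas)):
        if np.linalg.norm(x - np.asarray(mu, dtype=float)) < k_p * sigma:
            return k
    return -1
```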

    4.2.4 Updating GMM

    The update rule is given herewith. When a mode $i$ is successfully matched, the GMM variables are updated to reinforce this mode:

    $\omega_{k,t} = (1-\alpha)\,\omega_{k,t-1} + \alpha\, M_{k,t}$

    $\mu_{i,t} = (1-\rho)\,\mu_{i,t-1} + \rho\, X_t$

    $\sigma_{i,t}^2 = (1-\rho)\,\sigma_{i,t-1}^2 + \rho\,(X_t - \mu_{i,t})^\top (X_t - \mu_{i,t})$

    where $\alpha$ is the learning rate, $M_{k,t}$ equals 1 for the matched mode and 0 for the others, and $\rho = \alpha\,\eta(X_t; \mu_{i}, \sigma_{i})$. Otherwise, the lowest-priority distribution is substituted by a new Gaussian mode.
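Assuming the standard Stauffer-Grimson style update (with the common simplification $\rho \approx \alpha$), reinforcing a matched mode can be sketched as:

```python
import numpy as np

def update_matched_mode(x, weights, mus, sigmas, i, alpha=0.05):
    """Reinforce mode i after it matched pixel value x.
    weights: array of mode weights; mus: list of mean vectors;
    sigmas: array of per-mode standard deviations."""
    x = np.asarray(x, dtype=float)
    m = np.zeros(len(weights))
    m[i] = 1.0                                   # M_{k,t}: 1 for the matched mode
    weights = (1 - alpha) * np.asarray(weights) + alpha * m
    rho = alpha                                  # simplification: rho ~ alpha
    mus[i] = (1 - rho) * np.asarray(mus[i]) + rho * x
    d = x - mus[i]
    sigmas[i] = np.sqrt((1 - rho) * sigmas[i] ** 2 + rho * np.dot(d, d))
    return weights, mus, sigmas
```

Note the weights stay normalized: since the $M_{k,t}$ vector sums to 1, the convex update preserves a total weight of 1.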

    4.3 Mask RCNN Based Fish Detection

    The Mask R-CNN model is popular in several object detection tasks. It includes three components, namely CNN-based feature extraction, a Region Proposal Network (RPN), and a parallel prediction network. At first, the CNN model is applied for feature extraction from the input images. Secondly, the RPN makes use of anchors at various scales and aspect ratios that slide over the feature maps so as to generate region proposals. Thirdly, the parallel prediction network comprises three branches: two FC layers are involved in bounding box classification and regression, while an FCN is involved in predicting the object masks. Principally, the baseline network is a major Deep Neural Network (DNN) model such as CapsNet, GoogLeNet, or ResNet. In this study, Mask RCNN with the CapsNet model is used, where CapsNet serves as the backbone network for feature extraction. This effectively reduces gradient vanishing and shortens training with no increase in model parameters.

    The CapsNet method is one of the latest advances in this research domain. The key element of CapsNet is the capsule, which comprises a set of organized neurons. The length of a capsule's output vector encodes whether a feature is present (invariance), while the orientation of the vector denotes its instantiation parameters (equivariance), i.e., the feature information preserved in the image that allows its reconstruction.

    Where a standard NN requires extra layers to increase accuracy and detail, in CapsNet an individual layer can nest with other layers. The capsules efficiently denote distinct kinds of visual data, known as instantiation variables; examples include size, orientation, and position. Fig. 3 depicts the process involved in the CapsNet model. The output of a capsule is a vector that is transmitted to the layer above to match its suitable parent [25]. If the output of capsule $i$ is $u_i$, a transformation matrix $W_{ij}$ is applied to this output to predict the parent capsule $j$, converting $u_i$ into the prediction vector $\hat{u}_{j|i} = W_{ij}\, u_i$.

    Figure 3: CapsNet process

    An activation function named ‘squashing’ shrinks the output vector towards 0 when it is small, and towards a unit vector when it is large, thereby generating the capsule length. The activity vector $v_j$ is estimated by the following nonlinear squashing function:

    $v_j = \dfrac{\|s_j\|^2}{1 + \|s_j\|^2}\, \dfrac{s_j}{\|s_j\|}, \qquad s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$

    $c_{ij}$ is calculated as the softmax of $b_{ij}$. The coupling coefficient is determined by the degree of conformity between a capsule and its parent capsules:

    $c_{ij} = \dfrac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$

    where $b_{ij}$ represents an agreement score, updated as $b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i} \cdot v_j$, which captures likeness of characteristics rather than likeness between individual neurons.
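The squashing and coupling computations can be sketched numerically from the formulas above (a small illustrative sketch, not the paper's implementation):

```python
import numpy as np

def squash(s):
    """Nonlinear squashing: keeps the direction of s, maps its length into [0, 1)."""
    norm2 = np.dot(s, s)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def coupling(b_i):
    """c_ij = softmax over parent capsules j of the routing logits b_ij."""
    e = np.exp(b_i - b_i.max())
    return e / e.sum()
```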

    The primary network extracts low-level features such as edges, whereas the upper network extracts top-level features that denote the target class. In order to use the features at every stage effectively, the Mask RCNN model extends the baseline network to a Feature Pyramid Network (FPN). This network exploits both the intrinsic layers and the multi-scale characteristics of a CNN to derive meaningful features for object detection. The aim of the RPN lies in predicting a set of region proposals in an effective way [26]. During RPN training, the anchors with maximum Intersection over Union (IoU) overlap with the ground truth boxes are used as positive classes, while anchors with IoU < 0.3 are considered negative classes. Here, IoU is determined as follows:

    $\mathrm{IoU} = \dfrac{\mathrm{area}(\mathrm{DetectionResult} \cap \mathrm{GroundTruth})}{\mathrm{area}(\mathrm{DetectionResult} \cup \mathrm{GroundTruth})}$

    Here, DetectionResult designates the predicted box and GroundTruth specifies the ground truth box. The RPN fine-tunes the region proposals based on the obtained regression details and discards region proposals that overlap the image boundaries. Finally, based on Non-Maximum Suppression (NMS), around 2,000 proposal regions are kept for every image.
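For axis-aligned boxes in (x1, y1, x2, y2) form, the IoU definition reduces to a few lines:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0
```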

    The region proposals produced by the RPN require RoIAlign to adjust their dimensions to satisfy the multi-branch prediction network. RoIAlign utilizes bilinear interpolation rather than the rounding function of RoIPool in Faster R-CNN, so as to extract the respective features of all region proposals from the feature map. When training the model, the loss function of Mask RCNN over all proposals is given by

    $L = L_{cls} + L_{box} + L_{mask}$

    where $L_{cls}$, $L_{box}$, and $L_{mask}$ denote the classification, regression, and segmentation losses. A definite computation of the classification and regression losses is represented herewith:

    $L(\{p_i\}, \{t_i\}) = \dfrac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda\, \dfrac{1}{N_{reg}} \sum_i p_i^*\, L_{reg}(t_i, t_i^*)$

    where $i$ specifies the anchor index, $p_i$ signifies the predicted probability of anchor $i$, $t_i$ denotes the four coordinate variables of the box, and $t_i^*$ stands for the coordinate variables of the ground truth box with respect to a positive anchor. When the anchor is positive, $p_i^*$ becomes 1; else, $p_i^*$ becomes 0. The technique can be optimized through minimization of this loss function.
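Assuming binary cross-entropy for $L_{cls}$ and smooth-L1 for $L_{reg}$ (the usual choices for an RPN; the paper does not spell them out), the two-term loss can be sketched as:

```python
import numpy as np

def smooth_l1(x):
    """Smooth-L1: quadratic near zero, linear beyond |x| = 1."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x**2, x - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=1.0):
    """p: predicted anchor probabilities; p_star: 0/1 anchor labels;
    t, t_star: (N, 4) predicted and ground-truth box coordinates.
    The regression term applies only to positive anchors (p_star == 1)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    l_cls = -(p_star * np.log(p) + (1 - p_star) * np.log(1 - p)).mean()
    l_reg = (p_star[:, None] * smooth_l1(t - t_star)).sum() / max(p_star.sum(), 1)
    return l_cls + lam * l_reg
```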

    4.4 WKELM Based Classification

    At this stage, the WKELM model is applied to categorize the objects into fish and non-fish entities. The WKELM model combines the benefits of distinct kernel functions by integrating wavelet analysis with the kernel extreme learning machine. The weighted ELM method is designed to manage instances whose class distributions are unbalanced, and the technique performs excellently in this setting. Besides, the weighted WKELM technique introduces the weights into the cost function so as to obtain the same result as weighted ELM [27]. The KELM method derives from the ELM technique, and the weighted cost function is written as follows:

    $\min_{\beta}\; \dfrac{1}{2}\|\beta\|^2 + \dfrac{C}{2} \sum_{i=1}^{N} W_{ii}\, \|\xi_i\|^2 \quad \text{subject to} \quad h(x_i)\,\beta = t_i - \xi_i$

    In the KELM method, the output is written as follows:

    $f(x) = \begin{bmatrix} K(x, x_1) & \cdots & K(x, x_N) \end{bmatrix} \left( \dfrac{I}{C} + W K \right)^{-1} W\, T$

    where $K$ refers to the kernel matrix, $W$ implies the weighting matrix, and $C$ denotes the regularization parameter.
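A minimal kernel-ELM sketch follows the closed-form output above. Note the assumptions: an RBF kernel stands in for the paper's wavelet kernel, and all helper names are ours:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between row-vector sets A and B (wavelet-kernel stand-in)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, sample_weights=None):
    """Solve (I/C + W K) alpha = W T for the dual coefficients alpha."""
    n = len(X)
    W = np.diag(sample_weights if sample_weights is not None else np.ones(n))
    K = rbf_kernel(X, X)
    return np.linalg.solve(np.eye(n) / C + W @ K, W @ T)

def kelm_predict(X_train, alpha, X_new):
    """f(x) = [K(x, x_1) ... K(x, x_N)] @ alpha."""
    return rbf_kernel(X_new, X_train) @ alpha
```

Classification (fish vs. non-fish) then follows from the argmax over one-hot targets T; unbalanced classes can be handled via `sample_weights`.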

    5 Performance Validation

    The experimental validation of the presented IDLAFD-UWSN model was performed with two testbeds from the FCS dataset, namely Blurred and Crowded. The testbeds comprise a set of 5,756 frames with a duration of 3.83 minutes. Fig. 4 showcases the visualization images of the IDLAFD-UWSN model.

    Tab. 1 shows the results of the accuracy analysis of the proposed IDLAFD-UWSN model on the blurred video. From the table, it is evident that the presented IDLAFD-UWSN model detects multiple targets effectively. For instance, on the test frame 134, the IDLAFD-UWSN model detected targ_1, targ_2, and targ_3 with accuracies of 0.96, 0.99, and 0.98 respectively. In addition, on the test frame 160, the presented IDLAFD-UWSN model detected the targets targ_1, targ_2, and targ_3, and its accuracy values were 0.99, 0.99, and 0.99 correspondingly. Moreover, on the test frame 173, the IDLAFD-UWSN model detected targ_1 and targ_2 with accuracies of 0.98 and 0.99 respectively. Also, on the test frame 193, the IDLAFD-UWSN model detected targ_1, targ_2, and targ_3 with accuracy values of 0.99, 0.99, and 0.99 respectively. Additionally, on the test frame 203, the IDLAFD-UWSN model detected targ_1, targ_2, and targ_3 with accuracy values of 0.98, 0.99, and 0.99 correspondingly.

    Figure 4: Visualization Images of IDLAFD-UWSN Model

    Besides, on the test frame 565, the proposed IDLAFD-UWSN model achieved 0.99, 0.99, and 0.99 accuracy for the targets targ_1, targ_2, and targ_3 respectively. In addition to the above, on the test frame 1009, the IDLAFD-UWSN model found the targets targ_1, targ_2, and targ_3, while the accuracy values were 0.99, 0.99, and 0.99 respectively.

    Table 1: Accuracy of the proposed IDLAFD-UWSN method on target per frame in blurred video

    Tab. 2 shows the results of the accuracy analysis attained by the IDLAFD-UWSN model on the crowded video testbed. From the table, it is evident that the presented IDLAFD-UWSN model detected multiple targets effectively. For instance, on the test frame 019, the IDLAFD-UWSN model detected the targets targ_1 through targ_8 with accuracies of 0.98, 0.98, 0.98, 0.98, 0.99, 0.98, 0.99, and 0.98 correspondingly. In the meantime, on the test frame 036, the IDLAFD-UWSN model detected the targets targ_1, targ_2, targ_3, targ_4, and targ_5, while its accuracy values were 0.98, 0.87, 0.99, 0.96, and 0.99 correspondingly. At the same time, on the test frame 160, the IDLAFD-UWSN model detected the targets targ_1, targ_2, targ_3, targ_4, and targ_5 with accuracies of 0.96, 0.96, 0.99, 0.93, and 0.99 respectively.

    Meanwhile, on the test frame 221, the proposed IDLAFD-UWSN model detected targ_1, targ_2, targ_3, and targ_4, while its accuracy values were 0.99, 0.95, 0.99, and 0.99 respectively. Afterwards, on the test frame 435, the IDLAFD-UWSN model achieved accuracies of 0.99, 0.78, and 0.97 for the targets targ_1, targ_2, and targ_3 correspondingly. Followed by this, on the test frame 1217, the IDLAFD-UWSN model detected the targets targ_1 through targ_6, while its accuracy values were 0.99, 0.93, 0.96, 0.99, 0.99, and 0.99 correspondingly. Simultaneously, on the test frame 1506, the IDLAFD-UWSN model detected the targets targ_1, targ_2, targ_3, targ_4, and targ_5 with accuracies of 0.97, 0.99, 0.99, 0.99, and 0.99 respectively.

    Tab. 3 shows an extensive comparison of the proposed IDLAFD-UWSN model against recent state-of-the-art techniques.

    Table 2: Accuracy of target per frame in crowded video

    Fig. 5 shows the results of the accuracy analysis accomplished by the IDLAFD-UWSN model and other existing methods on the blurred and crowded testbeds. When analyzing the detection performance of the IDLAFD-UWSN model in terms of accuracy on the blurred video testbed, it is inferred that the SCEA and ML-BKG models achieved ineffectual outcomes, since their accuracy values were 71% and 72.94% correspondingly. Next, the EIGEN technique attained slightly enhanced results with an accuracy of 82.89%, whereas the FLDA, VIBE, and Hybrid system models demonstrated moderately closer accuracy values of 86%, 86.35%, and 86.76% respectively. Simultaneously, the FLDA-TM model exhibited a manageable performance with an accuracy of 88%. Though the KDE and TKDE models showcased competitive results with accuracy values of 91.73% and 93.78%, the presented IDLAFD-UWSN model accomplished the maximum accuracy of 98%. Similarly, when analyzing the detection performance of the IDLAFD-UWSN model with respect to accuracy on the crowded video testbed, it is inferred that the SCEA and EIGEN models achieved ineffectual outcomes, since their accuracy values were 70% and 75.82% correspondingly. Next, the FLDA approach attained somewhat improved outcomes with an accuracy of 80%, while the ML-BKG, Hybrid system, and KDE techniques exhibited moderately closer accuracy values of 80.13%, 84.27%, and 84.83% respectively. Concurrently, the VIBE model exhibited a manageable performance with an accuracy of 85.37%. Though the TKDE and FLDA-TM models showcased competitive results with accuracy values of 85.90% and 89%, the presented IDLAFD-UWSN model achieved the maximum accuracy of 97%.

    Table 3: Comparative analysis of the proposed IDLAFD-UWSN method against existing methods with respect to accuracy and F-score on the applied dataset

    Figure 5: Accuracy analysis of IDLAFD-UWSN model against existing techniques

    Fig. 6 examines the F-score analysis results achieved by the IDLAFD-UWSN technique and existing models on the blurred and crowded testbeds. When investigating the detection performance of the IDLAFD-UWSN model with respect to F-score on the blurred video, it is understood that the ML-BKG and SCEA models achieved ineffectual outcomes with F-score values of 70.26% and 72.65% respectively. Then, the EIGEN model attained slightly enhanced results with an F-score of 81.71%, whereas the VIBE, FLDA, and Hybrid system models demonstrated moderately closer F-score values of 85.13%, 85.78%, and 86.76% correspondingly. Similarly, the FLDA-TM model exhibited a manageable performance with an F-score of 87.32%. Though the KDE and TKDE models showcased competitive results with F-score values of 92.56% and 93.25%, the presented IDLAFD-UWSN model produced the maximum F-score of 98%.

    Finally, on the crowded video testbed, the SCEA and EIGEN models achieved ineffectual outcomes, with F-score values of 69.63% and 73.87% respectively. Afterward, the ML-BKG model attained somewhat better results with an F-score of 79.81%, whereas the FLDA, KDE, and TKDE approaches demonstrated moderately close F-score values of 80.12%, 82.46%, and 84.19% respectively. At the same time, the Hybrid system model exhibited a manageable performance with an F-score of 84.27%. The VIBE and FLDA-TM models showed competitive outcomes, with F-score values of 84.64% and 88.76%. The proposed IDLAFD-UWSN model outperformed all the existing models and produced the highest F-score of 97%.
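    The F-score reported above is the harmonic mean of precision and recall on the positive (fish) class. A minimal sketch with hypothetical labels, again only to make the metric concrete:

```python
def f_score(y_true, y_pred, positive=1):
    """F1-score: harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-frame labels: 1 = fish, 0 = non-fish
y_true = [1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
print(round(f_score(y_true, y_pred), 4))  # 0.8
```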

    Figure 6: F-Score analysis of IDLAFD-UWSN model against existing techniques

    From the above-discussed tables and figures, it is evident that the presented IDLAFD-UWSN model accomplished promising results under both blurred and crowded environments. The improved performance is due to the inclusion of GMM-based background subtraction, Mask RCNN with CapsNet-based fish detection, and WKELM-based fish classification. Therefore, it can be employed as an effective fish detection tool in marine environments.

    6 Conclusion

    The current research article presented a novel IDLAFD-UWSN model for automated fish detection and classification in underwater environments. The presented IDLAFD-UWSN model aims at the automatic detection of fishes from underwater videos, particularly in blurred and crowded environments. The model operates in three stages, namely GMM-based background subtraction, Mask RCNN with CapsNet-based fish detection, and WKELM-based fish classification. The Mask RCNN with CapsNet model distinguishes the candidate regions in a video frame as fish or non-fish objects. Lastly, fish and non-fish objects are classified with the help of the WKELM model. An extensive experimental analysis was conducted on a benchmark dataset, and the results achieved by the IDLAFD-UWSN model were promising, with maximum accuracies of 98% and 97% on the applied blurred and crowded datasets respectively. As a future extension, the presented IDLAFD-UWSN model can be implemented in real-time UWSN to automatically monitor the behavior of fishes and other aquatic creatures.
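    The first stage of the pipeline, background subtraction, can be illustrated with a deliberately simplified per-pixel model: a single running Gaussian per pixel rather than the full Gaussian mixture the paper employs. The class name and parameters (`alpha`, `k`) below are hypothetical choices for this sketch, not values from the paper:

```python
import numpy as np

class RunningGaussianBackground:
    """Simplified background model: ONE Gaussian per pixel (the paper's GMM
    keeps a mixture), updated with an exponential running average.
    A pixel is foreground when it deviates by more than k standard
    deviations from the background mean."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 25.0)  # initial variance guess
        self.alpha = alpha  # learning rate (hypothetical value)
        self.k = k          # deviation threshold in std-devs (hypothetical)

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update mean/variance only where the pixel matches the background,
        # so a passing object does not get absorbed into the model.
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * diff[bg] ** 2
        return foreground.astype(np.uint8)

# Hypothetical 8x8 grayscale frames: static background, then a bright blob
bg = np.full((8, 8), 100, dtype=np.uint8)
model = RunningGaussianBackground(bg)
frame = bg.copy()
frame[2:4, 2:4] = 200  # a "fish" enters the scene
mask = model.apply(frame)
print(mask.sum())  # 4 foreground pixels
```

    In practice, a full GMM subtractor such as OpenCV's `cv2.createBackgroundSubtractorMOG2` would replace this sketch; its output mask would then seed the candidate regions passed to the Mask RCNN stage.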

    Funding Statement: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 1/53/42), www.kku.edu.sa. This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
