
    Automatic Detection of Nephrops Norvegicus Burrows from Underwater Imagery Using Deep Learning

    Computers Materials & Continua, 2022, Issue 3 (published 2022-03-14)

    Atif Naseer, Enrique Nava Baro, Sultan Daud Khan, Yolanda Vila and Jennifer Doyle

    1 ETSI Telecomunicación, Universidad de Málaga, Málaga, 29071, Spain

    2 Department of Computer Science, National University of Technology, Islamabad, 44000, Pakistan

    3 Instituto Español de Oceanografía, Centro Oceanográfico de Cádiz, Cádiz, 39004, Spain

    4 Marine Institute, Rinville, Oranmore, Ireland

    Abstract: The Norway lobster, Nephrops norvegicus, is one of the main commercial crustacean fisheries in Europe. The abundance of Nephrops norvegicus stocks is assessed by identifying and counting the burrows where they live in underwater videos collected by camera systems mounted on sledges. The Spanish Oceanographic Institute (IEO) and the Marine Institute Ireland (MI Ireland) conduct annual underwater television (UWTV) surveys to estimate the total abundance of Nephrops within the specified area, with a coefficient of variation (CV), or relative standard error, of less than 20%. Currently, the identification and counting of Nephrops burrows are carried out manually by marine experts, which is a very time-consuming job. As a solution, we propose an automated system based on deep neural networks that automatically detects and counts the Nephrops burrows in video footage with high precision. The proposed system introduces a deep-learning-based automated way to identify and classify the Nephrops burrows. This work uses the current state-of-the-art Faster RCNN models with Inceptionv2 and MobileNetv2 backbones for object detection and classification. We conduct experiments on two data sets, namely the Smalls Nephrops survey (FU 22) and the Cadiz Nephrops survey (FU 30), collected by the Marine Institute Ireland and the Spanish Oceanographic Institute, respectively. From the results, we observe that the Inception model achieved higher precision and recall rates than the MobileNet model. The best mean Average Precision (mAP) recorded by the Inception model is 81.61%, compared to MobileNet, which achieves a best mAP of 75.12%.

    Keywords: Faster RCNN; computer vision; Nephrops norvegicus; Nephrops norvegicus stock assessment; underwater video classification

    1 Introduction

    The oceans are the main component of the Earth’s ecosystem, producing about 50% of the oxygen and containing 97% of the water. They are also a significant source of our daily food, providing around 15% of dietary protein in the form of marine animals. There are many more studies of terrestrial ecosystems than of marine ecosystems, because studying the marine ecosystem is more challenging, especially in the deeper areas. Monitoring the habitats of marine species is a difficult task for biologists and marine experts. Environmental features such as depth-based colour variations and the turbidity or movement of species make it a challenge [1]. Several years ago, marine scientists used satellites, shipborne sensors, and camera sensors to collect images of underwater species. In recent years, with the advancement of technology, scientists use underwater Remotely Operated Vehicles (ROVs), Autonomous Underwater Vehicles (AUVs), and sledge and drop-frame structures equipped with high-definition cameras to record videos and images of marine species. These vehicles can capture high-definition images and videos. Despite all this quality equipment, the underwater environment is still a challenge for scientists and marine biologists. The two main factors that make it difficult are the unconstrained natural environment and the variations of the visual content, which may arise from variable illumination, scales, views, and non-rigid deformations [2].

    The Norway lobster, Nephrops norvegicus, is one of the main commercial crustacean fisheries in Europe, where in 2018 the total allowable catch (TAC) was set at 32,705 tons for International Council for the Exploration of the Sea (ICES) areas 7, 8 and 9 [3]. Fig. 1 shows an individual Nephrops specimen. This species can be found in sandy-muddy sediments from 90 m to 800 m depth in the Atlantic NE waters and the Mediterranean Sea [4], where the sediment is suitable for constructing their burrows. Nephrops spend most of the time inside the burrows, and their emergence behaviour is influenced by several factors: time of year, light intensity, and tidal strength. These burrows can be detected through an optimal lighting set-up during video recordings of the seabed. The burrows themselves can be easily identified from surface features once specialist training has been taken [5].

    Figure 1: Nephrops norvegicus

    A Nephrops burrow system typically can have a single opening or multiple openings to different tunnels. A unique individual is assumed to occupy a burrow system [6]. Burrows show signature features that are specific to Nephrops, as shown in Fig. 2. They can be summarized as follows:

    1. At least one burrow opening has a characteristic half-moon shape.

    2. There is often evidence of expelled sediment, typically in a wide delta-like ‘fan’ at the tunnel opening, and scratches and tracks are frequently evident.

    3. The centre of all the burrow openings has a raised structure.

    4. Nephrops may be present (either in or out of the burrow).

    Figure 2: Nephrops burrows signature features

    A Nephrops burrow system is composed of one or more burrows with the above-mentioned characteristics. The presence of more than one burrow nearby does not necessarily mean the presence of more than one Nephrops.

    ICES is an international organization of 20 member countries, working on the marine sciences with more than 6000 scientists from 7000 different marine institutes of the member countries [7]. The Working Group on Nephrops Surveys (WGNEPS) is the expert group that specializes in Nephrops norvegicus underwater television and trawl surveys within ICES [6-8].

    UWTV surveys to monitor the abundance of Nephrops populations were pioneered in Scotland in the early 1990s. The estimation of Norway lobster populations using this method involves identifying and quantifying burrow density over the known area of Nephrops distribution, which can be used as an abundance index of the stock [5,9]. Nephrops abundance from UWTV surveys is the basis of assessment and advice for managing these stocks [9].

    Nephrops populations are assessed and managed by Functional Units (FU), with a specific survey for each FU. In 2019, a total of 19 surveys were conducted, covering the 25 FUs in ICES and one geographical subarea (GSA) in the Adriatic Sea [10], as shown in Fig. 3. These surveys were conducted using standardized equipment and agreed protocols under the remit of WGNEPS.

    This study considers data from the Gulf of Cadiz (FU 30) and the Smalls (FU 22) Nephrops grounds to detect the Nephrops burrows, using image data collected from different stations in each FU with our proposed methodology.

    The underwater environment is hard to analyse, as it presents formidable challenges for computer vision and machine learning technologies. Image classification and detection in underwater images are quite different compared to other visual data. Data collection in an underwater environment is also a major challenge. One reason for this is light: light is attenuated as it passes through the water and little of it reaches the seabed, which gives the images or videos a blurred appearance. Scattering and non-uniform lighting make the environment even more challenging for data collection.

    Figure 3: Nephrops UWTV survey coverage in 2019 (FU: Functional Unit, GSA: Geographical Sub-Area, DLS: data-limited stock). The Gulf of Cadiz Nephrops ground (FU 30) and the Smalls Nephrops ground (FU 22) are indicated [10]

    Poor visibility is a common problem in the underwater environment. It is caused by the ebb and flow of tides, which suspend fine mud particles in the water column. Ocean currents are another factor that causes frequent luminosity changes. Visual features such as lighting conditions, colour changes, turbidity, and low pixel resolution make the analysis challenging. Environmental features such as depth-based colour variations and the turbidity or movement of species make data collection very difficult in an underwater environment [1]. Thus, the two main factors that make it difficult are the unconstrained natural environment and the variations of the visual content, which may arise from variable illumination, scales, views, and non-rigid deformations [2].

    Nephrops data are collected, and UWTV surveys are reviewed manually by trained experts. Much of the data is difficult to process due to complex environmental conditions. Burrow systems are quantified following the protocol established by ICES [6,8]. The image data (video or stills) for each station are reviewed independently by at least two experts, and the counts are recorded for each minute onto log sheet records. Each row of the log sheet records the minute, the burrow system count, and the time stamp. Count data are screened to check for any unusual discrepancies using Lin’s Concordance Correlation Coefficient (CCC) with a threshold of 0.5. Lin’s CCC [11] measures the ability of counters to precisely reproduce each other’s counts, where 1 is perfect concordance and values above the 0.5 threshold are accepted. Only stations with a CCC lower than 0.5 are reviewed again by the experts. Fig. 4 shows the current methodology used for counting Nephrops burrows.
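    As an illustration of this screening step, the sketch below computes Lin's CCC for two counters' per-minute burrow counts. It is a minimal example written for this description, not part of the survey software; the function name and inputs are our own.

```python
import numpy as np

def lins_ccc(counts_a, counts_b):
    """Lin's concordance correlation coefficient between two counters'
    per-minute burrow counts (1.0 = perfect concordance)."""
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    mean_a, mean_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()            # population variances
    cov_ab = np.mean((a - mean_a) * (b - mean_b))
    return 2 * cov_ab / (var_a + var_b + (mean_a - mean_b) ** 2)

# Stations whose agreement falls below the threshold are re-reviewed:
# needs_review = lins_ccc(counter1_minutes, counter2_minutes) < 0.5
```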

    Figure 4: Current methodology for Nephrops norvegicus burrow counting

    With the massive amount of video and image data collected, manually annotating and analysing it is a laborious job and requires a lot of review and processing time. Due to limited human capabilities, the manual review of image data requires a lot of time from trained experts before the data are quality controlled and ready for use in stock assessment. Due to these factors, only a limited amount of the collected data is used for analysis, which usually does not provide deep insights into the problem [12].

    Many scientists employ artificial-intelligence-based tools to analyse marine species, enabled by the advancement of artificial intelligence and computer vision technology. Deep convolutional neural networks have shown tremendous success in the tasks of object detection [13,14], classification [15,16], and segmentation [17,18]. These networks are data-driven and require a huge amount of labelled data for training. To automatically detect and classify the Nephrops burrow systems, we propose a deep learning network that takes underwater video data as input. The network learns hierarchical features from the input data and detects the burrows in each input video frame. The cumulative sum of detections over all video frames gives the final count of Nephrops burrows. The datasets from FU 30 and FU 22 were collected using different image acquisition systems (an Ultra HD 4K video camera and an HD stills camera, see Section 2.1) from different Nephrops populations. The image data are annotated using the Microsoft VoTT image annotation tool [19].

    This research aims to apply deep learning models to detect, classify, and count the Nephrops burrows automatically. We develop a deep learning model that uses the current state-of-the-art Faster RCNN [20] detector with Inceptionv2 [21] and MobileNetv2 [22] backbones for object detection. We build two data sets from the FU 30 and FU 22 stations. We achieve a mAP higher than 80%, which is a positive indication that the current paradigm of manual counting of Nephrops burrows can be changed. This paper makes a significant advancement for all the groups working on Nephrops norvegicus counting for stock assessment, as it shows that Nephrops burrows can be detected and counted automatically with high accuracy. The rest of the paper is organized as follows. The data description is discussed in Section 2. The methodology is explained thoroughly in Section 3. The details of the experiments and results are discussed in Sections 4 and 5. Finally, the paper is concluded in Section 6.

    2 Nephrops Study Area Description

    2.1 Equipment Used for Nephrops Data Collection

    At FU 22, a sledge mounted with an HD video and stills CathX camera and a lighting system, set at 75° to the seabed with a field of view of 75 cm confirmed using laser pointers, was used [23]. High-definition still images were collected at a frame rate of 12 frames per second with a resolution of 2048 x 1152 pixels for a duration of 10-12 min. The image data were stored locally in a SQL server and then analysed using different applications. Fig. 5a shows the sledge used for data collection at FU 22.

    Figure 5: Sledge used in the Ireland UWTV survey (a) and sledge used in the Gulf of Cadiz UWTV survey: lateral view (b) and ventral view (c)

    Similarly, at FU 30, a sledge was used to collect the data during the survey. Figs. 5b and 5c show the sledge used for data collection at FU 30. The camera is mounted on top of the sledge at an angle of 45° to the seafloor. Videos were recorded using a 4K Ultra High Definition (UHD) camera (SONY Handycam FDR-AX33) with a ZEISS Vario-Sonnar 29.8 mm lens and 10x optical zoom. The sledge is equipped with the high-definition video camera and two reduced-size CPUs with 700 MHz, 512 MB RAM, and 16 GB storage. To record video with good lighting conditions, four spotlights with independent intensity control are used. The equipment also has two line lasers separated by 75 cm, used to confirm the field of view (FOV), and a Li-ion battery of 3.7 V and 2400 mAh (480 W) to power the whole system. Segments of 10-12 minutes of video were recorded at 25 frames per second, with a resolution of 3840 x 2160 pixels. The data were stored on hard disks and reviewed later manually by experts.

    2.2 Nephrops Study Area

    This paper obtained the data from the Smalls (FU 22) and Gulf of Cadiz (FU 30) UWTV surveys to conduct the experiments to detect Nephrops burrows automatically. Fig. 6a shows the map of MI-Ireland with the stations surveyed in 2018 to estimate the burrows. A station is a geostatistical location in the ocean where the Nephrops survey is conducted yearly. Fig. 6b shows the Gulf of Cadiz map with the stations surveyed in 2018 and the Nephrops burrow density obtained using the manual count.

    Figure 6: (a) Study area of Nephrops FU 22 at MI-Ireland in 2018 [23] (b) Study area of Nephrops FU 30 in the Gulf of Cadiz 2018 survey showing the observed Nephrops burrow density and the estimated geostatistical Nephrops abundance [24]

    3 Methodology

    The current methodology used to count Nephrops is explained in the introduction section and summarized in Fig. 4. In this work, we replace the old paradigm of counting Nephrops and propose an automated framework that detects and counts the number of Nephrops burrows with high speed and accuracy. Fig. 7 shows the high-level diagram of the proposed methodology. Video files are converted to frames using OpenCV, and the images are then manually annotated using the VoTT image annotation tool. The annotated data are verified by the marine experts before being used for training the deep neural network. Fig. 8 shows the detailed steps of the research methodology used in this work.

    Figure 7: Block diagram of proposed methodology

    Figure 8: Architecture of proposed methodology

    3.1 Data Preparation

    3.1.1 Data Collection

    Tab.1 shows the techniques and equipment used in collecting the data from FU 22 and FU 30 stations.

    Table 1: Equipment details used at FU 22 and FU 30 stations

    At FU 22, a total of 42 UWTV stations were surveyed in 2018. Out of the 42, seven stations were used for data preparation. The 10-12 min videos were recorded at different frame rates (15 fps, 12 fps, and 10 fps) at Ultra HD. High-definition still images were also captured with the camera, at a resolution of 2048 x 1152 pixels. Out of thousands of recorded images, a total of 1133 high-definition images from FU 22 were manually annotated. Figs. 9a and 9b show high-definition still images from the 2018 UWTV survey HD camera. In the top image, a burrow system composed of three holes in the sediment can be seen, whereas in the bottom image a single Nephrops individual is seen outside the burrows. Illumination is better near the centre of the field of view and decreases towards the borders of the images. The camera angle is 75 degrees, with a ranging laser (red dots) visible on the screen. A Nephrops burrow system may be composed of more than one entrance, and in this paper our focus is to detect the individual Nephrops burrow entrances.

    Figure 9: (a) High definition still images from 2018 UWTV survey for FU22 station (b) High definition still images from 2018 UWTV survey for FU30 station

    At FU 30, the videos were recorded at 25 frames per second in good lighting conditions. Video footage of 10-12 min was recorded at every station at FU 30. A total of 70 UWTV stations were surveyed in 2018. Out of the 70 surveyed stations, 10 were rejected due to poor visibility and lighting conditions. We selected seven stations for our experimentation, which have good lighting conditions, low noise and few artifacts, higher contrast, and a high density of Nephrops burrows. The data from these seven stations are considered for annotation. Each video is around 15,000-18,000 frames, so a total of around 105,000 frames were recorded from the seven stations of the 2018 data survey. Figs. 9c and 9d show the high-definition images from the 2018 UWTV survey of FU 30. FU 30 images show better illumination (in terms of contrast and homogeneity) than FU 22. Pink lines on the images correspond to the red laser lighting marking the 75 cm-wide search area (the red colour appears pink due to the distortion produced by the different attenuation of light wavelengths in water).

    3.1.2 Data Preprocessing

    Data collected from FU 22 and FU 30 are converted into frames. The collected data set has many frames with low and non-homogeneous lighting and poor contrast. Frames that do not contain any burrows or have poor visibility are discarded during the annotation phase, and consecutive frames with similar information are also discarded.
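    A minimal sketch of this video-to-frame conversion with OpenCV is shown below; the file paths and the subsampling step are illustrative assumptions, not the survey's actual settings.

```python
import cv2

def extract_frames(video_path, out_dir, step=25):
    """Save every `step`-th frame of a survey video as a still image."""
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:   # subsample to reduce near-duplicate consecutive frames
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.png", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# saved = extract_frames("station_07.mp4", "frames/station_07")
```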

    3.1.3 Image Annotation

    Image annotation is a technique used in computer vision to create training and testing ground-truth data, as this information is required by supervised deep learning algorithms. Usually, an object is annotated by drawing a bounding box around it.

    Currently, the marine experts who work with Nephrops burrows do not use any annotation tool to annotate the burrows, as this is a time-consuming job. In this phase, we annotate the images to overcome this challenge, and all recorded annotations are validated by the marine experts from the Ireland and Cadiz institutes before the training and testing processes.

    We annotate the burrows manually in the Microsoft VoTT image annotation tool [19], using the Pascal VOC format. The saved XML annotation file contains the image name, the class name (Nephrops), and the bounding box details of each object of interest in the image. As an example, Fig. 10 shows two manually annotated screenshots from the FU 22 and FU 30 UWTV surveys.
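    For reference, the sketch below reads one VoTT-exported Pascal VOC XML file; the tag names follow the standard VOC layout, and the example file name is hypothetical.

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Return the image file name and a list of (class, xmin, ymin, xmax, ymax)."""
    root = ET.parse(xml_path).getroot()
    filename = root.findtext("filename")
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")            # "Nephrops"
        bb = obj.find("bndbox")
        boxes.append((name,
                      int(float(bb.findtext("xmin"))), int(float(bb.findtext("ymin"))),
                      int(float(bb.findtext("xmax"))), int(float(bb.findtext("ymax")))))
    return filename, boxes

# filename, boxes = read_voc_annotation("station01_frame_000123.xml")
```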

    Figure 10: Manual annotation in a frame from (a) FU 22 and (b) FU 30 UWTV survey using VOTT

    Tab. 2 shows the number of ground-truth burrow holes annotated for each station from the FU 22 and FU 30 UWTV surveys, which are used for model training and testing. A total of seven stations are annotated for FU 30, and seven stations for FU 22. A total of 248 images were annotated for the seven FU 30 stations, and 978 images for the seven FU 22 stations, before the validation stage. In general, there is a higher density of Nephrops burrows in FU 22 compared to FU 30, which is a factor of population dynamics.

    Table 2: Distribution of FU 30 and FU 22 Dataset

    3.1.4 Annotation Validation

    The annotated images are validated by marine science experts from Spain and Ireland. Validation of the annotations is essential to obtain high-quality ground-truth information. This process took a long time, as confirming every single annotation is a time-consuming and sensitive job. After validating each annotation, a curated dataset is used for training and testing the model.

    3.1.5 Preparation of Training Dataset

    The annotations are recorded in XML files and converted to TFRecord files, which are sequences of binary strings that TensorFlow requires to train the model. The dataset is divided into two subsets: train and test. Tab. 2 shows the distribution of the subsets for each dataset.
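    A minimal sketch of serializing one annotated image into a tf.train.Example is given below; the feature keys follow the TensorFlow Object Detection API convention, while the helper names and file paths are illustrative.

```python
import tensorflow as tf

def _bytes(v):  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
def _ints(v):   return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

def make_example(jpeg_bytes, width, height, boxes):
    """boxes: list of (xmin, ymin, xmax, ymax) in pixels for the 'Nephrops' class."""
    feature = {
        "image/encoded": _bytes(jpeg_bytes),
        "image/format":  _bytes(b"jpeg"),
        "image/width":   _ints([width]),
        "image/height":  _ints([height]),
        # bounding boxes are stored as normalized coordinates
        "image/object/bbox/xmin": _floats([b[0] / width  for b in boxes]),
        "image/object/bbox/ymin": _floats([b[1] / height for b in boxes]),
        "image/object/bbox/xmax": _floats([b[2] / width  for b in boxes]),
        "image/object/bbox/ymax": _floats([b[3] / height for b in boxes]),
        "image/object/class/text": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[b"Nephrops"] * len(boxes))),
        "image/object/class/label": _ints([1] * len(boxes)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# with tf.io.TFRecordWriter("train.record") as writer:
#     writer.write(make_example(jpeg_bytes, 2048, 1152, boxes).SerializeToString())
```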

    3.2 Model Training

    To train a model, we used a Convolutional Neural Network (CNN). Instead of training the network from scratch, we utilized transfer learning [25] to fine-tune the Faster R-CNN Inceptionv2 [21] and MobileNetv2 [22] models in TensorFlow [25]. Inceptionv2 is one of the architectures with a high degree of accuracy, and its basic design helps to reduce the complexity of the CNN. We used a pre-trained version of the network trained on the COCO dataset [26]. Inceptionv2 is configured to detect and classify only one class (c = 1), namely “Nephrops”. When training the network, the “Momentum” optimizer [27] was used with a decay of 0.9; the momentum method helps stabilize gradient descent. We trained the network with a learning rate of 0.01 and a batch size of 1. The max-pool kernel size and max-pool stride are set to 2. Gradient clipping [28] was applied with a threshold of 2.0 (gradient values that are too high or too low lead to instability of the model). The Softmax activation function is used in the model. Tab. 3 shows the parameter list and the values used in the Inceptionv2 model. The model is evaluated after every 10k iterations, and a total of 70k iterations were performed. The box predictor used in the Inceptionv2 model was Mask RCNN. Fig. 11 shows the layer-by-layer details of the Inceptionv2 model.
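    The sketch below restates these optimizer settings (learning rate 0.01, momentum decay 0.9, gradient-clipping threshold 2.0) as a TensorFlow optimizer object. It is purely illustrative: the training itself was run through the TensorFlow Object Detection API configuration, not a hand-written loop.

```python
import tensorflow as tf

# Momentum SGD with the hyper-parameters reported in Tab. 3 (illustrative only).
optimizer = tf.keras.optimizers.SGD(
    learning_rate=0.01,   # reported learning rate
    momentum=0.9,         # "Momentum" optimizer decay
    clipnorm=2.0,         # gradient clipping threshold
)
```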

    Table 3: Model Training parameters

    The MobileNetv2 CNN architecture was proposed by Sandler et al. [29]. One of the main reasons to choose the MobileNetv2 architecture was the relatively small training dataset from FU 30. This architecture optimizes memory consumption and execution speed with only minor errors. MobileNetv2 uses depth-wise separable convolutions instead of conventional convolutions. The architecture initially has a convolution layer with 32 filters, followed by 17 residual bottleneck layers (Fig. 12). Our experiments achieved the best model results with the RMSProp [30] optimizer with a momentum decay of 0.9. We used a learning rate of 0.01, a batch size of 24, and a truncated normal initializer. L2 regularization is used, with the Rectified Linear Unit (ReLU) as the activation function. The box predictor used in the MobileNet model was the convolutional box predictor. Tab. 3 shows the parameter list and the values used in the MobileNetv2 model.

    Figure 11: Inceptionv2 layers and architecture

    Figure 12: Mobilenetv2 model architecture

    We conducted the model training, validation, and testing on a Linux machine powered by an NVIDIA Titan Xp GPU. We created multiple combinations for model training, i.e., training separate models for the Cadiz and Ireland datasets, training a model on both datasets combined, and training and testing with different datasets. For FU 30, 200 images are used for training the model, and for FU 22, 619 images. Both the Inception and MobileNet models used two classes (one for Nephrops and one for background) and L2 regularization, and were trained for 70k steps. MobileNetv2 is two times faster to train compared to the Inceptionv2 model.

    Precision can be seen as how reliably the model's detections correspond to real Nephrops burrows, and recall is the proportion of the ground-truth burrows that the model detects [31]. Generally, when the recall increases, precision decreases, and vice versa, so precision vs. recall curves P(R) are valuable tools to understand model behaviour. To quantify how accurate the model is with a single number, the mean average precision (mAP), defined in Eq. (1), is used.
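    For our single-class problem (C = 1), the standard form of this definition is the area under the precision-recall curve:

$$\text{mAP} = \frac{1}{C}\sum_{c=1}^{C} AP_c, \qquad AP_c = \int_{0}^{1} P_c(R)\,dR \tag{1}$$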

    In our problem, the ground-truth annotations and the model detections are rectangular areas that usually do not fit perfectly. In this paper, a detection is considered a TP if both areas overlap by more than 50%. This overlap is computed by the Jaccard index J, defined in Eq. (2).
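    Eq. (2) is the Jaccard index (intersection over union) in its standard form:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} \tag{2}$$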

    A and B are the sets of pixels in the ground-truth annotation and in the model's detected rectangular area, respectively, and |·| means the number of pixels in the set. When J ≥ 0.5, a TP is detected, but if J < 0.5 the detection fails and counts as an FN. Using this methodology, P and R values are calculated, and mAP is used as a single-number measure of the goodness of the model. Usually, this parameter is named mAP50, but we use mAP for simplicity in this paper.
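    A minimal sketch of this matching rule for axis-aligned boxes (xmin, ymin, xmax, ymax) is given below; it is an illustration of the criterion, not the authors' evaluation code.

```python
def jaccard(box_a, box_b):
    """Intersection-over-union (Jaccard index) of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(gt_box, det_box, threshold=0.5):
    """A detection counts as a TP when it overlaps a ground-truth box by >= 50%."""
    return jaccard(gt_box, det_box) >= threshold
```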

    3.3 Model Validation

    Models were trained using a random sample of approximately 70-75% of the annotated dataset; the remainder is used for testing. We measured the training performance by monitoring the overfitting of the model. We recorded the training checkpoints after every 10k iterations and computed the mAP50 on the validation dataset. The model is evaluated using mAP, the precision-recall curve, and visual inspection of the images with automatic detections. Fig. 13 shows the model evaluation life cycle.

    3.4 Model Performance

    The model is tested to assess its performance. We tested our models against unseen images from the FU 30 and FU 22 datasets and evaluated their performance.

    4 Experiments

    In this section, we evaluate the performance of the different networks both qualitatively and quantitatively. To detect the Nephrops burrows automatically, multiple experiments were performed. We trained the models on three datasets: the first contains only FU 30 images, the second contains images from the FU 22 dataset, and the third is a hybrid dataset that contains images from both. Fourteen different combinations of experiments were performed, each iterated seven times, so 98 experiments were carried out in total. The details of the experiments are shown in Tab. 5.

    Figure 13: Model evaluation life cycle

    The MobileNet and Inception models used 200 images from the FU 30 dataset for training and 48 images for testing. Similarly, these models used 618 images from the FU 22 dataset for training and 359 images for testing. The MobileNet and Inception models trained on the FU 30 dataset and tested on the FU 22 dataset used 200 images for training and 150 images for testing. The models trained on the FU 22 dataset and tested on the FU 30 dataset used 618 images for training and 200 images for testing. Finally, the MobileNet and Inception models trained and tested on the hybrid dataset used 818 images for training and 407 images for testing. Tab. 4 shows the details of the data sets used for these experiments.

    Table 4: Dataset for Experimentation

    Table 5: Summaries of mAP obtained using MobileNet Training Model

    5 Results and Analysis

    5.1 Quantitative Analysis

    We trained both the MobileNet and Inception models for 70k iterations. The models' performance is reported after every 10k iterations and shows excellent precision on the training dataset, as shown in Fig. 14.

    Figure 14: Mean average precision of models trained and tested by FU 22 and FU 30 stations

    5.1.1 Performance

    We evaluate the performance in terms of mAP, which is a prevalent metric for measuring the accuracy of object detection algorithms such as Faster R-CNN, SSD, etc. Average precision averages the precision over recall values from 0 to 1. Precision measures the prediction accuracy, while recall measures the proportion of actual positives that are detected. We computed the mAP with the Nephrops dataset from the FU 22 and FU 30 stations over 100k iterations.

    Fig. 14 shows the results obtained by the MobileNet and Inception models, trained and tested on the different FU 30 and FU 22 data sets. The best mAP of 81.61% is achieved by the Inception model trained on the hybrid data set and tested on the FU 30 data set (experiment 13). A mAP of 79.99% is achieved when the Inception model is trained and tested on the hybrid data set (experiment 12). As expected, the MobileNet and Inception models do not perform very well when trained on the FU 30 data set and tested on the FU 22 data set, and vice versa, as can be seen in Fig. 14, where the minimum mAP values for these models were 47.86%, 48.49%, 50%, and 57.14%.

    As can be seen in Fig. 14, the mAP for most of the experiments increases with the number of iterations until 60k iterations. After this point, performance does not increase or becomes a bit erratic; this behaviour can be explained by overfitting of the model. The figure also shows that the Inception models have a more stable performance with respect to the iteration number than the MobileNet models. In summary, these results clearly show that the mAP of the Inception model is better than that of MobileNet for this problem.

    Tabs. 5 and 6 show the maximum mAP obtained by the MobileNet and Inception models, respectively. In the first experiment, both models were trained on the FU 30 data set and tested on the FU 30, FU 22, and hybrid datasets. In the second experiment, the models were trained on the FU 22 data set and tested on the FU 30, FU 22, and hybrid datasets. Finally, the models were trained on the hybrid dataset and tested on the FU 30, FU 22, and hybrid datasets separately. The maximum mAP obtained with the MobileNet model is 75.12%, when the model is trained and tested on the FU 22 data. The results obtained with Inception training are much better, as we achieved a mAP over 80% after training the model on the combined FU 30 and FU 22 data set.

    Table 6: Summaries of mAP obtained using Inception Training Model

    5.1.2 Precision and Recall

    We also evaluate the performance of the models using precision-recall curves. For model evaluation, the TP, FP, and FN annotations are calculated.

    Fig. 15 shows the precision and recall of the MobileNet (left) and Inception (right) models trained and tested on the FU 30 data set after 70k iterations.

    Fig. 16 shows the precision and recall of the MobileNet (left) and Inception (right) models trained and tested on the FU 22 data set after 70k iterations. Similarly, Fig. 17 shows the precision and recall of the MobileNet (left) and Inception (right) models trained and tested on the combined FU 30 and FU 22 data set after 70k iterations.

    Figure 15: Precision-recall curve of mobilenet (left) and inception (right) using FU 30 dataset

    Figure 16: Precision-recall curve of mobilenet (left) and inception (right) using FU 22 dataset

    The mAP values can be interpreted as the area under these curves, but the behaviour of the models is different. With the Inception model, precision values are close to those obtained with the MobileNet model, but higher recall values are obtained because of a lower number of FNs, which results in higher mAP values.

    5.2 Qualitative Analysis

    In this section, we qualitatively analyze the performance of the different models on the different datasets. The visualization results are from the MobileNet and Inception models, trained and tested using different combinations of the FU 30 and FU 22 datasets.

    Figure 17: Precision-recall curve of mobilenet (left) and inception (right) using Hybrid dataset

    Figure 18: Nephrops burrow detections: (a)(b) FU30-MobileNet vs. FU30-Inception; (c)(d) FU22-MobileNet vs. FU22-Inception

    Figs. 18-21 show the detections of Nephrops burrows using the MobileNet and Inception models with different combinations of the FU 30 and FU 22 datasets. The green bounding boxes on the images shown in this section are the TP detections of the trained model. The blue bounding boxes show the correct ground-truth annotations. The red bounding boxes are the FP detections of the trained models. The results show detections with confidence levels ranging from 97% to 99%.
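    For illustration, the sketch below reproduces this colour coding with OpenCV (ground truth in blue, TP in green, FP in red); the function and its arguments are our own, not the authors' visualization code.

```python
import cv2

# OpenCV colours are BGR tuples.
COLOURS = {"gt": (255, 0, 0), "tp": (0, 255, 0), "fp": (0, 0, 255)}

def draw_boxes(image, boxes, kind, score=None):
    """Draw (xmin, ymin, xmax, ymax) boxes of one kind ('gt', 'tp', or 'fp')."""
    for (x1, y1, x2, y2) in boxes:
        cv2.rectangle(image, (x1, y1), (x2, y2), COLOURS[kind], 2)
        if score is not None:  # optionally label detections with their confidence
            cv2.putText(image, f"{score:.0%}", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLOURS[kind], 1)
    return image
```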

    Figure 19: Nephrops burrow detections by the model trained on the hybrid dataset: (a)(b) FU30-MobileNet vs. FU30-Inception; (c)(d) FU22-MobileNet vs. FU22-Inception

    Figure 20: Nephrops burrow detections by the model trained on FU30 and tested on the FU22 dataset: (a) FU22-MobileNet, (b) FU22-Inception

    Figs. 18a and 18b show the detections on the FU 30 data set obtained with the MobileNet and Inception trained models. In this example, the MobileNet model detects one TP Nephrops burrow while the Inception model detects two. Figs. 18c and 18d show the detections on the FU 22 dataset. In both images, there are more TP detections with the Inception model than with the MobileNet trained model.

    Fig. 19 shows the results of the MobileNet and Inception models trained on both the FU 30 and FU 22 data sets. Fig. 19a shows only one TP detection on the FU 30 data set with the MobileNet model, while Fig. 19b shows three TP and one FP detections of burrows by the Inception model. Similarly, Figs. 19c and 19d show a significant difference in TP Nephrops burrow detections between the MobileNet and Inception models on the FU 22 data set.

    Fig. 20 shows the detections of Nephrops burrows from the FU 22 data set. The MobileNet and Inception models are trained on the FU 30 data set and tested on the FU 22 dataset. Figs. 20a and 20b show the detections using the MobileNet and Inception models, respectively. Neither model shows any significant difference in detections.

    In Fig. 21, the results are obtained using the models trained on the FU 22 data set and tested on the FU 30 data set. The image shown in Fig. 21a is the result obtained by the MobileNet model, while Fig. 21b shows the result of the Inception model. The MobileNet model detects one TP while the Inception model detects one TP and one FP.

    Fig. 22 shows the MobileNet and Inception models trained on FU 30 data and tested on both FU 30 and FU 22 data. Fig. 22a uses the MobileNet model and detects one burrow, while Fig. 22b detects two Nephrops burrows from the same image using the Inception model. In Figs. 22c and 22d, the Nephrops burrows are detected from the FU 22 data set using the MobileNet and Inception models. The Inception model detects one TP while the MobileNet model detects no burrows.

    Figure 21: Nephrops burrow detections by the model trained on FU 22 and tested on the FU 30 dataset: (a) FU 30-MobileNet, (b) FU 30-Inception

    Fig. 23 shows the MobileNet and Inception models trained on FU 22 data and tested on the hybrid data set. The results do not show any significant difference when both models are tested on the FU 30 data set, as shown in Figs. 23a and 23b. However, when the models are tested on the FU 22 data set, the Inception model detects two TPs whereas MobileNet only detects one, as shown in Figs. 23c and 23d.

    The visualization results clearly show that the Inceptionv2 model is much better in precision and accuracy: the model trained with Inception detects more true positives compared to MobileNet. Visualization of the results also helps to understand the models' errors and improve them in future work.

    Figure 22: Nephrops burrow detections by the model trained on FU30 and tested on the hybrid dataset: (a)(b) FU30-MobileNet vs. FU30-Inception; (c)(d) FU22-MobileNet vs. FU22-Inception

    Figure 23: Nephrops burrow detections by the model trained on FU22 and tested on the hybrid dataset: (a)(b) FU30-MobileNet vs. FU30-Inception; (c)(d) FU22-MobileNet vs. FU22-Inception

    6 Conclusion and Future Work

    Our results show that deep learning algorithms are a valuable and effective strategy to help marine science experts in the assessment of the abundance of the Nephrops norvegicus species when underwater video/image surveys are carried out every year, following ICES recommendations. Automatic detection algorithms could in the near future replace the tedious and sometimes difficult manual review of data, which is nowadays the standard procedure, with the promise of better accuracy, coverage of bigger areas in sampling, and higher consistency in the assessment.

    In future work, we plan to use a bigger curated dataset from the FU 22 and FU 30 areas with experts' annotations to improve the training of the deep learning network, and to validate the algorithm with data from other areas, which usually show different habitats and relations with other marine species and, from an image-processing point of view, also differences in image quality, video acquisition procedures, and background textures. At the same time, higher detection accuracy could be obtained with the use of denser object detection models and novel architectures. Finally, we plan to correlate the spatial and morphological distribution of burrow holes to estimate the number of burrow complexes that are present and to compare with human inter-observer variability studies.

    Acknowledgement: We thank the Spanish Oceanographic Institute, Cádiz, Spain, and the Marine Institute, Galway, Ireland, for providing the dataset for research.

    Funding Statement: Open Access Article Processing Charges have been funded by the University of Malaga.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
