
    Artificial Intelligence-Based Semantic Segmentation of Ocular Regions for Biometrics and Healthcare Applications

Computers, Materials & Continua, 2021, Issue 1

Rizwan Ali Naqvi, Dildar Hussain and Woong-Kee Loh

1 Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea

2 School of Computational Science, Korea Institute for Advanced Study (KIAS), Seoul 02455, Korea

3 Department of Software, Gachon University, Seongnam 13120, Korea

Abstract: Multiple ocular region segmentation plays an important role in applications such as biometrics, liveness detection, healthcare, and gaze estimation. Typically, segmentation techniques focus on a single region of the eye at a time. Despite a number of obvious advantages, very limited research has focused on multiple regions of the eye. Accurate segmentation of multiple eye regions is also necessary in challenging scenarios involving blur, ghost effects, low resolution, off-angles, and unusual glints. Currently available segmentation methods cannot address these constraints. In this paper, to achieve accurate segmentation of multiple eye regions in unconstrained scenarios, a lightweight outer residual encoder-decoder network suitable for various sensor images is proposed. The proposed method can determine the true boundaries of the eye regions from inferior-quality images using the high-frequency information flow of the outer residual encoder-decoder deep convolutional neural network (called ORED-Net). Moreover, the proposed ORED-Net model does not gain its performance from added complexity, parameters, or network depth; the proposed network is considerably lighter than previous state-of-the-art models. Comprehensive experiments were performed, and optimal performance was achieved on the SBVPI and UBIRIS.v2 datasets containing images of the eye region. The proposed ORED-Net achieved mean intersection over union (mIoU) scores of 89.25 and 85.12 on the challenging SBVPI and UBIRIS.v2 datasets, respectively.

Keywords: Semantic segmentation; ocular regions; biometrics for healthcare; sensors; deep learning

    1 Introduction

In the last few decades, researchers have made significant contributions to biometrics, liveness detection, and gaze estimation systems that rely on traits such as the iris, sclera, pupil, or other periocular regions [1]. Interest in these traits is increasing each day because of the considerable importance of ocular region applications and their significant market potential. Biometric technology has become a vital part of our daily life because, unlike conventional methods, these approaches do not require an individual to memorize or carry any information, such as PINs, passwords, or IDs [2]. Iris segmentation has drawn significant attention from the research community owing to the rich and unique textures of the iris, such as rings, crypts, furrows, freckles, and ridges [3]. While ocular traits other than the iris are less frequently studied, researchers have been actively investigating other periocular regions, such as the sclera and retina, to collect identity cues that might be useful for stand-alone recognition systems or to supplement the information generally used for iris recognition [4].

A majority of previous research studies on eye region segmentation were restricted to a single ocular region at a time, e.g., focusing only on the iris, pupil, sclera, or retina. In multi-class segmentation, more than one eye region is segmented from the given input image using a single segmentation network. Inexplicably, very few researchers have developed multi-class segmentation techniques for the eye regions, despite several advantages in different applications. Namely, under challenging conditions, the segmentation performance can be maintained or sometimes even enhanced when using multiple-region segmentation, as the targeted region can provide useful contextual information about other neighboring regions [5]. For example, the boundary of the iris region can provide useful information about the boundaries of the sclera and pupil regions. Similarly, the eyelash area presents a constraint for the sclera region [6]. Another potential advantage is that multi-biometric systems can be introduced without cost and computation overheads, working efficiently for the segmentation of multiple target classes using a single approach [7].

In this work, we attempt to address the research gaps in the segmentation of multiple eye regions using a single network, as shown in Fig. 1. The proposed network can segment the input eye image into four main classes corresponding to the iris, sclera, pupil, and background regions using a single model. Over the last few years, deep convolutional neural network (CNN) models have witnessed rapid development and become an influential method for image processing tasks. CNNs have outperformed conventional methods in a wide range of applications, such as medical and satellite image analysis. The proposed method is based on deep learning models for semantic segmentation in images, specifically on convolutional encoder-decoder networks. This design approach is based on the recently introduced SegNet architecture [8]. ORED-Net was developed based on the outer residual encoder-decoder network. The proposed network achieves a higher accuracy with reduced network depth and fewer parameters and layers by implementing only non-identity outer residual paths from the encoder to the decoder.

Figure 1: Sample images of multi-class eye segmentation, illustrating the input eye image (left) and the ground-truth image (right). Each color shade represents a different eye region

    ORED-Net is novel in the following four ways:

• ORED-Net is a semantic segmentation network without a preprocessing overhead and does not employ conventional image processing schemes.

• ORED-Net is a standalone network for the multi-class segmentation of ocular regions.

• ORED-Net uses residual skip connections from the encoder to the decoder to reduce information loss, which allows the flow of high-frequency information through the model, thus achieving higher accuracy with fewer layers.

• The performance of the proposed ORED-Net model was tested on public datasets collected under various environments.

In this study, the results obtained with the SBVPI [9] and UBIRIS.v2 [10] datasets for the iris, sclera, pupil, and background classes are reported. In addition, the proposed model is compared with state-of-the-art techniques from the literature. The results demonstrate that the proposed method is the most suitable technique for ocular segmentation and can be incorporated into recognition procedures.

The rest of the paper is structured as follows. In Section 2, a brief overview of the related literature is provided. In Section 3, the proposed approach and working procedure are described. The results of the evaluation and analysis are discussed in Section 4. Finally, conclusions and future work are presented in Section 5.

    2 Literature Review

Very few studies have focused on multi-class eye segmentation, particularly on segmenting multiple eye regions from the given images using a single segmentation model. Recently, Rot et al. [7] reported the segmentation of multi-class eye regions based on the well-known convolutional encoder-decoder network SegNet. They studied the segmentation of multiple eye regions such as the iris, sclera, pupil, eyelashes, medial canthus, and periocular region. Their study required post-processing through a thresholding strategy on probability maps and an atrous CNN with the conditional random field detailed in Luo et al. [11]. The results were extracted using the Multi-Angle Sclera Database (MASD). Naqvi et al. proposed the Ocular-Net CNN for the segmentation of multiple eye regions, including the iris and sclera. This network consists of non-identity residual paths in a lighter version of both the encoder and decoder. Residual shortcut connections were employed with increasing network depth to enhance the performance of the model [12]. In addition, the iris and sclera were evaluated on different databases. Hassan et al. proposed the SIP-SegNet CNN for joint semantic segmentation of the iris, sclera, and pupil. A denoising CNN (DnCNN) was used to denoise the original image. In SIP-SegNet, after denoising with the DnCNN, reflection removal and image enhancement were performed based on contrast-limited adaptive histogram equalization (CLAHE). Then, the periocular information was extracted using adaptive thresholding, and this information was suppressed using a fuzzy filtering technique. Finally, a densely connected fully convolutional encoder-decoder network was used for semantic segmentation of multiple eye regions [13]. Various metrics were used to evaluate the proposed method, which was tested on the CASIA sub-datasets.

The Eye Segmentation challenge for the segmentation of key eye regions was organized by Facebook Research with the purpose of developing a generalized model under the condition of least complexity in terms of model parameters. Experiments were conducted on the OpenEDS dataset, which is a large-scale dataset of eye images captured by a head-mounted display with two synchronized eye-facing cameras [14]. To address the challenge concerning the semantic segmentation of eye regions, Kansal et al. [15] proposed EyeNet, an attention-based convolutional encoder-decoder network for accurate segmentation of four different eye regions, namely the iris, sclera, pupil, and background. EyeNet is based on non-identity-mapping residual connections in both the encoder and decoder. Two types of attention units and multiscale supervision were proposed to obtain accurate and sharp eye region boundaries. Eye segmentation using a lightweight model was demonstrated by Huynh et al. Their approach involved the conversion of the input image to grayscale, segmentation of the eye regions with a deep network model, and removal of incorrect areas using heuristic filters. A heuristic filter was used to reduce the false positives in the output of the model [16].

Tab. 1 presents a comparison of the proposed method with other methods for the multi-class segmentation of ocular regions, along with their strengths and weaknesses.

Table 1: Comparison of the proposed method with other multi-class segmentation methods

    3 Proposed Method for Eye Region Segmentation

    3.1 Overview of the Proposed Model

The flowchart of the proposed ORED-Net for semantic segmentation of multiple eye regions is shown in Fig. 2. The proposed network is a fully convolutional network based on non-identity residual connections from the encoder network to the decoder network. The input image is fed into the convolutional network without an initial preprocessing overhead. An encoder and a decoder are incorporated in the proposed ORED-Net for multi-class segmentation of the full input eye images. The functionality of the encoder is to downsample the given input image until it can be represented in terms of very small features, whereas the decoder performs the reverse operation. The decoder upsamples the image back to its original dimensions using the small features produced by the encoder. In addition to reversing the downsampling, the decoder plays another very important role of predicting the multiple classes, namely the iris, sclera, pupil, and background. The prediction task is performed using the softmax loss function and a pixel classification layer. The class of each pixel in the image is predicted by the pixel classification layer, and the designated label is assigned.
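As a minimal illustration of this prediction step, the following PyTorch-style sketch (our own naming, not the authors' MATLAB implementation) converts the decoder output maps into a per-pixel label image:

```python
import torch

def predict_pixel_classes(logits: torch.Tensor) -> torch.Tensor:
    """Sketch of the softmax + pixel classification step described above.

    `logits` is the decoder output of shape (batch, 4, H, W), one channel
    per class (iris, sclera, pupil, background). Softmax turns the maps
    into per-pixel class probabilities; argmax assigns each pixel the
    most probable of the four class labels (0-3)."""
    probabilities = torch.softmax(logits, dim=1)
    return torch.argmax(probabilities, dim=1)  # shape: (batch, H, W)
```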

Figure 2: Flowchart of the proposed method for eye region segmentation

    3.2 Segmentation of Multiple Eye Regions Using ORED-Net

In typical encoder-decoder networks, the image is downsampled and represented by very small features, which degrades the high-frequency contextual information. This results in the vanishing gradient problem for the classification of image pixels as the image is broken down into 7 × 7 patches [17]. The vanishing gradient problem was addressed by introducing identity and non-identity mapping residual blocks. When a residual block is introduced in a CNN, the accuracy achieved is higher than that of simple CNNs such as VGGNet [18]. Typically, residual building blocks (RBBs) are based on identity or non-identity mapping. In identity mapping, the features are directly provided for element-wise addition to perform the residual operation. In contrast, in the case of non-identity mapping, a 1 × 1 convolution is performed in each RBB before the features are subjected to the element-wise addition. Identity mapping is not considered in the proposed network. Instead, non-identity mapping is performed by a 1 × 1 convolution layer through outer residual paths from the encoder to the decoder, as shown in Fig. 3.

Figure 3: Residual building block (RBB) used in the proposed method
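To make the non-identity mapping concrete, the following is a minimal PyTorch sketch of such an outer residual path; the module name and the assumption that the 1 × 1 convolution preserves the channel count are ours:

```python
import torch.nn as nn

class OuterResidualPath(nn.Module):
    """Sketch of a non-identity residual path (cf. Fig. 3): a 1 x 1
    convolution with batch normalization projects the skipped encoder
    features before element-wise addition with the decoder features."""
    def __init__(self, channels: int):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, encoder_feat, decoder_feat):
        # Non-identity mapping: project the skipped features with the
        # 1x1 conv + BN, then add them element-wise to the decoder path.
        return self.project(encoder_feat) + decoder_feat
```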

The proposed ORED-Net is developed in different stages to perform the multi-class segmentation task with good accuracy compared with basic encoder-decoder networks. In the first stage, a well-known segmentation network, i.e., SegNet-Basic, is employed [8]. SegNet-Basic consists of 13 convolutional layers in both the encoder and decoder parts. This network is reduced to its simplest possible form by removing 5 convolutional layers from both the encoder and decoder parts of the network. Hence, the proposed network has only 8 convolutional layers in each of the encoder and decoder parts. In addition, each group in the encoder and decoder architectures consists of two convolutional layers, resulting in a lightweight encoder-decoder convolutional network. ORED-Net thus preserves and empowers the high-frequency features. In the next stage of the proposed ORED-Net, non-identity residual connections are introduced from the layers on the encoder side to the corresponding layers on the decoder side through the outer residual paths, as schematically shown in Fig. 4. Hence, the residual connectivity introduced in ORED-Net is different from that of the original ResNet [19] and previously proposed residual-based networks such as Sclera-Net [4]. Tab. 2 highlights the main differences between the proposed network and previously reported networks such as ResNet [19] and Sclera-Net [4].

Table 2: Key architectural differences between ORED-Net and other residual-based methods

The overall structure of ORED-Net is shown in Fig. 4. Here, four non-identity outer residual paths, from Outer-Residual-Path-1 (ORED-P-1) to Outer-Residual-Path-4 (ORED-P-4), run from the encoder to the decoder. The group containing a 3 × 3 convolutional layer and batch normalization layers is represented as Conv + BN; the activation layer, i.e., the rectified linear unit, is represented as ReLU; the combination of a 1 × 1 convolution layer and batch normalization layers is represented as 1 × 1 Conv + BN; the max pooling layer is represented as Max-pool; and the reverse of the max pooling layer, i.e., the max unpooling layer, is represented as Max-unpool. There are four convolutional groups in the encoder, with each group consisting of two convolutional layers before each Max-pool, i.e., E-Conv-X and E-Conv-Y. Similarly, in the decoder, there are four convolutional groups, with each decoder group also consisting of two convolutional layers after each Max-unpool layer, i.e., D-Conv-X and D-Conv-Y. Therefore, the 1st convolutional layer of the i-th encoder convolutional group is represented as E-Conv-Xi, and the 2nd convolutional layer of the i-th encoder convolutional group is represented as E-Conv-Yi. Similarly, the 1st convolutional layer of the j-th decoder convolutional group is represented as D-Conv-Xj, and the 2nd convolutional layer of the j-th decoder convolutional group is represented as D-Conv-Yj. Here, the values of i and j are in the range of 1-4. The 1st encoder and decoder convolutional groups, located at the extreme left and right sides of the network, are connected through ORED-Path-1. Similarly, the convolutional groups located 2nd from the left and right sides are connected through ORED-Path-2, as shown in Fig. 4.

Figure 4: Deep learning-based eye region segmentation system with a light residual encoder-decoder network

Based on Fig. 4, it can be observed that in the decoder, the 2nd convolutional layer in the 1st convolutional group receives the element-to-element addition of the residual features $F_{E_1}$ and $F_{D_1}$, where these features come from the 1st convolutional layer in encoder convolutional group 1 (E-Conv-X1) after the ReLU and from the 1st convolutional layer in decoder convolutional group 1 (D-Conv-X1) after the ReLU, respectively, via ORED-Path-1. This can be described by the following equation:

$A_1 = F_{E_1} + F_{D_1}$    (1)

Here, $A_1$ is the residual feature, obtained by element-to-element addition, that is input to D-Conv-Y1 through ORED-Path-1. Typically, the outer residual block shown in Fig. 3 can be represented by the following equation:

$A_i = F_{E_i} + F_{D_j}$    (2)

where $A_i$ is the sum of the features presented to D-Conv-Yj by the outer residual connection, $F_{E_i}$ represents the residual features from the 1st convolutional layer of the i-th convolutional group (E-Conv-Xi) after the ReLU in the encoder, and $F_{D_j}$ represents the residual features obtained from the 1st convolutional layer of the j-th convolutional group (D-Conv-Xj) after the ReLU on the decoder side. Furthermore, the values of i and j are between 1 and 4. Thus, to enhance the ability of the network for robust segmentation, each of the four outer residual paths (ORED-P-1 to ORED-P-4) provides the residual features $F_{E_i}$ from each of the convolutional groups on the encoder side to the decoder side. This direct path for the spatial edge information from the encoder side empowers the residual features on the decoder side, i.e., the $F_{D_j}$ features.

    3.2.1 ORED-Net Encoder

As can be seen from Fig. 4, the encoder consists of 4 convolutional groups, with each group containing two convolutional layers along with batch normalization and ReLU activation layers. The core and exclusive characteristic of the ORED-Net encoder is that spatial information is input to the corresponding decoder group through the residual paths. These outer residual paths originate after each ReLU layer on the encoder side. Owing to the outer residual connections, better results can be achieved with a lighter network compared with other networks used for a similar purpose. In ORED-Net, the important features are downsampled through the Max-pool layers, which also provide pooling indices to the decoder side. The pooling indices contain the index information and feature map size, which are required on the decoder side.

The encoder structure of ORED-Net is presented in Tab. 3. It can be observed that there are 4 outer residual encoder-decoder paths that connect the encoder with the decoder through the non-identity residual connections shown in Fig. 4. These outer residual encoder-decoder non-identity connections achieve feature empowerment through the spatial information of the preceding layers. The outer residual encoder-decoder connections originate after the ReLU activation layer on the encoder side and end next to the ReLU activation layer on the decoder side. The proposed network uses pre-activation because summation is performed after each ReLU layer on the decoder side. Every convolutional group on the encoder and decoder sides contains an equal number of convolutional layers, i.e., two convolutional layers, which makes ORED-Net a balanced network.

Table 3: The ORED-Net encoder based on outer residual encoder-decoder paths

Table 3 (continued).

Group      | Size/Name                  | No. of filters | Output (w × h × ch)
To decoder | 1 × 1 × 128 / ORED-P-2*    | 128            |
           | 3 × 3 × 128 / E-Conv-2_2*  | 128            |
Pool-2     | 2 × 2 / Pool-2             | –              | 64 × 64 × 128
EC-G-3     | 3 × 3 × 128 / E-Conv-3_1** | 256            | 64 × 64 × 256
To decoder | 1 × 1 × 256 / ORED-P-3*    | 256            |
           | 3 × 3 × 256 / E-Conv-3_2*  | 256            |
Pool-3     | 2 × 2 / Pool-3             | –              | 32 × 32 × 256
EC-G-4     | 3 × 3 × 256 / E-Conv-4_1** | 512            | 32 × 32 × 512
To decoder | 1 × 1 × 512 / ORED-P-4*    | 512            |
           | 3 × 3 × 512 / E-Conv-4_2** | 512            |
Pool-4     | 2 × 2 / Pool-4             | –              | 16 × 16 × 512

Tab. 3 presents the ORED-Net encoder with outer residual paths based on an image of size 224 × 224 × 3. Here, E-Conv, ORED-P, and Pool represent the encoder convolution layers, outer residual encoder-decoder paths, and pooling layers, respectively. The convolutional layers in the encoder marked with "**" include both the ReLU activation and batch normalization (BN) layers, while the convolution layers marked with "*" include only the BN layer. The outer residual encoder-decoder skip paths, denoted by ORED-P-1 to ORED-P-4, start from the encoder and carry edge information to the decoder. As the proposed model includes pre-activation, the ReLU activation layer is used prior to the element-wise addition.
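A minimal sketch of one encoder group, assuming the layer layout in Tab. 3 (two 3 × 3 convolutions with BN and ReLU, a tap for the ORED path after the first ReLU, and 2 × 2 max pooling that returns its indices), could look as follows in PyTorch:

```python
import torch.nn as nn

class EncoderGroup(nn.Module):
    """Sketch of one ORED-Net encoder group (EC-G-i in Tab. 3)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv_x = nn.Sequential(  # E-Conv-Xi: 3x3 conv + BN + ReLU
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.conv_y = nn.Sequential(  # E-Conv-Yi: 3x3 conv + BN + ReLU
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Pool-i: downsampling; the indices are forwarded to the decoder.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)

    def forward(self, x):
        skip = self.conv_x(x)          # tapped after the ReLU for ORED-P-i
        features = self.conv_y(skip)
        pooled, indices = self.pool(features)
        return pooled, skip, indices
```

Four such groups with increasing channel counts (consistent with Tab. 3, e.g., 64, 128, 256, and 512) would then form the complete encoder.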

    3.2.2 ORED-Net Decoder

The architecture of the ORED-Net decoder, shown in Fig. 4, mirrors the encoder and performs convolutional operations similar to those performed by the encoder. The pooling layers of the encoder provide the size information and indices to the decoder, which are used to maintain the size of the feature maps. In addition, the decoder features are upsampled to ensure that the size of the network output is the same as that of the input image. Furthermore, the outer residual paths input features to the ORED-Net decoder. All 4 outer residual encoder-decoder paths, i.e., ORED-P-1 to ORED-P-4, originate on the encoder side and end on the decoder side. Element-to-element addition between each ORED-P and the previous convolution is performed in the addition layers (Add-4 to Add-1), resulting in features that are useful to the convolutional layers in the next group, as shown in Fig. 4. In this work, as four classes, namely the iris, sclera, pupil, and background, are evaluated for the segmentation task, the decoder produces four masks corresponding to these classes, i.e., the number of filters of the last convolutional layer in the decoder is four. The pixel classification and softmax layers facilitate the pixel-wise prediction of the network. To implement post-activation in the decoder, the outer residual path is terminated immediately after each ReLU activation layer. For each class, the output of ORED-Net is a mask, which outputs "0" for the background class, "100" for the sclera class, "180" for the iris class, and "250" for the pupil class.
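Under the same assumptions as the encoder sketch above, a mirrored decoder group might look as follows; here the 1 × 1 Conv + BN of the ORED path is applied to the encoder skip before the addition layer (Add-j), matching Eq. (2), and the final classification head with four output filters is a hypothetical stand-in for the last decoder convolution:

```python
import torch.nn as nn

class DecoderGroup(nn.Module):
    """Sketch of one ORED-Net decoder group (mirror of an encoder group)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Max-unpool restores the spatial size using the encoder's indices.
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
        self.conv_x = nn.Sequential(  # D-Conv-Xj: 3x3 conv + BN + ReLU
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.ored_path = nn.Sequential(  # ORED-P-j: 1x1 conv + BN
            nn.Conv2d(out_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.conv_y = nn.Sequential(  # D-Conv-Yj: 3x3 conv + BN + ReLU
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, encoder_skip, pool_indices):
        x = self.unpool(x, pool_indices)      # reverse of Pool-i
        x = self.conv_x(x)
        x = self.ored_path(encoder_skip) + x  # Add-j: Eq. (2)
        return self.conv_y(x)

# Hypothetical classification head: four filters, one mask per class
# (iris, sclera, pupil, background), assuming 64 channels at the end.
segmentation_head = nn.Conv2d(64, 4, kernel_size=3, padding=1)
```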

    4 Results and Discussion

In this work, two-fold cross-validation was performed for training and testing the proposed model. To this end, two subsets were created from the available images by randomly dividing the collected database. From the images of 55 participants, two subsets were created, where the data from 28 participants were used for training and those from 27 participants were used for testing. To avoid overfitting issues, data augmentation of the training data was performed. To train and test ORED-Net, a desktop computer with an Intel® Core™ (Santa Clara, CA, USA) i7-8700 CPU @ 3.20 GHz, 16 GB memory, and an NVIDIA GeForce RTX 2060 Super (2176 CUDA cores and 8 GB GDDR6 memory) graphics card was employed. The above-mentioned experiments were conducted using MATLAB R2019b.

    4.1 Training of ORED-Net

ORED-Net is based on outer residual paths that transfer spatial information from the encoder side to the decoder side. Therefore, high-frequency information travels through the convolutional network, which enables training on this information without a preprocessing overhead. To train ORED-Net, original images without any enhancement or preprocessing were employed, and the classical stochastic gradient descent (SGD) method was used as the optimizer. SGD minimizes the difference between the actual and predicted outputs. During network training, the proposed model processed the entire dataset 25 times, i.e., 25 epochs, and a mini-batch size of 5 was selected for the ORED-Net design owing to its low memory requirement. The mini-batch size was determined by the size of the database. Once training was performed with the entire dataset, one epoch was counted, as shown in Eqs. (3) and (4).

The ORED-Net model converges very quickly because of the outer residual connections from the encoder to the decoder. Therefore, the ORED-Net model was trained for only 25 epochs. The mini-batch size was kept at 5 images during the 25 epochs of training, with shuffling after each epoch. Here, the training loss was calculated based on the image pixels in the mini-batch using the cross-entropy loss reported in [8]. The loss calculation was based on the cross-entropy loss over all the pixels available in the candidate mini-batch for the iris, sclera, pupil, and background classes. Moreover, the network convergence and accuracy were affected by the large difference between the numbers of pixels in different classes and the resulting bias of the network towards learning the dominant class, as described in Arsalan et al. [20]. During training, the imbalance among the classes can be removed by applying an inverse frequency weighting approach, as defined in Eqs. (5) and (6).

Here, Pixels(e) is the total number of pixels belonging to class e in the training data. In this study, the four classes, namely the iris, sclera, pupil, and background, are indexed by e = 1, ..., 4.
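The following sketch illustrates this training setup, assuming a `model` that returns per-pixel logits for the four classes and a data `loader` yielding mini-batches of 5 images with integer label maps; the inverse-frequency weight formula is one plausible reading of Eqs. (5) and (6), not a verbatim reproduction:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Inverse-frequency class weights: rarer classes (e.g., the pupil) get
# larger weights so the network is not biased toward the dominant class.
# pixel_counts[e] = Pixels(e), ordered (background, sclera, iris, pupil);
# the counts below are illustrative placeholders, not measured values.
pixel_counts = torch.tensor([9.0e7, 1.2e7, 1.6e7, 2.5e6])
frequency = pixel_counts / pixel_counts.sum()     # assumed form of Eq. (5)
class_weights = 1.0 / frequency                   # assumed form of Eq. (6)
class_weights = class_weights / class_weights.sum()

criterion = nn.CrossEntropyLoss(weight=class_weights)  # pixel-wise loss
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for epoch in range(25):                  # the paper trains for 25 epochs
    for images, labels in loader:        # mini-batch size 5, reshuffled
        optimizer.zero_grad()
        logits = model(images)           # (5, 4, H, W) class score maps
        loss = criterion(logits, labels) # labels: (5, H, W), values 0-3
        loss.backward()
        optimizer.step()
```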

    4.2 Testing of ORED-Net

    4.2.1 Evaluation Metrics

To validate and compare ORED-Net with previous models, the average segmentation error (Erravg), mean intersection over union (mIoU), precision (P), recall (R), and F1-score (F) were adopted as evaluation protocols.

Here, T represents the total number of images, each with an M × N spatial resolution. G(i, j) and O(i, j) are the pixels of the ground-truth mask and the predicted labels, respectively.

Here, Nc represents the total number of classes, and Nxx is defined as the true positives, i.e., the number of pixels predicted as class x that also belong to class x. Similarly, the other terms are defined as the true negatives Nyy, false positives Nxy, and false negatives Nyx.
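The equations for these metrics are reproduced below in their standard forms, which are consistent with the variable definitions above (the exact notation of the original equations may differ slightly):

```latex
% Average segmentation error over T images of M x N resolution,
% with \oplus denoting the pixel-wise XOR of ground truth and prediction:
Err_{avg} = \frac{1}{T} \sum_{t=1}^{T} \frac{1}{M \times N}
            \sum_{i=1}^{M} \sum_{j=1}^{N} G_{t}(i,j) \oplus O_{t}(i,j)

% Mean intersection over union, averaged over the N_c classes:
mIoU = \frac{1}{N_c} \sum_{x=1}^{N_c}
       \frac{N_{xx}}{N_{xx} + N_{xy} + N_{yx}}

% Precision, recall, and F1-score per class:
P = \frac{N_{xx}}{N_{xx} + N_{xy}}, \qquad
R = \frac{N_{xx}}{N_{xx} + N_{yx}}, \qquad
F = \frac{2 \cdot P \cdot R}{P + R}
```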

4.2.2 Eye Region Segmentation Results Obtained with ORED-Net

In Figs. 5 and 6, the correct and incorrect multi-class eye region segmentation results obtained with ORED-Net for the SBVPI dataset are illustrated. These pictorial representations follow the convention of FP (shown in black for each class), FN (shown in yellow for each class), and TP (shown in green, blue, and red for the iris, sclera, and pupil classes, respectively).

    4.2.3 Comparison of ORED-Net with Other Methods

The segmentation performance of ORED-Net was compared with that of previous methods in terms of the Erravg, mIoU, P, R, and F described in Section 4.2.1. Tab. 4 presents a comparison of the segmentation performance of existing methods with that achieved by ORED-Net for the SBVPI dataset. The results demonstrate the superior performance of ORED-Net for eye region segmentation compared with the current methods, based on the values of Erravg, mIoU, P, R, and F. Comparisons with the current state-of-the-art methods are presented for the iris, sclera, pupil, and background regions in Tab. 4. Additionally, the mIoU, P, R, and F results in Tab. 4 are presented as bar graphs in Fig. 7.

Figure 5: Examples of good eye region segmentation by ORED-Net for the SBVPI dataset: (a) Original image, (b) Ground-truth mask, and (c) Predicted mask obtained with ORED-Net

Figure 6: Examples of bad eye region segmentation by ORED-Net for the SBVPI dataset: (a) Original image, (b) Ground-truth mask, and (c) Predicted mask obtained with ORED-Net

    4.2.4 Eye Region Segmentation with Other Open Datasets Using ORED-Net

To evaluate the segmentation performance of ORED-Net under different image acquisition conditions, experiments with another publicly available dataset for eye region segmentation, i.e., the UBIRIS.v2 dataset, were included in this study [10]. In previous studies, masks for the iris and sclera were provided for only 300 images [21]. The ground-truth images of the iris and sclera were merged, and ground truths for the pupil were created, to evaluate the proposed ORED-Net model on the iris, sclera, and pupil using the UBIRIS.v2 dataset. Of the 300 images considered from the UBIRIS.v2 dataset, 50% of the images (150) were used for training, while the remaining 50% (150) were used for testing, with two-fold cross-validation. To train ORED-Net with the UBIRIS.v2 dataset, data augmentation similar to that used for the SBVPI dataset was employed.

Table 4: Comparison of the proposed method with existing methods for the SBVPI dataset (unit: %)

In Figs. 8 and 9, the correct and incorrect multi-class eye region segmentation results obtained with ORED-Net for the UBIRIS.v2 dataset are illustrated. This pictorial representation follows the convention of FP (shown in black for each class), FN (shown in yellow for each class), and TP (shown in green, blue, and red for the iris, sclera, and pupil classes, respectively). As ORED-Net is powered by outer residual paths, there are no significant errors in the segmentation of multiple eye regions from a challenging dataset such as UBIRIS.v2.

Figure 7: Mean and standard deviation of the proposed method and existing alternatives in terms of mean intersection over union, precision, recall, and F1-score based on the SBVPI database

Figure 8: Examples of good eye region segmentation by ORED-Net for the UBIRIS.v2 dataset: (a) Original image, (b) Ground-truth mask, and (c) Predicted mask obtained with ORED-Net

Figure 9: Examples of bad eye region segmentation by ORED-Net for the UBIRIS.v2 dataset: (a) Original image, (b) Ground-truth mask, and (c) Predicted mask obtained with ORED-Net

Tab. 5 presents a comparison of the segmentation performance of existing methods with that of ORED-Net for the UBIRIS.v2 dataset. Additionally, the mIoU, P, R, and F results in Tab. 5 are presented as bar graphs in Fig. 10.

Based on the results presented in Tabs. 4 and 5 (Figs. 7 and 10), it can be concluded that the performance of the proposed ORED-Net framework is on par with that of state-of-the-art algorithms. A noteworthy point is that ORED-Net is a novel method that performs multi-class semantic segmentation of different eye regions, such as the iris, sclera, and pupil, simultaneously, unlike the existing algorithms, which only address one or two eye regions at a time. In addition, the performance of the ORED-Net model was evaluated on different publicly available datasets for comparison with other methods, as shown in Tabs. 4 and 5 (Figs. 7 and 10).

Table 5: Comparison of the proposed ORED-Net method with existing methods for the UBIRIS.v2 dataset (unit: %)

Table 5 (continued).

Metric | Classes     | SegNet [8]              | Sclera-Net [4]          | ORED-Net
       |             | Fold 1  Fold 2  Average | Fold 1  Fold 2  Average | Fold 1  Fold 2  Average
mIoU   | Pupil       | 63.45   78.62   71.04   | 71.69   78.29   74.99   | 74.92   81.86   78.39
       | All classes | 75.67   86.79   81.23   | 80.04   85.86   82.95   | 82.43   87.82   85.12
P      | Background  | 99.65   99.88   99.77   | 99.59   99.89   99.74   | 97.99   99.86   98.93
       | Iris        | 87.02   92.92   89.97   | 84.59   91.64   88.12   | 87.20   92.21   89.71
       | Sclera      | 66.63   83.03   74.83   | 76.27   81.49   78.88   | 79.98   84.40   82.19
       | Pupil       | 68.43   81.89   75.16   | 73.76   82.09   77.93   | 77.90   85.42   81.66
       | All classes | 80.43   89.43   84.93   | 83.55   88.78   86.17   | 85.77   90.47   88.12
R      | Background  | 97.12   98.56   97.84   | 97.42   98.34   97.88   | 99.22   98.58   98.90
       | Iris        | 88.60   95.48   92.04   | 90.95   94.74   92.85   | 89.14   95.55   92.35
       | Sclera      | 94.36   97.24   95.80   | 95.07   97.48   96.28   | 95.90   97.45   96.68
       | Pupil       | 92.43   95.81   94.12   | 96.12   95.41   95.77   | 92.54   95.50   94.02
       | All classes | 93.13   96.77   94.95   | 94.89   96.49   95.69   | 94.20   96.77   95.49
F      | Background  | 98.36   99.22   98.79   | 98.49   99.10   98.80   | 98.60   99.21   98.91
       | Iris        | 87.97   94.12   91.05   | 87.01   92.99   90.00   | 87.33   93.52   90.43
       | Sclera      | 77.45   89.47   83.46   | 84.08   88.59   86.34   | 86.77   90.32   88.55
       | Pupil       | 76.39   87.61   82.00   | 82.77   87.44   85.11   | 84.23   89.46   86.85
       | All classes | 85.04   92.61   88.82   | 88.09   92.03   90.06   | 89.23   93.13   91.18

Figure 10: Mean and standard deviation of the proposed method and existing alternatives in terms of mean intersection over union, precision, recall, and F1-score based on the UBIRIS.v2 database

    5 Conclusions

In this paper, a novel multi-class semantic segmentation network called ORED-Net was proposed for the segmentation of eye regions such as the iris, sclera, pupil, and background. ORED-Net is based on the concept of outer residual connections that transfer spatial edge information directly from the initial layers of the encoder to the decoder layers. This framework enhances the performance of the network on poor-quality images. ORED-Net has fewer layers, which reduces the number of parameters along with the computation time. The most notable aspects of the proposed ORED-Net are that it achieves high accuracy with a lighter network and converges in considerably fewer epochs owing to the direct flow of edge information, resulting in faster training. In ORED-Net, the original image is used for both training and testing, as no extra overhead is required in the form of preprocessing. ORED-Net is the first network of its kind that simultaneously segments three important eye regions, namely the iris, sclera, and pupil, without any preprocessing overhead. The robustness and effectiveness of the proposed method were tested on publicly available databases for eye region segmentation, including the SBVPI and UBIRIS.v2 datasets. In future studies, this work will be extended to a robust multimodal biometric identification system based on multiple eye regions.

Funding Statement: This work was supported by the National Research Foundation of Korea (NRF, www.nrf.re.kr) grant funded by the Korean government (MSIT, www.msit.go.kr) (No. 2018R1A2B6009188) (received by W. K. Loh).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
