Swipe Gesture based Continuous Authentication for Mobile Devices

Soumik Mondal

NISlab, Gjøvik University College, Norway

[email protected]

Patrick Bours

NISlab, Gjøvik University College, Norway

[email protected]

Abstract

In this research, we investigated the performance of a continuous biometric authentication system for mobile devices under various analysis techniques. We tested these on a publicly available swipe gesture database with 71 users, but the techniques can also be applied to other biometric modalities in a continuous setting. The best result obtained in this research is that (1) none of the 71 genuine users is locked out of the system; (2) for 68 users we require on average 4 swipe gestures to detect an impostor; (3) for the remaining 3 genuine users, on average 14 swipes are required, while 4 impostors are not detected.

1. Introduction

Due to technological advances we are increasingly dependent on mobile devices. Such devices are used for banking transactions and therefore contain highly sensitive information. People use access control mechanisms on mobile devices, like username/password or biometrics, to protect against unauthorized access by another person. This means that a user needs to give proof of his/her identity when starting the mobile device or when unlocking it. However, in many cases people leave the device unattended for shorter or longer periods when it is unlocked, or the device can be stolen while it is unlocked.

Access control of a mobile device (i.e. smart phone or tablet) is generally implemented as a one-time proof of identity during the initial log-on procedure [4, 5, 11]. The validity of the user is assumed to remain the same during the full session. Unfortunately, when a device is left unlocked, any person can access the same resources as the genuine user. This type of access control is referred to as static authentication. On the other hand, we have Continuous Authentication (also called Active Authentication), where the genuineness of a user is continuously verified based on the activity of the current user operating the device. When doubt arises about the genuineness of the user, the system can lock, and the user has to revert to the static authentication access control mechanism to continue working. A continuous authentication system should, with very high probability, never lock out the genuine user. On the other hand, it should detect an impostor within a short period of time, to limit the potential damage that the impostor can do to information available on the device and to limit the disclosure of restricted information.

Due to its novelty, little research has been done in this area [7, 8, 15]. Continuous Authentication (CA) by analysing the user's behaviour profile on mobile input devices is challenging due to limited information and large intra-class variations. As a result, all of the previous research was done as periodic authentication, where the analysis was made based on a fixed number of actions or a fixed time period [7, 8, 15]. Also, researchers have used special hardware to overcome some of these problems [6].

In this research, we will introduce a CA biometric system which checks the genuineness of the user during the full session. This system uses the behaviour of the user to determine the trust the system has in the genuineness of that user. In particular, it focuses on the user's swipe gestures. The behaviour of the current user is compared with the stored information about the behaviour of the genuine user; as a result of that comparison the trust of the system in the genuineness of the user will increase or decrease, and access to the device will be blocked if the level of trust is too low.

We found that current research on CA reports the results in terms of Equal Error Rate (EER), or False Match Rate (FMR) and False Non-Match Rate (FNMR), over either the whole test set or over chunks of a large, fixed number of actions. This means that an impostor can perform a number of actions before the system checks his identity for the first time. This is then in fact no longer CA, but at best Periodic Authentication (PA). In this research, we focus on actual CA that reacts to every single action from a user. The contributions made in this paper are as follows:

• A novel scheme is introduced for continuous authentication on mobile devices, where the system checks the genuineness of the user on every swipe gesture performed by the user.


• Three different verification processes are used: (1) the system knows all of the impostors; (2) the system knows 50% of the impostors; and (3) the system does not know any of the impostors.

• A novel feature selection scheme is proposed.

In this research, we will show that the performance reporting measures used in biometrics (FMR and FNMR) are no longer valid for continuous biometrics, and we will introduce the Average Number of Genuine Actions (ANGA) and the Average Number of Impostor Actions (ANIA) as new performance reporting measures that can be used to compare continuous authentication systems. We will show how ANGA and ANIA can be computed and how previous results can be transformed to ANGA/ANIA values.

The remainder of this paper is organized as follows. In Section 2, we provide the data description and the feature sets used in this research. We describe the data separation and classification process in Section 3. In Section 4, we discuss the methodology followed to carry out this research. Result analysis and a detailed discussion are provided in Section 5. Finally, we conclude this research with future work in Section 6.

2. Data Description and Feature Extraction

In our research, we have used a publicly available mobile touch gesture dataset [1]. A complete description of the dataset is given below.

2.1. Data Description

During the data collection process [1] a client-server application was deployed to 8 different Android mobile devices (resolutions from 320x480 to 1080x1205), and touch gesture data was collected from 71 volunteers (56 male and 15 female, with ages from 19 to 47). The data was collected in four different sessions with two different tasks. One task was reading an article and answering some questions about it, and the other task was browsing an image gallery. The dataset was divided into two sets (i.e. Set-1 and Set-2), where the first set consists of all 71 users with vertical and horizontal touch gestures. The second set consists of a subset of 51 users (42 male and 9 female) who have more than 100 horizontal gestures; it contains only horizontal gesture data that was not present in the first set. The following raw data is collected during the data collection process [1]: (1) Action Type; (2) Phone Orientation; (3) X-Coordinate; (4) Y-Coordinate; (5) Pressure; (6) Finger Area; and (7) Time Stamp.
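For concreteness, one raw record can be represented as a small data structure. The sketch below is only an illustration: [1] lists these seven attributes but not their encoding, so the field names and types are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class TouchSample:
    """One raw touch record as listed above (field names/types are assumptions)."""
    action_type: int    # (1) e.g. touch down / move / up
    orientation: int    # (2) phone orientation
    x: float            # (3) X-coordinate
    y: float            # (4) Y-coordinate
    pressure: float     # (5) pressure
    finger_area: float  # (6) finger area
    timestamp: float    # (7) time stamp in milliseconds
```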

2.2. Feature Extraction

In our analysis, we divided the sequence of consecutive tiny movement data into actions (i.e. strokes). From the raw data, 15 features are calculated. Antal et al. [1] give the details of the feature extraction process: (1) Action Duration: the total time taken to complete the action (in milliseconds); (2) Begin X: X-coordinate of the action starting point; (3) Begin Y: Y-coordinate of the action starting point; (4) End X: X-coordinate of the action end point; (5) End Y: Y-coordinate of the action end point; (6) Distance End-To-End: Euclidean distance between the action starting point and end point; (7) Movement Variability: the average Euclidean distance between points belonging to the action trajectory and the straight line between the action starting point and end point; (8) Orientation: orientation of the action (horizontal or vertical); (9) Direction: slope between the action starting point and end point; (10) Maximum Deviation from Action: the maximum Euclidean distance between points belonging to the action trajectory and the straight line between the action starting point and end point; (11) Mean Direction: the average slope of the points belonging to the action trajectory; (12) Length of the Action: the total length of the action; (13) Mean Velocity: the mean velocity of the action; (14) Mid Action Pressure: the pressure measured at the midpoint of the action; (15) Mid Action Area: the area covered by the finger at the midpoint of the action.
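As an illustration, several of these features can be computed directly from the raw samples of one stroke. The following is a simplified reconstruction under our own assumptions (it reuses the hypothetical TouchSample structure sketched in Section 2.1), not the extraction code of [1]:

```python
import math

def stroke_features(samples):
    """Compute a subset of the 15 features for one stroke (illustrative only)."""
    xs = [s.x for s in samples]
    ys = [s.y for s in samples]
    ts = [s.timestamp for s in samples]

    duration = ts[-1] - ts[0]                                        # (1)
    begin_x, begin_y, end_x, end_y = xs[0], ys[0], xs[-1], ys[-1]    # (2)-(5)
    dist_end_to_end = math.hypot(end_x - begin_x, end_y - begin_y)   # (6)

    # (12) total trajectory length: sum of distances between consecutive points
    length = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                 for i in range(len(xs) - 1))
    mean_velocity = length / duration if duration > 0 else 0.0       # (13)

    mid = len(samples) // 2
    return {
        "duration": duration,
        "begin_x": begin_x, "begin_y": begin_y,
        "end_x": end_x, "end_y": end_y,
        "dist_end_to_end": dist_end_to_end,
        "length": length, "mean_velocity": mean_velocity,
        "mid_pressure": samples[mid].pressure,                       # (14)
        "mid_area": samples[mid].finger_area,                        # (15)
    }
```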

3. Data Separation and Classification

In this section, we discuss the data separation process for our continuous authentication system and the details of the classifiers used in our research.

3.1. Data Separation

In our research, we used three verification processes, which we describe below. We split the data of each of the users into a part for training and a part for testing. In all cases the classifiers are trained with genuine and impostor (training) data. The amount of training data of the genuine user is 50% of the total amount of data of that user. The training data of the impostor users is taken such that the total amount of impostor data is equal to the amount of training data of the genuine user. This is done to avoid bias towards either the genuine or the impostor class. The three verification processes described below might be seen to correspond to an "internal system" (VP-1), an "external system" (VP-3), or a combination of both (VP-2). Because the users of Set-2 are a subset of the users of Set-1, the classifiers are only trained with data from Set-1 and all Set-2 data is used for testing.

3.1.1 Verification Process 1 (VP-1)

In this case the impostor part of the training data is taken from all N (here, N = 70) impostors, and all N impostors contribute approximately the same amount of data for the training of the classifier. This could, for example, be done internally in an organization where all users provide data for training the various classifiers. In this verification process all the impostor users are known to the system.

For the testing we used all the data of the genuine user and the impostor users that has not been used for training the classifier. This means that we have 1 genuine set of test data and N impostor sets of test data for each user. Figure 1 explains this separation process for the first user, where |Xtr| ≈ |Ytr|.

Figure 1. Data separation for VP-1. This is an example for User 1.

3.1.2 Verification Process 2 (VP-2)

This verification process can be seen as a combination of an internal and an external system. The classifiers are trained with data from the genuine user as well as data from N/2 of the impostor users. In this verification process 50% of the impostor users are known to the system.

Also here we test the system with all genuine and impostor data that has not been used for training. For N/2 of the impostors this means that their full data set is used for testing, while for the other N/2 impostors the full data set with exclusion of the training data is used for testing. Figure 2 explains this separation process for the first user, where again |Xtr| ≈ |Ytr|.

Figure 2. Data separation for VP-2. This is an example for User 1.

3.1.3 Verification Process 3 (VP-3)

This verification process can be seen as an external system, where the training and the testing of the system are done with separate sets of impostor users. This could be a situation where a new user provides his own training data and the remaining training data is provided by an external organization. This means that data of impostors of the system will never have been used for training the classifiers. In this case we have two training datasets per genuine user. We split the group of impostor users into 2 sets of N/2 impostors. Figure 3 explains this separation process for the first user, where |Xtr| ≈ |Ytr|. First, we trained the classifiers with the training data of the genuine user and the training data of the first set of N/2 impostor users, exactly as we have done in VP-2 (see Figure 3(a)). Next we tested this system with the testing data of the genuine user and all of the data of the second set of N/2 impostor users. This process is then repeated with the second set of training data, where the impostor users swap roles (see Figure 3(b)). In this verification process the impostor users are not known to the system during testing.

Figure 3. Data separation for VP-3. This is an example for User 1.

3.2. Classification

After rigorous analysis, we found that 2 regression models performed best for swipe actions. We applied an Artificial Neural Network (ANN) [14] and a Counter Propagation Artificial Neural Network (CPANN) [2] in a multi-modal architecture in our analysis [9]. The score vector we use is (f1, f2) = (Score_ann, Score_cpann). From these 2 classifiers' scores we calculate a single score that will be used in the Trust Model (see Section 4.1) in the following way: for weighted fusion, sc = w1 × f1 + (1 − w1) × f2, where w1 is the weight for the weighted fusion technique.

3.2.1 Feature Selection

Before building the classifier models we first apply a feature selection technique [10]. Let F = {1, 2, 3, ..., m} be the total feature set, where m is the number of feature attributes. The feature subset A ⊆ F is chosen by maximizing Sn, with a Genetic Algorithm as the feature subset search technique, where

Sn = sup |MVCDF(x_g^A) − MVCDF(x_i^A)|

MVCDF() → Multivariate Cumulative Distribution Function
x_g^A → genuine user feature subset data
x_i^A → impostor user feature subset data
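The following is a minimal sketch of how this criterion could be evaluated for one candidate subset A, using empirical multivariate CDFs and taking the supremum over the pooled sample points. The genetic-algorithm search itself is omitted and all names are our own, so this illustrates the criterion rather than reproducing the exact implementation:

```python
import numpy as np

def empirical_mvcdf(data, points):
    """Empirical multivariate CDF of `data`, evaluated at each row of `points`."""
    # Fraction of rows in `data` that are componentwise <= the evaluation point.
    return np.array([np.mean(np.all(data <= p, axis=1)) for p in points])

def sn_criterion(genuine, impostor, subset):
    """Sn = sup |MVCDF(x_g^A) - MVCDF(x_i^A)| for a candidate feature subset A."""
    g = genuine[:, subset]
    i = impostor[:, subset]
    eval_points = np.vstack([g, i])      # approximate the sup over pooled samples
    return np.max(np.abs(empirical_mvcdf(g, eval_points) -
                         empirical_mvcdf(i, eval_points)))

# A genetic algorithm (or any other subset search) would then look for the
# subset A that maximizes sn_criterion(genuine, impostor, A).
```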

4. Methodology

This section describes the methodology we have followed to carry out this research.

4.1. Trust Model

The concept of a Trust Model was first introduced by Bours [3] in behavioural biometrics based CA. In this model the behaviour of the current user is compared to the template of the genuine user. Based on each single action performed by the current user, the trust in the genuineness of the user will be adjusted. If the trust of the system in the genuineness of the user is too low, then the user will be locked out of the system. More specifically, if the trust drops below a pre-defined threshold Tlockout then the system locks itself and will require static authentication of the user to continue working.

The basic idea is that the trust of the system in the genuineness of the current user depends on the deviations from the way this user performs various actions on the system. If a specific action is performed in accordance with how the genuine user would perform the task (i.e. as it is stored in the template), then the system's trust in the genuineness of this user will increase, which is called a Reward. If there is a large deviation between the behaviour of the genuine and the current user, then the trust of the system in the current user will decrease, which is called a Penalty. The amount of change of the trust level can be fixed or variable [3]. A small deviation from the behaviour of the user, when compared to the template, could lead to a small decrease in trust, while a large deviation could lead to a larger decrease.

In this research, we implemented a dynamic Trust Model [13]. The model uses several parameters to calculate the change in trust and to return the system's trust in the genuineness of the current user after each separate action performed by the user. All the parameters of this trust model can be user specific. Also, for each user the parameters can be different for different kinds of actions.

Algorithm 1 shows the proposed Trust Model algorithm. The change of trust (∆Trust) is calculated according to Equation 1 and depends on the classification score of the current action performed by the user as well as on 4 parameters. The parameter A represents the threshold value between penalty and reward. If the classification score of the current action (sc_i) is exactly equal to this threshold then ∆Trust = 0. If sc_i > A then ∆Trust > 0, i.e. a reward is given, and if sc_i < A then ∆Trust < 0, i.e. the trust decreases because of a penalty. Furthermore, the parameter B is the width of the sigmoid of this function (see Figure 4), while the parameters C and D are the upper limits of the reward and the penalty. In Figure 4, we show the ∆Trust produced by Equation 1, based on the classification score of the current action, for various sets of parameters.

Algorithm 1: Algorithm for Trust Model.

Data:
sc_i → classification score for the i-th action
A → threshold for penalty or reward
B → width of the sigmoid
C → maximum reward
D → maximum penalty
T_{i-1} → system trust after the (i-1)-th action

Result:
T_i → system trust in the user after the i-th action

begin
∆Trust(sc_i) = min{ −D + D × (1 + 1/C) / (1/C + exp(−(sc_i − A)/B)), C }   (1)
T_i = min{ max{ T_{i-1} + ∆Trust(sc_i), 0 }, 100 }   (2)
end
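A direct transcription of Algorithm 1 in code, with the parameters passed as plain arguments; this is a sketch based on our reading of Equations 1 and 2, not the authors' implementation:

```python
import math

def delta_trust(sc, A, B, C, D):
    """Equation 1: change of trust for classification score sc."""
    return min(-D + D * (1 + 1 / C) / (1 / C + math.exp(-(sc - A) / B)), C)

def update_trust(prev_trust, sc, A, B, C, D):
    """Equation 2: new trust level, clipped to the range [0, 100]."""
    return min(max(prev_trust + delta_trust(sc, A, B, C, D), 0), 100)

# A score above the threshold A yields a reward, a score below A a penalty:
print(delta_trust(0.9, A=0.5, B=0.1, C=1, D=1))   # ~ +0.96 (reward)
print(delta_trust(0.1, A=0.5, B=0.1, C=1, D=1))   # ~ -0.96 (penalty)
```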

4.2. System Architecture

In this section, we discuss the methodology of our system. The system is divided into two basic phases (see Figure 5).

In the training phase, the training data (see Section 3.1) is used to build the classifier models, and the models are stored in a database for use during the testing phase (marked with a dotted arrow in the figure). Each genuine user has his/her own classifier models and training features.

In the testing phase, we use the test data which was separated from the training data for comparison. In the comparison, we use the models and training features stored in the database and obtain the classifier score (probability) for each sample of the test data according to the performed action. This score is then used to update the trust value Trust in the trust model (see Section 4.1). Finally, the trust value Trust is used in the decision module to determine whether the user will be locked out or can continue to use the device. This decision is made based on the current trust value and the lockout threshold (Tlockout).

Figure 4. Score sc vs. ∆Trust(sc) from Equation 1 for different parameter settings (A = 0.5, B = 0.1, C = 1, D = 1; A = 0.7, B = 0.1, C = 1, D = 1; A = 0.5, B = 0.05, C = 1, D = 1; A = 0.5, B = 0.1, C = 2, D = 2).

For each action performed by the current user, the system calculates the score for that action (see the subsections of Section 3.2 for details) and uses it to calculate the change in trust (Equation 1) and the updated trust (Equation 2). The parameters A, B, C, and D in Equation 1 depend on the user's behaviour and are optimized accordingly. In our experiment, we used linear search to optimize the parameters.

Figure 5. Block diagram of the proposed system: a template creation pipeline (swipe feature extraction, building the classifier models, storing the profiles) and a testing pipeline (matching module, trust model, and decision module that either lets the user continue or locks the device and falls back to static login).

4.3. Performance Measure

In the testing phase we measure the performance of the system in terms of the Average Number of Genuine Actions (ANGA) and the Average Number of Impostor Actions (ANIA) [12]. A detailed explanation of the performed actions is given in Section 2.

Figure 6. Trust for an impostor test set: the trust drops below the lockout threshold 5 times within 100 events (after N1, ..., N5 actions), and the system trust is reset to 100 after each lockout.

In Figure 6, we see how the trust level changes when we compare a model with test data of an impostor user. The trust drops (in this example) 5 times below the lockout threshold (Tlockout, marked with a red line) within 100 user actions. The value of ANIA in this example equals (N1 + N2 + ... + N5)/5. We can calculate ANGA in the same way, if the genuine user is locked out based on his own test data. Our goal is obviously to have ANGA as high as possible, while at the same time the ANIA value must be as low as possible. The latter is to ensure that an impostor user can do as little harm as possible, hence he/she must be detected as quickly as possible. In our analysis, whenever a user is locked out, we reset the trust value to 100 to simulate a new session starting after the set of actions that led to this lockout. In Figure 6, we can clearly see that the trust is set back to 100 five times.
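The averaging described above can be written as a small helper that walks through a trust trace (the trust level after each action), counts the actions between lockouts, and assumes the trust is reset to 100 after every lockout; the function name and interface are our own:

```python
def average_actions_per_lockout(trust_values, t_lockout):
    """Average number of actions per lockout: ANIA for an impostor trace,
    ANGA for a genuine trace.  Returns None if the user is never locked out."""
    counts, n = [], 0
    for trust in trust_values:
        n += 1
        if trust < t_lockout:
            counts.append(n)   # N_i: number of actions needed for this lockout
            n = 0              # trust is reset to 100, a new "session" starts
    return sum(counts) / len(counts) if counts else None
```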

4.3.1 FNMR / FMR to ANGA / ANIA Conversion

In this section, we show how, for a PA system, we can express FNMR and FMR in terms of ANGA and ANIA. Assume FNMR equals p and the system operates on chunks of m actions. The genuine user can always do m actions, and with probability (1 − p) he can continue and do m more actions. After that, again with probability (1 − p), he can do m more actions, etc. So in total we find:

ANGA = m + (1 − p) × m + (1 − p)^2 × m + (1 − p)^3 × m + ...

So, ANGA = m / (1 − (1 − p)) = m / p.

Similarly, if FMR equals p, then the impostor user can always do m actions, and with probability p he can continue and do m more actions. After that, again with probability p, he can do m more actions, etc. So in total we find:

ANIA = m + p × m + p^2 × m + p^3 × m + ...

So, ANIA = m / (1 − p).


For example, Frank et al. [8] obtained an EER of 3% with chunks of 12 actions. So p = 0.03 and m = 12; hence ANGA = 12/0.03 = 400 and ANIA = 12/(1 − 0.03) ≈ 12. It is clear that an impostor can do at least m actions in a PA system, because such a system checks the identity of the user for the first time only after m actions.
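The conversion can be captured in a few lines; the sketch below reproduces the two formulas and the worked example from [8] (note that 12/(1 − 0.03) is approximately 12.4, which is rounded to 12 above):

```python
def pa_to_anga_ania(fnmr, fmr, block_size):
    """Convert periodic-authentication error rates to ANGA/ANIA (Section 4.3.1)."""
    anga = block_size / fnmr          # ANGA = m / p
    ania = block_size / (1 - fmr)     # ANIA = m / (1 - p)
    return anga, ania

# Worked example from Frank et al. [8]: EER = 3%, chunks of m = 12 actions.
print(pa_to_anga_ania(0.03, 0.03, 12))   # (400.0, 12.37...)
```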

5. Result Analysis

In this section, we analyse the results that we obtained from our performance analysis. We divide our analysis into three major parts based on the verification process (see Section 3.1). As mentioned before, in this research the total number of data sets of genuine users is 71 and hence the total number of data sets of impostor users is 4970 (71 × 70). We report the results from a one-hold-out cross-validation test in terms of ANIA and ANGA, along with the total number of impostors not detected, for different lockout thresholds (Tlockout). Also, we report the results with a user specific lockout threshold (Tus), where the threshold for lockout satisfies 50 ≤ Tus < min(Trust_genuine). Besides reporting the performance in terms of ANIA and ANGA and in terms of the 4 categories described below, we also report the results in terms of FMR and FNMR. We do realize that this is a slight abuse of terminology in the context of continuous authentication with our analysis methods, but we decided to do so to clarify the results in known terminology.

Interpretation of the tables: When we present the results from our analysis, the results are shown for 4 possible categories. The categories are defined based on genuine user lockout (+ if the genuine user is not locked out and − in case of lockout) and detection of impostor users (+ if all impostor users are detected and − if some impostors are not detected). Each of the 71 users can thus be classified into one of 4 categories:

• All Positive (+ / +): This is the best case category. In this category, the genuine user is never locked out, and all 70 impostors are detected as impostors.

• Positive vs. Negative (+ / -): In this category, the genuine user is not locked out, but some impostors are not detected by the system.

• Negative vs. Positive (- / +): In this category, the genuine user is locked out by the system, but all impostors are detected.

• All Negative (- / -): This is the worst case category. In this category, the genuine user is locked out by the system and, in addition, some of the impostors are not detected.

The column # Users shows how many users fall within each of the 4 categories (i.e. the values sum up to 71 for Set-1 and 51 for Set-2). A value in the column ANGA indicates the Average Number of Genuine Actions in case genuine users are indeed locked out by the system. If the genuine users are not locked out, then we cannot actually calculate ANGA. The column ANIA displays the Average Number of Impostor Actions and is based on all impostors that are detected. The actions of the impostors that are not detected are not used in this calculation, but the number of impostors that are not detected is given in the column # Imp. ND. This number should be seen in relation to the number of users in that particular category. For example, in the Set-1 '+ / -' category described in Table 1, we see that # Users equals 3, i.e. there are 3 × 70 = 210 impostor test sets, and only 4 impostors within these 210 are not recognized by the system as impostors.

5.1. Results

Table 1 shows the optimal result we obtained from this analysis for VP-1. The table is divided into two parts, based on the dataset used (i.e. Set-1 and Set-2). For Set-1, in total 68 users qualify for the best category with the user specific lockout threshold, whereas for the fixed threshold this number drops to 63. We can observe from the table that if we go from a user specific lockout threshold (Tus) to a fixed lockout threshold (Tlockout = 90) the results get worse. For the fixed lockout threshold, the number of best performing users decreases, and the numbers for Positive vs. Negative increase. We also see that for Set-2 all 51 users satisfy the best case category for the user specific lockout threshold.

Table 2 displays the optimal result from the analysis for VP-2. We can clearly observe that all 51 users satisfy the best case category for the user specific lockout threshold scheme on Set-2, but the ANIA increases from 5 for VP-1 to 6 for VP-2. Finally, Table 3 shows the optimal result we obtained from this analysis for VP-3.

It is not surprising that from Tables 1 to 3 we consistently see that the personal threshold performs better than the fixed system threshold.

We first applied the methods proposed by Mondal and Bours [12] for VP-1 on Set-1 (i.e. a static Trust Model with a Support Vector Machine as classifier) and obtained an ANIA of 7, with only 22 users in the best category, when using a user specific lockout threshold. Table 4 shows the result obtained from this analysis. We observe a significant improvement of the results in this research by comparing Tables 1 and 4.

Table 4. Results for VP-1 on Set-1 by using the analysis method from [12].

Categories | # Users | ANGA | ANIA | # Imp. ND
+ / +      | 22      | —    | 7    | —
+ / -      | 47      | —    | 37   | 269
- / +      | —       | —    | —    | —
- / -      | 2       | 113  | 117  | 28


Table 1. Results for VP-1 with the analysis method.

Tlockout | Categories | Set-1: # Users | ANGA | ANIA | # Imp. ND | Set-2: # Users | ANGA | ANIA | # Imp. ND
Tus      | + / +      | 68             | —    | 4    | —         | 51             | —    | 5    | —
Tus      | + / -      | 3              | —    | 14   | 4         | —              | —    | —    | —
90       | + / +      | 63             | —    | 14   | —         | 47             | —    | 19   | —
90       | + / -      | 8              | —    | 25   | 20        | 4              | —    | 37   | 5

(No users fall into the - / + or - / - categories for either threshold.)

Table 2. Results for VP-2 with the analysis method.

Tlockout | Categories | Set-1: # Users | ANGA | ANIA | # Imp. ND | Set-2: # Users | ANGA | ANIA | # Imp. ND
Tus      | + / +      | 66             | —    | 4    | —         | 51             | —    | 6    | —
Tus      | + / -      | 5              | —    | 15   | 10        | —              | —    | —    | —
90       | + / +      | 56             | —    | 15   | —         | 48             | —    | 19   | —
90       | + / -      | 15             | —    | 22   | 27        | 3              | —    | 36   | 3

(No users fall into the - / + or - / - categories for either threshold.)

We also report the overall system performance in terms of FMR and FNMR, although we are aware that this is a slight abuse of terminology in the context of continuous authentication with our analysis methods. In this case we consider FMR as the probability that an impostor user is not detected when his test data is compared to the classification model of a genuine user. Each impostor's data was tested against each genuine user's model, which means that the total number of impostor tests equals 71 × 70 = 4970 for Set-1 and 51 × 50 = 2550 for Set-2. Similarly, we define the FNMR here as the probability that a genuine user is falsely locked out by the system.

We report here the FNMR and FMR values for all the tests done in the previous section. We restrict ourselves to the optimal settings, i.e. the user dependent threshold Tus. For example, we see in Table 1 that in that case none of the genuine users is locked out and that 4 impostor users are not detected (category + / -). This implies that FNMR = 0% and FMR = 4/4970 = 0.08%. All the results are presented in Table 5. In this table the columns represent the verification processes and the rows represent the datasets (i.e. Set-1 and Set-2).

Table 5. Results in terms of (FNMR, FMR).

Data Set | VP-1        | VP-2       | VP-3
Set-1    | (0%, 0.08%) | (0%, 0.2%) | (0%, 0.06%)
Set-2    | (0%, 0%)    | (0%, 0%)   | (0%, 0.04%)

Our major focus in this research was to develop a proper CA system which can react to every action performed by the user. A change of the pre-processing can influence the results (e.g. a different feature extraction process, different feature selection techniques, or a different choice of classifiers). The goal in [1] was user identification, and the authors showed that they could reach 95% accuracy when using chunks of 10 actions.

5.2. Comparison with Previous Research

Due to the novelty of the topic, we have found only a few studies which address continuous authentication using mobile swipe gestures [7, 8, 15]. Table 6 shows the previous research results in terms of ANGA/ANIA, obtained with the conversion technique described in Section 4.3.1. We can clearly see that our methods outperform these approaches. The database we used is also large compared to these three studies, which adds to the statistical significance of this research.

The major limitation of the state-of-the-art research is that it uses fixed chunks of data (i.e. m actions) for analysis. An impostor can perform m actions before his identity is checked, even if a 0% EER is achieved. In fact, this is no longer CA, but at best periodic authentication. This limitation is mitigated by our proposed scheme.


Table 3. Results for VP-3 with the analysis method.

Tlockout | Categories | Set-1: # Users | ANGA | ANIA | # Imp. ND | Set-2: # Users | ANGA | ANIA | # Imp. ND
Tus      | + / +      | 68             | —    | 4    | —         | 50             | —    | 5    | —
Tus      | + / -      | 3              | —    | 10   | 3         | 1              | —    | 27   | 1
90       | + / +      | 57             | —    | 14   | —         | 49             | —    | 17   | —
90       | + / -      | 14             | —    | 19   | 22        | 2              | —    | 40   | 4

(No users fall into the - / + or - / - categories for either threshold.)

Table 6. Previous research results with our conversion method.

Ref  | # Users | FNMR  | FMR   | Block size | ANGA | ANIA
[8]  | 41      | 3%    | 3%    | 12         | 400  | 12
[15] | 30      | 2.62% | 2.62% | 6          | 229  | 6
[7]  | 23      | 9%    | 7%    | 8          | 89   | 9

6. Conclusion

Our analysis was extensive in the sense that we applied three different verification processes and all different combinations of the settings on two separate sets of data. Although we applied the analysis in this research to a dataset with continuous swipe gesture data, the techniques are general enough that they can also be applied to, for example, keystroke dynamics data or any other biometric data that can be used for continuous authentication on mobile devices. We have shown that our results improve significantly over the previously known performance results, even though it is hard to compare the results in a straightforward manner. In the future we intend to perform a context independent, long term experiment for continuous authentication, to provide a deployable security solution.

References

[1] M. Antal, L. Z. Szabo, and Z. Bokor. Identity information revealed from mobile touch gestures. Studia Universitatis Babes-Bolyai, Informatica, LIX:1–14, 2014.

[2] D. Ballabio and M. Vasighi. A MATLAB toolbox for self organizing maps and supervised neural network learning strategies. Chemometrics and Intelligent Laboratory Systems, 118:24–32, 2012.

[3] P. Bours. Continuous keystroke dynamics: A different perspective towards biometric evaluation. Information Security Technical Report, 17:36–43, 2012.

[4] Z. Cai, C. Shen, M. Wang, Y. Song, and J. Wang. Mobile authentication through touch-behavior features. In Biometric Recognition, volume 8232 of Lecture Notes in Computer Science, pages 386–393. Springer, 2013.

[5] A. De Luca, A. Hang, F. Brudy, C. Lindner, and H. Hussmann. Touch me once and I know it's you!: Implicit authentication based on touch screen patterns. In SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pages 987–996. ACM, 2012.

[6] T. Feng, Z. Liu, K.-A. Kwon, W. Shi, B. Carbunar, Y. Jiang, and N. Nguyen. Continuous mobile authentication using touchscreen gestures. In IEEE Conference on Technologies for Homeland Security (HST'12), pages 451–456, Nov 2012.

[7] T. Feng, J. Yang, Z. Yan, E. M. Tapia, and W. Shi. TIPS: Context-aware implicit user identification using touch screen in uncontrolled environments. In 15th Workshop on Mobile Computing Systems and Applications (HotMobile '14), pages 9:1–9:6. ACM, 2014.

[8] M. Frank, R. Biedert, E. Ma, I. Martinovic, and D. Song. Touchalytics: On the applicability of touchscreen input as a behavioral biometric for continuous authentication. IEEE Transactions on Information Forensics and Security, 8(1):136–148, Jan 2013.

[9] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3):226–239, 1998.

[10] C. Lazar, J. Taminau, S. Meganck, D. Steenhoff, A. Coletta, C. Molter, V. de Schaetzen, R. Duque, H. Bersini, and A. Nowe. A survey on filter techniques for feature selection in gene expression microarray analysis. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 9(4):1106–1119, 2012.

[11] Y. Meng, D. S. Wong, and L.-F. Kwok. Design of touch dynamics based user authentication with an adaptive mechanism on mobile phones. In 29th Annual ACM Symposium on Applied Computing, SAC '14, pages 1680–1687. ACM, 2014.

[12] S. Mondal and P. Bours. Continuous authentication using mouse dynamics. In International Conference of the Biometrics Special Interest Group (BIOSIG'13), pages 1–12, Sept 2013.

[13] S. Mondal and P. Bours. A computational approach to the continuous authentication biometric system. Information Sciences, 304:28–53, 2015.

[14] I. T. Nabney. Netlab: Algorithms for Pattern Recognition. Advances in Computer Vision and Pattern Recognition. Springer, 2002.

[15] X. Zhao, T. Feng, and W. Shi. Continuous mobile authentication using a novel graphic touch gesture feature. In 2013 IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pages 1–6, Sept 2013.
