Randomized Ensemble SVM based Deep learning with Verifiable dynamic access control using user revocation in IoT architecture

RAVULA ARUN KUMAR* and KAMBALAPALLY VINUTHNA

Department of CSE, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Andhra Pradesh 522502, India
e-mail: [email protected]; [email protected]

MS received 8 June 2020; revised 14 June 2021; accepted 30 July 2021

Abstract. As applications based on the Internet of Things (IoT) grow rapidly, the deployment of sensor nodes has also increased, so a large amount of sensed data must be processed before storage. Because IoT-based applications rely on context-aware computing, an attacker can easily inject false data. Existing mechanisms can secure the data, but the data remain subject to theft because those mechanisms rely on symmetric encryption. To provide a comprehensive security system, an access control scheme called Verifiable Dynamic Access Control using User Revocation is employed, combining CPASBE (Cipher Text Attribute Set Based Encryption) with VOMAACS (Verifiable Outsourced Multi-Authority Access Control Scheme). In CPASBE, sensitive data are protected from third parties by keeping them with the data owner; the owner is solely responsible for securing the data, so replication is avoided. The CPASBE strategy keeps the encrypted information private regardless of the reliability of the provider, which protects the information against collusion attacks.

Keywords. CPASBE (Cipher Text Attribute Set Based Encryption); VOMAACS (Verifiable Outsourced Multi-Authority Access Control Scheme).

1. Introduction

In general, numerous database clusters and additional resources are required to store big data. Storage and retrieval, however, are not the only issues.
Extracting important examples from big data, for instance patient diagnostic data, is also a fundamental issue [1]. Nowadays, such emerging applications are being developed for different scenarios, such as sensor networks. Sensors are regularly used in critical applications today and will be in the near future. Various body-sensor devices have been created for continuous monitoring of healthcare, personal fitness, and physical-activity awareness [2, 3]. Recently, many researchers have attempted to build wearable clinical devices for remote health-monitoring frameworks that continuously observe individual health conditions [4]. For example, wearable devices are used for recommending physiological activities and nutrition habits based on a multi-day period of continuous physiological monitoring of patients. During this period, wearable sensors continuously observe patients and store their health information in a data repository [5]. This enables specialists to diagnose a patient's health state and improve outcomes using not only laboratory tests but also health information gathered from the wearable body sensors. In this way, sensor information is frequently used for taking suitable action on a patient's health: treatment suggestions, lifestyle decisions, and early diagnosis, all of which are crucial to improving a patient's quality of health [6]. However, conventional storage platforms are not designed for such rapidly growing sensing deployments, where the volume, speed, and variety of data keep increasing. This calls for the development of an efficient storage framework for keeping and processing such large information [7].
Accordingly, a verified architecture and implementation of a scalable IoT design for processing and securing real-time sensor information is needed in order to adopt big-data technologies [8]. Fog computing is a distributed computing framework in which certain application services are handled at the network edge in a smart device, while others are handled in a remote data center [9, 10]. The information created by sensors embedded in various things/objects produces huge amounts of unstructured (big) data continuously.

*For correspondence

Sådhanå (2021) 46:229 © Indian Academy of Sciences
https://doi.org/10.1007/s12046-021-01705-1
where $v_i(t+1)$ is the new velocity for the $i$th particle, $c_1$ and $c_2$ are the weighting coefficients for the personal best and global best positions respectively, $p_i(t)$ is the $i$th particle's position at time $t$, $pbest_i$ is the $i$th particle's best-known position, and $pgbest$ is the best position known to the swarm. The $rand()$ function produces a uniformly distributed random variable in $[0, 1]$. Variations on this update equation consider best positions within a particle's local neighborhood at time $t$.

A particle's position is then updated by:

$$p_i(t+1) = p_i(t) + v_i(t+1)$$

The algorithm below gives a pseudocode listing of the Particle Swarm Optimization procedure for minimizing a cost function.
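The velocity and position updates above can be sketched as a minimal PSO loop. This is a generic sketch, not the paper's exact listing: the sphere objective, swarm size, inertia weight `w`, and coefficient values are illustrative assumptions.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, c1=2.0, c2=2.0, w=0.7):
    """Minimize `cost` over R^dim with basic Particle Swarm Optimization."""
    # Initialise positions and velocities; search starts in [-1, 1]^dim.
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_cost = [cost(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_cost[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - p_i(t)) + c2*r2*(pgbest - p_i(t))
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # p_i(t+1) = p_i(t) + v_i(t+1)
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < cost(gbest):
                    gbest = pos[i][:]
    return gbest, cost(gbest)

random.seed(1)  # for a reproducible run of this sketch
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3)
```

Minimizing the sphere function drives `best_cost` toward zero, matching the cost-reduction role the procedure plays here.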
4. Experimental results and discussion
The proposed system for the privacy cheating discouragement and secure computation auditing protocol is implemented on the MATLAB 2013a platform with the following system configuration.
Processor: Intel Core i5
CPU Speed: 3 GHz
RAM: 8 GB
Operating system: Windows 8
4.1 Security Analysis
In this method, the file data are encrypted using a symmetric key $k_f$ (because of the lower efficiency of ABE), and $k_f$ is in turn protected using the SHA algorithm. The confidentiality of data in our method rests on the security of Verifiable Dynamic Access Control using User Revocation.
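The hybrid pattern described here (bulk data under a fast symmetric key $k_f$, with the key itself protected separately) can be sketched as follows. The SHA-512-derived XOR keystream is purely illustrative and NOT a secure cipher; the paper's scheme would use a proper symmetric cipher for the file body.

```python
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-512-derived keystream.
    Illustrative only -- stands in for the real symmetric encryption of
    the file body under the per-file key k_f."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        # Expand the key into a keystream block by block.
        out += hashlib.sha512(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

k_f = os.urandom(32)                         # per-file symmetric key
plaintext = b"patient vitals record"
ciphertext = keystream_xor(k_f, plaintext)
recovered = keystream_xor(k_f, ciphertext)   # XOR keystream is its own inverse
```

In the full scheme, $k_f$ would then be wrapped under the attribute-based access structure rather than stored alongside the ciphertext.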
Initialization: The adversary $X$ selects the access structure $T^*$, attribute sets $c^{(1)}, c^{(2)}, \ldots, c^{(ver^*-1)}$, and version number $ver^*$, and submits them to the challenger $Y$.

Setup: $Y$ first creates version 1 of the public key for $X$ as follows: $e(b,b)^{\beta} = e(A,B)\cdot e(b,b)^{y_0} = e(b,b)^{ag+y_0}$, where $y_0 \in_R Z_p$ and $\beta = ag + y_0$. For each $n \in V$, $Y$ selects a random $r_n \in Z_p$. For each attribute set $c^{(k)}$, $1 \le k \le ver^* - 1$, it creates a public key for that version as follows: for $n \in c^{(k)}$, $1 \le n \le i$, select a random $rk_n^{(k)} \in Z_p$ and set $T_n^{(k+1)} = \bigl(T_n^{(k)}\bigr)^{rk_n^{(k)}}$; for $n \notin c^{(k)}$, set $rk_n^{(k)} = 1$ and $T_n^{(k+1)} = T_n^{(k)}$.

Stage 1: The adversary $X$ issues a secret-key query on the set $S = \{\, n \mid n \in V \text{ and } n \in T^* \,\}$ for version $k$, $1 \le k \le ver^*$. $Y$ first selects a random $r' \in Z_p$; $D$ is expressed as $D = b^{y_0 - r'} \cdot g$, and $r = ag + r'$. Observe that for any $n \in V$, $T_n^{(k)} = \bigl(T_n^{(1)}\bigr)^{rk_n^{(2)} \cdot rk_n^{(3)} \cdots rk_n^{(k)}} = \bigl(T_n^{(1)}\bigr)^{\prod_{m=2}^{k} rk_n^{(m)}}$; denote $R_n^{(k)} = \prod_{m=2}^{k} rk_n^{(m)}$. For $n \in S$, $D_n = b^{r \cdot R_n^{(k)} / t_n} = b^{(ag + r') \cdot R_n^{(k)} / (g / r_n)} = A^{r_n \cdot R_n^{(k)}} \cdot b^{r' \cdot r_n \cdot R_n^{(k)}}$, where $t_n = g / r_n$. $Y$ sends the secret key $SK_S = \bigl(k, D, \forall n \in S : D_n\bigr)$ to $X$.

Challenge: $X$ submits two messages $j_0$ and $j_1$. $Y$ flips a random coin $\mu$ and sets $C = b^c$ and $\tilde{C} = j_\mu \cdot e(b,b)^{\beta c} = j_\mu \cdot e(b,b)^{agc} \cdot e(b,b)^{c y_0}$. Following the technique for producing $q_x(0)$ in encryption, $C_x = b^{q_x(0) \cdot r_n \cdot R_n^{(k)}}$ for each leaf node $n \in T^*$.

Stage 2: Stage 1 is repeated.
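The attribute-version update step ($T_n^{(k+1)} = (T_n^{(k)})^{rk_n^{(k)}}$, with $rk_n^{(k)} = 1$ for untouched attributes) can be illustrated with modular exponentiation in a prime-order group. The modulus and generator below are illustrative assumptions, not parameters from the paper.

```python
import random

p = 2**127 - 1                      # Mersenne prime; illustrative group modulus
g = 5                               # assumed generator

def new_version(T_n: int, in_revoked_set: bool) -> tuple[int, int]:
    """Advance one public attribute component by a version:
    exponentiate by a fresh rk if attribute n is being re-keyed,
    else leave it unchanged (rk = 1)."""
    rk = random.randrange(2, p - 1) if in_revoked_set else 1
    return pow(T_n, rk, p), rk

T1 = pow(g, 123456789, p)           # version-1 component for some attribute n
T2, rk2 = new_version(T1, True)     # version 2
T3, rk3 = new_version(T2, True)     # version 3
# Composition matches the proof's R_n^(k): T^(3) = (T^(1))^(rk2*rk3 mod (p-1)).
assert T3 == pow(T1, (rk2 * rk3) % (p - 1), p)
```

The final assertion mirrors the observation $T_n^{(k)} = (T_n^{(1)})^{\prod_m rk_n^{(m)}}$ used in Stage 1: re-keying factors compose multiplicatively in the exponent.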
In this method, the cloud server acquires a partial set of the client's secret key elements and the proxy re-encryption keys, so the scheme must ensure that this leakage does not affect the confidentiality of data. During attribute addition, the data owner selects a random symmetric key $k'_f$ and $s' \in Z_p$ for the root node of the access control tree. It then produces the proxy re-encryption keys, i.e., $rk_{s \leftrightarrow s'}$, $rk_{q_x(0) \leftrightarrow q'_x(0)}$, $rk_{n \leftrightarrow n'}$. In the proxy re-encryption procedure, a cloud server cannot decrypt the ciphertext and thus cannot recover the secret number $s'$; it obtains only random numbers when receiving the proxy re-encryption keys. When a client accesses the encrypted file, the cloud servers first update the client's secret key. Cloud servers cannot acquire all the secret key elements of a user because SA is not updated every time; hence, they cannot decrypt the encrypted file data.
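The idea that a proxy can transform a ciphertext component between secrets $s$ and $s'$ without learning either can be sketched with an exponent-ratio re-encryption key in an ElGamal-style group. This is a simplified illustration under assumed parameters, not the paper's exact construction.

```python
q = 2**127 - 1                        # prime modulus (illustrative)
g = 7                                 # assumed generator

def reenc_key(old_s: int, new_s: int) -> int:
    """Proxy re-encryption key rk_{s<->s'}: the exponent ratio s'/s mod (q-1).
    Requires old_s invertible mod q-1; the proxy learns only this ratio,
    which looks random, not s or s' themselves."""
    return (new_s * pow(old_s, -1, q - 1)) % (q - 1)

s, s_new = 5, 11                      # old and new root secrets (toy values)
C = pow(g, s, q)                      # ciphertext component tied to secret s
rk = reenc_key(s, s_new)
C_new = pow(C, rk, q)                 # proxy transforms without decrypting
assert C_new == pow(g, s_new, q)      # component is now tied to s'
```

The proxy's view is only `rk` and the group elements, consistent with the claim above that the server obtains random-looking numbers rather than the secrets.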
Figures 4–9 show the computation overhead incurred in the set-up, key generation, encryption, and decryption stages under different conditions (figure 1 shows the overall flow). Figures 2 and 3 show the system-wide set-up time for various numbers of attribute authorities. The proposed method is also compared with the existing attribute-based encryption scheme [28] on parameters such as set-up time, key generation time, encryption time, and decryption time.

Figure 4 shows the total key generation time for various numbers of authorities, with the number of attributes fixed at 20.
Figures 5 and 6 show the encryption and decryption time for various numbers of attributes, with just one access privilege per file so as to measure the most frequent task; the accessed file size is 100 KB. Figures 7 and 8 show the encryption and decryption time for various file sizes, with the number of attributes fixed at 20.
4.2 Utilization Rate

The average evolutionary curves of SLPSO, GA, and the proposed method are given. The horizontal axis denotes the number of iterations and the vertical axis denotes the objective value. It can be observed that in the initial iterations SLPSO and GA converge more quickly than the proposed method, but SLPSO and GA cannot continuously evolve the swarm to find a better solution. The utilization rate of VM or memory at the $i$th time slot for the private cloud is computed as
$$C(i) = \sum_{n=1}^{S} \frac{RT_n \cdot a_{ni}}{TR}, \qquad i \in \{1, 2, \ldots, I\}$$

where $RT_n$ is the number of VMs (or the amount of memory) in use in the private cloud and $TR$ is the total number of VMs (or total memory) in the private cloud. If task $t_n$ runs in the $i$th slot, then $a_{ni} = 1$; otherwise, $a_{ni} = 0$. The average utilization rate of VM or memory is then computed by

$$Av = \sum_{i=1}^{I} \frac{C(i)}{I}$$
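The per-slot utilization $C(i)$ and its average $Av$ can be computed directly from a task-slot assignment matrix. The task counts, slot layout, and capacity below are illustrative values, not figures from the paper.

```python
def utilization(RT, a, TR):
    """C(i) = sum_n RT[n] * a[n][i] / TR for each slot i, plus the average Av."""
    I = len(a[0])
    C = [sum(RT[n] * a[n][i] for n in range(len(RT))) / TR for i in range(I)]
    return C, sum(C) / I

# Illustrative scenario: 3 tasks, 4 time slots, 10 VMs total in the private cloud.
RT = [2, 3, 1]                         # VMs held by each task
a = [[1, 1, 0, 0],                     # a[n][i] = 1 iff task n runs in slot i
     [0, 1, 1, 1],
     [1, 0, 1, 0]]
C, Av = utilization(RT, a, TR=10)
```

With these values the busiest slot (slot 1, tasks 0 and 1 running) reaches $C(1) = 5/10 = 0.5$, and $Av$ averages the four per-slot rates.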
The average virtual machine utilization rate and average memory utilization rate of the proposed method are given in table 1, which also contrasts the proposed strategy with the existing SLPSO- and GA-based methods; the comparison chart is shown in figure 9.
Figure 1. Flow diagram of the proposed system.
Figure 2. Working Illustration/Creation of Message-Digest SHA-512.
From figure 9 and table 1, it can be seen that the proposed method achieves a higher resource utilization ratio for the average virtual machine and average memory than the SLPSO algorithm and GA.
4.3 Execution Cost

Figure 10 shows the comparison of execution cost between the SLPSO algorithm, GA, and our proposed system. The graph shows that our proposed system executes at a lower cost as user demands increase; thus the cost of SLPSO and GA is higher than that of our proposed system.
4.4 Comparison of the proposed system with existing techniques
In this section, the proposed system is compared with existing approaches, namely dual steganography combined with AES, steganography with DES, and steganography with
Figure 3. Set-up Time.
Figure 4. KeyGen time with different authorities’ number.
Figure 5. Encryption time with different attributes number.
Figure 6. Decryption time with different attributes number.
Figure 7. Encryption time with the different file size.
Figure 8. Decryption time with the different file size.
AES, to evaluate the performance of the proposed system. The following parameters are considered: encryption time, encryption memory, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and embedding ratio.
4.4a Encryption time: The amount of time required to encrypt data using the selected algorithm is known as encryption time (table 2). The comparative encryption time of the algorithms is given in figure 11, where the X-axis shows the file size (in KB) of the images used for the experiments and the Y-axis shows the time consumed for encryption in seconds. In the proposed system, the SHA algorithm is combined with Randomized Ensemble Deep Learning to reduce the number of substitution and permutation steps involved in traditional AES-based encryption. Hence the proposed system takes less time: when the image size is 100 KB, it takes 118 s, whereas the existing approaches, dual steganography combined with AES, steganography with DES, and steganography with AES, take 315.48, 287.95, and 125.98 s, respectively.
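Timings like those in table 2 are typically obtained by wrapping the encryption call with a wall-clock timer. The sketch below shows one such harness; the repeated-SHA-512 "cipher" is a hypothetical stand-in for the paper's SHA plus Randomized Ensemble Deep Learning pipeline.

```python
import time, hashlib, os

def time_encrypt(encrypt, data: bytes) -> float:
    """Wall-clock seconds consumed by one encryption call."""
    t0 = time.perf_counter()
    encrypt(data)
    return time.perf_counter() - t0

def toy_encrypt(data: bytes) -> bytes:
    # Placeholder workload: iterated SHA-512 hashing, NOT the paper's scheme.
    digest = data
    for _ in range(1000):
        digest = hashlib.sha512(digest).digest()
    return digest

elapsed = time_encrypt(toy_encrypt, os.urandom(100 * 1024))  # a 100 KB payload
```

`time.perf_counter()` is monotonic and high-resolution, which makes it the usual choice over `time.time()` for benchmarking short operations.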
4.4b Encryption memory: The amount of main memory required to execute the implemented encryption algorithms is known as encryption memory. Figure 12 shows the relative performance of the proposed procedures for the space complexity of encryption. The X-axis shows the different experiments conducted by the framework, whereas the Y-axis shows the memory consumed during encryption in kilobytes. To compute the memory consumption, the following formula is used (table 3; figures 12, 13, 14).

The proposed system takes less memory since it reduces the number of repetitions by adding features derived from signature data to achieve a lightweight encryption process. The results show that when the image size is 100 KB, the proposed system uses 15.633 KB of memory, whereas the existing approaches, dual steganography combined with AES, steganography with DES, and steganography with AES, use 31.25895, 21.25987, and 25.12365 KB, respectively.
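One common way to obtain per-call memory figures like those in table 3 is Python's `tracemalloc`, which reports the peak heap allocation during a traced region. The byte-flipping "cipher" is a placeholder, not the paper's algorithm.

```python
import tracemalloc

def peak_memory_kb(fn, *args) -> float:
    """Peak Python heap allocation (in KB) while fn(*args) runs."""
    tracemalloc.start()
    fn(*args)
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return peak / 1024

def toy_encrypt(data: bytes) -> bytes:
    # Placeholder transformation standing in for the real encryption routine.
    return bytes(b ^ 0x5A for b in data)

kb = peak_memory_kb(toy_encrypt, b"\x00" * (100 * 1024))  # 100 KB input
```

Note that `tracemalloc` only sees allocations made through Python's allocator; memory used inside C extensions would need an OS-level measurement instead.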
4.4c MSE: The Mean Square Error is defined as the squared difference between the pixel values of the original image and the stego image, divided by the size of the image (table 4).
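The definition above translates directly into code; the two tiny flat pixel lists are illustrative data, not measurements from the paper.

```python
def mse(original, stego):
    """MSE: sum of squared pixel differences divided by the image size."""
    assert len(original) == len(stego)
    return sum((o - s) ** 2 for o, s in zip(original, stego)) / len(original)

# Two tiny illustrative 'images' as flat pixel lists; two pixels differ by 1.
orig = [10, 20, 30, 40]
steg = [10, 21, 30, 41]
print(mse(orig, steg))  # (0 + 1 + 0 + 1) / 4 = 0.5
```

A lower MSE means the stego image deviates less from the cover, which is the sense in which the smaller values in table 4 are better.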
In the proposed system, least-significant-bit encoding is combined with Randomized Ensemble Deep Learning to embed the encrypted data into the cover image. This approach embeds the data accurately in less time compared to existing techniques: when the image size is 100 KB, the proposed system achieves the lowest mean square error of 0.24, whereas the existing approaches, dual steganography combined with AES, steganography with DES, and steganography with AES, yield 0.551367, 0.497305, and 0.65925, respectively.
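The least-significant-bit embedding mentioned above can be sketched at the byte level. This is plain LSB substitution only; the paper's full pipeline additionally routes the embedding through its learning component, which is not modeled here.

```python
def lsb_embed(cover: bytes, payload_bits: list[int]) -> bytes:
    """Write payload bits into the least significant bit of successive cover bytes."""
    assert len(payload_bits) <= len(cover)
    out = bytearray(cover)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set it to the bit
    return bytes(out)

def lsb_extract(stego: bytes, n_bits: int) -> list[int]:
    """Read the payload back out of the stego bytes' LSBs."""
    return [stego[i] & 1 for i in range(n_bits)]

cover = bytes([100, 101, 102, 103, 104, 105, 106, 107])  # toy cover pixels
bits = [1, 0, 1, 1, 0, 0, 1, 0]                          # toy payload
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, 8) == bits
```

Because each byte changes by at most 1, the per-pixel distortion is tiny, which is why LSB embedding keeps the MSE low.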
4.4d PSNR: The PSNR measures the peak signal-to-noise ratio between two images. This ratio is frequently used as a quality measure between the original and a compressed image; the higher the PSNR, the better the quality of the compressed or reconstructed image (table 5).
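For 8-bit images, PSNR is conventionally computed as $10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$ with $\mathrm{MAX}=255$. The sketch below uses the same toy pixel lists as the MSE example; they are illustrative, not from the paper's dataset.

```python
import math

def psnr(original, stego, max_val=255):
    """PSNR = 10*log10(MAX^2 / MSE), in dB; infinity for identical images."""
    n = len(original)
    mse = sum((o - s) ** 2 for o, s in zip(original, stego)) / n
    return math.inf if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

orig = [10, 20, 30, 40]
steg = [10, 21, 30, 41]          # MSE = 0.5 for these toy pixels
value = psnr(orig, steg)         # 10*log10(255^2 / 0.5), roughly 51 dB
```

Since PSNR is inversely tied to MSE on a log scale, the low MSE of the proposed method in table 4 is exactly what produces its high PSNR in table 5.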
In the proposed system, Randomized Ensemble SVM reduces the number of repetitions of basic operations such as substitution and permutation involved in traditional AES-based encryption, and the pixel-vertex differencing technique combined with least-significant-bit encoding in the embedding process increases the quality of the embedded image; hence it achieves a higher PSNR value when
Table 1. Comparison of VM and Memory Utilization Rates of SLPSO, GA and the Proposed Method.

Utilization Rate                 | Proposed Method | SLPSO | GA
Average VM utilization rate      | 0.911           | 0.892 | 0.709
Average memory utilization rate  | 0.822           | 0.774 | 0.682
Figure 9. Average VM and Memory Operation Rate for SLPSO,
GA and the Proposed method.
Figure 10. Comparison between Execution Cost of SLPSO and
GA with Proposed method.
compared to the other existing techniques. When the image size is 100 KB, the proposed system achieves a high PSNR value of 96.45, whereas the existing approaches, dual steganography combined with AES, steganography with DES, and steganography with AES, achieve 52.673192, 54.164578, and 53.94774, respectively.
4.5 Comparison of the proposed method and the existing methods on various parameters
In this section, the proposed system is compared with existing approaches: Attribute-Based Access Control Scheme (ABACS), Novel Attribute-Based Access Control (NABAC), Multi-Authority Attribute-Based Encryption (MAABE), and Blockchain-based Multi-Authority Access Control (BMAC).
Figure 15 depicts the comparison of computation overhead. The graph reveals that the proposed method attains low computation overhead compared with the previous techniques.
Figure 16 depicts the comparison of delay. The graph reveals that the proposed method attains less delay than the previous techniques: ABACS has 500 ms, the delay decreases for NABAC, reaches its maximum for MAABE, then decreases to
Table 2. Comparison table for encryption time.

Data Size (KB) | Dual Steganography + AES (s) | Steganography + DES (s) | Steganography + AES (s) | Proposed (s)
10             | 561.10                       | 261.94                  | 114.6                   | 72
20             | 466.94                       | 308.66                  | 777                     | 79
40             | 758.12                       | 314.81                  | 116.3                   | 89
60             | 364.64                       | 350.98                  | 116.25                  | 95
80             | 285.20                       | 230.65                  | 115.31                  | 119
100            | 315.48                       | 287.95                  | 125.98                  | 120
Figure 11. Comparison graph for encryption time.
Table 3. Comparison table for memory requirement.

Data Size (KB) | Dual Steganography + AES (KB) | Steganography + DES (KB) | Steganography + AES (KB) | Proposed (KB)
10             | 39.211914                     | 19.598633                | 39.00098                 | 13.450
20             | 55.18457                      | 24.712891                | 38.4541                  | 13.930
40             | 72.585938                     | 27.175781                | 45.24316                 | 14.329
60             | 25.279297                     | 18.581055                | 37.20215                 | 14.119
80             | 22.392578                     | 20.542969                | 29.2666                  | 15.211
100            | 31.25895                      | 21.25987                 | 25.12365                 | 15.633
Figure 12. Comparison graph for memory usage.
0.375 ms for BMAC and 0.15 ms for the proposed method.
Figure 17 depicts the comparison of throughput. The graph reveals that the proposed method achieves higher throughput than the previous techniques ABACS, NABAC, MAABE, and BMAC.

Table 6 shows the comparison of the proposed method on various parameters. The computation overhead of the proposed method is 9 ms, compared with 727.6 ms for ABACS, 500 ms for NABAC, 300 ms for MAABE, and 50 ms for BMAC. The delay of the proposed method is 0.15 ms, whereas among the prior techniques ABACS has 500 ms,
Figure 13. Comparison graph for mean square error.
Figure 14. Comparison graph for PSNR value.
Table 4. Comparison table for MSE.

Data Size (KB) | Dual Steganography + AES | Steganography + DES | Steganography + AES | Proposed
10             | 0.3227109                | 0.4995313           | 0.449846            | 0.29
20             | 0.4156016                | 0.5329297           | 0.396492            | 0.30
40             | 0.6013047                | 0.5052734           | 0.454565            | 0.27
60             | 0.4822031                | 0.5118359           | 0.417235            | 0.26
80             | 0.5302891                | 0.6456641           | 0.46752             | 0.25
100            | 0.551367                 | 0.497305            | 0.65925             | 0.24
Table 5. Comparison table for PSNR.

Data Size (KB) | Dual Steganography + AES | Steganography + DES | Steganography + AES | Proposed
10             | 53.042667                | 51.145177           | 51.60016            | 97.23
20             | 57.501167                | 56.894584           | 52.14845            | 97.12
40             | 55.092265                | 51.095539           | 54.07282            | 96.23
60             | 52.307861                | 51.039496           | 51.927              | 96.16
80             | 56.981724                | 54.227387           | 52.478              | 96.03
100            | 52.673192                | 54.164578           | 53.94774            | 96.45
Figure 15. Comparison for computation overhead.
NABAC has 2.9 ms, MAABE has 1500 ms, and BMAC has 0.375 ms. The throughput of the proposed method is 90%, compared with 62% for ABACS, 65% for NABAC, 75% for MAABE, and 85% for BMAC. Thus, the proposed method efficiently enhances security.
5. Conclusion

IoT represents the next phase of the information revolution, in which billions of smart devices and sensors are interconnected to enable rapid data and information exchange under real-time constraints. The large collection of data leads to large volume consumption, which creates a pre-processing requirement; the pre-processed data are then stored in a cloud-based storage system. Since the proposed system uses machine learning algorithms for pre-processing data, it achieves better results than the existing ones. The data to be stored are then subjected to cryptography as well as learning-based steganography, so the integrity of the data is high compared with other existing methods. The proposed work therefore attains a throughput 15% higher than MAABE; similarly, the delay is reduced by 1499.85 ms and the computation overhead is diminished by 291 ms relative to MAABE. Thus, the proposed method proficiently enhances data security.