
Advances on cognitive automation at LGI2P / Ecole des Mines d'Alès

Doctoral research snapshot 2013-2014

July 2014
Research report RR/14-01


Foreword

This research report sums up the results of the 2014 PhD seminar of the LGI2P lab of the Alès National Superior School of Mines. This annual day-long meeting gathers presentations of the latest research results of LGI2P PhD students.

This year's edition of the seminar took place on June 19th. All PhD students presented their work of the past academic year. All presentations were followed by extensive time for very constructive questions from the audience.

The aggregation of the abstracts of these works constitutes the present research report and gives a precise snapshot of the research on cognitive automation led in the lab this year.

I would like to thank all lab members, in particular all PhD students and their supervisors, for their professionalism and enthusiasm in helping me prepare this seminar. I would also like to thank all the researchers who came to listen to the presentations and ask questions, thus contributing to the students' training for their thesis defenses.

I wish you all an inspiring read and hope to see you all again for next year's 2015 edition!

Christelle URTADO


Contents

First year PhD students

Hasan ABDULRAHMAN
Model of noise and color restoration

Blazo NASTOV
Contribution to model verification: Operational semantics for systems engineering modeling languages

Second year PhD students

Nawel AMOKRANE
Toward a methodological approach for engineering complex systems with formal verification applied to the computerization of small and medium-sized enterprises

Mustapha BILAL
Contribution to System of Systems Engineering (SoSE)

Mirsad BULJUBASIC
Efficient local search for large scale combinatorial problems

Sami DALHOUMI
Ensemble methods for transfer learning in brain-computer interfacing

Nicolas FIORINI
Coping with imprecision during a semi-automatic conceptual indexing process

Abderrahman MOKNI
A three-level formal model for software architecture evolution

Darshan VENKATRAYAPPA
Object matching in videos: a small report


LGI2P PHD STUDENTS' PRESENTATION DAY

THURSDAY, JUNE 19, 2014
CONFERENCE ROOM, NIMES SITE OF THE ECOLE NATIONALE SUPERIEURE DES MINES D'ALES

PROGRAM OF THE DAY

9:30   Seminar opening
9:40   1st year   Hasan ABDULRAHMAN (15 minutes / 5 minutes)
10:00  1st year   Blazo NASTOV (15 minutes / 5 minutes)
10:20  Break
10:35  2nd year   Nawel AMOKRANE (20 minutes / 10 minutes)
11:05  2nd year   Mustapha BILAL (20 minutes / 10 minutes)
11:35  2nd year   Mirsad BULJUBASIC (20 minutes / 10 minutes)
12:05  Lunch
13:45  2nd year   Sami DALHOUMI (20 minutes / 10 minutes)
14:15  2nd year   Nicolas FIORINI (20 minutes / 10 minutes)
14:45  2nd year   Abderrahman MOKNI (20 minutes / 10 minutes)
15:15  Break
15:30  2nd year   Darshan VENKATRAYAPPA (20 minutes / 10 minutes)
16:00  Wrap-up session: Yannick VIMONT
16:15  End of seminar

Thank you for your participation!

More information: http://www.lgi2p.mines-ales.fr/~urtado/SeminairesLGI2P.html


Model of Noise and Color Restoration

Hasan Abdulrahman #1, Marc Chaumont §2, Philippe Montesinos #3

# EMA, LGI2P Laboratory, Parc Scientifique G. Besse, Nimes, France

{Hasan.Abdulrahman,Philippe.Montesinos}@mines-ales.fr

§ University of Nimes, F-30021 Nimes, France
LIRMM Laboratory, UMR 5506 CNRS, University of Montpellier II

[email protected]

Abstract: Over the past few years, sophisticated techniques for dealing with steganography have been developed rapidly. These developments come along with high-resolution digital images, and the real world deals mostly with color. The challenge of detecting the presence of hidden messages in color images leads us to search for a novel steganalysis technique for color images. Steganography is the technique of hiding secret information in multimedia content (images, text, audio) in a statistically undetectable way, whereas steganalysis is the dual technique, which detects the presence or absence of secret information. The aim of this thesis is to design and implement a steganalysis system that scans and tests color images to detect hidden information through the construction of a model of color noise. This is done by separating the three color channels (R, G, B) of an image, then extracting features from each channel separately by computing noise residuals using high-pass filters and co-occurrence matrices.

Keywords: Noise residuals, Color images, Steganography, Steganalysis, Features, Ensemble classification.

1. Introduction

Graphic files are the most common data files on the Internet after text files. There are many different image formats, but only a few of them are commonly discussed with respect to steganography. In the modern information era, digital images are widely used in a growing number of applications related to military, intelligence, surveillance, law enforcement, and commercial domains. [1]

1.1 Model of noise

We can consider a noisy image to be modelled as follows:

g(x, y) = f(x, y) + n(x, y)    (1)

where f(x, y) is the original image pixel, n(x, y) is the noise term and g(x, y) is the resulting noisy pixel. There are many different models for the image noise term n(x, y), shown in Figure 1: Rayleigh noise, Erlang noise, exponential noise, uniform noise and impulse noise. [2]

Figure 1: Different noise models.
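As an illustration of equation (1), the following sketch (in Python with NumPy; the function names and parameter values are ours, not from the report) generates noisy versions of an 8-bit image under two of these noise models:

    import numpy as np

    def add_gaussian_noise(f, sigma=10.0, seed=0):
        # g(x, y) = f(x, y) + n(x, y) with n ~ N(0, sigma^2), i.i.d. per pixel
        rng = np.random.default_rng(seed)
        eta = rng.normal(0.0, sigma, size=f.shape)
        return np.clip(f.astype(np.float64) + eta, 0, 255).astype(np.uint8)

    def add_impulse_noise(f, p=0.05, seed=0):
        # impulse (salt-and-pepper) noise: each pixel becomes 0 or 255
        # with probability p, and is left untouched otherwise
        rng = np.random.default_rng(seed)
        g = f.copy()
        mask = rng.random(f.shape) < p
        g[mask] = rng.choice(np.array([0, 255], dtype=f.dtype), size=int(mask.sum()))
        return g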


1.2 Steganography

Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages. It works by replacing bits of useless or unused data in regular computer files (such as graphics, sound, text, HTML, or even hard disks) with bits of different, invisible information. This hidden information can be plain text, cipher text, or even images. Steganography (literally meaning "covered writing") dates back to ancient Greece, where common practices consisted of etching messages on wooden tablets and covering them with wax, or tattooing a shaved messenger's head, letting his hair grow back, then shaving it again when he arrived at his contact point. [3] Image steganography techniques can be divided into two groups: the spatial domain (also known as the image domain) and the transform domain (also known as the frequency domain). Spatial domain techniques embed messages in the intensity of the pixels directly. In the transform domain, on the other hand, images are first transformed and then the message is embedded in the transformed image. [4] In his 1984 landmark paper [5], Gustavus Simmons illustrated what is now widely known as steganography in terms of the prisoners' problem: Alice and Bob are arrested and locked in separate cells. They want to coordinate an escape plan, but their only means of communication is by way of messages conveyed for them by Wendy the warden. Should Alice and Bob try to exchange messages that are not completely open to Wendy, or ones that seem suspicious to her, they will be put into a high-security prison no one has ever escaped from. A block diagram of steganography is shown in Figure 2.

Figure 2: Block diagram of steganography.

1.3 Steganalysis

Steganalysis is the art and science of detecting messages hidden using steganography. The goal of image steganalysis is to discover the presence of hidden information in a given cover image. Commercial software can identify the presence of hidden information and, if possible, recover the original information. However, if the information is scattered in random form or encrypted in a form which does not comply with existing methods, then identifying the presence of information becomes difficult, and even when it is identified, reconstructing the original information is still a challenging task. [6]

2. Image Steganalysis

Algorithms for image steganalysis are primarily of two types: specific and generic. The targeted approach represents a class of image steganalysis techniques that strongly depend on the underlying steganographic algorithm used, and have a high success rate for detecting the presence of the secret message if the message was hidden with the algorithm the techniques are meant for. The blind approach represents a class of image steganalysis techniques that are independent of the underlying steganography algorithm used to hide the message, and produces good results for detecting the presence of a secret message hidden using new and/or unconventional steganographic algorithms. [7]

2.1 Targeted Image Steganalysis Algorithms

Image steganography algorithms are often based on an embedding mechanism called Least Significant Bit (LSB) embedding. Each pixel in an image is represented as a 24-bit value, composed of 3 bytes representing the R, G and B values of the three primary colors Red, Green and Blue [8]; a sketch of LSB embedding in one channel is given below.
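A minimal sketch of LSB replacement in one 8-bit channel (Python/NumPy; the function names, the use of a shared seed as a stego key, and the defaults are illustrative assumptions, not the report's implementation):

    import numpy as np

    def embed_lsb(channel, bits, seed=0):
        # Replace the least significant bit of randomly selected pixels
        # of one 8-bit channel (R, G or B) with the message bits.
        rng = np.random.default_rng(seed)
        flat = channel.flatten()              # copy; the input stays intact
        pos = rng.choice(flat.size, size=len(bits), replace=False)
        flat[pos] = (flat[pos] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
        return flat.reshape(channel.shape)

    def extract_lsb(channel, n_bits, seed=0):
        # The receiver regenerates the same pixel positions from the shared seed.
        rng = np.random.default_rng(seed)
        flat = channel.flatten()
        pos = rng.choice(flat.size, size=n_bits, replace=False)
        return flat[pos] & 1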


Images can be represented in different formats; the three most commonly used are GIF (Graphics Interchange Format), BMP (Bit Map) and JPEG (Joint Photographic Experts Group). We discuss the algorithms for each of these formats.

2.1.1 Palette Image Steganalysis

Palette image steganalysis is primarily used for GIF images. The GIF format supports up to 8 bits per pixel, and the color of a pixel is referenced from a palette table of up to 256 distinct colors mapped to the 24-bit RGB color space. LSB embedding in a GIF image changes the 24-bit RGB value of a pixel, and this can bring about a change in the palette color (among the 256 distinct colors) of the pixel. The steganalysis of a GIF stego image is conducted by performing a statistical analysis of the palette table vis-à-vis the image, and a detection is made when there is an appreciable increase in entropy (a measure of the variation in the palette colors). [9]

2.1.2 Raw Image Steganalysis

Raw image steganalysis is primarily used for BMP images, which are characterized by a lossless LSB plane. Fridrich et al. [10] proposed a steganalysis technique that studies color bitmap images for LSB embedding and provides high detection rates for shorter hidden messages. This technique makes use of the property that the number of unique colors of a high-quality bitmap image is half the number of pixels in the image. The new color palette obtained after LSB embedding is characterized by a higher number of close color pairs.

2.1.3 JPEG Image Steganalysis

JPEG is a popular cover image format used in steganography. Two well-known steganography algorithms for hiding secret messages in JPEG images are the F5 algorithm [11] and the Outguess algorithm [12].

2.2 Generic Image Steganalysis Algorithms

Generic steganalysis algorithms, usually referred to as universal or blind steganalysis algorithms, work on all known and unknown steganography algorithms. These steganalysis techniques exploit the changes in certain innate features of the cover images when a message is embedded. Generic steganalysis techniques using the Fisher Linear Discriminant (FLD), Support Vector Machines (SVM) and ensemble classifiers have been proposed to accurately differentiate between cover and stego images. [13]
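A minimal sketch of such an ensemble, in the spirit of FLD base learners trained on random feature subspaces with majority voting (the regularization term, subspace size and voting rule are our assumptions):

    import numpy as np

    def fld_train(Xc, Xs):
        # Fisher Linear Discriminant on cover (Xc) and stego (Xs) features:
        # w = (Sc + Ss)^-1 (mu_s - mu_c), threshold halfway between the means.
        mu_c, mu_s = Xc.mean(0), Xs.mean(0)
        Sw = np.cov(Xc, rowvar=False) + np.cov(Xs, rowvar=False)
        w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_s - mu_c)
        b = -0.5 * w @ (mu_c + mu_s)
        return w, b

    def ensemble_train(Xc, Xs, n_learners=50, d_sub=200, seed=0):
        # Each base learner sees a random subset of d_sub features
        # (d_sub must not exceed the total number of features).
        rng = np.random.default_rng(seed)
        learners = []
        for _ in range(n_learners):
            idx = rng.choice(Xc.shape[1], size=d_sub, replace=False)
            w, b = fld_train(Xc[:, idx], Xs[:, idx])
            learners.append((idx, w, b))
        return learners

    def ensemble_predict(X, learners):
        # Majority vote over base learners: True = stego, False = cover.
        votes = np.mean([(X[:, idx] @ w + b > 0) for idx, w, b in learners], axis=0)
        return votes > 0.5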

3. Aims of the thesis

Since the beginning of modern steganography at the end of the nineties, color steganography and color steganalysis have still hardly been studied, and people now deal with color images more than with grayscale images; this leads us to search for a novel steganalysis technique for color images. Our work aims to design and implement a steganalysis system that detects the presence of hidden information in color images through the construction of a noise model, extracted by computing noise residuals from color images using filters dedicated to color. We also aim to propose color filters that are useful for color steganalysis, and additionally to survey the state of the art of the different steganographic techniques and evaluate their security through efficient steganalysis.

4. Embedding Methods

To build the embedding methods considered for the performance analysis of the above-mentioned steganalysis techniques, we separated each color image into its three color channels, hid information in one channel, and then merged the three channels to obtain the stego image (a sketch of this pipeline follows the list). We hide information in each color channel at different payload ratios using the following steganography methods:
1 - S-UNIWARD steganography.
2 - HUGO steganography.
3 - LSB ± 1 steganography.
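A sketch of this channel-wise pipeline, where embed_fn stands in for any of the three embedding methods (the function name and defaults are ours):

    import numpy as np

    def make_stego(rgb, embed_fn, channel=0, payload=0.1, seed=0):
        # Split the cover into its R, G, B channels, hide a random message of
        # `payload` bits per pixel in one channel, then merge the channels back.
        rng = np.random.default_rng(seed)
        n_bits = int(payload * rgb.shape[0] * rgb.shape[1])
        message = rng.integers(0, 2, size=n_bits)
        stego = rgb.copy()
        stego[:, :, channel] = embed_fn(stego[:, :, channel], message)
        return stego

    # usage, with the embed_lsb sketch above:
    # stego = make_stego(cover, embed_fn=embed_lsb, channel=1, payload=0.2)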


5. Database sets used

To analyze the performance of a steganalysis technique, we need a test set of images to experiment with. The test data must include both non-stego images (raw images) and stego images (with the secret message). Also, ensemble classification needs a significant amount of training data for the classifiers. Therefore, data preparation was the first and a very important step in our work. We list below the details of each database set created, including the embedding payloads used to create the stego image databases.

1 - Cover image database: 10,000 color images taken from BOSSBase [14], a subset of the Dresden image database [15], and a subset of the Sam Houston State University database [16]; all images were converted to the Portable PixMap (ppm) format.

2 - S-UNIWARD stego image database: created with the S-UNIWARD embedding method, embedding a message at different payloads (0.1, 0.2, 0.3, 0.4, 0.5). At the time of writing this report, we have embedded 6,150 color images; the database will be completed to 10,000 stego images.

3 - HUGO stego image database: created with the HUGO embedding method, embedding a message at different payloads (0.1, 0.2, 0.3, 0.4, 0.5). At the time of writing this report, we have embedded 1,000 color images; the database will be completed to 10,000 stego images.

6. Building the rich model

Jessica Fridrich and Jan Kodovský [17] propose a general methodology for the steganalysis of grayscale digital images based on the concept of a rich model consisting of a large number of diverse submodels. The submodels consider various types of relationships among neighboring samples of noise residuals obtained by linear and nonlinear filters with compact supports. The rich model is assembled as part of the training process and is driven by the available examples of cover and stego images.

6.1 Submodels

The individual submodels of the proposed rich model are formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and non-linear high-pass filters. The features are computed in the following steps.

A - Computing residuals:

The submodels are formed from noise residuals. In our work we compute a residual for each channel (Red, Green and Blue) and for each direction (right, left, up, down, right up, left up, right diagonal and left diagonal), computed using high-pass filters of the following form:

R_ij = Pred_ij(N_ij) - c * X_ij

where c is the residual order, N_ij is a local neighborhood of pixel X_ij (with X_ij not in N_ij), and Pred_ij(.) is a predictor of c * X_ij defined on N_ij. The set {X_ij} ∪ N_ij is called the support of the residual.
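For first-order residuals (c = 1), the predictor of a pixel is simply its neighbor in the given direction. A sketch for the eight directions listed above (the wrap-around border handling is a simplification of ours):

    import numpy as np

    # Offsets (di, dj) of the predicting neighbor for each direction.
    SHIFTS = {"right": (0, 1), "left": (0, -1), "up": (-1, 0), "down": (1, 0),
              "right_up": (-1, 1), "left_up": (-1, -1),
              "right_diag": (1, 1), "left_diag": (1, -1)}

    def residual(channel, direction):
        # First-order residual R_ij = X_{i+di, j+dj} - X_ij (wrap-around borders).
        x = channel.astype(np.int32)
        di, dj = SHIFTS[direction]
        return np.roll(np.roll(x, -di, axis=0), -dj, axis=1) - x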

B - Computing truncation and quantization:

Each submodel is formed from a quantized and truncated version of the residual:

R_ij <- trunc_T(round(R_ij / q))    (2)

where q > 0 is a quantization step and trunc_T clamps values into the range [-T, T]. The purpose of truncation is to allow a description using co-occurrence matrices with a small T. The quantization makes the residual more sensitive to embedding changes at spatial discontinuities in the image (at edges and textures). For example, if T = 2, the truncated residual satisfies -2 <= R_ij <= 2. Figure 3 shows a block of noise residuals.
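Equation (2) in code (a one-line sketch; the defaults follow the example values in the text):

    import numpy as np

    def quantize_truncate(R, q=1.0, T=2):
        # R <- trunc_T(round(R / q)): round after dividing by the
        # quantization step, then clamp into [-T, T]
        return np.clip(np.round(R / q).astype(np.int32), -T, T)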


C - Computing co-occurrence matrices:

The construction of each submodel continues by computing one or more co-occurrence matrices of neighboring samples from the truncated and quantized residual of equation (2). [18] Figure 3 shows how the co-occurrence matrix is computed.

Figure 3: A block of noise residuals and the computation of its co-occurrence matrix.
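A sketch of one such co-occurrence matrix, computed over horizontally adjacent samples of the quantized residual (the scan direction and normalization are our choices; the rich model of [17] accumulates many such matrices over several directions and orders):

    import numpy as np

    def cooccurrence(Rq, T=2, order=2):
        # Joint histogram of `order` horizontally adjacent samples of the
        # quantized residual Rq, whose values lie in {-T, ..., T}.
        bins = 2 * T + 1
        C = np.zeros((bins,) * order, dtype=np.float64)
        cols = Rq.shape[1] - order + 1
        idx = tuple((Rq[:, k:k + cols] + T).ravel() for k in range(order))
        np.add.at(C, idx, 1.0)
        return C / C.sum()   # normalized feature vector of (2T+1)^order entries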

7. Schedule for future work

I will continue my research in steganalysis to design a detector of hidden information by means of filters dedicated to color images, along the following lines:
a - Try different filters or filter banks to extract more features from color images quickly.
b - Use the LibVision library (C and C++ under Linux), which was built in the LGI2P lab.
c - Test different types of steganography methods.
d - Prepare our own decoder of color images using filters dedicated to color images.
e - Try to detect hidden messages at different payload ratios and compare the results.
f - Optimize the length of the inputs to the ensemble classifiers and decrease the feature complexity.

8. Conclusions and future work

This report first gives an overview of steganalysis of color images, then lists the main existing approaches to steganalysis. The main idea is to design and implement a steganalysis system that scans and tests color images for hidden information by constructing a model of noise extracted by computing noise residuals from color images with dedicated filters, then extracting a very large set of features from these residuals in order to feed them to a classifier. In future work, we will try to develop a rich model for steganalysis (SRM) for color images by extracting features from the three channels with more advanced filters, while reducing the cost and time of managing a large database.

9. References

[1] P. Wayner, Disappearing Cryptography, Second Edition: Information Hiding: Steganography & Watermarking. Morgan Kaufmann, 2002. ISBN 978-1558607699.
[2] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proceedings of the IEEE International Conference on Computer Vision, Bombay, India, January 1998, pp. 839-846.
[3] "Hidden Data in PE-File with in U...," International Journal of Computer and Electrical Engineering (IJCEE), vol. 1, no. 5, ISSN 1793-8198, pp. 669-678.
[4] Zaidoon Kh. AL-Ani, A.A. Zaidan, B.B. Zaidan and H.O. Alanazi, "Overview: Main fundamentals for steganography," Journal of Computing, vol. 2, issue 3, March 2010, ISSN 2151-9617.
[5] G. J. Simmons, "The prisoners' problem and the subliminal channel," in Advances in Cryptology: Proceedings of CRYPTO '83, pp. 51-67, 1984.
[6] N. F. Johnson and S. Jajodia, "Steganalysis of images created using current steganography software," in Proc. Second International Workshop on Information Hiding, Lecture Notes in Computer Science, vol. 1525, pp. 273-289, 1998.
[7] "...," ... and Multimedia Signal Processing, 2011.


[8] "...," Lecture Notes in Computer Science, vol. 1525, pp. 32-47, Springer Verlag, 1998.
[9] J. Fridrich, M. Goljan, D. Hogea and D. Soukal, "Quantitative steganalysis of digital images: Estimating the secret message length," ACM Multimedia Systems Journal, Special issue on Multimedia Security, vol. 9, no. 3, pp. 288-302, 2003.
[10] J. Fridrich, R. Du and M. Long, "Steganalysis of LSB encoding in color images," in Proc. IEEE International Conference on Multimedia and Expo (ICME), vol. 3, pp. 1279-1282, New York, NY, USA, July-August 2000.
[11] A. Westfeld, "F5: A steganographic algorithm. High capacity despite better steganalysis," in Proc. Information Hiding Workshop, Lecture Notes in Computer Science, vol. 2137, pp. 289-302, 2001.
[12] Outguess universal steganography: http://www.outguess.org
[13] S. Lyu and H. Farid, "Detecting hidden messages using higher-order statistics and support vector machines," in Proc. Information Hiding Workshop, Lecture Notes in Computer Science, vol. 2578, pp. 340-354, 2002.
[14] Patrick Bas, Tomáš Filler and Tomáš Pevný, "Break our steganographic system: the ins and outs of organizing BOSS," in Information Hiding, 13th International Conference, IH 2011, T. Filler, T. Pevný, S. Craver and A. Ker, Eds., Lecture Notes in Computer Science, vol. 6958, pp. 59-70, Springer, 2011.
[15] Thomas Gloe and Rainer Böhme, "The Dresden Image Database for benchmarking digital image forensics," Journal of Digital Forensic Practice, vol. 3, no. 2-4, pp. 150-159, 2010.
[16] Sam Houston State University: http://groups.csail.mit.edu/vision/SUN/
[17] Jessica Fridrich and Jan Kodovský, "Rich models for steganalysis of digital images," IEEE Transactions on Information Forensics and Security, vol. 7, no. 3, pp. 868-882, 2012.
[18] D. Zou, Y. Q. Shi, W. Su and G. Xuan, "Steganalysis based on Markov model of thresholded prediction-error image," in Proc. IEEE Int. Conf. Multimedia and Expo, Toronto, Canada, July 9-12, 2006, pp. 1365-1368.


Contribution to Model Verification: Operational Semantics for Systems Engineering Modeling Languages

Blazo Nastov

LGI2P, Ecole des Mines d'Ales, Parc Scientifique G. Besse, 30000 Nimes, France
[email protected]

System Engineering (SE) [2] is an approach for designing complex systems based on creating, manipulating and analyzing various models. Each model is related to and specific to a domain (e.g. quality model, requirements model or architecture model). Classically, models are the subject of study of Model Driven Engineering (MDE) [5] and are nowadays built using, and conforming to, Domain Specific Modeling Languages (DSMLs). Creating a DSML for SE purposes primarily consists in defining its abstract and concrete syntaxes. An abstract syntax is given by a metamodel, while various concrete syntaxes (only graphical ones are considered here) define the representation of models, instances of metamodels. Proposing the abstract syntax and one concrete syntax of a DSML thus makes it operational for creating models, seen then as graphical representations of a part of a modeled system. Unfortunately, created models may have ambiguous meanings when reviewed by different practitioners. A DSML is then not complete without a description of its semantics, as described in [3], which also highlights four different types of semantics: denotational, given by a set of mathematical objects which represents the meaning of the model; operational, describing how a valid model is interpreted as a sequence of computational steps; translational, translating the model into another language that is well understood; and finally pragmatic, providing a tool that executes the model. The main idea of this work is to create and use such DSMLs with a focus on the problem of model verification. We aim to improve model quality in terms of construction (the model is correctly built thanks to construction rules) and in terms of relevance for reaching design objectives (the model respects some of the stakeholders' requirements), considering each model separately and in interaction with the other models of the system under study, the so-called System of Interest (SOI).

There are four main ways of verifying a given SOI model: 1) advice from a verification expert, 2) guided modeling, 3) model simulation and 4) formal proof of model properties. SE is a multi-disciplinary approach, so multiple verification experts are required. Guided modeling is a modeling approach that consists of guiding an expert to design a model, proposing different construction possibilities or patterns in order to avoid some construction errors. In these two cases, the quality of the designed models cannot be guaranteed. "Simulation refers to the application of computational models to the study and prediction of physical events or the behavior of engineered systems" [4]. To be simulated, a model requires the description of its operational semantics. We define an operational semantics as a set of formal rules describing, on the one hand, the conditions, causes and effects of the evolution of each modeling concept and, on the other hand, the temporal hypotheses (internal or external time, physical or logical time, synchrony or asynchrony of events, etc.) on the basis of which a considered model can be interpreted without ambiguity. Last, formal proof of properties is an approach that consists of using formal methods to check the correctness of a given model. The literature highlights two ways to prove model properties, either through operational semantics or through translational semantics. In both cases, a property modeling language is used to describe properties which are afterwards proved using theorem proving or model checking mechanisms.


Our goal is to provide mechanisms for model simulation and formal proof of properties. We focus on concepts, means and tools allowing to define and formalize an appropriate operational semantics for a DSML when creating its abstract and concrete syntaxes. Translational semantics are not considered here, due to their classical limitations in terms of verification possibilities. There are different ways to formally describe operational semantics, for example using first-order logic: in this case, a set of activation and deactivation equations is defined and assigned to each DSML concept, describing its behavior. Another way is through a state transition system defining the sequence of computational steps, showing how the runtime system proceeds from one state to another, as described in [3].
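A minimal sketch of a model interpreted as a state transition system (Python; the state and event names are hypothetical, not taken from a real DSML):

    from typing import Dict, List, Tuple

    # A labelled state transition system: states, plus transitions
    # fired by events (stimuli).
    Transition = Tuple[str, str, str]   # (source state, event, target state)

    class TransitionSystem:
        def __init__(self, initial: str, transitions: List[Transition]):
            self.state = initial
            self.table: Dict[Tuple[str, str], str] = {
                (src, ev): dst for src, ev, dst in transitions}

        def step(self, event: str) -> str:
            # One computational step: deterministic lookup of the next state.
            key = (self.state, event)
            if key not in self.table:
                raise ValueError(f"event {event!r} not enabled in {self.state!r}")
            self.state = self.table[key]
            return self.state

    # usage: a two-state requirement life cycle
    ts = TransitionSystem("unverified",
                          [("unverified", "prove", "verified"),
                           ("verified", "edit", "unverified")])
    ts.step("prove")   # -> "verified"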

Nowadays, there are multiple approaches and tools for defining the operational semantics of a given DSML. Unfortunately, many of them require some knowledge of imperative or object-oriented programming, and SE experts are not necessarily experts in programming. Hence, the operational semantics of a dedicated DSML should be described and formalized with minimal effort from the expert, by assisting him and automating the process as much as possible. An approach is proposed in [1] supporting state-based execution (simulation) of models created with DSMLs. The approach is composed of four structural parts related to each other and of a fifth part providing semantics relying on the previous four. Modeling concepts and the relationships between them are defined in the Domain Definition MetaModel (DDMM) package. The DDMM does not usually contain execution-related information. Such information is defined in the State Definition MetaModel (SDMM) package. The SDMM contains various sets of states related to DDMM concepts that can evolve during execution; it is placed on top of the DDMM package. Model execution is represented as successive state changes of DDMM concepts. Such changes are provoked by stimuli. The Event Definition MetaModel (EDMM) package defines different types of stimuli (events) and their relationships with DDMM concepts and the evolution of SDMM states. The Trace Management MetaModel (TM3) provides monitoring of model execution through scenarios made of stimuli and traces. The last and key part is the Semantics package, describing how the running model (SDMM) evolves according to the stimuli defined in the EDMM. It can be defined either as an operational semantics using an action language, or as a denotational semantics translating the DDMM into another language using a transformation language. Our initial objectives are to study and evaluate this approach on DSMLs of the SE field in order to become able to interpret them and to verify some properties.
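A minimal sketch of this package structure, under the assumption that each package reduces to a handful of plain classes (all names besides DDMM/SDMM/EDMM/TM3 are ours; the real approach of [1] is metamodel-based):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Concept:             # DDMM: a domain concept, no runtime data
        name: str

    @dataclass
    class ConceptState:        # SDMM: runtime state layered on a concept
        concept: Concept
        state: str = "idle"

    @dataclass
    class Event:               # EDMM: a stimulus targeting a concept
        name: str
        target: str

    @dataclass
    class Trace:               # TM3: record of stimuli and state changes
        entries: List[str] = field(default_factory=list)

    def interpret(states: List[ConceptState], scenario: List[Event], trace: Trace):
        # Semantics package: how SDMM states evolve under EDMM stimuli
        # (naive rule for illustration: the event name becomes the new state).
        for ev in scenario:
            for cs in states:
                if cs.concept.name == ev.target:
                    cs.state = ev.name
                    trace.entries.append(f"{ev.target}: -> {cs.state}")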

This work is developed in collaboration with the LGI2P (Laboratoire de Genie Informatique et d'Ingenierie de Production) of Ecole des Mines d'Ales and the LIRMM (Laboratoire d'Informatique, de Robotique et de Microelectronique de Montpellier), under the direction of V. Chapurlat, F. Pfister (LGI2P) and C. Dony (LIRMM).

References

[1] Benoît Combemale, Xavier Crégut, Marc Pantel, et al. A design pattern to build executable DSMLs and associated V&V tools. In The 19th Asia-Pacific Software Engineering Conference, 2012.

[2] ISO/IEC. ISO/IEC 15288: Systems and software engineering - System life cycle processes. IEEE, 2008.

[3] Anneke G Kleppe. A language description is more than a metamodel. 2007.

[4] JT Oden, T Belytschko, J Fish, TJR Hughes, C Johnson, D Keyes, A Laub, L Petzold, D Srolovitz, and S Yip. Simulation-based engineering science: Revolutionizing engineering science through simulation. Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science, February 2006.

[5] Douglas C Schmidt. Model-driven engineering. Computer Society-IEEE, 39(2):25, 2006.


Toward a Methodological Approach for Engineering Complex Systems with Formal Verification Applied to the Computerization of Small and Medium-Sized Enterprises

Nawel Amokrane, Vincent Chapurlat, Anne-Lise Courbis, Thomas Lambolais and Mohssine Rahhou.

Team ISOE « Interoperable System & Organization Engineering »

1 Context and objectives

Our research is driven by an industrial need of an IT-services enterprise named RESULIS, whose job is to computerize all or part of Small and Medium-sized Enterprises (SMEs) and provide them with adapted original software, designed around their business. In order to do that, RESULIS has to fully understand and harness the way the SME operates, but it lacks means to formally gather and validate the requirements of the various stakeholders (business experts, decision makers and end users), especially as these usually have different cultures and vocabularies compared to project management stakeholders (requirements engineers, designers, engineers), and do not have the skills to use requirements elicitation tools or modeling languages. This induces difficulties in managing the activities of elicitation, documentation, verification, and validation of the stakeholders' requirements. We consider that end users have to be involved as active actors in the requirements engineering activities. So we aim at assisting stakeholders in requirements authoring with simple means to autonomously provide information about the way they perform their business processes, the information and resources they use and the distribution of responsibilities in the organization. And as the quality of requirements has a critical impact on the resulting system, we have to check the quality of the produced requirements and models with a set of verification rules and techniques to ensure well-formed, non-contradictory and consistent information. The intention is to transfer verified models to developers through model transformation techniques, to accelerate the production of mock-ups that will be validated by end users. In the scope of the thesis we focus on modeling, requirements authoring and verification objectives.


2 Approach and propositions

We believe that defining requirements when developing software that manages a business activity of an enterprise amounts to defining the enterprise model, because this is what needs to be computerized. Indeed, an enterprise model formalizes all or part of the business in order to explain an existing situation or to achieve and validate a designed project [1]. Therefore we use Enterprise Modeling [1] to represent, understand and engineer the structure, behavior, components and operations of the SME. We do not aim at evaluating or optimizing its business procedures.

In order to manage the inherent complexity of enterprise systems, due to their sociotechnical structural and behavioral characteristics, we studied a set of enterprise modeling methods, architectures and standards that have been developed and used in support of the life cycle engineering of complex and changing systems, such as the CIM Open System Architecture (CIM-OSA), developed for integration in manufacturing enterprises [2], and the Generalized Enterprise Reference Architecture and Methodology (GERAM) [3], which organizes and defines the generic concepts required to enable the creation of enterprise models. These methods have influenced the creation of standards: the standard ISO/DIS 19440 proposes constructs providing common semantics and enables the unification of models developed by different stakeholders [4]. It proposes modeling constructs structured into four enterprise modeling views: function, information, resources and organization. This enterprise model view dimension enables modelers to filter their observations of the real world by emphasizing the aspects relevant to their particular interests and context. The framework for enterprise modeling standard ISO/DIS 19439 [5] takes up part of the GERA modeling framework. It provides a unified conceptual basis for model-based enterprise engineering that enables consistency and interoperability of the various modeling methodologies and supporting tools. The framework structures the entities under consideration along three dimensions: the enterprise model view, the enterprise model phase and the levels of genericity. Together with the constructs for enterprise modeling standard ISO 19440, the framework for enterprise modeling standard ISO 19439 can be considered as an operational state-of-the-art framework to manage the modeling activities of an enterprise or an information system [6].

We rely on the combination of the framework for enterprise modeling standard ISO 19439 and the constructs for enterprise modeling standard ISO 19440, which we extend with a requirements modeling view [7] to support, share and save information about stakeholder or system requirements. We propose a generic conceptual model whose constructs allow modeling the way requirements relate to the other modeling views, assessing the matching level between stakeholder and system requirements and providing justification for design decisions.

This common generic conceptual model is also a basis for consistency analysis and verification between the levels of genericity and the modeling views. The verification process allows assessing the correctness of the models and their compliance with their meta-models. It is carried out through the definition of modeling rules, consistency rules and completeness criteria.

The modeling activities are achieved by stakeholders to whom we provide simple and intuitive modeling languages. We studied a set of languages and approaches that can be used upstream of detailed design, namely: RDM


(Requirement Definition Model), part of the CIM-OSA modeling process [8]; the goal-oriented requirements languages KAOS [9] and GRL (Goal-Oriented Requirement Language) [10]; the scenario-oriented language UCM (Use Case Maps) [10]; and URML (Unified Requirements Modeling Language) [11], a high-level quality and functional system requirements language. We are not trying to make an inventory of all the languages proposed in the literature, but rather to identify the information that should be addressed while collecting and modeling requirements, so we identified the modeling views that are covered by these languages. We noticed that, along with concepts related to the requirements view (goal, expectation, hypothesis, functional / non-functional requirements, obstacles), these languages do cover some of the enterprise modeling views; this shows the potential links that requirements have with enterprise modeling constructs. We also assessed the accessibility of these languages to SMEs' stakeholders, based on their notation, orientation and basic concepts. We deduced that the studied languages are intended for experts and not for SMEs' end users. For instance, reasoning with goals for functional requirements elicitation is not natural for end users, who are more likely to describe their daily activities than to think about the motivation behind them. Furthermore, what limits the accessibility of these languages to end users is the use of notations that require special knowledge of modeling artifacts and need prior training. Accordingly, we believe that textual formulation using natural language is more appropriate for non-expert users. We propose to use languages expressing business knowledge in a subset of natural language understandable by both humans and computer systems. We defined a set of natural language boilerplates (in French, according to RESULIS' needs) on the basis of the proposed conceptual model, to represent all enterprise modeling views. We also want to offer SMEs' stakeholders the freedom to choose between textual and graphical notations, so we are now working on a set of simple graphical notations. Model transformation techniques will be settled to enable switching from one notation to another.

In order to guide end users in the definition of their enterprise model and of their requirements regarding the new software, we propose a modeling process that comprises the four following paradigms: organization modeling and role definition, function and behavior modeling, information and resource modeling, and stakeholder requirements definition.

A tool will support the modeling process. It will be endowed with verification mechanisms to (i) check the well-formedness of the requirements provided by stakeholders and the conformance of the produced models to their meta-models; (ii) detect contradictory behaviors among role and process definitions, for instance situations where stakeholders intervening in the same business process provide conflicting descriptions regarding the inputs, outputs or order of the activities; and (iii) discern non-exhaustive descriptions where, for instance, the output of an activity (that does not represent the purpose of the business process) is not used by any other activity. We use advanced techniques to ensure correct requirements writing, such as Natural Language Processing (NLP) to verify the lexical correctness of requirements, and requirements boilerplates to guide the authoring activity.
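A minimal sketch of a boilerplate-based well-formedness check of type (i) (Python; the boilerplate pattern, the actor glossary and the example requirement are hypothetical, and the actual boilerplates of this work are in French):

    import re

    # Hypothetical boilerplate: "<actor> shall <action> <object>" with an
    # optional "within <performance>" clause; ACTORS stands in for a
    # glossary elicited from the SME.
    ACTORS = {"the system", "the sales manager"}
    BOILERPLATE = re.compile(
        r"^(?P<actor>.+?) shall (?P<action>\w+) (?P<object>.+?)"
        r"( within (?P<performance>.+))?\.$", re.IGNORECASE)

    def check_requirement(text: str) -> dict:
        # The sentence must match the boilerplate and use a known actor.
        m = BOILERPLATE.match(text.strip())
        if not m:
            raise ValueError("requirement does not match the boilerplate")
        if m.group("actor").lower() not in ACTORS:
            raise ValueError(f"unknown actor: {m.group('actor')!r}")
        return m.groupdict()

    check_requirement("The system shall record each order within 2 seconds.")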


3 Conclusion

Fostering collaboration between the involved stakeholders during the software requirements elicitation and validation activities involves the construction, by the stakeholders, of a common understanding of the structure and behavior of the enterprise. We propose a requirements elicitation and validation process that is compliant with enterprise modeling reference frameworks. It uses a set of intuitive modeling languages to capture stakeholders' requirements as an entry point for the construction of the enterprise-specific model, and it is supported by verification mechanisms to ensure the quality of the models that will be used in the downstream development phases.

References

1. ...-système. In: La modélisation systémique en entreprise; C. Braesch & A. Haurat (eds.), Hermès, Paris, 1995; pp. 83-88.

2. Berio G, Vernadat F. New developments in enterprise modelling using CIMOSA. Computers in Industry; 1999; 40:99-114.

3. Bernus P, Nemes L. The contribution of the generalised enterprise reference architecture to consensus in the area of enterprise integration. Proceedings of ICEIMT97; 1997.

4. ISO/DIS Enterprise integration - Constructs for enterprise modelling. ISO/DIS 19440, ISO/TC 184/SC 5; 2004.

5. ISO/DIS CIM Systems Architecture Framework for enterprise modelling. ISO/DIS 19439, ISO/TC 184/SC 5; 2002.

6. Millet P.A. ... ERP. PhD thesis, INSA de Lyon; 2008.

7. Amokrane N, Chapurlat V, Courbis A.L., Lambolais T, Rahhou M. Modeling frameworks, methods and languages for computerizing Small and Medium-sized Enterprises: review and proposal. I-ESA. Albi, France, 2014.

8. Zelm M, Vernadat F, Kosanke K. The CIMOSA business modelling process. Computers in Industry, Amsterdam: Elsevier, 1995; pp. 123-142.

9. Darimont R, Delor E, Massonet P, Van Lamsweerde A. GRAIL/KAOS: An environment for goal-driven requirements engineering. 20th Intl. Conf. on Software Engineering; 1998.

10. ITU-T, Recommendation Z.151: User Requirements Notation (URN) - Language Definition, Geneva, Switzerland; 2008.

11. Schneider, F., Naughton, H., & Berenbach, B. A modeling language to support early lifecycle requirements modeling for SE. Procedia Computer Science, 2012; 8, 201-206.


Contribution to System of Systems Engineering (SoSE) [Internal report - LGI2P]

Mustapha Bilal, Nicolas Daclin, and Vincent Chapurlat

LGI2P, Laboratoire de Genie Informatique et d'Ingenierie de Production, ENS Mines Ales, Parc Scientifique G. Besse, 30035 Nimes cedex 1, France

{firstname.lastname}@mines-ales.fr

Abstract. It is agreed that there are similarities between Collaborative Networked Organizations (CNOs) and Systems of Systems (SoS). System of Systems Engineering (SoSE) can be distinguished from simple systems engineering. As in many other engineering and scientific disciplines, SoSE is required to conduct such complex systems. The first phase of doing SoSE is to start by building a model. Therefore, in this report, we propose a meta-model which respects the particularities of the SoS. Moreover, this meta-model includes concepts that allow analyzing the impact of interoperability on what we call the analysis perspectives of the SoS: stability, integrity and performance. Verification and simulation approaches will be used simultaneously to permit this analysis.

Keywords: System of Systems (SoS), Collaborative Networked Organizations (CNOs), Verification, Formal proofs, System Engineering, System of Systems Engineering, Interoperability

1 Overview

Nowadays, most enterprises and organizations seek to cooperate and to build up their own network in order to obtain a single Collaborative Networked Organization (CNO) which is able to perform a mission that an enterprise, organization or entity alone cannot perform [1]. In this sense, the CNO is considered as a SoS in terms of the ARCON reference modeling framework [2]. Therefore, in order to conduct this kind of complex system, it is required to propose an engineering approach: System of Systems Engineering (SoSE). The subject of SoS Engineering (SoSE) versus Systems Engineering (SE) is debated in the literature. The question has been asked: is engineering a system of systems really any different from engineering an ordinary system? [3]. Others, like us, believe that traditional Systems Engineering (SE) differs from SoS Engineering (SoSE). SoSE differs from SE in the selection phase of the entities that are supposed to be part of the SoS. Indeed, systems, most of which already exist, are selected according to their relevance, capacity and self-interest to fulfill the SoS mission. They are assembled in a way that respects the requirements of SoS stakeholders and such that their interactions allow them to fulfill this mission. During this assembly, interfaces are required, whether physical (hardware), informational (models and data exchange protocols) or organizational (rules, procedures and protocols), in order to ensure the necessary interoperability of the subsystems [4]. Moreover, their behavior, decision-making autonomy and their own organization should not be impacted or influenced more than necessary by risky situations or undesired effects resulting from the interactions between these subsystems. These interactions have not yet been studied in the literature and constitute a new and challenging research topic.

It is required to help and support the actors in charge of System of Systems (SoS) design in ensuring the quality of the design while reducing large and time-consuming modeling and analysis efforts. This has to be done whatever the size or type of the proposed SoS, the various disciplines which may be involved in its design, and the details available at design time about the systems which can be considered relevant by designers to compose the SoS. This report briefly presents the work that has been done. A methodology to achieve SoS design verification has been proposed. It is based, first, on building the SoS meta-model. Second, it proposes the fusion of two complementary approaches to model verification in order to assess the impact of interoperability on the analysis perspectives of the SoS (stability, integrity and performance), so as to maximize the robustness and reliability of the proposed SoS, particularly when facing disturbances due to subsystem interactions during the execution of the SoS operational mission. A formal property specification and proof approach allows the verification of the adequacy and coherence of the SoS model with regard to stakeholder requirements. Moreover, simulation allows the execution of the architectural model of the SoS and the identification of the impact of interoperability on the SoS analysis perspectives. The SoS meta-model is enriched with concepts and mechanisms allowing the evaluation and testing of various interoperability characteristics over the various operational scenarios resulting from the execution.

2 SoS versus CNO: similarities

Giving the definitions of SoS and CNO draws the first line of similarities. While there is no universal definition of SoS [5], there is a general consensus about several of its characteristics [6][7]: operational independence, managerial independence, evolutionary development, emergent behavior, geographic distribution, connectivity and diversity. Now, if we go through the definition of the CNO, we realize that there are a number of common characteristics between CNO and SoS. A collaborative network is "a network consisting of a variety of entities (e.g. organizations and people) that are largely autonomous, geographically distributed, and heterogeneous in terms of their operating environment, culture, social capital and goals, but that collaborate to better achieve common or compatible goals, thus jointly generating value, and whose interactions are supported by computer network" [1].


Looking at the life cycle draws the second line of similarities. Both CNO and SoS pass through the same phases of the life cycle: creation, operation, evolution, and dissolution or metamorphosis [1].

3 SoSE principles

Interfacing the entities appears in the creation phase of the CNO life cycle. One of the famous problems in CNOs is the physical integration of multiple subsystems due to the diversity of interfaces [8]. Therefore, entities are selected and involved under various conditions and constraints, particularly their interoperability, which has to be characterized prior to assembly. Indeed, this assembly establishes various interactions between the subsystems. In this context, interoperability takes on its full meaning when considering the interactions that make these subsystems able to work together. On the one hand, the interactions between subsystems are expected in order to allow the SoS to fulfill its mission. On the other hand, these interactions impose interfaces of various types: technical (e.g. software), organizational (e.g. communication rules), human/machine (e.g. touchscreens) or logical, at a high level of abstraction (e.g. resource utilization). Therefore, designers' attention then has to be concentrated on the interfaces to design. The challenge raised here is to design the interfaces which will improve interoperability by managing the interactions without affecting the entities. For that, it is required to have a modeling language adapted to SoS modeling and verification expectations. The modeling language must permit designing the requested interfaces, allowing these interfaces to be managed without inducing huge modifications or dysfunctions of each subsystem. Indeed, the modeling language must allow designers to attest that the SoS model is well constructed, well-formed and coherent with the stakeholders' requirements.

Moreover, it is important to mention that the main goal of this research is to prove that interoperability has some influence over the SoS. We have found that there is a natural tension between interoperability and each characteristic of the SoS. The dynamic evolution, the heterogeneity, the autonomy and the connectivity of the SoS are strongly linked to the notion of interoperability. We assume that changing the interoperability between the entities that form the SoS induces some changes in the analysis perspectives of the SoS [9]:

• Performance [10]: the ability of a SoS to recover its performance objectives.

• Stability [10]: the ability of a SoS to maintain its viability and to adapt to any change in its environment.

• Integrity [10]: the ability of a SoS to return to a known operating mode when facing a local modification of its existing configuration.

The developed SoS model contains some traditional concepts of systems engineering. However, new concepts related to 1) the behavioral aspects of the SoS, 2) the interactions between its entities, 3) interoperability and 4) the verification purpose have been added [9]. The SoS model is based on four Domain Specific Modeling Languages (DSMLs): a requirements modeling language, a physical modeling language, a functional modeling language and a behavioral modeling language. Executing the developed SoS model allows analyzing the impact of interoperability on the SoS analysis perspectives through step-by-step simulation and formal proof techniques [10]. A toy illustration of such a step-by-step execution is sketched below.

4 Conclusion and Perspectives

This report has briefly introduced the similarities between SoS and CNO and how System of Systems Engineering has to be used to conduct such complex systems. Moreover, it has shown that changing the interoperability between SoS entities has some effects on the analysis perspectives of the SoS: stability, integrity and performance. A SoS meta-model has been developed, taking into account new concepts that allow analyzing the impact of interoperability through step-by-step simulation and formal proof techniques.

Further work has to be done to apply the simulation and verification methodology.

References

1. Camarinha-Matos, L.M., Afsarmanesh, H., Galeano, N., Molina, A.: Collaborative networked organizations - Concepts and practice in manufacturing enterprises. Computers & Industrial Engineering 57(1) (August 2009) 46-60

2. Camarinha-Matos, L.M., Afsarmanesh, H.: Collaborative Networks: Reference Modeling. Springer (2008)

3. Sheard, S.: Is Systems Engineering for "Systems of Systems" Really Any Different? INCOSE Insight 9(1) (2006)

4. Mallek, S., Daclin, N., Chapurlat, V.: The application of interoperability requirement specification and verification to collaborative processes in industry. Computers in Industry 63(7) (September 2012) 643-658

5. Sage, A.: Processes for System Family Architecting, Design, and Integration. IEEE Systems Journal 1(1) (2007) 5-16

6. Maier, M.W.: Architecting principles for systems-of-systems. Systems Engineering 1(4) (1998) 267-284

7. Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ: Report on System of Systems Engineering (2006)

8. Reithofer, W., Naeger, G.: Bottom-up planning approaches in enterprise modelling - the need and the state of the art. Computers in Industry 33 (1997) 223-235

9. Bilal, M., Daclin, N., Chapurlat, V.: Collaborative Networked Organizations as System of Systems: a model-based engineering approach. IFIP AICT, PRO-VE (2014)

10. Bilal, M., Daclin, N., Chapurlat, V.: System of Systems design verification: problematic, trends and opportunities. Enterprise Interoperability VI (2014) 405-415


2nd year PhD Thesis Summary: Efficient Local Search for Large Scale Combinatorial Problems

Mirsad Buljubasic1, Michel Vasquez1, and Haris Gavranovic2 *

1Ecole des Mines d’Ales

LGI2P Research Center

Nimes, France

{mirsad.buljubasic,michel.vasquez}@mines-ales.fr
2 International University of Sarajevo, Bosnia and Herzegovina

[email protected]

1 Introduction

Many problems of practical and theoretical importance within the fields of Artificial Intelligence and Operations Research are of a combinatorial nature. Combinatorial problems involve finding values for discrete variables such that certain conditions are satisfied and an objective function is optimized. One of the most widely used strategies for solving combinatorial optimization problems is local search. Local search is an iterative heuristic which typically starts from a feasible solution and improves its quality iteratively. At each step, it considers only local operations to improve the cost of the solution, as in the generic sketch below.
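A generic sketch of this scheme, assuming the caller supplies a problem-specific neighbor move and cost function (both names are ours); multi-start and noising variants, as mentioned below for the MRP, wrap extra logic around this basic loop:

    import random

    def local_search(x0, neighbor, cost, max_iters=10000, seed=0):
        # Generic local search: start from a feasible solution x0 and
        # repeatedly try a random local move, keeping it only if it improves.
        rng = random.Random(seed)
        x, c = x0, cost(x0)
        for _ in range(max_iters):
            y = neighbor(x, rng)     # one local operation on x
            cy = cost(y)
            if cy < c:               # accept improving moves only
                x, c = y, cy
        return x, c

    # e.g. for bin packing: neighbor = move one item to another bin,
    # cost = number of bins used (plus a penalty for overfull bins)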

The aim of the thesis is to develop efficient local search algorithms for a few large-scale combinatorial optimization problems 3. The problems include the Machine Reassignment Problem (MRP), the Generalized Assignment Problem (GAP), the Bin Packing Problem (BPP), the Large Scale Energy Management Problem (LSEM), the SNCF Rolling Stock Problem (RSP), ...
Some of these problems are real-world industrial problems proposed by companies (Google, EDF, SNCF). Here we present the work that has been done so far, emphasizing the work of the last several months on the SNCF Rolling Stock Problem.

2 First year

In the first year of the thesis, the problems addressed were MRP, GAP, BPP and LSEM. The emphasis was on MRP, and the algorithm developed for MRP was then adapted to GAP and BPP. The Machine Reassignment Problem was proposed at the ROADEF/EURO

* co-advisor

3 Start date of the thesis: December 1st, 2012.


2012 Challenge (http://challenge.roadef.org/2012/en/), a competition organized jointly by the French Operations Research Society (ROADEF) and the European Operations Research Society (EURO). The problem was proposed by Google. The method used to solve MRP is a multi-start local search combined with a noising strategy (sketched below), and high-quality results are obtained. The method was tested on the 30 instances proposed by Google and used for the challenge evaluation. Most of the numerical results obtained are proven to be optimal, near-optimal, or the best known.
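The abstract does not detail the noising scheme; a common variant perturbs the acceptance test with random noise whose amplitude decreases to zero, so the search gradually degenerates into pure descent. A sketch under that assumption (function names are illustrative):

```python
import random

def noising_local_search(start, neighbors, cost, restarts=10,
                         iters=1_000, noise0=1.0):
    """Multi-start local search with a decreasing noising strategy (sketch)."""
    best, best_cost = start, cost(start)
    for _ in range(restarts):
        current, current_cost = best, best_cost
        for it in range(iters):
            amplitude = noise0 * (1 - it / iters)   # noise decreases to 0
            candidate = random.choice(list(neighbors(current)))
            c = cost(candidate)
            # noised acceptance: small degradations may pass early on
            if c + random.uniform(-amplitude, amplitude) < current_cost:
                current, current_cost = candidate, c
                if c < best_cost:
                    best, best_cost = current, c
    return best
```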

GAP and BPP are well-known NP-hard combinatorial optimization problems. Both are relaxations of MRP and, therefore, we use a local search algorithm similar to the one developed for MRP. The method has been tested on standard benchmarks from the literature. The results obtained with the adapted method are satisfactory, though not quite in the same range as the best results reported in the literature.

LSEM is a problem proposed at the ROADEF/EURO 2010 Challenge (http://challenge.roadef.org/2010/en/). The goal is to fulfil the demand for energy over a time horizon of several years while minimizing the total operating cost of all machinery. The problem was proposed by Électricité de France (EDF) and is a real-world industrial problem solved at EDF. The method combines constraint programming (for scheduling outages), a greedy construction procedure (for finding a feasible production plan) and local search (for solution improvement). High-quality results are obtained on the benchmarks given by EDF. There is considerable room for improving the method, which could be the subject of future work.

The work on those problems is mainly finished and more details can be found in the following published papers:

– Michel Vasquez, Mirsad Buljubasic: Une procédure de recherche itérative en deux phases : la méthode GRASP (March 2014). Chapter in the book "Métaheuristiques pour l'optimisation difficile".

– Mirsad Buljubasic, Haris Gavranovic: An Efficient Local Search with Noising Strategy for Google Machine Reassignment Problem. Annals of Operations Research, to appear.

– Mirsad Buljubasic, Haris Gavranovic: A Hybrid Approach Combining Local Search and Constraint Programming for a Large Scale Energy Management Problem. RAIRO - Operations Research 47(4): 481-500 (2013).

3 SNCF Rolling Stock Problem

Most of the work in the second year has been done on the SNCF Rolling Stock Problem, while participating in the ROADEF/EURO 2014 Challenge competition (http://challenge.roadef.org/2014/en/). The problem is defined by the French railway company SNCF.


3.1 Short description

The aim of this challenge is to find the best way to handle trains between their arrivals and departures in terminal stations. Today, this problem is shared between several departments at SNCF, so it is rather a collection of sub-problems which are solved sequentially. Between arrivals and departures in terminal train stations, trains never vanish; unfortunately, this aspect is often neglected in railway optimization approaches. Whereas in the past rail networks had enough capacity to handle all trains without much trouble, this is no longer true: traffic has increased considerably in recent years, some stations have real congestion issues, and the current trend will make this even harder to deal with in the next few years. The problem involves temporary parking and shunting on infrastructure such as platforms, maintenance facilities, rail yards and the tracks linking them. This rolling stock unit management on railway sites problem is extremely hard for several reasons: most of the induced sub-problems, such as train assignment, scheduling, gate conflicts and platform assignment, are NP-hard.

3.2 Solving method

We propose a two-phase approach combining mixed integer programming (MIP) and heuristics. In the first phase, a train assignment problem (AP) is solved with a combination of a greedy heuristic and branch-and-bound; the objective is to maximize the number of assigned departures while respecting technical constraints. In the second phase, the train scheduling problem (SP), which consists of scheduling all the trains in the station's infrastructure while minimizing the number of cancelled departures, is solved using a constructive heuristic. The goal of SP is to schedule as many assignments as possible, using the station's resources and respecting all constraints. Local search is used to improve the obtained solutions.

Several methods are proposed to solve the sub-problems: greedy algorithms, local search, tabu search, matching algorithms, branch-and-bound, depth-first search, an oscillation strategy and a multi-start method.


Ensemble methods for transfer learning in brain-computer interfacing

Sami DALHOUMI, Gérard DRAY, Jacky MONTMAIN

LGI2P, Ecole des Mines d'Alès, Parc Scientifique G. Besse, 30035 Nîmes, France.

!"#$%&'(!"#$)#*!$&+",$&%-(.

!

1 Introduction

A brain-computer interface (BCI) is a communication system that allows people suffering from severe neuromuscular disorders to interact with their environment without using the peripheral nervous and muscular systems, by directly monitoring electrical or hemodynamic activity of the brain. A BCI can be considered a pattern recognition system that classifies different brain activity patterns into different brain states according to their spatio-temporal characteristics [1]. The relevant signals that decode brain states may be hidden in highly noisy data or overlapped by signals from other brain states, and extracting such information is a very challenging issue. To do so, a long calibration phase is needed before every use of the BCI in order to gather enough data for feature selection and classifier training. Because calibration is time-consuming and tedious even for healthy users, several machine learning approaches have been proposed to address this issue. Among them, subject transfer and session transfer frameworks have been shown to be the most promising [2-3]. They consist of incorporating data recorded from other users and/or during other sessions into the learning process of the current user. To do so, most existing approaches assume that there is a common underlying brain activity pattern which they try to extract in order to build a subject-independent classification model. Although this assumption can be effective for able-bodied users, it may be too strong for disabled users, as their brain activity patterns are much more variable.

This work aims to develop new transfer learning frameworks that reduce calibration time in BCI technology while maintaining good classification accuracy. These frameworks are based on the Bayesian model averaging technique, which seems well suited to transfer learning applications, especially when learning from many sources of data. We opted for ensemble strategies because they allow modeling many patterns simultaneously and relax the assumptions made in previous work.

Page 27

Page 30: Advances on cognitive automation at LGI2P / Ecole des ...urtado/Slides/RR_14_01.pdf · Since the beginning of the modern steganography in the end of the nineties, color steganography

2 Contributions

In this section, we present two transfer learning frameworks for reducing calibration time in BCI technology. Both approaches are based on Bayesian model averaging, a data-dependent aggregation method that allows tuning classifier weights dynamically and adapting the ensemble to the brain signals of each user. We validated our approaches using two types of signals used in BCI technology: near-infrared spectroscopy (NIRS) signals and electroencephalography (EEG) signals.

2.1 Bayesian model averaging

Let $H = \{h_1, \ldots, h_m\}$ be a set of hypotheses and $D$ a training set. The probability of having class label $y$ given a feature vector $x$ is:

$$P(y \mid x, D) = \sum_{k=1}^{m} P(y \mid x, h_k) \, P(h_k \mid D) \quad (1)$$

In transfer learning, since the training and test distributions are different, the hypothesis priors should incorporate information about the test set in order to adapt the ensemble to the target distribution [4]. In this case, (1) is replaced by

$$P(y \mid x, D, T) = \sum_{k=1}^{m} P(y \mid x, h_k) \, P(h_k \mid D, T) \quad (2)$$

where $T$ is the test set.

In our application, $h_1, \ldots, h_m$ are classification models learned using data recorded from different users and/or during different sessions of the same user, and $D$ is a labeled set recorded during the calibration phase. The goal is to find a good estimation of $P(h_k \mid D, T)$ while keeping $D$ as small as possible.
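A minimal numerical sketch of this averaging rule, assuming each hypothesis is a callable returning a vector of class probabilities (a hypothetical interface, not the actual implementation):

```python
import numpy as np

def bma_predict(x, hypotheses, priors):
    """Bayesian model averaging, eq. (2):
    P(y | x, D, T) = sum_k P(y | x, h_k) * P(h_k | D, T).

    hypotheses -- list of K callables, h(x) -> class probability vector
    priors     -- array of K weights P(h_k | D, T), summing to 1
    """
    probs = np.array([h(x) for h in hypotheses])   # shape (K, n_classes)
    return priors @ probs                          # weighted average over h_k

# e.g. label = np.argmax(bma_predict(x, [h1, h2, h3], np.array([.5, .3, .2])))
```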

2.2 Graph-based transfer learning for managing brain signals variability in NIRS-based BCIs

In this approach, we model the heterogeneous NIRS data recorded from different users during different sessions by a bipartite graph $G = (V_1 \cup V_2, E)$. The two sets of vertices $V_1$ and $V_2$ correspond to the NIRS data sets and the feature set respectively. An edge $(d_i, f_j) \in E$ exists if the feature $f_j$ is an explanatory feature in the data set $d_i$. The partitioning of this bipartite graph allows the creation of groups of data sets that share (approximately) the same spatial


distribution of explanatory features. Hypotheses are learned on these groups separately. NIRS signals recorded during a new session are classified as follows: first, we find the group of data sets sharing the most similar spatial distribution of brain activity patterns, and then use the hypothesis trained on that data to predict the class labels of each trial in the new session. In real-time conditions, assuming that the spatial distribution of brain activity patterns does not vary significantly during the same session, only the first few trials (i.e., the test set $T$) are used to find the closest co-cluster in our support set. In the Bayesian model averaging framework given in (2), $P(h_k \mid D, T)$ is calculated using the "winner takes all" rule, and consequently $P(y \mid x, D, T)$ is determined using only one hypothesis $h_k$.

This approach was validated using a real NIRS data set and is accepted for publication in the proceedings of the 15th International Conference on Information Processing and Management of Uncertainty [5].

2.3 Ensemble-based transfer learning for EEG-based brain-computer interfacing

Because of the low spatial resolution of EEG signals, spatial filtering is a very important stage that needs to be performed before classification. Common spatial patterns (CSP) is the most widely used spatial filtering technique for EEG-based BCIs. It is based on the calculation of covariance matrices of the different classes, which requires enough labeled data; thus, reducing the duration of the calibration phase may dramatically deteriorate classification performance.

In this section, we present a transfer learning framework for EEG classification that allows learning CSP filters and classifiers from other BCI users, and consequently reduces calibration time for the current user. It consists of the following steps:

1. Calculate spatial filters and train the corresponding classifier for each user separately.

2. Project the small labeled set of EEG signals recorded during the calibration phase of the current user on the spatial filters of the other users.

3. Apply Bayesian model averaging to the previously learned classifiers. Classifier priors are estimated empirically using the projections of the test set of the current user.

4. Perform leave-one-trial-out cross-validation (LOOCV) on the test set of the current user in order to check whether the transfer learning framework outperforms the traditional learning approach.

5. If it does, use the transfer learning framework to predict the class labels of trials performed during the rest of the session; if not, use the traditional learning technique in order to avoid "negative transfer".

In step 3, the base classifier priors are estimated empirically from the projections of the current user's labeled trials, where $\tilde{x}_k$ denotes the projection of the feature vector $x$ on the spatial filters of user $k$.
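One plausible way to estimate these priors empirically, sketched under the assumption that a base classifier's prior is proportional to its accuracy on the current user's projected calibration trials (all names are hypothetical, and real CSP features would be log-variances of the spatially filtered signals rather than a plain projection):

```python
import numpy as np

def estimate_priors(calib_X, calib_y, spatial_filters, classifiers):
    """Empirical priors for eq. (2) (assumption: prior ~ calibration accuracy)."""
    scores = []
    for W, clf in zip(spatial_filters, classifiers):
        Z = calib_X @ W                    # project trials on user k's filters
        acc = np.mean(clf.predict(Z) == calib_y)
        scores.append(acc)
    scores = np.asarray(scores, dtype=float)
    return scores / scores.sum()           # normalize into a distribution
```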

Evaluation on a real EEG dataset (BCI competition IV, dataset 2A) showed that our approach significantly outperforms traditional learning techniques when the size of the test set is small.

Acknowledgement

We would like to thank Stéphane PERREY and Gérard DEROSIERE for the valuable collaboration and fruitful scientific discussions.

References

1. Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., Arnaldi, B.: A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces. Journal of Neural Engineering, vol. 4, R1--R13 (2007).

2. Tu, W., Sun, S.: A subject transfer framework for EEG classification. Neurocomputing, vol. 82, pp. 109--116 (2011).

3. Samek, W., Meinecke, F.C., Müller, K.R.: Transferring Subspaces Between Subjects in Brain-Computer Interfacing. IEEE Transactions on Biomedical Engineering, vol. 60, no. 8, pp. 2289--2298 (2013).

4. Gao, J., Fan, W., Jiang, J., Han, J.: Knowledge transfer via multiple model local structure mapping. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, Nevada, USA (2008).

5. Dalhoumi, S., Derosiere, G., Dray, G., Montmain, J., Perrey, S.: Graph-based transfer learning for managing brain signals variability in NIRS-based BCIs. Proceedings of the 15th International Conference on Information Processing and Management of Uncertainty (2014).


Coping with Imprecision During a Semi-automatic Conceptual Indexing Process

Nicolas Fiorini1, Sylvie Ranwez1, Jacky Montmain1, and Vincent Ranwez2

1 Centre de recherche LGI2P de l'École des Mines d'Alès, Parc Scientifique Georges Besse, F-30035 Nîmes cedex 1, France
{nicolas.fiorini,sylvie.ranwez,jacky.montmain}@mines-ales.fr

2 Montpellier SupAgro, UMR AGAP, F-34060 Montpellier, France
[email protected]

Abstract. Concept-based information retrieval is known to be a powerful and reliable process. It relies on a semantically annotated corpus, i.e. resources indexed by concepts organized within a domain ontology. The conception and enlargement of such an index is a tedious task, which is often a bottleneck due to the lack of automated solutions. In this synthesis, we first introduce a solution to assist experts during the indexing process thanks to a k-nearest neighbors approach. The idea is to let them position the new resource on a semantic map containing already indexed resources, and to propose an indexation of this new resource based on those of its neighbors. To further help users, we then introduce indicators to estimate the robustness of the indexation with respect to the indicated position and to the annotation homogeneity of nearby resources. It is possible to visually inform users of their margins of error, therefore reducing the risk of an unsatisfying annotation.

Keywords: conceptual indexing · imprecision management · visualization

1 Introduction

Over the last decade, the amount of data has been growing incessantly because of new or improved digital technologies, which give anyone the ability to create and share new content. The management of massive collections is a problem that needs to be addressed by new methods capable of handling big data. One key process is document indexing: it associates each document with metadata so that the corpus can be more easily exploited by applications such as information retrieval or recommender systems. Most of the time, annotations are made of simple words. However, ambiguous words – e.g. "jaguar" (car or animal) – and synonyms hamper such keyword-based applications. Also, no relation is considered between the words "car" and "vehicle", whereas their meanings are quite close. To overcome these problems, a widespread solution is to rely on knowledge representations such as ontologies [1]. Annotations of entities (genes, biomedical papers, etc.) using such structured vocabularies are more informative since their concepts and the relations among them tackle the


above-mentioned limitations of keyword-based approaches [2]. However, the indexing process is hard to fully automatize, and it is time-consuming when done manually by experts. Here we describe a way of interacting with a user to assist them during the indexing process. Visualization techniques are used to accurately define the neighbor documents of the one to annotate. Once this neighborhood has been identified, the system suggests concepts for characterizing the document.

2 Related work

In most existing methods, indexing only consists in ordering previously collected concepts, as in MTI [3]. More recently, various machine learning (ML) approaches were applied to learn how relevant a concept is to a given document. They all show better results than MTI: gradient boosting [4], reflective random indexing [5] and learning-to-rank [6]. Among all indexing models, Yang [7] stated that the k-nearest neighbor (kNN) approach is the only method that can scale while providing good results. This approach is based on the neighborhood of the document to annotate: each neighbor acts as a voter for each potential annotating concept. In basic applications, the most frequent concepts in the k neighbors' annotations are the ones proposed for annotating the new document. Huang et al. [6] present a more elaborate approach. First, the union of the concepts indexing the kNNs provides the set of concepts to start with. Second, the concepts are ordered thanks to the learning-to-rank [8] algorithm relying on a set of natural language features, and the 25 top concepts are returned.

3 A New Semantic Annotation Propagation Framework

The indexing process we propose consists of two steps: the building of a semantic map containing already indexed resources, and the identification of relevant concepts once the user has placed a document to be annotated on this map. Relevant concepts are identified using a kNN approach by propagating the annotations of the k neighbors of the document to be indexed. Pointing at the correct location for this document is thus a decisive action.

During the first step, the construction of the semantic map presented to the user requires i) identifying a subset of relevant resources and ii) organizing them in a visual and meaningful way. The first step is obviously crucial and can be tackled thanks to information retrieval approaches. Given an input set of relevant resources, we chose to use MDS (Multi-Dimensional Scaling) to display them on a semantic map so that resource closeness on the map reflects their semantic relatedness as much as possible.

During the second step, the annotation propagation starts with the selection of the set $N$ of the $k$ closest neighbors of the click. We make a first raw annotation $A_0$, which is the union of all annotations of $N$:

$$A_0 = \bigcup_{n_i \in N} Annotation(n_i).$$

We defined an objective function which, when maximized, gives an annotation $A^* \subseteq A_0$ that is the median of those of the elements of $N$, i.e.:

$$A^* = \underset{A \subseteq A_0}{\operatorname{argmax}} \{score(A)\}, \quad score(A) = \sum_{n_i \in N} sim(A, Annotation(n_i)) \quad (1)$$


where $sim(A, Annotation(n_i))$ denotes the groupwise semantic similarity between the two groups of concepts $A$ and $Annotation(n_i)$. This subset cannot be found using a brute-force approach, as there are $2^{|A_0|}$ candidate solutions. Therefore, the computation relies on a greedy heuristic starting from $A_0$ and deleting concepts one by one. The concept deleted at each step is the one leading to the greatest improvement of the objective function. When there is no possible improvement, the algorithm stops and returns a locally optimal $A^*$.
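A sketch of this greedy refinement, assuming a groupwise similarity function `sim(A, B)` between two concept sets (e.g. one provided by the Semantic Measures Library mentioned in Section 5):

```python
def refine_annotation(A0, neighbor_annotations, sim):
    """Greedy maximization of score(A) = sum_i sim(A, Annotation(n_i)),
    starting from the union A0 and deleting one concept per step (eq. (1))."""
    def score(A):
        return sum(sim(A, ann) for ann in neighbor_annotations)

    A = set(A0)
    current = score(A)
    while len(A) > 1:
        # evaluate every single-concept deletion, keep the best one
        best_score, best_c = max(
            ((score(A - {c}), c) for c in A), key=lambda t: t[0])
        if best_score <= current:      # no improvement: locally optimal A*
            break
        A.remove(best_c)
        current = best_score
    return A
```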

4 Coping with Imprecision

One main cause may affect the neighborhood definition, and it is correlated with one of the advantages of the method: user interaction. We propose to estimate the robustness of the proposed annotation with respect to the click position, to help the user focus on difficult cases while going faster on easier ones. The user therefore needs to know approximately the impact of a misplacement of an item on its suggested annotation. We compute an annotation stability indicator prior to displaying the map and visually help users by letting them know their margin of error when clicking. On a zone where this indicator is high, the annotation is robust to a misplacement of a new resource because all element annotations are rather similar; where this indicator is low, the annotation variability associated with a misplaced click is high. To efficiently compute those annotation stability indicators, we first split the map into smaller elementary pieces and generate the annotation corresponding to their centers. Using those pre-computed indexations, we then assess the robustness of each submap of the map M by identifying the number of connected elementary submaps sharing a similar annotation.

5 Evaluation

Our application protocol is based on scientific paper annotation. As the documents are annotated with the MeSH ontology, we rely on this structure in our application. We use the Semantic Measures Library (SML) [9] to assess the groupwise semantic similarities. In order to enhance the human-machine interaction and to improve the efficiency of the indexation process, we propose to give visual hints to the end user about the impact of a misplacement. To that aim, we color the area of the map surrounding the current mouse position that will lead to similar annotations. More precisely, the colored area is such that positioning the item anywhere in this area will lead to an indexation similar to the one obtained by positioning the item at the current mouse position. Figure 1 shows a representation of such zones on different parts of the same map.

Fig. 1. Visual hints of position deviation impact: (a) cursor on a homogeneous zone; (b) cursor on a heterogeneous zone. The cursor is surrounded by a grey area indicating the positions that would lead to a similar annotation.

6 Conclusion

In this synthesis, we describe a new method inspired by kNN approaches in which users play a key role by pointing at a location on a map, thus implicitly



defining the neighborhood of a document to annotate. In order to help the user, the system displays the homogeneity of the zone hovered over by the user; therefore, one can easily know how careful they need to be when placing the document on the map. We plan to pursue this work by studying possible algorithmic optimizations for generating annotations, which would make this method more usable.

References

1. Haav, H., Lubi, T.: A survey of concept-based information retrieval tools on the web. Proc. 5th East-European Conf. ADBIS, vol. 2, pp. 29–41, 2001.

2. Baziz, M., Boughanem, M., Pasi, G., Prade, H.: An information retrieval driven by ontology from query to document expansion. Large Scale Semantic Access to Content (Text, Image, Video, Sound), pp. 301–313, 2007.

3. Aronson, A.R., Mork, J.G., Gay, C.W., Humphrey, S.M., Rogers, W.J.: The NLM indexing initiative's medical text indexer. Medinfo, vol. 11, no. Pt 1, pp. 268–272, 2004.

4. Delbecque, T., Zweigenbaum, P.: Using Co-Authoring and Cross-Referencing Information for MEDLINE Indexing. AMIA Annu. Symp. Proc., vol. 2010, p. 147, Jan. 2010.

5. Vasuki, V., Cohen, T.: Reflective random indexing for semi-automatic indexing of the biomedical literature. J. Biomed. Inform., vol. 43, no. 5, pp. 694–700, Oct. 2010.

6. Huang, M., Névéol, A., Lu, Z.: Recommending MeSH terms for annotating biomedical articles. J. Am. Med. Informatics Assoc., vol. 18, no. 5, pp. 660–667, 2011.

7. Yang, Y.: An evaluation of statistical approaches to text categorization. Inf. Retr., vol. 1, no. 1–2, pp. 69–90, 1999.

8. Cao, Z., Qin, T., Liu, T., Tsai, M., Li, H.: Learning to rank: from pairwise approach to listwise approach. Proc. 24th Int. Conf. Mach. Learn., pp. 129–136, 2007.

9. Harispe, S., Ranwez, S., Janaqi, S., Montmain, J.: The semantic measures library and toolkit: fast computation of semantic similarity and relatedness using biomedical ontologies. Bioinformatics, vol. 30, no. 5, pp. 740–742, Mar. 2014.


A three-level formal model for software architecture evolution

Abderrahman Mokni+, Marianne Huchard*, Christelle Urtado+, Sylvain Vauttier+, and Huaxi (Yulin) Zhang‡

+ LGI2P, Ecole Nationale Superieure des Mines d'Ales, Nîmes, France
* LIRMM, CNRS and Universite de Montpellier 2, Montpellier, France
‡ INRIA/ENS Lyon, France

{Abderrahman.Mokni, Christelle.Urtado, Sylvain.Vauttier}@mines-ales.fr, [email protected], [email protected]

1 Introduction

Software evolution has gained a lot of interest during the last years [1]. Indeed, as software ages, it needs to evolve and be maintained to fit new user requirements; this avoids building new software from scratch and hence saves time and money. Handling evolution in large component-based software systems is complex, and evolution may lead to architecture inconsistencies and incoherence between design and implementation. Many ADLs have been proposed to support architecture change; examples include C2SADL [2], Wright [3] and π-ADL [4]. Although most ADLs integrate architecture modification languages, handling and controlling architecture evolution over the whole software lifecycle is still an important issue. In our work, we attempt to provide a reliable solution to architecture-centric evolution that preserves consistency and coherence between architecture levels. We propose a formal model for our three-level ADL Dedal [5] that provides rigorous typing rules and evolution rules using the B specification language [6]. The remainder of this paper is organized as follows: Section 2 gives an overview of Dedal, Section 3 summarizes our contributions, and Section 4 concludes and discusses future work.

2 Overview of Dedal, the three-level ADL

Dedal is a novel ADL that covers the whole lifecycle of component-based software. It proposes a three-step approach for specifying, implementing and deploying software architectures in a reuse-based process.

The abstract architecture specification is the first level of software architecture descriptions. It represents the architecture as designed by the architect after analyzing the requirements of the future software. In Dedal, the architecture specification is composed of component roles and their connections. Component roles are abstract and partial component type specifications; they are identified by the architect in order to search for and select corresponding concrete components in the next step.


The concrete architecture configuration is an implementation view of the software architecture. It results from the selection of existing component classes in component repositories. Thus, an architecture configuration lists the concrete component classes that compose a specific version of the software system. In Dedal, component classes can be either primitive or composite. Primitive component classes encapsulate executable code. Composite component classes encapsulate an inner architecture configuration (i.e. a set of connected component classes which may, in turn, be primitive or composite). A composite component class exposes a set of interfaces corresponding to the unconnected interfaces of its inner components.

The instantiated architecture assembly describes the software at runtime and gathers information about its internal state. The architecture assembly results from the instantiation of an architecture configuration. It lists the instances of the component and connector classes that compose the deployed architecture at runtime, together with their assembly constraints (such as the maximum number of allowed instances).

3 Summary of ongoing research

3.1 Dedal to B formalization

Dedal is a relatively rich ADL since it proposes three levels of architecture descriptions and supports component modeling and reuse. However, its present usage is limited since there is no formal type theory for Dedal components, and hence no way to decide about component compatibility and substitutability, nor about the relations between the three abstraction levels. To tackle this issue, we proposed in [7] a formal model for Dedal that supports all its underlying concepts. The formalization is specified in B, a language based on set theory and first-order logic with a flexible and simple expressiveness. The formal model is then enhanced with invariant constraints to set rules between Dedal concepts.

3.2 Intra-level and inter-level rules in Dedal

Intra-level rules in Dedal concern substitutability and compatibility between components of the same abstraction level (component roles, concrete component types, instances). Defining intra-level relations is necessary to set the architecture completeness property:

An architecture is complete when all its required functionalities are met. This implies that all required interfaces of the architecture components must be connected to a compatible provided interface.

Inter-level rules are specific to Dedal and concern relations between components at different abstraction levels, as shown in Figure 1. Defining inter-level relations is mandatory to decide about coherence between abstraction levels.

2

Page 36

Page 39: Advances on cognitive automation at LGI2P / Ecole des ...urtado/Slides/RR_14_01.pdf · Since the beginning of the modern steganography in the end of the nineties, color steganography

Fig. 1. Inter-level relations in Dedal

For instance, the conformance rule between a specification and a configuration is stated as follows:

A configuration C implements a specification S if and only if all the roles of S are realized by the concrete component classes of C.
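A schematic rendition of this rule, in Python rather than B and with a hypothetical `realizes(component_class, role)` predicate standing in for Dedal's realization relation:

```python
def conforms(specification_roles, configuration_classes, realizes):
    """Conformance sketch: C implements S iff every role of S is realized
    by at least one concrete component class of C."""
    return all(
        any(realizes(cc, role) for cc in configuration_classes)
        for role in specification_roles
    )
```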

3.3 Evolution rules in Dedal

An evolution rule is an operation that makes a change to a target software architecture by the deletion, addition or substitution of one of its constituent elements (components and connections). Each rule is composed of three parts: the operation signature, preconditions and actions. Specific evolution rules are defined at each abstraction level to perform change on the corresponding formal description. These rules are triggered by the evolution manager when a change is requested. First, a sequence of rule triggers is generated to re-establish consistency in the formal description at the initial level of change. Afterward, the evolution manager attempts to restore coherence between the other descriptions by executing the adequate evolution rules. Figure 2 presents the corresponding condition diagram of the proposed evolution process.
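The three-part structure of a rule could be rendered as follows (an illustrative sketch, not Dedal's actual B syntax):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvolutionRule:
    """Evolution rule = operation signature + preconditions + actions."""
    signature: str                               # e.g. "deleteComponent(c)"
    preconditions: List[Callable[[dict], bool]]  # guards on the architecture
    actions: List[Callable[[dict], None]]        # changes applied if guards hold

    def apply(self, architecture: dict) -> bool:
        if all(pre(architecture) for pre in self.preconditions):
            for act in self.actions:
                act(architecture)
            return True
        return False                             # rule not applicable
```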

4 Conclusion and future work

In this paper, we give an overview of our three-level ADL Dedal and its formal model. At this stage, a set of evolution rules is proposed to handle architecture change during the three steps of the software lifecycle: specification, implementation and deployment. The rules were tested and validated on sample models using a B model checker. As future work, we aim to manage the history of architecture changes in Dedal descriptions as a way to manage software system versions. Furthermore, we are considering automating evolution by integrating Dedal and the evolution rules into an Eclipse-based platform.


Fig. 2. Condition diagram of the evolution process

References

1. Mens, T., Serebrenik, A., Cleve, A., eds.: Evolving Software Systems. Springer (2014)

2. Medvidovic, N.: ADLs and dynamic architecture changes. In: Joint Proceedings of the Second International Software Architecture Workshop and International Workshop on Multiple Perspectives in Software Development (Viewpoints '96) on SIGSOFT '96 Workshops, New York, USA, ACM (1996) 24–27

3. Allen, R., Garlan, D.: A formal basis for architectural connection. ACM TOSEM 6(3) (July 1997) 213–249

4. Oquendo, F.: π-ADL: An architecture description language based on the higher-order typed π-calculus for specifying dynamic and mobile software architectures. SIGSOFT Software Engineering Notes 29(3) (May 2004) 1–14

5. Zhang, H.Y., Urtado, C., Vauttier, S.: Architecture-centric component-based development needs a three-level ADL. In: Proceedings of the 4th ECSA. Volume 6285 of LNCS, Copenhagen, Denmark, Springer (August 2010) 295–310

6. Abrial, J.R.: The B-book: Assigning Programs to Meanings. Cambridge University Press, New York, USA (1996)

7. Mokni, A., Huchard, M., Urtado, C., Vauttier, S., Zhang, H.Y.: Fostering component reuse: automating the coherence verification of multi-level architecture descriptions. Submitted to ICSEA 2014 (2014)


OBJECT MATCHING IN VIDEOS: A SMALL REPORT

Darshan Venkatrayappa, Philippe Montesinos, Daniel Diep

Ecole des Mines d'Ales, LGI2P, Parc Scientifique Georges Besse, 30035 Nimes, France
{Darshan.Venkatrayappa,Philippe.Montesinos,Daniel.Diep}@mines-ales.fr

Abstract. In this report, we propose a new approach for object matching in videos. Points of interest are extracted from the object using a simple color Harris detector. By applying our novel descriptor at these points, we obtain point descriptors or signatures. This novel deformation-invariant descriptor is made up of rotating anisotropic half-Gaussian smoothing convolution kernels. The descriptor thus obtained has a smaller dimension than the well-known SIFT descriptor; furthermore, its dimension can be controlled by varying the angle step of the rotating filter. We achieve Euclidean invariance by computing the Fast Fourier Transform (FFT) between two signatures. Deformation invariance is achieved using Dynamic Time Warping (DTW).

1 Introduction

Object matching has found prominence in a variety of applications such as image indexing in image databases, object detection and tracking, shape matching and image classification. In a nutshell, object matching can be defined as matching a model representing an object to an instance of that object in another image. Object matching methods are of two types: 1) direct methods and 2) feature-based methods. Lucas and Kanade [2] came up with a direct method, in which a parametric optical flow mapping is sought between two images so as to minimize the sum of squared differences between objects in two different images. In contrast, feature-based methods such as SIFT [10] and ASIFT [12] are designed to be scale and affine invariant. In this case, the most common approach to match objects across images is to find the points of interest of the object in each image, then compute descriptors of the regions surrounding these points, and finally match these descriptors across images. The development of these methods has led to the genesis of new point detectors and descriptors that are invariant to changes in image transformation [1].

Using random ferns [13], the authors are able to achieve real-time object matching in videos. In this approach they bypass the patch preprocessing step by using a naive Bayesian classification framework, and produce an algorithm that


is simple, efficient, and robust. The only drawback of this approach is the off-line training stage, which is very time-consuming. In [7], the authors have come up with a linear formulation that simultaneously matches feature points and estimates a global geometric transformation in a constrained linear space. The linear scheme reduces the search space based on the lower convex hull property, so that the problem size is largely decoupled from the original hard combinatorial problem. They achieve accurate, efficient, and robust scale- and rotation-invariant object matching in videos.

This report is organised as follows: Section 2 describes our point descriptor. Section 3 deals with affine-invariant image matching. Section 4 presents the experiments and results. The final section discusses the conclusion and future work.

2 POINT DESCRIPTOR

We use a point descriptor as in [11], where the authors use an anisotropic derivative half-Gaussian filter. We have replaced the derivative filter with a smoothing filter oriented along a direction θ; switching from a derivative to a smoothing filter is intended to reduce the size of the descriptor, thereby increasing the frame rate. This filter is described by:

$$g_{(\sigma_\xi,\sigma_\eta)}(x, y, \theta) = C \cdot S_y\!\left(R_\theta \begin{pmatrix} x \\ y \end{pmatrix}\right) \cdot e^{-(x\ y)\, Z\, (x\ y)^t} \quad (1)$$

on a considered pixel at (x, y), with:

$$Z = R_\theta^{-1} \begin{pmatrix} 1/(2\sigma_\xi^2) & 0 \\ 0 & 1/(2\sigma_\eta^2) \end{pmatrix} R_\theta,$$

where $\sigma_\xi$ and $\sigma_\eta$ control the size of the Gaussian along the two orthogonal directions, radial and axial; $S_y$ is a sigmoid function (along the Y axis) used to "cut" the Gaussian kernel smoothly; $R_\theta$ is a 2D rotation matrix; and $C$ is a normalization coefficient.
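A numpy sketch of one such oriented kernel, under a simplified reading of (1): an anisotropic Gaussian rotated by θ whose sigmoid cut S_y is replaced by a hard half-plane mask:

```python
import numpy as np

def half_gaussian_kernel(size, sigma_xi, sigma_eta, theta_deg):
    """Anisotropic half-Gaussian smoothing kernel oriented along theta
    (sketch: hard half-plane cut instead of the sigmoid S_y)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(theta_deg)
    xi = np.cos(t) * x + np.sin(t) * y      # rotate into the kernel frame
    eta = -np.sin(t) * x + np.cos(t) * y
    g = np.exp(-(xi**2 / (2 * sigma_xi**2) + eta**2 / (2 * sigma_eta**2)))
    g[eta < 0] = 0.0                        # keep only one half of the Gaussian
    return g / g.sum()                      # normalization coefficient C

# 18-dimensional signature: one response per orientation, 20-degree steps
kernels = [half_gaussian_kernel(21, 5.0, 1.5, th) for th in range(0, 360, 20)]
```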

Depending on the application, we increment the direction parameter θ in steps of 5°, 10°, etc. to obtain a set of half-Gaussian smoothing kernels which scans, by convolution, the surroundings of a point from 0 to 360 degrees. The convolution of a point in an image with all the kernels results in an intensity function which depends on the direction of the kernel. Illumination invariance, as proposed by the diagonal illumination model [8] (eq. 2), is achieved by normalizing channel by channel:

$$(R_2, G_2, B_2)^t = M \cdot (R_1, G_1, B_1)^t + (T_R, T_G, T_B)^t, \quad (2)$$

where $(R_1, G_1, B_1)^t$ and $(R_2, G_2, B_2)^t$ are the color inputs and outputs respectively, $M$ is a diagonal 3×3 matrix, and $(T_R, T_G, T_B)^t$ represents a colour transition vector over the 3 channels.
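A sketch of such channel-by-channel normalization (assuming zero-mean, unit-variance scaling per channel, which cancels both the diagonal gain M and the transition vector):

```python
import numpy as np

def normalize_channels(img):
    """Normalize each color channel independently (illumination model, eq. (2))."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        ch = img[..., c]
        out[..., c] = (ch - ch.mean()) / (ch.std() + 1e-12)  # gain and offset removed
    return out
```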


3 AFFINE INVARIANT IMAGE MATCHING

The descriptor discussed in the previous section does not directly provide Euclidean or deformation invariance. Euclidean invariance is easily obtained by computing the correlation between the descriptor curves describing the two points; the phase between the two curves is given by the location of the maximum of correlation. The correlation between two curves can be obtained at low computational cost using an FFT (FFTW3).
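A sketch of this phase recovery using numpy's FFT (circular cross-correlation of two descriptor curves sampled over 0-360 degrees; the argmax locates the relative rotation):

```python
import numpy as np

def best_rotation(sig_a, sig_b):
    """Circular cross-correlation via FFT between two descriptor curves.
    Returns (peak correlation, shift in samples)."""
    corr = np.fft.ifft(np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))).real
    shift = int(np.argmax(corr))
    return corr[shift], shift
```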

Since angles are not preserved under deformations or projective transforms, correlation alone is insufficient for ranking the match between a point in an image and the same point seen in a second image under a change of viewpoint. In such a situation, curve deformation is needed to obtain affine-invariant correlation scores. The simplest way to transform one curve into another is to make use of the dynamic time warping (DTW) algorithm. DTW is a popular similarity measure between two temporal signals; in [9] the authors used an improved DTW for time-series retrieval in pattern recognition. When transforming two curves, DTW does not take any affine transformation into account, so in order to obtain warping compatible with affine transformations, we need to introduce some constraints into the original DTW algorithm.
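For reference, a minimal textbook DTW distance between two curves, without the affine constraints discussed above:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences (sketch)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```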

4 EXPERIMENTS AND DISCUSSION

Fig. 1: The first six color images show the matching output of our method; the last six gray images show the matching output of SIFT. The numbers (18, 28, 35, 55, 71, 98) are the frame numbers, identical for both SIFT and our method.

The software is implemented in C/C++. We use an Intel machine with 4 cores. We have tested our method on one video sequence. The


first video, which we call the fish sequence, was obtained from YouTube. It is a short sequence of 132 frames showing a fish moving in an aquarium. In the middle of the sequence, the fish overlaps another big yellow fish, making its appearance indistinguishable due to the lack of texture. The video is of low quality due to heavy compression. We compared our results with the SIFT descriptor; since there is no video implementation of SIFT, we extracted significant frames from the video and applied SIFT to those individual frames. The SIFT code is provided by [10].

We first tested our method on the fish video sequence. For this particular sequence we used a rotating filter angle step ∆θ = 20°, which results in a descriptor with 18 dimensions. From Fig. 1, we can clearly see that our method outperforms SIFT; this holds for all the frames in the sequence, which was verified manually. The results on this video demonstrate that our method can deal with low-quality videos and images. In this case, the frame rate achieved was around 11 frames per second on average.

5 Conclusion

We propose a new method for image matching using a novel deformation-invariant descriptor made up of rotating anisotropic half-Gaussian smoothing convolution kernels. Using DTW and FFT, we are able to achieve deformation and rotation invariance. Experimental results show that our descriptor, with a dimension as low as 18, provides matching performance similar to that of SIFT. We can further improve our method by making it invariant to scale changes using a scale-space representation. We could reach a video rate of 24 frames per second by implementing a fast version of DTW and parallel programming. We believe our method is generic and can be used to solve problems related to image retrieval and object tracking.

References

1. Chiu, H.P., Lozano-Perez, T.: Matching Interest Points Using Affine Invariant Concentric Circles. Proc. Intl. Conference on Pattern Recognition (ICPR), pp. 167–170 (2006).

2. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. Proc. IJCAI, pp. 674–679 (1981).

3. Bay, H., Tuytelaars, T., Van Gool, L.J.: SURF: Speeded Up Robust Features. Proc. ECCV, pp. 404–417 (2006).

4. Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust Wide Baseline Stereo from Maximally Stable Extremal Regions. Proc. BMVC, pp. 1–10 (2002).

5. Lobaton, E.J., Vasudevan, R., Alterovitz, R., Bajcsy, R.: Robust topological features for deformation invariant image matching. Proc. ICCV, pp. 2516–2523 (2011).

6. Magnier, B., Montesinos, P., Diep, D.: Texture removal by pixel classification using a rotating filter. Proc. ICASSP, pp. 1097–1100 (2011).

7. Jiang, H., Yu, S.X.: Linear solution to scale and rotation invariant object matching. Proc. CVPR, pp. 2474–2481 (2009).

8. Finlayson, G.D., Funt, B.V., Barnard, K.: Color Constancy under Varying Illumination. Proc. ICCV, pp. 720–725 (1995).

9. Lemire, D.: Faster retrieval with a two-pass dynamic-time-warping lower bound. Pattern Recognition, pp. 2169–2180 (2009).

10. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, vol. 60, pp. 91–110 (2004).

11. Palomares, J.L., Montesinos, P., Diep, D.: 3DIP Image Processing and Applications, vol. 8290 (2012).

12. Morel, J.M., Yu, G.: ASIFT: A new framework for fully affine invariant image comparison. SIAM Journal on Imaging Sciences, vol. 2 (2009).

13. Ozuysal, M., Calonder, M., Lepetit, V., Fua, P.: Fast Keypoint Recognition Using Random Ferns. IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, pp. 448–461 (2010).
