Using Kinect for Smart Costume Design
José Henrique Magalhães H00022955
August 2012
Computer Science
School of Mathematical and Computer Sciences
Dissertation submitted as part of the requirements for the award of the degree of MSc in Creative Software Systems
HERIOT-WATT UNIVERSITY
Using Kinect for Smart Costume Design
MSc Dissertation
José Henrique Magalhães
H00022955 16/08/2012
Supervisor: Sandy Louchart
Second Reader: Fairouz Kamareddine
Declaration
I, José Henrique Martins Azeredo de Magalhães, confirm that this work submitted for
assessment is my own and is expressed in my own words. Any uses made within it of the
works of other authors in any form (e.g., ideas, equations, figures, text, tables, programs) are
properly acknowledged at any point of their use. A list of the references employed is
included.
Signed:_____________________________
Date:_______________________________
Abstract
The project we propose was developed as part of the “Smart Costumes: Smart Textiles and Wearable Technology in Pervasive Computing Environments” project, and it aims to provide an initial study of how Kinect performs under different lighting conditions. For that purpose, a computer application capable of detecting, tracking and measuring colour using the Kinect sensor was created.
Two different experiments were conducted in which several samples were exposed to four different lighting environments (conditions). The RGB values of each sample were collected under each lighting environment, using either a high-resolution picture taken with a digital camera or the developed application together with the Kinect sensor.
It was found that lighting conditions significantly affect Kinect’s performance, and that Kinect performs better under brighter lighting conditions than under dimmer ones. It was also found that lighting conditions significantly affect the colour detected by Kinect. Finally, it was found that, of the three primary colours – red, green and blue – green is the colour most affected by lighting conditions.
Acknowledgements
There are not enough words to express the author’s gratitude towards all the people who supported and helped him throughout the project that is now presented. Nevertheless, leaving a note of thanks here is the least that can be done.
Even if imperfectly conveyed in these short words, I would like to thank Sandy Louchart, my supervisor, and Lynsey Calder for their patience, support and help throughout the entire project. I would also like to thank Stefano Padilla and Andrew MacVean for their valuable help in setting up the experiments and evaluating the data collected, respectively.
Other people deserve to see their names mentioned here for all the support they gave me. They know, however, that there is no need to name them to express what I feel.
To you all, thank you very much for everything.
Table of Contents
ABSTRACT IV
ACKNOWLEDGEMENTS V
ABBREVIATIONS XII
CHAPTER 1 – INTRODUCTION 13
1.1. AIMS & OBJECTIVES 15
1.2. OUTLINE OF THE REPORT 16
CHAPTER 2 – LITERATURE REVIEW 18
2.1. SMART TEXTILES 18
2.2. WORKSHOP 23
2.3. CHROMIC MATERIALS 27
2.3.1. Thermochromic Materials 27
2.4. MICROSOFT KINECT SENSOR DEVICE 29
2.4.1. Kinect Software Development Tools 34
2.4.1.1. Microsoft Kinect SDK 34
2.4.1.2. OpenNI Framework 34
2.4.1.3. OpenKinect Libfreenect Software 36
2.4.1.4. Discussion 36
2.4.2. IDE 37
2.4.3. Image Processing Libraries 39
2.5. CONCLUSION 40
CHAPTER 3 – REQUIREMENTS ANALYSIS 41
3.1. USERS OF THE APPLICATION 41
3.2. REQUIREMENTS 41
3.2.1. Software/Hardware Requirements 43
3.2.2. Evaluation Criteria 44
CHAPTER 4 – TECHNICAL DESCRIPTION 45
4.1. TECHNICAL DECISIONS 45
4.2. APPLICATION 46
4.3. APPLICATION TESTING 50
4.3.1. Colour Detection/Tracking Testing 50
4.4. POSSIBLE TECHNICAL FUTURE WORK 52
CHAPTER 5 – EVALUATION 53
5.1. METHOD 53
5.1.1. Design 54
5.1.2. Participants 54
5.1.3. Apparatus 54
5.1.4. Procedure 55
5.1.4.1. Experiment 1 55
5.1.4.2. Experiment 2 56
5.2. RESULTS 58
5.2.1. Red Colour Results 60
5.2.2. Green Colour Results 61
5.2.3. Blue Colour Results 62
5.3. DISCUSSION 64
CHAPTER 6 – PROFESSIONAL, LEGAL AND ETHICAL ISSUES 66
CHAPTER 7 – PROJECT PLAN 67
7.1. PROJECT TASK ANALYSIS 67
7.2. PROJECT SCHEDULE 68
7.3. RISK ANALYSIS 68
CHAPTER 8 – CONCLUSION & FUTURE WORK 70
REFERENCES 72
BIBLIOGRAPHY 76
APPENDICES 77
APPENDIX A – SEQUENCE DIAGRAMS 77
APPENDIX C – DEVELOPER’S GUIDE 91
APPENDIX D – SAMPLES 109
APPENDIX E – EXPERIMENTS RESULTS 114
APPENDIX F – EVALUATION RESULTS 119
List of Figures
Figure 1 - Concepts of autonomic systems and materials [3]. ...................................................................... 20
Figure 2 – Figure illustrating technologies that could be used .................................................................... 24
Figure 3 – Project concept and description of how it could work ............................................................... 24
Figure 4 – Cyclic pattern of changes ............................................................................................................ 25
Figure 5 – Research Questions (cont.) and Possible Project Titles .............................................................. 26
Figure 6 – External stimuli and their corresponding chromic name [4]. .................................................... 27
Figure 7 – Wallpaper created by Elisa Strozyk [16]. ..................................................................................... 28
Figure 8 – Colour-change expression. Left: before colour change. Right: iridescence colour changing [21]. .............................. 29
Figure 9 - Kinect Sensor .............................................................................................................................. 29
Figure 10 – Kinect’s Motion Sensor 2 ........................................................................................................... 30
Figure 11 - Kinect’s Skeletal Tracking 2 ...................................................................................................... 30
Figure 12 - Kinect’s Facial Recognition 2 .................................................................................................... 31
Figure 13 - Kinect’s Voice Recognition ....................................................................................................... 31
Figure 14 – Abstract Layered View of the OpenNI concept [35]. ................................................................ 35
Figure 15 – Overall Platform Rankings [37]. ............................................................................................... 38
Figure 16 – Visual Studio Results [37]. ........................................................................................................ 38
Figure 17 – Extended WPF Toolkit “UpDown box” .................................................................................... 46
Figure 18 – UML Diagram ........................................................................................................................... 47
Figure 19 – Blob Detection .......................................................................................................................... 48
Figure 20 – Aforge.NET Colour Filter ........................................................................................................ 49
Figure 21 – Experiment 1 Set Up ................................................................................................................. 55
Figure 22 – Experiment 1 Set Up 2 .............................................................................................................. 56
Figure 23 – Experiment 2 Set Up ................................................................................................................. 57
Figure 24 – Experiment 2 Set Up 2 .............................................................................................................. 57
Figure 25 – Effects of lighting on RGB colour values .................................................................................. 58
Figure 26 – Red value variation from Light0 ............................................................................................... 58
Figure 27 – Green value variation from Light 0 .......................................................................................... 59
Figure 28 – Blue values variation from Light 0 ........................................................................................... 59
Figure 29 – Mean value of Red ..................................................................................................................... 60
Figure 30 – Estimated marginal means of Red values ................................................................................. 61
Figure 31 – Mean values of Green ................................................................................................................ 61
Figure 32 – Estimated marginal means of Green values ............................................................................. 62
Figure 33 – Mean values of Blue .................................................................................................................. 63
Figure 34 – Estimated marginal means Blue values .................................................................................... 63
Figure 35 – Work Breakdown Structure ..................................................................................................... 67
Figure 36 - Gantt Chart................................................................................................................................ 68
List of Tables
Table 1 – Main Features Testings ................................................................................................................ 51
Table 2 – Project Risk Matrix including Contingency Plan ........................................................................ 69
Abbreviations
UI User Interface
API Application Programming Interface
VGA Video Graphics Array
SDK Software Development Kit
IDE Integrated Development Environment
NI Natural Interaction
GPU Graphic Processing Unit
VS Visual Studio
UML Unified Modelling Language
WPF Windows Presentation Foundation
MSDN Microsoft Development Network
OpenCV Open Source Computer Vision
Chapter 1
Introduction
The project we are proposing is going to be developed as a part of the “Smart
Costumes: Smart Textiles and Wearable Technology in Pervasive Computing Environments”
research project. Therefore, an introduction and description of this project is needed to
contextualise the subject of our work.
The “Smart Costumes: Smart Textiles and Wearable Technology in Pervasive
Computing Environments” is a research project that is currently under development at Heriot-
Watt University. Many researchers from several different areas of expertise have been working
together on this research project, and their aim is to explore the interfaces and interaction
between smart textiles and smart – or pervasive – environments.
It’s the researchers’ intention to focus their work initially on performance rather than
everyday interaction. Performance is a time-limited non-task-related activity in which
interaction can be studied in a purer form than in more routine environments. Researchers
will target dancers within a pervasively-enhanced performance environment. The research
will explore the integration of colour-change technology (thermochromics and
photochromics) and electronics within a smart costume to create different scenarios of
interaction and communication.
The interactive space will be enhanced with embedded computing technology, for
example, sensors and microprocessors. The combination of technologies and computing
environments could be used to direct and create new forms of performance. From a computer
science perspective, the research will open up a completely new interaction modality for
pervasive computing environments. This adaptive approach is particularly relevant to
emergent systems where user input is generated in real-time rather than pre-determined
through a sequence of scripted interaction. This research is timely as there is a growing gap
between the development of multi-modal technologies and concrete exploitations of their
potential.
In terms of smart textile design, the research will explore the application of chromic
and fluorescent dye systems to create multiple colour-change signals that switch between
non-emissive to emissive displays. These effects will be explored alongside integrated
electronic systems supported by specific expertise in electronic engineering. The project will
lead to a new dimension in the established area of research in electronics and microsystem
packaging and manufacturing technologies, enabling the development of a new area in
embedded and flexible electronics, and sensors for non-conventional applications in the
creative and entertainment industries and markets.
Finally, researchers identified several possible outcomes of their project:
Immediate outcomes:
A prototype dance costume that is enabled to respond to signals from the dance
environment and to the actions of the dancers.
More generically:
A completely new interaction modality for pervasive computing environments;
New forms of performance;
New forms of digital storytelling and pervasive games;
Novel applications of thermochromic and other chromic dye systems;
New embedded and flexible electronics and sensors for non-conventional applications
in the creative and entertainment industries and markets.
Having introduced the “Smart Costumes” research project, we will now introduce our
proposed project.
After a workshop/brainstorm session with one of the “Smart Costumes” project
research associates, we were able to identify several possible areas of work that would help
the researchers in their project and that would be feasible for a Master’s project. The main
ideas discussed during that workshop concerned the possibility of applying computer vision
knowledge and technologies to the performing arts. One of those ideas was the possible
integration of the Kinect sensor with the interactive space (dance environment). The Kinect
would be used to detect colour changes in the prototype dance costume and, depending on
those changes, the system would change the dance environment (in this case, the environment
temperature). As it is known, the dance environment lightning conditions can rapidly and
repeatedly change during a performance, fact that made us raise the following questions: how
well can Kinect perform under different lighting conditions? How do lighting conditions
affect the colour detected by Kinect? After some literature review, we weren’t able to find
any previous research on or related to this particular subject. The necessity to answer to those
questions led us to the proposed project.
Therefore, we propose the development of a computer application that will be used,
among other things, to measure Kinect’s Sensor performance under different lighting
conditions. With the development of this application, we also intend to provide to the “Smart
Costumes” project researchers a tool that can help them reach their goals faster, or, at least, a
tool that help them explore other areas that were not initially foreseen.
This project will be done in collaboration with Lynsey Calder, designer and “Smart
Costumes” project research associate.
1.1. Aims & Objectives
The aim of this project is to create an application capable of detecting, tracking and measuring colour using a Kinect sensor device. For this, a computer application, which will make use of the data captured by the Kinect sensor, will be created. This application will be able to automatically calculate the RGB values of the colour detected, values that, when collected under different lighting conditions, will allow us to test our hypothesis.
One of the objectives of this project is also the creation of a software application that
can be used by current and future researchers working on the “Smart Costumes” project. This
application will allow researchers to:
Automatically detect colour changes in thermochromic materials. Depending on the
detected colour change, the application will generate an output which can be used
later for several things, such as, for example, emitting a command to an electronic switch
connected to the computer.
Use the Kinect’s depth sensor to track and record the performer’s space position data.
At the end of this project we intend to be able to answer the following questions:
how well can Kinect perform under different lighting conditions? How do lighting conditions
affect the colour detected by Kinect?
We hypothesise that the better the lighting conditions are, the better Kinect’s colour detection will perform.
1.2. Outline of the Report
After presenting this first chapter, where an introduction of the project was made, we
will, in the second chapter, provide a literature review and background research on the
theoretical concepts related to the proposed project. In this second chapter, we will also
describe the workshop day we had with Lynsey Calder, designer and “Smart Costumes”
project research associate. In Chapter 3 we describe and analyse the necessary requirements
identified during the workshop, requirements that need to be delivered at the end of the
project, as well as the necessary requirements for the successful evaluation of our system.
In Chapter 4 we present and discuss the technical aspects of the newly developed
application. In Chapter 5 we describe how the evaluation was conducted, the main findings
and the results.
Chapter 6 discusses the professional, legal and ethical issues that may arise during the evaluation part of the project, and how they can be addressed. In Chapter 7 we present the
project plan, with a diagram showing the project’s work breakdown structure, a Gantt chart
that illustrates the project schedule, and a project risk matrix identifying, among others,
possible risks that may occur during the project and a description of what can be done to
prevent or minimise their occurrence.
Finally, in the eighth and final chapter, we draw a conclusion about this project and
suggest some possible future work.
Chapter 2
Literature Review
2.1. Smart Textiles
The term “Smart Textiles” was first introduced in Japan in 1989, and the first textile material to be labelled a “smart textile” was a silk thread incorporating shape memory. The smart textiles concept derives from intelligent or smart materials, and experts [1] agree that smart textiles are able to sense and then react or adapt to stimuli from the environment, where both the stimulus and the response may have a mechanical, chemical, thermal, electrical, magnetic or other origin [2][3]. The direct responses to these stimuli include automatic changes in shape, colour, geometry, volume and other visible physical properties. The indirect responses can include changes at electrical, molecular, thermal or magnetic levels that are not necessarily visible to the naked eye, but are able to trigger reactions or programmed functions [1]. A simple example of a smart material is a fabric with dyes that change colour in the presence of heat or UV light.
According to S. Tang and G. Stylios [1], wearable electronics cannot fall within the concept of smart textiles. Wearable electronics simply add electronic components to garments in order to perform the sensing and responding activities, increasing their functionality, while smart textiles are able to sense and respond to environmental stimuli. In the same context, C. Norstebo [4] claims that wearable computing “should be called as an intelligent solution, but never an intelligent textile when it is not including textile which themselves are defined as intelligent”. J. Berzowska [5] also states that “an electronic textile refers to a textile substrate that incorporates capabilities for sensing, communication, power transmission, and interconnection technology to allow sensors or things such as information processing devices to be networked together within a fabric. This is different from the smart textiles that feature scientific advances in materials research…” Despite these opinions, most of the literature conceptualises electronic textiles as intelligent or smart textiles, yet these only gain intelligent attributes when they have the capacity to sense and respond to environmental stimuli, which they achieve through the use of electronic technologies.
X. Zhang and X. Tao [6][7][8] categorized smart textiles according to their manner of
reaction, dividing them into three subgroups:
Passive smart textiles – that can only sense the stimuli from the environment [6];
Active smart textiles – that can sense the stimuli from the environment and react to
them, because besides being sensors they also have an actuator function [7];
Very smart textiles – that have the ability to sense the stimuli from the environment
and react to those stimuli adapting their behaviour to the circumstances [8].
In addition to these three subgroups, X. Tao [3] also considered that a higher level of intelligence can be achieved, forming the intelligent materials subgroup: materials that are “capable of responding or activated to perform a function in a manual or pre-programmed manner.”
Very smart textiles have been studied in recent years due to their ability to change their physical properties in a smart way when they are stimulated by the environment. The concept behind very smart textiles is that they have their own way of sensing external stimuli, which results in changes in their properties [9].
L. Langenhove and C. Hertleer [2] state that for a textile to be considered a smart
textile it has to incorporate in its structure two main components, a sensor and an actuator, possibly complemented by a processing unit. X. Tao [3] presented two concepts of autonomic systems and materials, basically describing how a smart textile works, as illustrated in Figure 1. There we can see that sensors detect triggers or stimuli from the
environment, signals that are then analysed and evaluated by the processor. Based on the
detected and evaluated signals, the actuators generate responses or actions either directly or
from a central control unit. In Figure 1, (a) illustrates the concept where the three processes
are incorporated in smart textiles in conventional autonomic systems, while (b) illustrates the
concept where the three separated processes are incorporated in smart textiles and need to be
linked by a controlling system.
Figure 1 - Concepts of autonomic systems and materials [3].
Advanced materials such as breathable, fire-resistant or ultra-strong fabrics are, according to L. Langenhove and C. Hertleer’s definition [2], not considered intelligent fabrics, regardless of how technologically advanced the material might be.
According to Norstebo [4], smart textiles can be divided into five groups (this is an
example where the author considers electronic textiles as being a smart textile):
Phase change materials;
Shape memory materials;
Chromic materials;
Other intelligent fabrics;
Electronic/conductive textiles.
Phase change materials are materials that can store or release large amounts of
thermal energy when they melt or freeze. Shape memory materials are materials that have the
ability to “remember” their original shape, i.e., materials that can return to their original
shape after they get deformed. Chromic materials are materials that are able to change their
colour depending on external stimuli, like light and heat for example. Finally, electronic
textiles are the textiles that combine fabrics with electronic components. For the purposes of
this project, we will only look at the chromic materials, more specifically, the thermochromic
materials.
Scientists in the textiles area have been working together with scientists of many
different areas, such as computational, electronic, chemical and physical sciences, among
others. A lot of research and development has already been done by academic and research
institutions and companies, and most of it resulted in a lot of different smart applications to
be used in medicine, sports, buildings, aerospace, military, etc. Most of the developments
resulted in some sort of wearable textiles, more specifically clothes.
Sports brands like Adidas and Nike have been integrating smart textiles in their
products for some time now. Clothes that regulate body temperature or that pull water and
moisture away from the body, keeping it dry, are just some simple examples of smart textiles.
The amount of research that is being done based on smart textiles is vast. Next, we will
present some examples of such research.
Scientists at the Singapore Agency for Science, Technology and Research Institute of
Materials Research and Engineering and the National University of Singapore have created a
smart material that is capable of dissipating high impact energy. This lightweight, flexible,
comfortable and form-fitting material is soft and fluid when it is at rest, but becomes rigid
upon impact, protecting a person from knocks and falls or injuries from weapons. Some of its
possible applications are “body armour, sports protective equipment, surgical garments, and
even aerospace energy absorbent materials” [10]. D3O (www.d3o.com/) is a good example of
this type of smart materials and also of how they can be applied to the consumers market.
Fibre manufacturer DSM Dyneema and protective suit specialist TST Sweden have
been working together to create suits that “protect operators using water jets operating at up
to 2000 bar in hydro demolition and water blasting applications”. Dyneema fibres, due to
their high strength and low weight, are widely used in the development of high-performance
cut-resistant gloves, in law enforcement and military inserts and shields [11], and, more
recently, in an area other than clothing, have also been used in a major construction project: the world’s largest offshore wind farm, currently under construction. Here, Dyneema fibres were used to build a system of lifting slings capable of lifting 65-metre, 650-tonne steel “mono piles”. The new system is seven times lighter than, and as strong as, conventional slings made of steel wire [12].
In another area, US company Waubridge Specialty Fabrics developed a stitch-bonded
nonwoven performance fabric. This lightweight, breathable, very flexible, highly durable and
abrasion resistant fabric is a flame-resistant blend of carbon-based fibres capable of providing
protection from spark, weld spatter, arc flash and extreme heat and flame [13].
Smart textiles represent the new generation of fibres and fabrics. The smart textiles field is progressively becoming one of the most important research fields for engineers, scientists and designers, and its success depends on their multidisciplinary teamwork. The emerging development of smart textiles has led to new applications that have changed the way we think about materials, sensors and actuators. In his book, X. Tao [3] identified some of the possible research and development areas and grouped them as follows:
For sensors/actuators:
o Photo-sensitive materials;
o Fibre-optics;
o Conductive polymers;
o Thermal sensitive materials;
o Shape memory materials;
o Intelligent coating/membrane;
o Chemical responsive polymers;
o Mechanical responsive materials;
o Microcapsules;
o Micro and nanomaterials.
For signal transmission, processing and controls:
o Neural network and control systems;
o Cognition theory and systems.
For integrated processes and products:
o Wearable electronics and photonics;
o Adaptive and responsive structures;
o Biomimetics;
o Bioprocessing;
o Tissue engineering;
o Chemical/drug releasing.
The smart textiles area is, without any doubt, an area with huge potential. Being a multidisciplinary area, it is dependent on advances in other areas. As we know, technology areas like materials science, polymer chemistry, computer and electronic sciences, nanotechnology, etc., are constantly evolving, which means that smart textiles will continue evolving as well. Technology is getting smaller, faster and cheaper and, because of that, new areas of research, development and production of smart textiles will become possible in the near future.
Having introduced the main concept of smart textiles, we will now describe the workshop that
was recently organised with Lynsey Calder, designer and Heriot-Watt University research
associate on the “Smart Costumes” research project, the aim of that workshop, and the
conclusions drawn at the end of it.
2.2. Workshop
We recently organised a workshop with Lynsey Calder, designer and Heriot-Watt
University research associate on the “Smart Costumes” research project. The aim of this
workshop was to spend a day with someone that is actually working on a project and try to
find an idea that would help them in their project and that would be feasible for a Master’s
project.
The day started with a meeting in which the aim of the workshop and our main objectives in relation to the Master’s project were explained to Lynsey. After the
initial meeting, she described the “Smart Costumes” project in more detail, giving us her
vision of the project and what it is that they want to achieve with their research. She also
introduced us to the concepts of smart textiles and thermochromic materials, using several
examples of work done so far to better illustrate them.
After this initial introduction to some key concepts, we started to discuss how we
could possibly integrate Kinect with the “Smart Costumes” project. Then, after discussing
what would and would not be possible to achieve, taking into account factors like the available technologies (Figure 2), the level of expertise needed and the available time, we started to build a
concept, identifying how the different technologies could be connected and work with each
other (Figure 3).
Figure 2 – Figure illustrating technologies that could be used
Figure 3 – Project concept and description of how it could work
After identifying the project concept and how the different technologies could be
connected with each other in the system, we created a possible scenario describing how the
system could operate. That possible scenario can be described as follows: a performer,
wearing a colour-changing smart costume and performing a choreography, will be tracked
using a Kinect sensor. The Kinect sensor will be connected to a computer running an application that will, among other things, fire a specific event depending on the colour change detected on the smart costume. The computer will be connected to an electronic switch which, in turn, will be connected to a heat source (e.g., a hairdryer) and a cold source (e.g., a fan). Depending on the output generated by the specific application event, the electronic switch will turn the heat source on and the cold source off, or else turn the heat source off and the cold source on. This will make the
environment temperature change, which consequently will make the colour of the smart
costume change as well. This cycle will be repeated until the end of the performance (Figure
4). During the entire performance, the computer application will also track and record the
space position data of the performer.
Figure 4 – Cyclic pattern of changes
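To make the cyclic pattern of Figure 4 more concrete, the following fragment sketches, in C#, one possible shape for the control logic sitting between the colour detection and the electronic switch. It is purely illustrative: the ISwitchController interface, the CostumeColourState values and the brightness threshold are hypothetical names and numbers introduced here for the sketch, not part of the application described later in this report.

    using System;

    // Hypothetical abstraction over the electronic switch; the actual relay or
    // USB interface used in the "Smart Costumes" set-up is not specified here.
    public interface ISwitchController
    {
        void TurnHeatOn();   // e.g., hairdryer on, fan off
        void TurnHeatOff();  // e.g., hairdryer off, fan on
    }

    public enum CostumeColourState { Cold, Hot }

    public class CostumeCycleController
    {
        private readonly ISwitchController _switch;
        private CostumeColourState _lastState = CostumeColourState.Cold;

        public CostumeCycleController(ISwitchController sw) { _switch = sw; }

        // Called by the vision layer every time a new colour reading is available.
        public void OnColourMeasured(byte r, byte g, byte b)
        {
            // Illustrative rule only: the thermochromic print is assumed to turn
            // pale (bright) when hot and to stay saturated (darker) when cold.
            var state = (r + g + b) / 3 > 180 ? CostumeColourState.Hot
                                              : CostumeColourState.Cold;
            if (state == _lastState) return;   // only react to an actual change
            _lastState = state;

            if (state == CostumeColourState.Hot)
                _switch.TurnHeatOff();   // costume is hot: cool the environment
            else
                _switch.TurnHeatOn();    // costume is cold: heat the environment
        }
    }

The point of the sketch is simply that only transitions between colour states, rather than every frame, need to drive the switch, which is what produces the cyclic behaviour of Figure 4.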
At this point in the day, we already had our idea and we had also identified most of
the requirements that the system would have to deliver. The next step was to try to find
possible research questions, aims, objectives, and hypotheses to test with that system, and
that could be feasible for a master’s project. After some research and some discussion,
several research questions were identified as being potentially good for a master’s project.
These questions can be seen in Figure 5.
Finally, at the end of the day, another meeting was held. The purpose of this meeting
was to present the work we had developed throughout the day to Sandy Louchart, supervisor of this master’s project. During the meeting we explained to him the concept, how the system would work, and why we thought our ideas would allow us to develop something that
can be used by the “Smart Costumes” project researchers in the future and, at the same time,
develop something feasible for a Master’s Project. After presenting all the possible research
questions, Sandy suggested that we should only focus on one or two of those questions: how
well can Kinect perform under different lighting conditions? How do lighting conditions
affect the colour detected by Kinect?
After getting his approval and opinions, we finished the day with a new project in
hand.
Figure 5 – Research Questions (cont.) and Possible Project Titles
During the workshop day, we concluded that the best way of performing the necessary
tests and evaluations would be to use samples of chromic materials, more specifically,
thermochromic materials. This decision was based on the fact that the same material is going
to be used in the manufacturing of the smart costume prototype.
In the next section, we will describe in more detail what thermochromic materials are,
how they work and some of their possible applications.
2.3. Chromic Materials
As we’ve seen before, chromic materials are one of the types of materials that are
considered to be smart. Chromic materials are “fabrics dyed, printed or finished with
chromatic dyes” that are able to change their colour depending on external conditions or
stimuli, like light and heat for example [1]. Figure 6 illustrates the major kinds of chromic
materials, their names and the kind of external stimuli used.
Figure 6 – External stimuli and their corresponding chromic name [4].
2.3.1. Thermochromic Materials
A. Seeboth, R. Ruhmann, and O. Mühling [14] defined thermochromic materials as
materials that “change their visible optical properties in response to temperature”, i.e.,
materials that change colour as the temperature changes. According to [1], thermochromic
dyes “come in two types: “leuco” dye types, which exhibit a single colour change with a
molecular re-arrangement, and liquid crystal types, which have a spectrum of colour
changes”. Different thermochromic dyes have different response temperatures.
Detailed description of the thermochromic phenomenon is beyond the scope of this
report. Interested readers are referred to [15].
Thermochromic materials have been around us for quite some time now and in a wide
range of applications. Thermochromic inks or dyes, paints, papers and polymers have been
applied to things such as t-shirts that change colour depending on the body and ambient
temperature, mugs that change colour or display new images when filled with hot or cold substances, tiles that change colour depending on body or water temperature, or even the
colour indicators used on batteries. These are just some of a few common examples of
commercially available products that use thermochromic materials.
Elisa Strozyk [16] used thermochromic inks to create wallpapers that, when heated
by an ordinary radiator, show a colourful pattern (Figure 7).
Figure 7 – Wallpaper created by Elisa Strozyk [16].
In academia, thermochromic materials are also being used for a wide range of
research. L. Berglin [17] applied thermochromic materials to toys to study how smart
materials can be used as a user interface and how they can be used in order to communicate
the interaction. J. Berzowska has published several papers where she demonstrates the
development of several wearable technologies using a combination of thermochromic
materials and electronic devices [18] [19] [5]. Among others, she created a tunic in which the
neckline incorporates thermochromic ink, allowing it to change colour. Change in colour can
be triggered either electronically or simply by changings in the human skin temperature [18].
A. Wakita and M. Shibutani [20] proposed a wearable ambient display where
thermochromic inks are used to change the colours of textiles. In the same study, the
researchers created a prototype capable of a programmed colour animation, using
thermochromic inks and electronic devices, which react to the body temperature of the user.
They claim that “dancing with this prototype clothing will make it possible to realize a
performance with rich visualization of dancer’s emotion and passion”.
In a different area, K. Tsuji and A. Wakita [21] combined thermochromic inks with
other elements to create an interactive pictorial artwork based on a novel paper computing
technique. This new technique allows dynamic colour change and animation on paper,
without losing its thin soft character. Figure 8 shows an example of this paper computing
technique.
Figure 8 – Colour-change expression. Left: before colour change. Right: iridescence colour changing [21].
During the workshop day we also concluded that this project would make use of
the Kinect sensor to detect colour changes in thermochromic materials. But, is Kinect the
appropriate device for our project? Can it do what we need? What else can it do? These are
just some of the questions that we will address in the following section.
2.4. Microsoft Kinect Sensor Device
Figure 9 - Kinect Sensor 1
1 Source: http://lorla.com/kinect-for-xbox-360-from-microsoft-might-be-an-instant-hit/ [Accessed: 15-Mar-2012].
Microsoft revolutionized the world of video games and video games consoles when
they introduced, in November 2010, Kinect for Xbox 360. Kinect is an interactive, motion
sensing gaming device for the Microsoft Xbox 360. In its body, Kinect includes a multiple-
array microphone, a regular RGB camera and a pair of 3-D depth-sensing range cameras (see
Figure 9). All these technologies allow Kinect to track objects and up to six people in three
dimensions, as well as to recognise faces and voices and provide full-body 3D motion capture
[22].
Both RGB and depth sensors work at a frame rate of 30Hz. The depth sensor
combines an infrared laser projector with a monochrome sensor, allowing it to capture video
data in 3D under any ambient light condition. This sensor uses VGA resolution (640x480
pixels) with 11-bit depth, while the RGB sensor uses 8-bit VGA resolution (640x480
pixels). Kinect’s working space (playing area) has been identified as the main problem of this
device. To work properly, a user has to be within a range of 1.2 to 3.5 meters from the device
and within an area of approximately 6 m². Despite these limitations, it has been observed that the sensor can still track a user within a range of approximately 0.7 to 6 meters.
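As a small illustration of these working-space figures, the fragment below shows how an application could flag depth readings that fall outside the recommended play area. It assumes the depth value has already been converted to millimetres (as the Microsoft SDK exposes it), and the constants simply restate the 1.2 to 3.5 metre range quoted above.

    // Returns true when a depth reading (in millimetres) lies within the
    // recommended 1.2-3.5 m play space; the constants restate the figures
    // quoted above and are not taken from the SDK itself.
    static bool IsInRecommendedRange(int depthMillimetres)
    {
        return depthMillimetres >= 1200 && depthMillimetres <= 3500;
    }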
Figure 10 – Kinect’s Motion Sensor 2
Figure 11 - Kinect’s Skeletal Tracking 2
Figure 12 - Kinect’s Facial Recognition 2
Figure 13 - Kinect’s Voice Recognition 2
With Kinect, players no longer need to memorize the different functions of
conventional hand controllers, because now the player is the controller. Figures 10, 11, 12
and 13 show how Kinect technology works. For a better understanding of how Kinect works,
we recommend that the reader watch the videos referenced in footnotes 3 and 4.
After Kinect presented itself as one of the most spectacular console controllers ever, Microsoft rapidly realized that it had huge potential beyond games and released non-commercial and commercial versions of its official SDK in mid-2011 and late 2011, respectively. With these releases, Kinect rapidly expanded to PCs and became
a platform for programmers and companies to develop new and innovative products.
Kinect is a relatively recent technology, no more than one and a half years old. But
being such an advanced and novel piece of hardware, a lot of research has already been
performed on it. That research includes studies in fields that vary from physical rehabilitation
2 Source: http://www.xbox.com/en-GB/Kinect/Kinect-Effect [Accessed: 15-Mar-2012].
3 http://www.youtube.com/watch?v=T_QLguHvACs [Accessed: 28-Mar-2012].
4 http://www.youtube.com/watch?v=Mf44bWQr3jc [Accessed: 28-Mar-2012].
[23] to robotics. A good example of its use in robotics is NASA’s integration of Kinect with
their prototype robots [24]. The LIREC project5 (Living with Robots and intEractive
Characters) is also another good example of Kinect’s integration in robotics. This project is a
collaboration between several entities (universities, research institutes and companies) from
several different countries, Heriot-Watt being one of the partner universities. Heriot-Watt
researchers have been integrating Kinect with their prototype robot and studying how it can
be used to facilitate human-robot interaction.
Efficiently tracking and recognising hand gestures with Kinect is one of the fields that
is receiving the most attention from researchers [25][26][27][28]. This is a complex problem, but it
has a number of diverse applications, being “one of the most natural and intuitive ways to
communicate between people and machines” [28]. Regarding full-body pose recognition, E.
Suma et al. [29] developed a toolkit which allows customizable full-body movements and
actions to control, among others, games, PC applications, virtual avatars and the onscreen
mouse cursor. This is achieved by binding specific movements to virtual keyboard and mouse
commands that are sent to the current active window.
M. Raptis et al. [30] proposed a system that is capable of recognising in real time and
with high accuracy a dance gesture from a set of pre-defined dance gestures. This system
differs from games like Harmonix’s Dance Central, allowing the user to perform random
sequences of dance gestures that are not imposed by the animated character. In this paper,
the authors also identified noise – originating from the player’s physique and clothing, from the
sensor and from kinetic IQ – as being one of the main disadvantages of depth sensors, like
Kinect, when compared to other motion capture systems. Following the same idea, D. S.
Alexiadis et al. [31] addressed the problem of real-time evaluation and feedback of dance
performances considering a scenario of an online dance class. Here, online users are able to
see a dance teacher perform some choreography steps which they try to imitate later. After
comparing the user performance with the teacher performance, the user performance is
automatically evaluated and some visual feedback is provided.
In a different perspective, F. Kistler et al. [32] adopted a game book and implemented
an interactive storytelling scenario, using full body tracking with Kinect to provide different
types of interaction. In their scenario, two users listen to a virtual avatar narrating parts of a
game book. At specific points, the users need to perform certain gestures to influence the
5 http://lirec.eu [Accessed: 28-Mar-2012].
story. Almost none of the Kinect games developed so far concentrate on a story, and this may
be an interesting approach for the creation of new games. Another interesting idea is the one
presented by M. Caon et al. [33] for the creation of smart environments. Using multiple
Kinects, these researchers developed a system capable of recognising with great precision the
direction to where a user is pointing in a room. Their “smart living room” is composed by
several “smart objects”, like a media centre, several lamps, a fan, etc., and it’s able to identify
several users’ postures and pointing gestures. If a user points to any of those objects, they will
automatically change their state (On/Off). A lot of work can be done to improve home
automation based on this idea (and using Kinect).
In almost all of the papers presented before, researchers have used both the RGB and
depth sensors to track the human body or objects. At the time this literature review was
conducted, we weren’t able to find any relevant paper where researchers have studied how
well Kinect’s RGB camera can recognise and track colours and/or changes in object size
under different lighting conditions, especially on smart textiles. Searching online, we can find
some videos of developers/users demonstrating Kinect applications where colour recognition
and tracking seems to be the main objective. However, this is clearly not enough to report on
or to formulate an informed opinion. This information would be valuable, as these are very
important features for the successful achievement of the proposed goals.
We believe that, given all the possibilities Kinect offers, much more research will be
done based on it in the next years. One fact that supports this idea was the release of the
Kinect for Windows in February 2012. Kinect for Windows consists of an improved Kinect
device and a new SDK version especially designed for Windows PCs. This will allow the
development of new kinds of applications, and, consequently, new tools will become
available to researchers to perform new studies.
To develop the proposed software application, several choices in terms of SDKs,
frameworks, libraries, etc., will have to be made, taking into account the fact that the
application will have to make use of the Kinect Sensor. In the next subsection, we will
present and discuss some of the possible choices available.
2.4.1. Kinect Software Development Tools
There are several tools available that can be used to develop software using Kinect.
Next, we will briefly describe three of the most important software development tools
currently used.
2.4.1.1. Microsoft Kinect SDK
The Microsoft Kinect SDK is the official Kinect SDK developed and released by Microsoft. Both the non-commercial and commercial versions were released in 2011. This SDK allows programmers to use the Kinect sensor on computers running Windows 7. It also
allows programmers to develop applications for Kinect using Visual Studio 2010 IDE and
C++, C# or VB.NET languages.
The Microsoft Kinect SDK provides, among others, three important features. The first is access to raw data streams from the depth and colour camera sensors and the microphone array. The second is skeleton tracking, which is able to track up to two people moving in the Kinect sensor’s field of view. The third is advanced audio processing capabilities using the microphone array.
In February 2012 Kinect for Windows SDK was released. According to the Microsoft
Kinect website [34], this new SDK “offers improved skeleton tracking, enhanced speech
recognition, modified API, and the ability to support up to four Kinect for Windows sensors
plugged into one computer”.
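To illustrate the first of these features, the fragment below sketches how an application might open the colour and depth streams. It is a minimal sketch assuming the Kinect for Windows SDK 1.x managed API (Microsoft.Kinect namespace) and C#; it is not an excerpt from the application described later in this report.

    using System;
    using System.Linq;
    using Microsoft.Kinect;

    class KinectStreamsSketch
    {
        static void Main()
        {
            // Pick the first connected sensor, if any.
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            // Enable the raw colour and depth streams at 640x480, 30 fps.
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);

            sensor.ColorFrameReady += (s, e) =>
            {
                using (ColorImageFrame frame = e.OpenColorImageFrame())
                {
                    if (frame == null) return;
                    byte[] pixels = new byte[frame.PixelDataLength];
                    frame.CopyPixelDataTo(pixels);  // 4 bytes per pixel, B, G, R order
                    // pixels[] can now be handed to the colour detection code.
                }
            };

            sensor.Start();
            Console.ReadLine();  // keep the process alive while frames arrive
            sensor.Stop();
        }
    }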
2.4.1.2. OpenNI Framework
OpenNI is a non-profit organization that developed a multi-language, cross-platform
and open source framework, which provides APIs for creating applications using natural
interaction (NI) interfaces. The OpenNI framework provides interfaces for both physical and middleware
components. The physical components currently supported are: 3D sensor, RGB camera, IR
camera and audio device. The middleware components currently supported are: full body
analysis (component that generates body related information), hand point analysis
(component that generates the location of a hand point), gesture detection (component that
identifies predefined gestures and alerts the application) and scene analyser (component that
analyses the image of the scene to produce the separation between the foreground and
background of the scene’s data, the coordinates of the floor plane data and the individual
identification of figures in the scene data) [35].
Figure 14 – Abstract Layered View of the OpenNI concept [35].
Figure 14 shows the three-layered view of the OpenNI concept. The top layer
represents the software that implements NI applications. The middle layer represents the
OpenNI API providing interfaces that interact with both sensors and the middleware
components. The bottom layer represents the hardware devices that capture the scene’s audio
and visual elements.
Production Nodes – “set of components that have a productive role in the data
creation process required for Natural Interaction based applications” – are the fundamental
elements of the OpenNI interface. There are two types of production nodes, sensor-related
and middleware-related production nodes. OpenNI currently supports the following sensor-
related production nodes: device (node that represents a physical device and enables the
device configuration); depth generator (node that generates a depth-map); image generator
(node that generates coloured image-maps); IR generator (node that generates IR image-
maps); and audio generator (node that generates an audio stream). OpenNI also currently
supports the following middleware-related production nodes: gestures alert generator (node
that generates callbacks to the application when specific gestures are identified); scene
analyser (node that analyses a scene, identifying figures and detecting the floor plane); hand
point generator (node that supports hand detection and tracking and generates a callback that
provides alerts when a hand point is detected); and user generator (node that generates a
representation of a body in the 3D scene). Recorder (node that implements data recordings),
Player (node that reads data from a recording and plays it) and Codec (node used to compress
and decompress data in recordings) are also supported by OpenNI for recording purposes
[35].
2.4.1.3. OpenKinect Libfreenect Software
OpenKinect is an open community of over 2000 members that are working on free
open source libraries for Kinect. Their primary goal is to develop the libfreenect software
including drivers and cross-platform API, which enables the Kinect to be used with
Windows, Linux and Mac [36].
The libfreenect software is a multi-language platform, supporting C, C++, C# and
Java, just to name a few. It includes all the necessary code to activate, initialize and
communicate data with the Kinect hardware. Currently, the libfreenect library supports access to RGB and depth images, the accelerometer and the motors. As libfreenect is a project under development, many features are still being developed and are not included in its latest releases. In future releases, the OpenKinect community expects to be able to provide features such as hand tracking, skeleton tracking, 3D audio isolation, 3D reconstruction and GPU acceleration, among others.
2.4.1.4. Discussion
Any of the three Kinect software development tools described above provides what is needed for the development of our application. However, we think that the Microsoft Kinect SDK has some advantages when compared to the other two. First of all, it is the official SDK, which means that all features were appropriately tested and should work properly. Another big advantage is that the documentation of the Kinect SDK appears to be much better than that of the other two. Being a Microsoft product, it is fairly easy to start developing applications that make use of Kinect and the Kinect SDK in Visual Studio, using any of the three most commonly used programming languages for developing Windows applications: C++, C# and VB.NET.
From the perspective of the application we will develop, the main disadvantage of the Microsoft Kinect SDK, compared to the other two, is that applications developed with it only run on Windows operating systems.
Although any of the tools could be used to develop our application, we can conclude that the Microsoft Kinect SDK will probably be our best choice of Kinect application development tool.
Knowing that a Kinect sensor is going to be used in this project, and that the tool that
most likely will be used to develop the Kinect application will be the Microsoft Kinect SDK,
we now need to consider which IDE will be most suitable for our development.
2.4.2. IDE
An IDE (Integrated Development Environment) is a computer program containing features and tools to support programmers in software development. Normally, IDEs allow faster software development, increasing programmers’ productivity substantially. Presently, there are several different IDEs, with Visual Studio, NetBeans and Eclipse being the most used within the student community.
According to a recent worldwide survey [37] conducted by Evans Data Corporation
(http://www.evansdata.com), Visual Studio is, by far, the most widely used IDE in the world,
having almost twice as many users as any of its rivals. Visual Studio resulted from the
combination of Visual C++, Visual Basic, Visual C# and Visual J# - all in the same package.
In this survey, users were asked to rate several different features on each platform.
Based on those ratings, researchers were able to create an overall platform rankings graphic,
which can be seen in Figure 15.
Figure 15 – Overall Platform Rankings [37].
Figure 16 shows a graphic detailing how users rated each one of the individual
surveyed features.
Figure 16 – Visual Studio Results [37].
Outlining the key factors, it can be verified by this study that programmers that use
Visual Studio particularly enjoy the size and quality of the “Dev Community” and the “Web
Tools”. It’s also important to mention that “Database Integration” and “Visual Tools” are two
of the main reasons programmers keep using Visual Studio. Visual Studio’s ease of use is
also one of the aspects users most enjoy.
As we know, Microsoft Kinect SDK was developed to be used with Microsoft Visual
Studio (it’s also one of the SDK requirements). This means that we will have to use Visual
Studio if we decide to use Microsoft’s Kinect SDK.
One final decision that needs to be made is related to which image processing library
to use during the development. The library is expected to have several basic functionalities,
like pre-defined image processing algorithms, which will help us during the development
process. Therefore, it won’t be necessary to write all the algorithms needed to perform most
of the required image processing routines.
2.4.3. Image Processing Libraries
After a quick online search, several free open source image processing libraries were found. OpenCV (Open Source Computer Vision) is a free library of programming functions, written in C and C++, for real-time computer vision [38]. OpenCV is the most widely used and supported computer vision library in the world. Having worked with it before, our initial idea was to use this library. However, we do not think that is going to be possible. The main reason is that we intend to use C# as the main programming language of the application, which makes the use of the OpenCV library very difficult.
Taking those limitations into consideration, we found two interesting libraries that seem to have what is needed for the development of this application: EmguCV.NET and AForge.NET. EmguCV.NET is a “.NET wrapper to the OpenCV image processing library, allowing OpenCV functions to be called from .NET compatible languages such as C#, VB, VC++, etc.” [39]. AForge.NET is a framework entirely written in C#, containing a set of computer vision and artificial intelligence libraries. As both seem to have everything that we need to develop this application, the decision of which one to use will be made based on their learning curve and level of programming difficulty.
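To give a flavour of how such a library could support the colour detection and measurement needed here, the sketch below uses AForge.NET's Euclidean colour filtering and blob counting to isolate a region close to a target colour and return its mean RGB value. It is a rough sketch only: the target colour, filtering radius and minimum blob size are arbitrary illustrative values, and the frame is assumed to be a 24 bpp RGB bitmap built from the Kinect colour stream.

    using System.Drawing;
    using AForge.Imaging;
    using AForge.Imaging.Filters;

    static class ColourMeasurementSketch
    {
        // Returns the mean RGB value of the first region close to the target
        // colour, or null if no such region is found.
        public static Color? MeasureTargetColour(Bitmap frame)
        {
            // Keep only pixels within a Euclidean distance of the target colour.
            var filter = new EuclideanColorFiltering
            {
                CenterColor = new RGB(215, 30, 30),  // illustrative target (a red)
                Radius = 100                         // illustrative tolerance
            };
            using (Bitmap filtered = filter.Apply(frame))
            {
                // Find the connected regions ("blobs") that survived the filter.
                var blobs = new BlobCounter { FilterBlobs = true, MinWidth = 20, MinHeight = 20 };
                blobs.ProcessImage(filtered);
                Rectangle[] regions = blobs.GetObjectsRectangles();
                if (regions.Length == 0) return null;

                // Average the original (unfiltered) pixels inside the first region.
                Rectangle r = regions[0];
                long sumR = 0, sumG = 0, sumB = 0, count = 0;
                for (int y = r.Top; y < r.Bottom; y++)
                    for (int x = r.Left; x < r.Right; x++)
                    {
                        Color p = frame.GetPixel(x, y);
                        sumR += p.R; sumG += p.G; sumB += p.B; count++;
                    }
                return Color.FromArgb((int)(sumR / count), (int)(sumG / count), (int)(sumB / count));
            }
        }
    }

A comparable result could be obtained with EmguCV.NET, so the sketch should not be read as pre-empting the choice between the two libraries.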
2.5. Conclusion
Motion capture, facial recognition, voice recognition, etc. – these are just a few of the many features Microsoft made available to everyone with the release of the Kinect and the Kinect SDKs. For this project, the use of devices other than Kinect could have been considered, since most of the research will probably be based on its RGB camera. But the fact is that Kinect is able to integrate all these different technologies in one small device, making it a very powerful tool.
Therefore, taking into consideration all the features and possibilities Kinect offers, we think that this device will be suitable for the successful accomplishment of the proposed goals. We also think that, if the integration of Kinect with the "Smart Costumes" project is successful, it will be not only a suitable device for our experiment but also one that, due to its characteristics, will allow researchers to go much further in their investigations, letting them explore new and different areas.
Chapter 3
Requirements Analysis
During the workshop day we were able to identify the requirements that the final
system will have to deliver. Therefore, this chapter is based on everything that was agreed
with the “Smart Costumes” research associate during that day, and described in the
Workshop section in Chapter 2.
3.1. Users of the Application
The application developed during this project is going to be used by the researchers
that are currently working on the “Smart Costumes” project. In this initial stage, it is not
expected that this application will be used by anyone else other than current or future
researchers working on that project.
3.2. Requirements
The main requirement for this project is the creation of an application capable of detecting, tracking and measuring colour using a Kinect sensor device. What we intend to study with this project is how well Kinect performs under different lighting conditions and how lighting conditions affect the colour it detects, and that is what will be evaluated.
Once the proposed application is developed, we will conduct some experiments that will help us evaluate our system. In those experiments, we will submit our samples to four different lighting conditions, and for each sample we will collect the RGB values detected. In
the first experiment, we will collect the RGB values using high resolution pictures of each sample taken with a digital camera under a controlled lighting environment. In the second experiment, we will collect the RGB values using the developed application and a Kinect sensor, under three different lighting conditions (two under non-controlled natural lighting and one under non-controlled artificial lighting).
Finally, we will statistically test our hypothesis by performing inferential statistical tests on the data gathered, more specifically, One-Way Repeated Measures ANOVA tests. The results of these tests will allow us to conclude whether or not the lighting conditions affect the colour that is detected by Kinect, and also under which of the lighting conditions tested Kinect performs best.
For more details on how the experiments were conducted and how the evaluation was
performed please read Chapter 5.
As this project is just a small part of a much bigger project, several other requirements were identified. One of the objectives of the "Smart Costumes" project researchers is to build a complex system integrating smart costumes with colour-changing (thermochromic) technology and interactive spaces. Taking into consideration the possible scenario described in the Workshop section in Chapter 2, we were asked to develop the computer application part of the system. Thus, besides the main requirement, it was also asked that the application should be able to:
Track and record the space position data of the performer. This data will be used later for, among other things, the creation of beautiful data visualisations or Infographics6;
Automatically detect colour changes in the smart costume and, depending on those changes, fire an event that will later be used to turn an electronic switch on or off;
Develop an application that will allow researchers in the near future to create new software applications based on the work developed;
Create software documentation.
6 Infographics or Information Graphics are graphic visual representations of information, data or knowledge.
3.2.1. Software/Hardware Requirements
To effectively develop, test and evaluate the application, several software and
hardware requirements are needed.
The hardware requirements needed are:
Kinect sensor which includes special USB/power cabling;
PC with:
o Dual-core 2.66-GHz or faster processor (Kinect SDK requirement);
o 2 GB RAM (Kinect SDK and VS2010 requirements);
o 3GB of available hard disk space (VS2010 requirement);
o 5400 RPM hard disk drive (VS2010 requirement);
o DirectX 9 capable video card running at 1024 x 768 or higher-resolution
display (VS2010 requirement);
The software requirements are:
Windows XP or higher (Visual Studio 2010 requirement);
Kinect SDK;
Visual Studio 2010 Express or other Visual Studio 2010 edition (Kinect SDK
requirement);
.NET Framework 4.0 (Kinect SDK requirement);
Photoshop or any other image editing software.
Other requirements:
Several thermochromic material samples;
Device(s) capable of applying heat and/or cold to the thermochromic material
samples.
Light meter.
3.2.2. Evaluation Criteria
The following criteria will be used in the evaluation to assess if the work developed
meets or fails to meet the mandatory and the non-mandatory requirements:
Is the application able to detect colour?
Is the application able to track colour?
Is the application able to track and record the space position data of the performer?
How well can Kinect perform under different lighting conditions?
How does light affect the colour that is detected by Kinect?
Chapter 4
Technical Description
As mentioned in Chapter 1, since no information was found about the Kinect sensor's ability to perform under different lighting conditions, it became necessary to develop a new software system capable of performing several tests on this subject. In this chapter, we discuss the technical aspects of the new software system created.
4.1. Technical Decisions
The first technical decision for this work was the choice of programming language: C#. Most Kinect and image processing libraries support several different programming languages, allowing the programmer to choose whichever language he desires. As we had no restriction when choosing the programming language, we decided to use C#. C# is one of the most widely used programming languages, especially within the software development industry worldwide. The main reason we chose this language was our need to deepen our knowledge of it, as we had only learned and started to work with it in a module during this Master's degree.
In the scope of this project, an IDE also had to be chosen, and again we had the opportunity to choose whichever IDE we wanted. We decided to use Microsoft's Visual Studio. We have had the chance to use this IDE before and we consider it to be one of the most – if not the most – complete and easy to use IDEs we have worked with so far. The ease of use and the excellent integrated debugger are also among the reasons why we have used, and will keep using, this IDE to develop our applications. It is also important to mention that the official Microsoft Kinect SDK was developed to be used with Visual Studio – Visual Studio being one of the SDK's requirements – so the choice was obvious.
In terms of external frameworks, toolkits and libraries, this application makes use of the Kinect SDK, the AForge.NET Framework and the Extended WPF Toolkit. The Kinect SDK is used in this application mainly to provide support for the features of the Kinect sensor, more precisely, colour images. Others could have been used, but this SDK, besides having everything needed for this project, has the advantage of being the official Kinect SDK released by Microsoft, with better documentation and easier integration, especially with Visual Studio.
The AForge.NET Framework is a framework written in C#, containing a set of Computer Vision and Artificial Intelligence libraries. In this application we use the AForge.NET image processing library to filter and process the colour image captured by Kinect. As we decided to use C# for this project, we had to choose a C# image processing library. EmguCV.NET – a C# wrapper for OpenCV (the most widely used and supported Computer Vision library in the world) – could have been used instead, but it has worse documentation and is harder to program with when compared to AForge.NET.
During the development of this application we became aware that the standard WPF Toolkit – Windows Presentation Foundation (WPF) is a "computer-software graphical subsystem for rendering user interfaces in Windows-based applications" [40] – which is integrated in Visual Studio, does not include in its original set of components a commonly used and needed control, the "UpDown box" (Figure 17). After a quick online search, we found the Extended WPF Toolkit, which is a collection of WPF controls, components and utilities made available outside the normal WPF Toolkit [41]. The "UpDown box" was the only Extended WPF Toolkit component used in this application.
Figure 17 – Extended WPF Toolkit “UpDown box”
4.2. Application
The developed application is an application that allows its users to detect and track a
certain colour in a real-time image captured by Kinect Sensor, and perform several different
tests based on that image. For the process to be simple, of easy access to a new user (that
does not imply spending a lot of time learning how the deal with the application), it was
created a simple interface that maintains a common look throughout its use. A User’s Guide
of the developed application can be seen in Appendix B. There, we show in more detail the
main screen and the different options that are available to the users. Next, we will explain
some of the main concepts and definitions that will allow the user to understand how the
main features of the developed application work.
The application’s UML diagram is shown in Figure 18. An explanation of the utility
and the most important methods of each of these classes can be found in the Developer’s
Guide in Appendix C.
Figure 18 – UML Diagram
In very basic terms, the application starts by getting the colour image being captured by the Kinect sensor – an image that is displayed in the left-hand window ("Colour Image"). The colour image then goes through some image processing routines and the result is displayed in the right-hand window ("Gray Image"), where we can see whether the colour is being detected or not.
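As a minimal sketch of this capture step (assuming Kinect SDK 1.x and the 640x480 RGB colour stream; the handler mirrors the newSensor_AllFramesReady() method listed in Appendix A, but the code below is illustrative rather than the application's actual implementation):

// Illustrative only: enable the colour stream and copy each frame into a
// WriteableBitmap bound to the "Colour Image" window.
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Kinect;

public partial class MainWindow : Window
{
    private KinectSensor sensor;
    private WriteableBitmap colourBitmap;   // created once and reused every frame
    private byte[] colourPixels;

    private void StartKinect()
    {
        sensor = KinectSensor.KinectSensors[0];   // in practice, pick the first connected sensor
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
        colourPixels = new byte[sensor.ColorStream.FramePixelDataLength];
        colourBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
        sensor.AllFramesReady += Sensor_AllFramesReady;
        sensor.Start();
    }

    private void Sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        using (ColorImageFrame frame = e.OpenColorImageFrame())
        {
            if (frame == null) return;                      // frames can be skipped
            frame.CopyPixelDataTo(colourPixels);
            colourBitmap.WritePixels(new Int32Rect(0, 0, 640, 480),
                                     colourPixels, 640 * 4, 0);   // stride = width * 4 (Bgr32)
            // the image processing and blob detection described below would follow here
        }
    }
}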
To handle the received images we had two options: create a Bitmap every frame, or create a WriteableBitmap during the first frame and reuse it on the following frames. Reusing a WriteableBitmap is more efficient than creating a new Bitmap every frame, so, logically, we chose the second option. This decision was made during the initial stages of development and proved to be the wrong one for this problem. At later stages, when we started to implement the image processing routines, we found that the image stored in the WriteableBitmap would still need to be converted to a Bitmap for those routines and the blob detection routines to work. That meant creating a method that converts a WriteableBitmap into a Bitmap, allocating a new Bitmap each time it is called (which is every frame!) – exactly the cost we had tried to avoid in the first place.
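The convertToBitmap() method itself is not reproduced in this report; a rough sketch of what such a conversion could look like, assuming the Bgr32 colour format used above (names and structure are illustrative), is:

// Illustrative only: copy a WPF WriteableBitmap (Bgr32) into a GDI+ Bitmap,
// which the AForge.NET filters and the blob detection routines can process.
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Windows.Media.Imaging;

public static class BitmapConversion
{
    public static Bitmap ToBitmap(WriteableBitmap source)
    {
        int width = source.PixelWidth;
        int height = source.PixelHeight;
        int stride = width * 4;                              // 4 bytes per pixel (Bgr32)
        byte[] pixels = new byte[height * stride];
        source.CopyPixels(pixels, stride, 0);

        // A new Bitmap is allocated on every call - exactly the per-frame cost
        // described above.
        var bitmap = new Bitmap(width, height, PixelFormat.Format32bppRgb);
        BitmapData data = bitmap.LockBits(new Rectangle(0, 0, width, height),
                                          ImageLockMode.WriteOnly, bitmap.PixelFormat);
        Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);
        bitmap.UnlockBits(data);
        return bitmap;
    }
}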
To perform the colour detection we use an image processing technique called blob
detection. Blob detection is a technique that is “aimed at detecting points and/or regions in
the image that differ in properties like brightness or colour compared to the surrounding”
[42]. Figure 19 shows an example of an image before and after applying the blob detection
algorithm.
Figure 19 – Blob Detection 7
7 Source: http://www.aforgenet.com/framework/features/blobs_processing.html
Before the blob detection routine can be applied, the colour image needs to be filtered by colour – the filter removes the background, making it completely black, leaving visible only stand-alone objects of a certain colour – and then transformed into a grayscale image. To perform those operations, other AForge.NET routines are used.
The AForge.NET colour filter filters pixels inside/outside a specified RGB colour range, keeping the pixels whose colour is inside that range and filling the rest with a specified colour (in this case, black). "Gray Image", in the Main Window, shows an image
after the filter has been applied. In that image we can see that the filter keeps the pixels that
are inside the defined range and fills the rest of the pixels with black. If no filter was applied,
or if the RGB colour ranges were all 0 (lower value of red, green and blue range) and 255
(higher value of red, green and blue range), the image displayed would be exactly the same as
the one displayed by the “Colour Image”. Figure 20 shows an example of an image before
and after applying the AForge.NET colour filter, with RGB colour ranges = Red: 100 - 255;
Blue: 0 - 75; Green: 0 - 75.
After this process, the image is converted to a grayscale image using an AForge.NET
image grayscaling routine.
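A rough sketch of these two AForge.NET steps, using the ranges of the Figure 20 example (Red 100-255, Green 0-75, Blue 0-75); the surrounding class and method names are illustrative, not the application's actual code:

// Illustrative only: AForge.NET colour filtering followed by grayscaling.
// Pixels outside the RGB ranges are filled with black before blob detection.
using System.Drawing;
using AForge;                       // IntRange
using AForge.Imaging.Filters;       // ColorFiltering, Grayscale

public static class ColourPipeline
{
    public static Bitmap FilterAndGray(Bitmap colourImage)
    {
        var filter = new ColorFiltering
        {
            Red   = new IntRange(100, 255),
            Green = new IntRange(0, 75),
            Blue  = new IntRange(0, 75)
            // pixels outside these ranges are filled with the default fill colour (black)
        };
        Bitmap filtered = filter.Apply(colourImage);

        // one of AForge.NET's predefined grayscaling algorithms (BT.709 weights)
        return Grayscale.CommonAlgorithms.BT709.Apply(filtered);
    }
}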
Figure 20 – AForge.NET Colour Filter 8
It is also important to mention that we can tell that a certain colour is being detected when, in the "Gray Image" in the Main Window, a green box is drawn around the colour/object.
8 Source: http://www.aforgenet.com/framework/docs/
When a colour is being detected, the average RGB values of that colour are calculated. Those
calculations are based only on the pixels inside the green box.
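A rough sketch of how the green box and the per-box averages could be obtained with AForge.NET's BlobCounter (the minimum blob size of 25 matches the application default mentioned in the User's Guide; everything else is illustrative):

// Illustrative only: find a blob in the filtered grayscale image and average
// the R, G and B values of the corresponding pixels in the colour image.
using System;
using System.Drawing;
using AForge.Imaging;               // BlobCounter

public static class BlobMeasurement
{
    public static void DetectAndMeasure(Bitmap grayImage, Bitmap colourImage)
    {
        var blobCounter = new BlobCounter
        {
            FilterBlobs = true,
            MinWidth = 25,          // minimum blob width/height (application default)
            MinHeight = 25
        };
        blobCounter.ProcessImage(grayImage);
        Rectangle[] boxes = blobCounter.GetObjectsRectangles();
        if (boxes.Length == 0) return;                  // nothing detected

        Rectangle box = boxes[0];                       // the "green box" (first blob found)
        long r = 0, g = 0, b = 0, count = 0;
        for (int y = box.Top; y < box.Bottom; y++)
        {
            for (int x = box.Left; x < box.Right; x++)
            {
                Color c = colourImage.GetPixel(x, y);   // simple but slow; LockBits would be faster
                r += c.R; g += c.G; b += c.B; count++;
            }
        }
        Console.WriteLine("Average RGB: {0}, {1}, {2}", r / count, g / count, b / count);
    }
}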
4.3. Application Testing
Creating a good computer software program implies spending a considerable amount
of time testing it, not only when it is finished, but also throughout the entire development
process. Testings’ should be performed every time a new feature is implemented or changed,
allowing the developer to verify if the new implementation works properly, does what it is
designed to do, and does not affect the correct functioning of the other parts of the
application.
In the next subsection we will describe how the main features of the application were
tested throughout the development process.
4.3.1. Colour Detection/Tracking Testing
To detect a certain colour, we need to play with the RGB colour ranges and find the ranges that best allow the application to detect the colour. To test this feature, we placed each of our samples in front of the Kinect and played with the RGB values until the colour was detected (until the green box was drawn around the coloured object). This test allowed us to check if the colour detection algorithms were working properly.
When a colour is being detected, the application needs to calculate the RGB colour
values of the detected colour. To test this feature, we placed each of our samples in front of the Kinect and played with the RGB values until the colour was detected. When the application detects the colour, the RGB values of the detected colour are calculated and displayed in the Main Window. With the RGB values calculated by the application, we then used an image editing software application (e.g. Adobe Photoshop) to check if the colour
given by those RGB values was anywhere near the colour that was being detected. This test
allowed us to check if the algorithms used to calculate the RGB values of the detected colour
were working properly.
At the end of the development process we were able to test whether the actual output of the application's main features was the one expected (Table 1).
Feature: Is the application able to detect objects of a certain colour?
Test: Place a sample in front of the Kinect and play with the RGB colour ranges.
Expected output: A green box is drawn around the detected object and the RGB colour values are calculated and displayed in the Main Window.
Actual output as expected? Yes

Feature: Is the application able to track objects of a certain colour?
Test: Place a sample in front of the Kinect; play with the RGB colour ranges until the colour is detected; move the sample around.
Expected output: The green box follows the detected object.
Actual output as expected? Yes

Feature: Is the application able to automatically detect colour changes?
Test: Place a sample with thermochromic dye in front of the Kinect; set the automatic colour detection; start the automatic detection; apply heat to the sample.
Expected output: After applying heat to the sample the colour starts to change and the application detects the colour. The colour to detect changes after a certain amount of time.
Actual output as expected? Yes

Feature: Can the application calculate the RGB values of the detected colour?
Test: Place a sample in front of the Kinect; play with the RGB colour ranges until the colour is detected.
Expected output: When a colour is being detected, the RGB values are calculated and displayed in the Main Window.
Actual output as expected? Yes

Table 1 – Main Feature Tests
4.4. Possible Technical Future Work
We think that in the future the code could be improved in some aspects. The use of
threading, to separate the image processing and blob detection routines execution from the
rest of the application’s execution, would be beneficial to the application, improving its
performance. One other thing that could be improved is how we currently deal with the
received images. We knew that the use of Writeable Bitmaps instead of Bitmaps would
improve our application’s performance, but one thing that we didn’t know, at that stage in the
development, was that the some of the image processing and blob detection routines needed a
Bitmap to work. If we have known that in the first place, we probably would not have needed
to create the convertToBitmap() method and our application would probably be more
efficient. Actually, we think that the developed application is quite efficient having in
account that we are working with real-time images, but these two improvements could make
the application even more efficient.
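A minimal sketch of what that separation could look like, assuming the Task Parallel Library available in .NET 4.0 (this is one possible approach, not something implemented in the current application):

// Illustrative only: run the heavy image processing on a background task and
// marshal the result back to the UI thread instead of doing everything inside
// the AllFramesReady handler.
using System;
using System.Drawing;
using System.Threading.Tasks;
using System.Windows.Threading;

public class FrameProcessor
{
    private volatile bool busy;   // drop new frames while the previous one is still being processed

    public void ProcessAsync(Bitmap frame, Dispatcher uiDispatcher, Action<Bitmap> onProcessed)
    {
        if (busy) return;
        busy = true;

        Task.Factory.StartNew(() =>
        {
            // heavy work off the UI thread: colour filtering, grayscaling, blob detection
            Bitmap processed = frame;   // placeholder for the real processing pipeline
            uiDispatcher.BeginInvoke((Action)(() =>
            {
                onProcessed(processed); // update the "Gray Image" window and the RGB read-outs
                busy = false;
            }));
        });
    }
}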
This application was originally designed to work with Windows operating systems
only. One possible idea for future work could be trying to port it to other operating systems.
Chapter 5
Evaluation
In this chapter we will describe the experiments performed and present the results
obtained, which allowed us to statistically test our hypothesis.
To collect the necessary data to test our hypothesis, we conducted two different
experiments. Each experiment allowed us to collect the RGB colour values of our samples
under four different lighting conditions. In the first experiment, we submitted our samples to
a controlled lighting environment. That experiment allowed us to collect the RGB values of
the samples under what we, in this study, consider to be the optimum/ideal lighting
conditions for colours to be detected using a Kinect sensor. In the second experiment, we
used a Kinect to collect the samples’ RGB values when submitted to three different non-
controlled lighting environments. These values were then compared to the values measured
under the controlled lighting environment. Finally, we draw some conclusions about the lighting conditions under which Kinect performs better and about how lighting conditions affect the colour detected by Kinect.
5.1. Method
Twelve different samples (twelve pieces of fabric, three of which printed with one
simple dye and nine printed with a combination of a simple dye with a thermochromic dye)
were submitted to two different experiments.
In the first experiment, the RGB values of each sample were calculated based on high resolution pictures taken of each sample under a certain lighting condition (Light 0), using a digital camera. The aim of the first experiment was to measure the RGB values of each sample under a controlled lighting environment.
In the second experiment, we submitted the same samples to three other different
lighting conditions (Light 1, Light 2 and Light 3), measuring the RGB values of each sample
using a Kinect sensor and the developed application. The aim of the second experiment was
also to measure the RGB values of each sample, but now using the Kinect sensor and the
application developed, and submitting the samples to three different non-controlled lighting
environments.
5.1.1. Design
This study used a within-subjects design. There was one independent variable, lighting condition, with four levels (Light 0, Light 1, Light 2 and Light 3).
Each sample was tested under the four different lighting conditions and for each, the Red,
Green and Blue colour values were measured.
5.1.2. Participants
To perform this particular study, no human subjects other than the developers were involved in any kind of experiment or were asked to provide any kind of opinion or feedback about the developed work.
5.1.3. Apparatus
To perform the first experiment we used a Canon EOS 5D MK II (Shutter Speed:
1/200 s, F-Stop: f/18, Aperture Value: f/18, Max Aperture Value: f/4.0, ISO Speed Rating:
100, Focal Length: 85.0 mm, Lens: EF24-105mm f/4L IS USM) and Elinchrom D-Lite 400
studio lights, to take high resolution photos of each sample. To calculate the average RGB
values of the pictures of each sample we used Adobe’s Photoshop CS5 (trial version).
To perform the second experiment we used a Kinect sensor, which was connected to a laptop running the developed application. The RGB values of the samples under the three different lighting conditions were measured using the developed application. In the second experiment a light meter (RS Components Lux-Meter 0500) was also used to measure the amount of light the samples were exposed to.
Both experiments were based on twelve samples – pieces of fabric printed with one
simple dye, or, a combination of a simple dye with a thermochromic dye – created especially
for this project. For more information about the samples used please see Appendix D.
5.1.4. Procedure
5.1.4.1. Experiment 1
To collect the data under a controlled lighting environment, we set up a first
experiment which allowed us to take a high resolution picture of each sample.
In a dark room, two Elinchrom D-Lite 400 lights and a Canon EOS 5D MK II digital camera were placed facing a white marker (Figures 21 and 22). After calibrating and configuring all necessary parameters of the equipment, we placed our first sample in front of the camera, turned off the lights, and took a picture. For each sample with thermochromic dyes, a first picture was taken before heat had been applied to the sample. After taking that picture, we applied heat to the same sample using an iron. After heating up the sample, we placed it in front of the camera, turned off the lights and took another picture of the same sample, which now had a different colour. This process was repeated for all the other samples.
Figure 21 – Experiment 1 Set Up
Figure 22 – Experiment 1 Set Up 2
Finally, to calculate the RGB values of each sample, we opened each picture with Photoshop CS5. There, using the Photoshop tools, we selected the portion of the image containing our sample, opened the "Histogram" window and then extracted the "Mean" value of each of the Red, Green and Blue channels.
5.1.4.2. Experiment 2
After collecting data for each sample under controlled lighting conditions, using pictures and image editing software, we then set up another experiment in which we collected data under three other non-controlled lighting conditions, using our application.
Three different measurements were performed at three different times during a single day. At the beginning of each of the three measurements, we used a light meter to measure the amount of light that the samples would be exposed to at that moment. The first measurements were performed in the middle of the day, with high natural lighting (Light 1), with the samples exposed to ≈2490 lux of light. The second measurements were performed at the end of the day, with low natural lighting (Light 2), with the samples exposed to ≈28 lux of light. The third and final measurements were performed at night, with no natural lighting, using artificial lighting (Light 3), with the samples exposed to ≈71 lux of light.
In a room with plenty of natural lighting, a Kinect sensor, which was connected to a laptop running the developed application, was placed facing a wall (Figures 23 and 24). To test the first sample, we stuck it to the wall right in front of the Kinect and configured the application to detect the sample's colour. When the colour was detected, the RGB values were automatically calculated and the data recorded. For each sample with thermochromic dyes, a
first measurement was made before heat had been applied to the sample. After recording the
necessary data, we applied heat to the same sample using an iron. After heating up the
sample, we repeated the same process as described before. This process was repeated for all
the other samples.
Figure 23 – Experiment 2 Set Up
Figure 24 – Experiment 2 Set Up 2
5.2. Results
Figure 25 shows the mean values of the Red, Green and Blue colours for each lighting
condition. The error bars represent the 95% confidence interval for the mean. Inspecting
Figure 25, and comparing the measured RGB values under Light 0 with the values measured
under Light 1, 2 and 3, it appears that the RGB colour values detected by Kinect are highly
affected by light. Figure 25 also suggests that the RGB values detected are more affected
under lower lighting conditions, Light 2 and Light 3.
Figure 25 – Effects of lighting on RGB colour values
Figure 26 – Red value variation from Light 0
Observing Figure 26, it appears that the red values measured under Light 2 are the ones that differ the most from the values measured under Light 0. It is also apparent that the red values of Light 1 are the closest to the red values of Light 0.
In the same way, observing Figure 27, which contains the green values, it appears that the green values measured under Light 1 and Light 2 are, respectively, the closest to and the most different from the green values measured under Light 0.
Figure 27 – Green value variation from Light 0
Finally, in Figure 28, the same apparent tendency is observed for the blue values, with the blue values measured under Light 1 and Light 2 being, respectively, the closest to and the most different from the blue values measured under Light 0.
Figure 28 – Blue values variation from Light 0
5.2.1. Red Colour Results
Figure 29 shows the mean values of the red colour for each lighting condition. The
error bars represent the 95% confidence interval for the mean. Inspecting Figure 29, and
comparing the measured values of red under Light 0 with the values of red measured under
Light 1, 2 and 3, it appears that the red colour values detected by Kinect are highly affected
by lighting. This figure also suggests that red values detected are more affected under lower
lighting conditions, Light 2 and Light 3.
Figure 29 – Mean value of Red
A one-way repeated measures ANOVA was performed on these data. Inspecting the
results of the Mauchly’s test we were able to verify that the assumption of sphericity had
been violated, χ2(5) = 32.18, p < .05. Therefore, degrees of freedom were corrected using
Greenhouse-Geisser estimates of sphericity (ϵ = .50). The results show that the values of the
red colour detected were significantly affected by the amount of light the samples were
exposed to, F(1.49, 29.77) = 78.59, p < .05, r = 0.89.
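For reference, the corrected degrees of freedom follow directly from the Greenhouse-Geisser estimate. Assuming 21 measurements per lighting condition (the 12 samples, 9 of which were also measured after heating – an interpretation consistent with the reported values rather than something stated explicitly here), the uncorrected degrees of freedom are $k-1 = 3$ and $(k-1)(n-1) = 60$, so
\[
df_{\text{effect}} = \epsilon\,(k-1) \approx 0.50 \times 3 \approx 1.49, \qquad
df_{\text{error}} = \epsilon\,(k-1)(n-1) \approx 0.50 \times 60 \approx 29.77,
\]
which matches the values reported above (the small differences come from rounding $\epsilon$ to two decimal places).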
Bonferroni post hoc tests revealed significant differences on the Red colour values
between Light 0 and Light 2, CI.95 = 50.54 (lower) 93.55 (upper), p < .05, and between Light
0 and Light 3, CI.95 = 5.34 (lower) 41.89 (upper), p < .05.
(Values plotted in Figure 29 – mean Red: Light 0 = 143.43, Light 1 = 137.86, Light 2 = 71.38, Light 3 = 119.81.)
Figure 30 – Estimated marginal means of Red values
5.2.2. Green Colour Results
Figure 31 shows the mean values of the green colour for each lighting condition. The
error bars represent the 95% confidence interval for the mean. Inspecting Figure 31, and
comparing the measured values of green under Light 0 with the values of green measured
under Light 1, 2 and 3, it appears that the green colour values detected by Kinect are highly
affected by lighting. This figure also suggests that green values detected are highly affected
not only under lower lighting conditions, Light 2 and Light 3, but also under higher lighting
conditions, Light 1.
Figure 31 – Mean values of Green
(Values plotted in Figure 31 – mean Green: Light 0 = 121.24, Light 1 = 96.10, Light 2 = 53.14, Light 3 = 72.81.)
A one-way repeated measures ANOVA was performed on these data. Inspecting the
results of the Mauchly’s test we were able to verify that the assumption of sphericity had
been violated, χ2(5) = 34.59, p < .05. Therefore, degrees of freedom were corrected using
Greenhouse-Geisser estimates of sphericity (ϵ = .48). The results show that the values of the
green colour detected were significantly affected by the amount of light the samples were
exposed to, F(1.43, 28.58) = 60.20, p < .05, r = 0.86.
Bonferroni post hoc tests revealed significant differences on the Green colour values
between Light 0 and Light 1, CI.95 = 15.41 (lower) 34.87 (upper), p < .05, between Light 0
and Light 2, CI.95 = 45.54 (lower) 90.65 (upper), p < .05, and between Light 0 and Light 3,
CI.95 = 31.34 (lower) 65.51 (upper), p < .05.
Figure 32 – Estimated marginal means of Green values
5.2.3. Blue Colour Results
Figure 33 shows the mean values of the blue colour for each lighting condition. The
error bars represent the 95% confidence interval for the mean. Inspecting Figure 33, and
comparing the measured values of blue under Light 0 with the values of blue measured under
Light 1, 2 and 3, it appears that the blue colour values detected by Kinect are highly affected
by lighting. This figure also suggests that blue values detected are more affected under lower
lighting conditions, Light 2 and Light 3.
Figure 33 – Mean values of Blue
A one-way repeated measures ANOVA was performed on these data. Inspecting the
results of the Mauchly’s test we were able to verify that the assumption of sphericity had
been violated, χ2(5) = 35.57, p < .05. Therefore, degrees of freedom were corrected using
Greenhouse-Geisser estimates of sphericity (ϵ = .49). The results show that the values of the blue colour detected were significantly affected by the amount of light the samples were exposed to, F(1.46, 29.19) = 58.06, p < .05, r = 0.85.
Bonferroni post hoc tests revealed significant differences on the blue colour values
between Light 0 and Light 2, CI.95 = 49.27 (lower) 92.55 (upper), p < .05, and between Light
0 and Light 3, CI.95 = 1.80 (lower) 43.54 (upper), p < .05.
Figure 34 – Estimated marginal means of Blue values
(Values plotted in Figure 33 – mean Blue: Light 0 = 134.71, Light 1 = 124.57, Light 2 = 63.81, Light 3 = 112.05.)
5.3. Discussion
The main result of this study was that Kinect does perform better under high lighting
conditions than under low lighting conditions. These results also show that lighting
conditions affect significantly the colours that are detected by the Kinect sensor. This is true
especially for low lighting conditions, where the colours detected by Kinect are significantly
affected. It is also possible to conclude from these results that, between the 3 different RGB
colours, the green colour values are the ones that are more affected by the lighting conditions.
Taking into consideration that experiment 2 was performed under non-controlled lighting conditions, it is possible that some values may vary slightly from the values collected (results can be seen in Appendix E) if the same experiment is repeated under the same lighting conditions. However, that variation, if it occurs at all, will be very small and won't have a significant impact on the tests performed.
Problems would arise in a scenario where the developed application and Kinect were used to automatically detect several different colours under very low lighting conditions. The main problem would be the difficulty in distinguishing between the different colours. In a scenario with good lighting conditions, the results suggest that Kinect and the developed application would perform well, being able to detect a colour from a wide range of different colours with a considerably high level of accuracy.
It is also important to mention that, although the results showed a significant difference in the colour detected under Light 0 when compared to Light 3, lighting conditions similar to Light 3 could still be used in a real scenario. The performance would be worse, but the developed application and Kinect would still be able to detect different colours with a certain level of accuracy.
In future research, it would be useful to perform the same experiments under more (and different) lighting conditions, especially brighter ones. That way it would be possible to collect more data which could be used to extend this study further. As mentioned before, because experiment 2 was conducted in a non-controlled lighting environment, where it is impossible to control all variables, the data collected can vary slightly if collected again under the same lighting conditions. Therefore, it
would also be useful to perform those experiments under controlled lighting environments, instead of the non-controlled lighting environments used in this study. That would increase the validity of the results.
Finally, different techniques could also be used to check how accurate the values read by the developed application are. That would allow researchers to compare the values and analyse the application's accuracy.
Chapter 6
Professional, Legal and Ethical Issues
During this project, no professional, legal or ethical questions needed to be addressed. No human subjects other than the developers were involved in any kind of experiment or were asked to provide any kind of opinion or feedback during the entire project. All data collected during the experiments is non-protected and can be used by others.
The only question that could possibly be raised in terms of legal issues is the licensing of the software used during the development of the application. The version of the Kinect SDK that was used was the free non-commercial version. Regarding Visual Studio 2010, the free Express edition was used. Finally, a trial version of Photoshop CS5 and SPSS with an academic licence were also used during this project.
Chapter 7
Project Plan
Taking into consideration the users of the application and the requirements identified and described earlier in this report, we now present our project plan.
7.1. Project Task Analysis
Figure 35 illustrates the work breakdown structure of the project.
Figure 35 – Work Breakdown Structure
Project
1. Design (5 Days)
  1.1 Identify Processes (1 Day)
  1.2 Identify I/O (1 Day)
  1.3 Identify Interfaces (1 Day)
  1.4 Create Test Plan (1 Day)
  1.5 Validate Specifications (1 Day)
2. Implementation (55 Days)
  2.1 Create Source Code to Implement the Colour Recognition and Measurement System (35 Days)
  2.2 Create Source Code to Implement Unit Tests (5 Days)
  2.3 Write Documentation (10 Days)
  2.4 Initial Testing and Feedback (5 Days)
3. Testing and Debugging (6 Days)
  3.1 Create Test Data (1 Day)
  3.2 Test Colour Recognition and Measurement System (5 Days)
4. Evaluation (5 Days)
  4.1 Perform Necessary Evaluations (5 Days)
5. Report and Poster (33 Days)
  5.1 Project Report Write-Up (25 Days)
  5.2 Hand In Project Report (1 Day)
  5.3 Poster Preparation (6 Days)
  5.4 Poster Session (1 Day)
7.2. Project Schedule
This project has an expected development time of approximately three and a half months, starting on April 30th 2012 and ending on August 23rd 2012 with the poster session. Taking into consideration the time available for the development of this project and the steps needed to complete it, the following Gantt chart illustrating the project schedule (Figure 36) was defined:
Figure 36 - Gantt Chart
7.3. Risk Analysis
Table 2 shows the project risk matrix. In this table we identify possible risks that may occur throughout the project, their impact, how likely they are to occur, and a description of what can be done to prevent or minimise their occurrence.
Risk: Not having access to the necessary software and/or hardware. Impact: High. Likelihood: Low. Contingency plan: Make sure that the necessary tools will be available by the time the project starts; if necessary, buy some of the hardware.
Risk: Learning the Kinect SDK takes longer than expected. Impact: Medium. Likelihood: Low. Contingency plan: Start studying the SDK before starting the project.
Risk: Implementation of any task takes longer than expected. Impact: High. Likelihood: Medium. Contingency plan: Try to have as much float time as possible.
Risk: Timeline estimates unrealistic. Impact: Medium. Likelihood: Low/Medium. Contingency plan: Try to have as much float time as possible; if that is not enough, reduce the number of implemented specifications.
Risk: Kinect malfunction and time needed to get a new one. Impact: High. Likelihood: Low. Contingency plan: Try to have as much float time as possible.
Risk: Impossibility to work on the project due to illness or any other reason. Impact: Medium. Likelihood: Low. Contingency plan: Try to have as much float time as possible.
Risk: Project supervisor unavailability due to illness, lack of time or any other reason. Impact: Medium. Likelihood: Medium. Contingency plan: Move tasks where supervisor intervention will be needed to the start of the project, if possible.
Risk: Bugs in the code. Impact: Medium. Likelihood: Medium. Contingency plan: Make sure that the necessary tests are performed every time major changes in the code occur.
Risk: Not enough thermochromic material samples to perform the evaluation. Impact: High. Likelihood: Low. Contingency plan: Make sure that enough thermochromic samples are made available for testing and evaluating the system.
Risk: Not having access to the necessary devices to apply heat or cold to the thermochromic material samples. Impact: Low. Likelihood: Low. Contingency plan: Make sure that the necessary tools will be available by the time the evaluation process starts.
Risk: Not having access to a device capable of measuring the amount of light. Impact: Medium. Likelihood: Medium. Contingency plan: Ask our supervisor for one early, before the evaluation process.
Table 2 – Project Risk Matrix including Contingency Plan
Chapter 8
Conclusion & Future Work
Taking into consideration everything that was said throughout this report and recalling all the aspects that led to this project, we present, in this chapter, the conclusions of this Master's dissertation.
As mentioned in Chapter 1, during the workshop/brainstorming session organised with one of the "Smart Costumes" research associates, some ideas were discussed that raised questions about Kinect's performance in detecting colour under different lighting conditions. After reviewing the literature, we were not able to find any previous research on or related to this particular subject, a fact that led us to this project.
Through the creation of a new software application that is capable of detecting, tracking and measuring colour, as well as automatically detecting colour changes in real-time images captured by the Kinect sensor, we were able to conduct an initial study of Kinect's performance under different lighting conditions. The evaluation results showed that lighting conditions significantly affect Kinect's performance, and that Kinect performs better under high lighting conditions than under low lighting conditions.
Taking into account the results of this study, we suggest that, if the initial idea of the "Smart Costumes" project – which will make use of the developed application and Kinect in a real dance environment – is implemented, an environment with low lighting conditions should be avoided. Other than that, the developed application and Kinect should perform well and should be able to automatically detect and track different colours in a smart costume.
This new software represents a tool which can, in the future, be used by the "Smart Costumes" project researchers in their research. Its user-friendly interface will allow current and future researchers to be ready to operate it in a short period of time. The way the code
was developed and commented will also allow current and future researchers with technical knowledge to add new features to the application very easily.
We can’t say, however, that “everything is done”, and in that perspective there is for
sure some future work that can be developed. One of the non-mandatory requirements of the
developed application was the use Kinect’s depth sensor to track and record performer’s
space position data, feature which, due to the lack of time, was not possible to develop.
Kinect’s depth sensor can be easily integrated in the developed application in the future,
which will allow amazing feature to be added to the application.
As this project is just a small part of the "Smart Costumes" project, the next step will be its integration – the integration of the developed application – with the other parts of the "bigger" project. This is something that is already being planned and will be done in the near future. After a successful integration, we think that other studies can be performed based on this project and on the developed application. Some technical improvements can also be made to the application in the future, mainly in the code. But we think that the next step, in terms of technical work, would be to adapt (port) the application so that it could run on operating systems other than Windows.
The fact that our project was developed as part of the "Smart Costumes" research project, that we worked directly with people who are currently working on that project, and also the fact that our work may actually have some practical implementation in the near future, gave this MSc dissertation project a whole new dimension, making it more challenging and, at the same time, more interesting.
References
[1] S. L. P. Tang and G. K. Stylios, “An overview of smart technologies for clothing
design and engineering,” International Journal of Clothing Science and Technology,
vol. 18, no. 2, pp. 108–128, 2006.
[2] L. V. Langenhove and C. Hertleer, “Smart clothing: a new life,” International Journal
of Clothing Science and Technology, vol. 16, no. 1/2, pp. 63–72, 2004.
[3] X. Tao, Smart fibres, fabrics and clothing. Woodhead Publishing Limited, 2001.
[4] C. A. Norstebo, “Intelligent Textiles, Soft Products,” Journal of Future Materials,
2003.
[5] J. Berzowska, “Electronic textiles: Wearable computers, reactive fashion, and soft
computation,” Textile: The Journal of Cloth and Culture, vol. 3, no. 1, pp. 58–75,
2005.
[6] X. Zhang and X. Tao, “Smart textiles: Passive Smart,” Textile Asia, pp. 45–49, 2001.
[7] X. Zhang and X. Tao, “Smart textiles: Active smart,” Textile Asia, pp. 49–52, 2001.
[8] X. Zhang and X. Tao, “Smart textiles: Very smart,” Textile Asia, pp. 35–37, 2001.
[9] J. L. Hu, Shape memory polymers. Woodhead Publishing Limited, 2007.
[10] “Corn starch principle for new body armour material,” Smart Textiles and
Nanotechnology: The News Service for Textile Futures, no. September, p. 7, 2010.
[11] “Dyneema in Germany,” Smart Textiles and Nanotechnology: The News Service for
Textile Futures, no. June, p. 9, 2010.
[12] “Dyneema for water-jet protection suits...,” Smart Textiles and Nanotechnology: The
News Service for Textile Futures, no. September, p. 8, 2010.
[13] “Carbon-based protection,” Smart Textiles and Nanotechnology: The News Service for
Textile Futures, no. June, p. 8, 2010.
[14] A. Seeboth, R. Ruhmann, and O. Mühling, “Thermotropic and Thermochromic
Polymer Based Materials for Adaptive Solar Control,” Materials, vol. 3, no. 12, pp.
5143–5168, Dec. 2010.
[15] A. Seeboth and D. Lötzsch, Thermochromic phenomena in polymers. Smithers Rapra
Technology Limited, 2008.
[16] E. Strozyk, “Thermochromatic Wallpaper,” 2009. [Online]. Available:
http://www.fashioningtech.com/profiles/blogs/thermochromatic-wallpaper. [Accessed:
19-Mar-2012].
[17] L. Berglin, “Spookies: Combining smart materials and information technology in an
interactive toy,” in Proceedings of the 2005 conference on Interaction design and
children, 2005, pp. 17–23.
[18] J. Berzowska, “Memory rich clothing: second skins that communicate physical
memory,” in Proceedings of the 5th conference on Creativity & cognition, 2005, pp.
32–40.
[19] J. Berzowska and M. Coelho, “SMOKS: the memory suits,” in CHI ’06 extended
abstracts on Human factors in computing systems, 2006, pp. 538–543.
[20] A. Wakita and M. Shibutani, “Mosaic Textile: Wearable Ambient Display with Non-
emissive Color-changing Modules,” in Proceedings of the 2006 ACM SIGCHI
international conference on Advances in computer entertainment technology, 2006.
[21] K. Tsuji and A. Wakita, “Anabiosis: an interactive pictorial art based on polychrome
paper computing,” in Proceedings of the 8th International Conference on Advances in
Computer Entertainment Technology, 2011, pp. 11–12.
[22] K. Sung, “Recent Videogame Console Technologies,” Computer, vol. 44, no. 2, pp.
91–93, Feb. 2011.
[23] J. D. Huang, “Kinerehab: a kinect-based system for physical rehabilitation: a pilot
study for young adults with motor disabilities,” in The proceedings of the 13th
international ACM SIGACCESS conference on Computers and accessibility, 2011, pp.
319–320.
[24] A. Olive, “It walks, it climbs, it jumps, it even dances - meet Nasa’s six-legged, 12-
eyed robot that can be controlled... with an Xbox,” Daily Mail Online, 2012. [Online].
Available: http://www.dailymail.co.uk/sciencetech/article-2094833/It-walks-climbs-
jumps-dances--meet-Nasas-legged-12-eyed-robot-controlled--Xbox.html. [Accessed:
27-Feb-2012].
[25] P. Doliotis, A. Stefan, C. McMurrough, D. Eckhard, and V. Athitsos, “Comparing
Gesture Recognition Accuracy Using Color and Depth Information,” in Proceedings of
the 4th International Conference on PErvasive Technologies Related to Assistive
Environments, 2011.
[26] M. V. den Bergh, D. Carton, R. D. Nijs, N. Mitsou, C. Landsiedel, K. Kuehnlenz, D.
Wollherr, L. V. Gool, and M. Buss, “Real-time 3D hand gesture interaction with a
robot for understanding directions from humans,” RO-MAN, 2011 IEEE, pp. 357–362,
Jul. 2011.
[27] I. Oikonomidis, N. Kyriazis, and A. Argyros, “Efficient model-based 3d tracking of
hand articulations using kinect,” in Proceedings of the British Machine Vision
Conference, 2011, pp. 101.1–101.11.
[28] Z. Ren, J. Meng, J. Yuan, and Z. Zhang, “Robust hand gesture recognition with kinect
sensor,” in Proceedings of the 19th ACM international conference on Multimedia,
2011, pp. 759–760.
[29] E. Suma, B. Lange, A. Rizzo, D. Krum, and M. Bolas, “Faast: The flexible action and
articulated skeleton toolkit,” in Virtual Reality Conference (VR), 2011 IEEE, 2011, pp.
247–248.
[30] M. Raptis, D. Kirovski, and H. Hoppe, “Real-time classification of dance gestures
from skeleton animation,” in Proceedings of the 2011 ACM SIGGRAPH/Eurographics
Symposium on Computer Animation, 2011, vol. 1, pp. 147–156.
[31] D. S. Alexiadis, P. Kelly, P. Daras, N. E. O’Connor, T. Boubekeur, and M. B. Moussa,
“Evaluating a dancer’s performance using Kinect-based skeleton tracking,” in
Proceedings of the 19th ACM international conference on Multimedia, 2011, pp. 659–
662.
[32] F. Kistler, D. Sollfrank, N. Bee, and E. André, “Full Body Gestures enhancing a Game
Book for Interactive Story Telling,” in Interactive Storytelling, vol. 7069, Springer
Berlin / Heidelberg, 2011, pp. 207–218.
[33] M. Caon, Y. Yue, J. Tscherrig, E. Mugellini, and O. Abou Khaled, “Context-Aware
3D Gesture Interaction Based on Multiple Kinects,” in AMBIENT 2011, The First
International Conference on Ambient Computing, Applications, Services and
Technologies, 2011, pp. 7–12.
[34] Microsoft, “Kinect for Windows - What’s New,” 2012. [Online]. Available:
http://www.microsoft.com/en-us/kinectforwindows/develop/new.aspx. [Accessed: 15-
Mar-2012].
[35] OpenNI, “OpenNI Documentation.” [Online]. Available:
http://openni.org/documentation/. [Accessed: 15-Mar-2012].
[36] OpenKinect, “OpenKinect Main Page,” 2012. [Online]. Available:
http://openkinect.org/wiki/Main_Page. [Accessed: 15-Mar-2012].
[37] Evans Data Corporation, "Users' Choice: 2011 Software Development Platforms," 2011.
[Online]. Available:
http://laerer.rhs.dk/poulh/CMS2012s/softwaredevelopmentplatforms-
2011rankings.pdf. [Accessed: 21-Jul-2012].
[38] OpenCVWiki, “OpenCV,” 2012. [Online]. Available:
http://opencv.willowgarage.com/wiki/. [Accessed: 26-Jul-2012].
[39] “EmguCV,” 2012. [Online]. Available:
http://www.emgu.com/wiki/index.php/Main_Page. [Accessed: 26-Jul-2012].
[40] Wikipedia, “Windows Presentation Foundation,” 2012. [Online]. Available:
http://en.wikipedia.org/wiki/Windows_Presentation_Foundation. [Accessed: 24-Jul-
2012].
[41] Xceed, “Extended WPF Toolkit,” 2012. [Online]. Available:
http://wpftoolkit.codeplex.com/. [Accessed: 24-Jul-2012].
[42] Wikipedia, “Blob Detection,” 2012. [Online]. Available:
http://en.wikipedia.org/wiki/Blob_detection. [Accessed: 24-Jul-2012].
Bibliography
American Psychological Association (2009). Publication Manual of the American
Psychological Association (6th edition). Washington, D.C.: APA Books.
Field, A. & Hole, G (2003). How to Design and Report Experiments. London: Sage.
Appendices
Appendix A – Sequence Diagrams
applyFilters() method sequence diagram:
newSensor_AllFramesReady() method sequence diagram:
Appendix B – User’s Guide
B.1. Main Window
As soon as the user starts the application, the screen illustrated in Figure B.1 is
displayed. This screen, as the user will see, is common throughout the use of the application.
Keeping it common is intended to facilitate the application's usability.
Figure B.1 – Application’s Main Window
The main window of this application is divided into the following areas (Table B.1):
Area – Description
1. Colour Image – Window displaying the colour image captured by the Kinect sensor.
2. "Gray" Image – Window displaying the detected colour. (We say that a colour was detected when a green box is displayed around the detected colour.)
3. Frames Per Second (FPS) Indicator – Indicator of the frames per second of the colour image.
4. Average RGB Colour Values – The average R, G and B colour values of the detected colour.
5. Options – Some general options and some colour detection related options.
Table B.1 – Main Window
B.2. Main Window Options
B.2.1. General Options
Figure B.2 – General Options
Calculate RGB Colour/Gray Image
When this option is enabled, the average R, G and B values of the detected colour are calculated and displayed in the "RGB Colour/Gray Image" boxes. The average R, G and B values are calculated based on the RGB values of the pixels inside the green box drawn around the detected colour.
Figure B.3 – “Calculate RGB Colour Image” enabled
When this option is disabled, no calculations are made and no values are displayed in
the “RGB Colour/Gray Image” boxes.
Figure B.4 – “Calculate RGB Colour Image” disabled
If this option is enabled but no colour is detected, the R, G and B values are set to “0”.
Figure B.5 – “Calculate RGB Colour Image” enabled but no colour detected
Note: Enabling the "Calculate RGB Gray Image" option may make the application very slow, depending on the size of the detected colour.
RGB Colour/Gray Image Update Frequency (s)
With this option the user can set the frequency at which the average R, G and B values
are calculated.
By default, each calculation is performed and displayed every 0.5 seconds.
Blob Size
With this option the user can set the minimum allowed height and width of the blob (object/colour) that the application is able to detect.
By default, the blob's minimum height and width are set to 25.
Elevation Angle
This option allows the user to set the Kinect’s elevation angle.
By default, the Kinect’s elevation angle is set to 15.
B.2.2. Colour Detection Options
With these options, the user can manually change the RGB colour ranges, which
allow him to look for a certain colour. He’s also allowed to set a list of RGB colour ranges
and use the automatic colour detection.
Figure B.6 – Colour Detection Options
Component – Description
1. Radio Buttons (Red | Green | Blue | Custom) – Predefined RGB colour ranges.
2. Sliders and Up Down Boxes – Allow the user to set the RGB colour ranges. (Upper sliders/up down boxes represent the maximum value of the range; lower sliders/up down boxes represent the minimum value.)
3. Buttons – Allow the user to set, start and stop the automatic colour detection.
4. Automatic Detection State Label – Shows messages indicating the state of the automatic colour detection.
Table B.2 – Colour Detection Options
The automatic detection state label can show one of the following messages (Table
B.3):
State Label Message – Description
Stopped! – Automatic colour detection is not running.
Detecting… – Automatic colour detection is running and trying to detect the defined colour.
Colour detected in: n seconds. Changing colour in: m seconds – The defined colour was detected; the application will start looking for the next colour on the list in m seconds.
Colour detected in: n seconds. Automatic detection will finish in: m seconds – The last colour on the list was detected and the automatic colour detection will finish in m seconds.
Ended! – All the colours on the list were detected and the automatic detection has finished.
Table B.3 – State Label messages
B.3. Detect Colour
A colour is being detected if in the “Gray Image” a green box is drawn around the
colour (portion of the image). The pixels inside that box are the only pixels used to calculate
the average RGB values of the detected colour.
B.3.1. Manual Detection
To detect a specific colour the user needs to play with the red, green and blue
minimum and maximum values until he gets values that are within the range of the colour he
wants to detect.
B.3.2. Automatic Detection
To use the automatic detection the user needs first to press the button “Set Automatic
Colour Detection” which opens the “Colours to Detect” window. In this window, the user
needs to create a list with the colours he wants the application to detect. In this example, we
created a list with two colours, the predefined blue and the predefined green, each with a
10-second interval.
Figure B.7 – List of colours to detect
After closing the “Colours to Detect” window, the “Start” button in the Main Window
becomes enabled. This means that a list of colours to detect was created and that the user can
start the automatic detection whenever he wants.
Figure B.8 – Button “Start” enabled
To start the automatic detection the user just needs to press the “Start” button. When
the automatic detection starts the “Stop” button becomes enabled, allowing the user to stop it
at any time. The message in the state label also changes, now showing “Detecting…”.
In this example, the application starts by trying to detect an object with blue colour in
the image. As we can see in Figure B.9, there are no blue coloured objects in the image
captured by Kinect. The application keeps waiting until a blue coloured object is introduced
in the image (Figure B.10).
Figure B.9 – Detecting blue colour…
Figure B.10 – Blue colour detected
After detecting the colour, the application waits m seconds before changing to the next
colour (in this example, after detecting the blue colour the application waits 10 seconds).
Also, after detecting the colour, the message in the state label changes to “Colour detected in:
n seconds. Changing colour in: m seconds”. Once the m seconds have elapsed, the application
automatically switches the RGB colour range values and starts looking for the next colour on
the list (green) (Figures B.11 and B.12).
Figure B.11 – Detecting green colour…
Figure B.12 – Green colour detected
When the last colour in the list is detected, the message in the state label changes to
“Colour detected in: n seconds. Automatic detection will finish in: m seconds”. After those m
seconds, all the colours in the list have been detected and the automatic detection ends.
Figure B.13 – Automatic colour detection ended
Note: While the automatic detection is running, the user is not able to manually
change the RGB colour range values.
B.4. Colours to Detect Window
In this window the user can set a list of RGB colour ranges that will be used during
the automatic colour detection.
Figure B.14 – Colours to Detect Window
Component                                        Description
1. Radio Buttons (Red | Green | Blue | Custom)   Predefined RGB colour ranges.
2. Sliders and Up Down Boxes                     Allow the user to set the RGB colour ranges.
                                                 (Upper sliders/up down boxes represent the
                                                 maximum value of the range; lower sliders/up
                                                 down boxes represent the minimum value of the
                                                 range.)
3. Colours List (Grid)                           Grid containing a list with all the RGB colour
                                                 ranges defined by the user.
4. Add/Remove Buttons                            Allow the user to add a new row to or remove a
                                                 row from the grid.
5. Save/Load Colours List Buttons                Allow the user to save the colours list to a file
                                                 or load the colours list from a file.
6. Time                                          Allows the user to set the amount of time during
                                                 which a certain colour is going to be detected,
                                                 before changing to the next colour in the list.
                                                 (The timer starts after the colour has been
                                                 detected.)

Table B.4 – Colours to Detect Window
After adjusting the red, green and blue colour ranges and after setting the time, the
user needs to press “Add” to add a new row with the new values to the grid.
Figure B.15 – Add rows to the grid
To remove a row, the user just needs to select the row he wants to remove and press
“Remove”.
After setting up the list of colours, the user can close this window. The button “Start”,
in the main window, will become enabled and the user will be able to start the automatic
detection.
In this window, the user can also save the values present in the grid to a file or load
the values from a file.
Note: This application only allows files in the text (.txt) format.
Figure B.16 – Context Menu
If the grid is empty and the user presses the “Save Colours List” button, the UI will
display the following error message:
Figure B.17 – "No items to save" error
If the file is saved successfully, the UI will display the following message:
Figure B.18 – "File saved successfully" message
B.5. Possible Kinect Errors
There are several errors related to the Kinect sensor that can occur during the
execution of the application.
1. When a Kinect sensor is required but can't be detected, the UI displays a sensor
required message.
Figure B.19 – Kinect required message
2. When a Kinect sensor is required but it’s being used by another application, the UI
displays the following message:
Figure B.20 – Kinect is being used by another application message
3. When a Kinect sensor is plugged into the computer via its USB connection, but its
power cord appears to be unplugged, the UI displays the following message:
Figure B.21 – Plug power cord in message
Appendix C – Developer’s Guide
Figure C.1 – Class Dependencies Diagram
This application is divided into two WPF windows, each one with its own code, and
six classes.
In the next pages we will briefly describe the role of each window and each class in
the application. We will also describe in more detail some of the most important methods
created during the development process.
C.6. Windows
C.6.1. MainWindow.xaml
This class contains all the code necessary to manage the Main Window of the
application.
Figure C.2 – “Main Window” Class
When this window is initialised, several variables and classes are instantiated.
The variable blobRectangleBorders is an array that holds the coordinates of the green
rectangle drawn around the detected colour. The first position of the array contains the top
coordinate of the rectangle; the second position contains the bottom coordinate; the third
position contains the left coordinate; and finally, the fourth position contains the right
coordinate.
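For reference, the convention described above can be summarised as follows (this is an
illustration of the convention only; the array itself is declared and filled elsewhere in the
project):

//Layout of blobRectangleBorders (illustration of the convention described above)
//blobRectangleBorders[0] -> top coordinate of the green rectangle
//blobRectangleBorders[1] -> bottom coordinate
//blobRectangleBorders[2] -> left coordinate
//blobRectangleBorders[3] -> right coordinate
private int[] blobRectangleBorders = new int[4];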
A DispatcherTimer is also instantiated. According to MSDN, DispatcherTimer is “a
timer that is integrated into the System.Windows.Threading.Dispatcher queue which is
processed at a specified interval of time and at a specified priority”. timeToNewColour is a
timer that is used during the automatic detection. It controls the amount of time during which
a certain colour is going to be detected, before calling an event handler that is responsible for
changing the colour to be detected to the next colour on the list. The event handler
timeToNewColourElapsed() is invoked every time the timer interval has elapsed (ticks).
private DispatcherTimer timeToNewColour = new DispatcherTimer();
... this.timeToNewColour.Tick += this.timeToNewColourElapsed;
A Stopwatch is also instantiated. Also according to MSDN, Stopwatch “provides a set
of methods and properties that can be used to accurately measure elapsed time”. stopWatch is
then used to measure the time taken by the Kinect sensor and the application to detect a
certain colour.
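A minimal sketch of how such a stopwatch can be used is shown below; the exact points at
which the project starts and stops it are an assumption.

//Illustrative use of Stopwatch; where exactly the project starts and stops it is an assumption
System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
stopWatch.Start();    //started when the application begins looking for a colour
//... frames are processed until the defined colour is detected ...
stopWatch.Stop();     //stopped once the colour has been detected
string message = string.Format("Colour detected in: {0} seconds", stopWatch.Elapsed.TotalSeconds);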
The kinectSensorChooser1_KinectSensorChanged() method is an event handler that is
invoked every time the “KinectSensorChooser” component, from the Kinect SDK, changes
(this component changes every time the state of the Kinect sensor changes. Example: if
Kinect is unplugged or disconnected during the execution of the application, the component
state changes).
This method starts by getting the value of the component before the change. If this
value is not null, it means that the component has changed before. In that case, the event
handler newSensor_AllFramesReady() is removed from the oldSensor.AllFramesReady event
and the Kinect sensor is stopped.
var oldSensor = (KinectSensor)e.OldValue;

//Stop the old sensor
if (oldSensor != null)
{
    //Remove event handler from the oldSensor.AllFramesReady event
    oldSensor.AllFramesReady -= this.newSensor_AllFramesReady;
    ...
    StopKinect(oldSensor);
    ...
}
The application then gets the value of the component after the change and checks
whether that value is not null and whether the Kinect sensor is connected. If not, the method
finishes and only fires again when the “KinectSensorChooser” changes. If it is, the Kinect’s
colour stream is enabled, the event handler newSensor_AllFramesReady() is added to the
newSensor.AllFramesReady event, and the new sensor is started.
//Get and start the new sensor
var newSensor = (KinectSensor)e.NewValue;
if (newSensor != null && newSensor.Status == KinectStatus.Connected)
{
    newSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    newSensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(newSensor_AllFramesReady);

    //Start new sensor
    try
    {
        newSensor.Start();
        ...
The newSensor_AllFramesReady() method is an event handler that is fired when the
frame data for all streams (ColourStream, DepthStream and SkeletonStream) is ready. In this
application we only make use of the ColourStream, so we could have used the
ColorFrameReady() event handler instead of the AllFramesReady() event handler.
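For comparison, subscribing only to the colour stream would look roughly like the sketch
below; this is not what the application does, only an illustration of the alternative mentioned
above.

//Sketch of the ColorFrameReady alternative (illustration only)
newSensor.ColorFrameReady += (sender, args) =>
{
    using (ColorImageFrame colorFrame = args.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            //process the colour frame exactly as newSensor_AllFramesReady() does
        }
    }
};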
After opening the ColorImageFrame and checking that it is not null, the application
initialises a pixelData byte array with size equal to the size of the ColorImageFrame pixel
data. As this variable only needs to be initialised once, and not on every frame, we use a flag
to check whether the current frame is the first one processed; the flag is set to true only while
the first frame is being executed.
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
    try
    {
        if (colorFrame != null)
        {
            if (firstFrameFlag)
            {
                this.pixelData = new byte[colorFrame.PixelDataLength];
            }
            ...
Next, the ColorImageFrame pixel data is copied to the pixelData byte array. After
that operation, pixelData contains the information of all the pixels of the current frame.
colorFrame.CopyPixelDataTo(this.pixelData);
During the first frame a WriteableBitmap is also created. A WriteableBitmap is a
WPF construct that enables resetting the bits of the image. This method is more efficient than
creating a new Bitmap every frame. A WriteableBitmap is created on the first frame and then
reused on the other frames. Once created, it is set as the colour image source.
this.outputImage = new WriteableBitmap(colorFrame.Width, colorFrame.Height, 96, 96, PixelFormats.Bgr32, null);
this.imageRGB.Source = this.outputImage;
In the next step, the application updates the pixels data in the specified region of the
bitmap. The output of this operation gives us the colour image captured by the Kinect and
displayed in the Main Window.
//Set stride value. Stride is the number of bytes from one row of pixels in memory to the next row of pixels in memory
int stride = colorFrame.Width * colorFrame.BytesPerPixel;

this.outputImage.WritePixels(new Int32Rect(0, 0, colorFrame.Width, colorFrame.Height), this.pixelData, stride, 0);
Having captured and dealt with the colour image, the application now starts to deal
with the “gray image”. First of all, the colour image is converted from a WriteableBitmap
into a Bitmap. All the necessary filters and the blob detection method are then applied to the
result of that conversion. In the end, this returns a Bitmap that is set as the “gray image”
image source.
//Transform WriteableBitmap into Bitmap
bmp = imageProcessing.convertToBitmap(outputImage, colorFrame.Width, colorFrame.Height);

//Apply filters and blob detection
imageWithFilters = imageProcessing.applyFilters(bmp, blobRectangleBorders);

//Set the image that is displayed by the PictureBox component
//Gray Image
picBox.Image = imageWithFilters;
Next, the application checks if a colour was detected on the current frame and if the
automatic detection is running. If so, the content of the state label is changed accordingly.
The timeToNewColour timer is also set: its Interval sets the period of time between
timer ticks, which allows us to control the time during which a certain colour is going to
be detected, a time defined by the user in the “Colours to Detect” window. After setting
the time interval between timer ticks, the timer is started and a flag is set to true. This flag
will not allow this block of code to be executed again before the timeToNewColour time has
elapsed.
if ((imageProcessing.blobDetection.blobDetected(blobRectangleBorders) == true) &&
    (isAutomaticDetectionEnabled == true) && (isTimerRunning == false))
{
    int time = listOfColours[nextColour - 1].Time;

    if (listOfColours.Count > nextColour)
        lblState.Content = string.Format("Colour detected in: {0} seconds. Changing colour in: {1} seconds",
                                         stopWatch.Elapsed.TotalSeconds, time);
    else
        lblState.Content = string.Format("Colour detected in: {0} seconds. Automatic detection will finish in: {1} seconds",
                                         stopWatch.Elapsed.TotalSeconds, time);

    //Set time interval between timer ticks and start timer
    timeToNewColour.Interval = TimeSpan.FromSeconds(time);
    timeToNewColour.Start();

    //Flag that saves the state of the timeToNewColour timer
    isTimerRunning = true;
}
Finally, the updateFrameRate() method is fired. After calculating the current frame
rate, the frame rate text box is updated.
//Update Frame Rate
txtFrameRate.Text = imageProcessing.updateFrameRate();
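The updateFrameRate() method itself is not listed in this guide; a hypothetical sketch of a
frame rate counter of this kind is given below. The field names and the exact counting logic
are assumptions, not the project’s actual code.

//Hypothetical frame rate counter, called once per processed frame
private int totalFrames = 0;
private int framesAtLastUpdate = 0;
private int currentFrameRate = 0;
private DateTime lastFpsUpdate = DateTime.Now;

public string updateFrameRate()
{
    totalFrames++;
    if ((DateTime.Now - lastFpsUpdate).TotalSeconds >= 1)
    {
        currentFrameRate = totalFrames - framesAtLastUpdate;    //frames processed during the last second
        framesAtLastUpdate = totalFrames;
        lastFpsUpdate = DateTime.Now;
    }
    return currentFrameRate.ToString();
}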
C.6.2. ColoursToDetect.xaml
This window allows the user to set a list of colours, a list that is used by the automatic
detection.
Figure C.3 – “ColoursToDetect” Class
When this window is initialised a List<clsColourList> is created. This List<> is used
throughout this class as the grid’s data source.
The menuItemLoadFile_Click(object, RoutedEventArgs) is an event handler that is
called when the user clicks the “Load Colour List” button. This method instantiates a new
object of the type clsLoadFromFile and then calls the method load(), which reads a text (.txt)
file and returns a List<> with the values read.
clsLoadFromFile loadFromFile = new clsLoadFromFile();
List<clsColourList> newListOfColours = loadFromFile.load();
If the returned List<> has some items it is set as the grid’s data source.
if (newListOfColours.Count != 0)
{
    listOfColours = newListOfColours;

    //Set the List<> as the grid items source
    gridColours.ItemsSource = listOfColours;

    //Refresh grid items
    gridColours.Items.Refresh();
}
The btnAdd_Click(object, RoutedEventArgs) is an event handler that is called every
time the user clicks the “Add” button. This method creates a new object of the type
clsColourList, sets its attribute values and then adds it to the List<>. Finally, it sets the
List<> as the grid’s items source.
//Add object to the List<>
listOfColours.Add(new clsColourList()
{
    Sequence = _sequence,
    RedLow = (int)sliderRedLow.Value,
    RedHigh = (int)sliderRedHigh.Value,
    GreenLow = (int)sliderGreenLow.Value,
    GreenHigh = (int)sliderGreenHigh.Value,
    BlueLow = (int)sliderBlueLow.Value,
    BlueHigh = (int)sliderBlueHigh.Value,
    Time = (int)upDownTime.Value
});

//Set the gridRow List<> as the grid items source
gridColours.ItemsSource = listOfColours;

//Refresh grid items
gridColours.Items.Refresh();
C.7. Classes
C.7.1. clsColoursList.cs
This class contains several properties that are used to hold the values of all the
columns for each row of the DataGrid.
Figure C.4 – “clsColourList” Class
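Based on the attribute names used in btnAdd_Click() above, a minimal sketch of this class
could look as follows; the actual declarations in the project may differ.

//Sketch of clsColourList: one object per row of the DataGrid
public class clsColourList
{
    public int Sequence { get; set; }     //position of the colour in the detection list
    public int RedLow { get; set; }       //minimum of the red range
    public int RedHigh { get; set; }      //maximum of the red range
    public int GreenLow { get; set; }
    public int GreenHigh { get; set; }
    public int BlueLow { get; set; }
    public int BlueHigh { get; set; }
    public int Time { get; set; }         //seconds to keep detecting this colour before moving on
}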
C.7.2. clsImageProcessing.cs
This class contains all the attributes and methods used to process a given image so
that it can be used by the blob detection method. This class is also where the calculations
of the RGB values of the detected blob are performed.
Figure C.5 – “clsImageProcessing” Class
When this class is initialised, a new ColorFiltering is instantiated. This class belongs
to the AForge.NET framework and, according to its library, the colour filter filters pixels
inside/outside of a specified RGB colour range, keeping the pixels with colour that is inside
that range and filling the rest with a specified colour (in this case, black).
The convertToBitmap(WriteableBitmap, int, int) method is the method that is used to
convert a WriteableBitmap to a Bitmap. It receives as arguments the WriteableBitmap that is
going to be converted, and the colour frame width and height. This method returns the new
Bitmap after the conversion.
//Copy WriteableBitmap to BitmapSource
BitmapSource bmapSource = outputImage.Clone();

//Create new Bitmap
Bitmap bmp = new Bitmap(colorFrameWidth, colorFrameHeight, System.Drawing.Imaging.PixelFormat.Format32bppRgb);

//Specifies the attributes of a bitmap image
//LockBits locks a Bitmap into system memory
BitmapData dataBitmap = bmp.LockBits(new System.Drawing.Rectangle(System.Drawing.Point.Empty, bmp.Size),
                                     ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppRgb);

//Copies the bitmap pixel data within the specified range
bmapSource.CopyPixels(Int32Rect.Empty, dataBitmap.Scan0, dataBitmap.Height * dataBitmap.Stride, dataBitmap.Stride);

//Unlocks the Bitmap from system memory
bmp.UnlockBits(dataBitmap);
The applyFilters(Bitmap, int[]) method applies the necessary filters to the Bitmap
returned by the convertToBitmap() method, and then fires the method that is used to detect
blobs. It returns a Bitmap after all filters have been applied which is displayed as the “gray
image” in the Main Window.
First of all, the red, green and blue colour filters need to be configured. Each of these
filters receives a range of values (given by the AForge.NET framework as IntRange(int min,
int max)), and these are the colour ranges the filter keeps. I.e., all image pixels with RGB
values within these ranges are kept, all others are set to black.
colorFilter.Red = new IntRange(sliderRedLow, sliderRedHigh);
colorFilter.Green = new IntRange(sliderGreenLow, sliderGreenHigh);
colorFilter.Blue = new IntRange(sliderBlueLow, sliderBlueHigh);
After the filters have been configured, they are applied to the image (Bitmap).
Bitmap objectImage = colorFilter.Apply(bmp);
The Bitmap is then locked into system memory for further processing and a grayscale
algorithm is applied to the image in unmanaged memory. This operation results in a gray
image in unmanaged memory, which is exactly the image the blob detection algorithm uses
to detect blobs.
//Lock Bitmap for further processing
BitmapData objectData = objectImage.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
                                             ImageLockMode.ReadOnly, bmp.PixelFormat);

//Grayscaling
UnmanagedImage grayImage = Grayscale.CommonAlgorithms.BT709.Apply(new UnmanagedImage(objectData));
Finally, the blob detection method is called. This method filters and locates blobs,
returning the Bitmap that is displayed as the “Gray Image” in the Main Window.
objectImage = blobDetection.detectBlobs(grayImage, objectImage, blobRectangleBorders, minBlobWidth, minBlobHeight);
The calculateRGBValuesColourImage(int[], byte[]) method performs the calculation
of the average red, green and blue colour values of all the pixels that are inside the green box
drawn around the detected blob. These calculations are based on the colour image. It receives
as arguments the array containing the coordinates of the green rectangle drawn around the
detected blob, and a byte array (pixelData[]) that contains all the information of each pixel in
the colour image.
As the average RGB colour calculations are only performed when a blob is detected,
the first thing the application checks is if a blob was detected. When a blob is being detected,
the application checks if the defined number of seconds between RGB colour average
calculations (defined by the user in the Main Window “RGB Colour Image Update
Frequency” option) has elapsed since the last calculation (update).
if (DateTime.Now >= lastTime1.AddSeconds(updateFrequencyColour))
…
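A hypothetical completion of this check is sketched below; the body shown here, including
the calculateAverages() call, is an illustration only and not the project’s exact code.

if (DateTime.Now >= lastTime1.AddSeconds(updateFrequencyColour))
{
    //Recompute the average R, G and B values of the detected blob (hypothetical helper)
    calculateAverages(blobRectangleBorders, pixelData);

    //Remember when this update happened
    lastTime1 = DateTime.Now;
}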
The values inside the blobRectangleBorders array give us the coordinates of the
rectangle drawn around the detected blob, so, in a 640x480 image, we know exactly where
the detected blob is. However, all the information of each pixel in the colour image is saved
in pixelData[], which is a 1-dimensional byte array. Each pixel occupies 4 bytes, one byte
for each of the R, G, B and A channels. This means that, as we are using a 640x480 image,
the pixelData[] array has a size of 1228800 bytes (640 * 480 = 307200 pixels, and
307200 * 4 = 1228800 bytes). Consequently, some calculations need to be performed to find
the position in the array where each line of pixels starts.
Note: pixels information in the pixelData[] array are saved following this order: B, G,
R and A (pixelData[B,G,R,A,B,G,R,A,…]).
[Illustration: the colour image as a 640 (width) x 480 (height) grid of pixels, each pixel
stored as four R, G, B, A values; the 1st and 2nd lines of pixels are marked.]
The first thing that we need to know is which position of the array corresponds to
each line of pixels in the image. The first line of pixels starts in the position [0] of the array.
The second line of pixels starts after 640 pixels (which is the width of the image) times 4
(each pixel information size) which is equal to the position [2560] of the array. If the
rectangle top coordinate is, for example, 200, we need to know the array position that
corresponds to the line 200 of the image, which is 640 * 4 * 200 = 512000. This way, we can
calculate the array position that corresponds to the start of any of the 480 pixel lines of our
colour image. Consequently, and having the top and bottom coordinates of the green
rectangle, we are also able to calculate the array position that corresponds to the start of each
line in the rectangle.
for (int i = top; i < bottom; i++)
{
    int nextLine = 4 * 640 * (i);
    ...
Knowing the array position of the rectangle’s top and bottom coordinates, we now
need to calculate the array position corresponding to the left edge of the rectangle. This is
done simply by multiplying the left coordinate value by 4 (each pixel information size). The
red, green and blue values of the pixel are then added to the total values. This process is
repeated until we reach the array position corresponding to the right edge of the rectangle.
...
for (int j = left; j < right; j++)
{
    avgRed = avgRed + pixelData[nextLine + (j * 4) + 2];
    avgGreen = avgGreen + pixelData[nextLine + (j * 4) + 1];
    avgBlue = avgBlue + pixelData[nextLine + (j * 4)];
    countNPixels++;
}
Example (a 4x4 pixel image):

Line 1:  R,G,B,A   R,G,B,A   R,G,B,A   R,G,B,A
Line 2:  R,G,B,A   R,G,B,A   R,G,B,A   R,G,B,A
Line 3:  R,G,B,A   R,G,B,A   R,G,B,A   R,G,B,A
Line 4:  R,G,B,A   R,G,B,A   R,G,B,A   R,G,B,A

pixelData[B,G,R,A, B,G,R,A, B,G,R,A, B,G,R,A,    (Line 1)
          B,G,R,A, B,G,R,A, B,G,R,A, B,G,R,A,    (Line 2)
          B,G,R,A, B,G,R,A, B,G,R,A, B,G,R,A,    (Line 3)
          B,G,R,A, B,G,R,A, B,G,R,A, B,G,R,A]    (Line 4)
Finally, after getting the values of all the pixels inside the green rectangle, the red,
green and blue averages are calculated and the results copied to the RGBColourImage[]
array.
if (countNPixels != 0)
{
    RGBColourImage[0] = avgRed / countNPixels;
    RGBColourImage[1] = avgGreen / countNPixels;
    RGBColourImage[2] = avgBlue / countNPixels;
}
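For reference, the index arithmetic described above can be condensed into the following
self-contained helper. This is only an illustration of the calculation, not the project’s method,
and the parameter names and defaults are assumptions.

//Illustrative helper: average R, G and B inside a rectangle of a 640x480 BGRA frame
static int[] AverageRgbInRect(byte[] pixelData, int top, int bottom, int left, int right,
                              int frameWidth = 640, int bytesPerPixel = 4)
{
    long r = 0, g = 0, b = 0, count = 0;
    for (int y = top; y < bottom; y++)
    {
        int lineStart = y * frameWidth * bytesPerPixel;     //first byte of line y
        for (int x = left; x < right; x++)
        {
            int i = lineStart + x * bytesPerPixel;          //first byte of pixel (x, y): B, G, R, A
            b += pixelData[i];
            g += pixelData[i + 1];
            r += pixelData[i + 2];
            count++;
        }
    }
    return count == 0 ? new int[3] : new[] { (int)(r / count), (int)(g / count), (int)(b / count) };
}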
The calculateRGBValuesGrayImage(Bitmap, int[]) method performs the calculation
of the average red, green and blue colour values of all the pixels that are inside the green box
drawn around the detected blob. These calculations are based on the gray image. It receives
as arguments the gray image bitmap, and the array containing the coordinates of the green
rectangle drawn around the detected blob.
This method is very similar to the one described above. The only difference is that, as
here we have a bitmap instead of a 1-dimensional byte array, we can get the pixel information
directly, using the GetPixel(int, int) function. GetPixel(int, int) gets the colour of the
specified pixel in the Bitmap, and receives as arguments the x and y coordinates of the pixel.
//Go through the entire rectangle and get the RGB values of each pixel
for (int i = blobRectangleBorders[2]; i < blobRectangleBorders[3]; i++)
{
    for (int j = blobRectangleBorders[0]; j < blobRectangleBorders[1]; j++)
    {
        redGrayImage = redGrayImage + image.GetPixel(i, j).R;
        greenGrayImage = greenGrayImage + image.GetPixel(i, j).G;
        blueGrayImage = blueGrayImage + image.GetPixel(i, j).B;
        countNPixels++;
    }
}
C.7.3. clsBlobDetection.cs
This class contains all the attributes and methods used to detect blobs in a given
grayscale image.
Figure C.7 – “clsBlobDetection” Class
When this class is initialised, a new BlobCounter is instantiated. This class belongs to
the AForge.NET framework and it allows objects in an image, separated by a black
background, to be counted. According to the AForge.NET Framework library, “the class
counts and extracts stand-alone objects in images using connected components labelling
algorithm”.
The detectBlobs(UnmanagedImage, Bitmap, int[], int, int) method is the method used
to filter and locate blobs. It receives as arguments a grayscale image in unmanaged memory
that is used to locate the blobs, a Bitmap where the green rectangle around the blob that is
being currently detected is drawn, the array where the coordinates of the rectangle edges are
saved for later use, and the minimum blob width and height defined by the user in the Main
Window. This method returns a Bitmap after all filters have been applied and it is the one that
is displayed as the “gray image” in the Main Window.
The first step of this method is to configure the blob counter filters. Here, certain
blobs are filtered by their size, removing blobs that are smaller and/or bigger than the
specified limits (width and height are measured in pixels). The order in which the detected
blobs should be returned is also specified.
blobCounter.MinWidth = minBlobWidth;
blobCounter.MinHeight = minBlobHeight;
blobCounter.FilterBlobs = true;
blobCounter.ObjectsOrder = ObjectsOrder.Size;
Next, the grayscale image is processed and an object map built, which is used later to
extract blobs.
blobCounter.ProcessImage(grayImage);
After building the object map, we call another method from the BlobCounter class
that returns the rectangle coordinates of all the blobs detected in the image. As it was
specified in the code that the blobs should be ordered by size, all those coordinates are saved
to an array ordered by blob size, i.e., if more than one blob is detected in the image, the
coordinates of the biggest blob are saved to the first position of the array, the coordinates of
the second biggest blob are saved to the second position, and so on.
System.Drawing.Rectangle[] rects = blobCounter.GetObjectsRectangles();
The application always works with the biggest blob detected, and gets its coordinates
from the first position of the array.
System.Drawing.Rectangle objectRect = rects[0];
The objectRect variable stores a set of four integers that represent the location and size
of the blob rectangle. These integers are then copied to the blobRectangleBorders[] array
which is used throughout the application.
blobRectangleBorders[0] = objectRect.Top;
blobRectangleBorders[1] = objectRect.Bottom;
blobRectangleBorders[2] = objectRect.Left;
blobRectangleBorders[3] = objectRect.Right;
Finally, a green rectangle is drawn in the image based on the blob coordinates stored
on the objectRect variable.
Graphics g = Graphics.FromImage(objectImage);
using (System.Drawing.Pen pen = new System.Drawing.Pen(System.Drawing.Color.FromArgb(160, 255, 160), 3))
{
    g.DrawRectangle(pen, objectRect);
}
C.7.4. clsKinectElevationAngle.cs
This class contains several attributes and a method which are used to change the
Kinect’s elevation angle. This class is called every time the user modifies the “Elevation
Angle” option in the “Main Window”.
Figure C.8 – “clsKinectElevationAngle” Class
When this class is initialised a new DispatcherTimer is instantiated. The Interval sets
the period of time between timer ticks, which in this case is 0.5 seconds.
public readonly DispatcherTimer debounce = new DispatcherTimer
{
    IsEnabled = false,
    Interval = TimeSpan.FromMilliseconds(500)
};
The timer starts every time the user changes the “Elevation Angle” slider or up down
box in the Main Window. After a tick (0.5 seconds), the debounceElapsed(object,
EventsArgs) method is called. This method starts by stopping the timer and by saving the
current Kinect’s elevation angle to a new variable.
this.debounce.Stop();
int angleToSet = elevationAngle;
Next, the code checks if there is already an elevation angle update in progress (as we
will see below, every elevation angle update takes 1.5 seconds to complete). If there is an
update in progress, the timer starts again and the application tries to change the elevation
angle on the next tick.
if (this.backgroundUpdateInProgress)
{
    //Try again in a few moments
    this.debounce.Start();
}
If not, the backgroundUpdateInProgress flag is set to true and a new thread task is
created and started. Inside this thread the Kinect elevation angle value is changed to the new
value and, after that, the thread is suspended for 1.5 seconds. This waiting time gives the
Kinect engine enough time to perform the elevation angle change, in case the user decides to
change the elevation angle value repeatedly in a short period of time.
Task.Factory.StartNew(() =>
{
    try
    {
        //Check for not null and running
        if ((this.Kinect != null) && this.Kinect.IsRunning)
        {
            //We must wait at least 1 second, and call no more frequently than 15 times every 20 seconds.
            //So, we wait at least 1500ms afterwards before we set backgroundUpdateInProgress to false.
            this.Kinect.ElevationAngle = angleToSet;
            Thread.Sleep(1500);
        }
    }
    ...
C.7.5. clsLoadFromFile.cs
This class contains only one method, which contains all the code needed to read a
text (.txt) file and save the values read into a List<>. The load() method returns the List<>
with the read values.
Figure C.9 – “clsLoadFromFile” Class
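A minimal sketch of what load() might look like is given below; it assumes that each line of
the file stores the eight grid values (sequence, the three colour ranges and the time) separated
by semicolons. The project’s actual file layout, dialog handling and error checking may
differ.

//Hypothetical sketch of clsLoadFromFile.load(); the file layout is an assumption
public List<clsColourList> load()
{
    List<clsColourList> colours = new List<clsColourList>();

    //Let the user pick a .txt file (the application only allows the .txt format)
    Microsoft.Win32.OpenFileDialog dialog = new Microsoft.Win32.OpenFileDialog { Filter = "Text files (*.txt)|*.txt" };
    if (dialog.ShowDialog() != true)
        return colours;

    //Assumed layout: one grid row per line, values separated by semicolons
    foreach (string line in System.IO.File.ReadAllLines(dialog.FileName))
    {
        string[] v = line.Split(';');
        colours.Add(new clsColourList
        {
            Sequence = int.Parse(v[0]),
            RedLow = int.Parse(v[1]),
            RedHigh = int.Parse(v[2]),
            GreenLow = int.Parse(v[3]),
            GreenHigh = int.Parse(v[4]),
            BlueLow = int.Parse(v[5]),
            BlueHigh = int.Parse(v[6]),
            Time = int.Parse(v[7])
        });
    }
    return colours;
}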
C.7.6. clsSaveToFile.cs
This class contains only one method, which contains all the code needed to read the
values of each row of the DataGrid and save them to a text (.txt) file.
Figure C.10 – “clsSaveToFile” Class
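As with load(), a hypothetical sketch of the save method is shown below; the delimiter and
column order mirror the load() sketch above and are assumptions, not the project’s actual
file layout.

//Hypothetical sketch of a save method for clsSaveToFile; the file layout is an assumption
public void save(List<clsColourList> colours)
{
    Microsoft.Win32.SaveFileDialog dialog = new Microsoft.Win32.SaveFileDialog { Filter = "Text files (*.txt)|*.txt" };
    if (dialog.ShowDialog() != true)
        return;

    List<string> lines = new List<string>();
    foreach (clsColourList c in colours)
    {
        //One row of the DataGrid per line
        lines.Add(string.Join(";", c.Sequence, c.RedLow, c.RedHigh,
                                   c.GreenLow, c.GreenHigh, c.BlueLow, c.BlueHigh, c.Time));
    }
    System.IO.File.WriteAllLines(dialog.FileName, lines);
}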
Appendix D – Samples
In order to be able to test the application during the development process and then at
the evaluation stage, twelve different samples were created. A sample is composed of a piece
of fabric printed with one simple dye, or with a combination of a simple dye and a
thermochromic dye (the simple dye as base and the thermochromic dye on top of that base).
Having the thermochromic dye on top of the base dye means that, every time a certain
amount of heat is applied to the sample, its colour changes from the colour of the
thermochromic dye to the colour of the base dye.
Three of the twelve samples were printed with one simple dye only: sample 1 (blue
dye), sample 5 (green dye) and sample 9 (red dye), these being the base dyes of the rest of the
samples. On the other nine samples, a simple dye was first applied as a base, and a
thermochromic dye was then applied on top of that base.
Sample 1 – Blue Pigment:
Sample 2 – Yellow + Blue 31 °C on top of Blue Pigment Base:
Sample 3 – Leuco Black 40 °C on top of Blue Pigment Base:
Sample 4 – Red 31 °C on top of Blue Pigment Base:
Sample 5 – Green Pigment:
Sample 6 – Leuco Black 40 °C on top of Green Pigment Base:
Sample 7 – Blue 31 °C on top of Green Pigment Base:
Sample 8 – Red 31 °C on top of Green Pigment Base:
Sample 9 – Red Pigment:
Sample 10 – Blue 31 °C on top of Red Pigment Base:
Sample 11 – Yellow + Blue 31 °C on top of Red Pigment Base:
Sample 12 – Leuco Black 40 °C on top of Red Pigment Base:
Appendix E – Experiments Results
Optimum Light Conditions – Light 0

Sample #  Base Pigment  Base Pigment RGB Values      Thermochromic Dye      Thermochromic Dye RGB Values
          Colour        Detected by Kinect                                  Detected by Kinect
1         Blue          R: 85   G: 131  B: 219       ---                    R: -    G: -    B: -
2         Blue          R: 78   G: 125  B: 172       Yellow + Blue 31 °C    R: 78   G: 123  B: 128
3         Blue          R: 85   G: 128  B: 202       Leuco Black 40 °C      R: 52   G: 57   B: 69
4         Blue          R: 117  G: 148  B: 227       Red 31 °C              R: 143  G: 48   B: 63
5         Green         R: 178  G: 254  B: 193       ---                    R: -    G: -    B: -
6         Green         R: 174  G: 246  B: 170       Leuco Black 40 °C      R: 58   G: 60   B: 57
7         Green         R: 164  G: 240  B: 193       Blue 31 °C             R: 64   G: 105  B: 153
8         Green         R: 185  G: 251  B: 180       Red 31 °C              R: 210  G: 58   B: 63
9         Red           R: 253  G: 72   B: 108       ---                    R: -    G: -    B: -
10        Red           R: 226  G: 110  B: 143       Blue 31 °C             R: 84   G: 68   B: 125
11        Red           R: 254  G: 97   B: 126       Yellow + Blue 31 °C    R: 201  G: 96   B: 70
12        Red           R: 251  G: 81   B: 114       Leuco Black 40 °C      R: 72   G: 48   B: 54
High Natural Lighting (values detected by Kinect under ≈2490 lux) – Light 1

Sample #  Base Pigment  Base Pigment RGB Values      Thermochromic Dye      Thermochromic Dye RGB Values
          Colour        Detected by Kinect                                  Detected by Kinect
1         Blue          R: 85   G: 92   B: 202       ---                    R: -    G: -    B: -
2         Blue          R: 92   G: 100  B: 171       Yellow + Blue 31 °C    R: 84   G: 92   B: 105
3         Blue          R: 97   G: 101  B: 196       Leuco Black 40 °C      R: 70   G: 62   B: 83
4         Blue          R: 124  G: 121  B: 224       Red 31 °C              R: 108  G: 29   B: 51
5         Green         R: 172  G: 193  B: 125       ---                    R: -    G: -    B: -
6         Green         R: 157  G: 212  B: 140       Leuco Black 40 °C      R: 76   G: 65   B: 67
7         Green         R: 155  G: 200  B: 177       Blue 31 °C             R: 68   G: 77   B: 133
8         Green         R: 201  G: 227  B: 169       Red 31 °C              R: 181  G: 44   B: 64
9         Red           R: 243  G: 54   B: 111       ---                    R: -    G: -    B: -
10        Red           R: 199  G: 82   B: 142       Blue 31 °C             R: 69   G: 45   B: 102
11        Red           R: 244  G: 63   B: 117       Yellow + Blue 31 °C    R: 170  G: 85   B: 80
12        Red           R: 233  G: 38   B: 106       Leuco Black 40 °C      R: 67   G: 36   B: 51
Low Natural Light (values detected by Kinect under ≈28 lux) – Light 2

Sample #  Base Pigment  Base Pigment RGB Values      Thermochromic Dye      Thermochromic Dye RGB Values
          Colour        Detected by Kinect                                  Detected by Kinect
1         Blue          R: 43   G: 63   B: 123       ---                    R: -    G: -    B: -
2         Blue          R: 42   G: 57   B: 82        Yellow + Blue 31 °C    R: 50   G: 64   B: 71
3         Blue          R: 33   G: 41   B: 79        Leuco Black 40 °C      R: 20   G: 19   B: 25
4         Blue          R: 55   G: 60   B: 113       Red 31 °C              R: 67   G: 29   B: 42
5         Green         R: 78   G: 128  B: 76        ---                    R: -    G: -    B: -
6         Green         R: 75   G: 103  B: 60        Leuco Black 40 °C      R: 23   G: 19   B: 20
7         Green         R: 74   G: 100  B: 81        Blue 31 °C             R: 27   G: 31   B: 59
8         Green         R: 106  G: 149  B: 93        Red 31 °C              R: 97   G: 21   B: 32
9         Red           R: 137  G: 23   B: 52        ---                    R: -    G: -    B: -
10        Red           R: 113  G: 42   B: 69        Blue 31 °C             R: 43   G: 29   B: 59
11        Red           R: 141  G: 37   B: 63        Yellow + Blue 31 °C    R: 98   G: 43   B: 46
12        Red           R: 139  G: 37   B: 67        Leuco Black 40 °C      R: 38   G: 21   B: 28
Artificial Light (values detected by Kinect under ≈71 lux) – Light 3

Sample #  Base Pigment  Base Pigment RGB Values      Thermochromic Dye      Thermochromic Dye RGB Values
          Colour        Detected by Kinect                                  Detected by Kinect
1         Blue          R: 85   G: 80   B: 154       ---                    R: -    G: -    B: -
2         Blue          R: 85   G: 76   B: 131       Yellow + Blue 31 °C    R: 82   G: 72   B: 101
3         Blue          R: 80   G: 71   B: 131       Leuco Black 40 °C      R: 57   G: 41   B: 69
4         Blue          R: 91   G: 80   B: 149       Red 31 °C              R: 100  G: 34   B: 74
5         Green         R: 158  G: 177  B: 139       ---                    R: -    G: -    B: -
6         Green         R: 148  G: 155  B: 126       Leuco Black 40 °C      R: 79   G: 56   B: 77
7         Green         R: 143  G: 145  B: 140       Blue 31 °C             R: 70   G: 54   B: 111
8         Green         R: 164  G: 163  B: 139       Red 31 °C              R: 143  G: 40   B: 85
9         Red           R: 201  G: 24   B: 116       ---                    R: -    G: -    B: -
10        Red           R: 164  G: 55   B: 116       Blue 31 °C             R: 80   G: 46   B: 100
11        Red           R: 192  G: 36   B: 119       Yellow + Blue 31 °C    R: 142  G: 53   B: 94
12        Red           R: 183  G: 34   B: 113       Leuco Black 40 °C      R: 69   G: 37   B: 69
Appendix F – Evaluation Results
F.1. Red Colour One-Way Repeated Measures ANOVA Results
Mauchly's Test of Sphericity (b)
Measure: RedValue

Within Subjects   Mauchly's   Approx.      df   Sig.   Epsilon (a)
Effect            W           Chi-Square               Greenhouse-Geisser   Huynh-Feldt   Lower-bound
LightCondition    .180        32.127       5    .000   .496                 .527          .333

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent
variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are
displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept
Within Subjects Design: LightCondition
Tests of Within-Subjects Effects
Measure: RedValue

Source                                      Type III Sum   df       Mean Square   F        Sig.   Partial Eta
                                            of Squares                                            Squared
LightCondition         Sphericity Assumed   67566.905      3        22522.302     78.593   .000   .797
                       Greenhouse-Geisser   67566.905      1.489    45389.111     78.593   .000   .797
                       Huynh-Feldt          67566.905      1.581    42744.976     78.593   .000   .797
                       Lower-bound          67566.905      1.000    67566.905     78.593   .000   .797
Error(LightCondition)  Sphericity Assumed   17194.095      60       286.568
                       Greenhouse-Geisser   17194.095      29.772   577.520
                       Huynh-Feldt          17194.095      31.614   543.877
                       Lower-bound          17194.095      20.000   859.705
Pairwise Comparisons
Measure: RedValue

(I) Light    (J) Light    Mean Difference                        95% Confidence Interval for Difference (a)
Condition    Condition    (I-J)            Std. Error   Sig.(a)   Lower Bound    Upper Bound
1            2            5.571            3.661        .862      -5.145         16.288
             3            72.048*          7.347        .000      50.541         93.554
             4            23.619*          6.243        .007      5.344          41.894
2            1            -5.571           3.661        .862      -16.288        5.145
             3            66.476*          5.591        .000      50.112         82.841
             4            18.048*          4.133        .002      5.951          30.144
3            1            -72.048*         7.347        .000      -93.554        -50.541
             2            -66.476*         5.591        .000      -82.841        -50.112
             4            -48.429*         3.009        .000      -57.237        -39.620
4            1            -23.619*         6.243        .007      -41.894        -5.344
             2            -18.048*         4.133        .002      -30.144        -5.951
             3            48.429*          3.009        .000      39.620         57.237

Based on estimated marginal means
a. Adjustment for multiple comparisons: Bonferroni.
*. The mean difference is significant at the .05 level.
F.2. Green Colour One-Way Repeated Measures ANOVA Results
Mauchly's Test of Sphericity (b)
Measure: GreenValue

Within Subjects   Mauchly's   Approx.      df   Sig.   Epsilon (a)
Effect            W           Chi-Square               Greenhouse-Geisser   Huynh-Feldt   Lower-bound
LightCondition    .158        34.594       5    .000   .476                 .503          .333

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent
variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are
displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept
Within Subjects Design: LightCondition
Tests of Within-Subjects Effects
Measure: GreenValue

Source                                      Type III Sum   df       Mean Square   F        Sig.   Partial Eta
                                            of Squares                                            Squared
LightCondition         Sphericity Assumed   54538.893      3        18179.631     60.203   .000   .751
                       Greenhouse-Geisser   54538.893      1.429    38172.516     60.203   .000   .751
                       Huynh-Feldt          54538.893      1.508    36168.631     60.203   .000   .751
                       Lower-bound          54538.893      1.000    54538.893     60.203   .000   .751
Error(LightCondition)  Sphericity Assumed   18118.357      60       301.973
                       Greenhouse-Geisser   18118.357      28.575   634.064
                       Huynh-Feldt          18118.357      30.158   600.779
                       Lower-bound          18118.357      20.000   905.918
Pairwise Comparisons
Measure: GreenValue

(I) Light    (J) Light    Mean Difference                        95% Confidence Interval for Difference (a)
Condition    Condition    (I-J)            Std. Error   Sig.(a)   Lower Bound    Upper Bound
1            2            25.143*          3.324        .000      15.413         34.873
             3            68.095*          7.707        .000      45.537         90.654
             4            48.429*          5.837        .000      31.343         65.514
2            1            -25.143*         3.324        .000      -34.873        -15.413
             3            42.952*          6.236        .000      24.698         61.207
             4            23.286*          4.207        .000      10.972         35.599
3            1            -68.095*         7.707        .000      -90.654        -45.537
             2            -42.952*         6.236        .000      -61.207        -24.698
             4            -19.667*         3.384        .000      -29.573        -9.760
4            1            -48.429*         5.837        .000      -65.514        -31.343
             2            -23.286*         4.207        .000      -35.599        -10.972
             3            19.667*          3.384        .000      9.760          29.573

Based on estimated marginal means
*. The mean difference is significant at the .05 level.
a. Adjustment for multiple comparisons: Bonferroni.
F.3. Blue Colour One-Way Repeated Measures ANOVA Results
Tests of Within-Subjects Effects
Measure: BlueValue

Source                                      Type III Sum   df       Mean Square   F        Sig.   Partial Eta
                                            of Squares                                            Squared
LightCondition         Sphericity Assumed   62054.524      3        20684.841     58.064   .000   .744
                       Greenhouse-Geisser   62054.524      1.459    42525.532     58.064   .000   .744
                       Huynh-Feldt          62054.524      1.545    40167.073     58.064   .000   .744
                       Lower-bound          62054.524      1.000    62054.524     58.064   .000   .744
Error(LightCondition)  Sphericity Assumed   21374.476      60       356.241
                       Greenhouse-Geisser   21374.476      29.185   732.389
                       Huynh-Feldt          21374.476      30.898   691.771
                       Lower-bound          21374.476      20.000   1068.724
Mauchly's Test of Sphericity (b)
Measure: BlueValue

Within Subjects   Mauchly's   Approx.      df   Sig.   Epsilon (a)
Effect            W           Chi-Square               Greenhouse-Geisser   Huynh-Feldt   Lower-bound
LightCondition    .150        35.574       5    .000   .486                 .515          .333

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent
variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are
displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept
Within Subjects Design: LightCondition
Pairwise Comparisons
Measure: BlueValue

(I) Light    (J) Light    Mean Difference                        95% Confidence Interval for Difference (a)
Condition    Condition    (I-J)            Std. Error   Sig.(a)   Lower Bound    Upper Bound
1            2            10.143           3.854        .096      -1.138         21.424
             3            70.905*          7.393        .000      49.265         92.545
             4            22.667*          7.130        .028      1.797          43.537
2            1            -10.143          3.854        .096      -21.424        1.138
             3            60.762*          6.272        .000      42.402         79.122
             4            12.524           6.204        .343      -5.636         30.683
3            1            -70.905*         7.393        .000      -92.545        -49.265
             2            -60.762*         6.272        .000      -79.122        -42.402
             4            -48.238*         2.322        .000      -55.034        -41.442
4            1            -22.667*         7.130        .028      -43.537        -1.797
             2            -12.524          6.204        .343      -30.683        5.636
             3            48.238*          2.322        .000      41.442         55.034

Based on estimated marginal means
a. Adjustment for multiple comparisons: Bonferroni.
*. The mean difference is significant at the .05 level.
[Chart: Red Colour Results Comparison – red colour value (0–255) for each sample under Light 0, Light 1, Light 2 and Light 3]

[Chart: Red Colour Values – L0 vs. Mean L1,L2,L3 – red colour value (0–255) for each sample, Light 0 compared with the mean of Lights 1, 2 and 3]
[Chart: Green Colour Results Comparison – green colour value (0–255) for each sample under Light 0, Light 1, Light 2 and Light 3]

[Chart: Green Colour Values – L0 vs. Mean L1,L2,L3 – green colour value (0–255) for each sample, Light 0 compared with the mean of Lights 1, 2 and 3]
[Chart: Blue Colour Results Comparison – blue colour value (0–255) for each sample under Light 0, Light 1, Light 2 and Light 3]

[Chart: Blue Colour Values – L0 vs. Mean L1,L2,L3 – blue colour value (0–255) for each sample, Light 0 compared with the mean of Lights 1, 2 and 3]