By the name of Allah
THE ISLAMIC UNIVERSITY – FACULTY OF ENGINEERING
COMPUTER ENGINEERING DEPARTMENT
Final Work Summarization
Graduation Project-Part 1
Multi Touch Table
BY
Wafaa' Audah Haneen El-masry
Nisma Hamdouna Maysaa El-Safadi
SUPERVISOR
Eng. Wazen Shbair
Gaza, Palestine
June 4th, 2012
Dedication
We dedicate this work To our supervisor Eng. Wazen Shbair… To our university …
Acknowledgment
We wish to express our sincere appreciation and thanks to Eng. Wazen Shbair,
who gave us the chance to research and to develop our knowledge of the project
topic, and who supervised us wisely throughout the semester.
TABLE OF CONTENTS
Dedication ………………………………………………………………………. I
Acknowledgment ……………………………………………………………… II
Table of contents ……………………………………………………………… III
6- Now you can close the command window and go to "Control Panel" and then "Device
Manager".
7- Expand the Human Interface Devices. Right click on "Universal Software HID
device" and select "Disable", answer "Yes" for the prompt. Then "Enable" it again.
This forces a reload, and the driver should start working. To confirm that,
go to "Control Panel" and then "System" and check that "Pen and Touch:" reads "Touch
Input Available with 255 Touch Points".
8- Then proceed with either one of the following:
a- To run a Multi-touch application created with the Multi-Touch Vista Framework, go to
the Multi-Touch Vista folder extracted earlier, find
"Multitouch.Service.Console.exe" and double click to run it. The default input is
already set to MultipleMice, so you will see a red dot moving together with the
mouse, though not at the same location as the mouse cursor. You still have to
use the regular mouse cursor to interact with Windows, as the red dot only
interacts with applications created using the Multi-Touch Vista Framework. Whenever
you add or remove a mouse, you have to restart
"Multitouch.Service.Console.exe".
b- To test Multi-touch features in Windows 7, first go to the Multi-Touch Vista folder
extracted earlier, find "Multitouch.Service.Console.exe" and double click to run
it. You should see a red dot corresponding to the mouse cursor (probably not at the
same location). Now go to the same folder (use the regular mouse cursor, the red
dot doesn't interact with Windows yet) and find
"Multitouch.Driver.Console.exe", double click and run it. Now the Multi-touch
driver should be running, but the original mouse cursor still dominates. Now go
to the same folder and find "Multitouch.Configuration.WPF.exe", double click
and run it. Click on "Configure device", tick the empty box for "Block native
windows mouse input...." and press "Ok". Now the red dot can finally interact
with Windows. To stop it (sometimes mouse interaction is lost entirely after
testing for a long time), use "alt-tab" to reach the two command windows and
press "Enter" to end them.
Some of the Windows 7 multi-touch features to test are:
1- Paint
2- Internet Explorer or Firefox Browser (Zoom in and zoom out).
3- Activate the software keyboard on the left edge of the screen and type using it.
- CL Eye Platform Driver
CL Eye Platform Driver provides users with a signed hardware driver that exposes the Sony
PlayStation™ 3 Eye camera to third-party applications such as Adobe Flash, Skype, MSN or
Yahoo for video chat or conferencing. It provides full control of the camera, such as resolution,
exposure and gain settings. The CL Eye Platform Driver is also needed for Windows to
recognize the Sony PlayStation™ 3 Eye camera.
3.3 Hardware Requirements
- Introduction to Hardware
Multi-touch denotes a set of interaction techniques that allow computer users to control
graphical applications with several fingers. Multi-touch devices consist of a touch screen
(table) and other components as well as software that recognizes multiple simultaneous touch
points. At the moment there are five major techniques that allow the creation of
stable multi-touch hardware systems; these include: Frustrated Total Internal Reflection
(FTIR), Rear Diffused Illumination (Rear DI) such as Microsoft’s Surface Table, Laser Light
Plane (LLP), LED-Light Plane (LED-LP), and finally Diffused Surface Illumination (DSI).
Optical or light-sensing (camera-based) solutions make up a large percentage of multi-touch
devices. Scalability, low cost and ease of setup explain the popularity
of optical solutions. Each of the previous techniques consists of an optical sensor (typically a
camera), an infrared light source, and visual feedback in the form of a projection or LCD. Before
learning about each technique, it is important to understand the parts that all optical
techniques share.
1. Wooden or metal table or box.
2. Piece of glass.
3. Diffuser.
4. Projector.
5. Web camera.
6. Piece of mirror.
7. IR LEDs.
- The details of these components will be shown later.
3.4 Detailed description about components:
- Wooden or metal table or box: used to contain all the components inside it, and its
upper surface is the touch surface. The size of the table is bounded by the size of the wanted screen.
- The table that we need has these parameters: height 80 cm, width 60 cm, length 80 cm, and a screen size of 30 inches. Several holes must be opened to give access to the internal components and to vent the heat of the projector.
- Piece of glass which represents the surface (screen); it can be made of glass or
of Plexiglass material. The thickness of the surface is 3-5 cm.
- Diffuser, which is the upper layer of the table surface; this layer catches the picture from the projector and shields the camera from the effects of outside light. This layer can be made of cheap white nylon.
- Projector, used to project the picture onto the upper surface of the table; the quality of the projector affects the quality of the displayed picture.
- Mirror, used to increase the distance between the surface and the projector in order to obtain a larger screen.
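As an illustrative example (figures assumed, not taken from the project): a projector with a throw ratio of about 1.5 needs roughly 1.5 x 0.6 m ≈ 0.9 m of light path to fill a 30-inch (about 60 cm wide) screen, which only fits inside an 80 cm high table if the path is folded by a mirror.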
- Infrared LEDs, used to send infrared light towards the surface; every touching fingertip reflects the rays towards the bottom (at the exact touched point). The reflected
rays are captured by the camera and sent to the CPU. Four IR LED strips with 48 LEDs are needed.
- Camera, used to capture the infrared rays reflected when the surface is touched; it then sends the picture to the CPU. The camera must have a high frame rate and a high resolution, so that many pictures can be taken per second. A Sony camera named PS3 Eye will be used, which delivers 60 pictures per second at a resolution of 640x480. This camera has to be attached to the computer with a special driver.
- Every camera has a filter that prevents infrared rays from reaching the sensor. In this project we need only the infrared rays to be detected, so we must remove this filter from the camera and add a piece of film negative that blocks visible light from reaching the camera.
- CPU, for connecting to the table and for building and running the applications.
CHAPTER 4
CCV DETAILS
4.1 About CCV
- Community Core Vision, CCV for short, is an open-source, cross-platform solution
for computer vision and machine sensing. It takes a video input stream and outputs
tracking data (e.g. coordinates and blob size) and events that are used in building
multi-touch applications. The coordinate positions are published on port 3333 of the
computer; these coordinate positions can be read into Java. CCV can interface
with various web cameras and video devices as well as connect to various
TUIO/OSC/XML enabled applications, and supports many multi-touch lighting
techniques including FTIR, DI, DSI, and LLP, with expansion planned for future
vision applications (custom modules/filters).
- CCV outputs in three formats (XML, TUIO and Binary) over network sockets and
has an internal C++ event system.
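Since CCV publishes TUIO data on UDP port 3333, any TUIO-aware client can consume the blob coordinates. The following is a minimal Java sketch of such a client, assuming the TUIO Java reference library (TUIO.jar from tuio.org); the class name CcvTouchLogger is illustrative, and the exact TuioListener interface differs slightly between TUIO 1.0 and 1.1 (1.1 adds blob callbacks).

// Minimal sketch, assuming the TUIO 1.0-style Java reference library from tuio.org.
import TUIO.*;

public class CcvTouchLogger implements TuioListener {

    // Called when a new finger (cursor) touches the surface.
    public void addTuioCursor(TuioCursor tcur) {
        // getX()/getY() are normalized to 0..1; scale them to your screen as needed.
        System.out.println("down id=" + tcur.getCursorID()
                + " x=" + tcur.getX() + " y=" + tcur.getY());
    }

    // Called while the finger moves.
    public void updateTuioCursor(TuioCursor tcur) {
        System.out.println("move id=" + tcur.getCursorID()
                + " x=" + tcur.getX() + " y=" + tcur.getY());
    }

    // Called when the finger is lifted.
    public void removeTuioCursor(TuioCursor tcur) {
        System.out.println("up   id=" + tcur.getCursorID());
    }

    // Fiducial-marker (object) events are not used by a plain finger-touch table.
    public void addTuioObject(TuioObject tobj) {}
    public void updateTuioObject(TuioObject tobj) {}
    public void removeTuioObject(TuioObject tobj) {}

    // Marks the end of one TUIO frame.
    public void refresh(TuioTime frameTime) {}

    public static void main(String[] args) throws InterruptedException {
        TuioClient client = new TuioClient(3333);    // CCV sends TUIO over UDP port 3333
        client.addTuioListener(new CcvTouchLogger());
        client.connect();                            // start receiving TUIO messages from CCV
        Thread.sleep(Long.MAX_VALUE);                // keep the logger running
    }
}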
- To get this working with Surface, your best bet is the Multi-Touch Vista project, as it will take
TUIO input and dispatch WM_TOUCH events.
1. Source image - Displays the raw video image from either camera or video file.
2. Use Camera Toggle - Sets the input source to camera and grabs frames from
selected camera.
3. Use Video Toggle - Sets the input source to video and grabs frames from video file.
4. Previous Camera Button - Gets the previous camera device attached to computer
if more than one is attached.
5. Next Camera Button - Gets the next camera device attached to computer if more
than one is attached.
6. Tracked Image - Displays the final image after image filtering that is used for blob
detection and tracking.
7. Inverse - Track black blobs instead of white blobs.
8. Threshold Slider - Adjusts the level of acceptable tracked pixels. The higher the
option is, the bigger the blobs have to be in order to become tracked blobs.
9. Movement filtering - Adjust the level of acceptable distance (in pixels) before a
movement of a blob is detected. The higher the option is, the more you have to
actually move your finger for CCV to register a blob movement.
10. Min Blob Size - Adjust the level of acceptable minimum blob size. The higher the
option is, the bigger a blob has to be to be assigned an ID.
11. Max Blob Size - Adjust the level of acceptable maximum blob size. The higher
the option is, the bigger a blob can be before losing its ID.
12. Remove Background Button - Captures the current source image frame and uses
it as the static background image to be subtracted from the current active frame. Press
this button to recapture a static background image. (A schematic sketch of this
background-subtraction and threshold pipeline appears after this list.)
13. Dynamic Subtract Toggle - Dynamically adjusts the background image. Turn this
on if the environmental lighting changes often or false blobs keep appearing due to
environmental changes. The slider will determine how fast the background will be
learned.
14. Smooth Slider - Smoothes the image and filters out noise (random specs) from the
image.
15. Highpass Blur Slider - Removes the blurry parts of the image and leaves the
sharper brighter parts.
16. Highpass Noise - Filters out the noise (random specs) from the image after
applying Highpass Blur.
17. Amplify Slider - Brightens weak pixels. If blobs are weak, this can be used to
make them stronger.
18. On/Off Toggle - Present on each filter, this is used to turn the filter on or off.
19. Camera Settings Button - Opens a camera settings box. This will open more
specific controls of the camera, especially when using a PS3 Eye camera.
20. Flip Vertical Toggle - Flips the source image vertically.
21. Flip Horizontal Toggle - Flips the source image horizontally.
22. GPU Mode Toggle - Turns on hardware acceleration and uses the GPU. This is
best used on newer graphics cards only. Note: GPU mode is still in early development
and may not work on all machines.
23. Send UDP Toggle - Turns on the sending of TUIO messages.
24. Flash XML - Turns on the sending of Flash XML messages (no need for flosc
anymore).
25. Binary TCP - Turns on the sending of RAW messages (x,y coordinates).
26. Enter Calibration - Loads the calibration screen.
27. Save Settings - Saves all the current settings into the XML settings file.
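The background-subtraction, dynamic-subtract and threshold controls above configure what is essentially a per-pixel preprocessing pipeline that produces the Tracked Image. The following Java sketch is not CCV's actual code (CCV itself is written in C++ on openFrameworks); the class name, field names and slider values are illustrative only, and it shows only the core idea on a grayscale frame.

// Illustrative sketch of a background-subtraction and threshold pipeline,
// roughly what the CCV filter sliders control. Frames are int[height][width], 0-255.
public class BlobPreprocess {

    static boolean dynamicSubtract = true; // "Dynamic Subtract" toggle
    static double learnRate = 0.01;        // how fast the background is learned
    static int threshold = 40;             // "Threshold" slider value (assumed)

    static double[][] background;          // learned background image

    // Capture the current frame as the background ("Remove Background" button).
    static void removeBackground(int[][] frame) {
        int h = frame.length, w = frame[0].length;
        background = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                background[y][x] = frame[y][x];
    }

    // Produce the "Tracked Image": pixels brighter than the background by more
    // than the threshold become white (255), everything else black (0).
    static int[][] track(int[][] frame) {
        if (background == null) removeBackground(frame); // first frame becomes background
        int h = frame.length, w = frame[0].length;
        int[][] tracked = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double diff = frame[y][x] - background[y][x];
                tracked[y][x] = diff > threshold ? 255 : 0;
                // "Dynamic Subtract": slowly adapt the background to lighting changes.
                if (dynamicSubtract)
                    background[y][x] += learnRate * (frame[y][x] - background[y][x]);
            }
        }
        return tracked;
    }
}

In CCV the smoothing, highpass and amplify filters are applied around this same subtraction step, and the min/max blob size limits are enforced afterwards when connected white regions are grouped into blobs and given IDs.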
4.2 Community Core Vision (CCV) – Calibration
In order to calibrate CCV for your camera and projector/LCD, you’ll need to run the
calibration process. Calibrating allows touch points from the camera to line up with
elements on screen. This way, when touching something displayed on screen, the
touch is registered in the correct place. In order to do this, CCV has to translate
camera space into screen space; this is done by touching individual calibration points.
Follow the directions below to setup and perform calibration.
note: For those displaying an image on the touch surface (projector or LCD), you’ll
need to set up your computer so that the main monitor is the video projector so that
CCV is displayed on the touch surface.
Calibration Instructions
1. Press the enter calibration button or “c” to enter the calibration screen.
A grid of green crosses will be displayed. These crosses are the calibration
points you will touch once calibration begins (step 4).
There is a white bounding box that surrounds the calibration points. If a visual
image is not being displayed on the touch surface (MTmini users), skip to step
3; otherwise, continue.
2. (MTmini users skip this step) If the white bounding box is not fully visible or
aligned with the touch surface, follow the directions under Aligning Bounding Box to
Projection Screen displayed on the CCV screen to align the bounding box and
calibration points so they fit the touch surface. The goal is to match the white
bounding box to the left, right, top, and bottom of your screen.
Aligning Bounding Box to Projection Screen:
o Press and hold “w” to move the top side, “a” to move left side, “s” to
move bottom side, and “d” to move right side.
o While holding the above key, use the arrow keys to move the side in
the arrowed direction.
In other words, hold “up arrow”, then “left arrow” on your keyboard to get the
upper corner at the top left corner of your screen, then hold “s + down arrow”,
then “d + right arrow” to get the bottom right corner in position. Up, down,
right, left arrows will make the box move, and a combination of up, down,
right, left and w, a, s, d (z, q, s, d on an AZERTY keyboard) will make the edges
move.
3. If using a wide angle camera lens or need higher touch accuracy, more calibration
points can be added by following the Changing Grid Size directions on screen. note:
adding additional calibration points will not affect performance.
To Change Grid Size:
o Press “+” to add points or “-” to remove points along the x-axis.
o Hold “shift” with the above to add or remove points along the y-axis.
If this does not work, you may want to try “_”, and “+/-” from
the numerical pad.
4. Begin calibration by pressing “c.”
5. A red circle will highlight over the current calibration touch point. Follow the
directions on screen and press each point until all targets are pressed.
If not projecting an image on the touch surface (MTmini users), you may guess
or draw the touch points directly on the touch surface so you know where to
press.
If a mistake is made, press “r” to return to the previous touch point. If there
are false blobs and the circle skips without you touching, press “b” to
recapture the background and “r” to return to the previous point.
6. After all circles have been touched, the calibration screen will return and accuracy
may be tested by pressing on the touch area. If calibration is inaccurate, calibrate again
(Step 4) or return to the main configuration window (press “x”) to adjust the filters for better
blob detection.
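Conceptually, each calibration point tells CCV where a given camera-space coordinate should land on the screen, and positions between the points are interpolated. The following Java sketch is illustrative only: CCV's real calibration uses the full grid of points, the class name is hypothetical, and all pixel values are assumed. It shows a bilinear mapping using just four corner points.

// Illustrative sketch: map normalized camera coordinates to screen pixels
// by bilinear interpolation between four calibrated corner points.
public class CalibrationMap {

    // Screen positions (pixels) where the four corner calibration crosses were touched (assumed values).
    static double[] topLeft     = {12,   8};
    static double[] topRight    = {1268, 10};
    static double[] bottomLeft  = {10,   760};
    static double[] bottomRight = {1270, 762};

    // Map a camera-space point (cx, cy in 0..1) to screen space.
    static double[] cameraToScreen(double cx, double cy) {
        // Interpolate along the top and bottom edges, then between them.
        double topX = topLeft[0] + cx * (topRight[0] - topLeft[0]);
        double topY = topLeft[1] + cx * (topRight[1] - topLeft[1]);
        double botX = bottomLeft[0] + cx * (bottomRight[0] - bottomLeft[0]);
        double botY = bottomLeft[1] + cx * (bottomRight[1] - bottomLeft[1]);
        return new double[] {
            topX + cy * (botX - topX),
            topY + cy * (botY - topY)
        };
    }

    public static void main(String[] args) {
        double[] p = cameraToScreen(0.5, 0.5);  // a touch seen in the middle of the camera image
        System.out.println("screen x=" + p[0] + " y=" + p[1]);
    }
}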
CHAPTER 5
MULTI-TOUCH HELLO WORLD
5.1 Application using Visual Studio with C++
Multitouch "Hello World" program:
Start Visual Studio and create a new WPF project and name it
"MultitouchHelloWorld".
Add a reference to the Multitouch.Framework.WPF.dll assembly.