David Marimon (Catchoom) Getting Rid of the Marker: Object Recognition and Tracking

Transcript


Getting rid of the marker: object recognition and tracking of industrial equipment
June 1st, 2016
Augmented World Expo US
#AWE2016
David Marimon
CEO & Co-founder
[email protected]
+34 654 906 753


Content
- Current Limitations of Training and Maintenance using AR
- Markerless 3D Object Recognition of Industrial Equipment
- Working with Depth Sensing Cameras


Training and Maintenance with AR

Source: NGRAIN

Throughout the day, you'll be seeing solutions like this one that can serve multiple purposes: from visualizing the state of a certain machine, to learning how to manipulate it on-site or with remote assistance. The operator is provided with guidance and sometimes even live data coming from connected devices.

Let's look at commercial approaches and how this is solved today.

Current solutions rely on markers
Source: "NGRAIN Augmented Reality in 90 seconds" on YouTube.

Source: "iQAgent Overview" on YouTube.

There are several solutions on the market that provide excellent tools for field operators. Among them are NGRAIN's and iQAgent's solutions.

Systems like the ones shown in the pictures make use of fiducial markers or QR codes in order to identify and augment the equipment.

As you can see in those pictures, this requires the extra step of placing those markers. In some cases this may be a showstopper, depending on the application, the environment, or even customer requirements.

Current solutions rely on 3D CAD models
Source: 3dcadbrowser.com

Those that do not rely on markers often make use of 3D CAD models in order to recognize and track the object.

However, some manufacturers do not want to provide such CAD models to an integrator. This was a bit shocking the first time we encountered it while working for a car manufacturer, but it makes perfect sense if you think about it for a second. The reason is that industrial designs, and IP in general, are too important to hand over for a third-party integration, even if that integration is meant to help operators use their own equipment or parts.

------

Thomas Perpre from Diota recently said in a webinar hosted by the AREA that AR becomes most interesting when humans meet complexity. In my vision of what AR can bring to industrial applications, that means: let's make it less complex.

Markerless 3D Object Recognition
Catchoom has developed computer vision software to tackle both limitations:
- Markerless: the object itself is recognized.
- Based only on images and depth: no CAD model needed.

Catchoom is developing computer vision software that helps simplify the setup and makes the bridge between the equipment and the digital content associated with that equipment seamless.

We use images and depth as input for the setup; no other source of information is needed.
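Catchoom has not published how its recognition works, so the following is only a generic illustration of the "no CAD model" idea: one rigid-alignment step (a Kabsch/ICP iteration) that matches a live depth capture against a reference scan recorded earlier with the same kind of sensor. The function name and the use of SciPy's KD-tree are illustrative choices, not Catchoom's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(live_pts, ref_pts):
    """One point-to-point ICP iteration.

    Estimates the rigid transform (R, t) that moves `live_pts` towards its
    nearest neighbours in `ref_pts`. Both inputs are (N, 3) arrays, e.g.
    back-projected depth frames of the same piece of equipment.
    """
    # Find, for every live point, the closest point in the reference scan.
    tree = cKDTree(ref_pts)
    _, idx = tree.query(live_pts)
    matched = ref_pts[idx]

    # Kabsch: optimal rotation and translation between the matched sets.
    mu_live, mu_ref = live_pts.mean(axis=0), matched.mean(axis=0)
    H = (live_pts - mu_live).T @ (matched - mu_ref)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_ref - R @ mu_live
    return R, t                               # apply as p' = R @ p + t

# Iterating icp_step (applying R, t to the live cloud each time) converges to
# the pose of the equipment relative to the stored reference scan; the
# reference is itself just a capture, no CAD model involved.
```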

Markerless 3D Object Recognition
Current solution:
- Textureless: no need to have plenty of corners, edges, or any well-defined pattern on the object.
- On-Device: everything can run inside the portable device of the operator.


Markerless 3D Object Recognition


Working with Depth Cameras

A depth camera is a device that provides, for every pixel, the distance from the camera to whatever object is depicted. These cameras work in different ways, but a very common one is to project a pattern with an infrared light emitter and then capture the light that comes back from the real world into the camera. Depending on the deformation of the pattern, or on the time it takes for the light to travel out into the world and be reflected back into the sensor, it is possible to estimate the depth.
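In practice, those per-pixel distances are usually back-projected into a 3D point cloud before any recognition or tracking happens. Below is a minimal NumPy sketch of that back-projection under the standard pinhole camera model; the intrinsics (fx, fy, cx, cy) used in the example are made-up placeholders, as the real values come from the sensor's calibration.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image into 3D camera coordinates.

    depth_m : (H, W) array of depths in metres; 0 marks pixels where the
              sensor returned no measurement.
    fx, fy, cx, cy : pinhole intrinsics of the depth camera.
    Returns an (N, 3) array of XYZ points for the valid pixels.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    valid = depth_m > 0
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Example with a synthetic 480x640 frame and placeholder intrinsics.
depth = np.full((480, 640), 1.5)   # pretend everything is 1.5 m away
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)                # (307200, 3)
```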

There are several commercial depth cameras on the market. They may look like this one from Occipital, or, if you use the Microsoft Kinect, you have one at home already.

Working with Depth Cameras
Cameras:
- MS Kinect
- Intel RealSense
- Structure Sensor from Occipital
Lessons learned:
- The cameras explored provide similar performance. The key to success is the software that processes the raw data (illustrated in the sketch below).
- Light conditions and materials can change results significantly.
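As an illustration of what "processing the raw data" can mean (a generic sketch, not Catchoom's pipeline): raw depth frames typically contain zero readings on dark or reflective surfaces, so a first software step is to measure how much of the frame is usable and to fill small holes, for example with a median filter.

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_depth_frame(depth_m, kernel=5):
    """Basic cleanup of a raw depth frame (metres, 0 = no reading).

    Small holes are filled from the local median; large holes, where the
    median is itself 0, stay empty. Also reports the fraction of valid
    pixels, a cheap indicator of difficult lighting or materials.
    """
    valid = depth_m > 0
    coverage = float(valid.mean())
    smoothed = median_filter(depth_m, size=kernel)
    cleaned = np.where(valid, depth_m, smoothed)
    return cleaned, coverage

# Usage: `frame` comes from whichever sensor SDK is in use.
# cleaned, coverage = clean_depth_frame(frame)
# A low coverage (the exact threshold is application-specific) suggests the
# frame should not be trusted, e.g. outdoors or on very dark parts.
```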

Regarding light: it basically interacts with the waveforms emitted and received by the camera. For instance, outdoors there is typically too much external light coming from the sun, and the emitter is not strong enough to overcome that.

Our suggestion is to consider depth sensors for indoors only, at least until computer vision based on RGB cameras compensates for that; of course, we're also working on that.

As for materials, their reflective properties, or transparency in the case of glass, can cause some trouble for the sensors.

Speaking about materials, did you ever wonder why there is so much AR in Iron Man? Well, it's because everything is red!

Catchoom

However, some other superheroes don't think the same.

Black sucks up all the light and poses serious trouble for depth sensors. So, if you're producing parts, please stick with grey (or pink, which would be even better).

So there's still work to be done with those cameras. From our experience, software makes the big difference in trying to overcome the challenges of light and materials.

Takeaways
- The future of training and maintenance is augmented.
- Catchoom has developed markerless 3D object recognition for industrial environments.
- Depth sensing still has some challenges with light and materials, but software can help a lot.


June 1st, 2016
Augmented World Expo US
#AWE2016
David Marimon
CEO & Co-founder
[email protected]
+34 654 906 753

Come visit our booth for a live demo!
