Innfarn Yoo, 05.08.2017
SPARSE VOLUMETRIC REPRESENTATION OF TIME-LAPSE POINT CLOUD
2
AGENDA
Introduction
Previous Work
Method
Result
Future Work
3
INTRODUCTION
External point cloud data captured by a Kespry drone
- Drone-captured photogrammetric point cloud
- Captured every 2-3 days
- 235 captures, 190 GB total (avg. 810 MB)
- Each capture is 300 MB ~ 1.9 GB
- 10 ~ 50 million points
- Resolution is 10 ~ 20 cm
- Some noise
Time-lapse Point Cloud Dataset
4
5
INTRODUCTION
Capturing internal point cloud data
- Laser Scan (LIDAR) Point Cloud
- Captured every 2 weeks
- 23 captures, 510 GB (avg. 22 GB)
- Each capture is 13 ~ 45 GB
- 0.9 ~ 1.9 billion points
- Resolution down to ~1 mm
- Accurate (some noise near glass)
Time-lapse Point Cloud Dataset
6
7
INTRODUCTION
- Drone-captured point cloud
- Dynamic loading and rendering of many small point cloud captures
- 10 ~ 30 million points per capture
- We already showed our methods at GTC 2016
- Laser scan point cloud data
- Around 1.7 billion points per capture
- 1.7 billion points × 16 bytes (float x, y, z and color r, g, b, a) ≈ 25.9 GB
- NVIDIA Quadro P6000 has 24 GB of GDDR5X memory, but that is not enough
Problems
Not the topic of this presentation
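A quick sanity check of the memory arithmetic above. The exact point count below is an assumption chosen to match the slide's "around 1.7 billion" points and its 25.9 GB (binary GiB) figure:

```python
num_points = 1.74e9          # assumption: "around 1.7 billion" points per capture
bytes_per_point = 3 * 4 + 4  # float32 x, y, z (12 B) + one byte each for r, g, b, a
total_gib = num_points * bytes_per_point / 2**30
print(f"{total_gib:.1f} GiB")  # ~25.9 GiB, exceeding the P6000's 24 GB
```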
8
INTRODUCTION
- Visualize the laser scan dataset in real-time
- Compactly store the time-lapse laser scan dataset
- Provide more spatial information
- Primitive conversions
- Convert to a machine-learning-friendly dataset
- Fill gaps between points
Goals
Sparse Volumes
9
AGENDA
Introduction
Previous Work
Method
Result
Future Work
10
PREVIOUS WORK
- Point Cloud VR
- Time-Lapse VR Rendering
- Octree-based dynamic loading and rendering (LOD)
- Achieved 90 fps per eye
- Showed the entire dataset in the VR Village
GTC 2016
Rendering the point cloud for both eyes, Markus Schuetz
11
PREVIOUS WORK
- Progressive Blue-Noise Point Cloud
- Generating progressive blue-noise point cloud
- Buffer management using OpenGL 4.5 extension
- Dynamic loading and rendering of massive scale point cloud
GTC 2016
12
PREVIOUS WORK
Drone Captured Time-lapse Point Cloud Visualization
13
AGENDA
Introduction
Previous Work
Method
Result
Future Work
14
TIME-LAPSE LASER SCAN POINT CLOUD
- Time-lapse laser scan point cloud
- Notoriously large data size
- Captures the same space at different times
- Some areas have higher density than others
Pros & cons
15
16
SPARSE VOLUMETRIC REPRESENTATION
- Sparse Volumetric Representation
- Data compression
- Naturally represented by octree structure
- Voxel allows several algorithms
- e.g., Surface Extraction, Feature Detection, Object Detection
- Gives spatial relationship between voxels
- Can access neighbor voxels
Advantages of Sparse Volume
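The neighbor-access advantage above can be sketched minimally. The deck stores voxels in an octree; the hash set here is only an illustrative stand-in for the sparse structure:

```python
def neighbors(v):
    """6-connected neighbor indices of voxel v = (x, y, z)."""
    x, y, z = v
    return [(x + 1, y, z), (x - 1, y, z),
            (x, y + 1, z), (x, y - 1, z),
            (x, y, z + 1), (x, y, z - 1)]

# Sparse volume: only activated voxel indices are stored.
active = {(0, 0, 0), (1, 0, 0), (0, 1, 0)}

# Spatial relationship: query which neighbors of a voxel are activated.
active_neighbors = [n for n in neighbors((0, 0, 0)) if n in active]
print(sorted(active_neighbors))  # [(0, 1, 0), (1, 0, 0)]
```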
17
CREATING SPARSE VOLUME
Offline Processes
Input laser scan files (format: E57, LAS, or LAZ) → Calculate Bounding Box → Generate Octree & Splat Points → Voxelate & Save Voxels → Merge & Compress Voxels
18
CREATING SPARSE VOLUME
- Make the octree a power-of-2 cube
- All leaves have the same depth, so every leaf node has the same volume
- Subdivide each leaf node into small subsets
- e.g., 1 cm x 1 cm x 1 cm voxels
- Calculate whether points fall inside a voxel
- If a point hits a voxel, that voxel is activated (sparse)
Bounding Box Octree Voxels
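The voxelation step above can be sketched as follows: quantize each point to its containing voxel index and activate only the voxels that are hit (values and helper name are illustrative):

```python
import math

def voxelize(points, origin, voxel_size=0.01):
    """Activate the 1 cm voxel containing each point; only hits are stored (sparse)."""
    active = set()
    for x, y, z in points:
        active.add((math.floor((x - origin[0]) / voxel_size),
                    math.floor((y - origin[1]) / voxel_size),
                    math.floor((z - origin[2]) / voxel_size)))
    return active

# The first two points land in the same 1 cm voxel, so only two voxels activate.
pts = [(0.004, 0.002, 0.001), (0.006, 0.003, 0.002), (0.013, 0.0, 0.0)]
print(sorted(voxelize(pts, origin=(0.0, 0.0, 0.0))))  # [(0, 0, 0), (1, 0, 0)]
```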
19
CREATING SPARSE VOLUME
- Activated voxels are represented by only a few index bits (x, y, z)
- 202.42 m x 226.53 m x 74.67 m area
- Voxelated at 1 cm x 1 cm x 1 cm
- Only 43 bits are required to store one voxel's x, y, & z index
Octree Voxels Compression
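The 43-bit figure above checks out: each axis needs just enough bits to index its voxel count, and the three indices pack into one integer (the helper name and packing layout are illustrative, not the deck's exact encoding):

```python
import math

def index_bits(extent_m, voxel_m=0.01):
    """Bits needed to index one axis voxelated at voxel_m resolution."""
    return math.ceil(math.log2(math.ceil(extent_m / voxel_m)))

bits = [index_bits(e) for e in (202.42, 226.53, 74.67)]
print(bits, sum(bits))  # [15, 15, 13] 43

# Pack one (x, y, z) voxel index into a single 43-bit integer and unpack it.
bx, by, bz = bits
ix, iy, iz = 12345, 6789, 321
packed = (ix << (by + bz)) | (iy << bz) | iz
unpacked = (packed >> (by + bz),
            (packed >> bz) & ((1 << by) - 1),
            packed & ((1 << bz) - 1))
print(unpacked)  # (12345, 6789, 321)
```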
20
OCTREE-BASED SPARSE VOLUMES
- Save the time-lapse point cloud
- If a voxel is activated, only its colors are saved
- System memory is not enough (out-of-core design)
- Process each laser scan
- Save to disk
- Merge and compress on disk
Merge & Compress Voxels
21
SPARSE VOLUMETRIC REPRESENTATION
Voxelization
22
RENDERING
Progressive Rendering
- More than 1 billion points or voxels is too much to render in real-time
- To keep 60 fps, 80 million points per frame is the maximum on an NVIDIA Quadro P6000
- To see all voxels or points, we use progressive rendering
- Usually used for physically-based rendering
23
RENDERING
1. Do not clear the depth and color framebuffers every frame
   - Only clear when the camera moves or a rendering option changes
2. Plan a budget of 80 million points per frame
   - Calculate view-frustum & octree node distances
   - Calculate a probability (visibility) per node based on distance
3. Consecutively render additional points per frame per node
   - When no points remain in a node, give the remaining point budget to a farther node
4. Copy the framebuffer to the back buffer every frame
Progressive Rendering
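The budget-with-spillover logic in the steps above can be sketched as follows (function name and node representation are illustrative, not the presentation's actual code):

```python
def allocate_budget(nodes, frame_budget=80_000_000):
    """Hand out the per-frame point budget to visible octree nodes, nearest first.
    `nodes` is a list of (distance, points_remaining); budget a near node cannot
    use spills over to farther nodes, as in step 3 above."""
    remaining = frame_budget
    plan = []
    for dist, pts in sorted(nodes):
        take = min(pts, remaining)
        plan.append(take)
        remaining -= take
        if remaining == 0:
            break
    return plan

# Nearest node (distance 5.0) is fully served; the leftover 20M spills to the next.
print(allocate_budget([(10.0, 50_000_000), (5.0, 60_000_000), (20.0, 40_000_000)]))
# [60000000, 20000000]
```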
24
25
26
RENDERING
- Dynamic loading
- Plan how many points can be loaded per second (depending on disk speed)
- Calculate a probability per node based on its distance in space & time
- The probability that its points will need to be rendered in a future frame
- Load points in a separate thread
- A sparse buffer allows handling the points in a virtual linear address space
- Physical GPU memory is committed only when needed
- Load or unload blocks of the sparse buffer based on spatiotemporal location
Thread & Sparse Buffer
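The spatiotemporal loading decision above can be sketched like this. The exponential decay, its scales, and all names are illustrative assumptions; the deck does not specify the actual scoring function:

```python
import math

def load_priority(space_dist, time_dist, s_scale=100.0, t_scale=3.0):
    """Probability-like score that a node's points are needed soon, decaying with
    spatial distance (meters) and temporal distance (captures away from the
    currently viewed timestep). Scales are illustrative, not from the deck."""
    return math.exp(-space_dist / s_scale) * math.exp(-time_dist / t_scale)

def pick_loads(nodes, max_points_per_sec):
    """Queue node loads, highest priority first, within the per-second disk budget."""
    queue, budget = [], max_points_per_sec
    ranked = sorted(nodes, key=lambda n: -load_priority(n["sdist"], n["tdist"]))
    for n in ranked:
        if n["points"] <= budget:
            queue.append(n["id"])
            budget -= n["points"]
    return queue

nodes = [
    {"id": "near_now",  "sdist": 10.0,  "tdist": 0, "points": 4_000_000},
    {"id": "far_now",   "sdist": 500.0, "tdist": 0, "points": 4_000_000},
    {"id": "near_past", "sdist": 10.0,  "tdist": 5, "points": 4_000_000},
]
# An 8M points/s budget admits the two highest-priority nodes.
print(pick_loads(nodes, max_points_per_sec=8_000_000))  # ['near_now', 'near_past']
```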
27
AGENDA
Introduction
Previous Work
Method
Result
Future Work
28
RESULTS
[Chart: Number of Points & Voxels (voxelated result) by laser scan capture date, 5/8/2016 – 3/8/2017. Y-axis: Number of Points & Voxels in billions (0 – 2.5). Series: Num Points, Num Voxels.]
29
AGGREGATE RESULTS
[Chart: Point to Voxel Conversion Size Comparison for Points, Voxels (1 cm), and Remove Duplicated Voxels. Axes: 0 – 400 and 0 – 30. Units: Number of Objects (billion) and File Size (GB).]
30
RESULT
- Sparse voxel representation alleviates the notoriously large data size problem
- Preprocessing takes a long time
- Several hours to process 400 GB of laser scan data
- Progressive rendering allows viewing the entire dataset with real-time control
Overall
31
AGENDA
Introduction
Previous Work
Method
Result & Demo
Future Work
32
FUTURE WORK
- NVIDIA’s GVDB
- Similar to OpenVDB, but a CUDA-based VDB
- Our dataset is currently larger than GVDB's limit
- We have 3.5 billion voxels
- Later we will cut subsets of the voxels and process them in GVDB
GVDB
Partially converted to GVDB and rendered using NVIDIA OptiX
Splatting 10 GB of points into GVDB
33
FUTURE WORK
- Integration into NVIDIA’s new ProViz viewer and editor
- The ProViz team is developing a new viewer and editor
- This work will be integrated into it
- Object detection from volumetric point cloud
- Detect objects using machine learning in 3D space
ProViz tool & Machine Learning
34
NVIDIA’S NEW BUILDING