Real-time 3D Reconstruction using Kinect

http://jiakaizhang.com/project/real-time-3d-reconstruction/


Jiakai Zhang, Prof. Davi Geiger
New York University
July 2012 – September 2012

In order to reconstruct an indoor scene using a moving Kinect camera, I first need to align the point clouds of different frames, then integrate them and rebuild the surface, and finally implement the whole pipeline in CUDA to achieve real-time reconstruction.

More details are in my report.

3D Reconstruction using Kinect

Here is the pipeline:

Figure 1 Pipeline

1. Input raw data – depth image

Figure 2 shows the raw data from the Kinect: an RGB image and a depth image.

Figure 2 Raw data from the Kinect

The Kinect camera runs at 30 FPS, and the resolution of the depth image is 640 × 480.
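To work with the depth image geometrically, each depth pixel is back-projected into a 3D point (a vertex map) using the camera intrinsics. Below is a minimal CUDA sketch of this step; the intrinsic parameters fx, fy, cx, cy are placeholders for the Kinect calibration values, which this post does not give.

```cuda
// Back-project a depth image into a vertex map: one 3D point per pixel.
// Kinect depth is in millimeters; fx, fy, cx, cy are assumed intrinsics.
__global__ void depthToVertexMap(const unsigned short* depth, float3* vertexMap,
                                 int width, int height,
                                 float fx, float fy, float cx, float cy)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= width || v >= height) return;

    float z = depth[v * width + u] * 0.001f;   // mm -> meters; 0 = invalid pixel
    vertexMap[v * width + u] = (z > 0.0f)
        ? make_float3((u - cx) * z / fx, (v - cy) * z / fy, z)
        : make_float3(0.0f, 0.0f, 0.0f);
}

// Launch for a 640x480 image:
//   dim3 block(16, 16), grid((640 + 15) / 16, (480 + 15) / 16);
//   depthToVertexMap<<<grid, block>>>(d_depth, d_vertices, 640, 480, fx, fy, cx, cy);
```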

2. Noise reduction – bilateral filtering

The raw depth data from the Kinect is quite noisy, which makes it hard to use for camera tracking. If I apply Phong shading using the normal map, the noisy normal vectors make the object surfaces look irregular.

Figure 3 Raw normal map
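For reference, the normal map shown above can be computed from the vertex map by taking the cross product of neighboring vertex differences. This is one standard way to do it, not necessarily the exact code behind Figure 3:

```cuda
// Approximate the per-pixel surface normal as the cross product of the
// vertex differences toward the right and lower neighbors.
__global__ void computeNormalMap(const float3* vertexMap, float3* normalMap,
                                 int width, int height)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= width - 1 || v >= height - 1) return;

    float3 p  = vertexMap[v * width + u];
    float3 px = vertexMap[v * width + (u + 1)];
    float3 py = vertexMap[(v + 1) * width + u];

    float3 du = make_float3(px.x - p.x, px.y - p.y, px.z - p.z);
    float3 dv = make_float3(py.x - p.x, py.y - p.y, py.z - p.z);

    // Cross product du x dv, then normalize.
    float3 n = make_float3(du.y * dv.z - du.z * dv.y,
                           du.z * dv.x - du.x * dv.z,
                           du.x * dv.y - du.y * dv.x);
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    normalMap[v * width + u] = (len > 0.0f)
        ? make_float3(n.x / len, n.y / len, n.z / len)
        : make_float3(0.0f, 0.0f, 0.0f);
}
```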

Thus I implement bilateral filtering, which smooths the depth image and removes noise while still preserving edges. Figure 4 shows the results for different choices of the filter parameters.

Figure 4 Bilateral filtering results with different parameters
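As a sketch of how the filter works on the depth map: each output pixel is a weighted average of its neighborhood, with one Gaussian weight on spatial distance (sigma_s) and one on the depth difference (sigma_r), so large depth jumps at object edges get near-zero weight and are preserved. Parameter names and values here are illustrative, not the ones used for Figure 4.

```cuda
// Bilateral filter on a float depth map: Gaussian in both the spatial
// domain (sigma_s) and the depth/range domain (sigma_r).
__global__ void bilateralFilter(const float* in, float* out,
                                int width, int height,
                                int radius, float sigma_s, float sigma_r)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= width || v >= height) return;

    float center = in[v * width + u];
    float sum = 0.0f, wsum = 0.0f;

    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int x = u + dx, y = v + dy;
            if (x < 0 || x >= width || y < 0 || y >= height) continue;
            float d  = in[y * width + x];
            float ws = expf(-(dx * dx + dy * dy) / (2.0f * sigma_s * sigma_s));
            float wr = expf(-(d - center) * (d - center) / (2.0f * sigma_r * sigma_r));
            sum  += ws * wr * d;
            wsum += ws * wr;
        }
    }
    out[v * width + u] = (wsum > 0.0f) ? sum / wsum : center;
}
```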

3. Camera Pose Estimation – ICP

The input to ICP is the point clouds and normal vectors of consecutive frames. The output is a 6-DOF transformation matrix T, which gives the pose of the camera. Figure 5 shows the results before and after applying ICP; the two images show the same scene from two different viewpoints.

Figure 5 ICP result (before and after)
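A common way to solve each ICP iteration on the GPU is the small-angle point-to-plane linearization: every correspondence (source point p, target point q with normal n) contributes one row J = [p × n, n] and residual r = n · (q − p) to a 6 × 6 normal-equation system for the incremental camera motion. The sketch below accumulates that system with atomicAdd; the names and layout (upper-triangular A, right-hand side b) are my own choices, and a production version would use a parallel reduction instead.

```cuda
// Accumulate the point-to-plane normal equations (J^T J) x = J^T r, where
// x = (alpha, beta, gamma, tx, ty, tz) is the incremental camera motion.
__global__ void buildPointToPlaneSystem(const float3* src, const float3* dst,
                                        const float3* dstNormals, const int* valid,
                                        int n, float* A /* 21 upper-tri */,
                                        float* b /* 6 */)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n || !valid[i]) return;

    float3 p = src[i], q = dst[i], nv = dstNormals[i];
    float3 c = make_float3(p.y * nv.z - p.z * nv.y,   // p x n
                           p.z * nv.x - p.x * nv.z,
                           p.x * nv.y - p.y * nv.x);
    float J[6] = { c.x, c.y, c.z, nv.x, nv.y, nv.z };
    float r = nv.x * (q.x - p.x) + nv.y * (q.y - p.y) + nv.z * (q.z - p.z);

    int k = 0;
    for (int row = 0; row < 6; ++row) {
        for (int col = row; col < 6; ++col)
            atomicAdd(&A[k++], J[row] * J[col]);  // upper triangle of J^T J
        atomicAdd(&b[row], J[row] * r);           // J^T r
    }
}
```

The resulting 6 × 6 system is small enough to download and solve on the CPU (e.g., by Cholesky factorization) once per iteration.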

4. Update reconstruction – TSDF and ray casting

Once I know the relative position and rotation between frames, I can merge all the per-frame depth maps into one model. I use a truncated signed distance function (TSDF) to store the merged data. The TSDF is a 3D tensor, or cube, that represents the space being measured; the value stored in each voxel is the signed, truncated distance to the closest surface. If a voxel lies behind the surface as seen from the camera, its distance is negative, and when a voxel is too far from the surface, the distance is clamped to 1 or -1. This truncation keeps the representation efficient and prevents measurements of one surface from corrupting other, nearby surfaces.
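As a sketch of the update step, with illustrative parameters (VOL voxels per side, voxel size voxelSize, truncation distance mu, and a world-to-camera pose R, t from ICP): every voxel is projected into the current depth image, its truncated signed distance is computed, and the result is blended into a running weighted average in the style of Curless and Levoy.

```cuda
// TSDF integration: one thread per (x, y) column of the cube, looping over z.
__global__ void integrateTSDF(float* tsdf, float* weight,
                              const float* depth, int width, int height,
                              float fx, float fy, float cx, float cy,
                              const float* R, const float* t,  // world -> camera
                              int VOL, float voxelSize, float mu)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= VOL || y >= VOL) return;

    for (int z = 0; z < VOL; ++z) {
        // Voxel center in world coordinates (cube anchored at the origin).
        float3 pw = make_float3((x + 0.5f) * voxelSize,
                                (y + 0.5f) * voxelSize,
                                (z + 0.5f) * voxelSize);
        // Transform into the camera frame: pc = R * pw + t.
        float3 pc = make_float3(R[0]*pw.x + R[1]*pw.y + R[2]*pw.z + t[0],
                                R[3]*pw.x + R[4]*pw.y + R[5]*pw.z + t[1],
                                R[6]*pw.x + R[7]*pw.y + R[8]*pw.z + t[2]);
        if (pc.z <= 0.0f) continue;

        // Project onto the depth image and read the measured depth.
        int u = __float2int_rn(fx * pc.x / pc.z + cx);
        int v = __float2int_rn(fy * pc.y / pc.z + cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        float d = depth[v * width + u];
        if (d <= 0.0f) continue;              // invalid measurement

        // Signed distance along the viewing ray, truncated to [-1, 1]:
        // positive in front of the surface, negative behind it.
        float sdf = (d - pc.z) / mu;
        if (sdf < -1.0f) continue;            // too far behind the surface
        float f = fminf(1.0f, sdf);

        // Weighted running average of all observations so far.
        int idx = (z * VOL + y) * VOL + x;
        float w = weight[idx];
        tsdf[idx]   = (tsdf[idx] * w + f) / (w + 1.0f);
        weight[idx] = fminf(w + 1.0f, 128.0f);  // cap the weight
    }
}
```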

After updating the TSDF cube, I cast rays from a chosen camera position into the volume. Where the sign of the TSDF value changes along a ray, the ray has crossed a point on the surface, and the normal vector at that point is given by the gradient of the TSDF there. Figure 6 shows the result of ray casting.

Figure 6 Ray casting result
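A minimal ray-casting sketch under the same assumed cube layout: march each pixel's ray in fixed steps and report the first positive-to-negative zero crossing of the TSDF. Trilinear interpolation of the TSDF and the gradient-based normal computation described above are omitted for brevity; a nearest-voxel lookup is used instead.

```cuda
// Nearest-voxel TSDF lookup; outside the cube counts as free space (+1).
__device__ float sampleTSDF(const float* tsdf, int VOL, float voxelSize, float3 p)
{
    int x = __float2int_rd(p.x / voxelSize);
    int y = __float2int_rd(p.y / voxelSize);
    int z = __float2int_rd(p.z / voxelSize);
    if (x < 0 || x >= VOL || y < 0 || y >= VOL || z < 0 || z >= VOL)
        return 1.0f;
    return tsdf[(z * VOL + y) * VOL + x];
}

// March each pixel's ray; a +/- sign change in the TSDF marks the surface.
__global__ void raycastTSDF(const float* tsdf, int VOL, float voxelSize,
                            float3* surface, int width, int height,
                            float fx, float fy, float cx, float cy,
                            const float* Rcw, float3 camPos,  // camera -> world
                            float step, float maxDist)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= width || v >= height) return;

    // Pixel ray in camera coordinates, rotated into the world frame.
    float3 rc = make_float3((u - cx) / fx, (v - cy) / fy, 1.0f);
    float3 rd = make_float3(Rcw[0] * rc.x + Rcw[1] * rc.y + Rcw[2] * rc.z,
                            Rcw[3] * rc.x + Rcw[4] * rc.y + Rcw[5] * rc.z,
                            Rcw[6] * rc.x + Rcw[7] * rc.y + Rcw[8] * rc.z);
    float len = sqrtf(rd.x * rd.x + rd.y * rd.y + rd.z * rd.z);
    rd = make_float3(rd.x / len, rd.y / len, rd.z / len);

    float prev = 1.0f;
    for (float s = step; s < maxDist; s += step) {
        float3 p = make_float3(camPos.x + s * rd.x,
                               camPos.y + s * rd.y,
                               camPos.z + s * rd.z);
        float cur = sampleTSDF(tsdf, VOL, voxelSize, p);
        if (prev > 0.0f && cur < 0.0f) {
            // Interpolate between the last two samples for a sub-step hit.
            float tHit = (s - step) + step * prev / (prev - cur);
            surface[v * width + u] = make_float3(camPos.x + tHit * rd.x,
                                                 camPos.y + tHit * rd.y,
                                                 camPos.z + tHit * rd.z);
            return;
        }
        prev = cur;
    }
    surface[v * width + u] = make_float3(0.0f, 0.0f, 0.0f);  // no hit
}
```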

