Chris Burbridge and Lorenzo Riano from the University of Ulster Intelligent Systems Research Centre used the Kinect to turn their robot into a mobile 3D person scanner. A Kinect is great for collecting 3D data, but sticking it on wheels is even better because you can collect data from multiple points of view and construct full 3D models.
Their demo uses the Kinect at both the skeleton-tracking and 3D point cloud levels. The OpenNI skeleton tracker identifies the position of the person in the room, and the 3D point cloud data is then used to build up the full 3D scan. Once all of the point clouds are collected, they use PCL to merge them into a unified 3D model.
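The post doesn't include their registration code, but a minimal sketch of the general idea, aligning each new Kinect cloud to the growing model with PCL's ICP and accumulating the result, might look like this (the function names, the 1 cm voxel size, and the 5 cm correspondence distance are illustrative assumptions, not values from their system):

```cpp
// Sketch: merge successive Kinect scans into one model using PCL ICP.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>

typedef pcl::PointXYZRGB PointT;
typedef pcl::PointCloud<PointT> Cloud;

// Downsample a cloud so ICP converges faster and the merged model stays small.
Cloud::Ptr downsample(const Cloud::Ptr &in, float leaf = 0.01f)  // leaf size assumed
{
  Cloud::Ptr out(new Cloud);
  pcl::VoxelGrid<PointT> grid;
  grid.setLeafSize(leaf, leaf, leaf);
  grid.setInputCloud(in);
  grid.filter(*out);
  return out;
}

// Align a new scan to the model built so far and append the aligned points.
void addScan(Cloud::Ptr &model, const Cloud::Ptr &scan)
{
  if (model->empty()) { *model = *scan; return; }

  pcl::IterativeClosestPoint<PointT, PointT> icp;
  icp.setInputSource(downsample(scan));
  icp.setInputTarget(downsample(model));
  icp.setMaxCorrespondenceDistance(0.05);  // 5 cm gating, an assumed value
  icp.setMaximumIterations(50);

  Cloud aligned;
  icp.align(aligned);
  if (icp.hasConverged())
    *model += aligned;  // accumulate the registered scan into the model
}
```

In practice the robot's odometry and the arm's forward kinematics would give ICP a good initial guess for each viewpoint, which is what makes a mobile, arm-mounted Kinect so well suited to this kind of scanning.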
The UU robot is a custom MetraLabs Scitos G5 mobile robot with a Kinect mounted at the end of a Schunk 7 DOF manipulator, but their code should be adaptable to other robot platforms.