Kinect Process: methods for processing Kinect Data

For a recent “micro-theater” project I needed to turn actors into digital characters with variable designs. I decided to use the Kinect 2's depth frame to extract each actor's 3D point cloud, and then use this data as the base for my CharacterGenerator class.

I used libcinder because of its performance, efficiency and clean API. Two additional Cinder Blocks were needed: Cinder-KCB2 to access Kinect data, and Cinder-OpenCV3 for blob detection. I first wrote this small class that keeps all the 2D data associated with each blob (body) the Kinect sees, and calculates the necessary 3D data such as the point cloud and the blob outline in 3D:

typedef std::shared_ptr<class Body> BodyRef;

class Body {
public:
	static BodyRef create() { return std::make_shared<Body>(); }
	Body() {}
	~Body() {}

	// calculates the camera-space data; the DeviceRef and the 16-bit depth
	// channel are needed to be able to use the mapDepthToCamera() method
	void calc3dData( const Kinect2::DeviceRef& aDevice, const ci::Channel16uRef& aDepthChannel );

public:
	// 3d
	std::vector<ci::vec3> mPointCloud; // a point cloud is basically a vector of 3D vectors
	std::vector<ci::vec3> mOutline;    // the 3D outline, kept in another vector
	ci::vec3 mCenter;                  // the blob's centroid translated to 3D space
	// 2d
	std::vector<cv::Point> mContour;
	cv::Moments mMoments;
	ci::PolyLine2 mPolyLine;
	ci::Rectf mBoundingRect;
	ci::vec2 mCentroid;
	std::vector<ci::ivec2> mScreenPositions; // all the pixels that are inside the contour in 2D space
};

Using the Kinect's coordinate mapping we can implement our Body class's method:

void Body::calc3dData( const Kinect2::DeviceRef& aDevice, const ci::Channel16uRef& aDepthChannel )
{
	// calc the point cloud
	if ( ! mScreenPositions.empty() ) {
		mPointCloud = aDevice->mapDepthToCamera( mScreenPositions, aDepthChannel );
	}

	// calc the 3D outline
	// we need ivec2 instead of vec2 for the mapDepthToCamera() method
	std::vector<ci::ivec2> outline2d;
	for ( const auto& point : mPolyLine.getPoints() ) {
		outline2d.push_back( ci::ivec2( point ) );
	}
	if ( ! outline2d.empty() ) {
		mOutline = aDevice->mapDepthToCamera( outline2d, aDepthChannel );
	}

	// calc the center
	mCenter = aDevice->mapDepthToCamera( ci::ivec2( mCentroid ), aDepthChannel );
}
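The mapDepthToCamera() call hides the actual projection math, but the underlying idea is the standard pinhole back-projection: scale the pixel's offset from the principal point by the depth. A library-free sketch of that idea, where fx, fy, cx, cy are made-up intrinsics for illustration (the real values live inside the Kinect SDK's coordinate mapper):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// minimal 3-component vector standing in for ci::vec3
struct Vec3 { float x, y, z; };

// Pinhole back-projection: turn a depth pixel into a camera-space point.
// (px, py) is the pixel, depthMm the 16-bit depth value in millimeters;
// fx, fy, cx, cy are assumed camera intrinsics, not the Kinect 2's real ones.
Vec3 depthPixelToCamera( int px, int py, uint16_t depthMm,
                         float fx, float fy, float cx, float cy )
{
	float z = depthMm / 1000.0f; // depth arrives in millimeters, camera space is in meters
	return Vec3{ ( px - cx ) / fx * z,
	             ( py - cy ) / fy * z,
	             z };
}
```

A pixel at the principal point maps straight down the optical axis, so only its z component is non-zero; everything else fans out proportionally to depth.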

That looks right. But before we can make Body instances, we need to process the Kinect's data a little: first to prepare the image, and then to find blobs and, most importantly, all the pixels inside those blobs (referred to as mScreenPositions in the Body class).
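The core of that last step can be illustrated without any libraries: threshold the 16-bit depth frame into a foreground band and record every pixel that falls inside it. collectScreenPositions and the depth-band values below are hypothetical names and numbers for illustration only; in the real pipeline OpenCV separates the individual blobs (cv::findContours) and the pixels are gathered from a filled contour mask rather than from a global threshold:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Collect the screen positions of every depth pixel inside a [near, far]
// band (in millimeters). A stand-in for the per-blob pixel gathering that
// OpenCV's contour mask performs in the actual code.
std::vector<std::pair<int, int>> collectScreenPositions(
	const std::vector<uint16_t>& depth, int width, int height,
	uint16_t nearMm, uint16_t farMm )
{
	std::vector<std::pair<int, int>> positions;
	for ( int y = 0; y < height; ++y ) {
		for ( int x = 0; x < width; ++x ) {
			uint16_t d = depth[y * width + x];
			if ( d >= nearMm && d <= farMm ) // inside the depth band -> foreground
				positions.emplace_back( x, y );
		}
	}
	return positions;
}
```

The resulting (x, y) pairs play the role of mScreenPositions: they are exactly what gets handed to mapDepthToCamera() to build the point cloud.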

FIND THE FULL GIST HERE!
