Pose recognition using Microsoft Kinect depth sensor

January 1st, 2011


This month I’m working on building a pose recognition middleware; the whole setup is based on OpenCV.

I tried a simple way to do the gesture recognition: first use Haar-like features to find the face, then flood fill to get the connected area, which is assumed to be the body, then use some sort of AAM (http://en.wikipedia.org/wiki/Active_appearance_model), ASM (http://en.wikipedia.org/wiki/Active_shape_model), or just a simple integral stepping model to get each limb’s position and direction.
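To make that pipeline concrete, here is a rough sketch of the first two steps in Python, assuming OpenCV’s bundled frontal-face Haar cascade; the seed offset and flood-fill thresholds are illustrative guesses, not the values I actually used.

```python
# Sketch of the pipeline above, assuming OpenCV's Python bindings and the stock
# frontal-face Haar cascade. It runs on a single grayscale frame for simplicity;
# the flood-fill step would apply just as well to the Kinect depth map.
import cv2
import numpy as np

def segment_body(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Step 1: Haar-like feature cascade to locate the face.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Seed a point just below the face, assumed to fall on the torso.
    x, y, w, h = faces[0]
    seed = (x + w // 2, min(y + h + h // 2, gray.shape[0] - 1))

    # Step 2: flood fill from the seed; the connected region is taken to be the body.
    mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(gray, mask, seed, 0,
                  loDiff=(10, 10, 10), upDiff=(10, 10, 10), flags=flags)
    body_mask = mask[1:-1, 1:-1]

    # Step 3 would fit an ASM/AAM (or a simple stepping model) to this mask
    # to recover limb positions and directions.
    return body_mask
```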

I made a WinForms application to run the recognition analysis and a simple Flash coverflow to test the gesture control. The result is good, but sometimes the recognition fails; I’m still working on making the algorithm more reliable.
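The gesture commands have to get from the recognition process into the Flash coverflow somehow. One plausible bridge (a sketch, not necessarily what my demo used) is a local socket that the Flash side connects to with an XMLSocket, with each command sent as a null-terminated string:

```python
# Hypothetical bridge to a Flash coverflow: the Flash side would connect with an
# XMLSocket, which expects each message as a string terminated by a null byte.
# The port number and command names ("SWIPE_LEFT" / "SWIPE_RIGHT") are made up.
import socket

def serve_gestures(gesture_stream, port=8400):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", port))
    server.listen(1)
    conn, _ = server.accept()           # wait for the Flash movie to connect
    try:
        for gesture in gesture_stream:  # e.g. "SWIPE_LEFT", "SWIPE_RIGHT"
            conn.sendall(gesture.encode("ascii") + b"\x00")
    finally:
        conn.close()
```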

Update:
After Microsoft released the Kinect SDK, I gave up this project. They have a great team using machine learning and pattern recognition algorithms to classify human gestures based on a huge amount of data, which is certainly a better approach than mine. :)


