Key Part Detection Using Boundary-Regional Codebook
This thesis addresses human body part detection for the purpose of modeling complex human activities. The goal of this research is to detect and segment key human body parts, such as the head, torso, and arms, from cluttered backgrounds. These are fundamental research issues that have attracted intensive attention in the computer vision community because of their wide applications; meanwhile, they remain among the most challenging, largely due to the ubiquitous visual ambiguities in images and videos. We propose a novel approach to these challenges within the codebook-based framework via a new feature, boundary-region pairs (BRP). The codebook is a relatively new technique for object recognition and detection that did not receive much attention in computer vision until recent years. Shape-based approaches have been explored extensively, as have region-based methods. However, edges alone are not descriptive enough for human detection in a general scene, and regions, normally represented by patches, contain a great deal of redundant information, much of which is likely to be noise. This motivates our approach: to simultaneously extract and exploit the complementary visual cues of region and edge. The key contribution of our research is that our detection results are the parts themselves (the boundary and regions of each part), which distinguishes our approach from others that provide only a bounding box. Our outputs are more suitable for subsequent feature extraction, such as extracting the motion of each body part; these per-part motion cues can be crucial for modeling complex human activities. The effectiveness of our method is evaluated on the CMU Motion of Body (MoBo) dataset.
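To make the pairing of complementary edge and region cues concrete, the following is a minimal, hypothetical sketch (assuming NumPy; the function name, the per-pixel pairing rule, and the integer-label region map are illustrative simplifications, not the actual BRP implementation, which operates on contour fragments and richer region descriptors):

```python
import numpy as np

def boundary_region_pairs(edge_map, region_labels):
    """Pair each boundary (edge) pixel with the label of every region it
    borders, yielding a list of (pixel, region_label) pairs.

    edge_map      : 2-D boolean array, True at edge pixels
    region_labels : 2-D integer array of region labels for non-edge pixels
    """
    pairs = []
    h, w = edge_map.shape
    ys, xs = np.nonzero(edge_map)
    for y, x in zip(ys, xs):
        # Collect labels of non-edge 4-neighbours of this boundary pixel.
        adjacent = []
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not edge_map[ny, nx]:
                adjacent.append(int(region_labels[ny, nx]))
        # One pair per distinct adjacent region: edge and region cues
        # are kept together rather than used in isolation.
        for label in set(adjacent):
            pairs.append(((y, x), label))
    return pairs
```

For example, a vertical edge separating two uniform regions yields, for each edge pixel, one pair with the region on its left and one with the region on its right; such pairs would then be quantized into codebook entries.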
This thesis is organized as follows: Chapter 1 states the problem we address and the motivations for key body part detection; Chapter 2 reviews related work; Chapter 3 gives the detailed implementation of our approach; finally, Chapter 4 presents the experimental results and some discussion.