In recent years, gesture recognition has become essential in many vision-based systems for wireless interfacing between humans and machines. The rising interest in this research coincides with the considerable attention given to real-time applications that improve human-machine interaction, and the implementation presented here targets real-time use. The presented gesture recognition system exploits the relation between the divine proportion and human body structure to approximate the lengths of various body parts.
[...] Figure 1 Divine Proportion in Human Body
III. ALGORITHM STEPS
The algorithm can be divided into four major steps, namely frame acquisition, segmentation, feature extraction, and recognition. Each module is explained in the following sections.
A. Frame Acquisition
All frames, whether for real-time or offline recognition, are captured using a webcam at either 320x240 or 640x480 resolution. Each image consists of three channels, depending on the colour space returned by the camera. The only constraint imposed is that the whole human body must be within the camera's field of view and properly visible. [...]
[...] Figure 10 Computation of Elbow Points. Feature Computation: In this project the basic feature vector consists of six angles, as shown in Figure 8. Figure 9 Flow Diagram
IV. IMPLEMENTATION AND RESULTS
The code was implemented in C using the OpenCV library. The most difficult part of the project was computing the feature vector, from which all information regarding the gesture is derived. Lighting conditions and the colour levels of the input video also affect the performance of the algorithm. [...]
[...] These are the angles of the axes about which the object has minimum and maximum moments of inertia, respectively. Determining which solution of the quadratic equation minimizes (or maximizes) the moment of inertia requires substituting it back into the expression for M. Orientation is shown in Figure 5. The angle theta_min that minimizes M is the direction of the axis of least inertia for the object. If theta_min is unique, then the line in the direction theta_min is the principal axis of inertia for the object. [...]
[...] We assume that the segmented image is as shown in Figure 3 to make the feature extraction module comprehensible: Figure 3 Input image and Segmentation. Segmentation of the hands and head can be done either by skin detection while the other body parts are covered, or by using hand gloves and a mask on the face. Both methods are explained in detail in the following sections. Using Skin Detection: A wide variety of colour spaces are available for detecting skin in images; RGB, normalised RGB, HSV, TSL, and YCbCr colour spaces can all be used for this purpose. [...]
[...] The whole system can be described by the following flow diagram: Output Image Figure 12(a) Sample vector; Input Image, Output Image Figure 12(b) Sample vector.
CONCLUSION
Within its stated limitations, a real-time human gesture recognition system has been developed using only a webcam. The system is independent of image resolution and rotation. Because the features are relative, the method is robust and simple. It can be used in various applications such as human-computer interaction, robotics control, and traffic-light control.
REFERENCES
D.J. Sturman, D. Zeltzer, A survey of glove-based input, IEEE Comput. [...]