Just noticed an article on TechCrunch about the Kinetic Space Framework project; check out the video they linked to below. According to the write-up, the software can detect gestures subtle enough to capture sign language. The video doesn't show off that particular feature, but if it works at a practical level it opens the door to an intriguing class of human-computer interfaces. Now if I can only figure out how to get the information it generates out through OSC . . .
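For what it's worth, OSC is a simple enough wire format that you could hand-roll the messages yourself if the framework exposes gesture events in any programmatic way. Below is a minimal sketch in Python of encoding an OSC message per the OSC 1.0 spec (null-terminated strings padded to 4-byte boundaries, big-endian int32/float32 arguments); the `/gesture` address and the argument values are purely hypothetical, not anything Kinetic Space actually emits.

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message (int, float, and str arguments only)."""
    tags = ","          # OSC type-tag strings always start with a comma
    payload = b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("bool arguments not supported in this sketch")
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# Shipping it to a listener (e.g. Max/MSP or Pd on port 9000) would just be
# a UDP send -- hypothetical address and payload:
#
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(osc_message("/gesture", "wave", 0.87), ("127.0.0.1", 9000))
```

Any OSC-aware host (Max, Pd, SuperCollider, Processing with oscP5) should accept datagrams built this way, so the missing piece is really just getting at the recognition events on the framework side.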