Interacting with Multi-Dimensional Gestures
Gesture-based interaction has become increasingly relevant since the introduction of touch-screen devices, where the user traces gestures directly on the screen (as opposed to using an indirect input device such as a mouse). In existing solutions, gestures are identified solely by their geometric shape. Unfortunately, this approach does not scale: it leads to complex gesture vocabularies that are hard to learn and hard to execute. MDGest investigates additional gesture characteristics (dimensions) for discriminating between gestures, such as dynamics, drawing direction and orientation, and distinctive drawing patterns, in order to propose a large vocabulary of gestures with simple shapes that remain usable from both motor and cognitive perspectives. A rough illustrative sketch of such multi-dimensional gesture descriptors is given below.
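The following sketch is only an illustration of the general idea, not the MDGest implementation: the names (GestureSample, GestureDescriptor, mean_speed, etc.) and the choice of dimensions combined with the shape are assumptions made for the example.

from dataclasses import dataclass
from enum import Enum


class Direction(Enum):
    CLOCKWISE = "clockwise"
    COUNTERCLOCKWISE = "counterclockwise"


@dataclass
class GestureSample:
    """One point of a traced gesture: screen position plus timestamp."""
    x: float
    y: float
    t: float  # seconds since the stroke started


@dataclass
class GestureDescriptor:
    """Combines several dimensions so that gestures sharing the same simple
    shape can still be told apart (hypothetical structure, for illustration)."""
    shape: str                 # geometric shape, e.g. "circle" or "line"
    direction: Direction       # drawing direction
    orientation_deg: float     # orientation of the drawn shape, in degrees
    mean_speed: float          # dynamics: average drawing speed, in px/s


def describe(samples: list[GestureSample], shape: str,
             direction: Direction, orientation_deg: float) -> GestureDescriptor:
    """Derive the dynamic dimension (mean speed) from the raw trace and bundle
    it with the other dimensions."""
    path_length = sum(
        ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        for a, b in zip(samples, samples[1:])
    )
    duration = samples[-1].t - samples[0].t
    mean_speed = path_length / duration if duration > 0 else 0.0
    return GestureDescriptor(shape, direction, orientation_deg, mean_speed)

In such a scheme, two circles drawn clockwise and counter-clockwise, or drawn slowly and quickly, would map to distinct descriptors even though their geometric shape is identical.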
Research activities
Participants