Multimodal interfaces with voice and gesture input
Speech and gesture have complementary strengths and weaknesses; combined, they create a synergy in which each modality compensates for the weaknesses of the other. We believe that a multimodal system intertwining speech and gesture must start from a different foundation than systems based solely on pen input. To provide a basis for the design of such a system, we examined research in other disciplines, including anthropology and linguistics. This investigation yielded a taxonomy that guided our incorporation of gestures whose meanings are largely transparent to users. This paper describes the taxonomy and gives examples of its application to pen input systems.
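The paper's taxonomy itself is not reproduced in this record, but the complementarity claim can be illustrated with a toy fusion step in the spirit of "put-that-there" interaction: speech supplies the command verb while pointing gestures resolve the deictic references ("that", "there") that speech alone leaves ambiguous. Everything below (`GestureEvent`, `fuse`, the deictic word set) is an illustrative sketch, not an API from the paper.

```python
from dataclasses import dataclass

# Hypothetical gesture record; the paper's taxonomy is richer than this.
@dataclass
class GestureEvent:
    kind: str        # e.g. "point", "circle"
    target: str      # object the gesture selects
    timestamp: float # seconds since utterance start

DEICTIC = {"this", "that", "there", "it"}

def fuse(utterance: str, gestures: list[GestureEvent]) -> str:
    """Resolve each deictic word in the utterance using the next
    unconsumed pointing gesture, taken in temporal order."""
    points = [g for g in sorted(gestures, key=lambda g: g.timestamp)
              if g.kind == "point"]
    resolved = []
    for word in utterance.split():
        if word.lower().rstrip(".,") in DEICTIC and points:
            resolved.append(points.pop(0).target)
        else:
            resolved.append(word)
    return " ".join(resolved)
```

For example, `fuse("put that there", [GestureEvent("point", "lamp", 0.4), GestureEvent("point", "table", 0.9)])` yields `"put lamp table"`: neither modality alone carries the full command, which is the synergy the abstract describes.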
- Research Organization:
- Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
- Sponsoring Organization:
- National Science Foundation, Washington, DC (United States); Texas Univ., Austin, TX (United States)
- DOE Contract Number:
- W-7405-ENG-48
- OSTI ID:
- 105888
- Report Number(s):
- UCRL-JC-121501; CONF-9510197-1; ON: DE95015989; CNN: Grant IRI-9213823
- Resource Relation:
- Conference: Institute of Electrical and Electronics Engineers international conference on systems, man and cybernetics, Vancouver (Canada), 22-25 Oct 1995; Other Information: PBD: 20 Jul 1995
- Country of Publication:
- United States
- Language:
- English