TEXT, SPEECH AND LANGUAGE TECHNOLOGY, Volume 19
This book examines multimodality from many perspectives, offering the advanced student or researcher a current survey of theories of multimodal communication between people as well as a review of multimodal input and output in technical systems. Chapters on human-human multimodal communication cover speech-gesture systems, the semiotics of gesture, the structure and functions of face-to-face communication, emotional relations and intercultural variation, and computer-mediated communication for people with disabilities. Chapters on human-machine communication and interfaces cover the science and technology of creating talking faces, methods for developing animated interface agents in intelligent multimedia systems, and the integration of multimodal input and output in the computer interface. The book also covers computer processing and understanding of signal and symbol input from speech, text, and visual images.
Preface. Contributors. Introduction.
Bodily Communication: Dimensions of Expression and Content; J. Allwood.
Dynamic Imagery in Speech and Gesture; D. McNeill, et al.
Multimodal Speech Perception: A Paradigm for Speech Science; D.W. Massaro.
Multimodal Interaction and People with Disabilities; A.D.N. Edwards.
Multimodality in Language and Speech Systems - From Theory to Design Support Tool; N.O. Bernsen.
Developing Intelligent Multimedia Applications; T. Brøndsted, et al.
Natural Turn-Taking Needs No Manual: Computational Theory and Model, from Perception to Action; K.R. Thórisson.
Speech and Gestures for Talking Faces in Conversational Dialogue Systems; B. Granström, et al.