W3C MMI
The Multimodal Interaction Activity is a W3C initiative that aims to provide means (mostly XML-based specifications) for supporting multimodal interaction scenarios on the Web.
The activity was launched in 2002. The Multimodal Interaction Working Group has already produced:
- the Multimodal Interaction Framework, which provides a general framework for multimodal interaction and identifies the kinds of markup languages being considered;
- a set of use cases;
- a set of Core Requirements, which describe the fundamental requirements to be addressed by future specifications.
The devices considered include mobile phones, automotive telematics systems, and PCs connected to the Web.
Current Work
The following XML specifications (currently in advanced Working Draft state) already address various parts of the Core Requirements:
- EMMA (Extensible MultiModal Annotation): a data exchange format for the interface between input processors and interaction management systems. It defines the means for recognizers to annotate application-specific data with information such as confidence scores, time stamps, input mode (e.g. keystrokes, speech or pen), alternative recognition hypotheses, and partial recognition results; an illustrative fragment appears after this list.
- InkML: an XML data exchange format for digital ink traces entered with an electronic pen or stylus as part of a multimodal system (a second sketch after this list shows a minimal InkML document).
- Multimodal Architecture: a loosely coupled architecture for the Multimodal Interaction Framework that focuses on providing a general means for components to communicate with each other, plus basic infrastructure for application control and platform services.
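To make EMMA and InkML more concrete, two short sketches follow. The first shows roughly what an EMMA result for a spoken input could look like: the emma:emma, emma:one-of and emma:interpretation elements and the emma:confidence, emma:tokens, emma:medium and emma:mode annotations come from the EMMA drafts, while the application payload (the destination element) and all concrete values are invented for illustration.

  <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
    <!-- two alternative recognition hypotheses for one utterance (illustrative values only) -->
    <emma:one-of id="r1" emma:medium="acoustic" emma:mode="voice">
      <emma:interpretation id="int1" emma:confidence="0.75"
                           emma:tokens="flights to boston">
        <destination>Boston</destination>
      </emma:interpretation>
      <emma:interpretation id="int2" emma:confidence="0.68"
                           emma:tokens="flights to austin">
        <destination>Austin</destination>
      </emma:interpretation>
    </emma:one-of>
  </emma:emma>

The second sketch is a minimal InkML document: a single trace captured from a pen or stylus, given as a comma-separated list of x y sample points. The element names come from the InkML drafts; the coordinates are arbitrary.

  <ink xmlns="http://www.w3.org/2003/InkML">
    <!-- one pen stroke, sampled as successive x y points (arbitrary coordinates) -->
    <trace>
      10 0, 9 14, 8 28, 7 42, 6 56, 6 70, 8 84, 8 98, 8 112
    </trace>
  </ink>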
See also
- Multimodal Interaction
- VoiceXML - the W3C's standard XML format for specifying interactive voice dialogues between a human and a computer.
- SSML - Speech Synthesis Markup Language
- CCXML - Call Control eXtensible Markup Language
- SCXML - an XML language that provides a generic state-machine based execution environment (a small sketch follows this list)
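As a rough illustration of SCXML's state-machine model, the fragment below declares two states and the events that move the machine between them. The scxml, state and transition elements and the initial attribute come from the SCXML drafts; the state and event names are invented for this example.

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
    <state id="idle">
      <!-- leave the idle state when the application raises user.start (hypothetical event name) -->
      <transition event="user.start" target="listening"/>
    </state>
    <state id="listening">
      <transition event="user.done" target="idle"/>
    </state>
  </scxml>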
Useful Links
- Multimodal Interaction Activity on the W3C site: http://www.w3.org/2002/mmi/
- The W3C Multimodal Architecture, Part 1: Overview and challenges, on IBM developerWorks: http://www-128.ibm.com/developerworks/web/library/wa-multimodarch1/
- OpenInterface Platform (OpenInterface Multimodal Interaction Designer Framework): http://www.openinterface.org/