|-
! [[File:Microphone.svg|180px|left|thumb]] || Voice Interface || Martin Abente ||align=left valign=top|
Speech-recognition technologies are interaction mechanisms that have evolved from "alternative" to "extended". Proof of this is their proliferation across a wide range of domains, from smartphone assistants and medical-record transcription to voice control in smart cars and TVs, among many others.

In this regard, not much has been seen in the education domain. This could be due to the fact that there is still missing glue between speech-recognition technologies and educational content developers. This project is about filling that gap within the Sugar Learning Platform; to do so, the following objectives must be fulfilled:

(a) put together a speech-recognition engine that can be deployed in offline scenarios (i.e., using the PocketSphinx and VoxForge projects);
(b) define a general architecture that provides high-level speech-recognition functionality to the Sugar core and activities (i.e., exposing this engine as a DBus service; a minimal sketch follows this list);
(c) find acceptable solutions that will allow us to add new content and handle different languages.
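A rough, non-committal sketch of how objectives (a) and (b) could fit together is shown below: a PocketSphinx decoder wrapped in a session DBus service using dbus-python. The bus name org.sugarlabs.Speech, the object path, and the Recognize method are hypothetical placeholders rather than an agreed interface, and the exact PocketSphinx Python API differs between package versions.

<pre>
# Hypothetical sketch: expose a PocketSphinx decoder as a session DBus service.
# Bus name, object path and method name are placeholders, not an agreed API.
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib
from pocketsphinx import Decoder  # PocketSphinx Python bindings

BUS_NAME = 'org.sugarlabs.Speech'      # placeholder bus name
OBJECT_PATH = '/org/sugarlabs/Speech'  # placeholder object path


class SpeechService(dbus.service.Object):
    """Minimal speech-recognition service that Sugar activities could call."""

    def __init__(self, bus):
        dbus.service.Object.__init__(self, bus, OBJECT_PATH)
        # Recent PocketSphinx packages ship a default English model;
        # per-language models (e.g. built from VoxForge data) would be
        # configured here instead.
        self._decoder = Decoder()

    @dbus.service.method(BUS_NAME, in_signature='s', out_signature='s')
    def Recognize(self, wav_path):
        """Decode a 16 kHz, 16-bit mono PCM WAV file and return its transcript."""
        self._decoder.start_utt()
        with open(wav_path, 'rb') as wav:
            wav.read(44)  # skip the RIFF/WAV header
            self._decoder.process_raw(wav.read(), False, True)
        self._decoder.end_utt()
        hyp = self._decoder.hyp()
        return hyp.hypstr if hyp else ''


def main():
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    name = dbus.service.BusName(BUS_NAME, bus)  # keep the well-known name claimed
    SpeechService(bus)
    GLib.MainLoop().run()


if __name__ == '__main__':
    main()
</pre>

With a service along these lines, an activity would only need a small DBus call to obtain a transcript, instead of linking against the engine and its language models directly.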
|-