Changes

929 bytes added, 10:54, 11 February 2014
Line 16:
! [[File:Cordova_sugar.png|180px|left|thumb]] || Cordova/PhoneGap container for Sugar || Lionel Laské ||align=left valign=top| The idea is to allow Sugar Web Activities to use device-dependent features. To this end, the project is to transform Sugar into a Cordova/PhoneGap container and to implement the major PhoneGap features; in short, to add Sugar as a new supported platform for Cordova/PhoneGap. Sugar features that could be exposed to Sugar Web Activities through Cordova/PhoneGap include: Camera, Audio/Video capture, Accelerometer, Connection, Events, File, Globalization, and Media. During the project, the student will also have to demonstrate their work by writing some sample activities that use device features, for example a Record-like activity or a Level-Tool-like activity (see the Cordova camera sketch after the table).
|-
! [[File:Microphone.svg|180px|left|thumb]] || Voice Interface || Martin Abente ||align=left valign=top| There has been good headway at NUA in providing voice recognition in Sugar. The goal of this project is to incorporate voice I/O as a first-class interface to the Sugar desktop.

Speech recognition technologies are interaction mechanisms that have evolved from "alternative" to mainstream. Proof of this is the proliferation of such technologies across a wide range of domains, from smartphone assistants and medical record transcription to voice command controls in smart cars and TVs, among many others.

In this regard, not much has been seen in the education domain. This could be due to the fact that there is still missing glue between speech recognition technologies and educational content developers. This project is about filling that gap within the Sugar Learning Platform; to do so, the following objectives must be fulfilled:

(a) put together a speech recognition engine that can be deployed in offline scenarios, e.g., using the PocketSphinx and VoxForge projects;
(b) define a general architecture that provides high-level speech recognition functionality to the Sugar core and to activities, e.g., by exposing this engine as a DBus service (see the DBus client sketch after the table);
(c) find acceptable solutions for adding new content and for handling different languages.
 
|-
 
! [[File:Headwand.jpg|180px|left|thumb]] || Assistive Interface || Andres Aguirre ||align=left valign=top| The goal of this project is to use a base sensor package to provide a physical sensor interface to the Sugar desktop for people with limited motor control.
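
To make the Cordova/PhoneGap idea above more concrete, here is a minimal, purely illustrative sketch of how a Sugar Web Activity could take a photo through the standard cordova-plugin-camera API once Sugar is supported as a Cordova platform. The "photo" element id and the surrounding page are assumptions of this sketch, not part of any existing activity.

<syntaxhighlight lang="typescript">
// Illustrative sketch only: a web activity capturing a photo through the
// standard cordova-plugin-camera plugin once the 'deviceready' event fires.
document.addEventListener('deviceready', () => {
  const camera = (navigator as any).camera; // injected by cordova-plugin-camera

  camera.getPicture(
    (imageData: string) => {
      // Show the captured picture inside the activity's page
      // (the "photo" element id is a placeholder for this sketch).
      const img = document.getElementById('photo') as HTMLImageElement;
      img.src = 'data:image/jpeg;base64,' + imageData;
    },
    (message: string) => console.error('camera error: ' + message),
    { quality: 50, destinationType: 0 /* Camera.DestinationType.DATA_URL */ }
  );
}, false);
</syntaxhighlight>

A similar pattern would apply to the other listed features (accelerometer, media capture, and so on), each exposed behind its own Cordova plugin.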
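
For objective (b) of the Voice Interface idea, the following is a hypothetical sketch of what a client of such a speech service could look like, written against the dbus-next Node.js library. The bus name, object path, interface, signal, and method names below are invented placeholders for illustration only; just the dbus-next calls themselves are real library API.

<syntaxhighlight lang="typescript">
// Hypothetical DBus client for the proposed speech recognition service.
// Every org.sugarlabs.* name below is a placeholder, not an existing API.
import * as dbus from 'dbus-next';

async function main(): Promise<void> {
  const bus = dbus.sessionBus();

  // Look up the (hypothetical) service exported by the recognition engine.
  const proxy = await bus.getProxyObject('org.sugarlabs.Speech', '/org/sugarlabs/Speech');
  const speech = proxy.getInterface('org.sugarlabs.Speech') as any;

  // React to utterances recognized by the engine (hypothetical signal).
  speech.on('TextRecognized', (text: string) => {
    console.log('heard:', text);
  });

  // Ask the engine to start listening with a given language model (hypothetical method).
  await speech.StartListening('en-US');
}

main().catch(console.error);
</syntaxhighlight>

Deciding how the engine itself (for example PocketSphinx with VoxForge acoustic models) is wrapped behind such an interface is exactly the architecture work that objective (b) describes.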