Features/Global Text To Speech

Summary

When the user presses Alt+Shift+S, the currently selected text should be spoken by the computer.
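
As a rough sketch of the intended behaviour, assuming GTK 3 via PyGObject and the espeak command-line tool (the real shell goes through a key handler and a D-Bus speech service, described below), the core of the feature is reading the primary selection and speaking it:

import subprocess

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk

def say_selected_text():
    # Read the X primary selection: the text the user currently has highlighted.
    clipboard = Gtk.Clipboard.get(Gdk.SELECTION_PRIMARY)
    text = clipboard.wait_for_text()
    if text:
        # Speak it with the espeak command-line tool; the shell would instead
        # hand the text to a speech service over D-Bus.
        subprocess.call(['espeak', text])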

Owner

Current status

  • Targeted release: 0.96
  • Last updated:
  • Percentage of completion:

Detailed Description

A few activities already implement text to speech, but Sugar itself has a global text-to-speech feature that was never fully implemented.

Most of the code is already in place. Currently, if the user selects text in any activity and presses Alt+Shift+S, the following appears in shell.log:

1321360323.039468 DEBUG root: _key_pressed_cb: 39 9 <alt><shift>s
1321360323.090568 DEBUG root: owner_change_cb
1321360323.090857 DEBUG root: Clipboard.add_object
1321360323.093460 DEBUG root: ClipboardTray: 1 was added
1321360323.094790 DEBUG root: KeyHandler._primary_selection_cb: 'hola'
1321360323.099433 DEBUG root: Asking for target text/rtf.
1321360323.100389 ERROR dbus.proxies: Introspect error on org.laptop.Speech:/org/laptop/Speech: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.laptop.Speech was not provided by any .service files
1321360323.100704 DEBUG dbus.proxies: Executing introspect queue due to error
1321360323.102645 ERROR root: An error occurred with the ESpeak service: DBusException(dbus.String(u'The name org.laptop.Speech was not provided by any .service files'),)
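
The error above means that nothing on the session bus provides the org.laptop.Speech name. A minimal, illustrative stub of such a service could look like the sketch below; the SayText method and its interface name are assumptions for the sketch, not the real speech-server API, and for bus activation the name would also have to be declared in a D-Bus .service file, which is exactly what the error complains about:

import subprocess

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

class Speech(dbus.service.Object):
    def __init__(self, bus):
        # Claim the bus name and object path that the shell is asking for.
        name = dbus.service.BusName('org.laptop.Speech', bus=bus)
        dbus.service.Object.__init__(self, name, '/org/laptop/Speech')

    @dbus.service.method('org.laptop.Speech', in_signature='s')
    def SayText(self, text):
        # Hand the text to espeak; a real service would also manage queueing,
        # interruption, pitch and rate.
        subprocess.call(['espeak', text])

if __name__ == '__main__':
    DBusGMainLoop(set_as_default=True)
    Speech(dbus.SessionBus())
    GLib.MainLoop().run()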

Code implementing text to speech already exists in the Read, Memorize and Speak activities.

A device should be added to the frame so the user can select the pitch and rate (speed) of the voice.
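
Assuming espeak is the engine behind the service, pitch and rate map directly onto its command-line options, so the frame device only has to store two numbers. A hedged sketch:

import subprocess

def say_text(text, pitch=50, rate=175):
    # espeak takes a pitch from 0 to 99 (-p) and a speaking rate in words
    # per minute (-s); 50 and 175 are its defaults.
    subprocess.call(['espeak', '-p', str(pitch), '-s', str(rate), text])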

More information:

http://wiki.laptop.org/go/Speech_Server

http://wiki.laptop.org/go/Speech_synthesis

Tickets:

http://dev.laptop.org/ticket/7911

http://dev.laptop.org/ticket/7906

http://dev.laptop.org/ticket/7907

Benefit to Sugar

Text to speech is a valuable feature for children who are learning to read and for children with disabilities.

Scope

The change is isolated.

UI Design

I propose using the default language for now and only exposing controls to set pitch and rate. In a later change, we can allow more than one language to be enabled, with a switch to change between them.

The UI will be a device in the frame, with the needed controls in the palette.
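
A plain-GTK sketch of the two palette controls follows; the real device would use Sugar's frame and palette widgets rather than a stand-alone window, and the class and widget names here are only illustrative:

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

class SpeechControls(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, title='Speech')
        box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)

        # Pitch slider: espeak accepts 0-99, default 50.
        self.pitch = Gtk.Scale.new_with_range(Gtk.Orientation.HORIZONTAL, 0, 99, 1)
        self.pitch.set_value(50)
        box.pack_start(Gtk.Label(label='Pitch'), False, False, 0)
        box.pack_start(self.pitch, False, False, 0)

        # Rate slider: speaking speed in words per minute; espeak's default is 175.
        self.rate = Gtk.Scale.new_with_range(Gtk.Orientation.HORIZONTAL, 80, 450, 5)
        self.rate.set_value(175)
        box.pack_start(Gtk.Label(label='Rate'), False, False, 0)
        box.pack_start(self.rate, False, False, 0)

        self.add(box)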

How To Test

Features/Global Text To Speech/Testing

User Experience

Dependencies

We already include all the needed dependencies.

Contingency Plan

Documentation

Release Notes

Comments and Discussion