Difference between revisions of "Speech-synthesis"

From Sugar Labs

Revision as of 11:46, 1 April 2009

About you

Q.1: What is your name?

A: Chirag Jain


Q.2: What is your email address?

A: chiragjain1989{AT}gmail{DOT}com


Q.3: What is your Sugar Labs wiki username?

A: chiragjain1989


Q.4: What is your IRC nickname?

A: chirag


Q.5: What is your primary language? (We have mentors who speak multiple languages and can match you with one of them if you'd prefer.)

A: Hindi and English


Q.6: Where are you located, and what hours do you tend to work? (We also try to match mentors by general time zone if possible.)

A: I am located in Delhi, India (GMT+5:30). I can work from early morning to late midnight, so collaborating with any mentor wouldn't be a big deal.

Q.7: Have you participated in an open-source project before? If so, please send us URLs to your profile pages for those projects, or some other demonstration of the work that you have done in open-source. If not, why do you want to work on an open-source project this summer?

A: I was not aware of open source before I stepped into college, but then I heard a lot about it from my seniors. I started participating in coding events, and my first open-source event was the AI Challenge organized during our technical fest.

I wrote the simulator code for the event.
Link: http://code.google.com/p/artificial-intelligence.
I also made an open-source Sudoku solver in C++ using a backtracking method. The algorithm's time complexity is exponential in nature.
Link: http://code.google.com/p/sudoku-crazy
I also actively participate on the SPOJ programming contest site: http://www.spoj.pl/
Currently I am at world rank 756.
Link: http://www.spoj.pl/users/chiragjain1989
Now, after learning a lot about open source, I want to gain real-world experience in open-source development. GSoC is an opportunity where I can apply my technical skills, learn new things, and at the same time contribute something to society.


About your project

Q.8: What is the name of your project?

A: Speech Synthesis


Q.9: My project description. What are you making?

A: I want to integrate speech into the Sugar core. That is, I want to create a framework that provides speech synthesis as basic functionality in Sugar.

Let me be more clear; I will use case scenarios to explain my proposal. Imagine any window containing some text is open in Sugar. What I will do is provide speech for the text in that window. And of course the user has the freedom to listen to only the text he has selected. That is, this framework will speak either the complete text contained in the window or the user-selected text.
How the user does this is very simple.
First he has to select the text he wants to listen to; then he can simply press a keyboard shortcut (a key combination like Alt+S) or a button provided in Sugar like the Home button. From now on I will call this button the speech button. This button can be made permanent, shown when the mouse pointer is moved to the top-left corner of the screen.
I will also provide a configuration management tool with a simple GUI. Simple, because our target users are small children in the age group 3-15.
With this tool you can configure the speech: you can increase or decrease the volume, change the language, and change the accent, pitch, male or female voice, rate of speech, etc.
I will also provide karaoke-style coloring or captioning of the words being spoken. For example, suppose the Write activity is open in Sugar with some text. The user selects the text and presses the speech button, and the framework starts speaking the selected text.
The word currently being spoken will be captioned so that the user can keep track of it.
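A minimal sketch of the karaoke-style captioning idea: given the selected text and the index of the word currently being spoken, mark that word in the displayed string. In the real framework the marker would be a color change driven by word-boundary callbacks from the TTS engine; the function name and bracket marker here are hypothetical.

```python
def caption(text, current):
    """Return the text with the word at index `current` marked.

    A plain-text stand-in for the color highlighting: the word
    currently being spoken is wrapped in brackets.
    """
    words = text.split()
    words[current] = '[%s]' % words[current]
    return ' '.join(words)
```

For example, `caption('the quick brown fox', 1)` returns `'the [quick] brown fox'`.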
One more thing I am including is a keyboard speaker. In the configuration tool, the user will have an option to turn the keyboard speaker on or off. If it is turned on, then whenever the user presses a keyboard key, the framework will speak it: if the user presses the Tab key the framework will speak 'Tab', on pressing Caps Lock it will speak 'caps lock', and so on.
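At its core the keyboard speaker only needs a mapping from raw key names to what should be spoken. The key names below follow GDK's conventions ('Tab', 'Caps_Lock'); the mapping table and function name are hypothetical placeholders.

```python
# Hypothetical mapping from GDK-style key names to spoken phrases.
SPOKEN_NAMES = {
    'Caps_Lock': 'caps lock',
    'Return': 'enter',
    'BackSpace': 'backspace',
}

def spoken_form(key_name):
    """Return what the keyboard speaker should say for one key press."""
    if key_name in SPOKEN_NAMES:
        return SPOKEN_NAMES[key_name]
    if len(key_name) == 1:
        return key_name  # plain letters and digits are spoken as-is
    return key_name.replace('_', ' ').lower()
```

In the full framework, the result of `spoken_form()` would simply be handed to the speech generator.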
Now, after describing so many features of this framework, I thought: why not make it useful for blind users too? So here is an idea.
Icon reader. The idea is simple: as a blind user browses the XO, he can keep track of the current position of the mouse pointer. Suppose the mouse pointer is currently over the Home button. When the user presses a predefined keyboard key, the framework will speak 'Home'. Similarly, if the pointer is over the desktop, the framework will speak 'desktop'.
If Sugar Labs likes this idea, a simple change in the XO hardware could make that predefined key tactile for blind users, the way the F and J keys have a slight projection for feel. This functionality could be a boon for blind users.
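The icon reader reduces to a hit test: when the user presses the predefined key, look up which named screen region the pointer is in and speak that name. The region names and coordinates below are hypothetical placeholders for whatever Sugar's shell actually exposes.

```python
# Hypothetical named screen regions: (name, (x, y, width, height)).
REGIONS = [
    ('Home', (0, 0, 48, 48)),
    ('Desktop', (0, 48, 1200, 852)),
]

def region_under_pointer(x, y):
    """Return the name of the region containing the pointer, if any."""
    for name, (rx, ry, rw, rh) in REGIONS:
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return name
    return None
```

The framework would then pass the returned name, e.g. 'Home', to the speech generator.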
Here again are the main characteristics of my proposal:
  • Providing speech for text opened in any window in any activity in Sugar.
  • Providing a configuration panel with a GUI from which the speech configuration can be changed.
  • Karaoke-style coloring of the text being spoken.
  • Keyboard speaker.
  • Icon reader.
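The configuration options listed above map naturally onto espeak's command-line flags, so the command-line tool can be a thin wrapper around it. The flag names (-v, -s, -p, -a) are espeak's real options; the function names and default values here are hypothetical.

```python
import subprocess

def espeak_command(text, voice='en', rate=170, pitch=50, volume=100):
    """Build an espeak invocation for one utterance."""
    return ['espeak',
            '-v', voice,        # language/voice, e.g. 'en' or 'hi'
            '-s', str(rate),    # speaking rate in words per minute
            '-p', str(pitch),   # pitch, 0-99
            '-a', str(volume),  # amplitude, 0-200
            text]

def speak(text, **config):
    """Speak `text` with the configured voice settings."""
    subprocess.call(espeak_command(text, **config))
```

The configuration manager's GUI would then only need to persist these settings and pass them to `speak()`.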
Who are you making it for?
According to eye-tracking research, "viewers naturally synchronize the auditory and textual information while watching a film song with SLS. When SLS is integrated into popular TV entertainment, reading happens automatically and subconsciously."
Language learning can be a great experience if done with speech. The literacy rate can be increased by 6-10% if speech is included along with text, because our brain remembers sounds more easily than text. So I am making this framework for children in the age group 3-15, so that learning a language becomes easier for them.
Not only this, the XO can now become a boon for blind children too.
Why do they need it?
The main aim of Sugar is to spread the fruit of literacy. As I have already mentioned, students can learn very fast when speech accompanies the text or words they read, so including this framework in Sugar will make it more effective.
Not only this, blind students and children can now also use the XO, which will be like a boon for them. Blind children also want to study...
What technologies (programming languages, etc.) will you be using?
I discussed this project a lot with alsroot, assimd and besmac on IRC. The main points of discussion were:
Some rough ideas of implementation:
                                                    --------------
                                                        Speech  
                                                       (Level 1)
                                                    ---------------
                                                          |
                                                          |
                                                          |
                                                          |
                                                          V
                                                   -----------------
                                                       Espeak (TTS)
                                                        (Level 2)
                                                   -----------------
                                                          |
                                                          |
                                                          |
                                                          V
                                                   ------------------
                                                    gstreamer Plugin
                                                       (Level 3)
                                                   ------------------
                                                          |
                                                          |
                                                          |
                                                          V
                                                  -------------------
                                                   Command Line Tool
                                           (To produce speech of the selected text)
                                           (In user selected languages and accents)
                                                  --------------------
                                                          |
                                                          |
                                ---------------------------------------------------------
                                |                                                        |
                                |                                                        |
                                V                                                        V
                   ------------------------                                      ---------------------
                   Button/Keyboard shortcut                                 GUI for Configuration management 


  • On the top level is the speech engine (espeak) producing the speech.
  • There are two options for a layer over the TTS engine espeak: one is the speech dispatcher, which was created as a GSoC project last year, and the other is the gstreamer plugin.
  • Both of these use espeak. Listen and Spell uses speechd, but when I discussed it with alsroot on IRC, he told me that using speechd is a bad idea because it has become a system daemon and requires root privileges to work. Therefore using the gstreamer plugin is the best option.
  • For the GUI, PyGTK can be used.
  • To get the user-selected text, my idea is to use the clipboard module, which takes care of copy and paste. Using this module, the entire selected text can be sent to the speech framework so that it can be spoken out.
  • For the keyboard speaker, we can simply store the keystrokes in a file and then send the file to the speech generator.
  • A small code snippet that I have prepared for demonstration purposes is shown below. You can copy-paste it and try it: first select some text, then run the code through a terminal, and it will speak the text. This is the very basic behavior we want to achieve in Sugar:
                        import gtk
                        from espeak import espeak

                        obj = espeak()
                        # Grab the PRIMARY selection: the text the user
                        # currently has highlighted anywhere on screen.
                        clip = gtk.Clipboard(display=gtk.gdk.display_get_default(),
                                             selection="PRIMARY")
                        text = clip.wait_for_text()
                        if text is None:
                            obj.speak("Sorry! No text is selected")
                        else:
                            obj.speak(text)
You can select the text anywhere in Sugar. Although I have written this code using espeak directly, in future I will be using the gstreamer plugin.
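For reference, the same speaking step could sit on top of the gstreamer espeak plugin instead of calling espeak directly. The 'espeak' element name and its 'text' property come from the third-party gst-plugins-espeak package, so treat this sketch as an assumption until that plugin is confirmed available:

```python
# Pipeline description for the (assumed) gst-plugins-espeak element.
PIPELINE_DESC = 'espeak name=source ! autoaudiosink'

def speak_with_gstreamer(text):
    """Speak `text` through a gstreamer pipeline (GStreamer 0.10 era)."""
    import gst  # gstreamer Python bindings; imported lazily
    pipeline = gst.parse_launch(PIPELINE_DESC)
    pipeline.get_by_name('source').props.text = text
    pipeline.set_state(gst.STATE_PLAYING)
    # Block until playback finishes or fails.
    pipeline.get_bus().poll(gst.MESSAGE_EOS | gst.MESSAGE_ERROR, -1)
    pipeline.set_state(gst.STATE_NULL)
```

This keeps espeak out of the calling process and lets the sound output go through Sugar's normal audio path.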

Q.10: What is the timeline for development of your project? The Summer of Code work period is 7 weeks long, May 23 - August 10; tell us what you will be working on each week. (As the summer goes on, you and your mentor will adjust your schedule, but it's good to have a plan at the beginning so you have an idea of where you're headed.) Note that you should probably plan to have something "working and 90% done" by the midterm evaluation (July 6-13); the last steps always take longer than you think, and we will consider cancelling projects which are not mostly working by then.

A: April 21 - May 22

During this period I will remain in constant touch with my mentor and the Sugar community. I will remain active on IRC and the mailing list to discuss design details and further improvements that can be incorporated into this project.
I will also study two relatively new tools called StarDict and Orca. These could be very useful for Sugar.
By then I will be absolutely clear on my approach; for now I am providing a rough plan.
May 24 - June 5
  • Will work on implementing the command line interface of the framework.
  • Will complete the basic architecture in which user can select and listen the text from command line interface.
  • Will discuss the UI design for configuration manager on IRC.
June 6 - June 15
  • Will work on implementing the keyboard reader. (Command line)
  • Will design the finalized GUI of the configuration manager.
  • Will release the snapshots of the GUI on wiki page.
June 16 - June 25
  • Will implement the configuration manager.
  • Will link the various options I already described into the GUI.
  • Will release the basic configuration manager.
June 26 - July 7
  • Will implement the keyboard speaker.
  • Work on the implementation of the icon reader.
July 8 - July 12
  • Will finalize an alpha release of the framework.
July 13
  • Mid term evaluation. Will release the alpha version.
July 14 - July 23
  • Will test it on XO.
  • Ask for bugs and further improvements.
July 24 - August 3
  • Will port the framework to Windows.
August 4 - August 13
  • Ask for feedback.
  • Preparation for the beta version release.
August 14 - August 22
  • Will release the beta version.
August 23 onwards
  • I will continue working on this to make it available in the official Sugar distros.

Q.11: Convince us, in 5-15 sentences, that you will be able to successfully complete your project in the timeline you have described. This is usually where people describe their past experiences, credentials, prior projects, schoolwork, and that sort of thing, but be creative. Link to prior work or other resources as relevant.

A: I am currently pursuing my B.E. in Computer Science at Netaji Subash Institute of Technology, New Delhi.

I have already described some of my past achievements, like the AI Challenge, whose simulator code I prepared in a time span of just 15 days.
Link: http://code.google.com/p/artificial-intelligence.

In school, too, I prepared a lot of small projects in C++, like a digital diary, a Sudoku solver, a library manager, a telephone directory, etc.

Another reason I can easily complete the project is that I will get an almost 3-month break during my summer vacation, right from the end of May to August. Therefore I can concentrate entirely on this project with all my energy.
A lot of students from my college have been associated with OLPC for development work, for example:
  • Food Force, which is still in its development phase. We are working hard to achieve collaboration in Food Force.
  • Listen and Spell. This project was started at GSoC 2008, and work is still in progress to remove its speech-dispatcher dependencies.
  • Speech dispatcher. This project was completed by my senior at GSoC 2008.
By giving these examples, what I am trying to say is that I have many helpful seniors who have a lot of experience and who are ready to help me in every possible way they can. So I can get a lot of guidance and ready help in any case; my chances of getting stuck at any point are very low.


You and the community

Q.12: If your project is successfully completed, what will its impact be on the Sugar Labs community? Give 3 answers, each 1-3 paragraphs in length. The first one should be yours. The other two should be answers from members of the Sugar Labs community, at least one of whom should be a Sugar Labs GSoC mentor. Provide email contact information for non-GSoC mentors.

A: In my view, the main aim of Sugar Labs is to spread the fruit of literacy in developing nations. It is a common experience that we learn faster by listening to things than by reading them. Providing speech in the Sugar core will be like making Sugar 10-15% more effective. When children in the age group 3-15 who are learning languages hear the speech again and again, they will be able to learn it very fast. Not only this, they will now be able to hear a story or any other text rather than just reading it. One more potential advantage is for blind students, who can't read the text but can learn the language by listening to it and feeling the words.

According to Edward Cherlin <echerlin@gmail.com>:
"call our text coloring engine to mark the word being spoken. That's designed for the pre-literate, on the model of Same-Language Subtitling in India."
This means that people in developing nations like India can learn a language or text faster if the same-language subtitling model is employed.
According to Philip Wagner <Philip5147@aol.com>:
"I am Philip Wagner. I am a member of the Education Team for Sugar Labs. I was a teacher for seventeen years in Africa and in the United States.

The speech synthesizer is very important to help children learn to speak, read and write languages. Some research states that a child needs to hear words pronounced between one hundred and one thousand times before the child knows the word. This is a monumental task for a teacher to be repeating words enough times for the children to learn. The speech synthesizer is a tool which helps the teacher in doing some of that repetition. The more words a child knows the more learning can take place inside the child's brain. We think with words. If we do not have the words necessary to do the thinking then things don't progress as well. For writing, the child listens to a story that the speech synthesizer very patiently repeats as many times as the child needs it and then the child writes the story in his or her own words. The hearing, seeing, and writing of the words helps the student for reading. There are many more words that are used in books and on the internet than we use to speak with. In English we use about ninety thousand words for speaking. An estimate of the total words in English is more than five-hundred-thousand. One dictionary has five-hundred-thousand entries. The more a child learns the more the child will learn. We cannot depend on teachers to teach enough words to children.

I am very much encouraged that Chirag Jain is working toward preparing the speech synthesizer in Sugar."




Q.13: Sugar Labs will be working to set up a small (5-30 unit) Sugar pilot near each student project that is accepted to GSoC so that you can immediately see how your work affects children in a deployment. We will make arrangements to either supply or find all the equipment needed. Do you have any ideas on where you would like your deployment to be, who you would like to be involved, and how we can help you and the community in your area begin it?

A: I would greatly appreciate Sugar Labs' efforts if they are planning this, and I think my home town, which is still underdeveloped and has many primary schools, would be the best place for this pilot. I have many friends in my home town who are involved in such activities, and they would love to contribute here as well. There is also a primary school near my home where we can easily test the activity.


Q.14: What will you do if you get stuck on your project and your mentor isn't around?

A: Well, I have some great, helpful seniors who are already associated with OLPC on projects (like Food Force) and who are ready to help me in every possible way they can.

If the problem still can't be resolved, I can always ask on IRC.
Google is also a great option.
I can also post the problem to the Sugar mailing list.

Q.15: How do you propose you will be keeping the community informed of your progress and any problems or questions you might have over the course of the project?

A: I will regularly post my progress reports on my wiki page.
Link: http://wiki.sugarlabs.org/go/chiragjain1989
I can also mail my progress reports to the Sugar mailing list.

Miscellaneous

My Screenshot with my email address

Q.16: We want to make sure that you can set up a development environment before the summer starts. Please send us a link to a screenshot of your Sugar development environment with the following modification: when you hover over the XO-person icon in the middle of Home view, the drop-down text should have your email in place of "Restart." See the image on the right for an example. It's normal to need assistance with this, so please visit our IRC channel, #sugar on irc.freenode.net, and ask for help.

A: My development environment screenshot is attached on the right side.


Q.17: What is your t-shirt size? (Yes, we know Google asks for this already; humor us.)

A: Extra Large


Q.18: Describe a great learning experience you had as a child.

A: When I was in primary school, there were some teachers who believed in education through entertainment, so they always performed entertaining activities to teach us. For example, when I was in the third or fourth standard, I always got confused by the less-than and greater-than signs. Even when I could make out which number was greater or lesser, I became confused when selecting the right sign. So one day I approached my teacher, and she removed my confusion with a nice method. She told me to put two dots next to the greater number, like ':', and one dot next to the lesser number, like '.'. For example, if I had to place a sign between 2 ___ 5, I would put one dot after the 2 and two dots before the 5, like this: 2 . : 5

Now, joining these dots gives the correct less-than sign.

Q.19: Is there anything else we should have asked you or anything else that we should know that might make us like you or your project more?

A: [TODO]