:The accepted final output would be built from the words that occur most often across all the cleaned signals combined. A possible drawback is that cleaning a signal may split one word into two or merge two words into one; in such cases I plan to select whichever form (one word or two) occurs more often for the final output.
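
:A minimal sketch of that word-level voting step, assuming the hypotheses from the individually cleaned signals have already been aligned word by word (handling the split/merged-word case properly would need a real alignment pass):

<syntaxhighlight lang="python">
from collections import Counter

def vote_words(hypotheses):
    """At each word position, keep the word that occurs most often
    across the hypotheses produced from the cleaned signals.
    Assumes the hypotheses are already aligned position by position."""
    length = min(len(h) for h in hypotheses)
    return [
        Counter(h[i] for h in hypotheses).most_common(1)[0][0]
        for i in range(length)
    ]

# Example: three recognition passes over differently cleaned copies of a signal
hypotheses = [
    ["open", "the", "text", "editor"],
    ["open", "a", "text", "editor"],
    ["open", "the", "text", "editor"],
]
print(" ".join(vote_words(hypotheses)))   # -> open the text editor
</syntaxhighlight>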
 
:The Core Engine would use the speech recognition engine PocketSphinx (as suggested on the ideas page) and the acoustic models from VoxForge. This would be the first part of the project.
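
:A rough sketch of what the Core Engine's decoding step might look like with the pocketsphinx Python bindings; the model paths are placeholders where a VoxForge acoustic model, language model and dictionary would go, and the exact API can differ between PocketSphinx versions:

<syntaxhighlight lang="python">
from pocketsphinx import Decoder

# Placeholder paths -- these would point at the VoxForge acoustic model,
# a language model and a pronunciation dictionary.
config = Decoder.default_config()
config.set_string('-hmm', '/path/to/voxforge/acoustic-model')
config.set_string('-lm', '/path/to/language-model.lm')
config.set_string('-dict', '/path/to/pronunciation.dict')

decoder = Decoder(config)

# Decode one utterance of raw 16 kHz, 16-bit mono PCM audio.
with open('utterance.raw', 'rb') as audio:
    decoder.start_utt()
    decoder.process_raw(audio.read(), False, True)  # no_search=False, full_utt=True
    decoder.end_utt()

hypothesis = decoder.hyp()
if hypothesis is not None:
    print(hypothesis.hypstr)
</syntaxhighlight>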
 
:After this, in the second part, we would expose the API and make this procedural architecture event driven. Capturing the input speech via GStreamer, sending the output over D-Bus, and connecting it all to the Core Engine would be done in Python.
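
:One possible shape for the D-Bus side of that event-driven design, using dbus-python with a GLib main loop. The bus name, object path and signal name below are made up for illustration, and the GStreamer capture pipeline that feeds the engine is left out:

<syntaxhighlight lang="python">
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)

class SpeechService(dbus.service.Object):
    """Publishes recognised text on the session bus so that any
    listening application can react to spoken commands."""

    def __init__(self):
        bus_name = dbus.service.BusName('org.example.SpeechRecognition',
                                        bus=dbus.SessionBus())
        super().__init__(object_path='/org/example/SpeechRecognition',
                         bus_name=bus_name)

    @dbus.service.signal(dbus_interface='org.example.SpeechRecognition',
                         signature='s')
    def UtteranceRecognised(self, text):
        # dbus-python emits the signal when this method is called;
        # the body can stay empty.
        pass

service = SpeechService()

# In the real engine this would be called from the GStreamer/PocketSphinx
# callback whenever a new hypothesis is ready.
GLib.idle_add(lambda: service.UtteranceRecognised("hello world") or False)
GLib.MainLoop().run()
</syntaxhighlight>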