: For GSoC ’09, keeping the time constraints in mind, I intend to develop a Directions tool using OpenStreetMap/OpenLayers for visually impaired people (as well as the general public): after the user enters a source and a destination in text boxes, the walking/driving directions are output not only on the map but also as text describing the entire route, with major points of interest and small details such as a square, junction, or traffic signal (similar to the tool currently implemented by MapQuest), conforming to W3C guidelines so that it is easily readable.
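
: As a rough illustration of the text output (a sketch only; the Step shape and the element id below are hypothetical, not part of any existing code), the directions could be rendered as a semantic HTML ordered list with an ARIA label so that screen readers announce the route naturally:

<syntaxhighlight lang="typescript">
// Minimal sketch: render turn-by-turn directions as an accessible
// ordered list. The Step interface and "directions" container id
// are illustrative assumptions.
interface Step {
  instruction: string;    // e.g. "Turn left at the traffic signal"
  distanceMetres: number;
}

function renderDirections(steps: Step[]): void {
  const container = document.getElementById("directions"); // assumed element
  if (!container) return;
  const list = document.createElement("ol");
  // An aria-label helps speech-output users identify this region.
  list.setAttribute("aria-label", "Walking directions");
  for (const step of steps) {
    const item = document.createElement("li");
    item.textContent = `${step.instruction} (${step.distanceMetres} m)`;
    list.appendChild(item);
  }
  container.replaceChildren(list);
}
</syntaxhighlight>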
 
: Currently, the text-based outputs are given in metres/kilometres or miles. I intend to build a unit converter so that the user can pick the convention he/she is comfortable with, and since the tool targets the visually impaired, a new unit will be added: “footsteps”, in long or short strides. A blind user can then literally count footsteps to reach his/her destination (a blind user will generally need the service for short distances, since he/she cannot drive). Directions for cars, walking, bicycles, etc. will still be provided, so the service remains usable by all.
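
: A minimal sketch of the footsteps unit, assuming illustrative stride lengths of about 0.75 m (long) and 0.55 m (short) that would in practice be calibrated per user:

<syntaxhighlight lang="typescript">
// Sketch of the "footsteps" unit. The stride lengths are assumed
// averages, not measured values; a real deployment would let the
// user calibrate them.
type Stride = "long" | "short";

const STRIDE_METRES: Record<Stride, number> = {
  long: 0.75,
  short: 0.55,
};

function metresToFootsteps(distanceMetres: number, stride: Stride): number {
  // Round up so the user never stops short of the next turn.
  return Math.ceil(distanceMetres / STRIDE_METRES[stride]);
}

// Example: a 120 m leg is about 160 long strides or 219 short strides.
console.log(metresToFootsteps(120, "long"));  // 160
console.log(metresToFootsteps(120, "short")); // 219
</syntaxhighlight>
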
: To implement this, I intend to use CloudMade’s libraries/APIs and services such as geocoding and geosearch in combination with routing (these services run on server machines and are accessed via HTTP; API wrappers exist in Ruby, Java, and Python). I understand the OSM tags used for routing and have explored various other options, such as OSMNavigation and LibOSM, GraphServer, Pyroute Lib, and services like OpenRouteService and YOURS, which show that the implementation is entirely feasible. For text-to-speech, visually impaired users will typically rely on a screen reader; however, to make the system machine-independent, I intend to give the application its own text-to-speech support based on the University of Washington’s open-source online screen reader, WebAnywhere (http://webanywhere.cs.washington.edu/), which can be adapted to meet the needs of other applications. A DHTML interface is generally difficult to access through a speech-output interface; to solve this, the embedded XML metadata delivered by the application will be used to generate an audio-formatted representation of the content.
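
: Since the routing services are reached over plain HTTP, the client side could look like the sketch below. The endpoint URL, query parameters, and response shape are placeholders, not the actual API of CloudMade or any of the services named above:

<syntaxhighlight lang="typescript">
// Hedged sketch: fetch a route over HTTP. The host, parameters and
// response fields are hypothetical stand-ins for whichever routing
// service (CloudMade, YOURS, OpenRouteService, ...) is chosen.
interface RouteResponse {
  instructions: string[];      // assumed shape of the service's reply
  totalDistanceMetres: number;
}

async function fetchRoute(
  from: [number, number],  // [lat, lon] of the source
  to: [number, number]     // [lat, lon] of the destination
): Promise<RouteResponse> {
  const url =
    `https://routing.example.org/route` +          // placeholder host
    `?flat=${from[0]}&flon=${from[1]}` +
    `&tlat=${to[0]}&tlon=${to[1]}&mode=foot`;
  const resp = await fetch(url);
  if (!resp.ok) throw new Error(`Routing request failed: ${resp.status}`);
  return (await resp.json()) as RouteResponse;
}
</syntaxhighlight>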
 
: The application will be fully keyboard accessible, with various shortcut options and a very easy-to-understand interface, so that people with cognitive disabilities can also use the service with ease.
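
: For example, keyboard shortcuts could be wired up roughly as follows; the Alt-key bindings and element ids are illustrative, not a final design:

<syntaxhighlight lang="typescript">
// Minimal sketch of global keyboard shortcuts. All key bindings and
// element ids here are assumptions for illustration.
document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (!event.altKey) return; // e.g. Alt-based shortcuts
  switch (event.key) {
    case "s": // jump to the source text box
      document.getElementById("source")?.focus();
      event.preventDefault();
      break;
    case "d": // jump to the destination text box
      document.getElementById("destination")?.focus();
      event.preventDefault();
      break;
    case "r": // bring the directions list into view
      document.getElementById("directions")?.scrollIntoView();
      event.preventDefault();
      break;
  }
});
</syntaxhighlight>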
 
: The application will benefit millions of children studying in mainstream as well as blind schools, including schools that cannot afford costly computers, software, or hardware.