Maps for visually impaired
About you
- 1 What is your name?
- Rajan Vaish
- 2 What is your email address?
- vaish<dot>rajan<at>gmail<dot>com , vaishrajan<at>fedoraproject<dot>org
- 3 What is your Sugar Labs wiki username?
- vaish.rajan
- 4 What is your IRC nickname?
- generoushous
- 5 What is your primary language? (We have mentors who speak multiple languages and can match you with one of them if you'd prefer.)
- English and Hindi. I prefer English.
- 6 Where are you located, and what hours do you tend to work? (We also try to match mentors by general time zone if possible.)
- I am located in India (GMT+5:30). Since I am graduating in May '09, I can work anytime while staying at home (except 4 am to 10 am, India time).
- 7 Have you participated in an open-source project before? If so, please send us URLs to your profile pages for those projects, or some other demonstration of the work that you have done in open-source. If not, why do you want to work on an open-source project this summer?
- Yes, I have. I volunteered and did an online internship for One Laptop per Child (OLPC) in summer '08.
- OLPC Profile URL > http://wiki.laptop.org/go/User:Vaish.rajan
- I have been involved in various real-world projects and research work. My resume is available here > http://vaish.rajan.googlepages.com/rajanvaish_resume.pdf
- I am a member of the International Association of Engineers, have been a Fedora Ambassador (India) and a member of my university's LUG for almost a year now, and have contributed by organizing various university-level workshops.
About your project
- 1 What is the name of your project?
- Maps for Visually Impaired
- 2 Describe your project in 10-20 sentences. What are you making? Who are you making it for, and why do they need it? What technologies (programming languages, etc.) will you be using?
- Today there are millions of visually impaired people who use computers and the Internet (according to Wikipedia, the WHO's November 2004 article "Magnitude and causes of visual impairment" estimated that in 2002 there were 161 million visually impaired people in the world, about 2.6% of the world population, of whom 124 million, about 2%, had low vision and 37 million, about 0.6%, were blind). They use screen readers like JAWS to email, fill in forms, read articles, and so on. But there is one thing every screen reader fails at: it cannot read images. A map is one of those images a visually impaired person cannot read, and hence one of the things they need most. In an attempt to solve this, many large companies such as AOL have worked on the problem (please see http://sensations.aol.com/sensations-app-accessible-walking-directions/ ); their concept is that the imagery returned for a query is accompanied by text describing it in HTML. Once there is text as an output to the query, it can be read by a screen reader (or any text-to-speech software). The text needs to conform to the W3C guidelines (http://www.w3.org/WAI/quicktips/Overview.php ), for example by using HTML header tags, to make it screen-reader enabled.
- For GSoC '09, keeping in mind the time constraints, I intend to develop a directions tool using OpenStreetMap/OpenLayers for visually impaired people (as well as the general public). After the user enters a source and destination in text boxes, the route or walking/driving directions are output not only on a map but also as text explaining the entire route, with major points of interest and minute details such as a square, junction, or traffic signal (similar to what MapQuest currently implements), conforming to W3C guidelines so that it is easily readable.
- Currently the text-based outputs are given in terms of either m/km or miles. I intend to create a unit converter so that the user can use the convention he/she is comfortable with, and since the tool is specifically for the visually impaired, a new unit will be "footsteps", with long/short stride options, so that a blind user can actually count footsteps and reach his/her destination (generally a blind user will need the service for short distances, since he/she cannot drive). However, directions for cars, walking, bicycles, etc. will be provided too, keeping in mind that the service can be used by all. A rough sketch of such a converter is given below.
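- As an illustration only (not part of the original proposal), the footstep unit could be implemented as a small Python converter like the sketch below; the stride lengths, function name, and unit labels are assumptions that would need tuning with feedback from visually impaired testers.

  # Hypothetical sketch: convert a distance in metres to km, miles, or counted
  # footsteps. The stride lengths below are assumed averages, not measured values.
  STRIDE_METRES = {"short": 0.5, "long": 0.75}
  MILES_PER_KM = 0.621371

  def convert_distance(metres, unit="footsteps", stride="short"):
      """Return the distance in the unit the user prefers."""
      if unit == "km":
          return round(metres / 1000.0, 2)
      if unit == "miles":
          return round(metres / 1000.0 * MILES_PER_KM, 2)
      if unit == "footsteps":
          # Count whole steps so the user can actually count along the route.
          return int(round(metres / STRIDE_METRES[stride]))
      raise ValueError("unknown unit: %s" % unit)

  # Example: a 120 m leg of a route read out as short-stride footsteps.
  print(convert_distance(120, "footsteps", "short"))   # prints 240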
- To implement this, I intend to use CloudMade's libraries/APIs and services such as geocoding and geosearch in combination with routing (the services run on server machines accessed via HTTP, and API wrappers exist for Ruby, Java, and Python); a simplified sketch of this flow follows below. I understand the OSM tags used for routing and have explored various other options such as OSMNavigation, LibOSM, GraphServer, and the Pyroute library, as well as services like OpenRouteService and YOURS, which make the implementation entirely feasible. For text to speech, visually impaired users can use their own screen reader; however, to make the system machine independent, I intend to develop the application's own text-to-speech support using the University of Washington's open-source online screen reader, WebAnywhere (http://webanywhere.cs.washington.edu/ ), which can further be adapted to meet the needs of other applications. Since a DHTML interface is generally difficult to access for someone using a speech-output interface, the embedded XML metadata delivered by the application will be used to generate an audio-formatted representation of the content.
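- To make the routing step concrete, here is a minimal, heavily simplified sketch of how the application might fetch a route over HTTP and turn it into screen-reader-friendly text. The endpoint URL, query parameters, and JSON response shape are illustrative assumptions only; the actual CloudMade / OpenRouteService / YOURS APIs differ and would be used through their own wrappers.

  # Hypothetical sketch: request a walking route from a routing web service and
  # print numbered plain-text instructions that a screen reader can read aloud.
  # ROUTE_URL and the JSON fields ("steps", "instruction", "distance_m") are
  # assumptions, not the real CloudMade/OpenRouteService API.
  import json
  import urllib.parse
  import urllib.request

  ROUTE_URL = "https://example.org/route"   # placeholder endpoint

  def fetch_text_directions(source, destination):
      """Return a list of numbered, human-readable route instructions."""
      query = urllib.parse.urlencode(
          {"from": source, "to": destination, "mode": "foot"})
      with urllib.request.urlopen(ROUTE_URL + "?" + query) as response:
          route = json.loads(response.read().decode("utf-8"))
      # Assumed response: {"steps": [{"instruction": "...", "distance_m": 120}, ...]}
      return ["%d. %s (%d m)" % (i + 1, step["instruction"], step["distance_m"])
              for i, step in enumerate(route["steps"])]

  for line in fetch_text_directions("Hazratganj, Lucknow", "Charbagh Railway Station, Lucknow"):
      print(line)   # this plain-text output is what the screen reader reads aloud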
- The application will be fully keyboard accessible, with various shortcut options and a very easy-to-understand interface, so that people with cognitive disabilities can also use the service with ease.
- The application will benefit millions of children studying in mainstream as well as blind schools, including schools which cannot afford costly computers, software, and hardware.
- 3 What is the timeline for development of your project?
- Since I am graduating in mid-May '09, I can devote all my time until August while working from home, after which I expect a joining date in September from the companies that have offered me jobs (Tata Consultancy Services and Accenture).
- I break the timeline down as follows:
- 1- Community bonding period (April 21 to May 22): study the details of the technologies involved in the project.
- 2- May 23 to May 31: create and design the workflow of the project and begin coding.
- 3- June 1 to July 5: complete the directions application for blind users; at this stage it is still machine dependent and needs a screen reader. A cushion period of a few days before the mid-term evaluation deadline.
- 4- July 14 to July 28: implement text to speech for the application, fully customized for it, thereby making it machine independent.
- 5- July 29 to Aug 5: add keyboard accessibility features.
- 6- Aug 6 to Aug 13: testing and documentation, with a few days of cushion time. End of project.
- 4 Convince us, in 5-15 sentences, that you will be able to successfully complete your project in the timeline you have described.
- I assure completion of this project in the timeline described; in my past experience with OLPC, I finished my project a week ahead of schedule. I have successfully worked under strict deadlines in various coding/development competitions, such as AOL's (run by TopCoder) and the IBM Great Minds Challenge '07.
- I am a quick learner and a comfortable team worker. I believe in self-study; almost all my technical skills come from my passion for learning things on my own, and with my commitment to this project, I won't leave it undone.
You and the community
- 1 If your project is successfully completed, what will its impact be on the Sugar Labs community?
- From me: in Sugar Labs' initiative to bring education to every child, this project fills the gap for children who are blind. The text-to-speech screen reader developed during the project will help not only the Maps application but various other applications on the XO, thereby helping Sugar Labs in its mission to bring education to all.
- From my mentor: the screen reader project aims to empower minorities through an interface that gives them the full functionality of diverse applications. We encourage widening the concept of inclusiveness in the Sugar Labs community, closing not only the gap between children's economic situations but also the gap for those with disabilities. ( nestorgr<at>gmail<dot>com )
- 2 Sugar Labs will be working to set up a small (5-30 unit) Sugar pilot near each student project that is accepted to GSoC so that you can immediately see how your work affects children in a deployment. We will make arrangements to either supply or find all the equipment needed. Do you have any ideas on where you would like your deployment to be, who you would like to be involved, and how we can help you and the community in your area begin it?
- I am currently working on Internet accessibility issues (for visually impaired people) for ASSETS '09 ( http://www.sigaccess.org/assets09/ , mentored by Dr. Shari Trewin from IBM Research, who is also General Chair of ASSETS '09) and for the Microsoft Imagine Cup's Accessibility Awards.
- For this, I have developed contacts with the authorities of the State Blind School, Lucknow, India, and a professor from the University of Lucknow, India (Dr. U. N. Sinha, who is himself blind and a member of AccessIndia - http://accessindia.org.in/mailman/listinfo/accessindia_accessindia.org.in ; he has offered to help me anytime, for any cause for the blind).
- Hence, the work can be started at the State Blind School (2-3 km from my house) and extended nationwide with the help of the AccessIndia organization.
- 3 What will you do if you get stuck on your project and your mentor isn't around?
- I am a quick learner and a comfortable team worker. I believe in self-study; almost all my technical skills come from my passion for learning things on my own. In case my mentor stops responding, I will report the issue on the mailing list, but I won't let the project suffer; instead I will spend more time on IRC, the mailing lists, and other online resources, and make sure the project finishes well in time. That said, I have known my mentor for more than a year now and have been in touch throughout.
- 4 How do you propose you will be keeping the community informed of your progress and any problems or questions you might have over the course of the project?
- Weekly updates on the wiki, mailing lists, and IRC.
Miscellaneous
- 1 Screenshot .
- http://vaish.rajan.googlepages.com/RajanVaishScreenshotXO.png
- It's my XO screenshot; I could not get the Sugar environment due to temporary download issues (which I discussed on IRC).
- 2 T-Shirt Size .
- XL
- 3 Describe a great learning experience you had as a child.
- I've had very few good teachers in school; generally there were teachers who never encouraged questions from children. So, to quench my thirst for knowledge, I usually spent time in the library; I never used the Internet until after finishing high school. I remember my struggle to create a project on the Solar System in grade 6, when I had no Internet and no books with coloured pictures, so with my dad's help I got an appointment at the library of the National Planetarium, Lucknow, and got my project done, for which, no wonder, I got an A+ :)
- 4 Is there anything else we should have asked you or anything else that we should know that might make us like you or your project more?
- No, nothing as such. I just wanted to share an experience from my visit to the State Blind School, Lucknow: I felt the children there are much smarter than sighted kids. I saw them operating ordinary cell phones as fast as we do and talking as confidently as we do; they are hungry for knowledge. In fact, the inspiration for this project came from there, after a kid told me one of his problems: he wants to know directions on maps so that he can be more self-dependent. This project will surely help millions of kids like him.