Summer of Code/2015/git backend alex
WORK IN PROGRESS
About You
Name: Alex Herrmann
Email: alexandermherrmann@gmail.com
Wiki username: alexmherrmann (Alex Herrmann)
IRC nickname: alexmherrmann (freshly registered)
First Language: English
Location: Salt Lake City, Utah
Time: I am typically free from 10 AM MDT (4 PM UTC) to 10 PM MDT (4 AM UTC)
Open Source Experience
My experience with open source has typically been that of a consumer; most of the open source projects I have were small pet projects from early high school, before things got busy. Now that I'm attending college (at the University of Utah) my interest in open source has been re-ignited: I have a little more time and freedom to code as I please, and contributing to a project like Sugar would be extremely cool. (Our family actually has one of the original XO laptops, purchased when they were first made available; we moved a little while ago, and it's probably still out in our garage somewhere.) I chose XO as one of my organizations because, beyond being able to productively code for a summer, I have experience with the XO laptops.
My Project Proposal
Having a strong background in Git (my old testbed where all of my code went is at github.com/alexhairyman/; unfortunately it has not been contributed to in a long time, as high school and a job took over my free time), I would love to implement the Git backend for the Sugar Journal. Essentially, I want to modify the Journal's code to propagate every change as a commit to a local repository. The end goal would be to wrap Git with a UI for exploring all previous changes: each commit would contain a metadata file generated by either the datastore or the Journal source code, and perhaps a binary file containing the content too. This would update a locally stored Git repository, which would then propagate changes to any of the Git hosting sites out there (GitHub has a nice Python API to facilitate using GitHub-specific features). One way of doing it could be to store all of the binary/large data on Google Drive; the API available for Drive would make it easy to store data in an existing Drive setup. Alternatively, instead of using Git at all, having a metadata file keep track of the journal entries would allow the computer to interact only with Google, and expanding storage for Google Drive is ridiculously cheap. This would be the first discussion I have with my mentor.
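To make this concrete, below is a minimal sketch of the "every journal change is a commit" idea. It is only a sketch under assumptions: the repository location, the function name, and the metadata fields are hypothetical, as is the idea that the datastore hands over a metadata dict plus a path to the content blob; it shells out to the git command line (the real backend might use a binding instead), and the repository is assumed to already exist (created once with git init).

    import os
    import shutil
    import subprocess

    import yaml  # PyYAML, for the human-readable metadata format proposed in the timeline below

    # Hypothetical location of the local journal repository (already "git init"-ed)
    JOURNAL_REPO = os.path.expanduser("~/.sugar/journal-git")

    def commit_journal_entry(entry_uid, metadata, content_path=None):
        """Write one journal entry's metadata (and optional content blob), then commit it."""
        entry_dir = os.path.join(JOURNAL_REPO, entry_uid)
        if not os.path.isdir(entry_dir):
            os.makedirs(entry_dir)

        # One small, human-readable metadata file per entry
        with open(os.path.join(entry_dir, "metadata.yaml"), "w") as f:
            yaml.safe_dump(metadata, f, default_flow_style=False)

        # Keep the binary/large content next to its metadata, if the entry has any
        if content_path is not None:
            shutil.copy(content_path, os.path.join(entry_dir, "content.bin"))

        # Stage just this entry and record the change as one commit
        subprocess.check_call(["git", "add", entry_uid], cwd=JOURNAL_REPO)
        subprocess.check_call(
            ["git", "commit", "-m",
             "journal: update %s (%s)" % (entry_uid, metadata.get("title", "untitled"))],
            cwd=JOURNAL_REPO)

Wrapping a call like this around the datastore's save path would give the commit history that the explorer UI later walks; whether the content blob itself belongs in the repository or somewhere like Google Drive is exactly the first discussion mentioned above.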
Rough Timeline
- The first few weeks (no later than May 15th) will be when I explore, in depth, the Journal and datastore and talk with my mentor about the best possible way of "gitifying" the events and journal entries
- The next few weeks (mid May to the beginning of June) will be spent modifying these to produce an output metadata file each time a journal entry is created, associated with some kind of binary or compressed content. This will be pushed to the local Git repository and the file system. A small, human-readable format is good; I'm thinking YAML could be especially suitable (as in the sketch above).
- At the beginning of June I hope to start stress testing the system as no more than a local repository
- Halfway through June I would begin the remote propagation part, where content hosts such as GitHub, Google Drive, and Imgur come into play. This is where the meat will be: keeping track of what has been pushed and how to access it. I am aiming for Google Drive first, getting hooked into its API with Python to store the binary data, and then either having one large metadata file or doing it the Git way (a rough sketch of the Drive upload follows this list).
- This will take a while, interfacing smoothly with the APIs of the other services. I estimate up to late July
- The final weeks of GSoC in August will be the testing and integration phase, where hopefully community members can get their hands on it, test it out, and see how well it fares in the "real world"
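As referenced in the timeline, here is a rough sketch of the Google Drive half, using the google-api-python-client package. It assumes OAuth credentials have already been obtained (that flow is left out), the file naming and appProperties tag are placeholders I made up, and recording the returned Drive file id in the entry's metadata is just one of the bookkeeping options to discuss with my mentor.

    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    def upload_entry_content(creds, entry_uid, content_path):
        """Push one journal entry's content blob to Google Drive, returning the Drive file id."""
        drive = build("drive", "v3", credentials=creds)

        body = {
            "name": "%s.bin" % entry_uid,
            # Tag the file so the backend can find it again without a separate index
            "appProperties": {"journal_entry": entry_uid},
        }
        media = MediaFileUpload(content_path, mimetype="application/octet-stream")

        created = drive.files().create(body=body, media_body=media, fields="id").execute()
        return created["id"]  # record this in the entry's metadata file

Keeping that returned id inside the metadata file would let the Git history stay small while the heavy content lives on Drive.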
Impact on the Community
Me: Having the Journal propagated to the cloud will allow someone to switch between devices. Allowing someone to make changes at both school and home will make it easier to switch between machines and eventually upgrade to bigger hardware. It will essentially foster the move from the Sugar environment to Windows, OS X, Linux, etc. Besides that, hardware fails, and being able to rebound from that is a major benefit.
Walter: Above and beyond the expanded functionality we have the opportunity to introduce our users to some of the core concepts behind modern FOSS development. Win/win.
Miscellaneous

[Screenshot: AMH 2015 dev environment]
Experience as a child:
My dad has always pushed for me to do GSoC, not because he was an overbearing father, but because he saw my potential. I started programming in middle school and have always just been so blown away by the limitless possibilities of code.