Wikipedia es or Wikipedia en
The general idea is to download an XML dump file (a backup containing the state of the Wikipedia pages), process it to select certain pages, and compress them into a self-contained Sugar activity. Whether or not to include the images from the wiki articles will have a large impact on the size of the activity.
You will need a computer with plenty of disk space and a working Sugar environment, either installed from packages provided by your Linux distribution or running in a virtual machine. The Wikipedia XML file is big (almost 6 GB for the Spanish Wikipedia, larger for English), and you need additional space for the temporary files generated along the way. The process also takes a lot of time, but it is automatic; you only need to check the status at the end of each stage.
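As a starting point, the dump download described above might look like the sketch below. The file name pattern and the dumps.wikimedia.org URL are assumptions based on how Wikimedia publishes its dumps; check the dump index for the current file names and sizes before running, since they change between dump runs. The actual download commands are left commented out as a dry run.

```shell
# Hedged sketch: fetch the latest pages-articles dump for one language.
WIKI=eswiki                                  # assumption: "eswiki" for Spanish, "enwiki" for English
DUMP="${WIKI}-latest-pages-articles.xml.bz2" # assumed naming convention on dumps.wikimedia.org
URL="https://dumps.wikimedia.org/${WIKI}/latest/${DUMP}"

echo "Would download: ${URL}"                # dry run; uncomment the lines below to really fetch it
# wget -c "${URL}"                           # -c resumes an interrupted download
# bunzip2 -k "${DUMP}"                       # decompressed XML is several times larger; keep the .bz2
```

The `-c` flag matters here because a multi-gigabyte download over a flaky connection will almost certainly be interrupted at least once.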