BugSquad/Bug Triage
One key process in any software development project is the triaging of bugs and new features. The Sugar project has participated in the triage process used by the OLPC team (see http://dev.laptop.org). The tool, Trac, seems more than up to the task. The challenge is to define unambiguous category tags so that the job of triage remains containable and so that it is clear what is expected of developers.
Undoubtedly, a key step is to define a clear roadmap and decision-making process.
Currently, the OLPC Trac system uses the following property categories and actions:
- Type
- defect
- enhancement
- task
- Milestone
- Update.1
- Update.1.1
- xs-03
- Update.2
- Gen-2
- Future release
- Future features
- Never Assigned
- Opportunity
- Retriage, please
- Version
- a seemingly arbitrary list
- Priority
- blocker
- high
- med
- low
- Component
- a long list...
- CC
- Keywords
- Verified
- Blocking
- Blocked by
- Action
- leave as new
- resolve as
- fixed
- invalid
- won't fix
- duplicate
- works for me
- reassign to
- accept
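The properties above amount to a simple ticket schema. A minimal sketch in Python, purely illustrative (the class and field names are assumptions mirroring the Trac properties listed above, not Trac's actual data model):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative vocabularies taken from the property lists above.
TYPES = {"defect", "enhancement", "task"}
PRIORITIES = {"blocker", "high", "med", "low"}
RESOLUTIONS = {"fixed", "invalid", "won't fix", "duplicate", "works for me"}


@dataclass
class Ticket:
    """One ticket, with the properties enumerated above."""
    summary: str
    type: str = "defect"
    milestone: Optional[str] = None
    version: Optional[str] = None
    priority: str = "med"
    component: Optional[str] = None
    cc: List[str] = field(default_factory=list)
    keywords: List[str] = field(default_factory=list)
    verified: bool = False
    blocking: List[int] = field(default_factory=list)
    blocked_by: List[int] = field(default_factory=list)
    resolution: Optional[str] = None

    def resolve(self, resolution: str) -> None:
        # The "resolve as" action: record one of the resolutions above.
        if resolution not in RESOLUTIONS:
            raise ValueError(f"unknown resolution: {resolution}")
        self.resolution = resolution


t = Ticket(summary="Journal fails to start", type="defect", priority="high")
t.resolve("fixed")
```

Modeling the vocabularies as explicit sets makes disagreements about tags visible: any value outside the agreed list is rejected rather than silently accumulating.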
Certainly, areas of potential improvement include defining appropriate tags for Milestones and Components, and agreeing on a list of keywords whose meanings we all accept.
Questions
Would it make sense to have a Sugar milestone (e.g., Sugar 0.82) that is distinct from the OLPC milestones? Or would it make more sense to have a Sugar version that maps to an OLPC milestone?
Would it make sense to consistently add keywords that map to the Sugar modules, or should these be components?
- sugar
- sugar-base
- sugar-datastore
- sugar-presence-service
- sugar-toolkit
- sugar-artwork
- sugar-activity
- journal-activity
- chat-activity
- et alia
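If keywords were used for this mapping, a small triage script could check them against the module list above. A sketch, assuming the list above is the canonical set (the helper function is hypothetical, not part of Trac):

```python
# Hypothetical canonical set of Sugar module keywords, from the list above.
SUGAR_MODULES = {
    "sugar", "sugar-base", "sugar-datastore", "sugar-presence-service",
    "sugar-toolkit", "sugar-artwork", "sugar-activity",
    "journal-activity", "chat-activity",
}


def module_keywords(keywords):
    """Return only those keywords that name a known Sugar module,
    normalized to lowercase and sorted for stable comparison."""
    return sorted(k for k in (w.strip().lower() for w in keywords)
                  if k in SUGAR_MODULES)


print(module_keywords(["Sugar-Toolkit", "crash", "journal-activity"]))
# ['journal-activity', 'sugar-toolkit']
```

A check like this would let triagers spot misspelled or unagreed keywords at a glance, which is the practical difference between free-form keywords and fixed components.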
The assignment of priorities is the most difficult task. We need to come up with definitions and a process. A first pass:
- Blocker: catastrophic failure—Sugar will not run or user experience severely impaired (new features would rarely, if ever, fall into this category)
- High: important to Sugar user experience—either in terms of performance or usability (these would typically be coupled with the "task" ticket type)
- Med: enhancements to non-core features (or enhancements that impact individual activities)
- Low: odds and ends
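The first-pass definitions above can be read as an ordered decision rule. A sketch under those definitions (the boolean predicates are assumptions introduced for illustration; they are not Trac fields):

```python
def suggest_priority(sugar_wont_run: bool,
                     core_experience_impact: bool,
                     non_core_enhancement: bool) -> str:
    """Apply the first-pass priority definitions above, in order."""
    if sugar_wont_run:
        return "blocker"   # catastrophic failure or severely impaired UX
    if core_experience_impact:
        return "high"      # core Sugar performance or usability
    if non_core_enhancement:
        return "med"       # non-core features, or individual activities
    return "low"           # odds and ends


print(suggest_priority(False, True, False))
# high
```

Writing the rules down this way, even informally, forces the ordering question ("which definition wins when two apply?") that a prose list leaves open.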
Would it be possible to assign teams to each ticket, where we identify up front someone who agrees to verify a ticket, and someone who agrees to test a fix? Maybe we can accumulate a list of volunteers who'd be willing to be assigned in a work-wheel-like system?