[kforge-dev] Proposal: online KForge planning game

John Bywater john.bywater at appropriatesoftwarefoundation.org
Mon Aug 7 19:03:27 UTC 2006


Rufus Pollock wrote:

> Both in my capacity as 'customer' and my role as 'developer' I concur 
> entirely with these suggestions and would be very happy to follow this 
> plan in relation to the current iteration.


Excellent!

> Further comments follow below.

Thanks.

<snip>

>
>>
>> 2. Generate and publish the dates and serial numbers of iterations 
>> well into the future, with a notice that a planning game will be held 
>> at the start of each iteration, and publish a description of the 
>> planning game.
>
>
> This is an important point and one you have made before. My suggestion 
> would be to have month-long iterations with the planning game fixed to 
> start on a regular day close to the start of that month (e.g. the 
> first Thursday of every month).


Thinking about this a little more, I suspect the variability of
calendar months means a more abstract scheme would be preferable. We
could instead work with the weeks of the year (i.e. week 1 to week
52) and run twelve four-week iterations per year, which leaves 4
weeks over. With 3 iterations per release, there would be 4 releases
per year, so we could spend one of the spare weeks as a "reading"
week at the end of each release. The release cycle would then be
thirteen weeks (3 x 4 + 1), and there would be four cycles each year.

I think that would make a nice annual layout. I'll try to write some 
Python for this.
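
A first rough sketch (untested; the constants just encode the numbers
above, and labels like "R1.I2" are only my placeholder naming):

def annual_layout(weeks_per_iteration=4, iterations_per_release=3,
                  releases_per_year=4):
    """Yield (label, start_week, end_week) tuples for one year."""
    week = 1
    for release in range(1, releases_per_year + 1):
        for iteration in range(1, iterations_per_release + 1):
            end = week + weeks_per_iteration - 1
            yield ('R%d.I%d' % (release, iteration), week, end)
            week = end + 1
        # One "reading" week closes each thirteen-week release cycle.
        yield ('R%d reading week' % release, week, week)
        week += 1

for label, start, end in annual_layout():
    print(label, start, end)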

<snip>

>
>> 3. Split iteration planning game into four distinct parts:
>>
>> (i) at any time during the project, the customer writes and publishes 
>> user stories to an issue tracker, the customer estimates the
>> benefit of each story and publishes the estimate to the issue
>> tracker, and the customer makes sure there is always a good "head" of
>> unimplemented stories;
>
>
> Implementation detail: we are currently using trac for project 
> management and suggest that all user stories (use cases?) be entered as 
> enhancements in the tracker and that the value/benefit be set by the 
> priority (1=lowest, 5=highest)


Yes, I think that can work. We could rename the "enhancement" type to 
"use case", but "enhancement" works.

There is also a "task" type, so we can use that for tasks.
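
To make the convention concrete, a story entered that way might look
roughly like this (a plain dict for illustration only, not Trac's
internal model; the summary and values are made up):

story = {
    'type': 'enhancement',   # i.e. a user story / use case
    'summary': 'Member can reset a forgotten password',
    'priority': 4,           # value/benefit: 1 = lowest, 5 = highest
    'milestone': '',         # set when selected for an iteration
}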

>
> Aside: i am not sure what the semantic difference between severity and 
> priority is for trac tickets (severity seems oriented to bugs while 
> priority seems oriented to tasks and improvements)


Yes, I don't really know either. I guess priority is more of a value 
judgement laid over the severity. E.g. however severely Windoze is 
broken, it should always have a low priority.

>
>> (ii) at the start of each iteration, the customer chooses as many 
>> user stories as were completed on average during the last few iterations, 
>> marks these within the issue tracker (e.g. against a Trac milestone), 
>> when done, the customer notifies the mailing list that this decision 
>> has been made;
>>
>> AND THEN, EITHER
>>
>> (iii) when the selected story list notification arrives at the 
>> developers' inbox, the developers continue with the iteration by 
>> implementing the user stories until available working time expires.
>>
>> OR (if more analysis and planning accuracy is desired for the price 
>> of less implementation time, or if there are more than a very small 
>> number of developers)
>>
>> (iii) when the selected story list notification arrives at the 
>> developers' inbox, the developers self-select stories (somehow), and 
>> then break the stories roughly down into tasks, estimate each task, 
>> add task estimates up to create story estimates, and create a total 
>> estimate for the selected story list;
>
>
> Implementation detail: Do costs go on each story or on the milestone? 
> If so, how will we enter these costs? We could (ab)use the severity 
> option on trac enhancements for this purpose.


They go on the tasks, and are sub-totalled by user story. The
planning makes sure the total for the selected user stories matches
the allowance for the iteration, which depends on the number and
skill of the developers.

Using severity could work if we want to carry on with coarse-grained
estimation (where each user story takes 1-3 weeks [Kent Beck]). The
six severity options (blocker, critical, major, normal, minor,
trivial) could be used as difficulty values -- severity can be read
as the severity of resolving the issue.
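
One possible mapping onto that 1-3 week range (the exact assignments
below are only my guess at how we'd calibrate it):

DIFFICULTY_WEEKS = {
    'trivial': 1, 'minor': 1,
    'normal': 2, 'major': 2,
    'critical': 3, 'blocker': 3,
}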

However, task-based estimation deals in hours, and we want to total
them quickly, maintain a history of estimate/actual/value for future
planning, and be able to sort stories by such things.
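
A sketch of what that bookkeeping amounts to (the data layout, story
names and numbers are invented for illustration):

tasks = [
    # (story, task, estimated hours)
    ('password reset', 'design email workflow', 4),
    ('password reset', 'implement reset handler', 8),
    ('project search', 'index project metadata', 6),
]

def story_subtotals(tasks):
    """Sub-total the task estimates by user story."""
    totals = {}
    for story, task, hours in tasks:
        totals[story] = totals.get(story, 0) + hours
    return totals

def fits_allowance(tasks, allowance_hours):
    """Check the selection against the iteration's allowance (which
    depends on the number and skill of the developers)."""
    return sum(hours for _, _, hours in tasks) <= allowance_hours

# {'password reset': 12, 'project search': 6}
print(story_subtotals(tasks))
print(fits_allowance(tasks, allowance_hours=40))   # True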

I expected there to be some extension to Trac to support agile
software projects, but I haven't found one yet.

Let's try to work out Web-based support for fine-grained planning
before we spend lots of time fixing something that, at the moment,
isn't broken (as far as I can tell).

<snip>


>
>> 5. Regular method retrospectives to provide a place for discussion of 
>> the process. Perhaps we could post a standard invitation at the end 
>> of each iteration to provide feedback on the development process as 
>> it was during the last iteration?
>
>
> Would it perhaps make sense to run this at the start of the *next* 
> iteration and integrate it with that iteration's planning game?


As it is, we should be prepared to improve the process all the time,
but with "distributed agile" it may also help to actively invite
comments. I reckon the best time to invite them is at the end of the
iteration, whilst the experience is fresh.

In general, we should guard against loading up the planning game
with anything other than planning the iteration. Let's hold
retrospectives at the end of the cycle. That is fairly coincident
with the start of the next iteration, but the retrospective should
come first in the passage from the end of one iteration to the start
of the next.

Also, we could hold a retrospective just once per release, i.e. every
three iterations, coincident with the release. I think that makes
more sense, particularly to start with.

>
>> 6. (Optional extra) We could maintain a release plan by saying that 
>> three iterations make a release, using the latest average 
>> stories-per-iteration count to estimate how many user stories will be 
>> implemented in each release, and then listing out (in order of 
>> estimated value) the unimplemented user stories against each release. 
>> The customer could update the release plan each release, or every 
>> three iterations.
>
>
> Think this is a good idea and would agree with release = 3 iterations. 
> Suggest update release plan every 3 iterations.


Yes, now we're generating a list of activities to run each release. ;-)
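
Something like this would generate the plan (untested; the velocity
arithmetic, story names and history below are invented for
illustration):

def plan_releases(stories_by_value, completed_per_iteration,
                  iterations_per_release=3):
    """Allocate unimplemented stories (highest value first) to
    releases, using the recent average stories-per-iteration as the
    velocity."""
    velocity = float(sum(completed_per_iteration)) / len(completed_per_iteration)
    per_release = max(1, int(round(velocity * iterations_per_release)))
    plan = []
    remaining = list(stories_by_value)
    release = 1
    while remaining:
        plan.append((release, remaining[:per_release]))
        remaining = remaining[per_release:]
        release += 1
    return plan

history = [3, 4, 2]   # stories completed in the last three iterations
stories = ['story %d' % n for n in range(1, 19)]
for release, selected in plan_releases(stories, history):
    print(release, selected)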

J.