Thursday, October 22, 2009

Automation of Burn-up and Burn-down charts using GScript and Entrance

I have always found burn-up and burn-down charts very informative and a great fit for iterative, story-based development. On every project I work on, I try to figure out different ways to generate burn-up and burn-down charts.

Two months ago, I took on the job of putting Platform on Sprints. After some consideration, I decided to follow the setup that I have for the AF team, creating the stories in the form of JIRA issues. However, the chart generation that I had for the AF team was still semi-manual, which meant it took a couple of minutes to download the data and a couple of minutes to update the stats every morning. The worst part was that when I got busy or sick, I would forget.

So my first action item was to figure out how to generate the same kind of charts with the push of a button. The idea seemed easy enough:

  1. Figure out the search criteria to retrieve all the JIRA issues of the backlog.
  2. Count the issues that are in different states.
  3. Update the data with the counts, and check it into Perforce.
  4. Refresh the chart with the updated data.
Numbers one and two were actually not that hard, because Guidewire GScript has nice web services support. With a few tries, I was able to count the beans.

Here is an example of the data generated. I think you get the idea just looking at it.
Date,Day,Closed,Deferral Requested,In QA,Open Stories,Open New Features,Open Bugs
10/09/2009,41,55,0,1,13,7,40
10/12/2009,42,55,0,1,14,7,40
10/13/2009,43,56,0,0,14,7,40
10/14/2009,44,56,0,0,16,7,41
10/15/2009,45,56,0,0,21,8,42
10/16/2009,46,58,0,1,19,8,42
10/19/2009,47,58,0,2,28,8,42
10/20/2009,48,58,0,6,26,8,42
10/21/2009,49,58,0,6,26,8,42
10/22/2009,50,58,0,7,25,8,44
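
To give a flavor of step two: the counting boils down to bucketing each issue by status and type and appending one row per day in the column order shown above. The actual script is GScript talking to JIRA's web services; the rough Java sketch below is just an illustration of the idea, with a hypothetical Issue holder and assumed status/type strings rather than the real JIRA fields.

import java.io.FileWriter;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;

// Rough Java sketch of step two (the real script is GScript). "Issue" is a
// hypothetical stand-in for whatever the JIRA search returns; only the column
// order matches the CSV above, and the status/type strings are assumptions.
public class SprintStats {

    static class Issue {
        final String status;   // e.g. "Closed", "In QA", or an open status
        final String type;     // e.g. "Story", "New Feature", "Bug"
        Issue(String status, String type) { this.status = status; this.type = type; }
    }

    static String csvLine(List<Issue> backlog, int dayOfSprint) {
        int closed = 0, deferral = 0, inQa = 0, stories = 0, features = 0, bugs = 0;
        for (Issue issue : backlog) {
            if ("Closed".equals(issue.status)) closed++;
            else if ("Deferral Requested".equals(issue.status)) deferral++;
            else if ("In QA".equals(issue.status)) inQa++;
            else if ("Story".equals(issue.type)) stories++;        // still open
            else if ("New Feature".equals(issue.type)) features++; // still open
            else bugs++;                                           // open bugs
        }
        String date = new SimpleDateFormat("MM/dd/yyyy").format(new Date());
        return date + "," + dayOfSprint + "," + closed + "," + deferral + ","
                + inQa + "," + stories + "," + features + "," + bugs;
    }

    static void append(String csvFile, String line) throws IOException {
        try (FileWriter out = new FileWriter(csvFile, true)) {     // append mode
            out.write(line + System.lineSeparator());
        }
    }
}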


Number three took less time but required a bit of research, because the Perforce Java library's API is not exactly straightforward.
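
Since the Perforce part was the fiddly one, here is a deliberately simplified sketch of what the check-in amounts to. It is not the code I used with the Perforce Java library; it just shells out to the p4 command line (assuming p4 is on the PATH and a client workspace is already configured), which is enough to show the edit-then-submit shape of step three.

import java.io.IOException;

// Simplified sketch of step three using the p4 command line instead of the
// Perforce Java library. Assumes p4 is on the PATH and the client workspace
// is already set up.
public class CheckInStats {

    static void p4(String... command) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command).inheritIO().start();
        if (process.waitFor() != 0) {
            throw new IOException("Command failed: " + String.join(" ", command));
        }
    }

    public static void main(String[] args) throws Exception {
        String csvFile = args[0];                                  // the sprint data file
        p4("p4", "edit", csvFile);                                 // open the file for edit
        // ... append today's counts to the file here ...
        p4("p4", "submit", "-d", "Daily sprint stats", csvFile);   // check it in
    }
}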

It took me a while to figure out how to do the last one. After looking into JFreeChart and the Google Chart API, I eventually turned to my dear friend Tod Landis, who is also my partner at Entrance, and he quickly drafted an Entrance script for me. Based on it, I was able to write, within a few hours, a template that can be used for all the teams.

PLOT
very light yellow area,
light yellow filled circles and line,
DATALABELS
very light orange area,
light orange filled circles and line,
very light red area,
light red filled circles and line,
very light blue area,
light blue filled circles and line,
very light gray area,
dark gray filled circles and line,
very light green area,
dark green filled circles and line,
all AXISLABELS
WITH
ZEROBASED
TITLE "Sprint"
TITLE X "Days"
TITLE Y "Points"
SCALE Y 0 100 0
CLIP
legend
gridlines
collar
no sides
SELECT
`Open Bugs`,
`Open Bugs`,
date,
`Open New Features`,
`Open New Features`,
`Open Stories`,
`Open Stories`,
`In QA`,
`In QA`,
`Deferral Requested`,
`Deferral Requested`,
`Closed`,
`Closed`,
day
from report;

Please note that this is the final PLOT script; other SQL statements run before it to import the data into the MySQL database, sum up the data to produce a stacked chart, and even out the labels.

And I now have this chart generated automatically every morning with the help of a Windows scheduled task.

Wednesday, June 10, 2009

Cotta Asserts vs FEST Asserts

Background

Cotta-asserts is a public release of the JUnit 4 assertion adapter that came to be as part of the Cotta implementation and my search for better semantics for assertions in tests.

FEST-Assert is a Java library that provides a fluent interface for writing assertions. Its main goal is to improve test code readability and make maintenance of tests easier.


Bottom Line

The bottom line is that these two libraries are trying to solve the same problem in different ways, so they are both viable solutions for writing assertions through a fluent interface. Even though I personally like what I have built, I would not argue very hard about using one versus the other, the same way I would never argue JUnit versus TestNG, even though all my projects have been on JUnit (and I know some people feel strongly about it).

I can also say the same thing about Java versus Ruby. When it comes to the bottom line, having a gelling team building high-quality software that makes the user happy is all I care about.

Why I like FEST asserts

Don't get me wrong, I love FEST asserts. Its API looks solid and really complete. Even now, Cotta Asserts still has some way to go to catch up; it is simply not high enough on my list, and I have not found anyone willing to contribute.

Why I like Cotta Asserts

With that said, I think it is worth writing down the reasons that I like Cotta Asserts.

First of all, Cotta Asserts is aimed at filling the gap between JUnit and Hamcrest, nothing more. It is just an API adapter that exposes the assertThat method in JUnit 4 and the matcher classes in Hamcrest. If one day this fluent API appears in a new JUnit release, I would happily retire this little side project and get back to the next item on my long list of things I would like to try as an open-source project.

Second, you can extend Cotta Asserts however you want, because all the classes are open to extension. You can extend AssertionFactory and add more methods to return other assert objects. You can make your AssertionFactory return a different StringAssert.
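
To make that concrete, here is a self-contained sketch of the extension pattern. All the names below (Money, MoneyAssert, eqCents, money) are made up for illustration, and the empty AssertionFactory is only a stand-in for the real Cotta class you would actually extend.

// Illustrative stubs only; in real code you would extend Cotta's own
// AssertionFactory instead of the stand-in defined here.
class AssertionFactory {
}

class Money {                          // your own domain type
    final long cents;
    Money(long cents) { this.cents = cents; }
}

class MoneyAssert {                    // asserts that only make sense for Money
    private final Money actual;
    MoneyAssert(Money actual) { this.actual = actual; }
    void eqCents(long expected) {
        if (actual.cents != expected) {
            throw new AssertionError("expected " + expected + " cents but got " + actual.cents);
        }
    }
}

class MyAssertionFactory extends AssertionFactory {
    MoneyAssert money(Money value) {   // one extra method, nothing else changes
        return new MoneyAssert(value);
    }
}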

For FEST-asserts, I always wonder why they made the classes final. It does not seem to serve any purpose. Then again, I never asked, so there could be some good reason behind it.

If you have been using the assertThat method in JUnit, chances are that you already have a bunch of matchers lying around, and have probably started having trouble remembering where they are. You just need to create your own AssertionFactory and assert classes to leverage them, and you don't have to throw anything away or even change anything.
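
For example, an assert object can simply delegate to Hamcrest's own MatcherAssert.assertThat, so any matcher you already wrote keeps working unchanged. The ObjectAssert class and matches method below are illustrative names of mine, not Cotta's actual code.

import org.hamcrest.Matcher;
import org.hamcrest.MatcherAssert;

// Illustrative only: funnel any existing Hamcrest matcher through your own
// assert object by delegating to Hamcrest's assertThat.
class ObjectAssert<T> {
    private final T actual;

    ObjectAssert(T actual) { this.actual = actual; }

    void matches(Matcher<? super T> matcher) {
        MatcherAssert.assertThat(actual, matcher);   // existing matchers keep working
    }
}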

With FEST-asserts, when you look at an API like this, something is just not right...

FEST-asserts requires you to add one assertThat method for each object type that you want to run asserts on, and one method for each custom condition. It can get confusing sometimes if you have different modules that each do assertions on their own objects.

With Cotta Asserts, you only need one field declaration in the super test case class, and everything flows from there. So it is really easy to set up, and really easy to customize so that each module has methods that only create asserts applicable to that module.
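
Here is a minimal sketch of that setup, reusing the illustrative stubs from the sketch above. The field name ensure matches the usage described in the assertion adapter post further down; the class names are made up.

import org.junit.Test;

// Minimal sketch of the "one field in the super test case" idea. TeamTestCase
// and MyAssertionFactory are illustrative, not Cotta's actual class names.
abstract class TeamTestCase {
    protected final MyAssertionFactory ensure = new MyAssertionFactory();
}

public class MoneyTest extends TeamTestCase {
    @Test
    public void keepsItsCents() {
        ensure.money(new Money(250)).eqCents(250);
    }
}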

So here you are, happy testing!

Friday, June 05, 2009

Touché

(This one definitely falls in "Energized Work" category)

In an email thread with my old XP mentor and long-time friend Rob Myers about coordinating the next BayXP gathering, I mentioned that I have another baby coming soon.

Rob: "... about time!"
Shane: "... are you saying that I should get a life and stop messing with this thing called 'agile'?"

Rob:
"Au contrare my friend. Living 'agile' is supposed to provide you with the opportunity to have a life beyond the office. We should coach through example! :-)"
Shane:
"..."
"..."
"Touché"
"..."
"I think this deserves a blog post"

Wednesday, June 03, 2009

JUnit Assertion Adapter

Like many others, I have been trying different assertion styles ever since I read this blog post. If you google "assertThat" you can find many hits.

With the creation of the Cotta project, I got a chance to think really hard about simple file operations and tests, to see how I could make the API as polite as possible, and I think I have found a good answer.

With the default JUnit 4 assertion, you still don't get the benefit of a statically typed language, in that you have to remember which class to use to create the matcher for a certain type.

So I created a JUnit assertion adapter that will allow you to type
ensure.that(yourValue)
and get the appropriate assertion object based on the type of the value being passed in. The returned object will have just the assertion methods that are applicable to that value.

This also brings an additional benefit. For example, you can have a list instance and call
ensure.set(list)
and it will automatically convert the list to a set and return the set assertion object.
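
To show the shape of the idea without pinning down the real API, here is a self-contained sketch: the overloaded that(...) and set(...) methods dispatch on the static type of the value, so the returned object only offers assertions that make sense for it. The chained method names (contains, eq) are illustrative assumptions, not the actual Cotta Asserts methods.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Self-contained sketch of the idea, not the real Cotta Asserts API. Each
// overload returns an assert object typed to the value passed in.
class Ensure {

    StringAssert that(String value) { return new StringAssert(value); }

    <T> SetAssert<T> set(List<T> value) { return new SetAssert<T>(new HashSet<T>(value)); }

    static class StringAssert {
        private final String actual;
        StringAssert(String actual) { this.actual = actual; }
        void contains(String part) {
            if (!actual.contains(part)) {
                throw new AssertionError("<" + actual + "> does not contain <" + part + ">");
            }
        }
    }

    static class SetAssert<T> {
        private final Set<T> actual;
        SetAssert(Set<T> actual) { this.actual = actual; }
        void eq(Set<T> expected) {
            if (!actual.equals(expected)) {
                throw new AssertionError(actual + " is not " + expected);
            }
        }
    }
}

public class EnsureDemo {
    private final Ensure ensure = new Ensure();

    public void demo() {
        ensure.that("hello world").contains("world");
        // A List goes through set(...), which converts it to a Set first.
        ensure.set(Arrays.asList("a", "b", "a"))
              .eq(new HashSet<String>(Arrays.asList("a", "b")));
    }
}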

I have just made a release under the Cotta project for general feedback.

The simple documentation page is here: http://cotta.sourceforge.net/assertions.html
and the jar can be downloaded here: https://sourceforge.net/project/showfiles.php?group_id=171037

Saturday, March 07, 2009

Phoenix First Two XP Sprints

Phoenix has finished its Sprints 5 and 6, which are the first two XP Sprints.

An XP team always goes through four stages: "forming, storming, norming, performing". The first Sprint felt like the storming stage, where we were trying to figure out the best way to get the code in without spending too much time on upfront design. At the same time, we were also getting used to pair programming.

Even though pair programming has become an old trick for me, I still feel that my pairing skills have gotten worse during the past three years of working solo. The second Sprint felt a lot better, and I am hoping to keep up this trend.

Items worth noting:

  • We modified the lava lamp setup to have a green light on when everything is good. Even though it is redundant, it has a very positive effect on us. The only thing we might need to watch out for is that someone mentioned the lamps could be a fire hazard because they get very hot by the end of the day. So we are going to turn them off at the end of the day. This is when I found out that the X10 remote controller does not work, so it is back for replacement now.
  • The lava lamps are helping us get into the habit of treating broken tests as the highest priority. Due to the nature of Phoenix, we have already had some interesting test breakages. We have had tests that only break on the server, tests that only break on Linux, and a test that hung. One interesting discovery is that each time we were forced to figure out what was wrong and fix it, our tests ended up making better sense and being more behavior-driven, even though I had been planning to settle for hacks to keep the tests passing!
  • At the beginning of the project, we chose to create just enough stories to get us through the first Sprint, then created a few more for the second Sprint. Looking back, I think that was a good choice. The kind of stories that we create now are very different from, and much better than, the earlier ones. I think that is because at the beginning, the system has literally nothing in it. It would take a very good story writer to come up with a list of stories that really fit the "INVEST" criteria. I am not saying it is impossible; I just think that two Sprints of bad stories is not a bad price to pay to get the ball rolling as early as possible and avoid a lot of hassle learning, teaching, and debating good stories versus bad stories.

Monday, February 23, 2009

Lava Lamp with CruiseControl

As we are getting the Phoenix project under way, I am trying to get it started right by introducing more XP practices. The first three things that we are trying to do are Pair Programming, Test-Driven Development, and Continuous Integration.

Actually, Guidewire has already built an internal tool, ToolsHarness, to handle continuous integration, as I have written in "Managing Tests with ToolsHarness, Individually". The only difference that I want to introduce for Phoenix project is to fix broken tests AS SOON AS POSSIBLE.

What this means is that I want the testing status of our branch to show right in our faces, without us having to launch a browser, so that we know to take action the moment a test is broken.

I talked to the developer who manages ToolsHarness, and he wrote a servlet that serves information about broken tests and test status like this picture, except in one HTTP GET. Then I set up CruiseControl (version 2.8.2) with the X10 publisher, following the setup described in the blog post "Bubble, Bubble, Build's In Trouble".

One thing about the normal lava lamp setup has always bugged me in the past: the time when the continuous integration server is in the "testing" state. When you have a broken test, the red lava lamp will be on, and you just have to remind yourself that the fix is in and the tests are running. On some projects, I have used a "project soundscape", so that when the tests finish but are still broken, you will know about it. But if you happen to step outside, you will miss it. Or if you have just come in, you have to check the browser or ask others.

So this time, I have done it a little differently, taking advantage of the fact that CruiseControl is not the process running the tests. I bought two lava lamps, one reddish and the other blue, and set them up as two independent indicators:
  • Red Lava Lamp for broken tests: When there are broken tests, it will be on, otherwise, it will be off
  • Blue Lava Lamp for testing status: When there are tests running, it will be on, otherwise it will be off
In this way, you have four states to display:
  • Neither is on: All tests pass and the tests are up-to-date
  • Blue is on and red is off: All tests pass so far, but there are tests running against newer changes
  • Blue is off and red is on (see below): You have broken tests, and no code checked in to fix it
  • Both blue and red are on (see below): You have broken tests and someone has checked in new code (hopefully to fix it)

The setup is pretty straightforward, except that the CruiseControl 2.8.2 release is missing two crucial files, "lib/win32com.dll" and "lib/javax.comm.properties", which the X10 publisher needs in order to work. That, plus my missing a tiny but crucial detail in the documentation, caused a three-hour hair-pulling experience, and that was with Jeffrey coming to the rescue over GTalk. I am going to submit a patch for the release script to include those two files, along with documentation containing the following checklist:
  • You should provide all FOUR attributes related to X10 on the x10 publisher element, so that you are aware of them and can make sure they are correct (a sample configuration follows this checklist). These four attributes are as follows:
    • "houseCode" and "deviceCode" are for X10 module configuration.

    • "port", with a value of COM1, COM2, etc., to match the port where you plug in the COM module.

    • The last one is "interfaceModel", which you should really double-check against the COM module that you have.

  • Make sure "javax.comm.properties" is in your CruiseControl lib directory (should be there after 2.8.3)
  • Make sure you copy "win32com.dll" from CruiseControl lib directory (should be there after 2.8.3) to your Java bin directory
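For reference, a publishers block in config.xml with those four attributes might look something like the sketch below. I am assuming the publisher element is spelled x10; all four values are placeholders to be replaced with your own module settings, and the interfaceModel value in particular should be double-checked against the CruiseControl documentation for the COM module you own.

<publishers>
  <!-- placeholder values: houseCode/deviceCode must match how the X10 lamp
       module is configured, port is the serial port the COM interface is
       plugged into, and interfaceModel must match your actual interface -->
  <x10 houseCode="A"
       deviceCode="1"
       port="COM1"
       interfaceModel="CM11A"/>
</publishers>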
In the end, I would like to say that I am a satisfied ci-guys customer!

Sunday, February 22, 2009

Great Article on Pair-Programming

All I can say is that this article captures EXACTLY how I feel.

http://www.nomachetejuggling.com/2009/02/21/i-love-pair-programming/

One thing to add is James Shore's Programmer Man's Theme Song (see end of the post)

Sunday, February 08, 2009

JIRA Story Wall

With the "shared dashboard" feature of JIRA, we have been experimenting with a shared dashboard that can serve as a useful virtual story wall for us. And here is one version.

AF stories are in the form of JIRA items; this way, JIRAs created by other teams for bug fixes or support can be rolled into one backlog. Creating stories in the form of JIRAs is not nearly as trivial and easy as creating stories on index cards. But once you get past that phase and get used to it, it does bring a lot of the benefits of a digital medium.

On the left, the first section shows the stories for the current Sprint with status and the person who is working on them. Each person is to finish the JIRAs assigned to him or her, before picking the ones assigned to the general bucket (AF General).

The second section shows the stories allocated for the next Sprint, grouped by assignees and components. The third section shows the full current backlog by component and priority. We use it a lot when trying to figure out what to work on next or what to push to the next release. The last one is the backlog for the next milestone.

During the Sprint, some issues will come up. The most urgent ones will be pulled into the current Sprint to be dealt with right away. The others will either be added to the next Sprint or added to the appropriate backlog. At the beginning of the Sprint, after counting the JIRAs already added to the Sprint and carrying over the ones from the past Sprint, we select more JIRAs from the backlog by looking through the components.

On the right, the first section is the list of the JIRAs that the current user is working on (In Progress). It has been pretty useful to me to come in and get started right away by looking at this short list. However, I just learned today that everybody else is just looking at the JIRAs assigned to him or her in the current Sprint.

The JIRAs in the next list are the ones that have been marked as resolved by developers but not yet verified by QA. They are sorted in the order that QA would like to process them. The QA team uses this list to pick the JIRAs to verify during the Sprint.

The last section on the right contains the JIRAs that have not been added to any backlog. This way, every JIRA gets looked at before being added to a backlog. One thing about using JIRAs as stories is that anyone can create a JIRA and assign it to your team, which means your backlog can grow without you knowing it. With this extra step of adding newly created JIRAs to the appropriate backlog, we are always aware of any new work coming our way.

Wednesday, January 21, 2009

Burn-up and Burn-down Charts

I have always thought of Sprint reporting as a major communication tool, to be used within the team as well as with the outside. It is the time for the team to take a step back, look at the project as a whole, compare notes, and make continuous improvements. It is also the time for the team to report progress and any difficulties encountered, so that the stakeholders can adjust plans and provide help if needed.

Burn-up and burn-down charts are my favorite reports, because they fit very well into the story-based iteration model of project development. Anyone who understands stories and iterations (not that they are always easy to learn) can understand these charts very easily. I also find that these charts can generate more questions and lead the team in the right direction.

Burn-down Chart

The burn-down chart is straightforward and easy to understand. It measures the burn rate of the stories in units of story points. It also makes it really easy to understand different ways of predicting the outcome of the project by projecting the future velocity of the Sprints.

All the iteration tracking tools that I have tried support this chart. This one was generated by Pivotal Tracker.

For those who use JIRA or good old story cards to track the iterations, it is not hard to produce this chart either, with the worst part being figuring out where to use which formula. The following are from two other projects, made with Microsoft Excel and a Google spreadsheet. With a customized tool, I get to explore different styles.

In the first one, the stories are divided into "must-have" and "everything else" categories and tracked at the same time. The prediction lines are shown in different colors. In the second one, the progress is shown along with the burn-down, so that in the case where the chart is actually "burning up", it shows that this is not caused by losing velocity.

Burn-up Chart

For a project with just a single coach, the burn-up chart can be a great help. It can explain a lot of concepts in story-based, iterative development, and it can help the coach recognize patterns in the development and take action to adjust the direction of the team.

I have found that the burn-up chart is always a bit harder to understand, and it might look intimidating. So if you are introducing it for the first time, you should not just paste it into a report and email it to others. It is best to show it in person and let the conversation start.

The first chart shows a project where development was fairly smooth and QA could just keep up with the stories being finished. The project requirements, on the other hand, were very volatile. The interesting thing to point out is that because the team focused on one Sprint at a time during the release, and on one story at a time during the Sprint, the dramatic scope changes did not affect development at all.

The second one is a quite typical burn-up chart, where the team discovers new cases as they go and adds that understanding to the backlog in the form of stories.

Sprint Burn-up Chart

I have also found that a burn-up chart for the Sprint is useful for figuring out what happened during the Sprint. I think this is what is called a "Sprint Signature" in the Scrum book.

A Sprint burn-up chart should be used strictly internally, because only the team that has just been through the Sprint can look at it, talk about it, and then draw conclusions. It should never be used for managerial purposes, in my humble opinion.