Friday, 14 December 2007

Meeting with the Ableton CTO.

Our team had the opportunity to meet the CTO of Ableton and get some general advice. He was, of course, a really clever guy who knows a great deal about developing software. We got to discuss testing and quality with him, and they have it all: a test department, test frameworks, in-house developed acceptance tests, etc. They also have a small group of dedicated users who receive prereleases to evaluate. This has proved valuable, not only for discovering bugs and seeing what needs to be changed, but also because these users tend to hype the next version of Ableton's software.

Something interesting that came up was that before they had a separate test department, they had one guy who always managed to find the most bugs in their software. He applied the infamous 'monkey test' strategy, i.e. randomly click on everything as much as possible. Surprisingly, this proved to be the most successful way of finding bugs, and he thought it could be a good idea to program some kind of 'bot' that randomly presses everything. But if I were to do that, I would have to find a way of recording what the bot has done; otherwise it would just come up with a bug and leave us with no clue how to reproduce it.

Let's stop and think about this. This would be really interesting to try out. The way I imagine doing this is to have a couple of conditions that always have to be satisfied, and then let the bot randomly visit the site and check that these conditions are met. One example of such a condition is the accessibility of tracks. We provide the ability to upload and share tracks on our site, and the user can choose which persons to share their tracks with. This is of course quite error-prone, and it would be very bad for our reputation if we failed to keep private tracks private.
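To make the idea concrete, here's a minimal Ruby sketch of such a bot. Everything in it is hypothetical (a toy site model, made-up users "alice" and "bob", invented method names — not our actual app): the bot performs random actions against the model, logs each one so a failure is reproducible, and checks the sharing invariant after every step.

```ruby
# Toy model of a track: who owns it and who it is shared with. Hypothetical,
# just for illustrating the invariant-checking idea.
Track = Struct.new(:name, :owner, :shared_with)

class ToySite
  attr_reader :tracks

  def initialize
    @tracks = []
  end

  # The actions the bot can randomly "click" on.
  def actions
    [:upload, :share, :unshare]
  end

  def perform(action, rng)
    case action
    when :upload
      @tracks << Track.new("track#{@tracks.size}", "alice", [])
    when :share
      t = @tracks.sample(random: rng)
      t.shared_with |= ["bob"] if t
    when :unshare
      t = @tracks.sample(random: rng)
      t.shared_with.delete("bob") if t
    end
  end

  # What the site would show a user: their own tracks plus tracks shared with them.
  def tracks_visible_to(user)
    @tracks.select { |t| t.owner == user || t.shared_with.include?(user) }
  end
end

# The invariant: "bob" must never see a track that isn't his own or
# explicitly shared with him.
def invariant_holds?(site)
  site.tracks_visible_to("bob").all? do |t|
    t.owner == "bob" || t.shared_with.include?("bob")
  end
end

def monkey_test(steps: 1000, seed: 42)
  rng  = Random.new(seed)   # seeded, so any failure is reproducible
  site = ToySite.new
  log  = []                 # the action trace: this is what we'd replay
  steps.times do
    action = site.actions.sample(random: rng)
    site.perform(action, rng)
    log << action
    return log unless invariant_holds?(site)  # bug found: return the trace
  end
  nil  # no violation after `steps` random actions
end
```

In a real version the toy model would be replaced by driving the actual site, but the shape stays the same: seeded randomness plus an invariant check after every action, returning the action log the moment the invariant breaks.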

The next problem is to record what has happened. Ableton has solved this by silently recording every event in their program. If the program then crashes, they can just replay the events to reproduce the bug and see what went wrong. Nowadays this is very helpful, because they can let users send them their crashing session and simply replay it to find the bug.
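The record-and-replay idea itself is small. Here's a sketch of my own (I have no idea how Ableton actually implements theirs): a recorder sits in front of the application, appends every event to a log before forwarding it, and the same log can later be fed to a fresh instance to reproduce the run.

```ruby
# Sketch of event recording and replay. The Recorder and the toy Sequencer
# "application" are both hypothetical.
class Recorder
  attr_reader :log

  def initialize(app)
    @app = app
    @log = []
  end

  # Record the event, then forward it to the application.
  def dispatch(event, *args)
    @log << [event, args]
    @app.public_send(event, *args)
  end

  # Replay a previously recorded log against a fresh application instance,
  # reproducing the original run step by step.
  def self.replay(log, app)
    log.each { |event, args| app.public_send(event, *args) }
    app
  end
end

# Toy "application" whose whole state is a list of note pitches.
class Sequencer
  attr_reader :notes

  def initialize
    @notes = []
  end

  def add_note(pitch)
    @notes << pitch
  end

  def delete_note(pitch)
    @notes.delete(pitch)
  end
end

live = Recorder.new(Sequencer.new)
live.dispatch(:add_note, 60)
live.dispatch(:add_note, 64)
live.dispatch(:delete_note, 60)

copy = Recorder.replay(live.log, Sequencer.new)
# copy.notes == [64], the same state the live run ended in
```

The key property is that replay is deterministic as long as every source of input goes through the recorder; anything that bypasses it (timers, randomness, network) would have to be recorded too.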

Another part of quality that we discussed is planning. According to the Ableton CTO, one sure way of introducing bugs is to plan too many features for a release. His general idea was that it's better to have a few really good features that work very well than very many features that work OK. This might seem obvious, but in reality it's not. There are so many things you'd like to cram in there, and everybody knows it hurts to 'kill your darlings'. Then there's the fact that people tend to underestimate how long things take to develop. According to the CTO, only 10% of software projects are finished on time, and I think none are finished with all the features that were planned for them.

Oh, and another tip we got: merge often, release often.

A quick evaluation of testing frameworks: htmlunit

I had a quick look at htmlunit. It's a kind of browser emulator: it parses HTML pages for you and even has the ability to run some JavaScript. You can then 'click links', 'fill in fields' and 'press buttons' on your website by calling functions on the emulator.

The framework, which is written in Java, seems kind of 'old school' and doesn't seem to handle all the Ajax we're doing on our site. It has some support for XMLHttpRequest, but that's about it. One nice thing about it is that it doesn't provide testing methods (such as assertions), so you have to use an external testing framework for that. This lets you use the same framework across your whole application and homogenise your tests. Nice.

Because we're using Ruby on Rails to develop our site and htmlunit is written in Java, we could have some trouble integrating it and would probably have to use JRuby to get it to work. This is doable, but think about it: first we'd be using JRuby (a Ruby runtime implemented in Java) instead of Ruby, which could be the cause of numerous bugs. Then, on top of that, we'd use a browser emulator instead of a real browser, which will introduce even more bugs and glitches that, in reality, aren't there. The problem with this framework for us is that we could spend too much time 'chasing ghosts': even if a test fails, that doesn't mean we have a bug. This is of course always a problem with testing, but here it seems even worse.

Instead of this framework I stumbled upon Hpricot_forms. Hpricot is a nice library for parsing HTML/XML content, and hpricot_forms is kind of an htmlunit port for Ruby. It's really nothing fancy, but it lets you "click links" and "fill in and submit forms" in your tests. I downloaded the project and, after some trouble, including manually patching some code, I had it running and wrote some tests with it! It's nice and really a valuable addition to our test framework RSpec, but I think it would be even better combined with the Story Runner included in the next RSpec release.
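To show the core idea — without depending on the gem itself — here's a stdlib-only Ruby sketch (using REXML; this is not hpricot_forms' actual API) of what "fill in and submit a form" boils down to: parse the form out of the HTML, collect its named fields into the params hash, and that hash is what your test would then POST to the form's action.

```ruby
require 'rexml/document'

# Sketch of the idea behind form handling in a test: extract a form's
# action and its named input fields from an HTML page. Method and field
# names here are made up for illustration.
def extract_form(html)
  doc  = REXML::Document.new(html)
  form = doc.elements['//form']
  params = {}
  form.each_element('.//input') do |input|
    name = input.attributes['name']
    params[name] = input.attributes['value'] || '' if name
  end
  { action: form.attributes['action'], params: params }
end

html = <<-HTML
  <html><body>
    <form action="/tracks">
      <input type="text"   name="title"  value="demo" />
      <input type="hidden" name="public" value="0" />
      <input type="submit" value="Upload" />
    </form>
  </body></html>
HTML

form = extract_form(html)
# form[:action] is "/tracks"; form[:params] holds the named fields,
# ready to be modified and posted from a test.
```

hpricot_forms wraps this up behind a friendlier interface (and Hpricot is far more forgiving of real-world HTML than REXML), but the mechanics are the same: the "browser" in these tests is just parsing plus a params hash.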