Tuesday, January 29, 2008

The Hells of Crosscheck

I know you're never supposed to refer to the title in the body of your text, but I'm doing it anyway. The hell isn't really the Crosscheck framework itself, but the act of faking a browser. It should come as no surprise that this eventually leads to problems; the question is rather how serious those problems are. In my case they seem to be quite big: I can't seem to model the events correctly. Let's tell the story from the beginning.

I decided to integrate the JavaScript testing with our existing testing process, meaning setting up tests that run on each 'build', i.e. each time a developer commits code to the main repository. Crosscheck was the framework I used to achieve this, and since it is written in Java while we're going with Ruby on Rails, it meant setting up and running Java tests from Ruby. That was really no big issue, and after a couple of hours I had it all set up. I even hooked it up with JSLint, so that all the JavaScript files run through it on each commit. It worked great, but the problems started soon after I tried to use it in the development process. Of course I had tested this before, but there was one thing I hadn't tested: event handling.
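
To give an idea of what the JSLint gate buys us, here is a made-up snippet with the kind of mistakes it flags on every commit (the function is purely illustrative):

    // A made-up snippet; JSLint complains about both lines below.
    function buildGreeting(name) {
        message = "Hello, " + name; // implied global: 'var' is missing
        return message              // missing semicolon
    }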

Sooner or later, when you test the frontend layer of a (web) application, you have to test the event handling, because in many cases the events are the main interface, the entry point to the code. Not testing that the events fire as expected would be absurd. It was therefore a major letdown to learn that Crosscheck doesn't play well with jQuery's event handling. For some reason the type of the event object doesn't get set properly, i.e. event.type has the value undefined.
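
To make the failure concrete, here is a minimal sketch of the kind of test that exposes it. The selector is made up, and assertEquals stands in for whatever assertion function your test harness provides:

    var firedType;

    // Bind a click handler with jQuery and record the event's type.
    $("#login-button").bind("click", function (event) {
        firedType = event.type; // "click" in a real browser
    });

    // Fire the event programmatically, the way a test would.
    $("#login-button").trigger("click");

    // Under Crosscheck this assertion fails: firedType is undefined.
    assertEquals("click", firedType);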

This looks like an obvious bug in the Crosscheck framework, but after a couple of hours diving into the source code (*phew*) it seems like everything is set up correctly, so only one question remains: why doesn't it work? Until this is solved, we can't use Crosscheck to any extent. No other framework seems to do the job either, because we really want the CI integration feature. If the tests don't run on each commit, they're not going to be run. That's what we think, at least...

I will definitely post when I've found a solution/workaround/whatever for this problem. But right now I've reached a dead end.

Monday, January 14, 2008

Manual Testing

So we're back from Christmas and just got a new release up and running. I've been talking with some more experienced QA people and have also been reading some articles on the subject. They all use some sort of scripts to test their things. I also read that you have to specify what your system should be able to do in order to test it effectively with automation. This, and the fact that I thought our testing procedure was a bit unorganized, was my main inspiration for making the whole team do some manual testing before this release.

I've compiled a number of script-like tests that I've put in an Excel sheet. The sheet is divided into tabs for different categories, each testing a different area of our site. We then divided those tabs between us and went testing like h*ll. The tests look like this:
No: 1
Description: Successful log in
Page (if any): /login
Prerequisites: User goes to login page, types correct password
Expected Behaviour: User is redirected to dashboard
The first thing we came across was how boring it was. A couple of tests are OK, but after a while it becomes so slow. So slow. One of the main ideas is of course to automate as much of this as possible. But not everything can be automated, and as you do manual testing you tend to test a whole lot more than just the thing you set out to test. Humans very soon start to improvise when they test, and therefore catch more bugs. We also learned that you spot a lot of things you weren't supposed to test, but that got tested along the way.
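
For what it's worth, automating something like test no. 1 above might look roughly like this. loadPage and assertEquals are hypothetical harness helpers, not part of any framework mentioned in this post:

    // Rough sketch of automating test no. 1 (successful log in).
    // loadPage and assertEquals are made-up harness helpers.
    loadPage("/login", function () {
        $("#password").val("correct-password"); // field id is made up
        $("#login-form").submit();
        // The harness would wait out the redirect, then check the URL.
        assertEquals("/dashboard", window.location.pathname);
    });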

Everybody agreed that this style of testing was way superior and that we caught a lot of bugs we definitely would have missed otherwise. But I have to warn about how time-consuming it was. I, for example, went through some tests, discovered some other, mostly related, bug, fixed it, submitted the patch, and so on. These were almost all small fixes, but all in all it took a long time. It might have been better to just note and report some of those bugs and let them slip this release.

Just a note: you can't do without manual testing. You really can't, and this more organized style of testing is needed.