Sometimes it is like a long, treacherous march through an endlessly dark and damp jungle. Whenever you think you're nearing the end, another obstacle pops up to block your way. Behind every corner is only more pain, suffering, and another corner. Every step of the way leaves you wondering whether this time it really is a death march. Will it ever end, or can it go on forever?
How many software development efforts feel like an aimless trek through an untamed forest? A haphazard stumble through hostile terrain?
The goal when we start development is to find the shortest path to getting it completed.
Wandering around aimlessly only increases the stress and makes it more difficult. Staying on course during development comes first from knowing the path, but also from following it. A map through the jungle is obviously a prerequisite, but that will only get you part of the way there; the second half, and often the more dangerous part of the journey, comes from testing.
Most developers share the same fears: that initial one about being able to get it all going, and that final one about bringing it all together. Sometimes, in a particularly rough implementation, the last few months can feel like you are stuffing straw into one side of a burlap sack, only to watch it fall out the other side, again, and again, and again. If you haven't already lost it, by the time you get around to testing you're probably pretty close. The jungle feels like it is closing in on you.
You might be a little frazzled going into testing, but you shouldn't let that be an excuse for turning off your brain. Testing, you see, is often where we lose the most and gain the least. Development may seem to drag on endlessly, but testing is when you get totally lost.
It is worse, too, when you widen your definition of 'flaws'; bugs, you see, aren't just functions that don't quite work as expected. They also include spelling mistakes, grammar problems, awkward interfaces, ugly colors, poor documentation, and anything else that ultimately distracts the users from an otherwise perfect system.
Often we try to narrow the definition as we get farther down the road, but that doesn't really help.
With an expanded view of problems, if you really counted all of the flaws and inconsistencies in most software, the bug list would be huge. Yet for the majority of bugs, the fixes don't take long. For whatever reason -- most often fatigue -- we choose to turn a blind eye to the ugly bits at the end of development, even when they are simple to fix. Fear of making things worse is usually the culprit.
The standard problem with testing is inconsistency. The test cases get attention late in the game, and they always over-test the easy sections of the code, while ignoring the complicated parts. To be honest, most testing ranges from poor to totally ineffective. And most developers don't want to admit it.
Trouble starts early with the writing of the test plans.
It is commonly known that programmers make horrible testers. They test what they find interesting, which is usually the parts of the code they did well. They skip over what they dislike, mostly the sections of code with the higher bug counts.
This results in spending more time looking for bugs in the places they are least likely to be. If, as programmers, we continually fail at testing, then by what madness does it make sense for us to write the test cases and test plans? We're only marginally better at thinking up cases for our own work than we are at actually executing them properly.
To add to it, some programmers feel that uncovering a bug is a bad thing.
They want the program to get a bug-free pass through testing. They'll even pre-check the code, using up lots of time, and check all of the tests just to ensure that there are no bugs. Fear of fixing bugs at the end seems to drive this type of behavior. If the code really were perfect, why bother to test it? Usually they burn through too much time in the simple sections trying to be perfect.
By the time testers execute the plans, they are ineffective, either because the underlying problems have already been fixed or because they really don't test anything significant. Sometimes the same test suites will even be executed multiple times in a row, as if that increases the odds of finding anything. Hardly effective.
Testing is a necessary part of the process, so we want to maximize the time spent to get the biggest effect.
In that sense, we actually want to find bugs, particularly if we shortcut the work earlier to save time for other pieces of work. We don't want to find too many bugs, and we don't want to be forced back into another round of development, but we do want to maximize the effort. Finding bugs, after all, shows that testing is actually working. Not finding bugs could be interpreted as either a good or a bad thing; more often it's a bad thing.
I was once called a defeatist for suggesting that bugs will always exist in all code, but accepting it makes it more likely that you'll handle the bugs better. There is nothing you can do to stop bugs. Accepting that some flaws will get into the release makes it possible to handle them rationally.
If you don't plan on fixing bugs after the final release, you are being too optimistic. In all my years of developing, the best I ever did was a four or five year period with just one bug in a complex production system. The secret behind this unusual success was lots of time; probably too much of it. Time that isn't generally available for most development projects.
Accepting that there is some arbitrary number of bugs left helps in making the most of limited resources. It becomes a challenge to find them as quickly as you can with the least amount of effort. Even in a well-tested system, a decent round of testing might reveal five or six. Digging deeper may reveal a couple more, but there will still be a couple that get shipped. We don't have an infinitely large amount of time, so bugs will exist. Knowing that, the only question is: which ones are the most acceptable?
Nothing is worse than a stupid bug showing up in an obvious place, so if only for appearances' sake, we always want to make sure that the main user pathways through the system are bug-free. We can broaden the search out from there depending on the time and quality constraints.
When we do find a bug, there are always two choices: we can accept it, or we can fix it. Accepting it is generally cheaper, because fixing it can reveal more bugs and require more testing. However, if you can precisely gauge the impact of the fix, then you can retest only the smallest possible area, which is always cheaper than retesting everything. The impact of a fix in a well-written program is not hard to determine.
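As a small illustration of scoping the retest, suppose the code happens to be laid out so that each module has a matching test file; then a fix confined to one module only requires re-running that file. The layout and all of the names below are hypothetical, and the sketch assumes pytest is available.

    # Hypothetical sketch: re-run only the tests that cover the changed modules.
    # Assumes a one-to-one layout where tests/test_billing.py covers billing.py;
    # this illustrates scoping the retest, it is not a prescribed structure.
    import subprocess
    import sys

    def retest_impacted(changed_modules):
        """Run just the test files matching the modules touched by the fix."""
        test_files = [f"tests/test_{name}.py" for name in changed_modules]
        result = subprocess.run([sys.executable, "-m", "pytest", *test_files])
        return result.returncode == 0

    if __name__ == "__main__":
        # A fix limited to the (hypothetical) billing module means only its
        # tests get re-run, instead of the whole suite.
        if retest_impacted(["billing"]):
            print("Impacted tests passed; no wider retest needed for this fix.")
        else:
            print("Impacted tests failed; the fix needs another look.")

The point is not the script itself but the habit it encodes: when the impact of a fix is known precisely, the retest can be kept proportionally small.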
Getting back to that previous project where we had virtually no bugs: what made it work so well was that the testers were two rather bored programmers from another project. They took up the challenge of finding the flaws in the system, and they did so with enthusiasm. The type of enthusiasm that the original programmers -- after months and months of pounding out code as fast as they could -- no longer had the ability to show.
The secret to great testing that I learned from that experience lies in handing off the testing to someone with fresh eyes. To really leverage that independence, the testers themselves need to work out the test suite from first principles. They must do it directly from the design documentation, because the programmers are tainted. The testers should be able to make certain assumptions about the type of system that fits the documented problems and solutions. Not only does this supply good system testing, it also validates that the design matches the system.
The time for independent testing is marginally longer, but if the results come back as several different sets of issues, this type of assessment is worth it. Development can fix some issues and schedule others for later. The results of different tests can be sent back in several passes, allowing the developers to gauge the impact and make the changes early for specific types of problems, effectively layering the test results. Testing could start with the deeper functionality and work its way out, taking care to ensure that the order does not cause significant retesting.
There are various types of tests that can be applied in this way, ranging in depth and in effectiveness versus cost.
Rough testing could, for instance, be applied in the middle of development to ensure that the current direction is reasonable and that the code is mostly usable. Walk-throughs could check for interface consistency and ease of use. In a large GUI program, specific pathways could be tested independently of each other. Sections of the system could receive extra testing because they are mission-critical or hard to recover from.
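To make pathway testing a little more concrete, here is a minimal sketch of two independent pathway tests. The app module, its functions, and the pathways themselves are made up purely for illustration; any real system would have its own entry points.

    # Hypothetical sketch of testing user pathways independently of each other.
    # The app module and its login/create_record/view_record functions are
    # illustrative only.
    import app

    def test_create_and_view_record_pathway():
        # One pathway, end to end: log in, create a record, read it back.
        session = app.login("test-user", "test-password")
        record_id = app.create_record(session, title="sample")
        record = app.view_record(session, record_id)
        assert record.title == "sample"

    def test_failed_login_pathway():
        # A separate pathway, tested on its own: a bad password is rejected.
        assert app.login("test-user", "wrong-password") is None

Keeping the pathways separate like this means a failure in one can be reported and fixed without invalidating the testing already done on the others.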
Interaction between the test teams and the developers can occur to validate some of the test plans, and possibly later to help explain some of the problems, but it should be kept to a minimum. The testers have an expectation that the system will behave in a specific way; it is their job to document any deviation. The developers need to assess the deviations and decide on the underlying cause. If the testing is thorough, not everything found will be dealt with.
Wandering around aimlessly in a jungle is never fun. Knowing how long you have to suffer can help make it seem less painful. Still, even the shortest path through development can be a long journey. We need to be wise about how we spend our time. After a long and painful first half of the development, the programmers are tired and not functioning too well. Traversing the second half with a fresh team of testers -- while giving the developers a bit of breathing room -- increases the likelihood that the project will arrive straight at its destination. Where we know we continually have problems, we need to find solutions that fix the original issues without causing worse ones as a side effect. That puts us on the shortest path.