I am after a bit of advice please. We are trying out a 'developer testing' phase where a developer who did not work on coding the System Under Test (SUT) tests the product before it is formally released to System Testing (i.e. my realm). Basically everyone was fed up with products arriving in System Testing and going straight back again because of a show stopper which the developers could have found themselves.
Component and Component Integration testing happens almost by default
because of the way our products are developed and put together, so it is
not as if developer testing is being short-cut, nor is it the case
that this is the only testing a developer will do.
I am all for this approach because I think it will give me a better chance of getting a high-quality product into testing. However, I am wondering whether others have tried this and how it has or has not worked. Are there any dangers that I should be aware of? I am thinking particularly that there is still going to be a developer bias there so I cannot assume a great deal about the testing that has been undertaken.
A few initial ideas (which may or may not apply to your context - please see my context questions below):
- It doesn't matter who tests (developer or tester): the "going straight back again because of a show stopper" problem will still remain, won't it?
- I have a different experience: developers testing the components they coded (or even bug fixes) in a QA environment before formal release to testers. It reduces the time to fix a bug (and there is no need to write a formal bug report).
- Developer testing (helping testers to do their job) is a good idea as a one-time activity. Developers will find defects that testers may miss. But don't make it a routine.
I think that you can't get really good advice here until you describe what "System Testing" means to you and how exactly it differs from the component and component integration testing you already do. Does it mean a different environment (e.g. MS SQL Server instead of MySQL), a different installation procedure (e.g. deployed vs. run from the IDE), or just a different test scope (hoping that a developer who did not code the SUT will do better tests)?
Thanks Ainars. I will try to be more specific about what happens at the different test levels...
Our Component and Component Integration phases are IDE driven. Our software architecture is such that interfaces between components will either work or they will not work at compile and build time.
In System Testing we take the completed system and run it in the sort of environments that would be found 'in the field'. So, for example, our APIs will be tested on machines that have Visual Studio installed in different guises; desktop products will be tested with no development software installed initially. We use a combination of white and black box techniques to prove that the system is performing according to specification and combine this with 'user thinking' prior to release.
The idea is to reduce the test - report - fix - test cycle because the developers can turn around a fix much quicker when the ball is still effectively in their court than if they have signed the product over to formal system testing and moved on to something else. Before reaching System Testing, another developer will take the application having few - if any - pre-conceived ideas about the SUT because it is felt that they will do better testing than the original programming team.
Can I ask you about your developers testing their coded components in a QA environment before formal release to testers? Are they the developers who wrote the original code? If not, this sounds very similar to what we are trying to do. We are trying to remove the bias inherent in developers testing their own code.
In one particular project (ten-year-old software with a huge code base) we had a routine of developers testing their own code, and a one-time event called "QA week" of testing peers' code. Details of the routine procedure (perhaps this is what you need): we had the following life-cycle for each defect and feature request:
- once a developer has finished coding, the ticket goes to 'integration' status (assigned to the build master);
- once the build master has built the system and created the installation CD, all of his tickets are reassigned back to their previous assignees;
- now the developer has to install the software from the CD and quick-test all the defects and features before they go to the QA team. If there is a problem with the build procedure or the installation/environment, the ticket goes back to 'development' status.
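The life-cycle above is essentially a small state machine. A minimal sketch of it in Python (the class, status names, and method names are my own illustration, not anything from the original tracker):

```python
from enum import Enum

class Status(Enum):
    DEVELOPMENT = "development"   # being coded (or sent back after a failed quick-test)
    INTEGRATION = "integration"   # waiting for the build master
    DEV_TEST = "dev-test"         # developer quick-tests the installed build
    QA = "qa"                     # released to the QA team

class Ticket:
    def __init__(self, ticket_id, developer):
        self.ticket_id = ticket_id
        self.developer = developer
        self.assignee = developer
        self.status = Status.DEVELOPMENT

    def finish_coding(self, build_master):
        # Coding done: the ticket is handed to the build master.
        self.status = Status.INTEGRATION
        self.assignee = build_master

    def build_complete(self):
        # The build master has built the system and cut the install CD:
        # the ticket goes back to its previous assignee (the developer).
        self.status = Status.DEV_TEST
        self.assignee = self.developer

    def quick_test(self, passed):
        # The developer installs from the CD and quick-tests the fix/feature.
        if passed:
            self.status = Status.QA
        else:
            # Build, installation, or environment problem: back to development.
            self.status = Status.DEVELOPMENT
```

For example, a ticket that passes its quick-test moves development → integration → dev-test → QA; one that fails drops straight back to development without ever troubling the test team.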
First of all, I think this is the right track; in my experience it always helped to emphasize that quality is not solely the responsibility of the Testing team but of the whole Project Team.
Based on my experience working like this on two projects, there is one classic drawback: once the system has "passed" this dev-integration phase, the developers will be less likely to accept the system back if you want to return it as you do today (in the case that it is really impossible to continue testing).
The way I approach this is by making sure that I (or members of my team) am part of the group defining the scenarios the developers will run on the system. I also implement a policy whereby, for specific builds, we can request additional, specific tests from the developers based on the functionality I am expecting to receive. And I make sure to provide constant feedback about the bugs that "escaped" them which I was expecting them to catch before reaching me.
Again, good approach, it helps a lot to "teach" about the difficulty and challenges of testing.
Thanks Joel: that is really helpful. I will keep a watch out for reluctance to accept bugs once the system has been through testing by a developer. I am hoping that I will get good co-operation from the developers when I need additional tests run by developers.
If this testing is going to occur when the components are put together in a dev environment and prior to testing by a test team, my suggestion is that the "component/unit test" mentality will probably still exist without some training. That is, the developers might tend to just retest what has already been tested (sometimes automated - think "junit"), without considering full scenarios. We've had the same issues; sometimes we would receive code where you couldn't even log in or perform a basic, existing function that was a precursor to beginning the test of the new code.
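A lightweight smoke checklist can catch that kind of dead-on-arrival build (can't even log in) before it reaches the test team. A minimal sketch, with every name here (the app object, its methods, the check list) purely hypothetical:

```python
class FakeApp:
    """Stand-in for the real application, just for demonstration."""
    def login(self, user, password):
        return True
    def open_main_screen(self):
        return True
    def save_record(self, record):
        # Simulated breakage of a basic, pre-existing function.
        raise RuntimeError("database not configured")

def run_smoke_checks(app):
    # Each entry is a basic, existing function that must still work
    # before any testing of the new code can begin.
    checks = [
        ("login", lambda: app.login("testuser", "secret")),
        ("open main screen", lambda: app.open_main_screen()),
        ("save a record", lambda: app.save_record({"name": "demo"})),
    ]
    failures = []
    for name, check in checks:
        try:
            if check() is False:
                failures.append(name)
        except Exception:
            failures.append(name)
    return failures
```

If `run_smoke_checks` returns a non-empty list, the build goes straight back to the developers without a formal bug report ever being written.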
We've had success overcoming that obstacle by using Just-in-Time test techniques: the project team gets together in a room and, using some structured brainstorming, drives out what needs to be tested in about an hour. While it is not exhaustive or extensive, it gets the development staff thinking about how the software is used, rather than how it was coded, and they tend to ensure that at least those items brought up in the meeting are tried. The quality of our code improved significantly once we started using the technique.

The advantage is that instituting other methods involved such a significant commitment of time and/or effort that there was a lot of pushback and the work simply wasn't getting done - there wasn't enough time. But EVERYONE could agree to dedicate one hour to deciding what to test, and it made the actual testing by the dev staff much faster and easier, since they already had a prioritized list of what to try. In addition, the brainstorming session is attended by dev, QA, and BA/end users, so the developers start getting a good feel for what groups OUTSIDE dev consider important. It broadens their perspective in valuable ways. But I think what they *really* consider valuable is that it's less work for themselves!
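The output of such a session is just a prioritized checklist. One way (my own sketch, not part of any formal JIT-testing tooling) to turn the raw brainstormed ideas into that list is to tally how many participants raised each one:

```python
from collections import Counter

def prioritize_test_ideas(votes):
    """votes: (participant_role, test_idea) pairs gathered in the
    one-hour session. Returns ideas ordered by how often they came up."""
    counts = Counter(idea for _, idea in votes)
    return [idea for idea, _ in counts.most_common()]

# Hypothetical session output from dev, QA, and BA participants:
ideas = prioritize_test_ideas([
    ("dev", "install on a clean machine"),
    ("qa", "install on a clean machine"),
    ("ba", "month-end report totals"),
    ("qa", "login after upgrade"),
    ("ba", "install on a clean machine"),
])
# The idea raised by the most groups lands at the top of the checklist.
```

Anything more elaborate (weighting QA votes, risk scores) is possible, but the whole point of the technique is that an hour and a flip chart are enough.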
If you'd like more detail, let me know. And I'd like to mention that I took Rob Sabourin's class on this technique and shamelessly corrupted it to work for my environment. And work it does!