If I were a developer... I'd hate you too.

As a test manager, one of the most embarrassing, and all too frequent, situations I find myself in is when I am reviewing bug reports with the development manager and he passes me a stack and says,

“We haven’t a clue what these mean.”

I look at the first one and it says,
Title: System Crashes,
Description: System crashed
or words to that effect.
The development manager looks at me and says,

“What the hell are we meant to do with that? How does it crash? What data are they using? What were they doing beforehand? How do your guys expect us to fix stuff if we don’t know what the problem really is?”

I quickly point out the window and, while he is turned away, run from the room in shame.
Guys, we have to do better!

Why Report It?
Ok, once we get over the fact that we love finding bugs, that it’s what we are paid to do and it’s the thing that gets our juices flowing, we have to ask the awkward question, ‘Why do we report bugs?’
As far as I am concerned there is only one real answer: ‘So we can get things fixed.’ The reason for reporting a bug is to get it fixed, and the whole point of writing a bug report is to give the developers the right information so they have the best chance of fixing it.
So, the key is to use all the information we have at hand to give a full and comprehensive report.


What To Include

No Brainers
It doesn’t matter if you are using a popular issue tracking tool, an in-house Access database or printed Word and Excel sheets in a folder: ALL bug reports should include some non-negotiable bits of information in the header. These include:
Full details of the system under test (release title, version number, build number)
Environment details (server build, configuration details)
Issue number
Tester name and contact details
Test run date and time
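To make sure none of these fields gets forgotten, it can help to bake them into a template. Below is a minimal sketch in Python of what such a header might look like; the class and field names (and the example values) are purely illustrative, not taken from any particular tool.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class BugReportHeader:
    # Full details of the system under test
    release_title: str
    version_number: str
    build_number: str
    # Environment details
    server_build: str
    configuration: str
    # Tracking and contact information
    issue_number: str
    tester_name: str
    tester_contact: str
    # When the test was run
    test_run_at: datetime

# Example of a filled-in header ready to paste into a report
header = BugReportHeader(
    release_title="Release X",
    version_number="2.1",
    build_number="2.1.0457",
    server_build="UAT app server build 12",
    configuration="standard UAT configuration",
    issue_number="ISSUE-124",
    tester_name="A Tester",
    tester_contact="a.tester@example.com",
    test_run_at=datetime(2008, 9, 15, 14, 30),
)
print(header)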


Title & Description
Give the bug a meaningful title: not ‘system crashes’ but ‘when attempting to view X, PC resets’.
The description should take the title and fully explain it, for example:
“Having run a full data upload (see attached data set ref) I was attempting to follow the steps described in the test script (ref: xxx). I got to step 11, where I was expecting to be able to see the full data report displayed; however, on selecting the ‘Display Report’ option from the dropdown list in the title bar (not the Display Report button), the system displayed an error message reading ‘blah blah blah’ (see attached screen shot). On closing the error message the PC reset and resumed displaying the normal desktop with no application open.
See attachments for: System Log, Test Script, Data, screenshot.
Note: This appears similar to (but not the same as) Issue No. 123.”

Test Script Details
However, one item I feel should always be included, but often is not, is the full test script details: the test script name, reference and the specific test step that has failed.
This should be in such a format that the developer can quickly and easily find the test script (you do have formal written test scripts, right?) and the associated test data pack (you do have reusable test data packs as well, yes?).
This will prove invaluable to the developer when he is struggling to recreate your issue. He will be able to set up the exact scenario, walk through the exact steps, looking to see what is going on under the hood, and, using the exact same data, get the exact same result… or not. If not, that points to an environmental issue; if so, it points to a coding error (or a feature!).


Screen Shots
If at all possible, screen shots should also be provided, particularly if you are able to annotate them to point out your concerns, important points, easily missed details and so on.
So if you don’t have the facility to capture screen shots, go and get an application now! I tend to use Snagit for my screen captures as it allows me to do all I want, both in terms of what I capture and how I annotate it; however, there are a number of other solutions.
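Even without a dedicated capture tool, a few lines of script can grab the screen the moment a check fails. Here is a rough sketch using Python and the Pillow library (my assumption, any capture library will do; ImageGrab needs a normal desktop session, and the issue reference is made up):

from datetime import datetime
from PIL import ImageGrab  # pip install Pillow

def capture_failure_screenshot(issue_ref: str) -> str:
    """Grab the full screen and save it with a timestamped, issue-tagged file name."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"{issue_ref}_{stamp}.png"
    ImageGrab.grab().save(filename)  # capture the whole screen to a PNG
    return filename

# e.g. call this when a step fails, then attach the file to the bug report
print(capture_failure_screenshot("ISSUE-124"))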


Logs
If you run with any logging enabled (application failure, network traffic, etc.) be sure you include these logs as well.
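Even if the application itself logs nothing useful, it costs very little to keep your own log of the test run so there is always something to attach. A minimal sketch using Python’s standard logging module (the file name and script references are made up for illustration):

import logging

# Write everything from this test run to a file that can be attached to the bug report
logging.basicConfig(
    filename="test_run_TS-042.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("uat_run")

log.info("Starting test script TS-042 with data pack DP-007")
log.info("Step 11: selecting 'Display Report' from the title bar dropdown")
log.error("Unexpected error dialog displayed; PC reset after closing it")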


Severity & Priority
Depending on local practice you may or may not add a severity and priority. Out of choice I prefer not to add these at the time the issue is raised, but later, after reviewing the issue with the relevant stakeholders. A bug that is low priority and low severity from a technical viewpoint might well be high from a business viewpoint.


Tips for Successful Bug Reporting

Take Notes At The Time
Have incident recording sheets with you as you test. Wherever possible make copious notes at the time the error arises. Stop the test run, make the notes, then go on. Don’t think you can simply record a failed step, finish the test and then go back and remember what to write up.

All The ‘W’s’
The age-old classic questions apply when writing the incident report. Ask:
Who, What, When, Where, Why, and How.
Answer these and the developer will have a good understanding of the issue and the circumstances.

My No. 1… All-Time Best… Killer Issue-Raising Tip
Get out of your seat and talk to the developers!
If there is an issue I am not sure about or don’t understand, or if I think it would help the developer understand the issue better if he saw it in action, or it is difficult to put into words, I get up and ask a developer to have a look and tell me what he thinks. It works wonders; give it a try.

Tony Simms is the Principal Consultant at Roque Consulting (www.roque.co.uk). He can be contacted via email at tony.simms@roque.co.uk.


Comment by Tony Simms on September 30, 2008 at 13:55
Hey, this sounds good. Is it £1800 per user, or per seat? I am going to have a look see as I am running a UAT phase at present with only business users, no testers. This sounds like it will really help them.

Cheers for the heads up.
Comment by Peter Cliffe on September 30, 2008 at 13:43
Getting testers to write issue reports with the right information is often a problem, and a workshop to explain what is required helps. Recently I tried out a tool that records your test and automatically captures screen shots throughout, so that when an issue arises you know exactly what you were doing at the time. This can then be sent to the developer so he knows exactly how to reproduce the issue. The tool is TestDrive Assist, and it's from Original Software (www.origsoft.com) - £1800 a pop, which seems good value to me. Very useful for exploratory testing. It can also be used to verify that a tester has followed a script and executed it if you don't have a test management tool such as HP Quality Center. I have not come across BB TestAssistant, but have just eyed their website - it looks good, so I will go and try it out.
Comment by Dipan on September 29, 2008 at 17:15
A very worthwhile suggestion... thanks Tony.
Comment by James Christie on September 17, 2008 at 15:03
Darren raises an interesting point about the difference between defect priority and severity.

I agree with his definitions, and I think these are widely used. However, the ISTQB definitions are that severity is basically a technical matter, ie how big the failure is to the techies, whereas priority is a business matter, ie how big the impact of the failure would be to the business.

I believe this is consistent with IEEE 1044, on which defect management processes are supposed to be based, but I haven't seen a copy of that document since I don't know when.

I'm not convinced that the ISTQB definitions are very helpful. Does a defect have any impact other than in business terms? Is a technical shambles that is invisible to end users and customers but expensive to deal with not a business problem in that it wastes staff time and money?

When it comes to the crunch of managing a testing project the distinction made by Darren is vital. Senior management and users have to know the score with the defect severities so they can decide whether the application meets the acceptance criteria.

Developers have to know which defects have the highest priority for fixing so that the test manager can try and keep the schedule together. Severities, as Darren explained them, are important as management information, but on a day to day basis during testing they're not of great value. Fix priority is crucial, however.

The distinction should be spelt out in the test strategy and defect management process, and it needs to be communicated again and again, so that people understand the difference. I remember once having real trouble with a user manager who insisted that any defect that had to be fixed before implementation had to carry a priority 1 classification - fix immediately. He argued that doing anything else sent the wrong message to the developers about what was acceptable.

The result would have been no prioritisation of fixes at all - because nearly every defect would have been priority 1. I had to keep overruling him, which was necessary, but created other problems. Not every customer is that awkward, but the test manager has to be clear about the difference between severity and priority (as Darren and I understand them), and they have to try and head off confusion and confrontation by being clear up front about what these mean on the project.

Frankly, I'm disappointed with the ISTQB that they've not helped out here. Their definitions are not helpful, in my opinion.
Comment by Tony Simms on September 16, 2008 at 12:54
Hi Sherilyn

Thanks for your comments, you make a good point.

Generally I don’t cut and paste the script into the test report; I reference the script and note the step. That way the developer can go and have a look at the steps that led up to the error and the data set used.

The point of the test script is that it does give the exact steps to re-create, unless of course you have gone off on a tangent (which is OK, as long as you document what you were doing).

The key, though, as you point out, is fully understanding and documenting the issue. This is where I find that some testers are not as effective as they could be. My plea in the blog is simply: please, please, please write professional, intelligent, helpful bug reports.
Comment by Simon Godfrey on September 16, 2008 at 12:22
Useful post, and a good point made. One thing I notice about where I work is that we teach our testers how to report bugs, but we don't necessarily teach them how to diagnose bugs - and what use is one without knowing the other!?

I would, however, balance this with the fact that we have to test defects raised by developers, and they are utterly rubbish at writing good defect reports.
Comment by Sherilyn Tasker on September 16, 2008 at 1:01
I have had developers complain about getting the full test script details included in the bug report - they always say it fills the report with mindless fluff... they only care about the exact steps to reproduce. If the tester has added in the full script it's often because they were too lazy to investigate the problem further and narrow down the steps to reproduce, or they didn't really understand the problem to start with.
Comment by Glenn Halstead on September 15, 2008 at 23:00
I've also found that a video of the problem is very persuasive when trying to convince a sceptical developer that there actually is an issue. BB TestAssistant is good - Mercury's Screen Recorder is really BB TestAssistant under the hood.
Comment by Glenn Halstead on September 15, 2008 at 22:58
Hi Darren,

I've had success using remote assistance to overcome situations where the developer is in another location. Also, as Phil says, usually it works fine on the developer's PC. I've found that demonstrating the issue on the test PC for the developer, so they actually see the problem themselves rather than just reading or being told about it, helps a great deal, especially if they don't have to get out of their seat to see it. I've found that once a developer actually sees the issue they're usually willing to work together to identify what's different between the dev and test PCs.
Comment by Darren Hails on September 15, 2008 at 22:52
Hi Tony

I am interested in your comments around setting severity and priority for defects. You mention that you prefer not to set these until the defect review meeting with stakeholders, and that a defect that might have a low severity and low priority at a technical level may be high from a business perspective.

I have always had severity cover the business impact should the defect exist in production: critical meaning something that would cause (I use extremes) the loss of money, company reputation, life, etc., and low meaning the shade of blue is not correct on the header of an intranet site. These I would not set at the time the defect is raised, but at the review, as you have suggested.

However, I associate the priority with the effect of that defect on the progression of testing. So again, for example, if the defect stops all further testing until fixed, it would be critical; if I can work around it but it cuts off a module of testing, then a medium; and if it only prevents a small percentage of the overall tests from being executed, it would get a low priority.

Therefore I am interested to know how you, and others define severity and priority when relating to defects being raised.
