
Testing playbook #1



So it's summertime again, and my year-long spell as a scrum master is coming to an end. Starting in August I'll be a full-time tester again! (Or as much as you can be a full-time tester when working in a scrum team..) I'll likely write up some experiences on being a scrum master later (summary: hard, abstract, frustrating, fun), but for now I thought I'd start writing about testing again. I'll begin by sharing some of the tactics I often deploy when approaching testing situations. This should become a series, until I get bored of it.

I'll start with my classic, my "master" tactic that often acts as a framework for other, more subtle testing tactics. This is a rather common way for me to approach situations that are more on the exploratory and less on the confirmatory side, and/or situations where the thing under test (it may not always be the system) is not yet that familiar to me. It's rather simple:
1. Take a tour
2. Decide the target
3. Go with the happy flow
4. Divide and conquer
5. Give em hell

To open it up a bit, taking the tour means a shallowish walk-through of the building blocks of the thing I am looking at. The aim is to get a general picture of the potential parts, interactions, scenarios, etc. that I could be testing. After doing this for a while, I then decide my target.

Deciding the target is often really essential, and it has a few important aspects. Firstly, it should be somehow relevant to the general testing mission. Spending a lot of time on an unimportant piece is usually not a good idea (although spending some time on such pieces every now and then may be). Secondly, the size of the target should reflect the amount of time available for testing before the next break, in the sense that you have to push yourself to generate enough ideas for the whole time. So: a rather small piece. Having too many options to choose from often makes my testing sessions a bit too vague, and actually makes it harder for me to come up with good test ideas and observations. So make it small enough. Thirdly, it's usually good if you are at least somewhat curious about the piece. It makes me work so much better. Then you start with the happy flow.

Going with the happy flow means exercising your target in the easiest/most common way you can imagine. For some people, which test (happy, nasty, interesting, boring) gets executed first may not make a big difference, but for me it does. In both testing and life, I have a strange habit of leaving the most interesting things for last. E.g. if I eat a bag of mixed candy, I want to be sure that the last thing in the bag is (one of) the best-tasting candy there is. Or the last piece of the steak I'm eating should be the juiciest, biggest piece. Why? Because otherwise I may lose interest. And coming back to testing: after I lose interest I become a much worse observer. So, generally I want to go with the happy flow first, and I usually hope it succeeds rather well (although in many past projects in my life I never even got past the happy flow, because even that never worked).
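To make it concrete, here is a minimal sketch of what a happy-flow check could look like if the target happened to be code. The post doesn't tie the tactic to any particular system, so Python's built-in json module stands in as the thing under test, and pytest is assumed as the runner.

import json

def test_happy_flow():
    # The easiest, most common case I can imagine: a small, well-formed document.
    result = json.loads('{"name": "Ada", "items": [1, 2, 3]}')
    assert result["name"] == "Ada"
    assert result["items"] == [1, 2, 3]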

Divide and conquer is something I do really often. Basically it means repeating the same thing over and over while modifying only one (or perhaps a couple of) variables at a time. Inputs and outputs are a classic example of this: change one input at a time and take a look at what happens to the output (while of course observing everything else too; we're testers, after all, right?). Are the outputs behaving as expected? This, btw, is also my common way of trying to reproduce an intermittent issue. Try to do the exact same thing over and over again, and change as little as you can. With the non-reproducibles I usually get a bit obsessed with this, as in doing EVERYTHING exactly as it was when I first experienced the issue, like the position I am sitting in.. (btw, inventing a non-reproducible issue and then trying to find it is a powerful testing tactic too. More on that in a later post.)
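Sticking with the same stand-in target, here is a sketch of what divide and conquer could look like as a parametrized check: everything in the document stays fixed except one field, and that one field is the only thing varied between runs. The field names and values are made up purely for illustration.

import json
import pytest

# Vary only one thing at a time: the value of a single field,
# while the rest of the document stays identical between runs.
@pytest.mark.parametrize("value, expected", [
    ("0", 0),
    ("-1", -1),
    ("1e3", 1000.0),
    ("true", True),
    ("null", None),
])
def test_single_field_variations(value, expected):
    doc = '{"fixed": "stays the same", "varied": %s}' % value
    result = json.loads(doc)
    assert result["varied"] == expected
    # Observe everything else too: the untouched part should stay untouched.
    assert result["fixed"] == "stays the same"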

Finally we get to the Give em hell part, which naturally means giving the target everything you've got in order to try to break it. It could mean extreme flows, abysmal inputs, scarce memory conditions, switching the system time, breaking connections, etc. Basically, here you do things where you expect the system to fail, and see how gracefully that happens. Problems you find here may or may not be that relevant, but boy will they be interesting :)
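As one more sketch against the same stand-in target, here is roughly what the abysmal-inputs flavour of this could look like in code. The point is not that these particular inputs matter, but that each case is one where failure is expected, and the check is about whether that failure is a graceful, clear error.

import json
import pytest

# Inputs where the target is expected to fail - the interesting part is
# whether it fails gracefully (a clear error) rather than falling over.
@pytest.mark.parametrize("hostile", [
    "",                    # nothing at all
    "{",                   # truncated document
    '{"key": undefined}',  # not actually JSON
    "\x00",                # control-character garbage
])
def test_hostile_inputs_fail_gracefully(hostile):
    with pytest.raises(json.JSONDecodeError):
        json.loads(hostile)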

So that's my very common testing tactic. In football terms, it is my 4-4-2. Next time I'll share my common tactic for starting to test something totally new that I know very little about beforehand. Teaser: I do not start by reading the documentation.
