
Testers doing test automation - is that the most important thing for you to do right now?



I've been thinking quite a lot about testers moving to do test automation. Lately because of these three things:

1. European Testing Conference, a great testing conference I attended a couple of weeks ago. It is very cool for many reasons: the way the speakers get compensated, the focus on the conferring side of conferences, making it very easy for people to discuss stuff, the way the talks are chosen, and the fact that a lot of developers join as well. Anyway, when I was talking with several of the attendees, it was a bit strange how it was easier for me to talk about product development as a whole with the developers, whereas with the testers the conversation more naturally moved to automation and tools. Also, in the open space, I think the majority of the topics pitched were automation or tool related, and quite few were about the process or the customer-facing side of product development.

2. In my company there are a lot of different products and product teams, and in an attempt to share knowledge between the teams there are different guilds: a devops guild, an architecture guild, an API guild, and a testing guild. Usually when the testing guild meets, it is mainly testers from different teams participating; a topic is introduced by someone, with a little bit of follow-up discussion. Three of the last four topics introduced have been quite automation/tool centric. (And the non-automation-centric one was my topic.)

3. This week I read a LinkedIn post from an old colleague titled "Runnable Specifications are here, never speak about test cases anymore". In it he made several good arguments against step-by-step test case execution by a tester and offered the role of executable-specification writer in return. So kind of shifting left and doing automation.

So I don't have anything against test automation (well, I do make a case against stupid automation, but anything stupid is stupid, so that doesn't really count), but I do have a little trouble understanding why so many testers think that's what they should be doing now. I mean, I do not argue that anybody should write and maintain test cases and then "execute" them manually. But is the only alternative for the tester to automate those test cases? I don't buy it.

First of all, I'd say that most commonly there are a lot fewer "testers" than "programmers" on a product development team. And I'd say that commonly the people identified as "testers" are not as good at programming as the ones who identify as "programmers". So I think the heavy lifting of test automation (which is also programming) should be done by people who know programming. Because they can, you know, program. And if they do test automation, they might make the app more testable in the first place. They also know how stuff works, so they can automate the tests faster. And if the same people work on the code and the test automation, it ain't as likely to create bottlenecks and missing automation. If it is already OK that programmers handle the unit and often also the integration-level test writing, what is so different at the end-to-end level? Testing expertise comes in handy here, of course, as testers might be better at providing stronger assertions, better test data, better test scenarios, etc. So teaming up might be a really nice idea (it almost always is ;) ). But who does what should be based on motivation and skills - not on some label.

Secondly, when I think of the biggest problems in any of the products I have ever worked on, those would not have been prevented with more test automation. Many things might have been easier, sure. And many bugs would have been caught earlier, I believe. But the biggest problems? No. And this leads to a crazy statement.

There is more to product development than programming and testing.

I don't want to diss programming. It is an art. An art I ain't really good at (at least yet). But there's other stuff. There are alternatives to brainless test execution by a person, or to making that brainless test execution happen automatically. I'll mention a few.

Product management. This is what makes or breaks the product. A good idea done shitty might still give people value. But a shitty idea done in the most beautiful way, with beautiful automated tests, is still a shitty idea. It is like an ass made of silver - shiny but useless. This is a place where we need more emphasis! More brains engaged in making sure we pick the right stuff to do, that we focus on impacts, that we focus on not doing too much, that we pick the right level to prototype, that we do not overcommit too early, that we do stuff that is of value. And this is not the responsibility of a "product manager". It is the responsibility of everyone on the product team! And this requires work.

Stakeholder involvement and communication. If I had to pick one thing that has caused the most problems in all the stuff I have worked on, it is lack of communication. Within the team, yes, but especially between the team and its various stakeholders. We need people who not only consider what stakeholders need and want, but who actually do the work of discussing, asking, asking again, demoing, listening, telling, and managing expectations with the stakeholders. Sales, marketing, support, customers, end users, integration partners, and whoever else has an interest in your product. They need to be heard, and they need to know what's happening. The team needs to do this. And this requires work.

Planning. We need people to make sure we are planning ahead - not too much, but not too little. Making sure we have an idea of the vision, that we think we know the next couple of steps, and a bit about the risks and alternatives. And that the team always has a shared understanding of what we are going to do. Creating this ain't easy. It requires work.

Process. It is easy to settle; continuous improvement is not. It is easy to agree to the stupid procedures you are given; it is not easy to explain why you need to do things differently. It is easy to believe that everyone is happy; it is hard to know if they are. It is easy to create silos; it is hard to break them. And it is easy to fall back on routine; it is hard to really start doing something in a different way. This requires work.

Monitoring and analysis. What is your definition of done? Acceptance tests pass? Documented and released to prod? That is not done; that is the start. Actually following what is happening in production, analyzing usage, digging for problems, thinking of improvements, and making sure improvements get done is a super important part of product development. And it requires work.

Exploratory testing. Being there constantly looking for problems, looking for alternative ideas and things we did not think of, providing constant feedback to the team, and learning the whole product. It. Requires. Work.   

And a lot of other stuff.

Again, I don't want to say that (test) automation is not important. Hell, I would like to do more programming myself too, because it is fun! And I guess it is easier to add to job ads and CVs. But there are also other things that a product development team needs to do.

So the next time you start writing those automated tests, I want you to think - is this the most important thing to do right now?


Comments

  1. Hi Anssi,
    thanks for a good post, and providing a bit different view on who should be doing test automation. I'll give my 2 cents on the topic.
    I think the shift that many testers make into more automation - more testers writing automated tests - is a natural one, since they want to use their time more efficiently. Instead of having to execute the same manual steps over and over again, they want to automate them. But who should do it? We expect developers to write unit tests and some integration tests (depending on what we mean by integration); perhaps they should write the UI tests as well, e2e tests, or others? To some degree, yes. I think this should be done by testers as well, as much as possible. I do agree, though, that the ones who are good at coding should help the team create a good framework, a solid foundation, and make sure that the automated tests written follow a certain pattern or standard, so that they do not become a liability for the team (lots of false positives, rewrites, flakiness, etc.), and testers should be able to add new test cases as they discover new scenarios that should be automated. Which "layer" - UI/API/unit - depends of course on the knowledge of the tester.
    The second point I also agree on: the worst or biggest bugs, however we classify them, would probably not have been discovered by any of our automated tests, I've found. But if it were not for the automation checking those repetitive regression tests, I would probably have spent much more time executing them and had less time to explore the solution, which is what actually led to finding those bugs. So my opinion is that automation, in the form we know it and use it today, should be a safety net for the team, letting us know if the known stuff in the services we develop still works, so that we can spend more time exploring the unknown :)
    Test automation in itself is perhaps not the most important thing - delivering quality software/services and making our customers happy is, I would say. But if we want to be able to deliver changes to our software/service daily (or rapidly), and to do it repeatedly and reliably, automating the regression tests will be very important, so that we have time to focus on other stuff.
    Sorry for the long comment, hope it is not bigger than the post itself :P (and thanks for making me aware of the automation/tool-centric meetings lately :D )

  2. Hi Mili,

    Thanks for the comment :)

    "I think that the shift that many testers make into more automation, meaning more testers write automated tests are a natural one, since they want to use their time more efficiently. Instead of having to execute the same manual steps over and over again, they want to automate them."

    I totally agree that executing the same manual steps over and over again would be a huge waste. I do, though, think that even when testers have said that is what they do, they actually do a lot more than just what is specified in the tests. And people should be mindful that when they stop "manual" testing, they do not just get rid of the repetitive stuff but also of a lot of the exploring that happens around the scripts.

    And I do believe that testers want to use their time more efficiently - I was mainly arguing in the post that there are also many other very important things beyond automation where one can use one's time efficiently. And this applies to the whole notion of programming. I read a tweet (that I copied but cannot find the original tweeter of anymore) that referred to "good" programmers like this:

    "
    There are no good engineers who only pay attention to code
    There are no good engineers who only think about technologies
    There are no good engineers who ignore social issues
    The human context of your work is what makes engineers good or bad
    "

    So I hope the testers delving into automation will think about this too.

    And I am definitely not trying to say that automation is bad. Not at all. Just that I think it is not the only other option.


