Wednesday, January 17, 2018

Answers to common questions asked of a "tester"


Here's another chat I have had many times. It starts with the golden and notorious question of "how much should you test?", and then drifts a bit from there.

********

How much should you test?

"Well, you know, I once worked in a team building a medical device, where releasing frequently was not really possible due to the many constraints. Even doing a small release would cost a lot. And it being a class C device (death or serious injury may occur), any issue found in a released product would have been a pretty big deal. So we took unbugginess very seriously, which meant I had a lot of time to spend testing the device.

And boy did I.

Sometimes I spent almost the entire week just in front of the machine testing, thinking of ways it could fail, unrolling all the test techniques I had in me, learning the ins and outs of the product. And then one day, when the new release was done... several problems were found. So taking into account that we really bled our hearts out for this and problems were still found, I think you shouldn't really test too much, as you won't catch all the problems anyway.

And then again, sometimes, especially these days, we release stuff spending close to no time on testing, and no problems are found. But often I feel that without the testing you miss out on the chance to learn, to get ideas, to make suggestions, and to give feedback to the developers and the business. So you shouldn't really test too little either, because it ain't only about catching bugs.

Then again, if you consider testing "done" after you have deployed to production, think again! In production you have a beautiful chance to investigate if and how the new stuff is being used, and whether it is working as expected. And to be ready to react if and when something unexpected happens. Testing in production is often at least as important as (if not more important than) the testing you do before the release. And if you can't see or affect what happens in production, then something is missing from the implementation: proper logging, pilots, toggles, communication with customers, etc."
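As a minimal sketch of the "toggles plus logging" point above (all names here are hypothetical, and a real system would read toggles from a config service or database rather than a hard-coded dict):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pricing")

# Hypothetical toggle store: in a real system this would come from a
# config service or database, not a hard-coded dict.
FEATURE_TOGGLES = {
    "new_pricing": {"enabled": True, "pilot_customers": {"acme"}},
}

def is_enabled(feature: str, customer: str) -> bool:
    """True if the feature is on for this customer (pilot-aware)."""
    toggle = FEATURE_TOGGLES.get(feature)
    if not toggle or not toggle["enabled"]:
        return False
    pilot = toggle.get("pilot_customers")
    return pilot is None or customer in pilot

def price(customer: str, amount: float) -> float:
    if is_enabled("new_pricing", customer):
        # The log line is what lets you see in production whether
        # the new stuff is actually being used.
        log.info("new_pricing applied for %s", customer)
        return amount - 10.0  # hypothetical pilot discount
    return amount
```

A rollout like this lets you pilot with one customer, watch the logs to see the feature being used, and flip the toggle off if something unexpected happens.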


So what about the significance of test automation? That must play a role too in how much you should test? And what about risk? And what ab..

"yea yea I know. Of course it depends on the context, on the level of automated tests, and on the risk of the change. It always depends. Everything always depends. But you get involved, you do what is needed at the given time, you make mistakes, you try to do better next time. And you will figure it out."


How about being involved early then? Like testing requirements?

"Well.. I don't really like to talk about "testing requirements". Sounds like reading through Excels trying to find sentences that don't have the word shall... But of course you want to be involved in all parts of product development. Starting from discussing why we should do this thing and what impacts we want to achieve. Then discussing and working together on what we do and how we do it. Pair or mob on the code, do code reviews, talk with the users, the whole lot. I think doing stuff together from the start is, in the long run, so much more effective. And so much more fun!"


Aren't you afraid of losing your objectivity then, when testing something so familiar?

"Not really. I am more afraid of testing something I do not understand. Or of testing irrelevant things because of not knowing the implementation. I am also a lot more afraid of solo work than of not being objective. And anyway, I think you can still be objective by switching your approach while exploring, e.g. by running through different scenarios. And it is really great to do some mob testing here too."


What do you call yourself these days? A tester? A QA? A developer?

No one answers better than the prisoner: https://www.youtube.com/watch?v=d0LaT6qVRpg :)



Tuesday, January 2, 2018

Honesty pays

I share this story every now and then when talking to people, so I thought to write it down here too.

About 8 years ago I had gotten my second consulting gig, working at a big company on a huge project. And as huge projects normally go, it was already very late and presumably very over budget. This had caused a pretty tense atmosphere around the project, and things that delayed it further were not looked upon very fondly. And so it happened that I made a mistake of that sort. It was actually made by a person I was "coordinating", who failed to provide a complete set of specifications we had worked on to a vendor, resulting in a release missing a lot of important stuff.

I was very stressed, being a pretty new kid on the block, and considered my options: whether to lay low, blame the person I was coordinating or the vendor, quit, etc. But in the end I decided to own up to it. So I walked to the office of the rather intimidating program manager, knocked, went in and said "I messed up", explaining what had gone wrong. I still remember the angered face, the way the PM inhaled and did not breathe out for a while. But then his face softened a bit, and he asked what we could do about it. We discussed our options a bit, settled on one, and then I left, still a little stressed about the next consulting contract renewal, but very relieved about being able to spill the problem out. And thinking that no matter what happened, honesty had paid off.

A few days after this I was in a recurring program status meeting, and the matter of the delay came up. One member started asking about it in a pretty tight voice, but was cut off by the PM, who said that this had already been discussed with me and was under control. The topic was switched, and I was left feeling that honesty had really paid off.

About a year after this I was talking with a sales rep from our consulting company, and she told me quite a nice thing. She had had a meeting with the program manager in an effort to sell some more people into the company, and he had simply said, "What we need is more people like Anssi, so if you got some then I am buying."

Honesty pays :)

p.s. This is the first post out of the 26 I will be publishing in the year 2018, after a bet made with a colleague to publish one in turns every week during the year. Over to you, Mili ;)

Wednesday, November 1, 2017

It was a good day


Decided to write down the stuff I (remember having) done today. And as paper doesn't store so well, I thought to write it here as well.

So here, in semi-random order:

  1. Reviewed code for a change, now implemented, that we had planned a couple of days earlier. Mainly to understand how it had been done, but still dropped two comments: one on a missing log event, and another on the naming of a method. Also asked another team member to explain a thing I didn't totally understand (it was a lazy loading feature, so now I know).
  2. Coffee
  3. Did some happy path testing for the change, looked good.
  4. Discussed the status and next steps of a cool new feature started yesterday. It seemed the devs were far along with the implementation already and not that much was missing.
  5. Got pretty excited about the feature
  6. Identified and discussed a possible fix with a dev for a thing I had heard about yesterday from a customer. Replied back to the customer
  7. Tested one new feature a bit and noticed one missing validation in it; discussed it with the dev a bit and he wanted to fix it.
  8. Discussed with two testers a bit about something I introduced to them yesterday, helped out with some test script running problems
  9. Lunch. Roasted beef, pretty good.
  10. Discussed status on three other things with three other devs, and also what could be next up on the todo list. They picked one, and agreed to have a little starting mob for it (dev booked one for tomorrow)
  11. Coffee
  12. Helped out an integrator with problems they had using one of our API functions. They sent me some PHP code. Asked a dev to help me out, and based on that sent them a suggestion on what to change. Got a response that it worked. (The dev had worked with PHP 10 years ago.)
  13. Sent mail to a person in the organization suggesting that we should use Flowdock more for cross team communication, and stop using other tools for the same thing (lync, skype, hangouts, jira). Much frustration there.
  14. Issue in step 6 was fixed by a dev. Low risk, so he pushed it to production.
  15. Had a short discussion about test strategy on a project
  16. The two testers pairing on a thing reported of a problem they had while testing. Realized it was a potentially nasty problem, discussed with a dev about it.
  17. Noticed the dev and I were 5 minutes late for a meeting with another integrator. Went in, discussed a little about a new service of ours they are taking into use, and demoed one of our new features that might be interesting for them. They were interested
  18. Dev said that the cool feature of step 4 would be pilot ready. Exciting! Can't wait to tell our stakeholders about it :)
  19. Coffee + water (I always forget to drink water)
  20. Helped our customer support in three problem cases
  21. Dev said he had fixed the issue from step 14. Fast! Tried it and it seemed to work. Production tomorrow, likely? Another dev noticed another related problem in the scenario. Fix tomorrow maybe
  22. Tested the case in number 1 a bit more. Learned nothing new really.
  23. Time to head home. Plenty of exciting stuff to do tomorrow :)

Definitely not the most focused day of my life, but definitely a good one. 


Sunday, October 22, 2017

Back on the wagon



I reread some of my old blog posts and was a bit sad not to be able to read anything from the past three years. So I thought I need to force myself to start writing again. So here goes.

For almost three years now I've been working in a really cool team, with a lot of freedom (and some responsibilities) to do a lot of different things. And quite a lot has happened in our team during those three years. We have moved from
  • one-month sprints TO a sort of one-piece-flow-based kanban with a weekly cadence
  • one big team TO (semi) self-organizing work groups
  • a lot of solo work TO mob-style work
  • PM-led reviews TO team-led show sessions
  • an unappreciated monolithic architecture TO an appreciated monolith + a bunch of other services
  • Nebula TO AWS
  • + many many more things TO other kinds of things
And I'm thinking of writing on all of these a bit. But also many things have happened in me. 

First of all, I have stopped clinging to the thought that hands-on (testing) work is the only important thing for me to work on. And given in to the thought that the most important thing for me to work on should be the most important thing that needs to be done. Be it communicating, coordinating, process thinking, planning, testing, analysis, coding, teaching, documenting, making coffee, or whatever is needed. Testing is great, and a great "tool", but not the solution for every problem. And neither is programming.

Also, of course, my thinking about testing has changed, and changes all the time. I gave a talk a year ago where I discussed these kinds of shifts in my testing skill needs over the past 10+ years:

Used to be important to... TO Has moved into...
  • Figure out what has been built TO Build together
  • Test hard things TO Improve testability
  • Good bug reports TO Good discussions
  • Act like a customer TO Test with customers
  • Write test cases TO Use data to help you while testing
  • Test requirements TO Test ideas
  • Make test plans TO Do continuous exploratory testing
  • Estimate testing efforts TO Enable fast feedback
  • Block premature releases TO Enable premature releases
  • Manual regression tests TO Automatic regression tests
  • Do your best in a hard work environment TO Do your best to improve the work environment

So a few more things to write about there. 

Also, I just gave a talk on 10 tools I use to aid me in my line of work; it was very fast-paced, and I think all of those tools should get a post of their own.

And I also want to start writing about the small big things that happen every day, while mobbing or testing or planning or whatever, that make working as a product developer so super interesting.

But now I'll just publish this, cause otherwise I'll never get around to doing any of it.


Thursday, February 19, 2015

Regulations on medical software development kill more than save


After two years as a SW testing specialist in the medical device industry, I decided to quit my job. The reasons were many, but perhaps the biggest one is the insane situation into which regulations have taken the industry.

As a summary, I have come to believe that these regulations kill more people than they save. This is because of:
- the unbelievable opportunity cost
- the delay in time to market
- the killing of innovation
- the restricting of competition
and because of the fact that, regardless of all those regulations, you can still fake it all.

Actually it is easier to fake than follow.

The main idea in the regulations is that everything needs to be planned in detail, executed accordingly, and documented to have been done this way. And by planned in detail, they mean that there needs to be a documented plan in which all the actions to be taken are specified. If you do something that was not in the plan, you are in trouble. If you don't do something that was in the plan - boy, are you in trouble. The only way to create a well-documented plan that contains everything you will do and nothing you will not, is a plan written afterwards. So the only way to do audit-proof plans is to create them after the stuff has already been done. Smart.

Everything that is done also needs to follow well-established processes, of course. And you need to prove that the processes have been followed. This way, the audit-proof way of creating processes is to add stuff into them that can easily be proven to have happened, and that does not generate too much documentation load when proving it. This means that the processes are written for the auditors, not for the people who should follow them. So the manageable way to do the process dance is to have the written process generating the documentation, and an actual process generating the product. Efficient.

And when the auditors pop by, they always find issues (probably the easiest job in the world, finding them), which need to be rectified by including something new in the process, which again grows the part done for the auditors. If you actually find something you should improve on, you must not write it into a process, because then you need to prove that you are following it, which means arbitrary metrics and more documentation. And by including stuff in the process, you also tie your own hands, unable to improve and change it efficiently, so better not to include it. A real atmosphere for continuous improvement.

You may at this point think: what is so bad about writing documentation? I'll tell you what. First, anything you write into a document needs to be carefully considered for how the auditors would interpret it, which is really burdening. Secondly, all documents need to be reviewed and authorized after EVERY change you make. Which is horrible. Thirdly, there is a 99% chance that no one will ever read those documents, which is not the most motivating thing either.

I could rant about so many things, but as this is a testing blog, I'll switch to that now.

Nothing, absolutely nothing, is as badly misunderstood by the auditors as testing. Testing, for them, means verification of requirements via preplanned, pre-reviewed, extremely detailed test cases, with descriptions of what to do and what the expected result is. The tests must be run exactly as detailed, and objective evidence needs to be provided for each result. So if you go and execute a test case, and everything works perfectly but some step cannot be followed exactly the way it is written, you need to rewrite, re-review, and rerun the test. The only way to survive this is to write the test case while executing it against the readily built system, writing the step-by-step instructions as you go. And even then, it is really hard to write and really burdening to maintain. The tests are not designed or executed to find problems, but to prove that those tests can be run. And because of the huge amount of work put into them, you generally want to keep the number of new tests as minimal as possible. The actual test quality or coverage is not that important anyway.

This has nothing to do with SW testing, but this is what auditors require. As a funny anecdote: while I was trying to push for more high-level tests and more skills for the testers, we got a warning from the auditors stating we needed more detailed test cases, detailed to the level that anyone could run them. What probably makes it even more painful is that, as there are a lot of scientists in the medical domain, there seems to be a general understanding that good testing means and requires exploration and investigation. But that is not at all what the auditors expect. And it is hard to keep pushing and pushing to do a good job in testing, while what is expected from you is just a demonstration of bad checking.

Funnily enough, in my work as a SW tester, nowhere else have I tested as little as I tested here. Or perhaps funny is not the right word.

Talk about a tester's hell.

I must raise my hat to my colleagues who keep on trying to do the best possible work under these conditions. Who try to do everything with the customer in mind, while trying to please the auditors. I could not, and will now try to do my bit in changing the way auditing is done in the medical domain, from the outside. I hope that I can return to the domain one day, either as a stronger person, or preferably to a domain where the regulations are actually working.

If any auditor by any chance happens to read this: let's talk, please. What we need in this domain is more communication and collaboration, and focus on what it really is we are trying to do: provide safe, working products to people who should be able to rely on them.