Sunday, September 2, 2018

The way our team's work week works

Our normal work week is currently organised like this:

Monday morning, and the #weeklyStartUp

We start the week with a short 15-minute meeting in which the whole team (20 people) participates, called "The weekly startup". In that meeting we briefly go through our main goals for the week and decide what kind of work groups should solve them. People decide for themselves which goals they want to work on, and based on that we form our work groups.

This goal list is a one-page Google Doc with 5-10 goals, each written in the format "do what, in order to get why accomplished".

The goal list is not meant to include everything anyone is going to work on, just the most important goals. We allow and expect people to engage in various other things too, as long as those don't come at a big expense to the main goals.

Monday-Thursday, we work

Work groups have full freedom and responsibility to plan, implement, test, deploy, and communicate what they believe is necessary to accomplish the goal. I usually encourage each group to:
- create a high-level plan of what steps need to happen so that their vision is fulfilled
- visualize and track work in progress in their preferred tool
- do some mob programming, or else sync up often

And we have a set of common ideas for group work.

But in the end it is up to each work group to decide how they want to work.

Friday morning, we #showTheTeam

We have a 1-hour meeting called showTheTeam (STT) that is optional, but that each team member still participates in. Here each work group shows and tells how the goals have progressed, and anyone can freely show anything else they have done or learned during the week.

Friday afternoon, some #prePlanning

We have a ~1-hour meeting where we try to come up with the goal list for the next week's startup meeting. This meeting is also open to all, and usually about half of the team joins. Here we discuss and debate what we should work on next, with the help of a Google Doc we call "The list of things", plus people chipping in with other things they think should be done next.


Have fun!

And maybe write a blog post...

Our idea of group work

Our team has a practice of dynamically splitting into work groups to solve different goals. Grouping happens through our normal weekly process, following these common ideas:


1. Each bigger task or project should be done within a work group of at least 2 devs + QA + BA

Because we believe that in the long run it is through teamwork that we can "deliver fast and with good quality every time".


2. A work group is formed by people (possibly suggested) voluntarily deciding to work on a goal, while still ensuring the group has enough competence.

Because being able to affect what you work on is good for your motivation. And motivation is key.


3. The work group is responsible for deciding, planning, implementing, and releasing the next most important task(s) in the project.

Giving freedom and responsibility to the people who know best.


4. Planning is done with the whole group: when starting on the next task(s), or whenever there is otherwise a need to plan.

“Plans are useless but planning is indispensable”

Planning within the group keeps everyone aligned and provides many perspectives.


5. It is good to have both consistency and some variation in the work groups (not always the same members, but not always different members either)

Because focus is important. And variation keeps you fresh and away from tunnel vision.


Sunday, June 24, 2018

An uncalled-for rant on testing metrics

Uncalled for because we all already know this, right? Great people such as Cem Kaner told us about it a long time ago.

Uncalled for as I haven't used these metrics in years.

Written because I was just asked by management to report product fault statistics.

Using any sort of defect/bug/fault count related metric in software development is harmful.

Bug counts fall apart right away, as they try to quantify something that is inherently qualitative. Additionally, they make people focus on reporting problems rather than solving them. And really, bug counts tell nothing about the product that the people involved wouldn't already know.

The only good thing about bug statistics in software development is that they give test managers a very easy way to provide meaningful- and professional-looking but totally hollow metrics.

And that is not good.

Using any sort of test case count related metric in software development is harmful.

A test case is not a specific thing. A test case can be good, bad, incomplete, false, useless, misleading, dishonest, etc. A test case may be small, large, expensive, cheap, etc. And no amount of test cases will ever be "all the test cases". Counting them gives you a sum that says nothing about how well the tests cover the product under test.

A passing test may mean that the test, the tester, the system under test, or the circumstances of the test were wrong or right. A failing test means the same. Counting these says nothing about the quality of the product under test.

The only good thing about test case counts is that they give test managers a very easy way to provide meaningful- and professional-looking measures of testing progress that actually have no substance.

And that is not good.

Want to get information about the quality of your product and process?

Ask the developers, testers, sales people, customers, and end users. Investigate the root causes of problems. Hold retrospectives. Analyse logs and usage of the system.

Do the work. Don't settle for defect and test case counts just because those are easy.

Sunday, June 10, 2018

I don't report bugs

I don't report bugs. "Bug" is such a loaded word, understood so differently by different people, that instead of using it and explaining what I mean by it, I'd rather just use other words: observations, thoughts, surprises, ideas, alternatives, or something similar. (And no, I don't use fault, defect, or error either.)

"Bug" also has quite a negative connotation. "Reporting a bug" is a bit like telling someone that they've been served. And as we are actually giving away the gift of information, why wrap it in such a nasty package?

And maybe more importantly, it is very likely that whatever you have to say is wrong. If not plain wrong, then at least incomplete. So I like to approach these situations with the assumption that I am probably wrong. Cutting out anything that might sound arrogant makes things quite a lot easier, especially after you later realise that you have indeed been wrong.

I leave plenty of observations unreported. I don't want to waste my or my colleagues' time on stuff I believe will not lead to any action. If I see no risk and little or no value, I just drop it. Someone might think this is irresponsible; I consider it professional.

I don't write reports. Writing a great report takes a lot of time, and it still might very easily be understood differently. So I'd always rather go talk to someone, or ask someone to come take a look. Demoing a behaviour to someone is faster, enables better understanding, and with other people you can usually find the possible root causes faster.

Still, sometimes for various reasons I may write something. If I do, I:
1. Keep it short. People will lose focus on, or not even read, long texts.
2. Tell what the observation is: what happened. E.g. this happened, although I would've expected that.
3. Tell why I think it is interesting. But briefly.
4. Give a few details: an ID, a log snippet, a link. Just enough to understand how the observation could be seen.

But I will not write this into a backlog or defect tracking system. It's such a sad thing to see those beautiful bug reports getting buried in defect tracking systems or backlogs, forever to be forgotten. So I'd rather put it in a chat, where someone might even comment on it.

If I can wrap all this into a failing automated test, I might. But then I often feel like I am spending a bit too long on my own, missing the communication. Plus, if I'd go this far, why not just go fix the problem myself, or by pairing up with someone?

Can you imagine: I used to lecture about bug reporting, and I used to arrange competitions for the best bug reports. And now I won't even write them anymore.

And I love it.

Wednesday, May 30, 2018

10 things to help you suck less in prioritisation

Improvements in how things are being done don't help that much if you are doing the wrong things.

Focusing on cutting down the deployment/production pipeline, using the latest and greatest languages and tools, exploratory testing, mob programming, etc. will surely boost efficiency. But efficiency is not key if you are doing the wrong things.

And quite often we are.

And a big reason for that is, that we suck at prioritisation. We suck at it because we:
- spend too little time on it: "But we could save minutes of talking by hours of coding!"
- do it too rarely: "Welcome to our annual roadmap revision meeting."
- try to have specific people/roles be responsible for it: "Ask the PO..."
- do not think about different dimensions enough: "But the customer needs it!"

But mainly we suck at it because it is so hard.

Here, though, is a list of 10 things I think might help.

1. Don't keep a big backlog. Focus on the things being done now, and on the few things to do next. Forget the rest.
2. Do not rank things with labels; instead, just rank them in order. We have all seen too many priority-1 projects.
3. It's ok and good to have visions and high level plans for longer times. But don't put them on a "roadmap" and then forcefully execute that.
4. Avoid long and seldom happening prioritisation meetings. Instead prioritise often and ad-hoc.
5. The value is in the idea, not in who presents the idea. Let the ideas compete, not the people.
6. Do not only consider the cost to build, but also the cost of delay (how much do we lose or not gain while this is not done), opportunity cost (what else could we be doing instead of this), cost to maintain, etc.
7. Do not only consider the value to the customer, but also team motivation & wellbeing, code and system infrastructure simplicity, brand, support, relationships between stakeholders, etc.
8. Involve everyone in prioritisation. It's hard. It's messy. It's important.
9. Try to get everyone to understand why we decide what we decide. Not everyone needs to agree, but understanding is very important.
10. Look back at the good & bad decisions. What helped you select the right thing back then? Why the hell did we end up doing that?
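To make point 6 concrete: one common way to weigh cost of delay against cost to build is the CD3 heuristic (cost of delay divided by duration). This is just an illustrative sketch with made-up task names and numbers, not something we actually automate:

```python
# Rank hypothetical candidate tasks by CD3: cost of delay divided by duration.
# All names and numbers below are invented, purely for illustration.

tasks = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("new checkout flow", 8000, 4),
    ("fix flaky deploys", 3000, 1),
    ("rebrand emails", 1000, 2),
]

def cd3(task):
    """Cost of delay divided by duration: loss per week of waiting,
    per week of work needed to remove that loss."""
    _, cost_of_delay, duration = task
    return cost_of_delay / duration

# Highest CD3 first: the cheap task blocking an expensive delay wins,
# even though the checkout flow has the biggest raw cost of delay.
ranked = sorted(tasks, key=cd3, reverse=True)
for name, cod, dur in ranked:
    print(f"{name}: CD3 = {cod / dur:.0f}")
```

Note that this produces an order, not labels (point 2), and it only captures one dimension; the motivation, brand, and relationship factors of point 7 still need human judgement.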

Why ten? It sounds nice, fits on a board made of stone, and this post was already very late due to some bad prioritisation.

I'll try to do better next week..