the reluctant tester

Perpetual learner of the craft of Software Testing, Servant Leadership and creating better Teams

Observability – advocacy heuristics for Testers/Devs “new” to the system

One of the hallmarks of a highly observable system is that it enables new users to debug problems and find root causes faster, i.e. it shortens the learning curve for experienced Testers/Devs/Support Engineers who are new to that system.

However, as one becomes more knowledgeable about the inner workings of their system under test, their biases grow too, and they risk missing potential deficiencies in its observability.

For me, this bias came to the surface when coaching relatively new Testers, who did not have the specialised knowledge I had of the system's technical idiosyncrasies and undocumented workarounds. I was relying on my time with the system and glossing over observability issues that they were picking up during the coaching sessions.

Another effect of the bias was that the standard I expected from the content of the logs and the metrics presented for debugging was lower than what the (new to the system) Testers were expecting.

So, how does one reorient themselves and guard against this bias?

Some heuristics that I would advocate:

Observability advocacy is (also) about wearing the newbie Tester/Support Engineer/Developer hat and thinking critically about what would make them effective in their roles, sooner:

  • How could our logging be made better for them? Have we involved them in defining observability acceptance criteria for our user-stories?
  • How are we prioritising their observability use-cases and fulfilling those?
  • Will this new metric add value to new users? How do we measure that?
  • Can we change the format of our logs to make them easier to integrate with other apps/Teams (who are interfacing with our system)?
  • What are the pain points that new users face when troubleshooting support issues? How do we establish this feedback loop?
  • Are we missing aspects of the customer journeys that we don't log, or metrics that we don't gather (that would benefit new users)?
  • What do we choose not to log, and why?
  • Is the logging granular enough?
  • Do we have all the correct data sources for our metrics/logs?
  • Which new logging and metrics could we introduce for the Automated checks that they are writing?
  • Can we add more documentation/training around observability during their onboarding?
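As a concrete illustration of the log-format heuristic above: emitting logs as structured JSON lines (rather than free-form text) is one common way to make them easier for new team members and interfacing teams to search and parse. This is a minimal sketch in Python, assuming a JSON-lines log aggregator; the `correlation_id` field and the `"checkout"` logger name are hypothetical examples, not from the original post.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line.

    Structured fields mean a newcomer can filter logs with standard
    tooling instead of learning team-specific regexes.
    """

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Hypothetical correlation id, so one customer journey can
            # be followed across services by someone new to the system.
            "correlation_id": getattr(record, "correlation_id", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Extra fields attached via `extra` surface as structured data.
logger.info("payment authorised", extra={"correlation_id": "abc-123"})
```

Whether JSON (or another structured format) fits your system is exactly the kind of question worth putting to the newest members of the Team.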

How do you advocate for observability for newbies in your Team?


About Me

I’m Sunjeet Khokhar, an experienced People Leader, Practice Lead and Test Manager.

I am driven by the success of people around me, am a keen student of organisational behaviour and firmly believe that we can be better craftspeople by being better humans first.
