Dishonest Performance Metrics

Dishonesty in metrics occurs when Team metrics have ambiguous goals, are used to achieve ends other than those they were introduced for, are applied arbitrarily, or cannot be traced to business outcomes or personnel growth.

Dishonest metrics arouse frustration, instil fear or, even worse, perpetuate dishonesty in staff (e.g. when people start to game the metrics to position themselves for professional gain).

Good Leaders work hard to ensure psychological safety and transparency during the introduction, collection and application of metrics.

They usually have (or can facilitate) compelling answers to the following…

  1. Why are we using this metric?
  2. What are we using this metric to inform… x) y) z)?
  3. What are we using this metric to NOT inform… x) y) z)?
  4. Who will view & use this data? Who will not?
  5. How will we measure the effectiveness of the metric itself?
  6. How will we measure the ineffectiveness of this metric?
  7. What is the business impact if we don't have this metric?
  8. How will we look for better metrics (5, 6, 7 apply again here)?
  9. Can I propose a Team metric (5, 6, 7 apply here again)?
  10. Which metrics are common across Teams or across the Org structure? Which ones are unique? Why?

Looking for adaptability while hiring Project Leaders

“Agile/Lean” ways of working are fast becoming the norm in enterprises (especially software), but you still see emphasis on a “control, predict and linear plan” style of project management.

Project management mechanics are important, and there is no denying that delivering in sprints is not a guise to avoid planning altogether. However, I have observed a lot of senior managers still biasing towards hiring Project Leaders who exude the highest level of comfort in predicting time, cost and quality.

This mindset undervalues, and hence under-incentivizes, the “adaptability” aspect of project management (irrespective of whether you are delivering in increments or as a waterfall).

Effective Project Management in uncertain operational environments is as much about gathering empirical evidence after incidents, adapting to change and constantly prioritising scope, as it is about predicting the trajectory of a project, mitigating risk & exercising control.

While hiring Project Managers, look for thought leadership on adapting to uncertainty & change during interviews (in addition to depth in facilitation, process mechanics and framing predictability). Some questions I like to pose:

“How do you estimate features (of a size & scale) that have never been delivered by your Team before?”

“What will be your Project Delivery strategy for a mandated rewrite of your product’s database schema?”

“As Project Leader how will you deal with a spate of critical bugs found immediately post a major release?”

“How do you re-plan your project when your dependent peer Team is blocked for weeks?”

“How do you deal with a key integrated sub-system suddenly not being supported by your vendor anymore?”

“A newly hired Lead Engineer on your project turned out to be a brilliant jerk and their peers are leaving your project, how will you resolve this situation?”

Root patterns of organisational silos

The single most influential factor that dictates whether organisations succeed at their goals is the ability of their business units/Teams to work together in alignment towards those goals.

I have learnt that behaviours that bring misalignment, e.g. siloed behaviours (defensiveness, back-stabbing, closed attitudes towards new ideas, ambiguity on accountability), are symptoms of something systemic occurring in the Teams, Business Unit and/or the firm that has allowed that silo to take root, establish and sustain itself.

As an independent consultant, I am often placed on assignments to effect cultural change (advocating shared ownership of Quality, introducing new Testing approaches or tools, performing a Test practice assessment), wherein I often encounter the above-mentioned siloed behaviours.

My approach to chipping away at these silos is to observe patterns of systemic issues and frame them as themes of root causes that need to be addressed to initiate that cultural change.

Here are some of the themes I have encountered so far that have helped me understand the root causes of siloed behaviours in organisations:

1. Silos due to lack of psychological safety

Manifest as…

My resources will be poached if anyone outside our unit gets a whiff of the initiative

I invariably get attacked if I approach them with new ideas on how they could improve their Team processes

What will it mean for my job’s prominence if we collaborate with that Team?

Will I be punished if I move out of my lane?

2. Silos due to following the path of least resistance

Manifest as…

This is the only way we can get anything done in this company

My last boss said this is OK, she will handle the consequences !

We are only responsible for these areas of the stack; unhappy customers are not my problem

3. Silos due to lack of coaching

Manifest as…

This is the way we have always run this Team !

I do not know of another way to do it

We are always busy and there is no time to reflect and improve

Categorising siloed behaviours into these themes helps me contextualise them, trains my mind to view silos through the lens of systemic issues, and helps frame solutions as:

What steps do I need to advocate to increase the psychological safety of these Team members?

Do I need to agree on & document acceptable ways of working first, before moving ahead with the project?

Is there a people-coaching need here rather than a project resourcing need? Who needs coaching, on which aspects?

…rather than viewing siloed behaviours as lazy choices that Teams or individuals make naturally to avoid accountability.

If you just had 1 Testing question to hire/reject a QA candidate

It is unfair to judge a candidate through just one challenge or exercise, but imagine that you are in a (non-violent & harmonious) Squid Game situation and, as the hiring manager, you were only allowed one Testing challenge to pose to the candidate,

what would that be and why?

Something that is related to the Testing craft, can be applied agnostic of the experience level of the candidate, and can be used as a vehicle to elicit their core testing mindset.

For me, it goes something like this…

  1. I will draw a whiteboard diagram of the product or system under test
  2. I will explain a typical end to end use-case of the product/system
  3. I will explain the integrations and touch points that the system has with other sub-systems/products

and then I would commence the challenge with an open-ended question:

“What do you think could go wrong with this Product/System ?”

Good testers, that I have had the fortune to hire & work with, usually engage with this exercise along the following lines:

  • They will probe more on the context under which this question is being asked; they will try and understand what “wrong” means here, i.e. are we talking about functionality going wrong? Scalability of this system? End user experience? Data integrity? Security of the components? Deployment & availability?

  • They will try & understand how, and at what stages, a human interacts with the system, and in which roles (UI end user, admins, deployment, tech support).

  • They will ask counter-questions on how data flows through the system: Architecturally, how do the integrations work, and to which spec? Is there a shared understanding of API specs? Which operations can be performed on the data? Where is it stored? How is it retrieved & displayed?

  • They will inquire about testability & monitoring of the system or its sub-components: How do I know data has reached from A to B in the system? What does A hear back from B when the transaction finishes? How are errors logged, retrieved, cleared?

  • They will frame questions around understanding change to the system: What is our last working version in this context? Which patterns of failures in the past might be relevant here? How do we track changes to the code, config and test environments of the product/system?
  • They will try & establish the modes of failure of the components of the system, how to simulate them, and how to deploy and redeploy the system.
  • They will delve into finding what happens when parts of the system are loaded or soaked, e.g. exposed to heavy user interaction, voluminous transactions of bulk data, or limited infrastructure availability/scalability.

These are just some of the rudimentary but important aspects of critical thinking that I would expect from promising or established Testers.

Of course, a holistically capable Tester's skills go way beyond the above points, but this challenge has served me as a handy screener during interviews and usually sets the trajectory for the remainder of the interview.

Should your Team have a dedicated Scrum Master or Agile coach?

Temporarily – yes

Dedicated to your Team full time, as a permanent role – respectfully, no

From my agile practitioner experience, I believe there are two reasons for not having a dedicated full-time Scrum Master or Agile coach on your cross-functional squad/Team:

  1. Coaching needs are inherently impermanent.

For example: a Senior Team member needs coaching to get better at facilitation; a Tester in the squad needs coaching on determining the best Testing approach; the Team needs coaching on how to provide estimates to Business users/Project Leaders; a Senior Tech Lead needs coaching on aligning product roadmaps with other Teams.

Coaching needs like the above have (and must have) a life cycle, roughly wherein:

a) A coaching need is detected

b) Coach facilitates discovery & framing of the root cause, metrics of success are established

c) and then , experiments/solutions are tried over an agreed time frame to meet the coaching need

d) and then, at the end of the cycle either you have fulfilled the need or have surfaced sub-problems/impediments that may not be coaching needs but organisational/systemic problems

(e.g. an external dependency that cannot be resolved, the Team can now self-organise to run effective meetings & does not warrant further coaching, line management escalation is needed, a Team resourcing issue, etc.).

2. Scrum Mastering is not a front for off-loading “admin” work

Let's first define “admin” work. I call it “common work”:

Work resulting from agile rituals that –

a) repeats every release cycle

b) has connotations of not being intellectually rewarding

c) might not align with your core competency/background

e.g. maintaining your JIRA board on a daily basis, running effective playback sessions, facilitating a post-mortem, organising meetings to resolve business priority conflicts, weeding the mid/long-term backlog (beyond the current release cycle), regularly communicating with the Teams that are dependent on your work.

Often, common work is seen by Org Leaders as getting in the way of achieving tangible Team outcomes and business value. Hence, they plug the gap by delegating common work to a dedicated role, so that the Team can focus on “real” work.

This is fallacious thinking, because well-executed common work:

firstly, benefits the whole Team by instilling software engineering discipline;

secondly, and importantly, allows the Team to get better at self-organising, inspecting & adapting to change, and taking ownership of aligning their work to business needs.

Getting someone else to do the thinking (all the time) on the Team's behalf stymies the ability of the Team to do it for themselves, and in its essence contradicts the bedrock of agility, i.e. forming self-organising Teams. It puts the Team in a disadvantaged position: if they don't have a dedicated person ensuring that the Team's common work keeps flowing, they will lose their throughput.

Having expressed all this, doing common work definitely requires tangible skills that need to be nurtured and practiced. That is exactly where coaching steps in, and it becomes a cyclic coaching problem (as described in point 1 above).

Starter pack on Penetration/Security Testing for newbies

As an experienced Tester, recently I have been endeavouring to grow my Penetration & Security Testing skills.

As with any new skill-set, the journey can get overwhelming very quickly because of the vast number of concepts, new terminologies, and the lack of dedicated mentorship and research sources.

Based on my learning and explorations over the past few months in the Pen Testing & Cyber Security realm, I am putting together a table of learning goals and resources that I hope will help Testers start out on their journey in Pen Testing.

This is not by any stretch a replacement for real-world project experience or structured certification training like OSCP, but is rather aimed at full-time Test Professionals who, on the side, are interested in learning about security challenges & Pen Testing for Web, Network and Mobile apps.

Learning goal/research topic –> Resources

What are some of the most common security weaknesses out there?
OWASP Top 10
How can you inspect HTTP requests/responses, view source code, manipulate cookies etc using Chrome Dev tools ?
Why is Kali Linux so popular for Pen Testing practitioners ? How can you install Kali Linux using Virtual Box ?
Set up your own instance of Kali Linux and, if you are new to Linux, it is handy to go through this –>
Where can you find apps that are deliberately vulnerable ?
The common Pen Testing approach for all the tool sets below is –
You have a machine + OS (like Kali Linux) to be your “attacker” machine, i.e. the machine from where you run the tools to find weaknesses in the “target” machine, i.e. a machine hosting the vulnerable app.
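To make the attacker/target idea concrete, here is a toy TCP connect scanner in Python. This is a minimal sketch only, nowhere near a real tool like Nmap, and the host and port list are hypothetical examples:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # scan a few well-known ports on the local machine
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real scanners add SYN scanning, service fingerprinting and evasion on top of this basic idea, but the attacker-probes-target loop is the same.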
How do you scan a web app for vulnerabilities ? Start with ZAP proxy –
Application of ZAP proxy to detect common weaknesses in Web apps
then explore Nessus –
Why does everyone rave about Burp Suite?
What capabilities does it provide to perform scanning and penetration attacks ?
Starting with Burpsuite ->

OWASP Top 10 detection using Burpsuite –>
this is quite intense, but well worth the learning
What is Network reconnaissance ?
Which is a beginner’s tool to scan your network for gathering information ?
Watch this series of excellent tutorials on Nmap from YouTuber HackerSploit
Are there any tools solely focussed on trying to exploit SQL databases?
Yes, SQLMap is one that comes preinstalled on Kali Linux, which you can use to try & penetrate a vulnerable website
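The class of flaw SQLMap automates probing for can be demonstrated in a few lines of Python against an in-memory SQLite database. The users table and credentials here are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation lets attacker-controlled input rewrite the query
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterised queries keep input as data, not SQL
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# Classic tautology payload bypasses the vulnerable check
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True: authentication bypassed
print(login_safe("alice", payload))        # False: payload treated as literal text
```

SQLMap fires hundreds of payload variations like this one (and blind/time-based variants) at a target URL and reports which ones alter the query's behaviour.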
How do you get started with Android Pen Testing? Start by understanding the Android architecture and how Android apps are built.

Use one of the traffic sniffing tools ( e.g Burp Suite proxy) to intercept traffic from an Android app

This is intense again, but going through these tutorials really helped me get an understanding of common Android vulnerabilities and how to detect them.

How do you reverse engineer apk files and study application code for static verification?
APKTool and JADX GUI are the two reverse engineering tools that I used

Are there any “Security as a Service” type scanners for apps? I explored and played with three –

Python based and you have to install it locally

Ostor Lab – A cloud based service where you can upload your app and run vulnerability scans on it

Immuni Web – Another cloud based service

Other tools that I have come across but have not used yet
Infection Monkey – Simulates breaches & attacks on your Network
Going deeper into Mobile Application Security

This book by the OWASP Team is excellent and has great hands on material
Self Training and hacking practice platforms: I have primarily used TryHackMe and their paid service, and found it well worth the $10 per month that they charge

There is another one I have come across but not used yet –

Testing is “easy”

Testing is easy, you just have to…

1. Elicit user needs from missing or no requirements (usually in a single Tester Team)

2. Be great at analytical thinking and detecting your own biases 

3. Analyze and understand end to end architectural risks

4. Analyze and understand end to end business process 

5. Be resilient in the face of “why didn't you catch it?” probes

6. Be adept at creating effective test data

7. Excel at communicating technical issues to business folks and vice versa 

8. Be nerdy enough to analyze lots of PROD data to inform your tests 

9. Be informed enough to know which logs to dive into for which errors 

10. Coach peers on effective testing ( vs. just breaking the system) 

11. Constantly look to automate repetitive tasks, reduce Testing related waste

12. Facilitate discussions and manage stakeholder expectations on effort, scope and risks of Testing effort 

13. Report progress on Testing , adapting to the context of the audience, project, company culture and tech stack.

14. Determine what (code/environment/user behavior/test method/dependencies/integration interface/data/cognitive interpretation) changed since last time ?

15. And, how to quantify that change, to prove that it is faster/slower/better/less usable/non-compliant?

Organisational QA/Testing smells

Along the lines of “Code smells”, QA/Testing-related smells, in my experience, fall into these 4 broad categories (of root cause(s)):

  1. Apathy – Disregard for Testing as a function/craft
  2. Hubris – Talent or position driven blindspots that lead towards flawed decision making
  3. Ignorance – No one has shown them how to do better or Team members lacking (certain) Testing mindset
  4. Helplessness – Cognitive exhaustion from pushing back against immature SW practices or organisational dysfunction

Compiling a list of verbatim/observational “smells” I have come across in my Testing career so far 🙂, including some that I myself have been a culprit of!

Feel free to add yours in the comments below , thank you

I'm sure some will resonate as stereotypes; hopefully some are new to you (hence something you might want to watch out for).

  • Customers wont use it “that” way
  • You are testing too early
  • (corollary) You tested too late
  • Why would you be needed in the design session ?
  • Why would you be needed in the code review ?
  • Why would we be needed in the requirements gathering session ?
  • I challenge you to break it
  • (corollary) No customer has complained so far !
  • Look ( pointing to their IDE) , works
  • Try now <keyboard clatter> ..Try now <keyboard clatter> ……………..Try now <keyboard clatter>
  • Did it just break ?
  • When was it last working ?
  • I did not change anything
  • how do I know what changed?
  • I can't tell you which tests you should write, that's your job!
  • It's a big change, we just have to do it all in one go
  • How do I see the back end errors ?
  • What does “Unhandled exception , contact your System Administrator” mean ?
  • How do I know if all these errors are related?
  • This keeps happening but I just cant make it happen at will
  • OK, I can't tell what else is broken
  • This aaaaawwlllways breaks
  • We must always take 3 days to retest everything
  • I am a “manual” Tester
  • (corollary) I am an “automated” Tester
  • Every developer must be experienced in Automated Testing
  • (corollary) Ma..look no Testers needed !
  • (corollary of corollary) <this approach/tech> will replace Testers
  • No, DONT try changing this config file
  • (corollary) why do you need access to the build pipeline ?
  • It will be faster in this release
  • Test it while I document the design
  • (corollary) will document it, only if I have time
  • I will refactor it in one go
  • This was never meant to be in scope
  • Why do we just have 157 test cases for this project?
  • (corollary) We are 100 % PASS
  • (corollary of corollary) We were 100 % but this time we are 87 % PASS
  • (corollary of corollary of corollary) If we go down from 75 % we can’t ship
  • This environment is just for development
  • (corollary) It will be “different” in PROD
  • “Don’t worry about integration yet” ( Team 1) ……tududududu… ( 2 weeks from Go Live) … Team 2 – “no one told us about these changes”
  • Must be the database box
  • (corollary) Must be permissions
  • (corollary of corollary) Must be a known issue
  • These are my automated tests, I don't need them in version control
  • (corollary) Test code isnt “production code”
  • (corollary of corollary) Our Team only ships application code
  • This is how Agile is
  • (corollary) This is how Agile Testing is
  • (corollary of corollary) This is how <insert prevalent industry term> Testing is
  • Because, Docker
  • (corollary) Because, <some new tech>

List to be continued…..

Quick starter – Web automation using Playwright

Playwright is a (relatively) new kid on the block, joining several other kids already on the block: the JavaScript-based automation frameworks.

I learnt about it from a mention on Twitter and thought to give it a whirl.

This post is purely meant to be a sketchy guide on getting started with it (with less focus on its overall architecture & philosophy; for that, start with the docs here).

Toolkit for this post –

  • JavaScript as the programming language. Playwright has a Python based library too !
  • VS Code as IDE
  • Chrome as the browser
  • OS = macOS

Installation and set up –

  • Install playwright framework using npm on your terminal –> npm i -D playwright
  • Check Node.js has been installed successfully and is in your path –> running node on your terminal should open the Node REPL prompt. Ctrl+C twice to exit.
  • Lets create a simple .js file that we will use to write our test and run it using Node.js
  • Ensure Google Chrome’s Canary build is installed at the default location –> /Applications/Google Chrome

Build code for your test –

  • The test: visit a contemporary e-commerce web site, browse for some items, then log in with a username & password combination; the combo is incorrect, so assert on the error message displayed
  • Code pattern followed is async & await
  • Declare that you will use chromium based browser for your testing –>
  • Declare that you will use inbuilt assertions in your test –>
  • Write a snippet to launch an instance of Chrome, headful, by providing a path to the chrome canary and navigate to the webapp under test.
  • Selecting elements is a breeze and there is no need to write waits! You can select elements based on text easily
  • Find our dynamic error message on a failed login, get its content and perform a rudimentary assertion on it
  • Run ! –> node <file_name>.js

Putting it all together …

const { chromium } = require('playwright');
const { assert } = require('console');

(async () => {
// launch a chrome instance from the path provided, not in headless mode and with the slowMo value set for slower playback
const browser = await chromium.launch({
  headless: false,
  executablePath: '/Applications/Google Chrome',
  slowMo: 500
});
// playwright has the concept of giving context to your browser object
const context = await browser.newContext();
// from a context, spawn your page object, the primary medium to perform browser automation
const page = await context.newPage();
// let's head over to the home page of our website (placeholder URL, the original was lost in formatting)
await page.goto('https://<site-under-test>');
// oh, dealing with a pesky pop up is easy peasy, did not have to write waits etc, just had to use the text of the button as a selector!
// playwright's inbuilt auto-wait capability
await page.click('text=No Thanks');
// perform navigation to another page of the app using text as a selector.
// More on selectors in the Playwright docs
// head over to the login page (placeholder URL again)
await page.goto('https://<site-under-test>/login');
// fill the email field, selected by id (the email value was elided in the original)
await page.fill('id=LoginForm_email', '');
// another element found easily by id and text entered
await page.fill('id=LoginForm_password', 'tununutunu');
// find and click the login button (selector assumed here, the original line was lost)
await page.click('text=Login');
// let's find the text contents of the selector below, just have to pass the selector to the page.textContent method
const login_error = await page.textContent('#form-account-login > div:nth-child(2) > div:nth-child(2) > div');
// perform a simple assertion
assert(login_error == 'The email address or password you entered is incorrect. Please try again.');
// we are done 🙂
await browser.close();
})();

Initial Playwright experience

  • Found writing the test very intuitive
  • Loved not having to write explicit waits
  • Stoked about the straightforwardness of dealing with selectors

Will definitely explore Playwright more !

Lessons learnt from a POC to automate Salesforce Lightning UI

My recent client work has been on testing a migration (data & business processes) to the Salesforce CRM platform.

As part of Test execution, I took the initiative to build a POC to exercise automation of Salesforce both by interacting with the Lightning UI and the APEX Salesforce API interface.

This post is to share the hurdles I faced and lessons I learnt in building the POC for UI automation.

1. Choice of tools – Cypress & Selenium WebDriver

I exercised two tool sets that I am experienced with for UI automation – Cypress and the Selenium WebDriver API (using Python).

I could not go far with Cypress, as it has limited support for iframes (by design), covered in this open issue thread.

Basically, as soon as I automated the login process, I hit an error where Cypress terminated with the message “Whoops there is no test to run”.

I tried some of the workarounds mentioned in the thread that worked for some folks, but with no success.

So, once I exhausted my time box for Cypress, I moved on to Selenium WebDriver.

2. Bypassing Email verification after login

The first hurdle I hit was the email 2FA that was set on the Salesforce sandbox environment that I was testing.

If I had been automating at the API layer, there are various secure ways (e.g. API tokens) to authenticate, but with email verification from the UI, Salesforce bases it on IP addresses. So, to work around that, I had to create another test user & profile that either had my IP whitelisted or basically no IP filtering applied.

Instructions from here were helpful –>

3. Explicit waits & dealing with dynamic UI ids

It goes without saying that we have to explicitly wait for elements to load in order to create robust tests; I had to write heaps of explicit waits to interact with the Lightning UI (as it is a very “busy” UI).

Another interesting quirk I found was that, even though some of the elements I wanted to interact with had unique ids that I could use as selectors, those ids, especially for modal dialogs, were being generated dynamically, most likely per login, as I later discovered through flakiness in my tests.


That piece of code, although using an id as the selector, was flaky because WebDriver could not find the same id across sessions. The value “11:1355;a” would change to “11:3435;a” on subsequent test runs for the same modal dialog box.

So, instead, I went with a different approach: do not use ids for the dynamic modal dialogs, but search by XPath in this case, waiting for the element to be clickable.

That worked, and finally I was able to complete a basic UI automation flow of logging in, interacting with dynamic elements, adding user input and asserting some test conditions 🙂