It is unfair to judge a candidate through just one challenge or exercise, but imagine that you are in a (non-violent & harmonious) Squid Game situation and, as the hiring manager, you were allowed only one Testing challenge to pose to the candidate.
What would that be, and why?
Something that is related to the Testing craft, can be applied agnostic of the candidate's experience level, and can be used as a vehicle to elicit their core testing mindset.
For me, it goes something like this…
- I will draw a whiteboard diagram of the product or system under test
- I will explain a typical end-to-end use case of the product/system
- I will explain the integrations and touchpoints that the system has with other sub-systems/products
and then I would commence the challenge with an open-ended question:
“What do you think could go wrong with this Product/System?”
Good testers, whom I have had the fortune to hire and work with, usually engage with this exercise along the following lines:
- They will probe the context in which this question is being asked and try to understand what “wrong” means here. Are we talking about functionality going wrong? Scalability of the system? End-user experience? Data integrity? Security of the components? Deployment and availability?
- They will try to understand how, and at which stages, a human interacts with the system, and in which roles (UI end user, admin, deployment, tech support)
- They will ask counter-questions about how data flows through the system. Architecturally, how do the integrations work, and to which spec? Is there a shared understanding of the API specs? Which operations can be performed on the data? Where is it stored? How is it retrieved and displayed?
- They will inquire about testability and monitoring of the system and its sub-components. How do I know data has travelled from A to B in the system? What does A hear back from B when the transaction finishes? How are errors logged, retrieved, and cleared?
- They will frame questions around understanding change to the system. What is our last known-working version in this context? Which patterns of past failures might be relevant here? How do we track changes to the code, config, and test environments of the product/system?
- They will try to establish the failure modes of the system's components, how to simulate them, and how to deploy and redeploy the system
- They will delve into what happens when parts of the system are loaded or soaked, e.g. exposed to heavy user interaction, voluminous bulk-data transactions, or constraints on infrastructure availability/scalability
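Several of the lines of questioning above — does A hear an acknowledgement back from B, and does a failure leave an observable trace rather than vanish silently — can be turned into executable checks. A minimal sketch, using two hypothetical components `ServiceA` and `ServiceB` (names and behaviour are illustrative assumptions, not any real product's API):

```python
# Illustrative sketch: hypothetical components A and B, with a switch
# to simulate B's failure mode (a timeout) and verify observability.

class ServiceB:
    """Hypothetical downstream component; can be forced to fail."""
    def __init__(self, fail=False):
        self.fail = fail
        self.received = []

    def accept(self, record):
        if self.fail:
            raise TimeoutError("B unavailable")
        self.received.append(record)
        return {"status": "ack", "id": record["id"]}

class ServiceA:
    """Hypothetical upstream component; sends data to B and records errors."""
    def __init__(self, b):
        self.b = b
        self.errors = []

    def send(self, record):
        try:
            return self.b.accept(record)
        except TimeoutError as e:
            self.errors.append(str(e))  # failure is logged, not swallowed
            return {"status": "error", "id": record["id"]}

# Happy path: A hears an "ack" back from B, and B actually holds the data.
b = ServiceB()
a = ServiceA(b)
assert a.send({"id": 1})["status"] == "ack"
assert b.received[0]["id"] == 1

# Simulated failure mode: B is down; A surfaces the error and leaves a trace.
b_down = ServiceB(fail=True)
a2 = ServiceA(b_down)
assert a2.send({"id": 2})["status"] == "error"
assert a2.errors  # the failure is observable after the fact
```

The point of the sketch is not the toy classes but the pattern: a tester who asks "how do I know the data arrived?" and "how do I simulate B being down?" is implicitly designing checks like these.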
These are just some of the rudimentary but important aspects of critical thinking that I would expect from promising or established Testers.
Of course, a holistically capable Tester's skills go way beyond the above points, but this challenge has served me as a handy screener during interviews and usually sets the trajectory for the remainder of the conversation.