A genteel peek into GitLab CI/CD

My CI engine of choice & experience as a Tester has been Jenkins.

One of the strategic projects in the pipeline at my current client is to adopt GitLab as a solution for SCM, Continuous Testing and potentially Continuous Deployment.

And that project involves porting a Test Framework (that I was fortunate to help create) based on Python/Behave/PyTest running on Jenkins to GitLab.

Even though the project is still in the pipeline, I thought I would flirt with the idea of doing a wee POC to explore GitLab’s CI/CD.

Objective –

As a novice GitLab user, I would like to set up a trivial build pipeline, so that I can run a piece of Python code on every commit.

Approach –

1. Understand how GitLab’s CI/CD architecture works

2. Sign up for GitLab & set up a project

3. Set up a vehicle to execute your code (aka a “runner”)

4. Write instructions to build your pipeline (aka the “.gitlab-ci.yml” file)

5. See the magic happening i.e. output of your Python code being rendered

Step 1 – (A simplistic view of) GitLab’s CI/CD architecture 

To get a pipeline up and running, you need three components to be talking to each other.

  • A GitLab instance to act as a code repo and host of your project
  • A YAML file that has pipeline details like the platform to run on, build steps, shell commands etc., and acts as the orchestrator
  • A local or remote machine to check out code & run the instructions in the YAML file


Step 2 – Sign up and create a project in GitLab 

Sign in/register on GitLab here: https://gitlab.com/users/sign_in

and create a blank project –


Step 3 – Configure a runner  

I decided to use my own machine (macOS) as a runner, and these are the steps that I took:

  1. Install the runner – https://docs.gitlab.com/runner/install/
  2. Register your runner – This is a critical step that makes the GitLab instance aware of your local runner: https://docs.gitlab.com/runner/register/. I chose the shell executor, just for the purposes of keeping this a simple exercise. It is important to remember the tag for your runner here, as we will use it to call the runner from the YAML file.
  3. Enable this runner in your project settings and disable shared runners. Click “Expand” on Runners in the settings for your project and, on the next page, click “Disable shared runners”.
  4. If everything is set up correctly, you should see your runner being detected on the settings page. Note the tag “smoke” that I used in step 2 above.

 

Step 4 – Set up build script 

In this step we will add the .gitlab-ci.yml file to the project and make it:

a) call and run a Python file,

b) on the runner that we configured above.

  1. Go to your project homepage and click on “Set up CI & CD”.
  2. On the next page you will be presented with a web IDE to create the YAML file. I chose one of the many useful templates available for my script.
  3. My pipeline script is very simple and should be self-explanatory; note that the bit with the “tags” is what calls the runner!
  4. Lastly, add to the project the Python file being called in the pipeline script.
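For reference, a minimal .gitlab-ci.yml along these lines might look like the sketch below. This is an assumption-laden reconstruction rather than the exact file from my project: it assumes the Python file is called hello.py and sits in the repo root, and it reuses the runner tag “smoke” from step 3.

```yaml
# Hypothetical minimal pipeline: one stage, one job
stages:
  - test

run-python:
  stage: test
  script:
    - echo "Running the pipeline"
    - pwd
    - python3 hello.py   # assumed file name at the repo root
  tags:
    - smoke              # routes the job to the shell runner registered earlier
```

The tags entry is what ties the job to a specific runner; without it, the job would wait for any available runner matching no tags.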

 

Step 5 – Trigger the pipeline

The pipeline YAML script gets triggered automatically on every commit, or you can go to Pipelines on the left navigation menu and click the “Run pipeline” button.


Here is what the output looks like running on a bash shell: you can see the output of the echo command, the pwd command and the Python file!


So, there you go: we have successfully set up a basic pipeline in GitLab that runs a simple Python script on a bash shell. I have liked what I have seen of GitLab so far and will explore more.


Pen Testing reconnaissance 101: Using Nmap, Tor and ProxyChains

Learning objective : How can you perform reconnaissance on a remote target to check which ports are unsecured for possible exposure to network attacks?

Step 1: Create or choose an off the shelf Network Port scanner.

Based on my research and talking to more experienced peers in this space, I chose Nmap (https://nmap.org/), a free and open source network security auditing tool. It is very popular among researchers and professionals alike.

I’m using a Mac, and one can choose to install either using the DMG file or using Homebrew:

$ brew install nmap

Nmap has its downsides in terms of being “noisy” and easily detectable, given the amount of traffic it creates while performing its operations.

That brought me to step 2

Step 2: Find an anonymous way to run your network port scanner and perform reconnaissance in a more secure fashion

Find a secure “overlay” to pass your traffic through, so that you can use Nmap anonymously and not be exposed to exploitation yourself. The Tor network and its Tor browser are what I chose.

Here is an excerpt from an excellent intro guide on Tor:

You may know Tor as the hometown of online illegal activities, a place where you can buy any drug you want, a place for all things illegal.  Tor is much larger than what the media makes it out to be. According to Kings College much of Tor is legal.

When you normally visit a website, your computer makes a direct TCP connection with the website’s server. Anyone monitoring your internet could read the TCP packet. They can find out what website you’re visiting and your IP address. As well as what port you’re connecting to.

If you’re using HTTPS, no one will know what the message said. But, sometimes all an adversary needs to know is who you’re connecting to.

Using Tor, your computer never communicates with the server directly. Tor creates a twisted path through 3 Tor nodes, and sends the data via that circuit.

The core principle of Tor is onion routing which is a technique for anonymous & secure communication over a public network. In onion routing messages are encapsulated in several layers of encryption.

Step 3: Stringing above tools together to execute a reconnaissance

The plan from here is to call Nmap commands from the terminal and redirect traffic through the Tor network (which the Tor browser initiates when an instance is launched on the local machine – the default for Tor is 127.0.0.1:9050).

Further research led me to a useful tool called ProxyChains. It is a Unix-based tool that marries really well with Tor (it is configured to redirect traffic through Tor by default) or any other proxy; in fact, it can chain proxies together to redirect traffic out from your local host.

Note – For this part of the post I have not yet researched a Windows equivalent for ProxyChains, so the end-to-end solution is incomplete in that regard.

So,

a) install ProxyChains using Homebrew –

$ brew install proxychains-ng

b) install and run Tor service from the command line –

$ brew install tor

$ brew services start tor

c) choose a target , that allows ethical pen testing . I chose Nmap’s offering called – scanme.nmap.org

d) Go to the proxychains.conf file (usually found in the /usr/local/etc folder); if your installation was successful, you should already see an entry saying –

[ProxyList]

# add proxy here ...

# meanwile

# defaults set to "tor"

socks4 127.0.0.1 9050

e) You are all set now to run the nmap command through Tor. On your terminal, type ->

$ proxychains4 nmap -sT -PN -n -sV -p 21 scanme.nmap.org

The switches in the above command mean –

  • -sT – full TCP connect scan
  • -PN – do not perform host discovery
  • -n – never perform DNS resolution (to prevent DNS leaks)
  • -sV – determine service version/info
  • -p – ports to scan

i.e. we are scanning port 21 on scanme.nmap.org anonymously through Nmap to see if it is open or closed.

f) In the output, you will see that the request has been denied and the state of the port is closed.

So, there you are: a simple, basic pen test to perform port scanning in a “safe” environment.

Further considerations with this approach and homework –

It is common for hosts to block Tor endpoints; that is where ProxyChains comes in handy. You can chain one or more public proxy servers (anonymous as well) onto your Tor service.
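For the chaining homework, the [ProxyList] section of proxychains.conf simply takes one proxy per line; with the default strict_chain mode, traffic hops through them in order. A hypothetical sketch (203.0.113.10 is a documentation-range address, not a real proxy):

```
[ProxyList]
# local Tor service first
socks4 127.0.0.1 9050
# then a hypothetical public SOCKS proxy; traffic exits to the target from here
socks4 203.0.113.10 1080
```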

Port scanning through Tor is very slow, so I will have to find a more scalable solution when it comes to performing these kinds of tests in bulk.


Reflection :: The toll of Leadership and a year of being self employed

I have been very fortunate to be in leadership roles for 8 years now, ranging from Team Leadership, mentoring, to leading a practice (business line) of extremely competent Testers.

It has been in the top 3 fulfilling experiences of my professional and personal life. Seeing individuals succeed with (some of) your assistance, advice and guidance is what made leadership so satisfying to me. Putting others first, always, and shepherding them towards success is why I have kept leadership roles as a sought-after career path. Growing and developing individuals and Teams is a passion that blossomed, almost as a second skin, during these stints in leadership roles.

However, what I did not realize was that I had also started wearing the foggy lens of a “careerist” and exercising questionable judgement during that time. By that, I mean –

a) Attaching self worth to the extent of my responsibilities. More responsibilities, new strategic projects, bigger Teams to help lead, were all a measure of professional “success” for me.

b) Incessant intellectual restlessness until that bar of self worth was reached, and, after every milestone, finding that the bar had just got higher, i.e. a vicious cycle.

c) Achieving outcomes for Team members in the face of corporate dysfunction and resistance (aka the bread and butter of leadership roles) made me “compromise”. Compromise with staying in/trying to change organisational behaviours in eco-systems where, clearly, the org’s values/mindset and mine did not match. But still I had to carry on, because the “Team can not be let down!” and leadership is a “balancing act”, at the end of the day.

d) Surprise…surprise…this took focus away from my mental and physical well-being (in spite of getting professional help). It also took focus away from effectively exercising my role as a parent.

This carried on for a dangerously long period of about 2 years until last July when, after only 4 weeks into a “dream” role, I quit, without a job in hand.

My act of quitting was not a Buddha-esque lightning bolt of enlightenment; it occurred because I just could not carry on. I was in hospital twice in a matter of 2 weeks, with dangerous symptoms of cardiac pain. My body and mind had plotted to conjure up the act of giving up.

I had been ejected from the corporate hairball orbit, without a space suit, let alone a plan. I had close to 0 savings, borrowed money from my sister, and the only foreseeable and enjoyable thing I had was dropping off and picking up the kids from school, as I could do that now. The closest I had to a plan was to reach out to ex-colleagues on LinkedIn and check with my ex-employer to see if I could get my old desk back. I did not, which in hindsight is the best thing that could have happened to me, because what happened next, and has been happening since, has been equally fulfilling to the so-called “zenith of professional success” that I had experienced earlier.

Gentle warning – I’m not suggesting that this path be followed at all, for who knows whether you will be more or less lucky than I was. But what transpired was that an ex-client whom I had consulted for before had a role for a contract Test Manager. I had nothing to lose, I had the courage to say no, I had the flexibility to try something new out and shun it if I did not like it… well, that was a fragrance I had not experienced before, so I followed the whiff. And it has been a sumptuous feast so far!

Over the past year –

a) I have worked on a time and mission critical programme of work affecting the daily lives of New Zealanders

b) I have been exposed to/tested new technologies that I had no experience with before, e.g. R, big data ETL, Machine Learning models

c) Achieved things that I yearned for in my leadership roles, e.g. further deepening my tech skills, contributing to the Team in code on a daily basis, architecting a cross functional Team from scratch

d) And doing all of that while leveraging my core skills of servant leadership, facilitation and critical thinking

During this very brief journey of being a self-employed contractor, it has dawned on me that being a “careerist” had not only definite negative inclinations but consequences too, as I was equating my self worth to my job title. Being a contractor has given me the gold dust of flexibility, wherein

— I can choose to say no to organisations and walk away from opportunities when their demonstrated values/ethics don’t align with mine, without worrying about how it would look on my CV

— Exercise my core skills and develop new ones in parallel

— Above all, take care of my family and myself, physically, mentally and spiritually.

Lastly,

Please don’t get me wrong,

I am not suggesting that this joy ride is permanent, or that contracting is somehow objectively better than in-house roles!

Self employment comes with some lusty challenges around inconsistent financial reward and the risk that poses, e.g. to a young mortgage-paying family. Creating a sustainable pipeline of work in an emerging but (relatively) small IT community won’t be easy, but all I can say is that I am relishing every minute of this current joyride, with no mental demons to slay. And I would encourage every current/ex “careerist” to try freelancing/independent contracting at least once in their career, and/or feel free to reach out to me if you want a sounding board.

Stay well peers and flourish ! 🙂

Python 3.x – Using sets to parse log data

Testing problem: 

As the output of a data transformation program, I had a large Excel sheet (100 ~ 200 MB) of error logs to sieve through manually to look for error codes.

These error codes were supposed to be compared against an expected set of error codes, to ensure that the program was capturing the complete set of errors ( that were purposely injected into the source data set).

Scripting opportunity: 

I was executing the check “manually” i.e. filtering the output logs to look for the “error_code” column and then retrieve unique error codes to be compared against the source list of unique error codes.


This involved a fair amount of duplicated effort on each test run, hence I decided to script it using my programming language of choice, i.e. Python.

Scripted comparison approach:

My approach was to iteratively script the test i.e.

  • Script the Excel parsing and comparison of error codes
  • Then script the running of the data transformation program (to output the log files). (This already existed; all I had to do was integrate it with the parsing/comparison script once I had created it. This post covers the parsing and comparison solution.)

Parsing -> I started researching (aka googling) a solution and ended up using pyexcel as the module to parse the Excel sheet, mainly because it supports multiple Excel formats and has excellent documentation.

Comparison -> This led to thinking about the second part of the problem, i.e. how to retrieve unique error codes from the logs and compare them against an expected list.

I landed on using sets for the comparison, as they are extremely handy for dealing with data sets formed of unique elements and can operate seamlessly with lists and dictionaries.
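As a quick illustration of why sets suit this problem (the codes below are made up for the example): building a set from a list silently drops duplicates, and set equality ignores ordering entirely.

```python
# A list of raw error codes pulled from a log, with duplicates
raw_codes = ["J01", "A01", "J01", "A01", "D02"]

# Building a set from the list drops the duplicates...
unique_codes = set(raw_codes)

# ...and set comparison ignores element order
print(unique_codes == {"D02", "A01", "J01"})  # True
```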

Equipped with the above tools, I started coding a basic POC as below

Solution -> 


import pyexcel as pe
from datetime import date

def test_monthly_close_off_checks():
    # this is the expected set of errors from business requirements
    expected_error_code_set = {'J01','J01.N','J02','J03','A01','A02','A02.N','A03','A04','A05','D01','D02','AV01','AV02','AR01','AR02','AP01','AP02','AP03','AP04','AP05'}
    # get today's date in YYYYMMDD format, as it is appended at the end of the excel sheet that needs to be parsed
    today_date = str(date.today().strftime("%Y%m%d"))
    print(today_date)
    # using a pyexcel object, parse the first log file to get a list of ordered dicts, one for each row in the log file
    notifications = pe.get_records(file_name=r"CloseOffChecks_Sunjeet\Notifications_"+today_date+".xlsx")
    # define an empty set to store the unique error codes parsed from the log files
    parsed_error_code = set()
    # iterate through the rows and get the value of the error code; it is under the column "Error Code", i.e. that is the key in the retrieved dict
    for n in notifications:
        # add the error code to the set. The set will ensure uniqueness!
        parsed_error_code.add(n["Error Code"])
        print(n)
    print(parsed_error_code)
    # same drill as the notifications file above, for the errors file
    errors = pe.get_records(file_name=r"CloseOffChecks_Sunjeet\Errors_"+today_date+".xlsx")
    for e in errors:
        # append further error codes to the existing set
        parsed_error_code.add(e["Error Code"])
        print(e)
    print(parsed_error_code)
    # assert that the parsed set of error codes is the same as the expected set
    assert parsed_error_code == expected_error_code_set


Further work ->

  • Integration with the data transformation program to complete an E2E solution that grabs source data, transforms it, parses and compares error codes
  • Performance! I am working with fairly chunky log data; how could I optimize my code?
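One small refinement also worth noting (a sketch, not part of the original script): when the final assert fails, plain set equality does not tell you which codes differ, but set difference does. The sets below are illustrative stand-ins for the real expected/parsed sets.

```python
# Illustrative subsets, standing in for the real expected and parsed sets
expected_error_code_set = {"J01", "J02", "A01"}
parsed_error_code = {"J01", "A01", "A99"}

# Codes that should have been reported but were not
missing = expected_error_code_set - parsed_error_code
# Codes that were reported but were not expected
unexpected = parsed_error_code - expected_error_code_set

print(sorted(missing))      # ['J02']
print(sorted(unexpected))   # ['A99']
```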


Heuristics for debugging integration problems

Outstanding Testers (that I have had the chance to work with/coach) did not just “report that there was a fire”; they were skilled at investigating and communicating –

  • How long has the fire been burning?
  • What is the scale of impact?
  • Which areas are affected vs not?
  • What is the nature of the impact?
  • When did it start?
  • When did we last check?
  • What could have caused it?
  • What could we do better next time to help answer the above questions (when the next fire hits)?

For Exploratory Testing, one of the key challenges in testing an unfamiliar (and complex) system is ascertaining where to look for the source of the error for debugging and root cause analysis purposes.

From my experience in testing multi-technology integrated systems, I have put together a bunch of generic heuristics that I use to investigate and look for information that helps in debugging and contributes towards articulating the root cause of end-user errors.

1. “Top-down” heuristics

By top-down (in this context) I mean debugging the application stack of the system component where the symptom has cropped up.

The intention here is to ask questions to ascertain whether the root cause lies in this vertical slice of the architecture or not. Because, if not, then we can start looking at the second set of heuristics (i.e. the integration of the current system component with other components of the solution architecture).

  • Symptom repeatability – Are you able to repeat the error consistently from the UX? Which browser + platform combination is the best bet to reproduce the symptom?
  • API traffic for the stack – Which underlying API end points are called by the stack’s UX (when the error happens)? Are those end points responding? What do the browser developer tools (or alternate methods) tell you about the request payload and response when the error happens? Invoke the API end point directly (with exactly the same request payload) and compare the response with the response received via the UX. Are there any errors logged in the developer tools console? Are those errors related, and how do you know?
  • DB transactions within the stack – Which tables is the API supposed to write to? Which fields? Are those tables/fields being correctly populated? Are your DB schema definitions up to date and correct? If a stored procedure is involved, is it being called, and how do you know? Do you log API/database errors in the database? If yes, have any errors been logged when the UX error happens? If not, you should advocate persistent logging of errors for debuggability with your Product Owner.
  • Last working version of the stack – What was the last working version of the stack, i.e. one that did not have this error? Revert the stack to that version; can you still reproduce the error? If not, hold a peer review of the changes made since then. Have you got automated checks to tell the status of all the versions between the working and non-working ones? By reviewing those checks or manually changing (one variable at a time), can you pinpoint the version of the stack in which this error started?

2. “End to end” architecture heuristics

Ok, running our top-down (through the stack) debugging checks did not yield success; now we need to inspect the integration points and other system components that your application interacts with.

  • Data flow and events across integration points – Do you have (or can you draw) a solution architecture diagram to confirm which other system components your application stack deals with? When the error happens, can you confirm what data and events your application is expecting from the system components (that it is integrated with)? Is your application receiving the data it expects? Is the data in the right format? When the error happens, can you confirm which data is being written to which other system components? Is it actually happening, and how do you know? Is there logging evidence to confirm the answers? If not, you should advocate persistent logging of errors for debuggability with your Product Owner.
  • Last working version of the architecture – Do you know the last working versions (i.e. not displaying this error) of all the integration points and system components? Can the whole architecture be rolled back to a working version? Have you got automated checks to tell the status of all the system components between the working and non-working copies of the architecture? By reviewing those checks, or manually, can you pinpoint in which version/by which change of a system component/integration point this error started?
  • Completeness of the architecture – Is the architecture complete, i.e. are all the system components and integration points responding? Is there logging to confirm (or negate) that there is no missing system component or disabled integration point? If not, have a discussion with your solution architect as to how this could be improved to aid debuggability.
  • Non-functional/timing activities across the architecture – When the error happens, are there any resource intensive (CPU, memory, disk I/O) processes that are running and/or being kicked off in other parts of the architecture? How can you monitor resources across the components and integration points? How do you know whether those resource intensive processes have completed or are stuck? Where do you refer to for evidence of failure of those processes/tasks? Are there any timeouts involved, i.e. is any system component waiting on another for a response and not getting it? Is there logging to this effect? If not, you know what to do 😉

There will always be a job for you, if,

Stop sweating over whether <insert latest tech trend> will take your job away; rather, focus on getting better at –

  1. Learning new tools and practices with curiosity
  2. Putting Team outcomes before individual outcomes
  3. Sharing your knowledge
  4. Improving your facilitation skills
  5. Improving your public speaking
  6. Learning to build persuasive business cases
  7. Writing code every day
  8. Volunteering in fields other than yours
  9. Finding mentors from fields other than yours
  10. Giving, taking and actioning feedback


Failure is good, it is an actual option, take it

https://en.wikipedia.org/wiki/The_Persistence_of_Memory#/media/File:The_Persistence_of_Memory.jpg

 

Since childhood (or the time when we were all artists), we have been programmed to perceive failure as a non-viable option.

Something that needs to be avoided and dreaded, and is socially unacceptable.

“If you fail at <x> exam, you will end up being a failure in life” – constraints like this are slapped onto us.

As a result, failure becomes very closely knit with shame, guilt and inexplicable discomfort.

Speaking, at least in a professional work environment context, these constraints not only influence the decisions we make professionally but also dictate the narrative through which we articulate our achievements (or side step risks that might lead to failures).

I “failed” recently. I made a significant emotional and physical investment in a career move. The career move did not work out; I ended up (and still am) without a job, within a couple of months of that career move.

What have I learnt from this experience?

1. The world will not end. It really does not.

Your kids will continue to be carefree and look up to you to be a role model.

Your father will continue to annoy you for remote IT support.

Nature and your direct environment will continue to be tumultuous, presenting you the same challenges as before, with disregard for your current LinkedIn status.

2. The world will change though

Your career path will be foggier now.

Your financial situation will present significant physical and mental stress.

You will drop your kids at school in your jammies.

You will seek refuge in immediate relief (in things ranging from alcohol to taking up the first plausible employment opportunity).

You will feel that regret is your middle name.

Because we have been programmed to not fail.

Because quitters never win.

Because it would look bad on my CV.

Because, underneath all this, we allow ourselves and our successes to be “seen” through metrics that someone else created, rather than us.

We stole those metrics and applied them to ourselves and our careers as if they were ours.

It is not only our prerogative but a life-duty to choose our own metrics of success, rather than be led by a third party yardstick.

Hence, time to choose the option of failing.

Hence, time to reflect and ask if the world could matter in different ways.

Which leads to my third learning ,

3. The world, now, will matter in different ways

You will stop fearing venturing into professional and personal territories (that you had only watched from the fringes).

You will question your existing decision making parameters and biases.

You will shun some of your existing metrics of success, and might adopt a couple of new ones.

You will realize the irrationality of your fears.

You will strengthen existing professional and personal bonds.

You will form new professional and personal bonds.

You will realize that the safeguards or needs that you had constructed from being gainfully employed are faux or misconstrued.

And, most importantly,

You will realize that you are only able to challenge your mindset, confront your fears or topple the apple cart of your beliefs because, for you, the world is mattering in different ways now.

It will continue to matter to you in further different ways throughout your life.

You just need to continuously keep challenging your norms, keep refining what you define as success and what you classify as failure, or even consider whether there is value in classifying at all!?


Testing != Automation

Automation is not the goal .

The goals are to –

  • Make humans effective at the craft of discovering, communicating and advocating risks to user experience, commercial reputation and the utility of what we ship
  • Keep feedback loops to humans as short as possible
  • Provide information to humans that is reliable and consistent (to answer the question… “if we ship now, what are the risks to our customers’ existing experience and the utility of our software?”)
  • Reduce repetitive tasks (repetitive i.e. following the same steps with the same data with the same intent to achieve the same objective, over and over during a work day), so that the cognitive load on humans is lowered and creative intent is sustained

Automation is not just about a Tool set

Automation, before tools, is actually about –

  • analysis of the architecture
  • change
  • communication
  • facilitation
  • workshopping
  • reflection
  • test data
  • test environments
  • shared understanding
  • business risks
  • business strategy
  • product roadmap
  • commercial significance
  • experimentation
  • probing techniques
  • not Automating everything
  • not dichotomous to “manual” Testing

 

Performing sorting on substrings in Python 3.x using the “key” parameter

sorted() and list.sort() are very useful built-in Python functions.

They get even more powerful with the “key” parameter.

The key parameter basically allows us to call another function, or some logic, the outcome of which forms the basis of our sorting.

The return value of the key parameter will be used to decide the sort order.

Let’s talk through an actual example –

Let’s say we are given an input list of names of great personalities from history, and we want to sort the names based on the last name.


input_list = ['Sachin Tendulkar', 'Nelson Mandela', 'Mohandas Gandhi', 'Napoleon Bonaparte']
output_list = ['Napoleon Bonaparte', 'Mohandas Gandhi', 'Nelson Mandela', 'Sachin Tendulkar']


Step 1:

Write the logic to decide the sort order i.e. sorting based on last name

I wrote a tiny function that receives a list item and returns the last name, after splitting the full name string.


def last_name(x):
    return x.split()[1]


Step 2:

Now use the sorting logic as the key parameter in the sorted() function.


# just pass the function's name as the key value!
output_list = sorted(input_list, key=last_name)


It is as simple as passing the last_name function as the key, and the list will be sorted based on that key’s return value. The key’s value acts as a sort of proxy to decide the sorting order.

Bonus learning – 

Rather than defining and calling the key logic as a separate function, we can also use a lambda expression to define the key inline.


# use a lambda expression to define the key inline
output_list = sorted(input_list, key=lambda x: x.split()[1])

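One caveat with split()[1]: it assumes every name is exactly two words. For names with a middle name or a particle, indexing from the end of the split list is safer. A small sketch (the sample names here are my own additions, not from the lists above):

```python
# "Ludwig van Beethoven" would sort under "van" with split()[1];
# split()[-1] always takes the final word, regardless of name length
names = ['Sachin Tendulkar', 'Ludwig van Beethoven', 'Nelson Mandela']

by_last_word = sorted(names, key=lambda x: x.split()[-1])
print(by_last_word)  # ['Ludwig van Beethoven', 'Nelson Mandela', 'Sachin Tendulkar']
```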

The flexi-ways of asserting with Cypress.io

 

One of the many joys of working with Cypress is the variety of support for various assertion methodologies.

What is even more powerful is that they can be chained onto the end of core Cypress API commands like cy.get.

Here are a couple of examples that I put into practice recently.

  1. JQuery based


cy.get("#header > div > div > div:nth-child(2) > div > div.headButtons > div.header_button_float.logMenu > div > a.underLine")
  // assert that the element's text matches a regex
  .should(($txt) => {
    const text = $txt.text()
    expect(text).to.match(/Login/)
  })


2. BDD type assertions


//assert that the element contains a particular text
cy.get('.whtLink').should('contain','nirvana')



// find and confirm that an element is visible
cy.get("#header > div > div > div:nth-child(2) > div > div.headButtons > div.header_button_float.logMenu > div > a.underLine")
  .should('be.visible')


Simple, elegant and flexible 🙂

I will continue to practice further ways to assert using Cypress

Which assertion methodology do you particularly prefer?