Hayden Hudson
Which Test Automation Software is Right for Me?
Introduction

This blog post is a loose recap of my 2019 Kscope talk and is aimed at Oracle APEX and PL/SQL developers who are interested in getting started or going further with Test Automation. These recommendations are based on my professional experience advising clients as a Consultant for Insum and as the Test Automation Architect for Fabe.

First, let's define a key term. 'Test automation' is about executing tests against software you are maintaining or developing, and comparing the actual results with the predicted or expected results. Examples include Selenium, utPLSQL, JMeter, Percy, Gatling, HammerDB, Appium, AutoIt, Eggplant... The list is long and can be overwhelming.

If this post is successful, it will equip you with:

  • 1. A better informed framework for how to think about test automation
  • 2. A specific knowledge of some of the more popular test automation tools
  • 3. A desire to get started with one or several of the tools we will discuss today

Why test automation?

Let's remind ourselves quickly of why this is important to us. There are many answers to this question, but if I had to pick the primary appeal of test automation, it would be *spotting problems early*:

If I introduce a bug in my code, I want to know immediately, because if I have to debug it a month from now, it might as well have been written by someone else.

To spell out how test automation helps with this: if you have good test coverage and you run your tests frequently, you're likely to spot the problems you've introduced *quickly*. If you have the discipline of running your tests every time you check in your code, that is called 'continuous integration'.

Other motivations for Test Automation include: you find manual testing tedious, you don't have a QA team, you want to try 'Test Driven Development', etc.

A Hard Truth

That's the motivational portion over with; now let's confront a hard truth. Test automation is not for everyone. Let's make sure we are on the same page about some key things.

Test automation is hard:

  • 1. It takes discipline to keep at it
  • 2. Creative discipline to design and apply
  • 3. Intellectual discipline to interpret and act upon

Bottom line: Test automation is for developers only. It is not realistic to ask, for example, a non-technical QA team to pick this up.

3 sections

I've decided to divide our topic into 3 sections: (1) PL/SQL Testing, (2) Browser Testing and (3) Performance Testing. This corresponds to the order in which I think Oracle APEX organizations should prioritize their Test Automation implementation.

PL/SQL Testing

Let's start with PL/SQL testing. By my analysis, this will probably give you the biggest return on effort. The first piece of good news here is that this isn't a crowded space: there are a few options you may recognize, but without question the most popular and successful choice is utPLSQL, a framework originally conceived by Steven Feuerstein in the 90s.

Demo

Here's a gif of the demo I provided in my presentation. Notice how human-readable the output is.
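
If you haven't seen utPLSQL code before, here is a minimal sketch of what such a test looks like. It borrows the betwnstr example from the utPLSQL documentation; the function and values are illustrative rather than a transcript of the demo.

```sql
-- Function under test: returns the substring between two positions.
create or replace function betwnstr(
  a_string    varchar2,
  a_start_pos integer,
  a_end_pos   integer
) return varchar2 is
begin
  return substr( a_string, a_start_pos, a_end_pos - a_start_pos + 1 );
end betwnstr;
/

-- Test package: utPLSQL discovers suites and tests through annotations.
create or replace package test_betwnstr as
  --%suite(Between string function)

  --%test(Returns substring from start position to end position)
  procedure basic_usage;
end test_betwnstr;
/

create or replace package body test_betwnstr as
  procedure basic_usage is
  begin
    ut.expect( betwnstr( '1234567', 2, 5 ) ).to_equal( '2345' );
  end basic_usage;
end test_betwnstr;
/

-- Run the suite (with serveroutput on); utPLSQL prints a human-readable report.
begin
  ut.run( 'test_betwnstr' );
end;
/
```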

The Challenge of PL/SQL Testing

What about procedures/functions:

  • 1. With no output parameters?
  • 2. With non-deterministic output?
  • 3. That aren’t easily repeatable?

These are some of the common hurdles you will have to confront as you implement PL/SQL testing. There are no easy solutions to any of them, but recognize that an imperfect or partial test is better than no test at all.
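
One partial workaround for the first hurdle, for what it's worth: if a procedure has no OUT parameters, you can still assert on its side effects. The sketch below assumes a hypothetical log_event procedure that only inserts into an event_log table; by default utPLSQL wraps each test in a savepoint and rolls it back, so the inserted row does not persist.

```sql
-- Hypothetical table and procedure under test (no OUT parameters, only a side effect).
create table event_log ( code varchar2(30), created_at date );

create or replace procedure log_event( p_code varchar2 ) is
begin
  insert into event_log ( code, created_at ) values ( p_code, sysdate );
end log_event;
/

create or replace package test_log_event as
  --%suite(log_event side effects)

  --%test(Inserts exactly one row with the given code)
  procedure inserts_one_row;
end test_log_event;
/

create or replace package body test_log_event as
  procedure inserts_one_row is
    l_count integer;
  begin
    log_event( 'LOGIN' );                  -- call the procedure under test
    select count(*) into l_count
      from event_log
     where code = 'LOGIN';
    ut.expect( l_count ).to_equal( 1 );    -- assert on the side effect
  end inserts_one_row;
end test_log_event;
/
```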

PL/SQL Testing will change how you code

You will write smaller procedures/functions to isolate the code that is difficult to test.

Tip: Recognize that testing is about approximation and compromise.
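
As a small illustration of that point: a procedure that reads sysdate deep inside its logic is non-deterministic and awkward to test, but pulling the comparison out into a tiny function that takes the date as a parameter makes the interesting part trivially testable. The names below are hypothetical.

```sql
-- Before: hard to test, because "now" is buried inside the logic.
--   if l_end_date < sysdate then ... end if;

-- After: the comparison is isolated in a small, pure function...
create or replace function is_expired( p_end_date date, p_as_of date ) return boolean is
begin
  return p_end_date < p_as_of;
end is_expired;
/

-- ...the calling code passes sysdate, while a utPLSQL test pins both dates:
--   ut.expect( is_expired( date '2019-01-01', date '2019-06-21' ) ).to_be_true();
--   ut.expect( is_expired( date '2019-12-31', date '2019-06-21' ) ).to_be_false();
```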

Browser Testing

This may be the most popular test automation focus. Let's take a moment to review what is a very crowded, competitive landscape (above). I have only looked into a handful of these tools, so I am not qualified to make across-the-board recommendations. Instead, I'll share my own browser automation journey: my very first introduction to any kind of automated testing was with Selenium, which was and continues to be an extremely popular solution. However, I never loved Selenium, and when I joined Fabe I took it as an opportunity to give Cypress a try. I am very impressed, and I do not miss Selenium one bit.

Selenium vs Cypress

The above screen is from my presentation, in which I show that 59 lines of Python code driving Selenium translate to just 11 lines of Cypress code.

Selenium Demo

A very simple demo from my presentation of what Selenium can do.

Cypress Demo

The same demo, but this time in Cypress. Notice how the left-hand nav bar gives a friendly description of the steps taken.

The challenge of Browser Testing

  • 1. 'User story' ('end-to-end') tests can take 10+ minutes to run and are therefore expensive
  • 2. The risk of ‘flake’ increases multiplicatively with every incremental HTTP request in a given test
  • 3. Most of the things we care about testing take a lot of set-up (authentication, authorization, seed data)

As a result of these challenges, browser tests are ungainly, unreliable and rarely run.

There's no way to completely avoid these hurdles. The best recommendation I have for making your peace with them is to embrace testing 'features' in addition to 'stories'. Feature tests / unit tests can be short, which minimizes both run time and set-up.

Browser testing will change how you code

You will:

  • 1. Label and apply your elements more thoughtfully (no more reusing CSS classes with misleading names)
  • 2. Move more of the logic into the database so utPLSQL can test it (see the sketch after this list)
  • 3. Make features more easily isolated (e.g. relax some security in your dev environment)
  • 4. Learn the value of ‘Unit testing’
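
To illustrate the second point in that list: a rule that might otherwise live in page-level JavaScript can be moved into a PL/SQL function, so the APEX validation becomes a one-line call and utPLSQL can cover the rule without a browser. The function name, page item and regex below are hypothetical.

```sql
-- A validation rule kept in the database instead of in page JavaScript.
create or replace function is_valid_email( p_email varchar2 ) return boolean is
begin
  if regexp_like( p_email, '^[^@]+@[^@]+\.[^@]+$' ) then
    return true;
  end if;
  return false;
end is_valid_email;
/

-- The APEX page validation simply calls is_valid_email(:P10_EMAIL),
-- while a utPLSQL test exercises the same rule directly:
--   ut.expect( is_valid_email( 'nobody@example.com' ) ).to_be_true();
--   ut.expect( is_valid_email( 'not-an-email' ) ).to_be_false();
```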

Performance Testing

Last stop: 'Performance Testing', a fairly crowded space, though not as crowded as browser testing. I have limited familiarity with most of these tools. I can point out that Apache JMeter has been an industry standard for 20 years and will probably continue to be; in my opinion, there has been only modest innovation in this space since JMeter's arrival.

Let's define some terms. We have 3 principal types of performance testing:

  • 1. Load / reliability test: observe the health metrics of your system at a given load of users
  • 2. Stress / torture test: push the load to extremes until the system fails
  • 3. Endurance / continuous test: observe the health metrics of your system as you run your test for days on end

JMeter Demo

A very simple demo from my presentation. JMeter requires that we translate what we do in the browser into a series of HTTP requests.

The challenge of Performance Testing

Load testing's challenge is twofold, and the two parts feed into each other:

  • 1. A resource challenge: the system you care about is your production environment, but you do not want to run your performance test against it because of the risk that it will impact customers. You therefore have to approximate your system in a test environment, in the hope that when you simulate 50,000 users against the test system, it is a good approximation of what would happen if that same volume hit production. Additionally, it takes real server power to simulate 50,000 users' worth of activity. Altogether, that means a lot of server power, some of it production grade, so it can get very expensive.
  • 2. An interpretation challenge: relatedly, let's say your performance test shows that wait times for your customers begin to skyrocket beyond 50,000 users. This sounds important, but it's not always easy to be confident that the fault isn't with the test environment itself.

In summary, you have one side saying 'I can't use these results because the test system is unreliable', and the other side saying 'I can't afford to improve the test system; it's too expensive.'

The compromise approach is to start your performance testing before you think you need it. Run your performance test at a point when you are reasonably content with performance against your test environment, and don't scrutinize the meaning of the health metrics too closely. Instead, use those metrics as the baseline for all future tests. If performance dips or soars against this baseline, you can be confident it's the code and not the system.

Hayden's difficulty scoreboard
In the hope that it may be helpful, I've estimated the difficulty of these tools along 7 key dimensions (each score is out of 10, where 1 = 'easy' and 10 = 'impossible').
Software Name | Write | Read | Trust | Act On Output | Debug | Install | Run | Avg Difficulty
utPLSQL       | 1     | 1    | 1     | 1             | 1     | 2       | 1   | 1.1
Selenium      | 7     | 8    | 6     | 1             | 9     | 10      | 1   | 6
Cypress       | 4     | 3    | 3     | 1             | 1     | 1       | 1   | 2
JMeter        | 7     | 8    | 2     | 8             | 5     | 1       | 8   | 5.6
Gatling       | 7     | 8    | 2     | 8             | 5     | 1       | 3   | 4.9

Some callouts:

  • 1. Selenium tests can be written in 9 different languages (Python, Java, C#, Ruby, JavaScript, Groovy, Perl, Scala & PHP), so it is not self-evident how to assess the difficulty of writing or reading Selenium tests. For my part, I have a comfort level with jQuery that makes Cypress fairly accessible to me.
  • 2. I have installed Selenium 3+ times and it takes me over an hour every time because I forget all the dependencies, so I've given Selenium the maximum difficulty score for installation.
  • 3. Cypress gets a better score for 'Trust' relative to Selenium because it is more intelligent about waiting for resources to load before running its tests. Selenium tests, consequently, have a higher incidence of 'false positives'.
  • 4. JMeter and Gatling get high difficulty scores for 'Act On Output' because it is often not self-evident whether a performance issue is related to the software or the hardware.
  • 5. JMeter gets a high difficulty score for 'Run' because its ability to simulate volumes of users is constrained by the processing power of the machine running the test. Enterprise-level performance tests often employ multiple load-generating machines to work around this constraint.

Date: Jun. 21, 2019