This is how I use labels in JIRA to give visibility to the team whenever QA is working on software installation and/or configuration scripts.
I add the label “qa_install” to tickets that involve working on installation scripts, or on configuration related to installing the software.
To view all tickets in the project with that label, here is the JQL query I use:
project = QA AND labels = "qa_install"
Returns: 72 tickets
I also want to see all tickets without this label:
project = QA AND (labels not in (qa_install) OR labels is EMPTY)
Returns: 266 tickets
This returns tickets in the project that have labels other than “qa_install”, as well as tickets with no labels at all. In other words, it returns every ticket in the project that does not have the “qa_install” label.
This allows me to get a rough percentage of the tickets, which could translate to an amount of overall work, involved for QA to automate software installation. The project has 72 + 266 = 338 tickets in total, so:
72 / 338 x 100 ≈ 21% of all tickets
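That arithmetic is simple enough to do by hand, but writing it down as a tiny helper makes it easy to re-run whenever the two JQL counts change. This is just a sketch; the two counts are the ones returned by the queries above (they could equally be fetched from Jira's REST search API, which I have left out here to keep the example self-contained):

```python
def label_share(labeled: int, unlabeled: int) -> float:
    """Percentage of all tickets that carry the label.

    `labeled` and `unlabeled` are the counts from the two JQL queries
    above; between them they cover every ticket in the project.
    """
    total = labeled + unlabeled
    return 100 * labeled / total

# Counts from the two JQL queries above.
print(round(label_share(72, 266), 1))
```

Running this with the current counts prints 21.3, i.e. roughly a fifth of the project's tickets involve installation or configuration work.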
Of course, credit goes to the entire team for working on these scripts, as QA does not work in a bubble.
Originally I added this label for myself, to better understand my typical work day, and where my efforts were spent. Also, I wanted to be completely transparent with the team when I was not “just testing tickets”. Of course, “just testing tickets” is only possible when the appropriate environment is available. Hence the need for spending time on installing and configuring test sites.
Over time, adding this label has been valuable. For example, this label highlights whenever development changes affect the installation procedure, as a QA ticket with the label “qa_install” can be linked from the development ticket causing the change, with a link description such as “caused by” or “relates to”.
Adding this label was especially useful when I realised I had been hearing variations of this phrase when asking for help or clarifications on our software installation and configuration procedure:
Why invest time in installation scripts when installing is only running “two lines”?
Sure, maybe it is only two lines on an environment that is already up and running, and it is only a standard update that is required. Maybe it is only two lines if you happen to know the correct two lines for that particular configuration and customer. In any case, running two lines is beside the point.
The point of having installation scripts is to have an easily reproducible environment that can be spun up and torn down quickly. And the tickets with the label clearly illustrate that there are many more than “two lines” to run when installing the software. If it were that simple, the scripts would be tiny, and there would be virtually no QA tickets with the label “qa_install”.
Inevitably, this question popped up on many occasions:
But why not just test on the XYZ site? It is already running the “latest” version of the software.
I won’t get too deep into the vagueness of what running the “latest” software could mean. Does it mean the latest of the entire software suite, or only a few key packages, in which case, what configuration should be used? The most common? What about associated services — do they have a “latest” version that should be installed as well? And so on.
However, running on some “other site” that you do not control, and whose setup you do not know, invalidates your testing by definition. For example, that test site could have any number of manual tweaks, such as certain users being given more permissions than they would have in a production site. You could then falsely assume a scenario passes with that user, when it only passes because they have too many permissions on the “other site” compared to a production site.
Testing involves having an easily reproducible environment that can be spun up and torn down quickly.
Of course, there is also configuration to take into account. For the sake of simplicity let’s assume that all configuration is also baked into the install scripts, so one version = “one version and its configuration for customer A” where customer A has the most common configuration used.
Back to having a reproducible environment. Here are just some of the situations where, in order to test a feature, a test site needs to be installed with a specific version of the software:
- Running a pre-test involves installing the version the issue was found on, in order to reproduce the issue
- Running a post-test then involves installing the version the issue should no longer be reproducible on
- Running automated regression tests involves installing a particular version, for example when running semi-automated tests that involve some human interaction during the test run
- Tearing down a site because your testing made the whole thing blow up, and you need to reset it after the devs have taken a look and gotten the info required to evaluate whether this is a likely scenario in production
- Installing on a new server available for testing
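To give a feel for what “one entry point for the whole install” can look like, here is a minimal sketch. Every name in it — the script paths, the flags, the customer profile — is invented for illustration; a real install script does far more. The point it demonstrates is that a reinstall becomes a single call rather than a list of steps to remember (and mistype):

```python
import subprocess
from typing import List

def install_commands(version: str, customer: str = "customerA") -> List[List[str]]:
    """Build the command sequence for a fresh install.

    All script names and flags here are hypothetical placeholders.
    """
    return [
        ["./teardown.sh"],                          # wipe any previous site
        ["./install.sh", "--version", version],     # install the requested version
        ["./configure.sh", "--profile", customer],  # apply the customer's configuration
    ]

def install(version: str, customer: str = "customerA", dry_run: bool = False) -> None:
    for cmd in install_commands(version, customer):
        if dry_run:
            print(" ".join(cmd))  # preview only, run nothing
        else:
            subprocess.run(cmd, check=True)  # stop at the first failing step

# Preview the steps for a hypothetical version without running anything:
install("2.4.1", dry_run=True)
```

With something like this, “reinstalling after wiping a site” is one command, the same command every time, which is exactly what makes doing it up to five times a day bearable.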
Therefore, I can find myself installing, or reinstalling after wiping a site, somewhere between one and five times a day.
If this were not automated, I would surely go mad… or at least make human errors when installing.
So I believe the time invested in creating these scripts has been well worth the effort. In addition, being able to highlight to the team when these scripts are being worked on gives transparency to the work done in this area.