It is not a matter of "if" -- it is a matter of "when" HP's Quality Center software will die. And you, my dear readers, will help make that happen.
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.
But I am getting ahead of myself. Let's rewind the story a bit...
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more the kind of consultant that empowers the client with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the unexpected surprises that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be solved here and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.
Patient Zero and I are part of a larger test team, and the rest of the test team all work on Waterfall-style projects, use Waterfall-compatible tools, and they generally don't get how we work. :) Unfortunately, one of the tools mandated for our team's use is HP's Quality Center (HPQC). I hadn't seen that tool in about a decade and it looked very similar to how I last remember it.
To my agile coach/practitioner friends I should clarify that at no time during our sprint development work does anyone ever touch HPQC! However, once the code is deployed/falls into the Waterfall Release process, regression test cases are created in HPQC and it is used for defect tracking. It is mandated, and so shall it be done. I can live with that. It's just a tool at this point and the impact to our ability to deliver a good solution is eliminated by the fact that we don't touch it until after we are "done". (Communication and collaboration FTW!)
Two days ago.
Our whole test team took part in a 2-day HPQC training workshop on something HP calls "Business Process Testing" or BPT. Being naturally curious to learn something new, I wanted to know what BPT was and how it fits into the bigger testing picture. Here we go.
We were given a handout with some "test scenarios" to be used for training. The test scenarios fell into this pattern:
- Scenario name/title
- Requirement description
- Test Situation (I am staring at this right now and I still don't know what this means)
- Role (kind of system user this requirement/situation applies to)
- Steps
That's okay information. The "Steps" are what you might typically expect to see if you have been testing for a while. Here is an example for working with a sample web app:
- Login to the system with (a certain type of user)
- Navigate to some module in the app
- Click "Create" button from the tool-bar
- Enter mandatory field values and save
- Search for the information you created in the previous step
- Logout of the system
And there were 6 of these scenarios.
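For readers who haven't worked with steps like these, here is roughly how they might translate into an automated browser script. This is only a sketch in Ruby using Watir; the URL and every element locator are hypothetical stand-ins, not anything from the actual training app.

```ruby
# A rough sketch of those six steps as a Watir script (Ruby).
# The URL and every element locator here are hypothetical.
require 'watir'

browser = Watir::Browser.new :chrome

# Login to the system with (a certain type of user)
browser.goto 'https://sample-app.example.com/login'
browser.text_field(id: 'username').set 'clerk01'
browser.text_field(id: 'password').set 'secret'
browser.button(id: 'login').click

# Navigate to some module in the app
browser.link(text: 'Profiles').click

# Click "Create" button from the tool-bar
browser.button(text: 'Create').click

# Enter mandatory field values and save
browser.text_field(id: 'name').set 'Test User'
browser.button(id: 'save').click

# Search for the information you created in the previous step
browser.text_field(id: 'search').set 'Test User'
browser.button(id: 'go').click

# Logout of the system
browser.link(text: 'Logout').click
browser.close
```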
I read through these tests and then tried to follow them using the system. I quickly encountered a half-dozen bugs -- some with the system, some with the test scenarios/cases, and some open questions that I would follow up on with the Product Owner for requirements clarification.

But, whoa-whoa-whoa... hey, wait a minute. We only need to worry about *these* documented test scenarios! I struggled hard to keep my mouth shut about the value of time and the many different kinds of tests I would happily engage in at this point if I could leave the tool alone. But I left that to my "inner voice", and we were now one hour into the first day's training.
At this point, we were given an overview of the HPQC modules and told to (1) enter the Requirements, (2) create BPT test scenarios, and (3) "componentize" the test scenarios into groupings of related steps. This last part required some explanation since it was new to me, but once we got it, we set to work on the task. Since we could already see (1) and (2) on the handout sheets in front of us, we went straight to work on part (3). It was kind of fun working in a small group of four, looking at these scenarios and trying to come up with solutions.
And then someone went and spoiled the fun. We were "told" that we weren't supposed to do part (3) until after we had entered the information for (1) and (2) into the tool. I was all like "really?" and then a team lead came by and repeated the exact same thing. This may be summed up as: Enter data into the tool first, and think later.
I was kind of shocked by this comment and attitude, and in retrospect it foreshadowed the rest of the training experience -- i.e. tool and process first; think later; maybe. Okay. I'll play along and see where this goes.
During this exercise, I began to grasp this idea of "components" as HP uses them. I think they are like Page Objects - chunks of code (or in this case, test steps) that perform a certain function, promote reuse, and reduce duplication. Although it was never described in this way, I believe the HP "BPT" module is a proprietary Do-It-Yourself DSL. Aha! I have experience with those. I get that stuff.
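To make that mapping concrete, here is a minimal Page Object sketch in Ruby. This is my own mental model, not anything HP ships; the class, methods, and locators are all invented for illustration.

```ruby
# A minimal Page Object in Ruby -- my mental model for HP's "components".
# The class, methods, and locators are my own invention, not BPT's.
class LoginPage
  def initialize(browser)
    @browser = browser
  end

  # One reusable chunk of steps: log in as a given user
  def login_as(username, password)
    @browser.text_field(id: 'username').set username
    @browser.text_field(id: 'password').set password
    @browser.button(id: 'login').click
  end
end

# Any scenario can now reuse the same steps without duplicating them:
#   LoginPage.new(browser).login_as('clerk01', 'secret')
```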
So, once I got it, I started to explain the concept of YAGNI ("You Aren't Gonna Need It") to the other members of my group. That is, let's not overthink or over-engineer these components. Let's build them based on the needs of the requirements in front of us, and modify them in the future as new requirements appear. This idea was well received and we quickly came to an efficient solution for our small group.
When we took up the exercise, I found I had to explain the YAGNI concept to the instructor/trainer (and the rest of the class/team) as well, since he proposed that we try to abstract out these components to allow for compatibility with other features and system elements. What a waste of time! We cannot know what we will need beyond our immediate needs, so that kind of speculative abstraction leads to more headaches than it prevents.
Eventually, I started to get that the point of writing the test scenarios using BPT is that these components form a basic vocabulary that may then be automated at some point in the future -- yes, BPT integrates with HP's QTP automation tools. Now, I'm all in favour of consistency, clarity, reuse, and automating tests that humans should never have to do more than once (assuming there is value in re-running them), so I struggled to understand why it was never explained to us this way. As long as I kept the DSL/automation model in my head, I understood what we were doing and could see the potential benefits of it.
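A toy illustration of that "components as vocabulary" idea -- again, a sketch of my mental model rather than HP's implementation. Each keyword maps to an executable chunk of steps, and a scenario is just a sequence of keywords:

```ruby
# A toy sketch of the "components as vocabulary" idea -- not HP's
# implementation. Each keyword maps to an executable chunk of steps.
VOCABULARY = {
  'Login'        => ->(user) { puts "logging in as #{user}" },
  'CreateRecord' => ->(data) { puts "creating record: #{data}" },
  'Logout'       => ->       { puts 'logging out' }
}

# A test scenario is then just a sequence of vocabulary words plus
# arguments -- readable by a human, executable by a machine:
scenario = [
  ['Login', 'clerk01'],
  ['CreateRecord', { name: 'Test User' }],
  ['Logout']
]
scenario.each { |word, *args| VOCABULARY[word].call(*args) }
```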
I saw many testers struggle with the exercises and models we were presented with. End of day one.
Day two began with a quick recap, and then we were introduced to the bureaucracy that is Waterfall and HPQC. That is, there are review processes and workflows covering each requirement, BPT scenario, and component. Welcome to Wasteland. (I mean the Lean Development concept of "waste" here, although other interpretations of "wasteland" may be just as valid.) Easily a third of the 2-day training was spent on the processes surrounding the management and review of the various HPQC objects. Sigh.
We then moved on to the concept of "parameters" for the components we had created the day before. Okay, I get method parameters when I am scripting with Ruby, so this was no sweat. Given how quickly I finished parameterizing my components compared to everyone else in the class, I think I may be one of the few who really got it.
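In Ruby terms, a component parameter is just a method parameter. A quick sketch, with a hypothetical component name and fields:

```ruby
# Component parameters map straight onto Ruby method parameters.
# (The component name and fields here are hypothetical.)
def create_profile(name:, email:, country: 'Canada')
  puts "Creating profile for #{name} <#{email}> in #{country}"
end

create_profile(name: 'Pat Tester', email: 'pat@example.com')
create_profile(name: 'Jo Tester',  email: 'jo@example.com', country: 'Brazil')
```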
I learned a few more things about the main instructor. One was that he didn't know how to identify web page elements using commonly available browser tools. Umm, aren't these tools supposed to interact with web pages? Has this never come up before? Have you never wondered how to find the name/id of an element on a web page? Really?
The other was that he had a really bad sense of humour. He made a reference to a "QA" joke that I shall not repeat here. Needless to say, I found it insulting and offensive; it made my skin crawl and my blood boil. Many unpleasant feelings and ideas arose in me, and it took all my willpower and strength not to react to the blatant stupidity of insulting the profession of the students in your class -- the very market that the HPQC tool serves.
The final blow of the day came when we tried to "execute" these BPT test scenarios using the HPQC tool. THEN I discovered that these "parameters" can have two kinds of values: fixed/hard-coded and run-time. Anyone who has done intelligent automation knows NOT to hard-code values in their scripts. Data-driven is way better, and as an exploratory tester, I may not know what value I will choose until moments before.
Here's where the HPQC tool gets stupid... er, stupider. Regardless of which parameter type you choose, the values may ONLY be defined BEFORE test execution! You cannot leave a parameter empty so that when the tester reaches a particular step, they can decide what value to input and feed it back into the tool.
What does this mean? As a tester, let's say you want to create a new user profile in an app of some kind. The test parameters for things like Name, Email, Address, Country, and so on, must all be defined and set before the test is executed. You cannot decide while you are actually testing!
Why does this matter to me? It matters because the humans who execute these BPT tests manually have no choice in what values they may input. Testing techniques like Equivalence Classes and Boundary Value Analysis that help guide our choices toward interesting paths are completely cut off! It turns out that HPQC treats humans worse than their automated counterparts. In a discussion with an automation "expert" at the end of the class, I learned that you can at least code some variability into the QTP automation scripts. This is not possible with the same test scenarios executed manually by human beings.
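For contrast, even a trivial script can defer the data choice to run time. A toy sketch, not QTP code; the prompt and the value classes are hypothetical:

```ruby
# A toy contrast: even a trivial script can defer the data choice to
# run time. (Not QTP code; the value classes here are hypothetical.)
COUNTRY_VALUES = ['Canada', 'Brazil', '']  # two valid classes + an empty boundary case

print 'Country to test with (leave blank to pick at random): '
country = gets.chomp
country = COUNTRY_VALUES.sample if country.empty?  # tester, or chance, decides now

puts "Running the step with country = #{country.inspect}"
```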
So. Much. Wrong.
So, after my first-ever QC training session, here are some of my take-aways:
- It took me 2 days to "script" 6 test scenarios in this tool and they were rotten test cases to begin with! I suspect that outside of the training environment, it will actually take longer to complete since you won't have a "reviewer" sitting next to you waiting for you to finish your piece.
- And they weren't even automated! Who knows how long it would take to tweak the "components" to make them work with a particular automation strategy.
- HPQC will never be a useful tool for any agile or rapid development efforts
- (HP best practice) Put the tool, data, and review processes first, before you think. Maybe instead of thinking at all.
- BPT is a DSL framework for test scripting
- BPT component parameters cannot be customised during test execution. They may only be set/defined before you start testing. => No thinking allowed while testing.
- The instructor didn't appear to be knowledgeable on anything outside of the tool itself. This includes how we might actually want to use the tool. No, no, no. And I quote: "Testers must change how they work to use the tool in the way it was designed."
- As long as there are people pushing these kinds of horrible tools that suck the life and intelligence out of people, inject mountains of wasteful activities that provide no value to the customers or end users, and continue to create barriers between testers and their developer counterparts, I will always have job security in helping organisations recover from these cancers.
At the end of the day, there were several people interested in my idea of randomising tests. The automation expert in the class insisted that it couldn't be done with automation, so I called up the slides from the "Unscripted Automation" presentation I gave a few years ago. He said that what I proposed was not a "best practice", and that everyone -- the whole industry -- was using the tools the way he described they should be used.
My response was to simply say "they are all wrong."
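For anyone wondering what randomised, "unscripted" test data might look like in practice, here is the gist boiled down to a few lines of Ruby. A simplistic sketch: the generators are deliberately naive, and create_profile is the hypothetical component from the earlier example. The one design point that matters is logging the seed, so any failing random run can be reproduced exactly.

```ruby
# The gist of randomised test data, boiled down. Log the seed so any
# failing run can be reproduced. (Simplistic generators; create_profile
# is the hypothetical component from earlier.)
seed = Random.new_seed
srand(seed)
puts "Test run seed: #{seed}"

names   = ['Ann', 'Bo', 'X' * 256, '']        # include boundary-ish values
domains = ['example.com', 'example.org']

10.times do
  name  = names.sample
  email = "#{('a'..'z').to_a.sample(8).join}@#{domains.sample}"
  puts "Trying profile: name=#{name.inspect}, email=#{email}"
  # create_profile(name: name, email: email)
end
```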