Outlook Support


Wednesday, 23 May 2012

What is Exploratory Testing?

Posted on 23:01 by Unknown
What is Exploratory Testing (ET)? I am asked this every once in a while and I hear a wide range of ideas as to what it is. This is one of those topics where Wikipedia doesn't really help much.

For some, ET is just "good" testing and the reason we say "exploratory" is to distinguish it from bad testing practices. Unfortunately, bad, lazy, haphazard, thoughtless, incomplete, and incompetent testing is quite popular. I won't go into the reasons or supporting evidence for this disgraceful blight on the Software Development industry at this time. Suffice it to say, I don't want to be mixed in with that lot either, so I am happy to describe what I do as something different - something that is far more successful and rewarding when done well.

Okay, so if ET = [good] testing, what is testing then? According to Cem Kaner, "software testing is a technical investigation conducted to provide stakeholders with information about the quality of the product or service under test." This definition took me a while to absorb, but the more I thought about it, the more I found it to be a pretty good one.

If you ask Elisabeth Hendrickson, she would say that "a test is an experiment designed to reveal information or answer a specific question about the software or system." See, now I really like this definition! I studied Science in university and I love the way this definition reminds me of the Scientific Method. The more I learn about testing software, the more I find similarities with doing good Science. (By the way, if you want to learn more about how to do good testing, I highly recommend you read up on the Scientific Method. So much goodness in there!)

So, is that all there is to it? Testing = Science, blah blah blah, and we're done? Um, well, no, not really. ET has its own Wikipedia page after all!


I dislike the first-line description of ET on the Wikipedia page. I dislike it because it is incomplete. It says that ET is "concisely described as simultaneous learning, test design and test execution." ... AND?!? And then what?! This definition leaves out what happens after the execution part, and that part is really important.

Elisabeth Hendrickson offers a better description (IMHO): "Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next."

I like this because it closes the loop between the purpose of the test and what you do with the results. In this case, when you learn something from the test you intentionally performed, you use that to decide what you will do next. It is kind of like playing the game of 20 questions. If you play the game poorly, you ask specific questions about what you think the answer is - e.g. "is it a turnip? No. Is it a bicycle? No. Is it a ...?" There's a slight, random chance you may guess it right, but that's really unlikely. If you play the game well, each question you ask helps you narrow down the possibilities until you can make a good guess with a high probability of getting it right.
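The 20-questions intuition can be put in code. This is a toy illustration, not testing tooling: guessing specific answers at random barely shrinks the space, while a question designed from the previous answer halves it every time.

```python
import random

def play_badly(secret, candidates, max_questions=20):
    """Guess specific answers at random -- each "no" eliminates only one item."""
    pool = list(candidates)
    for asked in range(1, max_questions + 1):
        guess = random.choice(pool)
        if guess == secret:
            return asked       # a lucky hit
        pool.remove(guess)
    return None                # ran out of questions

def play_well(secret, candidates, max_questions=20):
    """Use the answer to each question to halve the remaining possibilities."""
    lo, hi = 0, len(candidates) - 1
    for asked in range(1, max_questions + 1):
        mid = (lo + hi) // 2
        if candidates[mid] == secret:
            return asked
        if candidates[mid] < secret:
            lo = mid + 1       # the answer rules out the lower half
        else:
            hi = mid - 1       # the answer rules out the upper half
    return None

candidates = list(range(1_000_000))
print(play_well(654_321, candidates))   # finds it within 20 questions
print(play_badly(654_321, candidates))  # almost certainly None
```

The first informed question eliminates half a million possibilities; a random guess eliminates exactly one. Exploratory tests aim to ask the first kind of question.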

I often use this diagram to explain the relationship between exploratory testing and test cases:

When we first look at a new feature or system, we don't know very much. We design experiments (or tests) to help us learn more about it. Initially, it is like the game of 20 questions, where we try many things and look at the system in different ways to try and discover what is important to someone who matters. That is, we explore the system for qualities and risks that we believe the customers, users, or other stakeholders may care about.

Test cases are different. When you have learned something about the feature (ET session complete), you may choose to document or automate important or representative paths (i.e. test cases) through the software for future reference (e.g. "regression testing"). You don't learn anything new from these test cases, so we sometimes refer to them as scripted "checks". We may use and reuse specific test cases for many different purposes - e.g. regression testing, user profiles for performance testing, sanity checks, and so on.
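As a toy sketch of that hand-off (the feature and values here are hypothetical), an exploratory session might teach us that a discount calculation matters to someone who matters, and we then freeze that knowledge as a scripted check:

```python
def apply_discount(price, percent):
    """Toy stand-in for the feature under test."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_regression():
    # Scripted checks: each run re-confirms what we already learned.
    # No new information is produced -- which is exactly their job.
    assert apply_discount(100.00, 10) == 90.00   # common path found during ET
    assert apply_discount(19.99, 0) == 19.99     # boundary: no discount
    assert apply_discount(50.00, 100) == 0.00    # boundary: full discount

test_apply_discount_regression()
print("regression checks passed")
```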

At some point in history, the (1) intent or purpose and (2) test design behind the testing activities were lost and some idiot propagated the idea that test cases are the important part of the testing activity. This "bad practice" has doomed most of the technological world for a few generations now.

Let me be clear about this: test cases are not important. Anyone who knows how to test can create and choose new representative paths at any time, and, often, the variations between the chosen paths through a system help us uncover new risks and potential problems. Testing requires thinking. Checking, or blindly executing test cases, does not. If executing test cases doesn't require thinking, you may as well program a computer to run them, because humans are famously bad at precisely following instructions.

An important difference between exploratory testing and scripted testing is that scripted testing blinds you to everything else going on in the system while exploratory testing aims to help you see more. To use a literary example, author Paulo Coelho posted a short story on "the secret of happiness" that illustrates this point. (NOTE: please read that story before continuing here - it'll only take a few minutes. I'll wait.)

I don't know if that is the secret to happiness, but I do know that in the first run through the palace, the young man was so focussed on the task that he missed everything else - this is exactly like scripted testing. The second time, the young man took in everything but forgot about the spoon - this is random or haphazard testing. Many people think this is exploratory testing but it is NOT! Exploratory testing would be how the wise man described the secret of happiness - complete your task and take in your surroundings.

It sounds hard, doesn't it? You know what, it IS hard. Good testing requires thoughtful effort and practice. If good testing were as easy as we are often led to believe, then we wouldn't have all the software problems we have today, now would we?

Okay, so if doing good testing, exploratory testing, is hard, who can do it? Good question.

From one perspective, many people do this kind of testing naturally. BUT WAIT! They do it the same way many people intuitively solve rate and calculus problems in their heads whenever they catch a ball. The mathematics behind the motion of a ball through the air (gravitational, kinetic and frictional forces), coupled with your movement relative to the ball in order to catch it, is really quite complex. Not many people would say they understand or can do the math, but most people can catch the ball. So, part of your brain knows how to do the math even if it doesn't tell you how it does it.

It's the same with testing. There is a method to the madness. When someone goes looking for information, it is usually in response to some question in their head. Either someone asked them the question, or they thought it up based upon some related thought. That question drives you to poke, look, observe, and evaluate what you learn in order to answer it. That is testing. It has important elements: the question, intentional test design, observation, and analysis of results.

Some people are good at all of these elements, some are good at some of these elements, and some suck at all of them. To the latter group of individuals I say: please step away from the keyboard, and avoid management roles. Please.

There is an interesting side note related to Agile Software Development. Practitioners and coaches of agile methods may be familiar with the Agile Testing Quadrants. You will see that "Exploratory Testing" appears in quadrant 3, so what's that all about?

Funny you should ask. It is a bit misleading.

You may think that ET in Q3 means that it is something that is only done to critique the product with some business-facing tests. Not so. Exploratory testing will be performed in any and every quadrant as long as the person doing the testing is thinking, intentionally designing their tests, and learning from the results. Last time I checked, that happens in all the quadrants.

For example, when a programmer is creating unit tests to drive the development (Q1), they are thinking about the feature and design and making choices about what to automate. There is a lot of learning going on in this process and I would very much consider this discovery process as "exploratory". However, when the unit tests are coded and running automatically with every build, these are now "checks" and no more learning is taking place. So, executing these checks that were created in an exploratory way is no longer an exploratory testing activity. Get it?
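A minimal sketch of that transition (the `slugify` feature here is hypothetical): the same assertion is exploratory the first time, when it answers an open question, and a mere check every run after that.

```python
def slugify(title):
    """Hypothetical feature being test-driven in Q1."""
    return title.strip().lower().replace(" ", "-")

# Exploratory phase: probing to learn what the code actually does.
print(slugify("  Hello World"))   # learned: leading spaces are stripped

# Learning captured: from now on this runs with every build as a check.
def test_slugify():
    assert slugify("  Hello World") == "hello-world"
    assert slugify("Agile Testing") == "agile-testing"

test_slugify()
```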

Same thing with functional tests (Q2). You start off learning and exploring but once you decide upon and document a specific set of test cases, these test cases are no longer exploratory.

Quadrant 3 is an interesting place. It is the catch-all space for the million other tests that the system users and stakeholders may be interested in. The problem here is that complete testing is impossible and there is an infinite number of perspectives one may use to examine a particular system. The human brain is uniquely qualified to process a lot of different factors really quickly, integrating and adapting to new information, and eliminating and ignoring aspects that are not a priority to the stakeholders.

Computers cannot do this. Not even close. That's why the bubble in the corner of the matrix says "Manual" - because our brains are the most efficient tools to perform this kind of testing! Of course, we make use of tools and automation to help us gather information when appropriate; we just can't let ourselves fall into the trap of thinking that computers can do this for us.

So, while exploratory testing is a means to an end in the other agile testing quadrants, it is the primary approach in this particular quadrant (Q3). Got it?

So, if you fumble your way through the other three quadrants on your agile project and you are wondering why your quality still sucks, you may need to take a serious look at finding an awesome tester with some mad exploratory testing skills. Sorry to say that this is not widely taught in schools yet, so we are still something of a rare breed.

Does this help clear a few things about Exploratory Testing? Please let me know. Cheers!

Posted in agile testing, ET, exploratory testing, science, Software testing

Saturday, 25 February 2012

Testing is a Medium

Posted on 16:07 by Unknown
In a few days I will be giving a presentation to the local Agile/Lean Peer 2 Peer group here in town. The group has a web site - Waterloo Agile Lean, and the announcement is also on the Communitech events page.

I noticed the posted talk descriptions are shorter than what I wrote. The Waterloo Agile Lean page has this description:
"This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it’s done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"
The Communitech page has this description:
"Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"
This is what I submitted:
"Testing is the medium in which solutions are developed. The value of our delivered solutions depends upon how well we understand and utilize that medium. We can fly straight like an arrow or explode outwards like a spherical sound wave.
Traditional automated TDD "checks" help us fly straight in the direction we choose. Is it the right direction though? How do we know? Are you sure?
Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"

This blog post is *not* about the differences in session descriptions. (In fairness, I really should learn to keep it to one paragraph. I hope to get better at writing session descriptions - it'll come with practice.) Reading the last one first, I can't help thinking that a bit of context is missing from the two posted descriptions for the final "blown away" statement. That is, that phrase comes from the sound wave analogy and not from some arrogant expectations I have about my presentation abilities. If I had known the descriptions would be shortened, I would have at least changed that sentence.

This blog post is about the idea that I try to convey in the first sentence that is unfortunately missing from both posted session descriptions: "Testing is the medium in which solutions are developed."

Software Development is a creative process. It is the intersection of people, skills, tools and experimentation to solve a people-problem with technology. As Jerry Weinberg once said: "all problems are people-problems." Therefore all developed products or services are "people-solutions."

Perhaps the main thought with "Testing is a medium" is that you cannot (successfully) solve any problem without trying to understand what the problem is in the first place. e.g.: Who is it a problem for? Where? When? How? -- these are all *Testing* questions. When you deliver a solution, you check it with something like: "how does this meet your needs?" -- again, another question.

Software Development begins and ends with questions. Somewhere in the middle of the creative development process are more testing questions. Lots of different kinds of questions, tests and checks depending on the people, skills and risks involved in developing each particular solution. One could say that Software cannot be developed without Testing.

Have you tried? Have you ever been on a project where you didn't first ask the customer what the problem was? Where you didn't check whether what you were building was working towards that design? Or where you didn't ask the customer afterwards if what you delivered met their needs? How did that work out for you?

If one cannot hope to hit the target without checking many different things, why is Testing often given so little attention or recognition on development projects? Too few individuals try to develop expertise in the Testing field to elevate their contributions to the development effort.

Anyone may ask a question. That doesn't make you an "expert" in asking questions. My 10-year-old son uses scissors to cut things, but that doesn't mean I want to let him cut my hair! Everyone I meet feels they know what Testing is. Okay, then why do so many projects fail?

So, what does it take to become really good at Testing? It appears to be somewhat important for the success of most software projects. Maybe it's time to take a good look at Software Development through the medium of Testing. How might you look at things differently then? What skills or knowledge would you want to learn more about? Who do you think should be involved?

I won't be talking about any of this stuff on Tuesday though. After all, it's not in the session description. =)
Posted in agile, development, questions, skills, testing

Thursday, 26 January 2012

Quality Agile Metrics

Posted on 21:24 by Unknown
I was asked recently what metrics I would collect to assess how well an agile team is improving. I paused for a moment to scan through 12 years of research, discussion, memories and experiences with Metrics on various teams, projects and companies - mostly failed experiments. My answer to the question was to state that I presently only acknowledge one Metric as being meaningful: Customer Satisfaction.

We discussed the topic further and I elaborated some more on my experiences. Regarding specific "quality" metrics, I explained that things like counting Test Cases and bug fix rates are meaningless. I also referred to the book "Implementing Lean Software Development" by Mary and Tom Poppendieck (which I highly recommend, BTW); it warns against "local optimizations" because they will eventually sabotage optimization of the whole system. In other words, if I put a metric in place to try and optimize the Testing function, it doesn't mean the whole [agile] development team's efficiency will improve.

Quality and value need to be a whole-team concern. Specific measurements and metrics often lead to gaming of the system and a focus on improving the metrics rather than on delivering quality and value. If the [whole] team is measured on customer satisfaction, then that is what they will focus on. I have long since stopped measuring individual performance on a team.

I haven't stopped thinking about this question though, so I put this question out on Twitter this morning:
Aside from Customer Satisfaction, are there any other Quality metrics you'd recommend in an #agile environment?


Here are the responses I received:

  1. Churn or team turnover. (Real case: Product delivered on time, customer happy, whole team left.)
  2. Escaped defects, inbound support calls/emails
  3. Value created - Reference: "Lean Startup" by Eric Ries (I'm reading it right now)
  4. Number of contributors to each story - not because exact count is meaningful but because it encourages collaboration & review.
  5. Profitability of the project is one. Are the goals (whatever they may be) of the project met?
  6. Lines of code changed during regression. That can expose some severe problems.
  7. Production Defects in 15 days after 'Go Live'.
  8. Cost of rework due to requirement changes.
  9. Code churn as a measure of "quality" (e.g. System Defect Density) - Research, Code example, Discussion, and Sample Stats.

Hm, almost a Top 10 list. I am happy with the responses here and think many of them may be worth exploring further to see what insights they provide. Cautionary Note with all Metrics: Beware the impact they have on the team. If behaviour or performance starts to change in a negative way, STOP immediately!
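As a toy example of working with item 2 or item 7 (escaped defects), the formula below is one common way to express it - an assumption on my part, not something prescribed by the responses:

```python
def escaped_defect_rate(found_before_release, found_in_production):
    """Fraction of all known defects that escaped to production."""
    total = found_before_release + found_in_production
    if total == 0:
        return 0.0
    return found_in_production / total

# e.g. 47 defects caught before release, 3 reported within 15 days of go-live
print(f"{escaped_defect_rate(47, 3):.0%}")   # 6%
```

Watch the trend across releases rather than any single number - and, per the cautionary note above, drop the metric the moment it starts driving bad behaviour.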

One of the things I talked about in the conversation was the importance of Retrospectives to allow the team to own their improvement activities. If the team uses these opportunities, their improvements should be observable over several iterations. I think a Happiness Index reading during Retrospectives might be an interesting indicator of the overall effectiveness of the improvement strategies employed.

What do you think? Anything else I should consider?
Posted in agile, metrics, quality

