Monthly Archives: October 2019

Which White Box Testing Technique is Best? (Examples)

If you are preparing for your ISTQB, or simply curious about the best white-box techniques before starting an upcoming test phase, I have prepared some information to help you decide on the best white-box testing approach.

Which white-box testing technique is best? The best white-box testing technique is a decision-based technique that is also run dynamically (more on this later). This is because it offers the potential for high test coverage along with the ability to analyze how the code actually behaves.

Now that you understand which technique is best, in my opinion, let me go on to explain exactly what white-box testing is, the advantages, disadvantages, types, how it can be measured and much more.

What is white-box testing?

White-box testing is effectively the opposite of black-box testing. It focuses on the internal structure of a test object, rather than just analyzing the inputs and outputs.

This comes down to analyzing the actual code of the software, based on a detailed design or equivalent specification document.

What are the advantages of White-box testing?

Now that you have a general idea of what this technique is, you may be wondering what the point of it is. Why should you even care about it, right? Well, let me explain some of the advantages.

This testing technique is ideal for improving the efficiency of the code, as well as spotting clear errors before getting into further phases of testing.

In my experience of testing, developers are sometimes hesitant to do this, with the attitude that they are not testers. But, in my opinion, this is the mark of a great developer: one who truly checks their work before throwing it over the fence to the testers (so to speak).

What are the disadvantages of White-box testing?

Like most things, there are always setbacks with every technique. So, in this section, I will explain some of the disadvantages of this testing technique.

Firstly, whilst it’s a great way to weed out erroneous code early on in the Software Development Lifecycle (SDLC), it relies on your organization having skilled developers who can perform the tests.

Usually, it is the developer who wrote the code in the first instance who actually executes the tests, so you can guess the level of expertise required here.

The point I am getting at here is the cost. To do this correctly you need a decent budget for a skilled resource.

Impossible to catch everything

Whilst this technique helps to improve the code before it gets into later stages of testing, such as system testing, it needs to be said, to set your expectations, that it is impossible to catch every potential defect at this level. In fact, that is true of every testing stage.

Don’t get me wrong, it’s worth the investment, but it is worth explaining this upfront.

Types of White-box testing

In this section, I will now explain some of the actual white-box techniques that can be used. According to the ISTQB syllabus, this testing can occur at any test level. However, I will explain some of the most common techniques used.

White-box testing is typically performed by a developer at a code unit test level using one of the following techniques:

  • Statement-based
  • Decision-based

Statement-based

The statement-based technique focuses on the individual lines of code to be delivered. The idea is to analyze the actual lines (statements) in the software program and have tests that execute and verify these lines.
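To make this concrete, here is a minimal sketch in Python (the function, names and values are purely hypothetical). The aim of statement-based testing is to have tests that cause every one of these lines to execute at least once:

    # A hypothetical function under test (prices in whole pence/cents).
    def apply_discount(price, is_member):
        discount = 0  # percent
        if is_member:
            discount = 10
        return price * (100 - discount) // 100

    # A single test with is_member=True causes every statement above to
    # execute, which achieves 100% statement coverage on its own.
    assert apply_discount(100, True) == 90

Note that one well-chosen test can execute every statement without exercising every decision in the code; that is where the next technique comes in.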

Decision-based

Decision-based techniques are slightly different. Instead of focusing on the lines of code, they revolve around the conditions in the code. When I say conditions, I mean the decisions, for example, IF or CASE statements.

As you can imagine, there are many conditions within code, which will create many permutations.
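To illustrate the difference, here is another minimal, purely hypothetical Python sketch. Decision-based testing requires a test for each outcome of the decision, not just each line:

    # A hypothetical function containing one decision (the IF statement).
    def classify(age):
        if age >= 18:
            return "adult"
        return "minor"

    # Decision coverage needs a test for BOTH outcomes of the decision.
    assert classify(21) == "adult"  # the decision evaluates to True
    assert classify(12) == "minor"  # the decision evaluates to False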

Static vs Dynamic

Whether you use statement-based or decision-based techniques, there is another choice to make: how you will actually execute these tests. For this, there are two options:

  • Static
  • Dynamic

Static

Static is a manual approach. It relies on a developer going through the code and eyeballing it for obvious errors. This can be done by the creator of the code.

But, it is more effective if it is done by a different developer with a similar skill set. These reviews are typically called peer reviews.

The benefit of the peer review is that it introduces fresh eyes. Often a developer will overlook obvious issues because they have simply spent too much time staring at the code, whereas a fresh pair of eyes will look at it from a different angle.

This is an opportunity for the reviewer to not only spot obvious errors but also suggest smarter or more efficient ways of doing things.

Dynamic

Dynamic is the opposite. This comes down to running the code and analyzing the outputs. This gives the developer the opportunity to compare the outputs to what’s expected from the specification.

This is a really effective method because software can be complicated, and even the most experienced developer may be surprised by the output of code that looks perfect until it is actually run.

How can white-box testing be measured?

Understanding the concept of white-box testing is one thing, but grasping how to measure it is another. It is arguably one of the most important factors governing the next stage for your test object. For that reason, I will explain how this is done for statement-based and decision-based techniques.

Statement based measurement

For statement-based testing, the lines of code are the main focus, as discussed earlier. Therefore, for this method, the number of statements executed by your tests is compared to the total number of statements in the code.

The coverage is then typically expressed as the percentage of statements executed out of the total statements in the code.
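For example, if the program contains 200 statements and your tests cause 150 of them to execute, the statement coverage is 150 / 200 = 75%.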

Decision-based measurement

For decision-based testing, the conditions (decisions) in the code are the main focus, as discussed earlier. Therefore, the coverage is based on this metric.

In particular, the total number of decisions executed by the tests is expressed as a percentage of the total number of decisions.
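For example, if the code contains 40 decision outcomes (each IF contributes a true and a false outcome) and your tests exercise 30 of them, the decision coverage is 30 / 40 = 75%.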

Special skills and knowledge requirements

With this intricate testing, you can expect that it will require a certain level of skill to get right. For that reason, in this section, I will explain exactly what level of skill you can expect to need.

White-box testing requires a resource that fully understands the code structure of the test object. In reality, every code object is different, so each skilled resource will develop their own unique expertise in one particular area of the system.

Either way, to begin with, you need a technical low-level coding resource that can do this task correctly. In reality, it doesn’t necessarily have to be a developer, but in my experience it nearly always is.

Related Questions:

In this section, I will answer some questions related to white-box testing. If you have some additional questions that are not answered here please drop a comment below.

Q: What is the difference between white and black-box testing?

White-box testing focuses on the internal structure of a test object, whereas in black-box testing, the internal structure is not important. All that matters is that the inputs and outputs meet the specification.

Also, with black-box testing, you would expect testers to take on this task. However, with white-box testing, it is typically developers that do this.

Q: How is coverage measured at a component integration level?

This is typically measured by the total number of integrations executed by your tests as a percentage of total integrations.

Integrations are different. The expectation at this stage is that the individual component (click here to see why stubs and drivers are used in component testing) has already been tested and is fit for purpose. Therefore, this stage focuses on how the outputs of component A interface with component B.

These integrations are usually in the form of a file (typically XML) with various fields and data. These data points need to match the specification for the test to pass. Hence the need for component integration tests, and, even more so, a way to measure their coverage.
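As a rough illustration of that kind of field-level check, here is a minimal Python sketch. The file content, field names and expected values are all hypothetical; in a real project they would come from the interface specification:

    import xml.etree.ElementTree as ET

    # Hypothetical interface file produced by component A for component B.
    payload = "<vehicle><plate>AB12 CDE</plate><make>Ford</make></vehicle>"

    root = ET.fromstring(payload)

    # Expected values, as defined in the (hypothetical) interface spec.
    expected = {"plate": "AB12 CDE", "make": "Ford"}

    for field, value in expected.items():
        actual = root.findtext(field)
        assert actual == value, f"{field}: expected {value!r}, got {actual!r}"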

What is Non-Functional Testing? (2 examples)

If you are preparing for your ISTQB or just curious, you may be wondering what non-functional testing (NFT) is and how it fits into the testing life-cycle. Let me explain…

What is Non-functional Testing? It tests how well a software system behaves, based on an agreed specification. In particular, it focuses on certain aspects of the system, for example, performance, usability or security.

Now that you know what it is, let me explain in a bit more detail, such as when it is used, how it can be measured, whether special skills are required, the types of non-functional testing and more. Keep reading…

When is non-functional testing used?

Understanding what Non-functional testing is, is one thing. But, understanding when to use it is another. For that reason, in this section, I will explain when it is a good idea to use this. The answer may shock you because it’s not what most people assume or do.

Most software testers will assume that non-functional testing should be scheduled after system testing. The reality is, it should be done at all test levels. In fact, the earlier the better.

Why is this? Simply because, in the event that you find a serious “show-stopping” non-functional defect, it is better to detect it sooner rather than later. The cost of finding it later is nearly always significantly higher.

How can Black-box testing techniques be used for this?

While planning non-functional tests, you need a method to work out the test cases. For that reason, in this section, I am going to explain how you can use some known Black-box testing techniques to draw up these cases.

One of the known Black-box techniques, called boundary value analysis (BVA), can actually be used to derive non-functional test cases. In BVA, the agreed boundaries of fields and inputs are tested to make sure that they meet the agreed specification.
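For example, suppose the specification says the system must support up to 500 concurrent users (a hypothetical figure). BVA would derive test cases at, and either side of, that boundary, as in this minimal Python sketch:

    # Hypothetical non-functional requirement: up to 500 concurrent users.
    MAX_USERS = 500

    # BVA derives tests just below, on, and just above the boundary.
    boundary_cases = [
        (MAX_USERS - 1, "expect: system responds normally"),
        (MAX_USERS,     "expect: system responds normally"),
        (MAX_USERS + 1, "expect: system rejects or degrades gracefully"),
    ]

    for users, expectation in boundary_cases:
        print(f"Test with {users} concurrent users -> {expectation}")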

How can the Non-functional tests be measured?

Without the ability to report on and quantify the testing, non-functional testing would effectively be worthless. But, you may be wondering, how can it actually be measured, right?

The coverage of non-functional tests relies on clear requirements/specifications. For each non-functional element specified, it is possible to measure the coverage based on the number of tests run per element.

This can then be used to help find any gaps in the testing. Such gaps may be deemed acceptable based on the risk they pose to the system, assuming a risk-based analysis is used.

Are special skills required?

Non-functional tests cover a large set of possible tests. But, you may be interested to know whether any resource can run or design these tests, or whether it needs a special level of expertise.

In most cases, non-functional tests, for example, performance tests, need a specialized resource with a great deal of experience. This experience could be deep knowledge of a tool, e.g. WinRunner, or specialized security-related skills to make sure the system is tested correctly.

Who does Non-functional Testing?

In the last section, we touched on what type of skill set is required to do non-functional testing. However, I did not explicitly state who would typically do it.

Non-functional testing is quite a broad categorization, so there will be different roles for each particular testing activity. For example, for performance testing, you would typically have a dedicated performance test analyst. If the testing required is significant, this could be a team of testers, with one of these resources being a test lead or senior tester.

Is security testing functional or non-functional?

If you have security testing in the scope of your project, or you are just curious, you may be wondering what this is classified as. In this section, I will explain this.

Security testing is classed as non-functional. It is required to verify that the system in question is secure, for example, its resilience to external infiltration.

What is the difference between functional and non-functional testing?

To make sure that it is clear, I want to draw a distinction between functional & non-functional testing. This will help you to understand what NFT is.

Functional testing (Click here to see why Functional Testing is Required) is concerned with the functional elements of the system. Typically, this is defined in a functional spec as “features”. At this stage, we are not concerned with code-level testing. It mainly uses black-box tests that focus on the inputs and expected outputs of each function, verifying that they meet the agreed specification.

Non-functional testing is different because it focuses on aspects of the system that are not usually addressed during functional testing, such as how many users can use the system simultaneously (max load) before it stops responding, are you with me?

Then, to follow on with this example: once we establish the max load, how does this compare to the documented non-functional specification? Does it support as many users as we expected, or not?
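In practice, you would establish the max load with a dedicated performance tool, but here is a heavily simplified Python sketch of the idea. The fake_request function is a hypothetical stand-in for a real call to the system under test:

    import concurrent.futures
    import time

    # Hypothetical stand-in for a real request to the system under test.
    def fake_request(user_id):
        time.sleep(0.01)  # simulate a response delay
        return True

    # Simulate 100 concurrent users and check every "request" succeeds.
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(fake_request, range(100)))

    assert all(results)
    print(f"{len(results)} simulated users handled successfully")

The real question then becomes whether the load you can reach matches the figure in the non-functional specification.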

My experience with Non-functional testing

Reading about NFT is one thing, but understanding it is another. Sometimes a real-life example will help to put things into perspective. For that reason, in this section, I will give you an example of my first-hand experience using this method.

In one of my contracts in the past, the client was responsible for providing vehicle finance. Therefore, they had a large amount of critical data that needed to be secure and resilient, particularly in the event of a disaster, e.g. an office fire, damage to servers, etc.

I was tasked with leading an annual exercise they had, which was to verify that they could recover all of their core systems following a disaster. To do this, we emulated the situation by creating a “disaster day”.

Disaster Day (Disaster Recovery – Non-Functional Testing)

This testing is known as disaster recovery, and it is a well-known example of non-functional testing. The objective was to prove that each core system could be used following the disaster.

To prove this, we had systems located on another site. Their sole purpose was to provide a different backup environment that we could switch to following the disaster, are you with me?

The Recovery Site

On the recovery site we had two goals:

  • Prove that the core systems can be used (core functionality)
  • Verify the data was restored correctly.

Proving that the core systems can be used

To prove this, we had a selection of Subject Matter Experts (SMEs). Their role was to test the recovered systems and verify that the core functionality was operational, following an agreed test plan. Time was limited (one day on site for all tests), so we had to be efficient.

Verify the data was restored correctly

The second goal, and arguably one of the most important, was to prove that the data was recovered successfully. This was done by taking a snapshot of the data before the disaster day, and then comparing that snapshot with the data on the recovered system.

To clarify, although this sounds complicated, it was done as simply as possible (due to time constraints). The data snapshots were as simple as taking some screenshots before the disaster and then comparing them after the data was restored on the backup systems, are you with me?
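If you wanted to script the same idea rather than use screenshots, a before/after comparison can be as simple as this Python sketch (the data points are hypothetical):

    # Hypothetical snapshot of key data points taken before the disaster day.
    before = {"total_accounts": 18250, "open_agreements": 9321}

    # The same data points read back from the recovered backup systems.
    after = {"total_accounts": 18250, "open_agreements": 9321}

    mismatches = {k: (before[k], after[k]) for k in before if before[k] != after[k]}
    assert not mismatches, f"Data not restored correctly: {mismatches}"
    print("All data points match the pre-disaster snapshot")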

Why are non-functional requirements important?

You may be wondering what is so important about these non-functional requirements, and you may even consider cutting corners and leaving them as a “nice to have”. But, in this section, I will explain why they are required.

Outline Performance Boundaries

One of the main reasons why they are important is performance boundaries. Once you understand these boundaries, e.g. how many users can use the system simultaneously, your organization can understand what it needs to do to scale up. It can also give you an idea of the cost of this and the impact it may have on other projects, resources, etc.

Also, it can give an insight into how long the system will be usable in its current form. This can help with planning for future quarters, etc.

Why Functional Testing is Required (Examples, Types)

If you are preparing for your ISTQB (Click here for my prep guide), you may need to understand the nature of functional testing, and how it works. Let me explain…

Why is functional testing required? It verifies that the agreed behavior of each function works as specified in the related requirements. It is an important part of software testing and is covered throughout all levels during the software development lifecycle.

Now that you understand what functional testing is at a high level, let me explain how it is measured, what skills are required, the different types and much more. Keep reading…

What is functional testing?

Functional testing is a Test type that focuses on the functions that a system has been designed to perform. It can be used at all test levels and uses techniques such as Black-box testing (more on this later) to derive its tests.

The test cases that form this are based on functional requirements. These functional requirements can exist at different levels and stages, for example, functional specifications, user stories, etc.

How can functional testing be measured?

You may be wondering: if these functional tests cover such a broad range of levels, how can the actual coverage be measured? In reality, this is done quite easily, assuming you have a documented test basis to work with, e.g. a functional specification. Let me explain…

The coverage is measured by mapping the documented functions against the test cases created. This can be done in the form of a traceability matrix at a high level. Any gaps found can then help to tighten up the test coverage.
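Here is a minimal Python sketch of that idea (the requirement and test case IDs are hypothetical). Each requirement is mapped to the test cases that cover it, and anything with no test cases is a gap:

    # Hypothetical traceability matrix: requirements -> covering test cases.
    matrix = {
        "FR-01 Login":  ["TC-001", "TC-002"],
        "FR-02 Search": ["TC-003"],
        "FR-03 Export": [],  # a gap: no test cases written yet
    }

    covered = [req for req, tcs in matrix.items() if tcs]
    gaps = [req for req, tcs in matrix.items() if not tcs]

    print(f"Coverage: {len(covered)}/{len(matrix)} requirements "
          f"({100 * len(covered) / len(matrix):.0f}%)")
    print(f"Gaps: {gaps}")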

What skills are typically required?

In this section, I will discuss what skills are usually required when designing or executing functional tests. As you can imagine, depending on your specific requirements, it can be quite a technical process.

As I stated, it is a technical process, and for that reason, it requires skills or experience with the system or business process. It isn’t a deal-breaker if an individual resource does not have this, but in its absence, there will be a steep learning curve until it is mastered.

What types of Functional testing are there?

As you can imagine, there are various types of functional tests because, as I mentioned earlier, this testing is conducted at multiple levels. For that reason, I will break down some of these types to help you understand.

The following is a list of functional test types:

  • Component testing
  • Component integration testing
  • System testing
  • System integration testing
  • Acceptance testing

Component testing

Component testing is a testing technique that focuses on an individual component (or code function). For example, in a calculator application, the “subtract” component is responsible solely for subtraction.

Therefore, in component testing, you would have a test that verifies this component performs as per the specification. At this level, there is no regard for whether or not it integrates with the calculator application correctly.
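To make that concrete, here is a minimal Python sketch of a component test for the hypothetical “subtract” component:

    # The hypothetical "subtract" component from the calculator example.
    def subtract(a, b):
        return a - b

    # Component tests verify this one function against its specification,
    # with no regard for how it integrates with the rest of the app.
    assert subtract(5, 3) == 2
    assert subtract(3, 5) == -2
    assert subtract(0, 0) == 0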

Component Integration Testing

This is closely related to component testing, as you may have assumed. However, the key difference is that it focuses on how these components talk to each other.

At this stage, we assume that the component has been tested and works as expected. We only focus on how the output of component A integrates with the inputs of component B.

e.g. in the calculator app that we referred to earlier, the subtraction component would be tested to see how it integrates with other components in the calculator app.
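Sticking with the same hypothetical calculator, a component integration test might look like this minimal Python sketch, where the test cares only about the hand-off between the two components:

    # Hypothetical calculator components.
    def subtract(a, b):
        return a - b

    def display(value):
        # Formats a result for the calculator's screen.
        return f"= {value}"

    # The integration test focuses on how the OUTPUT of one component
    # feeds the INPUT of the next, not on either component in isolation.
    assert display(subtract(5, 3)) == "= 2"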

System Testing

System testing is concerned with the system as a whole. It primarily looks at the end-to-end tasks or features that a system is expected to perform.

The main objective of this is to find system defects before they get into a live production environment. As well as verify that the system behaves as specified.

System Integration Testing

System Integration Testing (SIT – Click here to see What the difference is between SIT and UAT testing) is closely related to system testing, as you can imagine. However, the objective of this testing is to verify how the system integrates with other systems.

For example, in a parking application, there is an interface to the vehicle database that matches number plates scanned with real vehicle information.

The system integration test will verify that a number plate requested from system A will return the vehicle data successfully from System B.
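Here is a minimal Python sketch of that scenario. System B is represented by a hypothetical stand-in (in real SIT it would be the actual vehicle database, or a stub of it):

    # Hypothetical stand-in for System B (the vehicle database).
    def vehicle_lookup(plate):
        records = {"AB12 CDE": {"make": "Ford", "model": "Focus"}}
        return records.get(plate)

    # Hypothetical System A behaviour: request data for a scanned plate.
    def handle_scanned_plate(plate):
        data = vehicle_lookup(plate)
        return data if data else "UNKNOWN VEHICLE"

    # The SIT case: a plate requested via System A returns System B's data.
    assert handle_scanned_plate("AB12 CDE") == {"make": "Ford", "model": "Focus"}
    assert handle_scanned_plate("ZZ99 ZZZ") == "UNKNOWN VEHICLE"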

Acceptance Testing

This is, broadly speaking, testing that validates whether the whole system is fit for purpose before going into production. It can be done by various users of differing skill levels. For example, Operational Acceptance Testing (OAT) is typically run by very skilled support staff, who usually have deep knowledge of the existing system (or product).

How is a Functional Test Usually structured?

If you are wondering what an actual functional test looks like, then this section is for you. I will be explaining more about how these tests are structured and the common attributes of these tests.

Depending on what type of functional test you are executing, the actual look of the test can vary. However, there are some common attributes that you would expect, even if the tests are very informal. These include the following:

  • Test prerequisites.
  • Test description.
  • Actual test steps.
  • Expected results.
  • Result.

Test prerequisites

This is the expected setup before executing the test. For example, it could be that a previous test is run first, so that the data is in the correct state, etc.

Test Description

This is a high-level description of the test purpose, objective, etc. It explains what this test is about.

Test Steps

These are the actual, step-by-step instructions to execute the test. Depending on what company you work for, they can be really low-level in detail (e.g. “right-click on the ‘OK’ button to submit the data”) or high-level steps that assume a level of experience (e.g. “submit the data”).

Expected Results

This is what we expect to see once the test has been executed. This is important because it will let us know if the test was successful or not. You may feel that this should be obvious, but, understand this, some tests could be negative, meaning the expected result could be an error.

Result

This is the actual result of the test. Typically, this will be “Pass or Fail”. But, there can be other statuses such as “Blocked” that can be used.
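Putting those attributes together, a written-out functional test might look something like this hypothetical example (sketched here as a simple Python record):

    # A hypothetical functional test written out with the common attributes.
    test_case = {
        "id": "TC-042",
        "prerequisites": "User account exists and test data is loaded",
        "description": "Verify a registered user can log in",
        "steps": [
            "Open the login page",
            "Enter a valid username and password",
            "Submit the form",
        ],
        "expected_result": "User is taken to their account dashboard",
        "result": "Pass",  # or "Fail" / "Blocked" after execution
    }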

Can Functional tests be automated?

If you have some interest in automation or are just thinking of ways to speed up test execution, then you may be wondering if functional tests can be automated. So, in this section, I will be explaining if this is possible and how this is typically handled at this stage.

Yes, automation can be and is used during functional testing. The reality is, functional testing is quite a broad term that encapsulates many testing techniques, as you have seen already.

Therefore, there is quite a lot of scope to automate at different levels. System testing, for example, is one stage where automation can be used; I have used it personally at this stage. It is beneficial if you have a large set of tests that are quite similar or repetitive, as it will help to speed up the process.
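As a simple illustration of why similar, repetitive tests automate so well, here is a minimal Python sketch that drives many checks from one table of inputs and expected outputs (the subtract function is hypothetical):

    # Hypothetical component under test.
    def subtract(a, b):
        return a - b

    # A table of inputs and expected outputs drives many tests at once.
    cases = [
        ((10, 4), 6),
        ((4, 10), -6),
        ((0, 0), 0),
    ]

    for (a, b), expected in cases:
        actual = subtract(a, b)
        assert actual == expected, f"subtract{(a, b)}: expected {expected}, got {actual}"

    print(f"{len(cases)} automated checks passed")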

Is performance testing a functional testing activity?

As we have discussed, functional testing covers quite a few stages. So, you may be wondering if performance and functional testing are one and the same. I will discuss this further in this section.

The short answer is no, performance and functional testing are different. Why? Because performance testing is classed as a non-functional test (Click here to see what is Non-Functional Testing).

Performance tests measure the performance of the system against the specification/requirements. An example of this is the number of concurrent users that a system can support at one time.

Are functional testing and manual testing the same?

Earlier, we talked about automation and how it is a part of functional testing. But what about manual testing? Is that also included? This is what I will clarify in this section.

Yes, manual testing is used in functional testing. In fact, I would go as far as to say that it is used extensively. Obviously, these days there is more and more automation being used. However, manual testing is still used a lot across all test phases.

What is Manual Testing?

Manual testing is a method of test execution. It means that a tester will manually run test cases by supplying the necessary inputs, pre-requisites, etc. and then analyzing the results to confirm if the test passed or failed.

What is Change Related Testing (ISTQB Syllabus)?

Once a system is promoted to live, that is not the end of its updates or changes. In fact, there can be many iterations after it goes live, such as defect fixes, new functionality, etc. But what is change-related testing, and what has that got to do with this?

What is change-related testing? Change-related testing is a Test Type that has a specific purpose. It is designed to verify that a change to the system has been implemented correctly and does not have any adverse effects on the system as a consequence.

Now that you know what change-related testing is, let me explain what types of tests are performed during this time, how it relates to the overarching “Test Types”, whether some of these tests can be automated, and more. Keep reading…

What is a Test Type?

A test type is a group of testing activities that have specific objectives for testing the system. Here are examples of other Test Types that are used:

  • Functional
  • Non-functional
  • White-box testing

When would change related testing be required?

You may be thinking this is all good, but when would you need to actually use change-related testing? In this section, I explain when and why you will use it.

This testing type is typically used following a change to a system. Usually, this is after a system is live, but this is not always the case.

Types of changes

These changes can be as follows:

  • Defect Fix.
  • Change of environment.
  • Change of functionality.
  • New functionality added.

Defect Fix

If the system has just been upgraded with defect fixes, it is important to verify that the defects have been fixed correctly and also to check that the fixes have not caused any unexpected issues in the current system (aka regressions – more on this later).

Change of Environment

If the system has been implemented and, for whatever reason, it needs to be moved, then this can be a risk. This risk needs to be mitigated with change-related tests to ensure that the system is still safe to use.

Change of functionality

If the system has had modifications to improve or even change its behavior, it needs to be tested to verify them. This will include tests to confirm that the changed functionality works as per the updated specification and also that it does not introduce any issues.

New Functionality added

Similar to the section above, new functionality added will be treated much the same as changed functionality. The new functionality will be tested to verify that it works as specified, but it will also need to be checked for any regressions.

What techniques are used to test this?

As discussed earlier, change-related testing is an overarching group of test activities. So, you may be wondering what specific techniques are used. Let me explain…

  • Regression Testing.
  • Confirmation testing.

Regression Testing

This change-related test technique focuses on verifying that any new builds or code changes do not adversely affect the system. For example, a defect fix may be released to address a calculation error in a timesheet system; say the fix was designed to correct a miscalculation of weekend work rates.

After implementation, the defect fix worked; however, the users then discovered that their standard day rates were being incorrectly calculated. This is an example of a system regression (Click here for Regression Testing Best Practices), and this is exactly what regression testing is designed to catch.

Confirmation testing

This technique confirms that the intended change meets its specification. So, if we refer back to the example in the last section, the timesheet system defect fix, confirmation testing would be used to verify that the release fixed the issue with weekend rate calculations, which it did.
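To tie the two techniques together, here is a minimal Python sketch based on the timesheet example. The pay function and the 1.5x weekend multiplier are hypothetical:

    # Hypothetical timesheet rate calculator, after the defect fix.
    def pay(hours, rate, weekend=False):
        multiplier = 1.5 if weekend else 1.0
        return hours * rate * multiplier

    # Confirmation test: the weekend rate defect is actually fixed.
    assert pay(8, 10.0, weekend=True) == 120.0

    # Regression test: the standard day rate still calculates correctly.
    assert pay(8, 10.0, weekend=False) == 80.0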

Can you automate change-related Testing?

With all these potential tests that could be required on an ongoing basis, e.g. the regression testing that we discussed earlier, you may be wondering if it would make sense to automate them. In this section, I will explore this for you.

When we look at the two main techniques in change-related testing, the obvious one to automate is regression testing. Why? Well, confirmation testing is effectively a moving target, right?

Think about it: each time the system is changed for a defect or functionality change, it could literally be anything. However, with regression, you have some common tests that are run over and over again to check that the system responds in the same manner as it did pre-release, are you with me?

e.g. if we go back to the timesheet system we used earlier: every time an amendment is made to this system, it would be a good idea to check that the standard day-rate calculations still work as designed.

This would be a perfect example of a group of tests that could be easily automated.

Can Smoke Testing Be Used for Change-related tests?

If you have heard of smoke testing before, you may be curious to see if it is used during Change-related testing as well. For that reason, I will clarify this in this section.

Firstly, what is Smoke Testing?

Smoke testing evolved from industrial engineering. When physical changes were made, the equipment was deemed fit for purpose if no smoke was seen when it was powered on.

From there, it evolved into a technique used in software testing. The purpose of smoke testing is to establish whether the system passes some very basic tests, to confirm that the system is even remotely ready to test. These can be basic system functions that are mandatory to move forward.
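A smoke suite can be as simple as this hypothetical Python sketch, where each check is a placeholder boolean standing in for a real probe (a process launch, an HTTP response, a trivial database query):

    # Hypothetical smoke suite: a handful of very basic readiness checks.
    checks = {
        "app starts": True,          # placeholder for a real launch check
        "login page loads": True,    # placeholder for a real HTTP check
        "database reachable": True,  # placeholder for a real query check
    }

    failed = [name for name, ok in checks.items() if not ok]
    assert not failed, f"Smoke tests failed: {failed} - halt further testing"
    print("Smoke tests passed - the system is ready for deeper testing")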

Using Smoke Testing for Change-related testing.

Yes, smoke testing can be and is used during change-related testing. Although it is not necessarily documented in exam syllabuses, in the real world it is used; I know from my own personal experience of using it in anger.

It helps to reduce wasted time, and the tests themselves do not take long to run. In reality, these tests are so basic that one could argue they border on common-sense checks that do not need a formal name or phase.

Using Sanity Testing for Change-related testing.

Earlier, we mentioned smoke testing and its inclusion in change-related testing. But what about sanity testing? Could this be included too? Let me explain…

Firstly, what is sanity testing? Sanity testing is essentially a subset of regression testing. It aims to verify, with some basic tests, that the function in question is working. It does not fully confirm it, but it gives us an early view of whether something really basic is broken, which would render the full regression testing a waste of time.

But isn’t this just the same thing as Smoke testing?

No, it’s similar, but not the same. Sanity testing focuses narrowly on the function under test, whereas smoke testing uses a broad set of tests, which can be run in a short space of time.

Can it be used with change-related tests?

Yes, it can, and, like smoke testing, it usually is, whether formally or informally. It is a great way to eliminate time wasted on a code fix that clearly has not addressed the problem.

Related Questions:

In this section, I am going to answer some questions related to change-related testing. If you feel you have more questions you need answering, feel free to drop me a comment below.

Q: What is the difference between Re-testing and regression testing?

Re-testing is effectively re-running test cases that have previously failed, whereas regression testing verifies that the system has not “regressed” by re-running a subset of previously passed tests.

Although they sound similar, in action they are very different. Both are just as important as each other, by the way; they just have different objectives. Obviously, as discussed, regression testing is one of the key activities in change-related testing.
