What is Change-Related Testing (ISTQB Syllabus)?
Promoting a system to live is not the end of its updates or changes. In fact, there can be many iterations after go-live: defect fixes, new functionality, and so on. But what is change-related testing, and what does it have to do with all this?
So, what is change-related testing? Change-related testing is a test type with a specific purpose: it verifies that a change to the system has been made correctly and has not had any adverse effects on the rest of the system.
Now that you know what change-related testing is, let me explain which types of tests are performed during this time, how it relates to the overarching test types, whether some of these tests can be automated, and more. Keep reading…
What is a Test Type?
A test type is a group of testing activities with specific objectives for testing the system. The ISTQB syllabus lists change-related testing alongside other test types, for example:
- Functional testing.
- Non-functional testing.
- White-box testing.
When Would You Use Change-Related Testing?
You may be thinking this is all good, but when would you actually need to use change-related testing? In this section, I explain when and why you will use it.
This testing type is typically used following a change to a system. Usually, this is after a system is live, but this is not always the case.
Types of changes
These changes can be as follows:
- Defect Fix.
- Change of environment.
- Change of functionality.
- New functionality added.
Defect Fix
If the system has just been upgraded with defect fixes, it is important to verify that the defects have been fixed correctly, and also to check that their inclusion has not caused any unexpected issues elsewhere in the system (aka regressions – more on this later).
Change of Environment
If the system has been implemented and, for whatever reason, needs to be moved to a new environment, this carries risk. That risk needs to be mitigated with change-related tests to ensure the system is still safe to use.
Change of functionality
If the system has been modified to improve or change its behavior, that change needs to be tested. This includes tests to confirm that the changed functionality works as per the updated specification, and that it does not introduce any new issues.
New Functionality added
Similar to the section above, new functionality added will be treated much the same as changed functionality. The new functionality will be tested to verify that it works as specified, but it will also need to be checked for any regressions.
Techniques Used in Change-Related Testing
As discussed earlier, change-related testing is an overarching group of test activities. So, you may be wondering which specific techniques are used. Let me explain…
- Regression Testing.
- Confirmation testing.
Regression Testing
This change-related technique focuses on verifying that new builds or code changes do not adversely affect the existing system. For example, a defect fix may be released to correct a calculation error in a timesheet system; the fix was designed to correct a miscalculation of weekend work rates.
After implementation the fix worked; however, users then discovered that their standard day rates were now being incorrectly calculated. This is an example of a system regression (Click here for Regression Testing Best Practices), and it is exactly what regression testing is designed to catch.
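To make the timesheet example concrete, here is a minimal sketch in Python. The `timesheet_pay` function, the rate values, and the test are all invented for illustration; they stand in for whatever pay logic a real timesheet system would have.

```python
# Hypothetical timesheet pay calculator (all names and rates are
# invented for this article's example, not from any real system).
STANDARD_RATE = 10.0   # pay per standard weekday hour
WEEKEND_RATE = 15.0    # pay per weekend hour

def timesheet_pay(hours, weekend=False):
    """Return the pay for a block of hours at the applicable rate."""
    rate = WEEKEND_RATE if weekend else STANDARD_RATE
    return hours * rate

# Regression test: after the weekend-rate fix is released, standard
# weekday pay must still come out exactly as it did pre-release.
def test_standard_rate_unchanged():
    assert timesheet_pay(8) == 80.0

test_standard_rate_unchanged()  # raises AssertionError on a regression
```

The point of a regression test like this is that it checks behavior the change was *not* supposed to touch.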
Confirmation Testing
This technique confirms that the intended change meets its specification. Referring back to the timesheet defect fix from the last section: confirmation testing would be used to verify that the release fixed the issue with the weekend rate calculations – which it did.
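A confirmation test for the same (hypothetical) fix would deliberately replay the scenario that originally failed. Again, the function and rates below are invented stand-ins:

```python
# Hypothetical calculator after the fix; names and rates are invented.
# Before the fix, weekend hours were (in this example) paid at the
# standard rate; the fix applies the weekend rate.
STANDARD_RATE = 10.0
WEEKEND_RATE = 15.0

def timesheet_pay(hours, weekend=False):
    return hours * (WEEKEND_RATE if weekend else STANDARD_RATE)

# Confirmation test: re-run the exact scenario from the defect report
# and verify it now produces the expected result.
def test_weekend_rate_defect_fixed():
    assert timesheet_pay(4, weekend=True) == 60.0  # was 40.0 pre-fix

test_weekend_rate_defect_fixed()
```

Notice the contrast: confirmation testing targets the failed case itself, while regression testing targets everything around it.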
Can These Tests Be Automated?
With all these tests potentially required on an ongoing basis – regression testing, for example – you may be wondering whether it would make sense to automate them. In this section, I will explore this for you.
When we look at the two main techniques in change-related testing, the obvious one to automate is regression testing. Why? Well, confirmation testing is effectively a moving target, right?
Think about it: each time the system is changed for a defect or a functionality change, that change could be almost anything. With regression testing, however, you have a common set of tests that are run over and over again to check that the system responds in the same way as it did pre-release – are you with me?
For example, going back to our timesheet system: every time an amendment is made to it, it would be a good idea to check that the standard day-rate calculations still work as designed. This is a perfect example of a group of tests that could easily be automated.
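One common way to automate this – sketched here with the same invented timesheet calculator – is to capture a baseline table of inputs and expected outputs from the pre-release system and replay it after every change:

```python
# Sketch of an automated regression pack: a baseline of inputs and
# expected outputs captured from the pre-release system, replayed
# after every change. All names and values are invented examples.
STANDARD_RATE = 10.0
WEEKEND_RATE = 15.0

def timesheet_pay(hours, weekend=False):
    return hours * (WEEKEND_RATE if weekend else STANDARD_RATE)

# (hours, weekend, expected pay) – the pre-release baseline
REGRESSION_BASELINE = [
    (8,   False, 80.0),
    (7.5, False, 75.0),
    (4,   True,  60.0),
    (0,   False, 0.0),
]

def run_regression_pack():
    """Return the baseline cases whose result has changed since release."""
    return [(hours, weekend, expected, timesheet_pay(hours, weekend))
            for hours, weekend, expected in REGRESSION_BASELINE
            if timesheet_pay(hours, weekend) != expected]

print(run_regression_pack())  # an empty list means no regressions found
```

In a real project this same idea is usually expressed with a test runner such as pytest or JUnit, but the principle is identical: fixed inputs, fixed expected outputs, run on every build.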
Is Smoke Testing Used in Change-Related Testing?
If you have heard of smoke testing before, you may be curious whether it is used during change-related testing as well. For that reason, I will clarify this in this section.
Firstly, what is Smoke Testing?
Smoke testing takes its name from hardware engineering: after a physical change, a device was deemed fit for further work if, when powered up, no smoke was seen.
The term has since been adopted in software testing. The purpose of smoke testing is to run some very basic tests to establish whether the system is even remotely ready to be tested – typically basic system functions that are mandatory in order to move forward.
Yes, smoke testing can be, and is, used during change-related testing. Although it is not necessarily documented in exam syllabuses, it is used in the real world – I know from my own personal experience of using it in anger.
It helps to reduce wasted time, and the tests themselves take very little time to run. In reality, these tests are so basic that one could argue they border on common-sense checks that do not need a formal name or phase.
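As a rough illustration, a smoke test is just a handful of broad, fast checks run before anything else. The checked conditions below are invented stand-ins for real checks (the application launches, key screens load, and so on):

```python
# Sketch of a smoke test: broad, shallow checks that tell us whether
# the build is even worth testing. Each function is an invented
# stand-in for a real check against a live build.
def application_starts():
    return True  # stand-in for "the application process launches"

def login_screen_loads():
    return True  # stand-in for "the login screen is reachable"

def timesheet_screen_loads():
    return True  # stand-in for "the timesheet screen is reachable"

def smoke_test():
    """Run every basic check; any failure means reject the build early."""
    checks = {
        "application starts": application_starts(),
        "login screen loads": login_screen_loads(),
        "timesheet screen loads": timesheet_screen_loads(),
    }
    failed = [name for name, passed in checks.items() if not passed]
    return failed  # empty list: safe to continue into deeper testing

print(smoke_test())  # → []
```

If any of these fail, there is no point running the regression pack at all – which is exactly the time-saving the section above describes.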
Is Sanity Testing Used in Change-Related Testing?
Earlier we covered smoke testing and its inclusion in change-related testing. However, what about sanity testing – could this be included too? Let me explain…
Firstly, what is sanity testing? Sanity testing is essentially a subset of regression testing. It aims to verify, with a few basic tests, that the function in question is working. It does not fully confirm the function, but it gives an early warning if something really basic is broken – which would render the full regression testing a waste of time.
But isn’t this just the same thing as Smoke testing?
No – it’s similar, but not the same. Sanity testing focuses narrowly on the function under test, whereas smoke testing runs a broad set of shallow tests in a short space of time.
So, can sanity testing be used in change-related testing? Yes, it can and, like smoke testing, it usually is – whether formally or informally. It is a great way to avoid wasting time on a code fix that clearly has not addressed the problem.
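Continuing with the invented timesheet example, a sanity check might be a single narrow test of the changed function, run before committing to the full regression pack:

```python
# Sketch of a sanity check: one narrow, quick test of the changed
# function (here, the hypothetical weekend-rate fix), run *before*
# the full regression pack. Names and rates are invented examples.
STANDARD_RATE = 10.0
WEEKEND_RATE = 15.0

def timesheet_pay(hours, weekend=False):
    return hours * (WEEKEND_RATE if weekend else STANDARD_RATE)

def sanity_check():
    """If the most basic weekend case is broken, abort early: running
    the whole regression suite against this build would waste time."""
    return timesheet_pay(1, weekend=True) == WEEKEND_RATE

if not sanity_check():
    raise SystemExit("Sanity check failed - do not run the regression pack")
```

Note the contrast with the smoke test sketch: this probes one changed function in a little depth, rather than skimming the whole system.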
Related Questions
In this section, I am going to answer some common questions related to change-related testing. If you feel you have more questions that need answering, feel free to drop me a comment below.
Q: What is the difference between Re-testing and regression testing?
Re-testing is effectively re-running test cases that previously failed, to confirm that the fix works. Regression testing, on the other hand, verifies that the system has not “regressed” by re-running a subset of previously passed tests.
Although they sound similar in action, they are very different. Both are just as important as each other, by the way – they simply have different objectives. And, as discussed, regression testing is one of the key activities in change-related testing.