If you are interested in UAT, or acceptance testing in general, you may wonder whether it is used in Agile or other testing models.
Does UAT exist in Agile? Yes, UAT does exist in Agile. However, it is not reserved for the end of the project. Instead, it can be performed at the end of each sprint. There are other forms of acceptance tests performed as well (more on this later).
Now that you know that UAT is included in Agile, I will go on to explain what other types of acceptance can be involved, who typically does the testing, the benefits and much more…
Acceptance testing is similar to system testing in that you’ll be testing the entire system. However, the outcome of acceptance testing can inform the decision on whether the system is viable for deployment.
It is also used as a quality gate for regulatory or contract related projects (more on this later).
There are a number of different objectives for acceptance testing.
However, the overarching objective is to gain confidence that the system has been built to the user’s expectations.
Acceptance testing is typically assumed to be at the end of the SDLC process. Therefore, the assumption by all parties is that the system has been thoroughly tested. And, it is expected that there should be minimal or no new defects found.
In reality, finding no defects is quite unrealistic; you will inevitably find some. What I would say is, if a large number of defects is found at this stage, it is definitely a red flag.
In most cases, if there are too many defects, it is regarded as a high project risk, and it is often recommended not to deploy the code.
In my experience of testing, had we found a large number of defects during the acceptance testing stage, there would have been a lot of questions asked. We would also have expected to be questioned about why simple defects were not picked up during earlier testing. Believe me, these are uncomfortable questions to answer, especially under pressure.
Now that you understand what the objectives of acceptance testing are, it is important to understand that there are various other types of acceptance testing. Each type has different objectives and resources to perform it. They are as follows:
When acceptance testing is mentioned most people will typically assume UAT. However, it is not the only form of acceptance testing. I will agree though, it is one of the most popular, from my experience.
UAT is an opportunity for the end-user to verify and get confidence that the system performs as expected.
This usually revolves around the user requirements (Test Basis – more on this later) that were drafted from the initial stages of the project.
This stage of testing is typically performed by business users, usually the end-users who will be using the system on a day-to-day basis when it goes live. These resources are usually hand-picked based on their knowledge of the current system (if it’s an existing system being upgraded).
However, if it is a brand new Greenfield system, that has never been used before, then you will typically have the users who are expected to use the system in anger once it goes live into production.
Operational acceptance testing is another form of acceptance testing that is often overlooked. Essentially this involves highly technical resources, such as tech support.
The focus of this acceptance testing is different from UAT because these testers want to feel confident that they can support the system when it goes live.
For example, this could include tests like backing up and restoring the system in a simulated disaster (disaster recovery), etc.
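To make the disaster recovery idea concrete, here is a minimal Python sketch of a backup-and-restore drill. The file names and data are invented for illustration; a real OAT exercise would of course target actual infrastructure, not a temp directory.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """Return a SHA-256 digest so we can prove the restore is byte-identical."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Work in a throwaway directory so the sketch is self-contained.
workdir = Path(tempfile.mkdtemp())
live_db = workdir / "customers.db"      # invented name: stands in for live data
backup = workdir / "customers.db.bak"

live_db.write_text("id,name\n1,Alice\n2,Bob\n")  # pretend production data
before = checksum(live_db)

shutil.copy2(live_db, backup)   # 1. take the backup
live_db.unlink()                # 2. simulate the disaster
shutil.copy2(backup, live_db)   # 3. restore from the backup

assert checksum(live_db) == before, "restored data does not match the original"
print("disaster recovery drill passed")
```

The point of the drill is the final assertion: the restored system must be provably identical to the pre-disaster state, not just present.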
This is quite a technical level of acceptance testing and requires experts that have had, ideally, years of experience or acquired a high level of skills and knowledge in a short space of time.
Quite a mouthful, but ultimately it comes down to two separate types of testing, although there are aspects of each that overlap.
Contractual acceptance testing involves testers (or representative testers) verifying that the system in question meets the agreed contract specification.
The agreed contract specification is often used to verify that the system is operating as expected. This contract and agreed behavior is usually created at the start, once a contract is agreed, at the very beginning of the project, way before testing starts.
Regulatory acceptance testing is slightly different because it involves testing to verify that a system meets certain regulatory standards.
During my experience of testing, I worked for an energy company (Gas & Electric). They were launching a Customer Management System (CMS) in a new European country.
However, the difference with this particular country is, they had a strict regulatory licensing quality standard that needed to be proven and verified before the system could go live and be awarded its license.
Therefore, I was drafted as a test manager to prove to the regulatory representatives that the system was fit for purpose to go into production in their country.
Typically for regulatory acceptance testing, you will have a neutral third-party company that will verify what you have tested to ensure it meets the required standards.
I also had experience of this, which included verifying that the test assets were created as per their specification, as well as face-to-face visits where they witnessed what we were testing.
Depending on who you are working with, these tests can be quite stressful. Why? Because they usually have to be done within a restricted time frame. And there are usually not many chances to get it right.
And, to make matters worse, the stakes are usually quite high because of the investment that has been made to get the software to pass the regulatory standards before the system can go live.
Alpha or Beta testing is a strategy used to verify that the system is fit for purpose. However, it is typically used by developers of commercial off-the-shelf software (COTS).
The objective of this testing is to get an early understanding of the system and verify it is fit for purpose.
You may have even noticed some of these strategies used by big players such as Google. Sometimes they roll out an early stage “Beta” and get user feedback, before rolling out the final product.
The idea is to get a good dataset from the intended customers to understand if the system is working before you roll out the fully tested and production-ready version.
Like most phases of testing, test cases will typically be drawn up. However, a test basis is required to write the test cases against, are you with me? Typical documents used as a test basis are as follows:
However, we also discussed OAT earlier. This usually requires some of the following:
As well as the test basis documents, there are obviously the actual objects that will be tested, such as:
At this stage, defects are expected to be very few. However, as I mentioned earlier, you can expect to see some. The types of defects that usually show up during this phase are as follows:
All of these examples are important and need to be investigated. And, at this stage in the project, there is very little time to rectify these issues.
Depending on the type of acceptance testing being performed, various different resources can be involved in the testing.
For UAT, you would expect the end-users to be involved or potentially some of the stakeholders.
For OAT, this is typically done by system administrators, support workers, or another resource with lots of working knowledge of the system in question.
Regulatory and contractual acceptance testing is typically done as a collaborative effort with the actual system testers and a representative (from the regulatory body or third party contract owner/partner).
As mentioned earlier most people assume that acceptance testing is done at the end of testing after system testing has completed.
In my experience, this has usually been the case. However, it does not necessarily have to happen at this stage, especially when you are dealing with iterative development methodologies such as Agile.
This is because each iteration is like a mini-project within itself or known as a sprint (click here for a full explanation). You may find different types of acceptance testing done at the end of each sprint, treating it like a mini-project.
If you are new to system testing, you may be wondering what it is and why there are typically multiple rounds (or cycles) of testing.
Why are multiple rounds of testing needed? Because each initial round typically results in defects. Each time a new build is introduced to address the defects, a new cycle is started to verify that the initial tests, which had passed, have not been adversely affected.
Now that you know why multiple rounds of system testing are required, you may be wondering what this “Gold Build” is that people talk about, the objectives of system testing, who actually executes the tests and much more.
The gold build is the ideal build that should, in theory, have little to no defects after a round of system testing. But, let me explain where this all comes from…
System Testing is needed because a system is very rarely correct on the first iteration. Typically after the expected tests are run a number of defects are detected.
If you continued where you left off after the defects have been fixed you would not know if the previous tests, that had been run and passed, have been tainted by the new code that had been implemented, are you with me?
Therefore, it is important to repeat these tests, that have previously been run, on a new iteration (cycle/round) to confirm that there have been no adverse effects to previously tested code. (Basically a form of regression).
Usually, after a couple of rounds of testing in this manner, you can get to what is deemed as the “gold build”. The gold build is expected to give little to no defects and be the final round of testing before completing the system testing phase.
However, in reality, these things never quite go as planned. And, there can be a lot more testing iterations (cycles) than expected. Usually, based on my experience, a judgment call by the stakeholders is made, typically called a “Go-No-Go” meeting to decide if the code will be promoted.
System testing focuses on the entire system as a whole, as opposed to looking at it from a function-by-function perspective. At this phase, you’re looking at the entire system; it is often regarded as an end-to-end view of testing.
System testing is quite important because it is often part of the decision on whether the system is safe for release. In some cases, it is used as a legal or regulatory quality gate.
System testing is a key part of the Software Development Lifecycle (SDLC) because one of its main objectives is reducing risk to the production environment.
Its aim is to detect as many defects or functional issues as possible before the system is promoted to the live environment. It is a formal method of validating that the system matches the agreed specification.
Obviously, at this stage, there can be situations where the system does meet the agreed specification, but it still does not function as one would logically expect.
This could turn out to be more of a design issue rather than a system testing error (but more on this later, as to why testers should be involved early).
Finally, one of the additional objectives of system testing is to provide more confidence that the system will work as expected, and also to reduce the risk of defects being passed on to a later level of testing, such as UAT.
Ideally, for system testing, you need a system that closely matches the true production system. However, this is not always realistic due to cost or resource availability. Either way, it is important to emulate the real production system as closely as you can.
This is because you want to feel confident that when it actually goes into production it will perform in the same way that you had it working in system test.
During system testing, as discussed earlier, there may be multiple iterations (cycles) of tests. During each of these iterations, a new build can be implemented which has defect fixes.
It is an opportunity to use automated regression to speed up these repeated tests, so you can confirm that the defect-fix build has not adversely affected the system. Obviously, this regression automation can add complexities as well.
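As a sketch of what such automated regression checks can look like, here is a hypothetical pytest-style suite. `apply_discount` is an invented stand-in for any system function under test; the same tests are simply re-run against each new build (cycle).

```python
# Hypothetical pytest-style regression suite. The same tests are re-executed
# against every new build to confirm that defect fixes have not broken
# previously passing behaviour.

def apply_discount(price: float, percent: float) -> float:
    """Toy system function: return the price reduced by `percent` percent."""
    return round(price * (1 - percent / 100), 2)

def test_standard_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(59.99, 0) == 59.99

def test_full_discount_is_free():
    assert apply_discount(250.0, 100) == 0.0
```

Running `pytest` after each defect-fix build re-executes the whole suite, so a fix that breaks previously passing behaviour shows up immediately, which is exactly the regression risk described above.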
To make sure you have a reference to base your system tests against, it’s important to have a test basis. Documents that can be used for system testing are:
In my experience, at this stage, you’ll often be using functional specs, especially if you are using traditional models such as the V-model. However, don’t be alarmed if you are asked to test with nothing but the system itself, but that’s another story.
However, user stories are just as good and quite commonplace these days, especially with agile (Click here for the advantages of the Agile model).
You may even use user manuals at this stage, depending on what you were testing.
During system testing, you will be testing system objects. However, these can come in many forms, such as:
For example, in my previous roles we have used off-the-shelf products that have already been tested and are actually commercially available software.
In these situations, we were testing the configuration of these off-the-shelf software items. This was to prove that they met the configuration specification outlined by the client.
As you can imagine, during system testing, which is typically one of the largest phases of testing, there is a wide variety of defects that can be found.
However, typical examples of this will be system functions not performing as per the specification.
Even basic graphical user interface issues. These could be low priority defects, such as spelling errors or incorrect labeling.
I have had experience of defects relating to the firmware of hardware devices, such as XML messages that were expected to cause a certain action in the hardware device, but the device did not respond as outlined in the specification.
A number of different testing techniques can be used during system testing. This includes tactics such as:
For example, using BVA, you will be testing the boundaries of a particular input or condition in the system. So, if you were expecting values between 1 and 3 in an input box, the boundaries are 1 and 3, and you would test values on and either side of each boundary:
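As a sketch, the two-value boundary checks for that 1-to-3 input could look like this in Python; `validate_quantity` is an invented example function standing in for the system’s input validation.

```python
def validate_quantity(value: int) -> bool:
    """Hypothetical input rule: only values from 1 to 3 inclusive are accepted."""
    return 1 <= value <= 3

# Two-value boundary value analysis for the range 1..3:
# test on each boundary and just outside it.
assert validate_quantity(0) is False   # just below the lower boundary
assert validate_quantity(1) is True    # on the lower boundary
assert validate_quantity(3) is True    # on the upper boundary
assert validate_quantity(4) is False   # just above the upper boundary
print("all boundary checks passed")
```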
System testing, as you can imagine, is typically done by the system testers. The reason for this is that system testing is a quite technical and skilled activity.
It needs to be done by an individual with good competence and understanding of the systems involved and how to capture the defects adhering to the testing standards.
Therefore, you need a skilled professional that can ensure that the system has been tested correctly.
Testers should be involved as early as possible because it helps to reduce the risk of finding big errors or defects later on in the cycle.
In the SDLC, defects that are found later typically cost the company a lot more money to fix. Therefore, it is better to get these defects found as early as possible.
For example, if big defects are found at the system testing stage, that could impact a lot of resources.
You will lose time with your system testers, as well as additional development resources having to recode the issue. You may even have to involve the designers or the Business Analysts to rethink how they can factor in this code/design change, are you with me?
Therefore, involving testers early, obviously as well as other stakeholders, can help to eliminate these issues as early as possible before any code or testing has even started.
Integration testing is a pivotal part of testing and one of those areas that is always covered in the ISTQB syllabus. However, who actually does this and what is it? These are the questions I plan to cover in this article…
Who does integration testing? Developers and testers perform integration testing, depending on the level of testing. There is component-level as well as system-level integration testing, each performed by their respective resources.
Now that you know who does what, let me go on to explain exactly what it is, the objectives, what types of defects you may expect to get from the process and more…
As discussed briefly, Integration testing is a term that encapsulates system as well as component-level testing. Therefore, there are different resources that perform each of these levels.
At the component level you would expect to see a developer doing the work. This is because they need to be intimate with the component and have a deep understanding of how the component works.
At the system integration level, this is typically done by testers. They also need to have a deep understanding, but not of the component, of the architecture of the system and how these interfaces should interact, are you with me?
Integration testing is an important part of the Software Development Life-cycle (SDLC). There are a number of objectives in this process, and for that reason I will give you an overview of them now.
At the integration level, we expect a level of confidence in the systems that are to be integrated. Whether that be the components or the systems themselves.
However, integration testing reduces the risks before getting into further levels of testing, because there may be basic integrations that are not quite correct.
A quick example of this could be a basic XML file interface which is misinterpreted by two separate parties (but more on this later).
The benefit of finding defects early is reducing the cost further down the line. The cost of fixing defects towards the end of a project is substantially more than at the beginning or earlier stages.
Therefore it is important to capture the defects as early as possible. During integration testing is a perfect opportunity to see glaringly obvious integration issues such as what I mentioned earlier in a previous section about XML interfaces.
As discussed earlier there are different levels of integration tests at the component and the system level. But let me give you more detail on each of these levels…
At the component level, we are focused on the interaction of the individual components. This is a lower level than the system.
This happens after a developer completes their component testing (unit testing). They then take it one step further to confirm that the component they have just unit tested can interface with the next component in line.
System-level integration testing usually happens after system testing (Click here to see why multiple rounds of system testing are recommended). It is essential to confirm that the system has been tested before going into integration.
The objective here is to prove that the system integrates as defined in the specification with external systems, for example, web services.
So far we have discussed the objectives of integration testing and how it works at both the component and the system level. But what will be used to confirm how the integration should work? This is where test basis items come in…
Examples of test basis documents are as follows:
During integration, there are a number of different objects that can be used at both levels, as you can imagine. To give you an idea, I will list a few of them below.
Once you are in the actual integration test phase it is a good idea to find as many defects as you can, as I explained earlier. However, what kind of defects can you expect to see?
At the component level, typical defects include data issues, interface problems, or functions that are not handled correctly, e.g. causing infinite loops or system component errors.
At the system level, you will typically see defects around the actual system interfaces. One of the biggest issues that I personally experienced during my integration testing involved XML interfaces integrating with external systems hosted by third parties.
This usually comes down to ambiguous design specifications. These specs could be interpreted in two different ways. Or even data items that were not correctly defined and left open for debate.
It’s amazing the number of man-hours that can be lost just by a simple field not being defined correctly in a design specification.
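Here is a minimal Python sketch of the kind of ambiguity described above: two parties parse the same XML date field, one assuming day/month/year and the other month/day/year. The payload and names are invented for illustration.

```python
# Sketch of how one under-specified field causes an integration defect.
# Two systems receive the same XML but interpret <date> differently because
# the design specification never fixed the date format.

import xml.etree.ElementTree as ET
from datetime import datetime

payload = "<application><date>01/02/2024</date></application>"
raw_date = ET.fromstring(payload).findtext("date")

# Party A assumed day/month/year...
party_a = datetime.strptime(raw_date, "%d/%m/%Y").date()
# ...while Party B assumed month/day/year.
party_b = datetime.strptime(raw_date, "%m/%d/%Y").date()

print(party_a)  # 2024-02-01
print(party_b)  # 2024-01-02
assert party_a != party_b  # same message, two incompatible readings
```

Both systems parse the message “successfully”, which is exactly why this class of defect only surfaces during integration testing rather than at the component level.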
However, this is precisely the purpose of capturing these things at the integration level, and not later on in the process when they cost more money.
On the surface, integration may sound straightforward, but it can get very complicated depending on how large the system is.
It also depends on how frequently the integration tests run and how far into the integration testing you are, whether at the component or system level.
Large systems, for example financial systems, can be very complicated, especially when you’re dealing with multiple third-party vendors that have to interface with different systems.
I remember dealing with a client who specialized in vehicle finance applications. During my time working for this client, I experienced a number of challenges with simple interfaces into their system.
During this process, one of the key things that helped push the project through was regular contact with the client and clearly defined specifications.
Briefly, we talked about integration testing at the system and component level but when is this usually done?
At the component level, these tests will typically happen immediately after a component has been completed and unit tested.
However, logically speaking, you need more than one component completed before the integration can be tested.
Therefore, there may be a reliance on developers collaborating so that they can make the integration happen.
As for system-level integration, as you can imagine, this ideally happens after system testing. And again, there is a dependency on the third-party systems being available.
Therefore, you can be at the mercy of third parties and playing the waiting game if they are not ready when you are.
You may be wondering how integration testing fits into the big scheme of things, right? Is it a black box testing activity or a white-box? The reality is, it can be white or black box testing depending on the test scenario. The best way to explain this is with an example…
Let’s start with a white-box testing example. A white-box integration test could examine how two web services integrate at the component level.
An example of a black box test in this context could be, testing the integration of a vehicle finance system with an external credit referencing agencies system, in particular, making sure that the XML feed from the external agencies system integrates with your system.
So far you have learned about what integration testing is, its objective, etc. But why is it even needed? This is the question I will help to explain now…
Integration testing is essential because if dependent systems or components cannot talk to each other, the system is useless, are you with me?
For example, from a system integration perspective, imagine that you have an online credit-checking app. You can log in and see your profile, but when you click the “get credit score” button you get no response, because the interface to Experian is not integrated correctly. Would you be happy to pay for that service? Are you with me?
This is a simple but powerful example of how critical integration testing is.
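To sketch how that integration failure can be tested, here is a hypothetical Python example. The external credit agency is replaced with a test double (one working, one failing), so the test can verify the app’s behaviour in both cases; `CreditApp` and its `fetch_score` interface are invented names for illustration.

```python
# Sketch of testing the credit-score scenario with a test double in place of
# the real external agency. All names are invented.

class CreditApp:
    def __init__(self, fetch_score):
        self.fetch_score = fetch_score  # injected external interface

    def get_credit_score(self, user_id: str) -> str:
        try:
            return f"Your score is {self.fetch_score(user_id)}"
        except ConnectionError:
            return "Score unavailable, please try again later"

# Happy path: the external interface responds.
app = CreditApp(fetch_score=lambda user_id: 742)
assert app.get_credit_score("alice") == "Your score is 742"

# Broken integration: the external interface is down.
def broken_interface(user_id):
    raise ConnectionError("agency unreachable")

app = CreditApp(fetch_score=broken_interface)
assert app.get_credit_score("alice") == "Score unavailable, please try again later"
print("integration behaviour verified")
```

The real integration test would then point the same checks at the actual third-party interface, which is where the defects described above tend to surface.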
In integration, there are a few different ways to actually implement it. Each one has an inherent risk associated. Let me explain what they are…
This is one of the riskiest methods. It essentially means that all integrations will be tested together and, fingers crossed, it will work. The problem with this method is identifying what has caused an issue; with so many integrations at once, it is a real challenge.
As the name suggests, the testing will start at the top-level components or systems and work its way down the integration tree.
An example would be testing the login page loads the profile page of a CRM system.
As you can imagine, this is the opposite of top-down testing. The integrations at the bottom will be tested first until you reach the top. So, following on from the previous example, the login page would be tested last. This method is only really effective if the components or systems at the same level are ready.
If you are preparing for your ISTQB, you are probably keen to understand how stubs and drivers are used. For that reason, in this article, I will explain this to you…
Why are Stubs and Drivers Used in Component Testing? This is to enable a developer to unit test a code component without other dependent code modules being available. Stubs help to produce an expected output whereas a driver will send a required input to the code module.
Now that you know what they are at a high level, let me explain in more detail, with examples. Also, exactly what component testing is, the objectives and so much more.
For example, if you have an address book app and the dev wants to test the “login” code component, in the absence of a display contacts code module a stub could be used to test the login component in isolation.
In another example, for a driver: if we use the address book app example again, and the login page is not ready but the “display contacts” code module is ready for testing, the developer can use a driver in place of the login feature so he can test the display contacts component in isolation.
What is component testing?
This method focuses on a specific code component in isolation. It is also referred to as unit or module testing.
There is some confusion regarding unit and component testing, with other sources claiming or suggesting they are separate, but according to the latest ISTQB syllabus, they are one and the same thing. And, from my experience, they have been treated as one and the same as well.
Now that you understand what component testing is, from a high level. Let me explain how stubs and drivers are used. This will broaden your knowledge and expand on the brief explanation I covered in the earlier section.
As discussed, component testing is typically done in isolation. Meaning, to do this correctly other supporting objects will be required, such as:
Drivers are placeholder (dummy) programs that call the function you need to test. For example, in the address book analogy from earlier, the driver could stand in for the login code module, calling the “display contacts” component you wish to test.
Stubs take inputs from a calling function and return the required output. So, referring back to the address book app example, a stub could be used to replace the display contacts component while the developer tests the login component.
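Pulling the two ideas together, here is a minimal Python sketch of the address book example. All names are invented; the point is simply that the stub replaces the module the component under test calls, while the driver replaces the module that would call it.

```python
# Component under test #1: login. The real "display contacts" module is not
# ready, so a STUB stands in for it and returns a canned response.

def display_contacts_stub(user_id):
    return ["(stubbed contact list)"]

def login(username, password, display_contacts=display_contacts_stub):
    """Component under test: authenticate, then hand off to display contacts."""
    if username == "demo" and password == "secret":
        return display_contacts(username)
    return None

assert login("demo", "secret") == ["(stubbed contact list)"]
assert login("demo", "wrong") is None

# Component under test #2: display contacts. The real login page is not
# ready, so a DRIVER calls the component directly with the input it needs.

def display_contacts(user_id):
    contacts = {"demo": ["Alice", "Bob"]}
    return contacts.get(user_id, [])

def login_driver():
    """Driver: stands in for login and feeds display_contacts a user id."""
    return display_contacts("demo")

assert login_driver() == ["Alice", "Bob"]
print("stub and driver checks passed")
```

Either way, the component under test runs in isolation, which is the whole purpose of stubs and drivers.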
You may be wondering why we need component testing. To help make this clear, I will explain its objectives in this section.
The main objective of component testing is to reduce the risk posed by the component. It is better to find defects early rather than waiting until later test levels. Why? Simple: it saves costs by detecting defects as early as possible.
It is far easier and cheaper to verify that the component meets the related specification before promoting it to integration or system testing.
I have had personal experiences in the past where code was promoted to system test with glaringly obvious issues that wasted many test resources’ time.
For example, after attempting to test the application, discovering that we are not even able to log in, are you with me?
It may not sound like much to you right now. But look at it through the lens of a business. Each tester costs money per hour. If the code is not testable, the time lost could be half a day, or even worse. And, when you multiply that by the number of test resources booked to test, it gets pricey, are you with me?
Using simple techniques such as component testing at the earlier stage helps to reduce these risks.
At this component stage, another objective is verifying that the code module matches the agreed specification. This applies at a functional as well as a non-functional level.
If it is verified at this stage, it reduces the chance (but not eliminates) of defects going forward.
Automating regression for ever-changing components.
In some cases, when code components are changing frequently, it may be advantageous to automate some component regression tests to verify that the component(s) have not regressed since the last iteration. This is even more relevant when an Agile methodology is used.
Now that you understand what component testing is, its benefits, and why we use it, it is also important to understand how it is verified. What do I mean? Well, how can you be sure it’s right? In this section, I will explain some documents that can be used to do just that, also referred to as the “test basis”.
Here are a few documents that can be used:
In my experience, these are the most common documents used as a test basis. Typically, they clearly outline what is to be expected by the code components. Making the component testing easy to see if it is correct.
One thing to remember: these specifications can also be wrong; they are not perfect. However, validating the specification is not the objective of component testing. Concerns can be noted, but ultimately the task is to verify that the code meets the specification.
According to the latest ISTQB syllabus, the code and data models can also be used. To be honest, in my time I have not seen this done. But, I am not saying it’s impossible, just that I have not experienced this.
Understanding the high-level definitions of component testing is one thing, but grasping it exactly, down to the code level, is another. What do I mean? Well, what type of code will be tested in this phase? Let me explain…
During component testing here are some examples of code that can be tested:
These are examples, in reality, there are more. But this is a good flavor of what can be expected.
As you can imagine, during component testing you will encounter many different issues. But, that’s a good thing, right? But, what kind of issues typically rear their ugly head? Let me explain…
During component testing, here are some of the most common issues detected:
These errors are a lot easier to pick up these days. Why? Because most up-to-date code management software will flag them automatically. But there is always room for error.
These errors are more complex when compared to syntax errors. Why? Because they are not always obvious. The challenge with these errors is, it may look ok on the surface (passes syntax checks, etc.), but it won’t give you the expected result. This is where an experienced developer will save the business a lot of time.
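A quick Python illustration of the difference: the buggy function below passes every syntax check, yet its `or` should be an `and`, so it accepts any age at all. This is exactly the kind of defect component testing is there to catch (the function names are invented).

```python
def is_working_age(age: int) -> bool:
    # Intended rule: 18 to 65 inclusive.
    return age >= 18 or age <= 65   # BUG: this is True for every age

def is_working_age_fixed(age: int) -> bool:
    # Corrected logic: both conditions must hold.
    return 18 <= age <= 65

assert is_working_age(10) is True         # syntactically valid, logically wrong
assert is_working_age_fixed(10) is False  # the component test catches the bug
assert is_working_age_fixed(40) is True
```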
Data issues can cause code errors where they are least expected. It could be a weird, unexpected combination of data, used in some obscure scenario, that trips up the program.
So, now we know the type of issues we can expect. But, which resource is usually responsible for this testing? Let me explain…
Usually, the developer that actually coded the component is responsible for this testing. And from my experience, this is the case. However, I have seen many cases where the developers attempt to avoid this task and effectively promote the code directly to testers. Not ideal.
When new methodologies such as Agile are being used along with automation, additional tasks can be thrown upon the developer, such as producing automated component-level tests before even coding the module.
Personally, I think this is good practice. And, it saves the business wasted time and money on defective code.
You may be wondering how component integration fits into things. Is it the same as component testing, or a completely different phase? In this section, I plan to explain exactly what it is.
Component integration should follow component testing, in an ideal world. At this level, it is assumed that the individual components are tested.
The objective here is to verify that the components talk to each other, as per the agreed specification. This is verified before moving on to the next level of testing.
Earlier we talked about component integration, when it is expected to be done, and established that it is its own unique phase. But when is the actual component testing done? In this section, I will explain.
Component testing is typically done before component integration testing. In fact, logically, it has to be. Why? Because the code components need to be verified first, before moving on to integration. Think of this as a necessary gate.
Ideally, once the code is written, component testing will commence to verify that the code meets the detailed design or component specification.