Integration testing is a pivotal part of testing and one of those areas that is always covered in the ISTQB syllabus. However, who actually does this and what is it? These are the questions I plan to cover in this article…
Who does integration testing? Both developers and testers perform integration testing, depending on the level. There is component integration testing as well as system integration testing, and each is carried out by its respective resources.
Now that you know who does what, let me go on to explain exactly what it is, the objectives, what types of defects you may expect to get from the process and more…
As discussed briefly, integration testing is a term that encapsulates both component-level and system-level testing. Therefore, different resources perform each of these levels.
At the component level you would expect to see a developer doing the work. This is because they need an intimate, in-depth understanding of how the component works.
At the system integration level, the work is typically done by testers. They also need a deep understanding, not of the individual components, but of the system architecture and how its interfaces should interact, are you with me?
Integration testing is an important part of the Software Development Life-cycle (SDLC). There are a number of objectives in this process, so I will give you an overview of them now.
At the integration level, we expect to build confidence in the items that are to be integrated, whether those are components or the systems themselves.
Integration testing also reduces risk before moving into further levels of testing, because basic integrations may not be quite correct.
A quick example of this could be a basic XML file interface that is interpreted differently by two separate parties (but more on this later).
The benefit of finding defects early is reducing the cost further down the line. The cost of fixing defects towards the end of a project is substantially more than at the beginning or earlier stages.
Therefore it is important to capture defects as early as possible. Integration testing is a perfect opportunity to catch glaringly obvious integration issues, such as the XML interface example I mentioned earlier.
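To make the XML example concrete, here is a minimal Python sketch of how two parties can read the same payload differently. The payload, the "date" field, and both date formats are hypothetical assumptions, not from any real system:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

# Hypothetical payload: the "date" field format was never pinned down in
# the design spec, so each side interprets it differently.
payload = "<application><date>03/04/2024</date></application>"

def parse_sender(xml_text):
    # Sending system writes dates as DD/MM/YYYY.
    value = ET.fromstring(xml_text).findtext("date")
    return datetime.strptime(value, "%d/%m/%Y").date()

def parse_receiver(xml_text):
    # Receiving system assumes MM/DD/YYYY -- same bytes, different meaning.
    value = ET.fromstring(xml_text).findtext("date")
    return datetime.strptime(value, "%m/%d/%Y").date()

sent = parse_sender(payload)        # 3 April 2024
received = parse_receiver(payload)  # 4 March 2024
print(sent == received)             # prints False
```

An integration test that compares both sides of the interface exposes the mismatch immediately, long before it reaches production.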
As discussed earlier there are different levels of integration tests at the component and the system level. But let me give you more detail on each of these levels…
At the component level, we are focused on the interaction of the individual components. This is a lower level than the system.
This happens after a developer completes their component testing (unit testing). It takes things one step further by confirming that the component they have just unit tested can interface with the next component in line.
System-level integration testing usually happens after system testing. It is essential to confirm that the system has been tested before going into integration.
The objective here is to prove that the system integrates as defined in the specification with external systems, for example, web services.
So far we have discussed the objectives of integration testing and how it works between a component and a system level. But what will be used to confirm how the integration should work? This is where test basis items come in..
Examples of test basis documents, based on the ISTQB syllabus, are as follows:
- Software and system design documents
- Sequence diagrams
- Interface and communication protocol specifications
- Use cases
- The architecture at the component or system level
- Workflows and external interface definitions
During integration there are a number of different test objects that can be used at both levels, as you can imagine. To give you an idea, a few of these objects are:
- Subsystems
- Databases
- Infrastructure
- Interfaces and APIs
- Microservices
Once you are in the actual integration test phase it is a good idea to find as many defects as you can, as I explained earlier. However, what kind of defects can you expect to see?
At the component level, typical defects include data issues, interface problems, or functions that are not handled correctly, e.g. causing infinite loops or component errors.
At the system level, you will typically see defects around the actual system interfaces. One of the biggest issues I have personally experienced during integration testing is XML interfaces that integrate with external systems hosted by third parties.
This usually comes down to ambiguous design specifications. The specs could be interpreted in two different ways, or data items were not correctly defined and were left open for debate.
It’s amazing the number of man-hours that can be lost just by a simple field not being defined correctly in a design specification.
But this is exactly why we capture these things at the integration level, and not later on in the process when they cost more money.
On the surface, integration testing may sound straightforward, but it can get very complicated depending on how large the system is. It also depends on how frequently the integrations change and how far into integration testing you are, whether that be at the component or system level.
Large systems, such as financial systems, can be very complicated, especially when you are dealing with multiple third-party vendors that have to interface with different systems.
I remember dealing with a client who specialized in vehicle finance applications. During my time working for this client, I experienced a number of challenges with simple interfaces into their system.
During this process, one of the key things that helped push the project through was regular contact with the client and clearly defined specifications.
Briefly, we talked about integration testing at the system and component level but when is this usually done?
At the component-level, these tests will typically happen immediately after a component has been completed and unit tested.
However, logically speaking, you need more than one completed component before the integration can be tested. Therefore there may be a reliance on developers collaborating so that integration can happen.
As for system integration, this ideally happens after system testing, as you can imagine. And again, there is a dependency on the third-party system being available.
Therefore, you can be at the mercy of third parties and playing the waiting game if they are not ready when you are.
You may be wondering how integration testing fits into the big scheme of things, right? Is it a black box testing activity or a white-box? The reality is, it can be white or black box testing depending on the test scenario. The best way to explain this is with an example…
Let’s start with a white-box example. A white-box integration test could examine how two web services integrate at the component level, with visibility of the code on both sides.
An example of a black-box test in this context could be testing the integration of a vehicle finance system with an external credit reference agency's system, in particular making sure that the XML feed from the external agency's system integrates with your own.
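A black-box check like this can be sketched in a few lines of Python. The field names and payloads below are assumptions for illustration, not from any real credit agency feed; the point is that the test knows nothing about the sender's internals, only the agreed interface:

```python
import xml.etree.ElementTree as ET

# Hypothetical contract: the agency's feed must contain these fields.
REQUIRED_FIELDS = ["applicant_id", "credit_score", "report_date"]

def feed_is_valid(xml_text):
    # Black-box validation: check only the externally agreed interface.
    root = ET.fromstring(xml_text)
    return all(root.findtext(field) is not None for field in REQUIRED_FIELDS)

good_feed = ("<report><applicant_id>42</applicant_id>"
             "<credit_score>710</credit_score>"
             "<report_date>2024-04-03</report_date></report>")
bad_feed = "<report><applicant_id>42</applicant_id></report>"  # fields missing

print(feed_is_valid(good_feed))  # True
print(feed_is_valid(bad_feed))   # False
```

A real system integration test would, of course, validate types and values too, but even this simple structural check catches the "field not defined correctly" class of defect discussed above.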
So far you have learned about what integration testing is, its objective, etc. But why is it even needed? This is the question I will help to explain now…
Integration testing is essential because if dependent systems or components cannot talk to each other, the system is useless, are you with me?
For example, from a system integration perspective, imagine an online credit checking app: you can log in and see your profile, but when you click the “get credit score” button you get no response, because the interface to Experian is not integrated correctly. Would you be happy to pay for that service? Are you with me?
This is a simple but powerful example of how critical integration testing is.
In integration, there are a few different ways to actually implement it. Each one has an inherent risk associated with it. Let me explain what they are…
Big-bang integration is one of the riskiest methods. It essentially means that all integrations are tested together at once and, fingers crossed, it will work. The problem with this method is identifying what has caused an issue; with so many integrations happening at once, it is a real challenge.
As the name suggests, top-down testing starts at the top-level components or systems and works its way down the integration tree.
An example would be testing the login page loads the profile page of a CRM system.
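Here is a minimal Python sketch of that top-down idea: the login page (the top-level component) is tested for real, while the profile page it loads is replaced by a stub until the real component is ready. All names, credentials, and return values are hypothetical:

```python
# Top-down sketch: test the top-level component first, stub what's below it.

def profile_page_stub(user_id):
    # Stub: returns a canned profile instead of calling the real component.
    return {"user_id": user_id, "name": "Test User"}

def login(username, password, load_profile=profile_page_stub):
    # Top-level component under test; hard-coded credentials for illustration.
    if username == "alice" and password == "secret":
        return load_profile(user_id=1)
    return None

# Top-down integration test: login runs for real, the profile page is stubbed.
result = login("alice", "secret")
print(result["name"])  # prints Test User
```

When the real profile page is ready, the stub is swapped out and the same test is rerun against the genuine integration.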
Bottom-up testing is, as you can imagine, the opposite of top-down. The integrations at the bottom are tested first, working upwards until you reach the top. So, following on from the previous example, the login page would be tested last. This method is only really effective if the components or systems at a given level are ready at the same time.
If you are preparing for your ISTQB, you are probably keen to understand how stubs and drivers are used. For that reason, in this article, I will explain this to you…
Why are Stubs and Drivers Used in Component Testing? This is to enable a developer to unit test a code component without other dependent code modules being available. Stubs help to produce an expected output whereas a driver will send a required input to the code module.
Now that you know what they are at a high level, let me explain in more detail, with examples. Also, exactly what component testing is, the objectives and so much more.
For example, if you have an address book app and the developer wants to test the “login” code component, but the “display contacts” code module is absent, a stub could be used so the login component can be tested in isolation.
For a driver example, using the address book app again: if the login page is not ready but the “display contacts” code module is ready for testing, the developer can use a driver in place of the login feature so the display contacts component can be tested in isolation.
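Both cases from the address book example can be sketched in Python. Every name here (the functions, credentials, and contact data) is illustrative, not from a real app. In case 1 the login component is tested with display contacts stubbed out; in case 2 a driver stands in for the missing login page and calls the real display contacts component:

```python
# Case 1: test login in isolation -- display contacts is replaced by a stub.
def display_contacts_stub(user_id):
    # Stub: canned output so login can be tested without the real component.
    return ["stub contact"]

def login(username, password, display_contacts=display_contacts_stub):
    # Component under test in case 1 (credentials hard-coded for illustration).
    return display_contacts(user_id=1) if password == "secret" else None

# Case 2: test display_contacts in isolation -- a driver replaces login.
def display_contacts(user_id):
    # Real component under test in case 2.
    contacts = {1: ["Alice", "Bob"]}
    return contacts.get(user_id, [])

def login_driver():
    # Driver: supplies the input the real login page would have sent.
    return display_contacts(user_id=1)

print(login("me", "secret"))  # stub output: ['stub contact']
print(login_driver())         # real component via driver: ['Alice', 'Bob']
```

Note the symmetry: the stub sits *below* the component under test and returns canned output, while the driver sits *above* it and supplies input.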
What is component testing?
This method focuses on a specific code component in isolation. It is also referred to as unit or module testing.
There is some confusion regarding unit and component testing, with other sources claiming or suggesting they are separate, but according to the latest ISTQB syllabus, they are one and the same thing. And, from my experience, they have been treated as one and the same as well.
Now that you understand what component testing is, from a high level. Let me explain how stubs and drivers are used. This will broaden your knowledge and expand on the brief explanation I covered in the earlier section.
As discussed, component testing is typically done in isolation. Meaning, to do this correctly other supporting objects will be required, such as:
Drivers are placeholder (dummy) programs that call the function you need to test. For example, in the address book analogy used earlier, a driver could stand in for the login code module that calls the display contacts component you wish to test.
Stubs are placeholder code modules that take inputs from a calling function and return the required output. Referring back to the address book app example, a stub could replace the display contacts component while the developer tests the login component.
You may be wondering why we need component testing. To make this clear, I will explain its objectives in this section.
The main objective of component testing is to reduce the risk in the component. It is better to find defects early rather than waiting until later test levels. Why? Simple: it saves cost by detecting defects as early as possible.
It is far easier and cheaper to verify that the component meets the related specification before promoting it to integration or system testing.
I have had personal experiences in the past where code was promoted to system test with glaringly obvious issues that wasted a lot of testers' time. For example, after attempting to test the application, discovering that we could not even log in, are you with me?
It may not sound like much to you right now. But look at it through the lens of a business. Each tester costs money per hour. If the code is not testable, the time lost could be half a day, or even worse. And when you multiply that by the number of test resources booked to test, it gets pricey, are you with me?
Using simple techniques such as component testing at the earlier stage helps to reduce these risks.
At this component stage, another objective is verifying that the code module matches the agreed specification, at a functional as well as a non-functional level.
If it is verified at this stage, it reduces the chance (but not eliminates) of defects going forward.
Automating regression for ever-changing components.
In some cases, when code components are changing frequently, it may be advantageous to automate some component regression tests to verify that the component(s) have not regressed since the last iteration. This is even more relevant when an Agile methodology is used.
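As a sketch of what such an automated component regression test might look like in Python: the monthly_payment component and its pinned expected values are hypothetical, but the pattern (asserting last-known-good outputs so a later change that breaks them fails immediately) is the essence of component-level regression:

```python
# Hypothetical component: amortised monthly payment for a loan.
def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return round(principal * r / (1 - (1 + r) ** -months), 2)

def test_monthly_payment_regression():
    # Expectations pinned from the last known-good iteration of the component.
    # If a future change alters these results, the test fails straight away.
    assert monthly_payment(10000, 0.06, 12) == 860.66
    assert monthly_payment(10000, 0.06, 60) == 193.33

test_monthly_payment_regression()
print("regression checks passed")
```

In an Agile setting these checks would typically live in a test runner such as pytest or unittest and run on every commit.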
Now that you understand what component testing is, its benefits and why we use it, it is also important to understand how it is verified. What do I mean? Well, how can you be sure it’s right? In this section, I will explain some documents that can be used to do just that, also referred to as the “test basis”.
Here are a few documents that can be used:
- Detailed design documents
- Component specifications
- Software requirements documents
In my experience, these are the most common documents used as a test basis. Typically, they clearly outline what is to be expected by the code components. Making the component testing easy to see if it is correct.
One thing to remember: these specifications can also be wrong; they are not perfect. However, validating them is not the objective of component testing. Concerns can be noted, but ultimately the task is to verify the code meets the specification.
According to the latest ISTQB syllabus, the code and data models can also be used. To be honest, in my time I have not seen this done. But, I am not saying it’s impossible, just that I have not experienced this.
Understanding the high-level definitions of component testing is one thing, but grasping exactly what happens down at the code level is another. What do I mean? Well, what type of code will be tested in this phase? Let me explain…
During component testing, here are some examples of code that can be tested:
- Individual functions and methods
- Classes and objects
- Modules or programs
- Data structures and database modules
These are examples, in reality, there are more. But this is a good flavor of what can be expected.
As you can imagine, during component testing you will encounter many different issues. But, that’s a good thing, right? But, what kind of issues typically rear their ugly head? Let me explain…
During component testing, here are some of the most common issues detected:
Syntax errors are a lot easier to pick up these days. Why? Because most up-to-date development tools, such as compilers and IDEs, will flag them automatically. But there is always room for error.
Logic errors are more complex than syntax errors. Why? Because they are not always obvious. The challenge with these errors is that the code may look fine on the surface (it passes syntax checks, etc.), but it won’t give you the expected result. This is where an experienced developer will save the business a lot of time.
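To make this concrete, here is a minimal Python sketch of such a logic error. The function names and data are invented for illustration; the code is syntactically valid and passes any syntax check, but an off-by-one mistake gives the wrong answer:

```python
# Hypothetical component: sum the first n items of a list.
def sum_first_n(numbers, n):
    # Logic error: range(n - 1) stops one item short of the intended n.
    total = 0
    for i in range(n - 1):   # bug -- should be range(n)
        total += numbers[i]
    return total

def sum_first_n_fixed(numbers, n):
    # Corrected version.
    return sum(numbers[:n])

data = [10, 20, 30, 40]
print(sum_first_n(data, 3))        # 30 -- wrong
print(sum_first_n_fixed(data, 3))  # 60 -- expected
```

A component test asserting the expected output (60) catches this immediately, whereas a syntax check never would.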
Data issues can cause code errors where they are least expected. It could be a weird, unanticipated combination of data, actually used in some obscure scenario, that trips up the program.
So, now we know the type of issues we can expect. But, which resource is usually responsible for this testing? Let me explain…
Usually, the developer that actually coded the component is responsible for this testing. And from my experience, this is the case. However, I have seen many cases where the developers attempt to avoid this task and effectively promote the code directly to testers. Not ideal.
When newer methodologies such as agile are used together with automation, additional tasks can land on the developer, such as producing automated component-level tests before even coding the module.
Personally, I think this is good practice. And, it saves the business wasted time and money on defective code.
You may be wondering how component integration fits into things. Is it the same as component testing, or a completely different phase? In this section, I plan to explain exactly what it is.
Component integration should follow component testing, in an ideal world. At this level, it is assumed that the individual components are tested.
The objective here is to verify that the components talk to each other, as per the agreed specification. This is done before moving on to the next level of testing.
Earlier we talked about component integration, and when it is expected to be done and established that it is its own unique phase. But, when is the actual component testing done? In this section, I will explain.
Component testing is typically done before component integration testing. In fact, logically, it has to be. Why? Because the code components need to be verified first, before moving on to integration. Think of this as a necessary gate.
Ideally, once the code is written, component testing will commence, to verify the code meets the detailed design or component specification.