
Architecturally Aligned Testing


Architecture heavily impacts testing. The way you test a monolith differs from how you should test a loosely coupled system. Let’s look at microservices as a prominent example of loosely coupled systems. As we will see, they challenge our previous definition of testing.

Conway’s Law

Melvin Conway observed that the way organizations are structured has a strong impact on the products they create: "Any organization that designs a system […] will inevitably produce a design whose structure is a copy of the organization's communication structure".

Conway’s law says that a company’s software architecture usually reflects its organizational structure. Conversely, how we organize teams has a powerful effect on the architecture and on testing approaches.

The interdependency of architecture and organizational structure is also reflected in the BAPO model, which assumes that in the context of software engineering four concerns need to be addressed: Business, Architecture, Process, and Organization. The idea is to define the architecture based on the business needs. However, as Jan Bosch (2017) states, most companies are not BAPO but rather OPAB, which means that the organizational structure is the basis for defining the architecture.

If an organization wants to change the architecture of a software product, its organizational structure (including how testing is organized) can be an enabler or impediment. Let’s look at different organizational structures:

  1. Functional-oriented organizations optimize for cost. They typically have a hierarchical structure with teams of specialists, each responsible for its functional area.
  2. Market-oriented organizations optimize for speed. They are typically rather flat, with cross-functional, autonomous teams responsible for implementing and delivering their products, even though this may lead to redundancies across the organization.
  3. Matrix-oriented organizations attempt to combine the functional and market orientation.

For more information, see: Gene Kim (2016).

Testing Microservices

How should a loosely coupled architecture such as microservices be tested?

A loosely coupled system follows the service autonomy principle, as its architecture is based on decomposition into autonomous parts.

Microservices are increasingly being adopted by organizations to improve the autonomy of their teams and increase the speed of change. Microservice applications are composed of small, independently versioned, scalable, customer-focused services that communicate with each other over standard protocols with well-defined interfaces.


Microservices:

  • Are autonomous and truly loosely coupled, since each microservice is physically separated from the others.
  • Are responsible for one functionality (functional decomposition).
  • Can be deployed and released independently.
  • Shall embed dependencies to ensure that the number of integration points stays manageable.
  • Shall be resilient and isolate failures by acting autonomously.
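The last two points can be sketched in code: a minimal caller that isolates a failing dependency behind a short timeout and a static fallback. The service name, endpoint, and payload format are assumptions for illustration only.

```python
import urllib.error
import urllib.request

FALLBACK_RECOMMENDATIONS = ["bestseller-1", "bestseller-2"]  # static fallback

def fetch_recommendations(user_id: str, base_url: str = "http://recommender:8080") -> list:
    """Call a (hypothetical) recommender service while isolating its failures.

    A short timeout plus a static fallback keeps the calling service
    responsive even when the dependency is slow or down.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/recommendations/{user_id}", timeout=0.5) as resp:
            return resp.read().decode().split(",")
    except (urllib.error.URLError, TimeoutError):
        return FALLBACK_RECOMMENDATIONS
```

In a real system, libraries implementing circuit breakers take this idea further, but the principle is the same: a failing collaborator must not take the whole service down.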

What does this mean for testing?

Domain-oriented, vertical slicing per business capability means that there can still be a layered architecture, but layering becomes the secondary organization mechanism. Autonomous product development teams are responsible for implementing, testing, and delivering the business capability:


Microservice architectures are often accompanied by DevOps: In agile and DevOps, there is no separate design phase with an architect responsible for defining the architecture prior to the development phase. Instead, the architecture is defined in a more federated way, addressed across the project, and owned by the whole team.

And how is the test approach changing for those systems? Testing, too, is typically no longer done in a separate test phase by an independent test team. Instead, everybody is responsible for quality.

There is no separate testing phase:

DevOps and Continuous Integration & Continuous Delivery (CI & CD) address the need to deliver to the customer fast and with high quality. This means that tests must provide fast and meaningful feedback. Releasing with both speed and confidence requires immediate feedback from Continuous Testing:

  • Almost all testing should be automated. Enabling Continuous Testing is crucial for CI & CD, as it provides timely feedback about quality and enables the team to learn and adapt fast.
  • However, this does not mean that Continuous Testing is only about automating tests. As we will see, it is a holistic approach to getting timely feedback about quality:

Source: Dan Ashby (2016)

Testing is not an isolated activity but rather integrated in a comprehensive set of practices to promote quality: “Still, many companies struggle with changing their process from 'testing or inspecting quality in' to achieving quality from the start – through culture, design, craftsmanship, and leadership.” (Ben Linders 2017).

Developers play an important role in ensuring high quality, as encouraged, for example, by software craftsmanship principles. Software craftsmanship is about professionalism in software development: “Well-crafted software means that, regardless of how old the application is, developers can understand it easily. The side effects are well known and controlled. It has high and reliable test coverage, clear and simple design, and business language well expressed in the code” (Sandro Mancuso, 2015).

Continuous Testing often means shifting both left and right:

What does shift left and right mean?

Tests are typically executed in a specific order, starting (on the left) with unit tests, as they provide fast feedback, followed by tests that take more time to execute but, on the other hand, increase confidence in the release candidate:

There is a shift left towards automated unit and component level tests, executed in Continuous Integration & Continuous Delivery (CI & CD) pipelines that are owned by product development teams. However, this has its costs: Given that the integration scenarios are complex and dynamic, integration testing becomes more challenging.

When we talk about testing in isolation versus collaboration-based testing, it is important to differentiate solitary and sociable test approaches:

  • Solitary testing is an activity by which the software unit under test is tested in isolation, replacing collaborators (upstream dependencies) with test doubles such as mocks or stubs.
  • Sociable testing is an activity by which the software unit under test may be tested together with its collaborators.
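The difference can be illustrated with a small sketch (class and method names are hypothetical): the solitary test replaces the collaborator with a test double, while the sociable test lets the real collaborator participate.

```python
from unittest.mock import Mock

# Unit under test (hypothetical): a pricing service that collaborates
# with a tax calculator.
class TaxCalculator:
    def rate_for(self, country: str) -> float:
        return {"DE": 0.19, "US": 0.07}.get(country, 0.0)

class PriceService:
    def __init__(self, tax_calculator):
        self.tax_calculator = tax_calculator

    def gross_price(self, net: float, country: str) -> float:
        return round(net * (1 + self.tax_calculator.rate_for(country)), 2)

# Solitary test: the collaborator is replaced by a test double (a stub).
def test_gross_price_solitary():
    stub = Mock()
    stub.rate_for.return_value = 0.19
    assert PriceService(stub).gross_price(100.0, "DE") == 119.0

# Sociable test: the real collaborator participates in the test.
def test_gross_price_sociable():
    assert PriceService(TaxCalculator()).gross_price(100.0, "DE") == 119.0
```

The solitary variant fails only when `PriceService` itself is broken; the sociable variant also catches contract drift between the two classes, at the price of a larger blast radius when it fails.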

Following the idea of the test pyramid, the focus of microservices testing is typically on having many unit tests and a comprehensive set of component tests. Microservices are tested in isolation from each other, followed by a smaller number of integration and, eventually, End-to-End tests. Having many End-to-End tests is typically seen as a problem, as they take more time to execute than unit and component tests and are often fragile.

Interestingly, there is criticism about the value of the test pyramid for microservice testing and whether focusing on unit and component tests in lieu of integration tests is still reasonable (see for example: Cindy Sridharan 2017 and André Schaffer 2018). That is a legitimate question: if the integration of microservices is the most challenging part, should we not focus more on integration testing and rely more on sociable tests?

In my opinion, the answer is yes and no: It may make sense to test a microservice together with its dependencies if those embedded dependencies do not provide independent business capabilities. Such a sociable test of a microservice should be done before testing the integration with other microservices that encapsulate other business capabilities. If there are not enough unit and component level tests, analyzing failed integration tests becomes even more complicated. Also, besides traditional integration tests, API and contract integration tests are important, as they help to keep the pipelines of collaborating microservices independent from each other. Therefore, it is important to test on the appropriate test level.
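The contract tests mentioned above can be sketched as a minimal consumer-driven contract check. The contract fields and payloads below are hypothetical; real tools such as Pact automate recording and verifying such contracts across pipelines.

```python
# A (simplified) consumer-driven contract: the consumer records the
# fields and types it relies on; the provider pipeline checks every
# release candidate against it, so both pipelines stay independent.
ORDER_CONTRACT = {
    "order_id": str,
    "status": str,
    "total": float,
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """True if the response contains every contracted field with the right type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# Provider-side verification of a (hypothetical) response payload:
candidate = {"order_id": "o-42", "status": "SHIPPED", "total": 19.99, "extra": "ok"}
assert satisfies_contract(candidate, ORDER_CONTRACT)        # extra fields are fine
assert not satisfies_contract({"order_id": "o-42"}, ORDER_CONTRACT)  # missing fields break it
```

The key property is that extra fields do not break consumers, while removing or retyping a contracted field fails the provider build before deployment.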

Another aspect is also important: Given that integration scenarios for microservices are complex and dynamic, how many integration tests are sufficient? There is a trade-off between putting more effort into testing upfront and detecting issues fast in production. Besides the shift left, there is also a trend that testing, deploying, and monitoring in production move closer together. Controlled experiments in production help to identify problems fast and to get fast feedback from actual customers.

What kinds of production traffic tests can be differentiated?

This means that there is also a shift right in testing.

As outlined earlier, market-oriented organizations optimize for speed, and the shift right in testing helps accomplish this goal. As part of continual experimentation, it is possible to explore and learn. And with controlled experiments, you get feedback from customers using the application under real-world conditions.
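One common controlled experiment is a canary release, which can be sketched as a simple routing decision. The fraction and release names below are illustrative assumptions.

```python
import hashlib

def route_request(user_id: str, canary_fraction: float = 0.05) -> str:
    """Route a fixed fraction of users to the canary release.

    Hashing the user id (instead of drawing a random number per request)
    keeps each user's experience consistent for the whole experiment.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

While the canary serves its slice of real traffic, its error rates and latencies are compared against the stable release; the rollout proceeds or is rolled back based on that feedback.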

Controlled experiments can also support a resilient architecture. As we discussed earlier, microservices shall be resilient and isolate failures by acting autonomously. But what does resilient mean? "We usually think of robust systems as the opposite of fragile ones because they don't care too much about (i.e., neither like nor dislike) stress. In fact, robust systems are merely less fragile. The true opposite of a fragile system would be one that actually benefits from stress. Nassim Taleb calls such systems antifragile" (Dave Zwieback 2014, p. 3). A traditional test approach would be to test the system prior to releases to ensure its stability. However, those tests support the robustness but not the “antifragility” of a system. A different test approach is to run proactive, controlled chaos experiments by introducing failures in production. This orchestrated chaos guides the product development teams to think about failure tolerance and possibilities to isolate failures.
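A minimal sketch of such failure injection is shown below; all names are hypothetical, and tools like Netflix's Chaos Monkey apply the same idea at infrastructure level rather than in application code.

```python
import random

def chaos(fn, failure_rate: float = 0.1, rng=random.random):
    """Wrap a dependency call with controlled failure injection.

    During a chaos experiment, a fraction of calls is replaced by an
    exception; the surrounding system must prove it degrades gracefully.
    """
    def wrapped(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("chaos: injected dependency failure")
        return fn(*args, **kwargs)
    return wrapped

# With failure_rate=1.0 every call fails; the caller must degrade gracefully.
flaky_lookup = chaos(lambda key: {"a": 1}.get(key), failure_rate=1.0)
try:
    value = flaky_lookup("a")
except ConnectionError:
    value = "fallback"  # the caller isolates the injected failure
```

The experiment passes not when no failures occur, but when the injected failures stay contained and the observed customer impact remains within an agreed threshold.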


For loosely coupled systems such as microservices, testing shall not be done in a separate test phase by a dedicated test team, but instead collaboratively by cross-functional product development teams. There is a shift left in testing to ensure that teams stay autonomous, and a shift right in testing towards exploration and experimentation.

Continuous Testing, as automated testing within CI & CD as well as a culture of experimentation and exploration, is an enabler to release loosely coupled services fast and reliably.

posted Nov 21, 2018 by Arun


Related Articles

What is Integration Testing

It is the second level of testing, after unit testing. At this level of software testing, individual units are combined and tested as a group. The purpose of this level of testing is to find bugs in the interaction between the integrated units.

Need for Integration Testing

Different modules are developed by different developers, so it becomes necessary to test all units after integration. Most applications also interact with third-party tools or APIs, which need to be tested as well.

  • In the Agile era, requirements change frequently, and developers often deploy changes without unit testing them. Integration testing becomes important to minimize the chance of integration issues.

  • Test drivers and test stubs are used in integration testing as simulators if required.

Approaches to Integration Testing

Big Bang Approach

The big bang approach integrates all the modules at once and verifies whether the system works as expected after integration. If any issue is detected in the completely integrated system, it is difficult to find out which module caused it.

Bottom-Up Approach

Bottom-up testing starts from the lowest unit of the application and gradually moves up. Integration continues until all modules are integrated and the entire application is tested as a single unit. If an upper module is not yet developed, a simulator called a driver is used.


Top-Down Approach

Top-down is an approach to integration testing where top-level units are tested first and lower-level units are tested after that. If a lower module is not yet developed, a simulator called a stub is used.
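The driver and stub roles described in the bottom-up and top-down approaches can be sketched as follows; all module and function names are hypothetical.

```python
# Lowest-level module in bottom-up integration (hypothetical example):
def calculate_discount(order_total: float) -> float:
    return round(order_total * 0.1, 2) if order_total > 100 else 0.0

# Driver: simulates the not-yet-developed upper module by calling the
# lower module with representative inputs (bottom-up approach).
def checkout_driver() -> list:
    return [calculate_discount(total) for total in (50.0, 150.0)]

# Stub: simulates a not-yet-developed lower module with a canned answer
# so the upper module can already be tested (top-down approach).
def payment_gateway_stub(amount: float) -> str:
    return "APPROVED"  # canned response instead of the real payment gateway

# Upper module under test in top-down integration:
def checkout(total: float) -> str:
    return payment_gateway_stub(total - calculate_discount(total))
```

A driver exercises a finished lower module from above; a stub stands in for a missing lower module from below. Both are replaced by the real modules as integration proceeds.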