Best Practices for Microservices QA Automation


What is a Microservice Architecture Style?

A microservice architecture style is an approach to developing a single application as a suite of small services, each running in its own process. These “small services” communicate with one another through lightweight mechanisms, typically by calling each other’s exposed APIs.

A typical example is Amazon’s online shopping platform. As you can see in the below diagram, each lightweight service runs independently of the others. Even if there is a failure at the payment gateway, users can still add items to their shopping carts and browse other modules. The failure of one module does not bring down the entire system.

Fig. 1:  Amazon microservice architecture
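To make the failure isolation described above concrete, here is a minimal sketch in plain Java of how a caller can degrade gracefully when one service, such as the payment gateway, is unavailable. The `ServiceCall` class and its method names are hypothetical, chosen only for illustration:

```java
import java.util.function.Supplier;

class ServiceCall {
    // If one service (e.g. the payment gateway) is down, the caller
    // returns a fallback value instead of letting the failure
    // propagate and take the whole application down.
    static <T> T callWithFallback(Supplier<T> remoteCall, T fallback) {
        try {
            return remoteCall.get();
        } catch (RuntimeException serviceUnavailable) {
            return fallback;  // other modules keep working
        }
    }
}
```

In practice, libraries such as Netflix Hystrix or Resilience4j provide this fallback pattern out of the box, along with circuit breaking and timeouts.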

Issues with Microservice Architectures

Even though a microservice architecture approach to software development provides many benefits, it does have some drawbacks in terms of reporting. For example, it can be a hassle to analyze test results, identify pass/fail ratios and trends, and understand the total execution time for a particular microservice regression suite.

Let’s consider the below sample microservice architecture for Netflix, where ‘n’ services are running. In order to maintain a stable automation pipeline, you must obtain data that answers the below questions:

  • Which services have the longest execution times?
  • Which services have the most failures?
  • What are the trends in service execution times? Are they going up or down?
  • For the services with the most failures, how do I drill down and check results on a per-scenario basis?
  • Can I see a list of scenarios that have been failing for a long time, i.e., whose failure “age” is high?
  • Can I get all the details of the service that has the latest build installed?

Fig. 2: Netflix microservice architecture
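Answering questions like the first two above comes down to aggregating the raw run data per service. The sketch below is illustrative only; `RunResult` and `RunAnalyzer` are hypothetical names, and a real pipeline would read this data from the results database rather than build it in memory:

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical record of one automation run for a service build.
class RunResult {
    final String service;
    final long durationMs;
    final int failed;
    RunResult(String service, long durationMs, int failed) {
        this.service = service;
        this.durationMs = durationMs;
        this.failed = failed;
    }
}

class RunAnalyzer {
    // Which service's run took the longest?
    static String slowestService(List<RunResult> runs) {
        return runs.stream()
                   .max(Comparator.comparingLong(r -> r.durationMs))
                   .map(r -> r.service)
                   .orElse(null);
    }

    // Services ordered by failure count, most failures first.
    static List<String> byFailures(List<RunResult> runs) {
        return runs.stream()
                   .sorted(Comparator.comparingInt((RunResult r) -> r.failed).reversed())
                   .map(r -> r.service)
                   .collect(Collectors.toList());
    }
}
```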

Effective Microservice Management

We have found that one way to successfully manage the different requirements listed above is to integrate all the services into a single platform. For example, we recently developed a custom dashboard for a client that serves as a report generation tool and monitors more than 50 microservices (with the potential to be extended to 100+).

The main objective of this dashboard was to be a one-stop shop for all automation reporting, trends and monitoring. To create this dashboard, we used the following technologies:

  • Spring Boot
  • Spring Thymeleaf
  • Maven
  • Java 1.8
  • Couchbase DB (can be any database)
  • Jenkins client API
  • D3.js
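As a rough sketch of how these technologies fit together, a Maven build for such a dashboard might declare Spring Boot’s starter dependencies as below. This is an assumption about a typical setup, not the client project’s actual build file (versions omitted):

```xml
<dependencies>
    <!-- Embedded web server and Spring MVC for the dashboard endpoints -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Server-side HTML templates for the dashboard tabs -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
</dependencies>
```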

The dashboard was so successful that we are now implementing it in another project. Below are the different reports that we created to improve our automation health. 

Overall Microservices Tab

This tab answers most of the above queries. It displays the below data, including the historic (previous build) data:

  • Build data for all the microservices
  • Duration of each microservice’s suite
  • Total test case count, failed test case count, etc.

Fig. 3: Overall Microservices Tab

Execution Time Analysis Tab

This tab is a graphical representation of the above data that displays the trends in your microservice automation health. We can filter down based on environment and type of run (e.g., smoke, regression).

Fig. 4: Execution Time Analysis Tab
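A simple way to turn raw suite durations into the up/down trend signal shown on this tab is to compare the latest run against the average of the preceding ones. This is only a sketch; `TrendCheck` is a hypothetical name, and the real dashboard renders its trends graphically with D3.js:

```java
import java.util.*;

class TrendCheck {
    // Simple up/down signal: compare the latest duration against the
    // average of all previous runs.
    static String trend(List<Long> durationsMs) {
        if (durationsMs.size() < 2) return "flat";
        long latest = durationsMs.get(durationsMs.size() - 1);
        double avgPrev = durationsMs.subList(0, durationsMs.size() - 1)
                                    .stream()
                                    .mapToLong(Long::longValue)
                                    .average()
                                    .orElse(latest);
        if (latest > avgPrev) return "up";
        if (latest < avgPrev) return "down";
        return "flat";
    }
}
```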

Failure Analysis Tab

This is one of my favorite reports. It shows two important parameters (“age” and “failed since”) so we can easily drill down to the scenarios that have been failing over a long period of time. This report ultimately helps us improve our smoke suite (if it’s an application issue) or the quality of our automation test cases (if it’s an automation issue).

Fig. 5: Scenario Failure-Analysis Tab
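Both parameters can be derived from each scenario’s per-build pass/fail history. Below is a minimal sketch, assuming results are ordered newest first; `FailureAge` and its method names are hypothetical:

```java
import java.util.*;

class FailureAge {
    // "Age" = how many consecutive builds (newest first) a scenario
    // has been failing. results: true = passed, false = failed,
    // index 0 = most recent build.
    static int age(List<Boolean> results) {
        int age = 0;
        for (boolean passed : results) {
            if (passed) break;
            age++;
        }
        return age;
    }

    // "Failed since" = the build number where the current failing
    // streak began, or -1 if the scenario is currently passing.
    static int failedSince(List<Integer> buildNumbers, List<Boolean> results) {
        int a = age(results);
        return a == 0 ? -1 : buildNumbers.get(a - 1);
    }
}
```

A scenario with a high age that keeps climbing is exactly the kind the report surfaces for triage.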

Summary Tab

This tab is useful for managers who want the latest consolidated report on the most recent runs of all microservices.

Repo-Analysis Tab

Larger, distributed teams where people work in different branches can find QA especially challenging. For example, developers might merge their code into an interim branch during intermediate runs but forget to merge it into the master branch. This oversight can create issues during deployments, as substantial differences accumulate between individual developer branches and master. To resolve this issue, we developed a matrix that shows the difference between the commits of these various branches and raises an alert when needed. An auto-scheduled job triggers every hour and updates the database with the latest data.

Fig. 6: Repo Commit-Diff
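At its core, the commit-diff matrix counts commits that exist on master but not on a developer branch, and alerts when that count crosses a threshold. Below is a minimal sketch with a hypothetical `BranchDrift` class; a real implementation would fetch commit SHAs via a Git library or the Jenkins client API, and run hourly via a scheduler such as Spring’s `@Scheduled` or a `ScheduledExecutorService`:

```java
import java.util.*;

class BranchDrift {
    // Number of commits on master that the branch has not merged.
    static int commitsBehind(List<String> masterCommits, List<String> branchCommits) {
        Set<String> merged = new HashSet<>(branchCommits);
        int behind = 0;
        for (String sha : masterCommits) {
            if (!merged.contains(sha)) behind++;
        }
        return behind;
    }

    // Raise an alert once the branch has drifted past a threshold.
    static boolean shouldAlert(int behind, int threshold) {
        return behind >= threshold;
    }
}
```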


For my team at GlobalLogic, consolidating all the various requirements and reports of a system into a single dashboard has been extremely effective in managing microservices. Although the specific Docker files for this particular dashboard are proprietary to GlobalLogic, I encourage you to use the information I’ve shared with you today to create your own microservices dashboard.





Rohit Sehgal

Consultant, Quality Assurance
