How Microservices Help Continuous Delivery in the SDLC: Part 1

Here at Armory, we believe that realizing the value of microservices is a critical component of continuous delivery. A lot of companies are moving from a monolith to a microservices architecture, but they often end up duplicating effort in their systems, which wastes resources and leaves the promise of microservices unfulfilled.

The value of microservices lies in the ability to add a feature to one component without affecting another. The assumption is that this reduces the cost per feature across the software development life cycle (SDLC).

Every organization moving toward this type of architecture has a different understanding of microservices and their best practices. Individual microservices are also constantly changing, which makes the transition hard to navigate. Make no mistake: the architectural shift is huge.

This two-part article shares some of the lessons we’ve learned while helping our customers through this journey. Part One covers getting started with microservices, tooling, and determining service boundaries. Part Two will cover security, testing, teams, and the biggest challenges we’ve faced.

Getting Started

Step one is to break the system’s functionality into groups. The grouping process needs to involve the people who contribute to the application, including the team leads.

Re-architecting an application creates inevitable tension, but it becomes less controversial over time as the influential people who work with the code start seeing the benefits of microservices. It’s important to begin by discussing how the different components will be split.

Step two is to create a plan. Breaking up a monolith into microservices doesn’t happen overnight. Start with the easiest, non-controversial components and leave the more contested pieces for later.

Next is executing the plan. We deploy the new service as quickly and as iteratively as possible to show success. Demonstrating success to the non-engineering organization is critical, because supporting this change means trading some product and feature work for additional stability and flexibility.

We commonly measure steps to deploy, SLA/SLO, and the number of deployments to production as KPIs to demonstrate value outside of the engineering organization.
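As a rough illustration of the last KPI, here is a minimal sketch (in Python) of tallying production deployments per week; the `deploys.csv` log and its `timestamp,service,environment` columns are hypothetical, not a specific tool’s output:

```python
import csv
from collections import Counter
from datetime import datetime

# Hypothetical log format: one "timestamp,service,environment" row per deployment.
def deployments_per_week(path="deploys.csv"):
    counts = Counter()
    with open(path, newline="") as f:
        reader = csv.DictReader(f, fieldnames=["timestamp", "service", "environment"])
        for row in reader:
            if row["environment"] != "production":
                continue
            ts = datetime.fromisoformat(row["timestamp"])
            counts[ts.isocalendar()[:2]] += 1  # key by (ISO year, ISO week)
    return counts

if __name__ == "__main__":
    for week, count in sorted(deployments_per_week().items()):
        print(week, count)
```

Whatever the source of record is, the point is the same: a simple, repeatable number that non-engineering stakeholders can watch trend upward.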

Invest in Tooling

In almost all cases, engineering teams operate with autonomy. At the same time, they need a “paved road”: a supported set of tooling to guide them. Deciding to invest in tooling instead of adding features is a common change companies make, but it takes a disciplined approach and forethought to make that tooling reusable.

As a general rule, the smaller the microservice, the better. Containerizing microservices internalizes application-specific details and keeps the server environment consistent across hosts.

Build tooling, monitoring, alerting, and so on should be put into a common set of tools to reduce the overhead of development and onboarding. More importantly, it operationalizes microservices in production. If you find your microservices are still quite large, it is likely due to a lack of tooling.

The chosen tool set is specific to each company. Usually, a company will choose a set of JVM languages and one scripting language (Python/Ruby/JavaScript) for some flexibility in coding styles. There is no strict requirement to use these languages, but they are the ones we’ve found work at scale and have the best tooling support.
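To make the “paved road” idea concrete, here is a hypothetical sketch of a tiny shared library that every service imports for consistent logging and timing, so individual teams don’t rebuild the basics; the module and function names are illustrative only:

```python
# paved_road.py -- hypothetical shared library imported by every service
import json
import logging
import time

def get_logger(service_name):
    """Emit logs in one consistent, JSON-ish format across all services."""
    logger = logging.getLogger(service_name)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        json.dumps({"service": service_name, "level": "%(levelname)s",
                    "time": "%(asctime)s", "message": "%(message)s"})))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

def timed(logger):
    """Decorator that reports handler latency the same way in every service."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                logger.info("%s took %.3fs", fn.__name__, time.monotonic() - start)
        return inner
    return wrap
```

The specifics matter less than the fact that there is exactly one supported way to do these things, maintained in one place.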

Redundant Code Will Slow You Down

Some functionality and code becomes redundant across microservices, often in unexpected ways. For example, every service that communicates with the user-registration service needs to deserialize the user object; that’s a straightforward case. When two microservices are responsible for the same database schema, it creates a new problem. This issue was deceptive at first, since one service handled reads and the other handled writes.
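A shared client library is the usual answer to the straightforward case. As a minimal sketch (the module, endpoint, and field names here are hypothetical), each consuming service imports one module that owns deserialization of the user object instead of re-implementing it:

```python
# user_client.py -- hypothetical shared module owned by the user-registration team
from dataclasses import dataclass

import requests  # assumes the service exposes a simple HTTP/JSON endpoint

@dataclass
class User:
    id: str
    email: str
    display_name: str

def fetch_user(base_url, user_id):
    """Deserialize the user object in exactly one place, shared by all consumers."""
    payload = requests.get(f"{base_url}/users/{user_id}", timeout=5).json()
    return User(id=payload["id"], email=payload["email"],
                display_name=payload["display_name"])
```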

Redundant code slows you down when a deployment for one service requires a deployment for another. It adds overhead and defeats the purpose of microservices.

Service Boundaries

At first, conversations about where groupings of functionality will live seem obvious, but as time goes on, deciding where new business logic should live becomes more difficult. Should it go in its own service, or should some logic be duplicated across two services?

For example, one service maintains user events and then ships them to a log. We introduce functionality that processes those events and makes runtime decisions. Should that decision logic live with the event engine itself, or should we create a whole new service to handle this business logic?

Ultimately, our decision was to include it in the same service; if the product team requested additional functionality over time, we could move it later. These decisions are necessary any time new functionality or a new application is added, which reiterates why the first step is so important: getting the team to work together on known functionality will help them make these important decisions.
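In spirit, “include it in the same service” looked something like the sketch below (the event types and decision rule are invented for illustration): the new decision logic rides alongside the existing event pipeline rather than becoming a second deployable.

```python
# event_service.py -- hypothetical: event shipping and runtime decisions in one service
def ship_to_log(event, log):
    log.append(event)  # existing responsibility: persist the user event

def decide(event):
    """New business logic lives next to the event engine, not in a new service."""
    # Illustrative runtime decision: flag repeated failed logins for follow-up.
    if event.get("type") == "login" and event.get("failed_attempts", 0) > 3:
        return "require_mfa"
    return "allow"

def handle(event, log):
    ship_to_log(event, log)
    return decide(event)

if __name__ == "__main__":
    log = []
    print(handle({"type": "login", "failed_attempts": 5}, log))  # require_mfa
```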

Determining Service Boundaries

This is critical: how you want data to be accessed determines these boundaries.

One customer maintains significant user data, but some of it is more sensitive than the rest (by law). Confidential data like names and contact information has to be split out into separate services so the data can be protected and managed by one source.

Deciding where the data needs to be stored and how it will be accessed is the basis for all of these decisions.
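Here is a hedged sketch of what that split can look like (service names and endpoints are hypothetical): non-sensitive profile data stays in one service, while names and contact information live only behind a dedicated PII service that enforces access control.

```python
# profile_service.py -- hypothetical sketch: PII is fetched from its single owner,
# never stored alongside non-sensitive profile data.
import requests

PII_SERVICE_URL = "https://pii.internal.example.com"  # assumed internal endpoint

def get_profile(user_id, auth_token):
    profile = {"user_id": user_id, "preferences": load_preferences(user_id)}
    # Contact info is managed by exactly one service, which checks authorization.
    contact = requests.get(
        f"{PII_SERVICE_URL}/contact/{user_id}",
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=5,
    )
    if contact.ok:
        profile["contact"] = contact.json()
    return profile

def load_preferences(user_id):
    return {}  # placeholder for this service's own, non-sensitive storage
```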

The important thing for data consistency is that you don’t have multiple services managing the same data model. We had an issue where one microservice controlled the schema version and wrote to the tables while another microservice read directly from the database. As a result, deploying one service required us to deploy the other. The coupling of these two “microservices” defeats the purpose of the architecture.
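The fix is to give one service sole ownership of the schema and have everyone else go through its API. A minimal sketch of the “after” state (names and endpoints hypothetical): the reading service no longer touches the tables, so schema changes stay a private concern of the owner.

```python
# Before: the reader queried the user tables directly, coupling its deploys to
# the schema owner's deploys, e.g.
#   rows = db.execute("SELECT id, email FROM users WHERE created_at > ?", since)
# After: the reader calls the owner's API and never sees the schema.
import requests

USER_SERVICE_URL = "https://users.internal.example.com"  # assumed internal endpoint

def recent_users(created_after_iso):
    resp = requests.get(
        f"{USER_SERVICE_URL}/users",
        params={"created_after": created_after_iso},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # schema changes behind the API no longer force a redeploy here
```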

In Part Two, we’ll talk about the biggest challenges we’ve faced, security, testing, and changes you’ll need to make to your teams.