Microservices architecture versioning on periodic releases - continuous-integration

I'm trying to wrap my head around best practices for managing versioning in a microservices-based architecture with periodic releases.
Currently our system is decomposed into multiple repositories:
Frontend
Backend
Database
API gateway
Docker-compose-env
Each of these components must be developed, built, tested, containerized and deployed independently. But the release cycles are synchronized and periodic. The docker-compose-env project contains the environment definition to start all compatible service versions for development and integration-testing purposes.
The current versioning strategy is as follows:
Each commit to the master branch is tagged with a semantic version and pushed to the Docker registry (semantic tags are used to track dependencies during the development cycle)
Each merge commit to the persistent release branch is tagged with a release tag and pushed to the Docker registry (release tags are used to synchronize project versions for the quarterly release)
master is the trunk, and the periodic release build is initiated by a PR from master to release.
I'm skeptical that this is the best way to manage versions in a microservices-based architecture with periodic releases. Any feedback or tips are appreciated.

Each of these components must be developed, built, tested, containerized and deployed independently. But the release cycles are synchronized and periodic.
There is a contradiction here. Microservices mostly solve an organizational problem: the main point is that teams should be able to work as independently as possible.
Synchronization between teams is what makes them slow. This can happen in different ways, e.g. waiting for another version to be deployed in a shared test environment, using the same shared database schema, or making releases at the same time.
I'm skeptical that this is the best way to manage versions in a microservices-based architecture with periodic releases.
Try to avoid "synchronized releases"; instead, make sure not to break any contracts between the services (e.g. no breaking API changes). Try to release more often: you want to work in small batches to reduce the risk of deployments and changes. Try not to pile up a bunch of changes - deploy continuously: Continuous Delivery.

Release cycles are synchronized
I think the fact that you need to do a synchronized release of all services at the same time could be an indicator that the coupling between your services is higher than it should be, and that the way you are managing it can probably be improved.
The question is: how can you organize development teams working on different micro-services so that when they introduce changes, they do not break each other's micro-services?
Versioning and managing changes
There are 2 aspects which are important for this to work:
Versioning, and how you implement and work with it.
Team communication: communication between teams when introducing breaking changes.
What do I mean by this?
First, about versioning. Your micro-services communicate with each other. Regardless of whether that communication is sync or async, using REST (or SOAP, gRPC or something else) or messaging (queues), they need to rely on some Contracts. Those Contracts will be API Contracts (in terms of Java/C# classes/interfaces). They need to be stable, as they can be used by other micro-services.
Suggestion: I would suggest versioning each micro-service independently from the versioning of its Contracts.
Example:
Micro-service order-micro-service could be at latest version v1.0.0 and Contracts order-micro-service-contracts at version v1.0.0 as well.
Micro-service customer-micro-service could be at latest version v3.0.0 but Contracts customer-micro-service-contracts could be at version v2.2.1.
Micro-service product-micro-service could be at latest version v3.0.0 and Contracts product-micro-service-contracts could be at version v4.0.0.
As you can see from the example above, the version of the micro-service and the version of its exposed Contracts can be the same, but they can also differ. The reason is simply that you can make changes to the micro-service (some internal business-logic change) without changing the Contracts, and you can also change the Contracts without changing the micro-service logic. Usually changes happen to both at the same time: you update some API business logic and adjust the exposed Contract accordingly. But sometimes a MAJOR change in the micro-service logic is not necessarily a breaking or MAJOR change to the Contracts. As you can see, this gives you great flexibility.
The benefit is not only flexibility but also the fact that micro-service-A will then depend only on micro-service-B-contracts and not on micro-service-B itself. This is just a suggestion; you can also use one version for a micro-service and its exposed Contracts.
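To make this concrete, here is a minimal sketch (all names hypothetical) of a Contracts artifact published separately from the service that implements it; a consumer then depends only on the contracts jar, never on the service's own code:

```java
// Hypothetical contracts artifact: built and published on its own,
// e.g. as order-micro-service-contracts v1.0.0, versioned
// independently of the order-micro-service implementation.
package com.example.orders.contracts;

public class OrderCreated {
    private String orderId;
    private String customerId;

    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }

    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }
}
```

A consumer such as customer-micro-service would declare a build dependency on order-micro-service-contracts only, so a purely internal change to order-micro-service never forces it to rebuild or redeploy.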
Now about team communication. By this I mean an organization where multiple teams work on different areas of the system, and each team is responsible for one or more micro-services from a particular Domain.
If you are using Semantic Versioning, i.e. MAJOR.MINOR.PATCH, for example v1.3.5, then you can do it in the following way.
There are a couple of things which are important to consider:
Contracts PATCH/MINOR change
A change which is a PATCH or MINOR version upgrade should not be a breaking change, and should always be backwards compatible for the consumers who use those Contracts. This means you should ensure that upgrading from version 1.3.0 to 1.4.0 is not a problem for a consumer, regardless of whether it upgrades to 1.4.0 or stays on 1.3.0 a little longer. Ideally all consumers should update to the latest version, but even if they don't for some period, they will not be broken by the change. Examples of changes warranting this kind of upgrade: adding a new Contract model, adding new optional (non-mandatory) fields to an existing model, or increasing the accepted string length of a field from 20 to 50, and similar.
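As an illustration (hypothetical model, and assuming a serialization format that tolerates unknown or missing fields, as JSON typically does), a MINOR upgrade that only adds optional data leaves consumers on the old version unaffected:

```java
package com.example.customers.contracts;

// v1.3.0 of this Contract model looked like:
//
//   public class CustomerModel {
//       private String id;
//       private String name;   // accepted length up to 20
//   }

// v1.4.0: a backwards-compatible MINOR upgrade. Consumers still on
// v1.3.0 are unaffected: the new field is optional (may be null) and
// the relaxed length limit accepts every previously valid value.
public class CustomerModel {
    private String id;
    private String name;      // accepted length raised from 20 to 50
    private String nickname;  // new, optional field

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getNickname() { return nickname; }
    public void setNickname(String nickname) { this.nickname = nickname; }
}
```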
Contracts MAJOR or breaking change
This is usually a big change, which can also be a breaking change. If it is breaking, we need some team process in place: the teams who use those Contracts need to be notified upfront that the change will happen, and even when releasing the new version, a bridging period of a couple of weeks (or sprints) should be ensured during which both versions of the Contracts (old and new) work. This gives the affected teams/micro-services enough time to upgrade and adjust their services. After that, the backwards-compatibility Contracts/code can be deprecated.
Sometimes the solution for a breaking Contract change is to introduce a completely new version of the Model (class) rather than making a hard change to the same Model. For example, you could have a CustomerModel class, introduce CustomerModelV2, and remove the old CustomerModel class after some period. This is a common situation when you have a Contract Model for an Event (a message from a queue) like CustomerCreated: you can have CustomerCreated and CustomerCreatedV2, publish both messages for a particular time period until the consumers adapt, and then deprecate the CustomerCreated event (stop publishing it and remove the Contract model). This depends on your particular business logic or case.
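A minimal sketch of such a bridging period (hypothetical event classes and publisher abstraction; the queue technology is deliberately left out):

```java
// Old contract model, kept alive during the bridging period.
class CustomerCreated {
    String customerId;
    String fullName; // v1 crammed first and last name into one field
}

// New, breaking contract model, published alongside the old one.
class CustomerCreatedV2 {
    String customerId;
    String firstName;
    String lastName;
}

// Hypothetical publisher abstraction over your queue/topic.
interface EventPublisher {
    void publish(Object event);
}

class CustomerService {
    private final EventPublisher publisher;

    CustomerService(EventPublisher publisher) {
        this.publisher = publisher;
    }

    void createCustomer(String id, String firstName, String lastName) {
        // ... persist the customer ...

        // Bridging period: publish both event versions so consuming
        // teams can migrate at their own pace.
        CustomerCreated v1 = new CustomerCreated();
        v1.customerId = id;
        v1.fullName = firstName + " " + lastName;
        publisher.publish(v1);

        CustomerCreatedV2 v2 = new CustomerCreatedV2();
        v2.customerId = id;
        v2.firstName = firstName;
        v2.lastName = lastName;
        publisher.publish(v2);

        // Once every consumer is on V2: stop publishing v1 and
        // remove the CustomerCreated model from the Contracts.
    }
}
```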
Micro-service changes
Regardless of whether the change is just a bug fix, a small change or a big change in the service, if its versioning is separate from the Contracts it should not affect the other micro-services, at least not from the contract-management perspective. Doing version updates on the micro-service alone gives you the possibility to deploy it independently.
Independent and separate deployments of micro-services
If you apply the advice above, you will come closer to the situation where you can deploy micro-services independently, without synchronized periods where all services have to be deployed at once.
One of the biggest advantages of using micro-services is being able to deploy them independently of other parts of the system, so if you have a chance to do that, you should go for it.
Each of these components must be developed, built, tested, containerized and deployed independently. But the release cycles are synchronized and periodic.
Since you already develop, build and test independently, you could also release independently.
I know that all these suggested changes are not only technical but also organizational: team communication, team setup and so on. But when working on a big system with micro-services, it is usually a compromise between those two worlds, trying to find the best process and solution for your organization and business.

Related

Stateful workflow engine vs Orchestrated idempotent services

I realize the benefits of a workflow engine, such as easy-to-understand communication, easy waiting, parallelism, and compensating actions with an informative graphical model. The concept is great and more manageable than a dogmatic event-driven architecture with no central coordinator and no specified flow.
We are currently using a legacy workflow engine to orchestrate microservices in the insurance business. Over time, chunks of business logic and little helper scripts have crept into the process model, which is not a developer-friendly solution to maintain and test to continuous-integration standards. The lack of available expertise and future support is also a huge risk from the project-management perspective.
I played around with Camunda and Activiti, but immediately faced compatibility issues with Spring Boot 3, and a lack of up-to-date examples and general knowledge outside of a relatively small user community. This gives me a bad feeling of drowning in the same swamp we are in now.
We planned to design our own Java-based orchestrator, which just invokes specified microservices in a specified order when the process is started or a user task is completed. The orchestrator will also handle monitoring and versioning of the process flow. It's up to the microservices to validate their business context and halt the process by raising user tasks if necessary. When a user task is completed, the orchestrator restarts the whole process from the beginning with all tasks cleared. It is the responsibility of the microservices to no-op when their work was already done in a previous run. Eventually, the process reaches its end and finishes. This solution would be a good balance of modern DX and coordinated process management.
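To illustrate the no-op requirement, a minimal sketch (the repository interface and names are hypothetical) of what each microservice step would do on every re-run:

```java
import java.util.Optional;

// Hypothetical store in which the service records completed work.
interface PolicyRepository {
    Optional<String> findPolicyByApplicationId(String applicationId);
    String createPolicy(String applicationId);
}

// A step the orchestrator invokes on every (re)start of the process.
// Because the whole process is re-run from the beginning after each
// user task, the step must detect work it already did and no-op.
class IssuePolicyStep {
    private final PolicyRepository repository;

    IssuePolicyStep(PolicyRepository repository) {
        this.repository = repository;
    }

    String execute(String applicationId) {
        // Idempotency check: if a previous run already issued the
        // policy, return the existing result instead of redoing work.
        return repository.findPolicyByApplicationId(applicationId)
                .orElseGet(() -> repository.createPolicy(applicationId));
    }
}
```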
Are there examples of, or a name for, such an idempotent orchestrated architecture?
You only get into the challenge of aligning dependencies between your services and the process engine (and other components) if you tightly couple the orchestration/engine with the services. That happened to me many times in the past, too. If you separate the engine (called a remote process engine in Camunda 7, and the only architecture in Camunda 8), then you are not influenced by its dependencies. Try, for instance, the Camunda RUN distribution with the external task pattern, or C8 SaaS, to get to a cleaner, decoupled architecture. See Bernd Ruecker's reasoning here.
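For illustration, a minimal sketch of the external task pattern with the Camunda 7 Java client (the engine URL and topic name are placeholders; assumes the camunda-external-task-client dependency). The worker polls the remote engine over REST, so your service shares no classpath with the engine:

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class RiskCheckWorker {
    public static void main(String[] args) {
        // Connects to a remote engine (e.g. Camunda RUN) over REST.
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(10000) // long polling
                .build();

        // Subscribe to a topic defined in the BPMN model; the business
        // logic stays in your service, not in the engine.
        client.subscribe("check-risk")
                .lockDuration(20000)
                .handler((externalTask, externalTaskService) -> {
                    // ... call your microservice's business logic ...
                    externalTaskService.complete(externalTask);
                })
                .open();
    }
}
```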
Details will depend on your specific requirements, but I would definitely advise anyone against building a homegrown solution. There are enough options on the market, and those times are over. Requirements grow over time, there are security vulnerabilities to be aware of and to fix, and so on: high maintenance, no market for resources, no synergies; you would need to maintain proprietary knowledge in the company and could not achieve the same level of quality and feature richness as a more broadly used solution. For a list of options see, for instance, Bernd Ruecker's articles. Among the available options I would personally prefer an orchestrator which uses a graphical process-modelling approach based on the BPMN 2 standard. It helps clarity, knowledge transfer, and Business-IT alignment, and the standard is a vendor-independent skill set.
There is no need to build your own. Use the temporal.io open source project. Besides the Java SDK, it supports Go, TypeScript/JavaScript, Python and PHP.
The project started at Uber in 2016, and hundreds of companies use it for mission-critical applications.

How do you release Microservices?

The question is tied more to CI/CD practices and infrastructure. In the release process we follow, we club a set of microservice Docker image tags together as a single release, run the CI/CD pipeline, and promote that version.yaml to staging and production - a sort of mono-release pattern. The problem with this is that at some point we need to serialize: other changes have to wait until a mono-release is tested and tagged as ready for the next stage. A little more description regarding this here.
An alternative would be the micro-release strategy, where each microservice is released in parallel through to production via the CI/CD pipeline. But would this mean that there are as many pipelines as there are microservices? Another option could be a single pipeline, but with parallel test cases and a polling CD - sort of the GitOps way, which picks up the latest production-tagged Docker images.
There seems to be precious little information regarding the way microservices are released. Most talk is about interface-level or API-level versioning and releasing, which is not really what I am after.
Assuming your organization is developing services in a microservices architecture and deploying to a Kubernetes cluster, you must use some CD (continuous delivery) tool to release new microservices, or even to update one.
Take a look at tools like Jenkins (https://www.jenkins.io) or DroneIO (https://drone.io). Some organizations use Python scripts, or Go, and so on. Personally, I do not like that approach; I think the best solution is to pick a tool from the CNCF Landscape (https://landscape.cncf.io/zoom=150) in the Continuous Integration & Delivery group - these are tools tested and used in the market.
An alternative would be the micro-release strategy, where each microservice is released in parallel through to production via the CI/CD pipeline. But would this mean that there are as many pipelines as there are microservices?
It's OK - in some tools you have parameterized pipelines that build projects based on received parameters - but I think the best solution is to have one pipeline per service, plus some parameterized pipelines to deploy, apply specific tests, archive assets and so on, like the micro-release strategy you describe.
Agreed, there is little information about this out there. From all I understand, the approach of keeping one pipeline per service sounds reasonable. With a growing number of microservices you will run into several problems:
how do you keep track of changes in the configuration
how do you test your services efficiently with regression and integration tests
how do you efficiently setup environments
The key here is most probably to make better use of parameterized environment variables, which you then version in an efficient manner; that is what lets you keep track of changes. To achieve this, make sure to a.) strictly parameterize all variables in the container configs and the code, and b.) organize the config variables in a way that allows you to inject them at runtime.
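As a small, hypothetical illustration of point a.) (the variable names are made up), the service reads every environment-specific value from injected variables instead of baking them into the image:

```java
// Minimal sketch: all environment-specific values are injected at
// runtime (by docker-compose, Helm, or your CD tooling) rather than
// hard-coded. Variable names here are hypothetical.
public class ServiceConfig {
    final String dbUrl;
    final String redisHost;
    final int httpPort;

    ServiceConfig() {
        this.dbUrl = require("DB_URL");
        this.redisHost = require("REDIS_HOST");
        // Optional value with a sensible default.
        this.httpPort = Integer.parseInt(
                System.getenv().getOrDefault("HTTP_PORT", "8080"));
    }

    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing env var: " + name);
        }
        return value;
    }
}
```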
As for point b.), this is slightly more tricky. As it looks, you are using Kubernetes, so you might just want to pick something like Helm charts. The question is how you structure your config files, and you have two options:
Use something like Kustomize, a configuration-management tool that allows you to version to a certain degree, following a GitOps approach. This comes (in my biased opinion) with a good number of flaws: Git is ultimately not meant for configuration management; it's hard to follow changes, build diffs, and identify the relevant history when you handle that number of services.
Use a Continuous Delivery API (I work for one, so make sure you question this sufficiently). CD APIs connect to all your systems: CI pipelines, clusters, image registries, external resources (DBs, file storage), internal resources (Elasticsearch, Redis), etc. They dynamically inject environment variables at runtime and create the manifests with each deployment, caching these as so-called "deployment sets". Deployment sets are the representation of the state of an environment at deployment time. This approach has several advantages: it allows you to share, version, diff and relaunch any state that any service and application were in at any given point in time. It provides a very clear and bulletproof audit of everything in the setup. QA environments or test-feature environments can be spun up through the API or UI, allowing for fully featured regression and integration tests.

Multiple microservices in one repository

I have a question about microservices and repositories. We are a small team (5 people) creating a new project with microservices. The expected number of microservice applications in our project is between 10 and 15.
We are thinking about one repository for all microservices, with a structure like this:
-/
--/app1
--/app2
--/app3
-./script.sh
-./script.bat
What do you think about this design? Can you recommend something better? We think that a repository per app would be overkill for such a small project in one team. You can picture our applications as Spring Boot or SPA applications in Angular. Thank you in advance.
In general you can have all your micro-services in one repository, but I think that as the code grows for each of them, it can become difficult to manage.
Here are some things that you might want to consider before deciding to put all your micro-services in one repository:
Developer discipline:
Be careful with coupling of code. Since the code for all your micro-services is in one repository, you don't have a real physical boundary between them, so developers can just use code from another micro-service, for example by adding a reference. Having all micro-services in one repository will require some discipline and rules so that developers do not cross boundaries and misuse code.
Temptation to create and misuse shared code:
This is not a bad thing if you do it in a proper and structured way, but it leaves a lot of room for doing it the wrong way. If people just start using the same shared jar or similar, that can lead to a lot of problems. For something to be shared, it should be isolated and packaged, and ideally versioned with support for backwards compatibility; that way, when the library is updated, each micro-service still has working code against the previous version. It is still doable in the same repository, but as with the first point above, it requires planning and management.
Git considerations:
Managing a lot of pull requests and branches in one repository can be challenging and can lead to the situation "I am blocked by someone else". Also, as more people work on the project and commit to your source branch, you will have to rebase and/or merge the source branch into your development or feature branch much more often (even if you do not need the changes from other services). Email notifications configured for the repository can be very annoying, as you will receive emails about things which are not in your micro-service's code; you will need to create filters/rules in your email client to avoid the emails you are not interested in.
Growth: the number of micro-services may grow beyond your initial 10-15. If it doesn't, all fine. But if it does, at some point you may want to consider splitting each micro-service into a dedicated repository. Doing this at a later stage of the project can be challenging and will require some work, and in the worst case you will discover couplings that people introduced over time which you will have to resolve at that stage.
CI pipelines considerations:
If you use something like Jenkins to build, test and/or deploy your code, you could encounter some small configuration difficulties, like the integration between Jenkins and GitHub: you would need to configure a pipeline which only builds/tests a specific part of the code (one micro-service) when someone creates a merge/pull request against that micro-service. I never tried to do such a thing, but I guess you will have to figure out how to script and automate it, for instance as sketched below. It is doable, I guess, but will require some work.
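One conceivable way to script it, sketched below in plain Java on top of git (directory names are hypothetical and match the repository layout from the question): diff the merge request against the target branch and emit only the top-level service directories that contain changes, so the pipeline can build and test just those.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: print the top-level directories (micro-services) touched
// by the current change, for a pipeline to build/test selectively.
public class ChangedServices {
    public static void main(String[] args) throws Exception {
        String targetBranch = args.length > 0 ? args[0] : "origin/master";
        Process git = new ProcessBuilder(
                "git", "diff", "--name-only", targetBranch + "...HEAD")
                .start();

        Set<String> services = new LinkedHashSet<>();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(git.getInputStream()))) {
            String path;
            while ((path = out.readLine()) != null) {
                int slash = path.indexOf('/');
                if (slash > 0) {
                    services.add(path.substring(0, slash)); // e.g. "app1"
                }
            }
        }
        git.waitFor();
        services.forEach(System.out::println);
    }
}
```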
Conclusion
Still, all or most of these points can be resolved with some extra management and configuration, but it is worth knowing what additional effort you could encounter. I guess there are other points to take into consideration as well, but my general advice would be to use a separate repository for each micro-service if you can (private-repository pricing and similar constraints aside). This is a decision to be made project by project.

Tracking api/event changes between different microservice versions before deployment

I work in DevOps for a fairly large company that is in the process of transitioning to microservices. This is a new area for most people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to convince them otherwise.
The request is to generate a report before deploying that lists any new APIs/events (Kafka is our messaging service) in a microservice.
The path being recommended is for devs to follow a style guide, and then to scrape the source code during the CI/CD pipeline to generate a report that can be compared to previous reports to identify any new APIs.
This seems backwards and unsustainable, but I've been unable to find another solution that satisfies their request. I've recommended deploying to dev first, then using a tracing tool to identify any API changes or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practice to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as #zenwraight has mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment upfront, and if that's not the bread and butter of the company, I would rather use a vendor product that might support something like this.
If discovery is a mechanism that is needed, I would recommend something that publishes internal API docs using a tool like Swagger so that you can search if there's an API you can consume.
And finally, to support moving to different versions, I would recommend an API onboarding process for the services, so that teams can notify other teams using specific versions that their services are coming to the end of their lifecycle and that they will need to migrate to newer ones.
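To make the report comparison from the question concrete, here is a minimal sketch, assuming the scraping step writes one endpoint or event name per line into a text file per build; the diff against the previous release's report then lists everything new:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Sketch: compare the API/event inventory of the current build with
// the report from the previous release and print any additions.
public class ApiReportDiff {
    public static void main(String[] args) throws Exception {
        Set<String> previous = new HashSet<>(
                Files.readAllLines(Path.of("reports/previous.txt")));
        Set<String> current = new HashSet<>(
                Files.readAllLines(Path.of("reports/current.txt")));

        current.removeAll(previous);
        if (current.isEmpty()) {
            System.out.println("No new APIs or events.");
        } else {
            System.out.println("New since the last release:");
            current.forEach(line -> System.out.println("  " + line));
        }
    }
}
```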

Microservices Deployment

I have:
Microservice-A
Microservice-B
Microservice-C
Microservice-A calls Microservice-B and Microservice-C
When I deploy Microservice-A, I want to make sure that the other microservices it depends on have not changed since I last released it.
Is there a recommended way to do this?
I'm thinking:
when I deploy Microservice-A
Microservice-A makes calls to Microservice-B and Microservice-C
this call would fetch the endpoint specification for the endpoints it depends on and verify whether those endpoints have changed (in a way that would break Microservice-A) since the last release.
This should happen before the currently running Microservice-A is interrupted, just before the deployment procedure commences.
Sure, I could do testing, but that would be too late in my view. I'm looking for an automated way to verify this before deployment.
Has anyone done anything like this before? What tooling can be used for this?
The ideal solution is to never be in a position where you are deploying into an environment where you don't already know the versions of your dependencies. That way madness lies.
Avoiding this is a governance concern and so should be central to any service oriented approach to building software.
For instance, let's say you are developing version 2.0 of your service A. In your target environment, you have service B version 1.0 and service C version 1.0.
So, as the first step on the path to stress-free releases: as part of your development build, you should be running a set of nearest-neighbour automated tests which stub out B v1.0 and C v1.0 based on the service contracts (more on this later). This can be facilitated using test-double tools such as mountebank.
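mountebank drives its test doubles from JSON imposter definitions; as a language-neutral illustration of the same nearest-neighbour idea, here is a tiny sketch that stubs service B with the JDK's built-in HttpServer (endpoint and payload are hypothetical) and exercises a call against it:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: stand up a stub of service B v1.0 from its contract, then
// exercise service A's client code against the stub.
public class NearestNeighbourTest {
    public static void main(String[] args) throws Exception {
        HttpServer stubOfB = HttpServer.create(new InetSocketAddress(0), 0);
        stubOfB.createContext("/api/v1/accounts/42", exchange -> {
            byte[] body = "{\"id\":\"42\",\"status\":\"ACTIVE\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stubOfB.start();
        int port = stubOfB.getAddress().getPort();

        // In a real test this would be service A's own client class.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(
                        "http://localhost:" + port + "/api/v1/accounts/42")).build(),
                HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            throw new AssertionError("expected 200 from the stub");
        }
        System.out.println("Stubbed B replied: " + response.body());
        stubOfB.stop(0);
    }
}
```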
Then, just as you have created your v2.0 release branch, you learn that another team is about to release v1.1 of service C. It should always be possible to work out whether v1.0 to v1.1 constitutes a breaking change to the v1.0 contract for service C (more on this later).
If v1.1 is a breaking change, no problem, you update your tests with v1.1 of the service C contract and fix any failures. You are then good to create a new v2.0.1 patch branch and release. If for whatever reason you are forced to release before service C you can still release from the v2.0 branch.
If v1.1 is not a breaking change, no problem, just release off your existing branch.
There are various strategies for coping with the overhead produced by a centralised release management protocol such as described above.
As stated earlier, contracts for all dependent services should be used when testing your service. (Note: it's very important for the nearest-neighbour tests to be driven from contracts, rather than from existing code models such as DTOs defined in the service's unit tests.) Contracts for all services should be based on a standard (such as Swagger) which supplies a complete service description, and they should be very easy to find - the use of a service repository can simplify this.
As also stated earlier, it should always be possible to know whether new versions of dependent services have the potential to break your service. One strategy is to agree on a versioning convention which bestows meaning on each kind of version increment. For instance, you could use major.minor.patch (e.g. v1.0.0), where a change to the major version number constitutes a change to the service contract and therefore has the potential to break things. In our previous example, service C went from v1.0 to v1.1; with such a convention, we could be sure that the change would not break us, as the major version number was unchanged.
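With such a convention the check itself is trivial; a minimal sketch:

```java
// Sketch: under a convention where only MAJOR bumps may change the
// contract, an upgrade is potentially breaking iff the major part of
// a major.minor.patch version (e.g. "v1.3.5") differs.
public class SemverCheck {
    static int major(String version) {
        String v = version.startsWith("v") ? version.substring(1) : version;
        return Integer.parseInt(v.split("\\.")[0]);
    }

    static boolean potentiallyBreaking(String oldVersion, String newVersion) {
        return major(newVersion) != major(oldVersion);
    }

    public static void main(String[] args) {
        System.out.println(potentiallyBreaking("v1.0", "v1.1"));   // false
        System.out.println(potentiallyBreaking("v1.1", "v2.0.0")); // true
    }
}
```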
While it can be cumbersome to set up and maintain a centralised release management protocol, the benefit is that you always have full confidence that by deploying your service, nothing will break. What's more, this avoids having any complex (and to my mind, contrived) runtime dependency resolution, such as you are proposing in your original question.
Each microservice could provide a version number, either via an API or by writing it to a public file / shared DB. Each microservice would then contain the expected version numbers of all its dependencies and check, before startup, that the version numbers in the file / database match.
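A minimal sketch of that startup check (the /version endpoints and expected values are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// Sketch: before serving traffic, Microservice-A asks each dependency
// for its version and refuses to start on a mismatch.
public class StartupVersionCheck {
    static final Map<String, String> EXPECTED = Map.of(
            "http://microservice-b/version", "2.1.0",
            "http://microservice-c/version", "1.4.2");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (var entry : EXPECTED.entrySet()) {
            String actual = client.send(
                    HttpRequest.newBuilder(URI.create(entry.getKey())).build(),
                    HttpResponse.BodyHandlers.ofString()).body().trim();
            if (!actual.equals(entry.getValue())) {
                throw new IllegalStateException("Dependency " + entry.getKey()
                        + " is at version " + actual + ", expected "
                        + entry.getValue());
            }
        }
        System.out.println("All dependency versions match; starting up.");
    }
}
```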
