Architecture overview
Consider the following simplified microservice architecture:
Service A - Portal
Service B - API
Service A depends on Service B.
Problem statement
If I want to build a new backward-compatible feature in service A that requires changes to service B, then by the definition of semantic versioning I'll have to create a new minor version of both services. However, service A requires the new minor version of service B to be deployed. How can I effectively manage this dependency? Do I need to create a new major version of service A to signal the changed dependency? I want to avoid a situation where service A is deployed while service B hasn't been deployed yet...
So basically: how should I version changes which are non-breaking (i.e. minor) in the components themselves, but will break the overall application if the versions don't match?
You don't need to break the existing API contracts. Let's call your existing API (service B) myapi/v1/products. You won't change anything in the existing API or this endpoint at all. You will create a new version, call it myapi/v2/products, and deploy it. It's your choice where and how you host that endpoint. This endpoint has all the latest changes you want. Your portal is not affected yet.
Now you deploy the portal: it uses myapi/v1/ for features that require backward compatibility and myapi/v2/ for new features. This way you can manage API versioning without breaking features.
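To make this concrete, here is a minimal sketch of what the two endpoints in service B could look like side by side, assuming Spring MVC; the controller and DTO names (ProductControllerV1, ProductV1, ...) are hypothetical:

    import java.util.List;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Existing contract: left untouched, so current portal features keep working.
    @RestController
    @RequestMapping("/myapi/v1/products")
    class ProductControllerV1 {
        record ProductV1(String id, String name) {}

        @GetMapping
        List<ProductV1> list() {
            return List.of(new ProductV1("1", "widget"));
        }
    }

    // New contract: only the portal's new feature calls this endpoint.
    @RestController
    @RequestMapping("/myapi/v2/products")
    class ProductControllerV2 {
        record ProductV2(String id, String name, String category) {}

        @GetMapping
        List<ProductV2> list() {
            return List.of(new ProductV2("1", "widget", "tools"));
        }
    }

Because v2 is purely additive, service B can be deployed first and the portal rolled out whenever it is ready; there is no window in which service A points at an endpoint that does not exist yet.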
Hope that helps!
Related
Need some best-practice recommendations for a classic requirement around modularising a Spring Boot application based on layers.
Some background info:
Small to medium size Spring Boot project with fewer than 10 developers
2 different Spring Boot applications with a shared Service and Repository layer, and also shared models
A bit too late to go with a microservice approach with a full Model/Controller/Service/Repo per API.
Currently there is just one web application exposing the APIs for a frontend application.
The requirement is to add a new set of APIs used for B2B integration, so the request/response formats will be quite different from the already available APIs; i.e. /webapi/v1/orders for the frontend client and /b2b/v1/orders will need to return different response formats.
The Service and Repository layers, along with the models, need to be shared between the 2 applications, so 3 modules were identified, similar to how it's explained in https://stackoverflow.com/a/50352532/907032:
-- Main app
-- webapi (depends on common, jar packaging)
-- b2b (depends on common, jar packaging)
-- common (jar packaging)
The two applications need to be deployed separately, and also kept separate from a CI/CD perspective so that not all the submodules are built every time (a change to the b2b controller should not affect common/webapi).
A change to the common module which is only required by the latest b2b module should preferably not trigger a build and deploy of webapi; i.e. webapi uses common-1.01 and the b2b module uses common-1.02. Understood, the new version common-1.02 should not break any feature from common-1.01; I'm just trying to save an unnecessary build & deploy of that module until required, if that makes sense.
The challenge
Should the modules be defined in the same repo or in 3 different repos?
All the talk about mono vs. multi repo is about whether or not to keep different projects in the same repo, but here, as you can see, these are modules which are closely related to each other.
If we define these as submodules in the same repo, how is versioning of the common module handled? If it always triggers a build of all three submodules, do we even gain any advantage from modularising the code?
As per your description, the module named "common" is not that common to the other two. I'd go with the multi-module way, like so:
First, break that common module into three: common, utils-webapi, utils-b2b.
The first will strictly contain the things both webapi and b2b need at the same version. utils-webapi will be dedicated strictly to the things webapi needs; same goes for utils-b2b.
b2b depends on utils-b2b, which depends on common. webapi depends on utils-webapi, which depends on common.
Versioning of the common module is always consistent; only the utils-X module version changes from the X module's perspective (see the sketch below).
CI is thus independent for each build.
Note: you can go further, keep only utils-webapi and utils-b2b, and get rid of common, at the cost of some duplicated code.
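For illustration, a minimal sketch of what this could look like on the Maven side, assuming independently released modules; the groupId and the version numbers are made up:

    <!-- parent pom.xml: one repo, five modules -->
    <modules>
        <module>common</module>
        <module>utils-webapi</module>
        <module>utils-b2b</module>
        <module>webapi</module>
        <module>b2b</module>
    </modules>

    <!-- utils-webapi/pom.xml: pins the common version webapi was built against -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>common</artifactId>
        <version>1.01</version>
    </dependency>

    <!-- utils-b2b/pom.xml: free to move to a newer common without touching webapi -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>common</artifactId>
        <version>1.02</version>
    </dependency>

With released (non-SNAPSHOT) versions of common, a change to common only reaches webapi when utils-webapi explicitly bumps its dependency, so CI does not have to rebuild and redeploy webapi for every change to common.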
A conceptual question about VDM usage. Assume my OData service evolves in an S/4HANA Cloud system and I am consuming it in a microservice. Since the VDM needs the edmx file to generate entity classes, assume my OData service gets a new field, or eliminates a field that I do not use. If I do not change my edmx and do not generate new classes, will my call still work? And the second question: if one of the fields I do use changes, and I need to ensure zero downtime, how do I handle 2 versions of the generated classes at the same time?
The generated OData VDM ultimately performs an OData call based on the fields that are used. So, as long as you do not use fields that were removed, this should not be a problem. Note, however, that such removals would have to be done in a new version of the SAP S/4HANA service.
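For example, with the Java VDM the fields to request are listed explicitly in the query, so a removed-but-unused field never appears on the wire. A minimal sketch, using the Business Partner service shipped with version 2.x of the SDK as a stand-in for your own generated classes:

    import java.util.List;

    import com.sap.cloud.sdk.s4hana.connectivity.ErpConfigContext;
    import com.sap.cloud.sdk.s4hana.datamodel.odata.namespaces.businesspartner.BusinessPartner;
    import com.sap.cloud.sdk.s4hana.datamodel.odata.services.DefaultBusinessPartnerService;

    public class PartnerReader {
        public List<BusinessPartner> readPartners() throws Exception {
            // Only the selected fields end up in the OData $select clause, so a
            // field that was removed from the service but is not selected here
            // does not break this call, even with stale generated classes.
            return new DefaultBusinessPartnerService()
                    .getAllBusinessPartner()
                    .select(BusinessPartner.BUSINESS_PARTNER, BusinessPartner.LAST_NAME)
                    .execute(new ErpConfigContext());
        }
    }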
Since breaking changes affect all consumers independent of whether the Java or JavaScript VDM of the SAP S/4HANA Cloud SDK is used, developers of services in SAP S/4HANA have to follow a certain API guideline that includes specific deprecation rules.
So, if a breaking change is really required, then according to the S/4HANA API guideline a new version of the service has to be published, which will also be available under a different URL. This gives you the possibility to migrate from the old to the new version without interruptions.
I have four microservices as below (names changed):
Account Service
Product Catalog Service
Car Service
Order Service
However, while developing I am finding it difficult to start/deploy all the applications.
Also, there is a strong reason to do this. Our client has already purchased WebLogic 12c Enterprise Edition. He is reluctant to pay the infrastructure cost incurred by deploying microservices. He wants us to follow a monolithic architecture and configure the cluster for better performance.
Is there a way to put all the above microservices (jars) into a war so that we deploy a single war?
Thanks in advance.
Yes, you can do that, but it looks like you are moving away from the microservice architecture.
You can create a new Spring Boot application and add all the microservices as dependencies of the new application.
Take care of the points below as well:
You have to update @ComponentScan(basePackages = ...) so that it covers the packages of all the merged services.
If you are using JPA repositories then you likewise have to update @EnableJpaRepositories(basePackages = ...).
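A minimal sketch of such an aggregating application, assuming the merged services live under hypothetical com.example.* packages; adjust to your real layout:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.autoconfigure.domain.EntityScan;
    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

    @SpringBootApplication
    // Scan the beans of all four merged services.
    @ComponentScan(basePackages = {
            "com.example.account",
            "com.example.catalog",
            "com.example.car",
            "com.example.order"})
    // Pick up the JPA repositories of all merged services.
    @EnableJpaRepositories(basePackages = "com.example")
    // Likely also needed if your entities live outside this class's package
    // (an assumption; verify against your own project structure).
    @EntityScan(basePackages = "com.example")
    public class CombinedApplication {
        public static void main(String[] args) {
            SpringApplication.run(CombinedApplication.class, args);
        }
    }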
Our application is built on Spring Boot; the app is packaged as a war file and run with java -jar xx.war -Dspring.profile=xxx. Generally the latest war package is served by a static web server like nginx.
Now we want to know if we can add auto-update to the application.
I have googled, and people suggested using an application server which supports hot deployment; however, we use Spring Boot as described above.
I have thought of starting a new thread once my application has started, which would check for updates and download the latest package. But I have to terminate the current application to start the new one, since they use the same port, and if I close the current app, the update thread will be terminated too.
So how would you handle this problem?
In my opinion that should be managed by some higher-order, DevOps-level orchestration system, not by the app or its container. The decision to replace an app belongs at that orchestration level, not at the app level.
One major advantage of Spring Boot is the inversion of the traditional web-container-hosts-application model into an application-embeds-web-container model. As such, the web container is usually (and as best practice with Spring Boot) built into the app itself. Hence the deployable is fully self-contained and, crucially, immutable. It therefore should not be the role of the app or its embedded container to replace either part of or all of itself.
Of course you can do whatever you like, but you might find that the solution is not easy, because doing it this way is not the convention.
I want to create a Java EE application with JSF + the Spring Framework on the WildFly AS. One of the hot requirements is:
Plug-and-play modules: if I update my application or add a new module to it (obviously updating beans.xml, web.xml, POJO classes, jars etc.), then the changes should take effect without redeploying my *.war file and without restarting my WildFly AS.
This is a complicated requirement for a few reasons. How will you handle changes to your DB schema/entity model? How will you handle sessions which are in progress at the time of the upgrade and are actively using the 'old' code? How do you handle changes to container-managed code, i.e. code that the container only processes at deployment time, for example new EJBs?
One approach I have seen used in production to achieve some of these requirements is rolling updates with application versioning and full schema backwards compatibility. This is done in a clustered environment fronted by proxy servers that allow active sessions using the old version of the application to continue until finished, while ensuring that new sessions go to servers/contexts containing the new version of the code. So you still end up deploying WARs which contain the new version of your code, and you eventually undeploy the old versions once all old sessions have ended or expired. To do this you have to assume the burden, in your code, of fully supporting two simultaneous versions of your model whenever a new version changes it. This is not a trivial burden. You also have to assume the burden of the extra infrastructure needed to route sessions appropriately.
I know that a product like JRebel will let you do hot deploys of code (even things like EJBs), the idea being that it shortens the develop/test cycle, but I haven't seen it used outside of a development environment. Also, you would still have to deal with active sessions that were started on the old version/model.