Creating Microservices in ATG 10.2

As part of our requirements, we are expected to create microservices for an existing e-commerce platform. The current architecture runs on ATG 10.2 and has some REST APIs hosted on it.
Given that ATG is a monolithic e-commerce framework, is there any way we can create microservices in ATG? Even if we are able to do so, how will they run as independent services? I mean, how can we deploy and test them in another environment? I want to understand the technical feasibility of creating microservices on the ATG e-commerce platform.

Perhaps you need to define how your microservices are supposed to work first. If you were to, for example, expose the ATG Profile as a microservice, it won't, by itself, run in another environment; it simply means that you can expose the functionality for consumption by a different system via the service. Alternatively, you can expose a Profile module on a different system and try to consume it within ATG. That too is possible.
In a nutshell, you can integrate various open-source libraries into your ATG stack to build and expose the functionality of the monolithic application as microservices. To get started, read up on webmvc, oxm, hateoas, plugin-core, springtonucleus and perhaps dozer.
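To make that concrete, here is a minimal sketch of what exposing profile data through a Spring MVC controller inside an ATG module could look like. The Spring-in-Nucleus wiring (e.g. via springtonucleus) is assumed rather than shown, and the endpoint path and exposed properties are purely illustrative, not a definitive integration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import atg.nucleus.Nucleus;
import atg.repository.Repository;
import atg.repository.RepositoryException;
import atg.repository.RepositoryItem;

// Hypothetical REST facade over the ATG profile repository. The component
// path is the standard one, but the endpoint and the properties exposed
// here are illustrative.
@RestController
public class ProfileServiceController {

    @GetMapping("/api/profiles/{id}")
    public Map<String, Object> getProfile(@PathVariable("id") String id)
            throws RepositoryException {
        // Resolve the profile repository from Nucleus; assumes this code is
        // deployed inside an ATG module where the global Nucleus exists.
        Repository profiles = (Repository) Nucleus.getGlobalNucleus()
                .resolveName("/atg/userprofiling/ProfileAdapterRepository");
        RepositoryItem profile = profiles.getItem(id, "user");

        // Return a small, explicit projection instead of the whole profile.
        Map<String, Object> view = new LinkedHashMap<>();
        view.put("id", profile.getRepositoryId());
        view.put("firstName", profile.getPropertyValue("firstName"));
        view.put("email", profile.getPropertyValue("email"));
        return view;
    }
}
```

Note that this "microservice" is still an HTTP facade deployed inside the ATG instance; it decouples consumers from ATG, but it does not by itself run as an independent process, which is exactly the limitation discussed above.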
Perhaps you need to define your architecture first before asking a much more specific question here. The real answer is just too long.

Related

How to Test Go App Engine apps locally on Win 10 and use app.yaml

In Google's latest docs, they say that to test Go 1.12+ apps locally, one should just run go build.
However, this doesn't take into account all the routing, etc., that would happen in App Engine using the app.yaml config file.
I see that dev_appserver.py is still included in the SDK, but it doesn't seem to work on Windows 10.
How does one test their Go App Engine app locally with the app.yaml, i.e. as an actual emulated App Engine app?
Thank you!
On one hand, if your application consists of just the default service, I would recommend following #cerise-limón's comment suggestion. In general, it is recommended that the routing logic of the application be handled within the code. Although I'm not a Go programmer, for single-service applications that use static_files and static_dir there shouldn't be any problems when testing the application locally. You might also deploy the new version without promoting traffic to it in order to test it, as explained here.
On the other hand, if your application is distributed across multiple services and the routing is managed through the dispatch.yaml configuration file, you might follow two approaches:
Test each service locally one by one. This could be the way to go if each service has a single responsibility/functionality that can be tested in isolation from the other services. In fact, with this kind of architecture the testing procedure would be more or less the same as for single-service applications.
Run all services locally at once and build your own routing layer. This option would allow you to test applications where services need to reach one another in order to fulfill the requests made to them. A minimal sketch of such a routing layer follows this answer.
Another approach that is widely used is to have a separate project for development purposes, where you could just deploy the application and observe its behavior in the App Engine environment. For applications with highly coupled services this would be the easiest option, but it largely depends on your budget.
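The routing layer for the second approach is just a small reverse proxy and is language-agnostic; purely as an illustration (sketched in Java to keep all examples on this page in one language), it could look like this, with two hypothetical services on local ports 8081 and 8082:

```java
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

import com.sun.net.httpserver.HttpServer;

// A tiny local "dispatch" router: forwards requests by path prefix to
// services assumed to be running on local ports (GET-only and no header
// forwarding, which is enough for local smoke testing).
public class LocalRouter {

    // Hypothetical prefix-to-service mapping, mimicking dispatch.yaml rules.
    private static final Map<String, String> ROUTES = Map.of(
            "/api/", "http://localhost:8081",
            "/admin/", "http://localhost:8082");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            // Route to the first matching prefix, defaulting to the api service.
            String backend = ROUTES.entrySet().stream()
                    .filter(e -> path.startsWith(e.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst()
                    .orElse("http://localhost:8081");
            try {
                HttpResponse<byte[]> upstream = client.send(
                        HttpRequest.newBuilder(URI.create(backend + path)).build(),
                        HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(upstream.statusCode(), upstream.body().length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(upstream.body());
                }
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1);
            }
        });
        server.start();
        System.out.println("Routing layer listening on http://localhost:8080");
    }
}
```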

How do I manage micro services with DevOps?

Say I have a front-end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each micro-service get its own github repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync with regard to which versions they are supposed to rely on?
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each micro-service get its own github repo and CI/CD pipeline?
From my experience you can do both. I have seen some teams putting multiple microservices in one repository. We were putting each microservice in a separate repository, as the Jenkins pipeline was built in a generic way to build them all the same way. This included having some configuration files in specific directories, like "/Scripts/microserviceConf.json". This helped us in some cases. In general you should also consider cost, as GitHub has a pricing model that takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync with regard to which versions they are supposed to rely on?
You need to be backwards compatible. That means if your blog version 2.4 is not compatible with tools version 2.3, you have high dependency and coupling, which goes against one of the key benefits of microservices. There are many ways to get around this.
You can introduce a versioning scheme for your microservices' APIs. If you make a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of the API. For example, POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools microservice would get some bridge time during which you support both APIs so it can migrate to v2.
Also take a look at Semantic versioning here.
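To make the bridge period concrete, here is a hedged sketch of a blog service that keeps the old endpoint alive while exposing the new one; the two paths come from the answer above, while the payload shapes and class names are hypothetical:

```java
import java.util.List;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical blog service keeping the old v1 contract alive while tools
// migrates to v2. Payload shapes and class names are illustrative.
@RestController
public class BlogController {

    record BlogRequestV1(String title, String body) {}

    record BlogRequestV2(String title, String body, List<String> tags) {}

    // Old contract: unchanged during the bridge period.
    @PostMapping("/blogs/api/blog")
    public String createBlogV1(@RequestBody BlogRequestV1 request) {
        // Adapt v1 payloads to the new model (v1 clients have no tags).
        return createBlogV2(new BlogRequestV2(request.title(), request.body(), List.of()));
    }

    // New contract: breaking changes live only here.
    @PostMapping("/blogs/api/v2/blog")
    public String createBlogV2(@RequestBody BlogRequestV2 request) {
        // ... persist the blog entry (omitted in this sketch) ...
        return "created: " + request.title();
    }
}
```

Once tools has migrated, the v1 endpoint can be retired; semantic versioning then tells consumers whether an upgrade is breaking (major), additive (minor), or a fix (patch).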
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of microservice orchestration and service discovery. Usually your cloud provider's services have tooling to deal with this. You can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they do it.
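For the common case, both ECS (with AWS Cloud Map service discovery) and Kubernetes give each service a stable DNS name and keep it pointing at healthy instances, so callers never track IPs themselves. A minimal sketch, assuming a hypothetical internal name tools.internal and a /api/health endpoint:

```java
import java.net.InetAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Call the tools service by a stable DNS name instead of an IP, so the
// orchestrator can move instances around freely. "tools.internal" is a
// hypothetical name; the real one depends on your platform's DNS scheme.
public class ToolsClient {

    public static void main(String[] args) throws Exception {
        // Optional: inspect which addresses the name currently resolves to.
        for (InetAddress address : InetAddress.getAllByName("tools.internal")) {
            System.out.println("tools instance: " + address.getHostAddress());
        }

        // Normal calls just use the name; DNS re-resolution happens under the hood.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://tools.internal/api/health")).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```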
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers representing your whole system. This would include your microservices, infrastructure (database, cache, helpers), and others. You can read more about it in this answer here, in the section "Considering the Development Setup".
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

How to change the configuration to build as single deployment with multi-database?

I came across this boilerplate just a couple of weeks ago, coincidentally while I was working on designing a multi-tenant SaaS architecture using the .NET Core framework, EF Core as the ORM, an SPA (Angular) as the presentation layer, and an OData API; this boilerplate turned out to be exactly what I was looking for. I have one question: how do I set up the configuration on this sample Event SaaS app to make it a single deployment with multiple databases?
I have noticed there is an appsettings.json where subdomains are stored, and each entity (for example, Event) inherits from IMustHaveTenant, which means each entity has a TenantId. This setup is suitable for a single deployment with a single database (the database filter is automatically applied by aspboilerplate), but I am looking to make it a single deployment with multiple databases (one database per tenant). It would be great if you could give me some clues.
Note: This is what I am talking about.
Thank you.

Microservice dependency manager tools

Is there a tool available to manage microservice dependencies?
For example, suppose there are services like an inventory service, a catalog service, and an identity service which together constitute a product service.
Is there a visual tool which can map all the dependencies, so that if any of the services changes it shows which other services will be affected by the change?
While this question was posted some years ago, there is now an open source tool called Ortelius.io that does microservice dependency mapping across clusters. It tracks and versions 'logical' views of the application and shows which applications depend on which services, tracking this across all clusters with a full versioning engine.
https://github.com/ortelius/ortelius
I think your requirement is closely satisfied by the Service maps feature of New Relic, which is an application performance monitoring platform.
Check out https://docs.newrelic.com/docs/using-new-relic/service-maps/get-started/introduction-service-maps
Service maps are visual, customizable representations of your application architecture.
Maps automatically show you your app's connections and dependencies, including databases and external services.
Health indicators and performance metrics show you the current operational status for every part of your architecture.
Not exactly a dependency manager, if there even is such a thing, but we made use of a tool called Pinpoint. Among its many features is one which shows all the services that are configured with Pinpoint and how they interact with other services and databases.
It may help you see how services are linked, and you can infer which services would be impacted if you alter a given service.
It may be a long shot to set up a whole APM just to find these dependencies, but if you are starting from scratch, you might consider it.

Architectural advice to developing Service Portal Application

I am new to the ServiceNow platform, developing a custom app using the Service Portal, and I am looking for some architectural advice from experts.
Here is my situation: my service is going to serve different companies according to their requirements, while keeping the codebase easy to maintain. For example, I have a base app with some concrete requirements that fit all companies, but there will also be company-specific features: feature A for company A, feature B for company B, and so on. My initial plan was classic software development: a single codebase in git with multiple feature branches deploying to multiple instances. But there can be situations where I would need to merge those branches, which is not possible in that setup. My question is: is there an alternative way to do that? Another possible option: should I go with a single instance with ACL-based data separation? (That doesn't feel scalable to me, because the amount of data will be huge after some time.) Or is it possible to apply a regular SaaS architecture like multitenancy (a single app with multiple databases) with some configuration-based feature separation?
Thanks in advance.
