Is there a tool available to manage microservice dependencies?
For example: if there are services like an Inventory service, a Catalog service and an Identity service which together constitute a Product service.
Is there a visual tool which can map all the dependencies, so that if any of the services is changed it shows which other services will be affected by the change?
While this question was posted some years ago, there is now an open source tool called Ortelius.io that does microservice dependency mapping across clusters. It tracks and versions 'logical' views of the application, and shows which apps are dependent upon which services. It tracks this across all clusters with a full versioning engine.
https://github.com/ortelius/ortelius
I think your requirement is closely satisfied by the Service Maps feature of New Relic, which is an Application Performance Monitoring (APM) platform.
Check out https://docs.newrelic.com/docs/using-new-relic/service-maps/get-started/introduction-service-maps
Service maps are visual, customizable representations of your application architecture.
Maps automatically show you your app's connections and dependencies, including databases and external services.
Health indicators and performance metrics show you the current operational status for every part of your architecture.
Well, not exactly a dependency manager, if anything like that exists at all, but we made use of a tool called Pinpoint. Among its many features is one which shows all the services configured with Pinpoint and how they interact with other services and databases.
It may help you find how services are linked, and you can infer which services would be impacted if you alter a given service.
It may be a long shot to get a whole APM set up just to find these dependencies, but if you are starting from scratch, you may want to consider it.
We're looking into how we could manage the configuration of several microservices (10 - 15 services) and fat client applications which are installed on equipment (several hundred units). The applications are being developed in Java (for what it's worth). The equipment doesn't always have a working connection to the network, so the configuration must also be cached locally.
We have been looking into Spring Cloud Config and services such as Consul, Zookeeper and Etcd. We particularly like Consul as it comes with a lot of functionality out of the box, not least a user interface.
What we are still struggling with is how we should set up such a tool, especially for the equipment configuration. We have four different types of equipment which can be running slightly different versions of their respective applications. These applications share some configuration settings, whereas other settings are specific to a version, an equipment type or even a single piece of equipment.
It seems pretty easy to store the configuration for one version of a single type in a tool like Consul, but how could we structure the settings in Consul for the environment we have in such a way that it is still clear and understandable for service engineers who shouldn't be too familiar with all the intricacies of the application? Is Consul actually the right tool for this?
I'm not entirely sure what you're asking, but it sounds like you want to simplify your configuration management so that you can reuse it anywhere.
You might want to check out some popular key-value management software such as HashiCorp Vault, AWS Secrets Manager, Bitnami Sealed Secrets, and others.
Cheers!
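As for structuring the settings for the equipment fleet described in the question, one common approach with a key-value store like Consul is a layered key hierarchy that goes from shared to specific, with the application merging the layers at startup so more specific keys override general ones. A minimal sketch, assuming Consul's kv CLI; all key names and values are hypothetical:

    # Hypothetical layered layout: shared -> equipment type -> app version -> single unit.
    # A client reads each layer in order and lets later (more specific) keys win.
    consul kv put config/shared/log-level "info"
    consul kv put config/type/typeA/poll-interval "30s"
    consul kv put config/type/typeA/version/2.3/feature-x "enabled"
    consul kv put config/unit/unit-0042/poll-interval "10s"

A layout like this keeps each layer readable on its own, which helps service engineers who only care about, say, one equipment type.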
Say I have a front end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and has its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each micro-service get its own github repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which version they are supposed to rely on?
If I'm deploying the service tools to multiple different servers, whose IP's may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each micro-service get its own github repo and CI/CD pipeline?
From my experience you can do both. I have seen some teams putting multiple micro-services in one repository.
We were putting each micro-service in a separate repository, as the Jenkins pipeline was built in a generic way to build them that way. This included having some configuration files in specific directories, like "/Scripts/microserviceConf.json".
This was helping us in some cases. In general you should also consider the cost, as GitHub has a pricing model which takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools micro-service uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which version they are supposed to rely on?
You need to be backwards compatible. That means if your blog's 2.4 version is not compatible with tools version 2.3, you will have high dependency and coupling, which goes against one of the key benefits of micro-services. There are many ways to get around this.
You can introduce a versioning system for your micro-services. If you have a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of that API. So POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" which would have the new features, and the tools micro-service would have some bridge period in which you support both APIs so it can migrate to v2.
Also take a look at Semantic Versioning.
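A minimal sketch of what supporting both API versions side by side could look like, assuming a Java Spring controller (the original answer does not prescribe a framework, and all class and method names are illustrative):

    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical blog service exposing both API versions during the bridge period.
    @RestController
    public class BlogController {

        // v1 endpoint: kept alive until consumers such as 'tools' have migrated.
        @PostMapping("/blogs/api/blog")
        public String createBlogV1(@RequestBody String body) {
            return "created (v1)";
        }

        // v2 endpoint: carries the breaking changes and new features.
        @PostMapping("/blogs/api/v2/blog")
        public String createBlogV2(@RequestBody String body) {
            return "created (v2)";
        }
    }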
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of service discovery and micro-service orchestration. Usually your cloud provider's specific services have tools to deal with this. You can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they do it.
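For instance, in Kubernetes (which EKS runs), other services never track IPs directly; they address a stable DNS name provided by a Service object, and Kubernetes routes traffic to whatever healthy instances currently back it. A minimal sketch, with illustrative names and ports:

    # Hypothetical Service for the 'tools' micro-service; other services reach it
    # at the stable DNS name 'tools' regardless of which pods or IPs back it.
    apiVersion: v1
    kind: Service
    metadata:
      name: tools
    spec:
      selector:
        app: tools
      ports:
        - port: 80
          targetPort: 8080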
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers which would represent your whole system. This would include your micro-services, infrastructure (database, cache, helpers) and others. You can read more about it in this answer, in the section "Considering the Development Setup".
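A minimal docker-compose.yml sketch for the setup described in the question (the service names come from the question; the build paths, ports and the database are illustrative assumptions):

    # Hypothetical local development network: front end, the three backend
    # services from the question, and a shared database container.
    version: "3"
    services:
      frontend:
        build: ./frontend
        ports:
          - "8080:80"
        depends_on: [tools, blog, store]
      tools:
        build: ./tools
      blog:
        build: ./blog
      store:
        build: ./store
      db:
        image: postgres:13
        environment:
          POSTGRES_PASSWORD: example

With something like this in place, a single docker-compose up starts the whole system locally, and each service can reach the others by their compose service names.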
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose
As part of requirements, there is an expectation to create microservices for an existing ecommerce platform. The current architecture runs on ATG version 10.2 and has some REST APIs hosted on it.
Given the fact that ATG is a monolithic ecommerce framework, is there any way that we can create microservices in ATG? Even if we are able to do so, how will they run as independent services? I mean, how can we deploy them and test them in another environment? I wanted to know the technical feasibility of creating microservices on the ATG ecommerce platform.
Perhaps you need to define how your microservices are supposed to work first. If you were to, for example, expose the ATG Profile as a microservice, it wouldn't, by itself, run in another environment; it simply means that you can expose the functionality for consumption by a different system via the service. Alternatively, you can expose a Profile module on a different system and try to consume it within ATG. That too is possible.
In a nutshell you can integrate various open source libraries into your ATG stack to build and expose the functionality of the monolithic application into microservices. To get started, read up about webmvc, oxm, hateoas, plugin-core, springtonucleus and perhaps dozer.
Perhaps you need to define your architecture first before asking a much more specific question here. The real answer is just too long.
I am currently working heavily in Azure. I am actually quite fond of ARM (Azure Resource Manager) right now and would love to keep using it. Right now in the old portal, we have a lot of resources tied up as Cloud Services. Now, I know Cloud Services are available in the new portal, but it seems that Microsoft is moving away from the classic Cloud Service model. Can someone explain if this is true? If so, what will the new model look like? I already use resource groups to manage Websites (Web Apps), so I assume this is where the Azure future lies. Will we see the "deprecation" of Cloud Services further down the line?
I am trying to understand if I need to begin re-structuring my Azure Infrastructure.
Any insight, explanation, or documentation is greatly appreciated.
So there are two things here - Cloud Services and management of Cloud Services.
When you manage Cloud Services in the current portal, the underlying mechanism used is Azure Service Management (ASM), whereas it is Azure Resource Manager (ARM) in the preview portal. To me, ARM is the new way of managing your cloud resources in Azure (including Cloud Services).
I don't work for Microsoft so I would not know if Cloud Services themselves will be deprecated down the road or not, but one thing I think will happen is that ASM will be deprecated in favor of ARM. At some point, the only option you will be left with for managing your cloud resources will be Azure Resource Manager. One example that makes me believe this is the presence of Classic resource providers (e.g. the Classic Storage resource provider, which enables you to manage storage accounts created in the current portal via ASM from the preview portal, which works exclusively on ARM).
Personally I can't see a place for cloud services in the new ARM world of Azure. I have always found them a convoluted concept that simply added complexity to a deployment.
In the ARM view of deployments, servers are collected together in a VNet, and each server is attached to a NIC which in turn can be connected to the internet. A security group then takes care of ingress/egress rules.
This is a much cleaner deployment method, as it puts connectivity configuration at the server layer instead of mapping it all through a higher layer of abstraction.
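As a rough illustration of that model, creating the pieces with the Azure CLI might look like the following; the resource names are hypothetical and the flags are as documented for recent az versions:

    # Hypothetical names; shows the VNet -> NSG -> NIC layering described above.
    az network vnet create --resource-group demo-rg --name app-vnet \
        --address-prefixes 10.0.0.0/16 --subnet-name default --subnet-prefixes 10.0.0.0/24
    az network nsg create --resource-group demo-rg --name app-nsg
    az network nic create --resource-group demo-rg --name app-nic \
        --vnet-name app-vnet --subnet default --network-security-group app-nsg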
I don't see a place for cloud services in ARM; however, after a quick search it seems that there is a plan to implement it.
Still no direction from the Azure Advisers group other than that, officially, they will not drop support for Cloud Services. I think they are nearing giving us some kind of direction, but I can't say any more than that.
I asked a question about the future of Cloud Services on the recent Azure Compute AMA.
You can read the answers directly on Reddit for all the details; below are a few interesting quotes (emphasis mine).
On ARM Integration for Cloud Services:
We are looking at ways to make the transition to ARM easier for Cloud Service customers- one of those options includes CS integration in ARM. This investigation is in the very early stages though, so if you are looking for a solution soon, check out VMSS/ACS/SF/Web Apps (meagan-msft)
And:
I think it's safe to say that if we make any significant investment in CS in the near future, it would be ARM integration, and as Meagan suggests, that's still in planning. Beyond that, there are no major feature improvements on the horizon. We believe the platform is pretty mature at this point. (seanmichaelmckenna)
So it doesn't look like any major innovations will hit Cloud Services soon, however:
Cloud Services are not going anywhere. In fact, many Microsoft services run on Cloud Services, so we heavily rely on them as well. They are fully supported, so feel free to continue to use them.
(meagan-msft)
For those who want to switch to a different Compute service, these recommendations were made:
However, if you would like to check out other services that are integrated with ARM today, we recommend checking out the following:
Web Apps for customers who want a fully managed platform and are building traditional web applications
Service Fabric for customers who want an opinionated application platform and managed infrastructure, but still need some control over the IAAS layer
VM Scale Sets for customers who need IaaS-level control with easy scaling, autoscale and load balancer integration
Azure Container service was also listed as a potential alternative.
Some things to consider (my understanding):
Service Fabric currently (2017) requires at least 5 VM instances, except for dev/test purposes. So probably only an option for larger services
VM Scale Sets is an IaaS offering, i.e. you have to manage OS updates etc. yourself. However, support for automatic OS updates is being worked on.
We are looking at a standard way of configuring the various "endpoints" of our application. Our application is a distributed system with Windows Desktop applications, Windows Server "services" and databases.
We currently configure each piece using XML files. This is getting a little out of hand as we work with larger customers who can have dozens of servers running our application and hundreds of desktop clients.
Can anyone recommend a Microsoft technology or a third-party one that would allow us to centralize all that configuration information and manage it in one place for all our applications? Any changes would be "pushed" to the endpoint(s) that are interested.
For example, if we were to change the login for one of our databases, we would make that change on the database, then reflect that change in our centralized system. Following that last step, any service that needs to connect to the database would be notified of the change (and potentially receive the new data). How and what each endpoint does with that information is outside the scope of the system.
Our primary business is not "Centralized Configuration Services". We are a GIS company that provides solutions for various utilities worldwide.
I've done a couple of things to give myself this functionality over the years. I build enterprise applications that may be distributed across many servers, and I don't want to bury config settings in each service's config file or each web server's web.config file. For application-specific stuff I usually create an application settings table in the app's database. The table only has two fields: SettingName and SettingValue. I then write a web or WCF service whose sole function is to retrieve these settings. I write a function called GetSetting where you pass "SettingName" and it returns SettingValue, or an empty string if the setting is not found. This way I can store all application settings for all components of the application in one spot. Maintenance and troubleshooting for this is really easy; I'm not hunting through scads of config files spread across a dozen web and app servers.
For larger scale apps I might create a separate AppSettings database where I add a new field, ApplicationName, to the table mentioned above. My web or WCF service for this approach has the same method call (GetSetting), only at this scope you pass ApplicationName and SettingName and it returns SettingValue or an empty string.
Doing either of these things allows you to centralize all app settings for any size application or IT shop. It has worked really well for us.
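A minimal sketch of the table and lookup described above (the table shape, SettingName, SettingValue and ApplicationName follow the answer; every other name is illustrative):

    -- Settings table as described: one row per setting, keyed per application
    -- in the larger-scale variant.
    CREATE TABLE AppSettings (
        ApplicationName VARCHAR(100)  NOT NULL, -- drop this column for the single-app variant
        SettingName     VARCHAR(100)  NOT NULL,
        SettingValue    VARCHAR(1000) NOT NULL,
        PRIMARY KEY (ApplicationName, SettingName)
    );

    -- The query behind GetSetting(applicationName, settingName); the service
    -- returns an empty string when no row matches.
    SELECT SettingValue
    FROM AppSettings
    WHERE ApplicationName = 'BillingService'   -- hypothetical caller
      AND SettingName = 'DbConnectionString';  -- hypothetical setting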
You could use RSS together with BitTorrent to distribute changes. See Wikipedia. It is not MS-specific, but it should provide the flexibility you need: a configuration server holding the configuration and providing the feeds needed to configure the clients and possibly the servers.
Any VCS through a secure channel?
For example, Git through SSH (both available in Cygwin).
I think the first step is to have the secure channel (if you want the push ability, pulling might be different).
As for managing the "versions" in different "branches", what's better than a version control system?
As for the Microsoft requirement, well, the Microsoft software that exists in that area would be a pretty bad fit in your case (as in, not the best tool for the job).