I have an adapter (written in Spring Boot and Spring Integration) that retrieves currency rates from two different sources (via REST and a proprietary library). I filter out unnecessary things, create instances of a class known to my system, and send the rates to a JMS cluster. I want this adapter to be replicated, but only one instance should be running at a time. When one crashes (which I know from the health endpoint), another one should start publishing rates. How can I achieve this? I know that available services can be registered using Eureka, but how do I turn one of them on automatically?
The solution to this problem is spring-cloud-cluster. You can use either ZooKeeper or Hazelcast to negotiate leadership. Of the several instances, only one is given the leader role; if it crashes, another one takes over (instances are informed via event propagation). You can also use the yieldLeadership method to relinquish leadership manually (for example, when a health indicator says something is wrong with the application).
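A minimal sketch of how the leader flag can drive publishing, assuming a Candidate registered with a ZooKeeper- or Hazelcast-backed LeaderInitiator publishes leadership events (the packages shown are Spring Integration's; spring-cloud-cluster exposes similarly named OnGrantedEvent/OnRevokedEvent types). The RatesPublisher name is illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.context.event.EventListener;
import org.springframework.integration.leader.event.OnGrantedEvent;
import org.springframework.integration.leader.event.OnRevokedEvent;
import org.springframework.stereotype.Component;

// Illustrative component: assumes a LeaderInitiator (ZooKeeper or Hazelcast
// backed) is configured elsewhere and publishes leadership events.
@Component
public class RatesPublisher {

    private final AtomicBoolean leader = new AtomicBoolean(false);

    @EventListener
    public void onGranted(OnGrantedEvent event) {
        // This instance won the election: start forwarding rates to JMS.
        leader.set(true);
    }

    @EventListener
    public void onRevoked(OnRevokedEvent event) {
        // Leadership was lost or yielded: stop forwarding rates.
        leader.set(false);
    }

    public void publish(Object rate) {
        if (leader.get()) {
            // jmsTemplate.convertAndSend("rates", rate);  // sketch only
        }
    }
}
```

To hand leadership over when your own health indicator reports a problem, yieldLeadership (or the yield() method on the Context passed to your Candidate in the Spring Integration variant) can be called from whatever probes the health endpoint.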
Without knowing more details it is hard to give you a recommendation.
I'd personally say Eureka is not built for what you are trying to achieve; see the Eureka FAQ for reference. It sounds more like you want to look into ZooKeeper, which was built exactly for what you are trying to achieve: leader election.
On the other hand, if you can also live with the service being down for a few seconds, I'd suggest you either use the script that already monitors the /health endpoint to restart the service, or use systems that already have this built in, like systemd or Docker, where you can define restart policies.
We are all new to MassTransit and are currently evaluating it for a larger project, and we are wondering if anyone could help us get a better understanding of the following challenges:
"Single consumer of events in a load-balanced environment"
In production our services will be running in multiple instances for scalability and failover, and will be part of a larger ecosystem of microservices. The overall architecture is based on Microsoft's eShopOnContainers reference implementation, where different microservices communicate with each other via "integration events".
When publishing an IntegrationEvent to other services, which I assume should be done as described in MassTransit Producers / Publish (https://masstransit-project.com/usage/producers.html#publish), how can I ensure that only ONE instance of a specific microservice processes the event, while of course the event still reaches ALL microservices that depend on it? When we have built similar solutions based on Azure Functions, this requirement was solved by using the "Singleton" attribute (https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#singleton-attribute).
Azure Service Bus
Reading the documentation, my impression is that MassTransit is very RabbitMQ-centric. Since we will be on Azure Service Bus when moving to production, are there any limitations or features not available on that "transport"?
Regards Niclas
For the first question, it's a normal publish-subscribe with competing consumers. It works like this out of the box; there's nothing you need to do to achieve it.
When running multiple instances of the same service:
Use the same queue name for each instance
Messages from the queue will be load balanced across all instances (the competing consumer pattern)
That's from the RabbitMQ guidance in the docs, but it works like this for all transports.
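This is not MassTransit code (MassTransit is .NET), but the same competing-consumer idea sketched with Spring AMQP in Java to match the rest of this thread, assuming a RabbitMQ broker; every instance of a service declares the same queue name, so the broker load-balances each event across that service's instances, while other services bind their own queues to the exchange and still receive every event:

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

// Sketch only: all instances of "order-service" use the SAME queue name, so
// each event is processed by exactly one of them (competing consumers).
// Other services bind differently named queues to the same exchange and
// therefore also receive every event. All names here are illustrative.
@Configuration
public class OrderEventsConfig {

    @Bean
    FanoutExchange integrationEvents() {
        return new FanoutExchange("integration-events");
    }

    @Bean
    Queue orderServiceQueue() {
        return new Queue("order-service.integration-events"); // shared by all instances
    }

    @Bean
    Binding binding(FanoutExchange integrationEvents, Queue orderServiceQueue) {
        return BindingBuilder.bind(orderServiceQueue).to(integrationEvents);
    }
}

@Component
class OrderEventsListener {

    @RabbitListener(queues = "order-service.integration-events")
    public void handle(String eventJson) {
        // Exactly one instance of this service processes each event.
    }
}
```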
Concerning the Azure Service Bus transport, it works as expected and has a lot of production users. It's properly documented as well.
I'd say for both of your questions the answer is "it just works".
I am using Cloud Foundry and I deployed my Spring Boot application on the cloud. Whenever some update/upgrade happens on Cloud Foundry, my application gets restarted, and some requests fail to reach the application because the restart takes time to complete.
Is there any way in CF to keep some instances of the application running to process requests while the application is being upgraded/restarted?
I also want to know whether CF provides services from different locations/regions. For example, suppose my application is deployed on two CF containers available in different regions. Whenever an update/upgrade is available, CF would proceed with the upgrade in one region so that the CF service in the other region stays available and some application instances keep running to serve requests, and vice versa.
-Thank you.
What you're describing is the intended behavior of CF.
If you have two or more instances of your application, they should never both go down at the same time, i.e., one will be taken down, and only after it has restarted successfully will the other be taken down and restarted.
If your operator has configured multiple availability zones for the foundation that you've targeted, then application instances will be distributed across those AZs to help facilitate HA and best possible availability.
If you're not seeing this behavior then you should take a look at the following as these items can affect uptime of your apps:
Do you have more than one application instance? If you only have one application instance, then you can expect to see some small windows of downtime when updates are applied to the foundation and under other scenarios. This happens because at times Diego will need to evict applications running on a Diego Cell. It makes an attempt to start your app on another Cell before stopping the current instance, but no guarantees are provided around this. Thus you can end up with some downtime if, for example, your app is slow to start or your app does not have a good health check configured (like one that passes before the app is really up; see the health-indicator sketch after this list).
Did your operator set up multiple AZs? As a developer, you cannot really tell. This is abstracted away, so you would need to ask your platform operations team and confirm if there are more than one and if so how many. For best possible uptime, have at least as many app instances as you have AZs.
The other thing that is often overlooked: does your application depend on any services? If so, it is also possible that you will see downtime when those services are being updated. That all depends on the services you are using and whether there is associated downtime for management and upgrades of those services. You may be able to tell if this is the case by looking more closely at your application logs when it fails, to see if there are connection failures or errors like that. You might also be able to tell by looking at the plan defined in the CF Marketplace; often the description will say if there are stipulations regarding the plan, like whether or not it is clustered or HA.
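On the health-check point above, here is a hedged sketch of a Spring Boot HealthIndicator that only reports UP once the app is genuinely ready, so an HTTP health check pointed at /health does not pass before the instance can actually serve traffic (the class name and readiness condition are illustrative):

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Illustrative readiness-style health indicator: the platform's HTTP health
// check only sees UP once startup work has finished, so an instance is not
// considered healthy before it can serve requests.
@Component
public class ReadinessHealthIndicator implements HealthIndicator {

    private volatile boolean startupComplete = false; // set by your startup logic

    public void markStartupComplete() {
        this.startupComplete = true;
    }

    @Override
    public Health health() {
        return startupComplete
                ? Health.up().build()
                : Health.down().withDetail("reason", "startup not finished").build();
    }
}
```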
UPDATE
One other thing which can cause downtime:
If your operator has the "max in flight" value set too high for the number of Diego Cells, this can also cause downtime. Essentially, "max in flight" dictates how many Diego Cells will be taken out of service during an upgrade. If this value is too high, you can run into a situation where there is not enough capacity in the remaining Cells to host all of your applications. This ends up resulting in downtime for app instances, as they cannot be rescheduled on another Cell in a timely manner. As a developer, I don't think this is something you can troubleshoot; you would need to work with your platform operators to investigate further.
That is probably a theme here. If you are an app developer, you should be talking to your platform operations team to debug this.
Hope that helps!
I have a spring boot rest service where configuration values are stored in git and fetched using a config server. Deployment is done in a docker swarm cluster where this service would run across multiple containers. So one thing I had to keep in mind is that when actuator's refresh endpoint is called, it refreshes all the containers for this service seamlessly and not just any random container. This is quite an obvious ask I believe.
I could implement updating the config values for a service as and when its config changes in git by using a message broker. However, that would take time, and time is not on my side at the moment.
I have come up with two quick solutions and would like your help based on your experience as to which one is better than the other. Keep in mind that both work and I tested them both.
Solution 1
Create a scheduler using @Scheduled in Application.java and keep pinging actuator's refresh endpoint every 5 seconds. I think this is really expensive and resource-intensive in production.
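A sketch of what Solution 1 might look like, shown here with Spring Cloud's ContextRefresher instead of an HTTP call to the refresh endpoint (the polling cost is the same either way); the class name is illustrative:

```java
import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Sketch of Solution 1 (assumes @EnableScheduling is declared on a
// configuration class, e.g. the main application class).
@Component
public class ScheduledConfigRefresher {

    private final ContextRefresher contextRefresher;

    public ScheduledConfigRefresher(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    @Scheduled(fixedDelay = 5000)
    public void refresh() {
        // Re-fetches configuration from the config server and rebinds
        // @RefreshScope / @ConfigurationProperties beans in this container only.
        contextRefresher.refresh();
    }
}
```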
Solution 2
Call actuator's refresh endpoint in the controller method itself. This way, I call the refresh endpoint on demand instead of polling it wastefully like in Solution 1. It also ensures that whichever container is picked to service a request refreshes itself, since the refresh call only refreshes the properties held by that container.
Do you have any preference for one over the other? Do you see any pros and cons with these solutions? Which one would you pick, and why?
Please let me know what your thoughts are.
This sounds like an interesting problem. Also, as you pointed out, Solution 1 is resource-intensive and should not be used in production. If you are running out of time, I would suggest you go ahead with Solution 2; it's smarter than the former.
However, I think the optimal way to solve this problem is to use webhooks in GitHub. This way GitHub will make an API call to your predefined endpoint whenever a specific event is generated. Events are the core of GitHub webhooks. Here is the list of all GitHub events; choose the one that best suits your requirement: https://developer.github.com/webhooks/#events
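A rough sketch of such a webhook receiver (the endpoint path and payload handling are illustrative, and in practice you would also verify GitHub's X-Hub-Signature-256 header):

```java
import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Illustrative webhook receiver: GitHub calls this endpoint on a push event
// and the service refreshes its own configuration from the config server.
@RestController
public class ConfigWebhookController {

    private final ContextRefresher contextRefresher;

    public ConfigWebhookController(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    @PostMapping("/config/webhook")
    public ResponseEntity<Void> onPush(@RequestBody String payload) {
        contextRefresher.refresh();
        return ResponseEntity.noContent().build();
    }
}
```

Note that behind Docker Swarm's load balancer the webhook will only hit one container, so you would still need something like Spring Cloud Bus (or your own broadcast) to fan the refresh out to all instances.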
I am a newbie to microservices, with only theoretical knowledge. I want to build a small application with microservices. Can anyone please help me with an idea of how to implement microservices?
Thanks in Advance!!
You can create something like a currency conversion app with three microservices like these:
Limit service;
Exchange service;
Currency conversion service.
The limit service and the currency conversion service can communicate with the database to retrieve the limit values and the currency conversion rates.
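A minimal sketch of what one of these services could look like as a Spring Boot REST endpoint (the path, class name, and hard-coded value are illustrative; in a real setup the rate would come from a database and the currency conversion service would call this endpoint over HTTP):

```java
import java.math.BigDecimal;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Illustrative exchange service: returns the exchange rate between two currencies.
@RestController
public class ExchangeController {

    @GetMapping("/exchange/{from}/{to}")
    public ExchangeRate rate(@PathVariable String from, @PathVariable String to) {
        return new ExchangeRate(from, to, new BigDecimal("0.92")); // sample value
    }

    public record ExchangeRate(String from, String to, BigDecimal rate) {
    }
}
```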
For more info, check github.com/in28minutes and look for a microservices repository.
No matter how perfect the code of your microservice is, you may face issues with support and development if the microservice architecture doesn’t work according to certain rules.
The following rules can help you with microservices a lot:
You have to do everything by yourself, because you do not have Rails or an out-of-the-box architecture that can be started with one command. Your microservice should load libraries, establish client connections, and be able to release resources if it stops working for any reason.
This means that, being in the microservice folder and having run the 'ruby server.rb' command (a file for starting the microservice), we should expect the microservice to do the following:
Load used gems, vendor libraries (if used), and our own libraries
Use the configuration (depending on the environment) for adapters or classes of client connections
Establish client connections (permanent connections are meant here). As your microservice should be ready for any shutdown, you should take care of closing these client connections at such moments. EventMachine and its callback mechanism help a lot with this.
After that your microservice should be loaded and ready for work.
Encapsulate your communication with the services into abstractly named adapters. We name these adapters based on their role (PubSub, SMSMessenger, Mailer, etc.). This way, we can always change the inner implementation of these adapters by replacing the underlying service, as long as the names of our classes are service-agnostic.
For example, we almost always use Redis in our applications from the very beginning, so it is also possible to use it as a message bus and avoid integrating any other services. However, as the application grows, we should think about solutions like RabbitMQ, which are more appropriate for cases like ours.
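A sketch of the adapter idea, written in Java to match the rest of this thread (the answer itself is Ruby-oriented); the names are illustrative. Callers depend only on the role-named interface, so swapping Redis for RabbitMQ later means writing a new adapter rather than touching the callers:

```java
import java.util.function.Consumer;

// Role-named abstraction: the rest of the code depends on this, not on Redis
// or RabbitMQ directly.
public interface PubSub {
    void publish(String channel, String message);
    void subscribe(String channel, Consumer<String> handler);
}

// Illustrative Redis-backed adapter; could later be replaced by a
// RabbitMqPubSub without changing any code that uses the PubSub interface.
class RedisPubSub implements PubSub {

    @Override
    public void publish(String channel, String message) {
        // redisClient.publish(channel, message);   // sketch only
    }

    @Override
    public void subscribe(String channel, Consumer<String> handler) {
        // redisClient.subscribe(channel, handler); // sketch only
    }
}
```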
If your code is designed in such a way that your classes are coupled with each other, couple them according to the dependency inversion principle. This will help your code avoid issues with library booting.
Learn more here
You can try splitting an existing Monolithic application to gain perspective on microservice architecture.
I wrote this article, which talks about splitting a Django App into microservices. Hope it helps.
Let's consider a situation where multiple services rely on data that can change at any time and should be updated in each microservice at roughly the same time; for example, a list of supported languages or some common policies that could change one day and affect many services at once.
One solution I can think of is to have another microservice that holds that data; any service that needs the current state can just ask for it. The drawback is that this data does not change very frequently, asking over HTTP is not that cheap, and there is a lot of traffic to this, let's say, global registry service. Since the data rarely changes, many services could just cache it, in order not to ask for it every time, and then they would not be able to respond quickly enough when a change is made to the configuration.
The other solution could be to externalize such configuration; in AWS, for example, there could be a configuration file on S3 that is available to the others. The drawback here is that there is no way (as far as I know) to track changes to such a file, and no way to add logic to verify that a changed configuration value is correct (no typos and so on).
So my question is: how do you handle global configuration/registry in the microservice world so that there is little HTTP overhead, you can audit changes, and you can introduce a change to many services at the same time?
I would prefer option 1. Apart from the HTTP overhead, this can also leave your system in an inconsistent state: service 1 might be working with the new values while service 2 is still on the old ones.
Since we are talking about a distributed system here, I am willing to take a risk with availability.
Have a configuration service that allows you to plan your config changes. Instead of saying "change the value of A from x to y", you say "change from x to y at time t". This t allows you to propagate changes consistently across your whole system. You need to put in effort to understand what the minimum value of t should be for your set of services, how you will make all services acknowledge the changes and apply them at the right time, and how you will manage new services that come up in between.
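A minimal sketch of the "change from x to y at time t" idea (the type and field names are hypothetical): every service evaluates the same rule locally and switches atomically once the agreed effective time is reached, so they all flip together within the bounds of clock skew:

```java
import java.time.Clock;
import java.time.Instant;

// Hypothetical planned change: "key switches from oldValue to newValue at effectiveAt".
record PlannedChange(String key, String oldValue, String newValue, Instant effectiveAt) {

    // Each service applies the same rule locally, so all services flip at
    // (roughly) the same moment, bounded by clock skew.
    String currentValue(Clock clock) {
        return clock.instant().isBefore(effectiveAt) ? oldValue : newValue;
    }
}
```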
Another approach is to use Spring Cloud Config (or something similar). Each service registers with the centralised config service, and you make a refresh call to all the services to update their config. The limitation is that not all configs can be refreshed, and if you are behind a load balancer you still need to handle ways to make sure all instances get updated.
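For reference, a sketch of the refresh mechanics being described, assuming Spring Cloud Config: a bean marked @RefreshScope is rebuilt with the new values after a refresh is triggered on each instance (for example POST /actuator/refresh, or a Spring Cloud Bus broadcast so instances behind a load balancer are not missed). The property name and controller are illustrative:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Sketch: this bean is re-created with fresh property values when a refresh
// is triggered on this instance.
@RefreshScope
@RestController
public class SupportedLanguagesController {

    @Value("${app.supported-languages:en}") // illustrative property name
    private String supportedLanguages;

    @GetMapping("/languages")
    public String languages() {
        return supportedLanguages;
    }
}
```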
Use a config server (Spring Cloud Config Server) that maintains centralized configurations. You make configuration changes on the config server; each microservice goes to the config server for its configuration on startup, and even after startup it can return to the config server at certain intervals to check for any configuration changes and update accordingly.
There are a couple of ways to do it; a better way, especially in production, is to use the External Configuration Store pattern.
You can save the configuration in external stores like Azure Key Vault or Azure App Configuration.
Find more details about Azure Key Vault here:
Azure Key Vault
5-minute quickstarts for Azure Key Vault integration
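A hedged sketch of reading a configuration value with the Azure Key Vault SDK for Java (the vault URL and secret name are placeholders, and authentication uses the default credential chain, e.g. managed identity or environment variables):

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

// Sketch: fetch a configuration value from Azure Key Vault at startup.
public class KeyVaultConfigLoader {

    public static void main(String[] args) {
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://my-vault.vault.azure.net") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        String supportedLanguages = client.getSecret("supported-languages").getValue();
        System.out.println("supported-languages = " + supportedLanguages);
    }
}
```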
If you absolutely must have a shared config, the best decoupled architecture I've encountered is as follows:
You have a standalone Config Service, completely hidden from the outside world, which can only be accessed by your microservices through an internal network.
ON STARTUP: each microservice pulls what it needs from the Config Service and stores it in memory. If it is unable to pull from the Config Service, do not allow it to start; have a retry mechanism on this front.
ON CHANGE of the Config Service: Publish an event to your messaging layer that will force services to update their respective configurations.
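A sketch of this startup-pull plus change-event flow, kept independent of any concrete message broker (the class and method names are illustrative; the messaging-layer listener would call onConfigChanged when a change event arrives):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the pattern above: on startup the service pulls its config; on a
// "config changed" message it replaces the whole in-memory snapshot atomically.
public class ConfigCache {

    private final AtomicReference<Map<String, String>> snapshot = new AtomicReference<>(Map.of());

    // ON STARTUP: populated from the Config Service (HTTP call not shown);
    // the service refuses to start, with retries, if the pull fails.
    public void initialize(Map<String, String> pulledConfig) {
        snapshot.set(Map.copyOf(pulledConfig));
    }

    // ON CHANGE: invoked by the messaging-layer listener when a change event
    // arrives; re-pull (or apply the event payload) and swap the snapshot.
    public void onConfigChanged(Map<String, String> freshConfig) {
        snapshot.set(Map.copyOf(freshConfig));
    }

    public String get(String key) {
        return snapshot.get().get(key);
    }
}
```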
Caveats:
Do not put time-sensitive configurations here, since we are using asynchronous communication (if you have time-critical configs, why are they shared in the first place? You might need to revisit that).
You need to handle your own plumbing: retry mechanisms, memory management, etc.