I am developing an application and using spring-boot-actuator for my health endpoint. The application interacts with several third-party services, which I need to include in my health check. The problem is that when a service does not respond, I have to wait until the ping times out before I know it is unavailable; this takes a long time and I want to avoid it.
One solution was to run all the service checks in parallel. This would reduce the time significantly in case of a timeout, but I would still be bottlenecked by a single timeout.
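For reference, the parallel variant can be sketched like this; the service names, the check methods, and the 2-second cap are all made up for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Supplier;

public class ParallelChecks {
    // Hypothetical per-service checks; real ones would ping the third parties.
    static boolean checkServiceA() { return true; }
    static boolean checkServiceB() { return true; }

    // Run every check concurrently; a hung service costs at most one timeout,
    // not the sum of all timeouts.
    public static Map<String, Boolean> runChecks() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Map<String, Supplier<Boolean>> checks = Map.of(
                "serviceA", ParallelChecks::checkServiceA,
                "serviceB", ParallelChecks::checkServiceB);
        Map<String, Boolean> results = new ConcurrentHashMap<>();
        List<CompletableFuture<Void>> futures = checks.entrySet().stream()
                .map(e -> CompletableFuture.supplyAsync(e.getValue(), pool)
                        .completeOnTimeout(false, 2, TimeUnit.SECONDS) // per-check cap
                        .thenAccept(up -> results.put(e.getKey(), up)))
                .toList();
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) {
        System.out.println(runChecks());
    }
}
```

Even with this, the total wait is still bounded below by the slowest single timeout, which is the remaining bottleneck mentioned above.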
Another solution would be to check the services periodically in the background (using a scheduler) and cache the last result, so that when our monitor asks for health information, the health endpoint returns the cached result.
Are there any more practical solutions? Or are there any known best practices for such a scenario?
I want to post how I tackled this issue:
Since the application depends on third parties and is not considered functional without connections to them (it is not a microservice), there was no escaping checking the health of these third parties.
To address the issue I did the following:
1- Identify the critical third parties. The application connects to various third parties; some are critical and some are not (most probably this will involve a business decision), so it is important to identify the critical ones. I excluded the non-critical third parties from the health check.
2- Perform a periodic check of the health of the critical third parties and cache the result. Of course this introduces a delay in the reported health, so the check interval should be agreed upon. For me a small delay (3 minutes) was acceptable, so that was my interval.
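The periodic-check-plus-cache pattern can be sketched in plain Java as below. In Spring Boot the refresh method would typically be annotated with @Scheduled and the getter exposed via a custom HealthIndicator; the ping method here is a hypothetical stand-in:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the "periodic check + cache" pattern: the health endpoint reads a
// cached value instantly instead of pinging the third party on every request.
public class CachedHealth {
    private final AtomicReference<String> lastStatus = new AtomicReference<>("UNKNOWN");
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Hypothetical stand-in for pinging the critical third party.
    boolean pingCriticalThirdParty() { return true; }

    public void start() {
        // Refresh every 3 minutes, matching the agreed staleness budget.
        scheduler.scheduleAtFixedRate(this::refresh, 0, 3, TimeUnit.MINUTES);
    }

    void refresh() {
        lastStatus.set(pingCriticalThirdParty() ? "UP" : "DOWN");
    }

    // Called by the health endpoint: returns at once, never waits on a ping.
    public String health() {
        return lastStatus.get();
    }
}
```

The trade-off is exactly the one described above: the endpoint answers instantly, but the answer can be up to one refresh interval stale.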
I would appreciate it if anyone could answer the question below.
How would a system with hundreds of services be designed if each and every service has to be independent, with a dedicated port, as per microservices architecture? I mean, is it good practice to open hundreds of ports on the OS, for example?
Best Regards.
For security reasons, microservices are hosted in a private VPC, i.e. the nodes where the microservices run do not have public IPs, and the only way to access them is via an API gateway (see below). Also, "each and every service has to be independent" should be understood in terms of domains link1 link2.
To expose services, use the API gateway pattern: "a service that provides a single-entry point for certain groups of microservices" link1 link2. Note that an API gateway serves a group of microservices, i.e. there may be several gateways for different groups of services (one for the public API, one for the mobile API, etc.).
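At its core a gateway is a routing table from public paths to private addresses, which is why the backing services never need publicly open ports of their own. A toy sketch of that idea (the path prefixes and private VPC addresses are invented):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Toy illustration of the single-entry-point idea: the gateway owns the only
// public port and maps path prefixes to internal service addresses.
public class ApiGatewayRoutes {
    private final Map<String, String> routes = new LinkedHashMap<>();

    public void register(String pathPrefix, String internalAddress) {
        routes.put(pathPrefix, internalAddress);
    }

    // Resolve an incoming public path to the private address that serves it.
    public Optional<String> resolve(String path) {
        return routes.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
    }

    public static void main(String[] args) {
        ApiGatewayRoutes gateway = new ApiGatewayRoutes();
        gateway.register("/orders", "http://10.0.1.12:8080");   // private VPC address
        gateway.register("/payments", "http://10.0.1.13:8080");
        System.out.println(gateway.resolve("/orders/42").orElse("404"));
    }
}
```

In production this role is played by a dedicated product (e.g. an off-the-shelf API gateway or reverse proxy) rather than hand-rolled code, but the routing principle is the same.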
Only you can answer this question, because only you know what problem you are trying to solve. Before deciding, I recommend reading about the MonolithFirst approach.
Microservices architecture is in some ways the next generation of ESB products, but in this case, given the high number of services, I am not sure it is a solution!
I have two datasets that update two separate reports respectively, and I have set the datasets to refresh automatically every morning at 10:30. I am refreshing from the Power BI service in the cloud, and there is a gateway involved. The scheduled refresh for one report took 17 minutes, while the refresh for the other took over an hour. When I refresh manually, it doesn't take longer than two minutes. Is there anything I can do to shorten the scheduled refresh?
Your question is very broad and should possibly be closed, but I'll give you a few places to look.
Are you manually refreshing from the Power BI service? Remember that the Power BI service refreshes from the cloud while Power BI Desktop refreshes from your machine, so the location of your data sources matters here. Further, if there is a gateway involved, it adds an additional hop to the Power BI service.
Do the reports compete for a database or API resource? What happens if you schedule one for 10:00 and one for 10:30, to see whether there is contention in any shared sources?
How busy are the sources at 10:30 am? Are there other jobs running at that time (e.g. backups)?
Try various times and combinations to see where the delays may be. If you have access to the sources, add monitoring to them to understand how they are performing.
Also keep in mind that with the recent Microsoft outages, things seem to be a bit slow at times, so if this is a recent issue it may be transient.
My Spring OAuth 2.0 authorization microservice is extremely slow. It takes 450+ ms to check a token, and generating a token takes 1.6 s or more. What could be the reason? How can I improve the performance of my microservice?
Details:
Both the auth server and the microservices are running on my laptop.
The times I mentioned are for the auth server handling requests from only one microservice.
Thanks in advance
Download a tool such as VisualVM to profile your application.
I would also record the elapsed time of individual methods to determine exactly which ones take the longest.
Once you can verify exactly which code is slow, you can attempt JVM optimizations, or review the code (if you're using an external library) and verify the implementation.
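A minimal way to record the elapsed time of an individual call is to wrap it; the checkToken label and the wrapped call below are hypothetical stand-ins for whatever method you suspect:

```java
import java.util.function.Supplier;

public class Timed {
    // Wrap any call and report how long it took, in milliseconds.
    public static <T> T timed(String label, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for the slow token check.
        String result = timed("checkToken", () -> "token-ok");
        System.out.println(result);
    }
}
```

Ad-hoc wrappers like this are enough to narrow down a single hot spot; for broader coverage, a profiler such as VisualVM gives the same information without touching the code.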
There might be three reasons:
Your services might be in different regions while the OAuth2 server is a central one in yet another region. If that is the case, create instances of the OAuth server in all the regions you use, so that latency improves.
Check the cryptographic algorithms you use. SHA-256 hashing is generally preferred; this might not be the complete reason, but in some cases it can help.
Check your OAuth server's capacity, i.e. its RAM, processor, and storage volume. It might also be that multiple services make the same /generatetoken call to the server; Tomcat handles this as one thread per request, so if that is the case, configuring your connection pool will also help.
I want to migrate from WCF REST services to Web API (around 30 endpoints to be created, including 6 complex methods). Given the available budget (one month with one resource), I want to decide which of the options below would be the better solution:
Writing entirely new code for the Web API, reusing only the logic already present in the WCF REST services.
Creating API endpoints that call the WCF services internally.
There is no real way to tell for sure without knowing more details (or maybe the entire project).
If you're not sure the time will be enough, one thing you can do is start with option 2 and then replace each endpoint with the actual code from the WCF service. If one month proves not to be enough, you may end up with a mixed solution (where some methods are implemented in the Web API and some are wrappers calling the WCF service), but you will be able to keep slowly moving the methods into the Web API and finish eventually.
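The mixed state this describes can be sketched language-neutrally: every endpoint starts as a thin wrapper over the legacy service and is swapped for a native implementation when time allows. All names below are hypothetical, and the sketch is in Java for brevity rather than the C# a real WCF migration would use:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the incremental migration: each endpoint is either a wrapper that
// delegates to the legacy (WCF-style) service or a newly ported implementation.
public class MigrationFacade {
    private final Map<String, Function<String, String>> endpoints = new HashMap<>();

    // Hypothetical legacy client standing in for the WCF service call.
    static String legacyCall(String request) {
        return "legacy:" + request;
    }

    public MigrationFacade() {
        // Option 2 starting point: this endpoint still delegates to the legacy service.
        endpoints.put("/orders", MigrationFacade::legacyCall);
        // An endpoint already ported: the logic now lives in the new API itself.
        endpoints.put("/customers", request -> "ported:" + request);
    }

    // Porting one endpoint is just swapping its handler; callers never notice.
    public void port(String path, Function<String, String> handler) {
        endpoints.put(path, handler);
    }

    public String handle(String path, String request) {
        return endpoints.get(path).apply(request);
    }
}
```

Because callers only ever see the facade, the project stays shippable at every intermediate point of the migration, which is exactly what makes the budget risk manageable.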
I'm using Phoenix controllers to receive data via REST calls. An iOS app sends the "events" for each user, and based on the event I need to calculate the score/points and send the result back to the user. The calculation and the reply can happen asynchronously. I'm using Firebase to communicate back to the user.
What is a good pattern for doing the calculation? It could involve a bunch of database queries to determine the score for the event. Where should this calculation happen: background workers, GenEvent, or streams within a user-specific GenServer (I have a supervised GenServer per user)?
I would look at Phoenix channels, Tasks, and GenServer.
Additionally, if you would like to manage a pool of GenServer workers to do the calculations and perhaps send back the results for you, check out Conqueuer. I wrote this library and it is in use in production systems at my company. It uses poolboy, which is probably the most widely used pool-management library in Erlang/Elixir.
Admittedly, I do not fully understand the requirements of your system, but GenEvent does not seem to have a place in them. GenEvent is about distributing events to one or more consumers, so unless you have a graph of processes that need to subscribe to events emitted from other parts of your system, I do not see a role for it.