I have two Spring Boot microservices, say qbank and reports. For communication I am using API calls.
Problem: in a reports function I need some data from qbank, but the function is called very frequently because of heavy user traffic.
The qbank data does not change often, maybe once a month, manually by admins.
So how can I reduce the API calls? Solutions I have thought of:
Cache. Problem: how do I keep it updated? It needs to be refreshed very quickly after the qbank data changes (an API call from qbank to reports is not allowed).
Kafka. I don't know much about it, but can it help? If yes, how?
Any suggestion is welcome.
"qbank data does not change often, maybe once a month, manually by admins"
CDC (Change Data Capture) on the qbank data with Kafka Connect
One choice is to keep a copy of the qbank data in the reports database.
Kafka Connect can be used to monitor the qbank data, detect any changes that happen to it (source), and replicate them into the reports database (sink); a sketch of the sink side follows the two roles below.
Source Connector (qbank database)
Sink Connector (reports database)
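As a concrete illustration, here is a minimal sketch of the sink side written as a plain Spring Kafka listener instead of a packaged sink connector. It assumes a Debezium source connector (with the ExtractNewRecordState transform) already streams qbank row changes to a Kafka topic; the topic, table, and column names are all hypothetical.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class QbankChangeListener {

    private final JdbcTemplate jdbc;
    private final ObjectMapper mapper = new ObjectMapper();

    public QbankChangeListener(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Each record is a change event for one qbank row; upsert it into the
    // local copy that the reports service queries (Postgres upsert syntax).
    @KafkaListener(topics = "qbank.questions", groupId = "reports-replica")
    public void onChange(ConsumerRecord<String, String> record) throws Exception {
        // With ExtractNewRecordState the message value is the flat row state.
        JsonNode row = mapper.readTree(record.value());
        jdbc.update(
            "INSERT INTO qbank_copy (id, title) VALUES (?, ?) "
                + "ON CONFLICT (id) DO UPDATE SET title = EXCLUDED.title",
            row.get("id").asLong(), row.get("title").asText());
    }
}
```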
A cache is one more option in microservices for sharing data across services, but picking the right caching mechanism and strategy depends on various factors (a sketch follows this list):
Caching strategies
Eviction policy
Data access strategies
Cache choice per data type
etc.
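For the cache option, here is a minimal sketch using Spring's cache abstraction, assuming a TTL-based backend such as Caffeine (e.g. spring.cache.caffeine.spec=expireAfterWrite=10m) and @EnableCaching on the application class; the endpoint, cache name, and DTO are hypothetical. A short TTL, or an eviction driven by the CDC events above, addresses the "how to keep it updated" concern from the question.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class QbankClient {

    private final RestTemplate rest = new RestTemplate();

    // The first call per id hits qbank over HTTP; later calls are served
    // from the cache until the entry expires (TTL) or is evicted.
    @Cacheable("qbank-data")
    public QbankData fetch(long id) {
        return rest.getForObject("http://qbank/api/data/{id}", QbankData.class, id);
    }

    // Hypothetical DTO mirroring whatever qbank actually returns.
    public record QbankData(long id, String payload) {}
}
```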
My question is theoretical (I am not asking about the steps for scaling) and is about keeping the same performance.
For example, our website (Spring Boot based) is visited by 100 people per day, and after a year it starts getting 1,000,000 visits per day. In this situation I have the following basic ideas, but I need to know more and whether these ideas are good or bad:
Using Cloud services
Load balancer
Using microservices and applying distributed system techniques.
If there are many more read operations than writes or updates, a NoSQL DB can be used.
If we use JWT tokens for authentication, I think a distributed system would not be a problem on the security/auth side.
... etc.
Could you please share your ideas and comment on the ideas above? Any help would be appreciated.
There have been several POCs (proofs of concept) and proven deployment strategies for better availability.
Keeping to your points, I am summarizing and hopefully adding a bit more clarity:
Using Cloud services --> This is just the platform you choose; for example, one can deploy on-premises or on a cloud such as AWS, Azure, or GCP. It is not directly related to the scalability question at the moment.
Load balancer --> Balances the load when you have multiple instances of your microservice. For example, you can create Docker images of your microservice and deploy them as pods on a Kubernetes platform, where you can run more than one replica (a replica is a copy of the same service). The load balancer will distribute the HTTP requests among the pods.
Using microservices and applying distributed system techniques --> You can, but make sure to adhere to best practices and proven microservice deployment strategies. Read more about them here: https://www.urolime.com/blogs/microservices-deployment-strategies/
If read operations are much more than write or update, a NoSQL db can be used. --> Definitely; in fact, you can decompose your microservices based on the number of transactions or read/write operations, and you can use a NoSQL DB like Couchbase or MongoDB.
If we use jwt token for authentication, distributed system would not be a problem for security auth side I think. --> Right; such mechanisms are usually centralized, and a JWT token has a limited validity period, so any instance can check a token locally (see the sketch below).
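To make point 5 concrete, here is a minimal sketch of why JWT suits horizontally scaled deployments: any instance that holds the shared signing key can validate a token locally, with no session store and no call to a central auth server. It uses the jjwt library (0.11.x API); the key value and class name are placeholders.

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import java.nio.charset.StandardCharsets;
import javax.crypto.SecretKey;

public class JwtCheck {

    // Shared HMAC key; must be at least 256 bits for HS256 (placeholder value).
    private static final SecretKey KEY = Keys.hmacShaKeyFor(
            "change-me-this-secret-must-be-at-least-256-bits!!"
                    .getBytes(StandardCharsets.UTF_8));

    // Throws a JwtException if the signature is invalid or the token has
    // expired, so the limited validity is enforced on every instance.
    public static Claims validate(String token) {
        return Jwts.parserBuilder()
                .setSigningKey(KEY)
                .build()
                .parseClaimsJws(token)
                .getBody();
    }
}
```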
There might be several other options for scaling, but the most used is the one I mentioned in point 2.
I highly suggest you get a grip on the basics. Here are a few links which should be helpful:
https://microservices.io/patterns/microservices.html
https://medium.com/design-microservices-architecture-with-patterns/decomposition-of-microservices-architecture-c8e8cec453e
As a product scales, APIs and a two-tier architecture incur bottlenecks, data contention, and downtime. Messages can get lost when there are thousands or millions of requests and lots of activity.
What makes WebSocket connections beneficial vs Kafka? What are the best use cases for each?
Is there an example, such as a large-scale chat application, where a hybrid of both technologies is necessary?
WebSockets should be used when you need real-time interactions, such as propagating the same message to multiple users (group messaging) in a chat app.
Kafka should be used as a backbone communication layer between components of a system. It fits really well in event-driven architectures (microservices).
I see them as two different technologies developed for two different purposes.
Kafka, for example, lets you replay messages easily, because they are stored on disk (for the configured topic retention time). WebSockets are based on TCP connections (two-way communication), so they cover a different use-case spectrum. For the hybrid chat case, see the sketch below.
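For the hybrid chat example the question asks about, here is a minimal sketch under assumed names: a Spring gateway accepts chat messages over STOMP/WebSocket, publishes them to a Kafka topic for durability and fan-out, and each gateway instance consumes that topic and pushes messages to its own connected clients. The topic, destinations, and group id are hypothetical.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class ChatRelay {

    private final KafkaTemplate<String, String> kafka;
    private final SimpMessagingTemplate broker;

    public ChatRelay(KafkaTemplate<String, String> kafka, SimpMessagingTemplate broker) {
        this.kafka = kafka;
        this.broker = broker;
    }

    // Inbound: a client sends to /app/chat over its WebSocket. Publishing
    // to Kafka (instead of broadcasting directly) means the message is not
    // lost if this gateway instance dies before delivery.
    @MessageMapping("/chat")
    public void inbound(String message) {
        kafka.send("chat-messages", message);
    }

    // Outbound: consume the topic and push to this instance's sessions.
    // NOTE: for true fan-out, each gateway instance needs its own group id
    // (e.g. a random suffix); with a shared group id only one instance in
    // the group would receive each message.
    @KafkaListener(topics = "chat-messages", groupId = "chat-gateway")
    public void outbound(String message) {
        broker.convertAndSend("/topic/chat", message);
    }
}
```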
My Spring OAuth 2.0 authorization microservice is extremely slow. It takes 450+ ms to check a token, and generating a token takes 1.6 s and above. What could be the reason? How can I improve the performance of my microservice?
Details:
The auth server and the microservices are running on my laptop
The times I mentioned are for the auth server with requests from only one microservice
Thanks in advance
Download a tool such as VisualVM to profile your application.
I would also record the elapsed time of individual methods to determine exactly which ones take the longest; a minimal example follows.
Once you can verify exactly which code is slow, you can attempt JVM optimizations, or review the code (if you're using an external library) and verify the implementation.
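As a minimal sketch of that suggestion, you can wrap a suspect call with Spring's StopWatch; the call under test is left as a placeholder comment, and in practice an AOP aspect or a Micrometer timer would avoid the hand-written boilerplate.

```java
import org.springframework.util.StopWatch;

public class TokenTiming {

    public static void main(String[] args) {
        StopWatch watch = new StopWatch("auth");

        watch.start("checkToken");
        // tokenServices.loadAuthentication(token);  // the call under test
        watch.stop();

        // Prints one line per named task with its elapsed time.
        System.out.println(watch.prettyPrint());
    }
}
```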
There might be three reasons:
Your services might be in different regions while the OAuth2 server is a central one in yet another region. If this is the case, create instances of the OAuth server in every region you use so that latency improves.
Check the encryption techniques you used. SHA-256 hashing is usually preferred; this might not be the complete reason, but in some cases it helps.
Check your OAuth server's capacity, i.e. its RAM, processor, and storage volume. It might also be that multiple services make the same /generatetoken call to the server, and Tomcat handles each request with one thread, so if that is the case, configuring your connection pool will also help.
I'm using Phoenix controllers to receive data via REST calls, so an iOS app can send the "events" for each user; based on the event, I need to calculate the score/points and send the result back to the user. The calculation and the reply can happen asynchronously. I'm using Firebase to communicate back to the user.
What is a good pattern for doing the calculation? It could be a bunch of database queries that determine the score for that event. Where should this calculation happen: background workers, GenEvent, or streams within the user-specific GenServer (I have a supervised GenServer per user)?
I would look at Phoenix channels, tasks and GenServer.
Additionally, if you would like to manage a pool of GenServer workers to do the calculations and maybe send back the results for you, check out Conqueuer. I wrote this library and it is in use in production systems at my company. It uses poolboy, which is probably the most pervasive pool-management library in Erlang/Elixir.
Admittedly, I do not fully understand the requirements of your system, but it does not seem to me that GenEvent has a place in them. GenEvent is about distributing events to one or more consumers. So unless you have a graph of processes that need to subscribe to events emitted from other parts of your system, I do not see a role for it.
The question itself is very tricky, but I'll try to break it down into pieces.
Let's say I have external data sources, each of them providing part of my data model: either a web service or a database. What matters is that my entities are defined in, and exist in, systems separate from the Dynamics built-in database.
What I want to do is use the capabilities of CRM to handle the business entities (provided by the external sources); aspects such as security and the UI are well managed inside CRM. So I want to build my system using this tool, but I want to be able to store and keep the data in my own sources.
In other words, is there a way in CRM (through the web services, I believe) by which I can provide the entity and have it managed inside CRM?
Thanks in advance. I really hope I can find the answer here.
The only option you have is to synchronize the data stored inside the Dynamics CRM database with your external sources.
With tools like Scribe from Scribesoft, this scenario is manageable.
About 50% of the functionality of MS CRM is implemented via rather convoluted SQL views/queries/stored functions etc. It is much more than a simple "table per entity type" data store. There is no way to keep live data "somewhere else", so you are stuck with import/export (as recommended in the previous answer).