How to read values with a delay using the useSubscription hook - graphql

I have created a GraphQL realtime server and a client that reads values using the useSubscription hook. Can anyone tell me how to add a delay of a few seconds between reads?
Right now on the server side I have created a change stream on a MongoDB collection, and whenever an object is inserted into that collection the server pushes the data to a GraphQL subscription and the client receives it. The problem is that the process inserting the data writes many values per second, and I only need the most recent one, say the latest value every 5 seconds. So I need to create some sort of delay, either on the server side or the client side.
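One way to do this on the client, assuming plain JavaScript (the helper below is not tied to any specific Apollo API and its name is made up), is to throttle the subscription handler so that only the latest value in each window is processed. A minimal sketch:

```javascript
// throttleLatest wraps a handler so that, within each interval,
// only the most recent value pushed is delivered.
function throttleLatest(handler, intervalMs) {
  let latest;          // most recent value seen in the current window
  let pending = false; // whether a flush is already scheduled
  return (value) => {
    latest = value;
    if (!pending) {
      pending = true;
      setTimeout(() => {
        pending = false;
        handler(latest); // deliver only the latest value of the window
      }, intervalMs);
    }
  };
}
```

Inside the component you would wrap the callback that receives subscription data (e.g. update React state from inside the throttled handler with a 5000 ms interval). The same throttle applied on the server, before publishing to the subscription, additionally saves network traffic.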

Related

RTK-query fetch data every X minutes and add it to stored data

I have a "live graph" with many data points.
I want to use rtk-query to fetch the data from time A to now and store it.
Then, every X minutes, I want to call the API again in order to fetch the data from the last time point to now.
Two questions:
How do I trigger the call every X minutes?
How do I add to the store and not totally invalidate it?
If you want to call the API every X minutes, you can implement it with RTK Query's polling interval (`pollingInterval`).
The same could also be implemented through socket.io polling or with a plain WebSocket connection. If you do not want to cache the data, you can simply create a Redux middleware for socket.io - check my code. Or, if it's necessary to cache WebSocket data, you can use the RTK Query `onCacheEntryAdded` function -
here is a link.
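For the "add to stored data instead of invalidating" part, RTK Query query endpoints also accept a `merge` callback where you can append only the new points to the cached array. A pure sketch of such a merge, assuming each point carries a `ts` timestamp (a hypothetical field name):

```javascript
// Append only points newer than the last cached timestamp, so each
// poll extends the cached series instead of replacing it.
function mergePoints(cached, incoming) {
  const lastTs = cached.length ? cached[cached.length - 1].ts : -Infinity;
  return cached.concat(incoming.filter((p) => p.ts > lastTs));
}
```

In `createApi` you would use this logic inside the endpoint's `merge` option and set `pollingInterval` in the hook options; check the RTK Query docs for the exact signatures.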

In this example, should I use WebSocket or HTTP?

I have an intuition (maybe wrong) that it would be better to use WebSocket for a specific process in my application. First, I need to know if I'm right, and then how I could measure the performance of the two options. I would be grateful if someone could share some thoughts.
I load all the user's orders into the dashboard after login.
I built the server API in GraphQL, so I query the resolver to get the orders and display them.
Whenever the customer creates a new order, I add it to the database, redo the query on the frontend, and reload all the orders with the new one in the array. So the dashboard is updated.
However, I've been thinking that pushing the new order data onto the pre-existing order array (saved in React state) might be more efficient. When a customer creates an order, I emit (via WebSocket) the new order data; the dashboard listens for the event and pushes it to the state.
Am I right?
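For what it's worth, the push approach described above boils down to appending the event payload to existing state instead of refetching everything. A minimal sketch (field names are made up):

```javascript
// Append a pushed order to the existing list; no refetch needed.
// Deduplicate by id in case the socket delivers the same event twice.
function addOrder(orders, newOrder) {
  if (orders.some((o) => o.id === newOrder.id)) return orders;
  return [...orders, newOrder];
}
```

With React this would run inside the WebSocket message handler, e.g. `setOrders((prev) => addOrder(prev, payload))`, avoiding a full round trip for all orders.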

How to process concurrent requests from a certain user to a specific endpoint sequentially

I am having trouble handling concurrent requests from a user to a specific endpoint. The problem I am encountering: when a user makes a request to a certain endpoint with a certain parameter (specifically a UUID), I pass that parameter to a stored procedure and query the database, and the DB returns an error because the first transaction has not completed. I want subsequent requests to wait until the previous one has been processed, if they come from the same user to that endpoint. How do I do that?
I tried to implement it using mutexes, but it didn't seem to work.
I want to solve this problem on the server side only, without touching the database.
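One server-side approach, assuming a Node.js server (the names below are hypothetical), is a per-key promise chain: each request for the same UUID is queued behind the previous one, so the stored procedure never runs concurrently for one user while requests for different users still run in parallel. A sketch:

```javascript
// Serialize async tasks per key: tasks for the same key run one after
// another; tasks for different keys still run concurrently.
const queues = new Map();

function runSerial(key, task) {
  const prev = queues.get(key) || Promise.resolve();
  // Run the task whether or not the previous one failed.
  const next = prev.then(task, task);
  // Keep the chain alive but swallow errors so one failure
  // does not poison later tasks for this key.
  queues.set(key, next.catch(() => {}));
  return next;
}
```

In an Express-style handler you would `await runSerial(req.params.uuid, () => callStoredProcedure(...))`. Note this only works within a single process; with multiple server instances you would need a shared lock, e.g. in Redis.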

Example microservice app with CQRS and Event Sourcing

I'm planning to create a simple microservice app (set and get appointments) with CQRS and Event Sourcing, but I'm not sure if I'm getting everything right. Here's the plan:
1. docker container: the public delivery app with REST endpoints for getting and setting appointments. The endpoints for setting data trigger a RabbitMQ event (async); the endpoints for getting data call the query service (sync).
2. docker container: the command service, with a connection to a SQL database for setting (and editing) appointments. It listens to the RabbitMQ events from the main app. A change doesn't overwrite the data but creates a new entry with a new version. When data has changed, it also fires an event to sync the new data to the query service.
3. docker container: the SQL database for the command service.
4. docker container: the query service, with a connection to a MongoDB. It listens for changes in the command service to update its database. It's possible for the main app to call it for data, but not with REST - with ??
5. docker container: an event sourcing service that listens to all commands and stores them in a MongoDB.
6. docker container: the event MongoDB.
Here are a couple of questions I don't get:
Let's say there is one appointment in the command database and it has already been synced to the query service. Now there is a call to change the title of this appointment. So the command service does not perform an UPDATE but an INSERT with the same id and a new version number. What does it do afterwards? Read the new data from SQL and trigger an event with it? The query service listens and stores the same data in its MongoDB? Does it overwrite the old data, or also create a new entry with a version? That seems quite redundant. Do I in fact really need the SQL database here?
How can the main app call for data from the query service if one doesn't want to use REST?
Because it stores all commands in the event DB (6. docker container), it is possible to restore every state by running all commands again in order. Is that "event sourcing"? Or is it "event sourcing" to not change the data in SQL but to create a new version for each change? I'm confused about what exactly event sourcing is and where to apply it. Do I really need the 5th (and 6th) docker container for event sourcing?
When a client wants to change something and afterwards also show the changed data, the only way I see is to trigger the change and then wait (let's say with polling) for the query service to have that data. What's a good way to achieve that? Maybe checking for the existence of the future version number?
Is this whole structure a reasonable architecture or am I completely missing something?
Sorry, a lot of questions but thanks for any help!
Let’s take this one first.
Is this whole structure a reasonable architecture or am I completely
missing something?
Nice architecture plan! I know it feels like there are a lot of moving pieces, but having lots of small pieces instead of one big one is what makes this my favorite pattern.
What does it do afterwards? Read the new data from SQL and trigger
an event with it? The query service listens and stores the same data
in its MongoDB? Does it overwrite the old data or also create a new
entry with a version? That seems quite redundant. Do I in fact really
need the SQL database here?
There are two logical databases in CQRS – the domain model and the read model. (They can live in the same physical database, but for scaling reasons it's best if they don't.) These are very different structures. The domain model is stored as in any CRUD app, in third normal form, etc. The read model is meant to make reads blazing fast by custom-designing tables that match the data each view needs. There will be a lot of data duplication in these tables. The idea is that it's more responsive to have a table per view and update that table when the domain model changes, because nobody is sitting at a keyboard waiting for the view to render, so it's OK for the view-model data generation to take a little longer. This wastes some CPU cycles, since you may update the view model several times before anyone asks for that view, but that's OK: you were really using up idle time anyway.
When a command updates an aggregate and persists it to the DB, it generates a message for the view side of CQRS to update the view. There are 2 ways to do this. The first is to send a message saying “aggregate 83483 needs to be updated” and the view model requeries everything it needs from the domain model and updates the view model. The other approach is to send a message saying “aggregate 83483 was updated to have the following values: …” and the read side can update its tables without having to query. The first approach requires fewer message types but more querying, while the second is the opposite. You can mix and match these two approaches in the same system.
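To make the two approaches concrete, here is a sketch of what the read side's handler might look like for both message shapes (the field names are illustrative, not from any framework):

```javascript
// Shape 1: notification only ({ aggregateId }); the read side re-queries
//          the domain model.
// Shape 2: full payload ({ aggregateId, data }); the read side updates
//          its tables directly without querying.
function applyToReadModel(message, queryDomainModel, viewStore) {
  const data = message.data !== undefined
    ? message.data                           // shape 2: payload included
    : queryDomainModel(message.aggregateId); // shape 1: re-query
  viewStore.set(message.aggregateId, data);  // overwrite current view state
}
```

Mixing the shapes is just a matter of which messages carry a `data` field; the handler above tolerates both.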
Since the read side has very different table structures, you need both databases. On the read side, unless you want the user to be able to see old versions of the appointments, you only have to store the current state of the view so just update existing data. On the command side, keeping historical state using a version number is a good idea, but can make db size grow.
How can the main app call for data from the query service if one
doesn't want to use REST?
How the request gets to the query side is unimportant, so you can use REST, postback, GraphQL or whatever.
Is that "event sourcing"?
Event Sourcing is when you persist all changes made to all entities. If the entities are small enough you can persist all properties, but in general events only have changes. Then to get current state you add up all those changes to see what your entities look like at a certain point in time. It has nothing to do with the read model – that’s CQRS. Note that events are not the request from the user to make a change, that’s a message which then is used to create a command. An event is a record of all fields that changed as a result of the command. That’s an important distinction because you don’t want to re-run all that business logic when rehydrating an entity or aggregate.
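A tiny illustration of that distinction, with made-up event names: each event records only the fields that changed, and rehydration is just folding those changes over an empty state, with no business logic re-run:

```javascript
// Events record only what changed as a result of each command.
const events = [
  { type: 'AppointmentCreated',
    changes: { title: 'Checkup', at: '2024-05-01T10:00' } },
  { type: 'AppointmentRetitled',
    changes: { title: 'Annual checkup' } },
];

// Rehydrate current state by applying each recorded change in order.
function rehydrate(events) {
  return events.reduce((state, e) => ({ ...state, ...e.changes }), {});
}
```

Replaying a prefix of the event list gives you the entity's state at any earlier point in time, which is exactly the "restore every state" property asked about.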
When a client wants to change something and afterwards also show the
changed data, the only way I see is to trigger the change and then
wait (let's say with polling) for the query service to have that data.
What's a good way to achieve that? Maybe checking for the existence of
the future version number?
Showing historical data is a bit sticky. I would push back on this requirement if you can, but sometimes it’s necessary. If you must do it, take the standard read model approach and save all changes to a view model table. If the circumstances are right you can cheat and read historical data directly from the domain model tables, but that’s breaking a CQRS rule. This is important because one of the advantages of CQRS is its scalability. You can scale the read side as much as you want if each read instance maintains its own read database, but having to read from the domain model will ruin this. This is situation dependent so you’ll have to decide on your own, but the best course of action is to try to get that requirement removed.
In terms of timing, CQRS is all about eventual consistency. The data changes may not show up on the read side for a while (typically fractions of a second but that's enough to cause problems). If you must show new and old data, you can poll and wait for the proper version number to appear, which is ugly. There are other alternatives involving result queues in Rabbit, but they are even uglier.
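If you do end up polling for the version, a bounded poll at least keeps the ugliness contained. A sketch, where `fetchVersion` stands for whatever call returns the read side's current version for the aggregate (a hypothetical function):

```javascript
// Poll until the read model reports at least the target version,
// or give up after timeoutMs.
async function waitForVersion(fetchVersion, target, intervalMs = 100, timeoutMs = 2000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if ((await fetchVersion()) >= target) return true;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false; // caller decides how to handle the timeout
}
```

The client knows the "future" version number because the command side can return it from the write request, so after a change it would `await waitForVersion(..., expectedVersion)` before rendering.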

JSP (Spring MVC): store data in session or request to retain between same-page requests

I have a JSP page, let's say events_index.jsp, that shows all the events in the system. I am using Spring MVC's PagedListHolder to implement pagination. Do I need to store the data source in the request or the session? If I store it in the session, newly created events will not appear in the list unless I close the browser before creating a new event. If I store it in the request, it fetches the entire data set from the database on every request, because it cannot find the data in the next request object. I need the data to be retained only between events_index.jsp requests, not for the entire session.
Any suggestions?
What I understood from your question is that you need to show the latest data every time you paginate.
Even if you are paginating on the same page, every Next or Prev sends an HTTP request to the server, and the server returns data according to the page size and offset settings.
This happens for every request, so you will always get the latest data.
If you store the paging data in the session and serve subsequent requests from the session, you may not have the latest state of the data.
So I suggest: use the request to show the latest data (may hit performance);
use the session to avoid repeated fetches (may not show the latest data).
