I want my React web app to receive messages via AWS AppSync subscriptions for mutations that are computed asynchronously, i.e. mutations to types other than the type the client originally asked to mutate. For example, if a user casts a "vote", I want the server to respond immediately, but it should also send clients aggregations over the whole database, which might take extra time to compute or be computed at a slower rate.
I assume AppSync will notify clients if they make a GraphQL subscription, let's say, to the type "Aggregation".
Q1. Will a web client receive a message for the Aggregation subscription if I write a server-side client that sends the Aggregation mutation to the AppSync API, EVEN after the web client has already received a response to the original vote request?
I assume I will need a server-side GraphQL client to write the aggregation mutation. I guess this is as simple as an HTTP client.
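For what it's worth, a server-side mutation really can be just an HTTP POST to the AppSync GraphQL endpoint. A minimal sketch, assuming API-key auth and a hypothetical `updateAggregation` mutation (the endpoint, key, and field names are all placeholders, not real schema):

```javascript
// Build the GraphQL request body for a hypothetical aggregation mutation.
// AppSync delivers a subscription message to every client subscribed to
// this mutation once it succeeds.
function buildAggregationMutation(pollId, totals) {
  return {
    query: `
      mutation UpdateAggregation($pollId: ID!, $totals: AWSJSON!) {
        updateAggregation(pollId: $pollId, totals: $totals) {
          pollId
          totals
        }
      }`,
    variables: { pollId, totals: JSON.stringify(totals) },
  };
}

// POST the mutation to the AppSync GraphQL endpoint (API-key auth assumed).
async function publishAggregation(endpoint, apiKey, pollId, totals) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'x-api-key': apiKey },
    body: JSON.stringify(buildAggregationMutation(pollId, totals)),
  });
  return res.json();
}
```

In production you would more likely sign the request with IAM credentials instead of an API key, but the shape of the call is the same.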
Q2. How can I trigger the code that computes the aggregation once at least one user has submitted a mutation (vote)? My best guess is that I need a Lambda function to handle the original mutation (vote), and before responding to the web client it would start another process (maybe a different Lambda function) that eventually performs the aggregation mutation.
I have not yet integrated the Apollo client, so I'd like to keep the web client code simple for now.
If I understand your question, you want something to kick off the aggregation process and then receive a subscription message when there's a new aggregate. To kick off the aggregation you could use any number of things, depending on where you're storing your data. For example, if you're using DynamoDB you could use DynamoDB Streams to start an aggregation whenever a vote changes. Or, like you said, you could kick off a Lambda or another process in response to the vote mutation. Any of these solutions would then make a mutation to write the aggregate, which results in a subscription message to clients subscribed to Aggregation.
Related
Does the GraphQL spec say anything about mutating data in a GraphQL subscription, or are there any best practices/pros/cons on this topic? (The same applies to data mutations in query resolvers.)
I have a problem with creating metrics and later triggering alerts based on those metrics. I have two data sources, both Elasticsearch. One contains documents (logs from a service) saying that a message was produced to Kafka; the second contains documents (also service logs) saying that a message was consumed. What I want to achieve is to trigger an alert if the ratio of produced to consumed messages drops below 1.
Unfortunately it is impossible to use Prometheus, for two reasons:
1) the counter resets each time the service is restarted;
2) the second service doesn't have (and won't have in reasonable time) Prometheus integration.
The question is how to approach metrics and alerting based on these data sources. Is it possible? Maybe there is another way to achieve my goal?
The question is somewhat generic (no mapping or code is provided), so I'll outline an approach.
You can use a watcher on top of an aggregation that you create.
It's relatively straightforward to compute a consumed/produced percentage, and based on that percentage you can trigger an alert via the watcher.
Take a look at this tutorial (official Elasticsearch channel) on how to do this, and check the tutorials for your specific version of Elasticsearch. From 5.x to 7.x alerting has improved significantly: on 7.x you might be able to do this via the Kibana UI, while on 5.x you'll probably need to add the watch by indexing JSON into the appropriate .watcher indices.
I haven't used Grafana, but I believe the same approach applies: you'll need the aggregation mentioned above, and then you add the alert: https://grafana.com/docs/grafana/latest/alerting/rules/
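For reference, a watch along these lines might look like the sketch below. It builds an X-Pack Watcher body (6.x/7.x-style syntax) with a chain input running two count searches and a script condition comparing them. The index names, time window, timestamp field, and logging action are all assumptions, and on some 7.x setups the painless script may need to read `hits.total.value` instead of `hits.total`:

```javascript
// Build a hypothetical watch: every 5 minutes, count produced and
// consumed log documents over the last window and fire when the
// produced/consumed ratio drops below 1.
function buildRatioWatch(producedIndex, consumedIndex) {
  const countSearch = (index) => ({
    search: {
      request: {
        indices: [index],
        body: {
          size: 0, // we only need hit counts, not documents
          query: { range: { '@timestamp': { gte: 'now-5m' } } },
        },
      },
    },
  });
  return {
    trigger: { schedule: { interval: '5m' } },
    input: {
      chain: {
        inputs: [
          { produced: countSearch(producedIndex) },
          { consumed: countSearch(consumedIndex) },
        ],
      },
    },
    condition: {
      script: {
        source:
          'return ctx.payload.consumed.hits.total > 0 && ' +
          '(double) ctx.payload.produced.hits.total / ' +
          'ctx.payload.consumed.hits.total < 1',
      },
    },
    actions: {
      notify: {
        logging: { text: 'produced/consumed ratio dropped below 1' },
      },
    },
  };
}
```

You would PUT this body via the put-watch API (`_watcher/watch/<id>` on 7.x); swap the logging action for email, Slack, or a webhook as needed.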
I am working on a search dashboard with full-text search capabilities, backed by ES. The search would initially be consumed by a UI dashboard. I am planning to put an application web service (WS) API layer between the UI dashboard and ES which will route the business search to ES.
Going forward there can be multiple clients of the WS, each with its own business use cases and complex data requirements (basically the response fields). There are many entities and a huge number of fields across them. Each client would need to specify which entities it wants returned and with which fields.
To support this dynamically changing requirement, one approach could be to make the WS a pass-through to ES (with pre-validations such as access control and post-transformations of the ES response). The WS APIs would look exactly like the ES APIs; the UI would build ES queries through a JS client and send them to the WS, which, after access control, would fetch the data from ES.
I am new to ES and skeptical of this approach. Could there be any particular challenges with it? One of my colleagues has worked with ES before, but always with a backend Java client, so he's not too sure either.
I looked up an ES JS client and there's an official one here.
Some Context here:
We have around 4 different entities (this can increase in the future) with both full-text and keyword-type fields. A typical search could have multiple filters and search terms and would want to specify the result fields. Also, some searches would span entities and some target individual ones. We are maintaining a separate index for each entity.
What I understand from your post is that, at a high level, you want to achieve the following:
There can be multiple clients to WS going forward, each with its own business use cases, and complex data requirements (basically response fields)
And as you are not sure how to do this, you are thinking of building the Elasticsearch queries in JavaScript in your front-end only. I am not a big fan of this approach, as it exposes how your search works, and an attacker who learns crucial information like the points below can bring your entire ES cluster to its knees:
What types of wildcard queries you use.
Your index names and ES cluster details (you may have access control, but you are still exposing crucial information).
How you build your search queries.
These are just a few examples.
Right approach
As you already have a backend where you check access, build the Elasticsearch queries there as well; you even have the advantage of teammates who know it.
For building complex response fields, you can use source filtering, which lets you specify in your search request exactly which fields to return in the search results.
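As an illustration of source filtering, a search body like the sketch below returns only the fields a client asked for; the index and field names here are made up:

```javascript
// Build a search body that restricts the returned _source to the
// fields the client requested. "title" as the full-text field is
// a placeholder for whatever the real mapping uses.
function buildSearchBody(term, fields) {
  return {
    _source: { includes: fields },     // only return these fields
    query: { match: { title: term } }, // full-text part of the search
  };
}
```

The WS would build this after its access checks and POST it to the relevant index's `_search` endpoint, so clients never construct raw ES queries themselves.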
I have a RethinkDB cluster with thousands of changefeed subscribers and the load from the subscriptions is pretty high, so I'm looking into proxy nodes. One question I have is which changes each proxy node receives, or put differently, is there any benefit in trying to cluster similar subscriptions on specific proxy nodes?
The concrete situation is that I have one table with two relevant fields for this discussion: an account field and a topic field. The subscriptions filter by account and do a "between" two topics to get a topic prefix. The table has a compound secondary index on account and topic, so the filter really is an index range scan.
What I'm wondering is whether breaking the table up into a table by account would help with subscriptions. This would only be the case if I could direct all subscriptions for an account to one proxy and if at the RethinkDB level that proxy then would not receive changes for tables for which it has no subscription. Is that the case or does each proxy receive all changes?
I am new to Elasticsearch. I am storing the response times of some services over a period of time. What I want is a way to get notified when the average response time drops below some threshold value. Is there any way Elasticsearch can notify me?
I don't think there's a way for Elasticsearch itself to send notifications. The best you can do is have your client send an avg aggregation query to Elasticsearch, read the response, and send the notification through some custom logic.
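A sketch of that polling approach, with a made-up field name and the threshold check done client-side (per the question, the alert fires when the average drops below the threshold):

```javascript
// Build an avg-aggregation search body over the response-time field.
// "response_time" is an assumed field name.
function buildAvgQuery(field) {
  return { size: 0, aggs: { avg_rt: { avg: { field } } } };
}

// Given the Elasticsearch response, decide whether to notify:
// alert when the average falls below the threshold.
function shouldAlert(response, threshold) {
  return response.aggregations.avg_rt.value < threshold;
}
```

A scheduled job would POST the query body to `/<index>/_search`, pass the JSON response to `shouldAlert`, and fire an email/webhook when it returns true.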