I am new to Elasticsearch. I am storing the response times of some services over a period of time. What I want is a way to get notified when the average response time goes below some threshold value. Is there any way that Elasticsearch can notify me?
I don't think there's a way for Elasticsearch to send notifications on its own. The best you can do is have your client send an avg aggregation query to Elasticsearch, read the response, and send a notification through some custom logic.
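A minimal sketch of that polling approach, assuming an index named services with a numeric response_time field and a timestamp field (all of these names are made up):

curl -s -X POST "localhost:9200/services/_search" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": { "range": { "timestamp": { "gte": "now-5m" } } },
  "aggs": {
    "avg_response_time": { "avg": { "field": "response_time" } }
  }
}'

Your client would read aggregations.avg_response_time.value from the response and trigger a notification whenever it goes past your threshold.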
When I use Kibana to search logs, the response time is very slow. How can I grab the raw query sent to Elasticsearch from Kibana? I'd like to analyse why the query is so slow and how to improve it.
You can view the raw query, response time, request time, etc. from the "Inspect" option in visualizations or on the Discover page of Kibana.
I want my React web app to receive messages using AWS AppSync subscriptions for mutations that are computed asynchronously, based on mutations to types other than the type the client originally submitted a request to mutate. For example, if a user casts a "vote", I want the server to respond immediately, but the server should also send clients the aggregations of the overall database that might take extra time to compute or can be computed at a slower rate.
I assume AppSync will notify clients if they make a GraphQL subscription, let's say, to the type "Aggregation".
Q1. Will a web client receive a message for the Aggregation subscription if I write a server-side client that writes the Aggregation mutation to the AppSync API, even after the client has received a response to the original vote request?
I assume I will need to make a server-side GraphQL client to write the aggregation mutation. I guess this is as simple as an HTTP client.
Q2. How can I trigger the code that computes the aggregation when at least one user has submitted a mutation (vote)? My best guess is that I need a Lambda function to handle the original mutation (vote), but before responding to the web client, it would start another process (maybe a different Lambda function) which eventually mutates the aggregation.
I have not yet integrated the Apollo client, so I'd like to keep the web client-side code simple for now.
If I understand your question, you want something to kick off the aggregation process and then get a subscription message when there's a new aggregate. To kick off the aggregation you could use any number of things, depending on where you're storing your data. For example, if you're using DynamoDB you could use DynamoDB Streams to kick off an aggregation when there's a change to a vote. Or, like you said, you could kick off a Lambda or another process in response to the subscription message for the vote. Any of these solutions would need to make a mutation to write the aggregate, which will result in a subscription message to clients subscribed to Aggregation.
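Whichever trigger you pick, the write-back is just an HTTP POST to your AppSync GraphQL endpoint, so yes, a plain HTTP client is enough. A rough sketch with curl, where the endpoint, API key, and the updateAggregation mutation and its fields are all placeholders for whatever your schema defines:

curl -s -X POST "https://<your-api-id>.appsync-api.us-east-1.amazonaws.com/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"query": "mutation { updateAggregation(id: \"votes\", total: 42) { id total } }"}'

Note that subscriptions fire off mutations made through AppSync, not off writes made directly to the underlying data source, so the aggregation process must go through the API like this for subscribed clients to be notified.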
According to the ES documentation, document indexing/deletion happens as follows:
Request received at one of the nodes.
Request forwarded to the document's primary shard.
The operation is performed on the primary shard, and parallel requests are sent to the replica nodes.
The primary shard node waits for a response from the replica nodes and then sends the response to the node where the request was originally received.
The response is sent back to the client.
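For illustration, a create request and its acknowledgment might look like this (index name and document are made up, response abridged):

curl -X PUT "localhost:9200/my-index/_doc/1" -H 'Content-Type: application/json' -d'
{ "field": "value" }'

{ "_index": "my-index", "_id": "1", "result": "created", "_shards": { "total": 2, "successful": 2, "failed": 0 } }

The _shards section reflects steps 3-5: the success response comes back only after the primary and its replicas have acknowledged the write.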
Now in my case, I am sending a create-document request to ES at time t and then sending a request to delete the same document (using delete_by_query) at approximately t+800 milliseconds. These requests are sent via a messaging system (an internal implementation of Kafka) which ensures that the delete request is sent to ES only after a 200 OK response for the indexing operation has been received from ES.
According to the ES documentation, delete_by_query throws a 409 version conflict only when documents matched by the delete query have been updated while delete_by_query was still executing.
In my case, it is always guaranteed that the delete_by_query request is sent to ES only once a 200 OK response has been received for all the documents that have to be deleted. Hence there is no possibility of an update/create of a document that has to be deleted while the delete_by_query operation is running.
Please let me know if I am missing something, or whether this is an issue with ES.
A possible reason is that when a document is created, it is not "committed" to the index immediately.
Elasticsearch indices operate on a refresh_interval, which defaults to 1 second.
This documentation around refresh cycles is old, but I cannot for the life of me find anything as descriptive in the more modern ES versions.
A few things you can try:
Send a _refresh request after indexing
Add the ?refresh=wait_for or ?refresh=true param to the indexing request
Note that refreshing the index on every indexing request is terrible for performance, which raises the question of why you are trying to delete a document immediately after indexing it.
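A sketch of both options (index name is made up):

curl -X POST "localhost:9200/my-index/_refresh"

curl -X PUT "localhost:9200/my-index/_doc/1?refresh=wait_for" -H 'Content-Type: application/json' -d'
{ "field": "value" }'

With refresh=wait_for, the index call does not return until the document is visible to search, so a delete_by_query issued after the response is guaranteed to see it.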
Alternatively, tell the delete-by-query request not to abort on version conflicts (Java high-level REST client):

deleteByQueryRequest.setAbortOnVersionConflict(false);
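If you are calling the REST API directly, the equivalent is the conflicts=proceed parameter (index name made up):

curl -X POST "localhost:9200/my-index/_delete_by_query?conflicts=proceed" -H 'Content-Type: application/json' -d'
{ "query": { "match_all": {} } }'

Conflicting documents are then skipped and counted in the version_conflicts field of the response instead of failing the whole request.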
I have two metrics already made.
The 1st metric represents the number of transactions started by the client.
The 2nd metric represents the number of transactions received by the server.
I want to get the number of transactions which failed (were sent by the client but not received by the server), which is a simple subtraction.
Can I achieve this in Kibana?
There is a plugin for Kibana 5.0.0+. It is based on the core metric plugin but gives you the ability to output custom aggregates on metric results by using a custom formula and/or JavaScript.
You can check out more details here.
I have an Elasticsearch/Kibana stack that stores every request the application receives. It stores general information about the request (RequestTimestamp, IP, Headers, HttpStatus, Route, etc.), and there are at least a few requests per minute.
I would like to know if there's some way to query Kibana/Elastic to find the points in time when the application didn't receive any requests for, let's say, 3 minutes.
I know it can be done programmatically, but it needs to be done purely with queries (so I can show it on the dashboard).
You could do a date histogram aggregation.
You could specify a 3m interval and query for a specified day.
So you would get 24*60/3 = 480 values for each day.
You could plot them on a chart and see the gaps.
If you are an expert ES user, you could try filtering the aggregation buckets with a bucket selector pipeline aggregation, or smooth the series with a moving average aggregation.
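A rough sketch of that aggregation, keeping only the empty 3-minute buckets (the index name and date range are placeholders; on older ES versions use interval instead of fixed_interval):

curl -s -X POST "localhost:9200/requests/_search" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": { "range": { "RequestTimestamp": { "gte": "2019-01-01", "lt": "2019-01-02" } } },
  "aggs": {
    "per_3m": {
      "date_histogram": { "field": "RequestTimestamp", "fixed_interval": "3m", "min_doc_count": 0 },
      "aggs": {
        "empty_only": {
          "bucket_selector": { "buckets_path": { "count": "_count" }, "script": "params.count == 0" }
        }
      }
    }
  }
}'

min_doc_count: 0 makes the empty buckets appear at all, and the bucket_selector then drops every bucket except the 3-minute windows with no requests.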