Use Heartbeat with Elasticsearch to retrieve data from Json result - elasticsearch

I've configured a heartbeat with elasticsearch and I would like to know if it's possible to "catch" the data returned by the json response.
Heartbeat :
- type: http
  urls: ["https://myurl.com"]
  fields_under_root: true
  fields:
    application: myapplication
  schedule: '@every 1m'
  check.response:
    status: [200]
Json response :
{
  "status": "UP",
  "components": {
    "ping": {
      "status": "UP"
    },
    "MyService": {
      "status": "UP"
    }
  }
}
In addition to the result of the check.response status check, is it possible for Heartbeat to add a field with the value "UP" of the ping status, for example?
Thanks for your help
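A hedged sketch (assuming Heartbeat 7.x): Heartbeat can assert on values inside the JSON body via `check.response.json`, though it does not copy body values into new event fields; for that you would need a processor or ingest pipeline. The condition below is illustrative:

```yaml
# Sketch: assert on the JSON body itself (Heartbeat 7.x syntax assumed).
# The monitor goes down if components.ping.status is not "UP", but the
# body value is not added as a separate field on the event.
- type: http
  urls: ["https://myurl.com"]
  schedule: '@every 1m'
  check.response:
    status: [200]
    json:
      - description: ping is UP
        condition:
          equals:
            components.ping.status: UP
```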

Related

Are batch json-rpc requests not supported with near?

When trying to batch json rpc requests like this:
[
  {
    "jsonrpc": "2.0",
    "id": "46500000",
    "method": "block",
    "params": {
      "block_id": 46500000
    }
  },
  {
    "jsonrpc": "2.0",
    "id": "46500001",
    "method": "block",
    "params": {
      "block_id": 46500001
    }
  }
]
The response given is this:
{
  "jsonrpc": "2.0",
  "error": {
    "name": "REQUEST_VALIDATION_ERROR",
    "cause": {
      "name": "PARSE_ERROR",
      "info": {
        "error_message": "JSON RPC Request format was expected"
      }
    },
    "code": -32700,
    "message": "Parse error",
    "data": "JSON RPC Request format was expected"
  },
  "id": null
}
This is quite confusing, since the above request is a valid JSON-RPC batch request according to the spec. Are batch requests not supported on NEAR?
You are right: batch JSON-RPC requests are not supported by the nearcore JSON-RPC implementation. Batch requests can become arbitrarily heavy, so it is preferred to let a load balancer resolve several requests; just make separate calls instead of batching them.
I wonder what your use case is, though. Maybe you want to take a look at the Indexer Framework.
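The "separate calls" approach can be sketched like this (assumes Node 18+ for the global `fetch`; the endpoint URL and function names are illustrative):

```javascript
// Sketch: send the two block requests as separate JSON-RPC 2.0 calls
// instead of one batch. Endpoint and names are illustrative.
const NEAR_RPC_URL = "https://rpc.mainnet.near.org";

// Build a single (non-batched) JSON-RPC 2.0 request body.
function buildBlockRequest(blockId) {
  return {
    jsonrpc: "2.0",
    id: String(blockId),
    method: "block",
    params: { block_id: blockId },
  };
}

// Fire the requests concurrently; a load balancer can spread the work.
async function fetchBlocks(blockIds) {
  return Promise.all(
    blockIds.map(async (blockId) => {
      const res = await fetch(NEAR_RPC_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(buildBlockRequest(blockId)),
      });
      return res.json();
    })
  );
}
```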

Use Postman to test Appsync Subscription

I have been able to successfully execute AppSync GraphQL queries and mutations from Postman. However, I'm struggling to connect to subscriptions, which are WebSocket URLs.
How can I achieve the same?
Since Postman supports WebSockets, testing GraphQL subscriptions is achievable as well. Such testing requires two steps:
connecting to the server,
sending a start message.
Establishing a connection:
Create a new WebSocket request.
Enter your server URL (ws:// or wss://).
Add the custom header Sec-WebSocket-Protocol: graphql-ws. Other headers may depend on your server configuration.
Press the "Connect" button.
When the connection is established, we may start a subscription.
In the "New message" field, put the command.
Press the "Send" button.
The start message should look like this:
{
  "id": "1",
  "payload": {
    "operationName": "MySubscription",
    "query": "subscription MySubscription {
      someSubscription {
        __typename
        someField1
        someField2 {
          __typename
          someField21
          someField22
        }
      }
    }",
    "variables": null
  },
  "type": "start"
}
operationName is just the name of your subscription; I guess it's optional. And someSubscription must be a subscription type from your schema.
query resembles regular GraphQL syntax, with one difference:
the __typename keyword precedes every field list.
For example, the query from the payload in regular syntax looks like the following:
subscription MySubscription {
  someSubscription {
    someField1
    someField2 {
      someField21
      someField22
    }
  }
}
Example message with parameters (variables):
{
  "id": "1",
  "payload": {
    "operationName": "MySubscription",
    "query": "subscription MySubscription($param1: String!) {
      someSubscription(param1: $param1) {
        __typename
        someField
      }
    }",
    "variables": {
      "param1": "MyValue"
    }
  },
  "type": "start"
}
It also resembles regular GraphQL syntax, as described above.
variables is an object with your parameters.
Vladimir's answer is spot on. Adding a few notes for folks still having trouble.
Full documentation here: https://docs.aws.amazon.com/appsync/latest/devguide/real-time-websocket-client.html
Step 1 - establish the connection:
Make sure to base64-encode the values in the "header" and "payload" query strings.
Header example:
{
  "host": "example1234567890000.appsync-api.us-east-1.amazonaws.com",
  "x-api-key": "da2-12345678901234567890123456"
}
Payload: you can pass an empty payload:
{}
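Building that connection URL can be sketched as follows (Node.js; the real-time host and API key are placeholders copied from the example above):

```javascript
// Sketch: build the AppSync real-time WebSocket URL with base64-encoded
// "header" and "payload" query strings. Host and key are placeholders.
const header = {
  host: "example1234567890000.appsync-api.us-east-1.amazonaws.com",
  "x-api-key": "da2-12345678901234567890123456",
};
const payload = {}; // an empty payload is accepted

// Base64-encode a JSON value (Node's Buffer).
const b64 = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64");

// Note the realtime host differs from the API host ("appsync-realtime-api").
const wsUrl =
  "wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com" +
  `/graphql?header=${b64(header)}&payload=${b64(payload)}`;
```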
Step 2 - register the subscription:
Include the authorization in the message. Escape line feeds properly: "\n" throws an error, but "\\n" works. Don't forget to stringify the value in the "data" field. Getting this wrong produces the following misleading error:
{
  "type": "error",
  "payload": {
    "errors": [
      {
        "errorType": "UnsupportedOperation",
        "message": "unknown not supported through the realtime channel"
      }
    ]
  }
}
A working registration message looks like this:
{
  "id": "2",
  "payload": {
    "data": "{\"query\":\"subscription onCreateMessage { changeNotification{ __typename changeType from } }\",\"variables\":{}}",
    "extensions": {
      "authorization": {
        "host": "example1234567890000.appsync-api.us-east-1.amazonaws.com",
        "x-api-key": "da2-12345678901234567890123456"
      }
    }
  },
  "type": "start"
}

Elasticsearch error resource_already_exists_exception when index doesn't exist for sure

I use random index name for new indices:
async import_index(alias_name, mappings, loadFn) {
  const index = `${alias_name}_${+new Date()}`
  console.log('creating new index: ', index)
  await this.esService.indices.create({
    index: index,
    body: {
      "settings": this.index_settings(),
      "mappings": mappings
    }
  }).then(res => {
    console.log('index created: ', index)
  }).catch(async (err) => {
    console.error(alias_name, ": creating new index", JSON.stringify(err.meta, null, 2))
    throw err
  });
}
I believe an index with this name cannot already exist, but ES returns this error:
"error": {
"root_cause": [
{
"type": "resource_already_exists_exception",
"reason": "index [brands_1637707367610/bvY5O_NjTm6mU3nQVx7QiA] already exists",
"index_uuid": "bvY5O_NjTm6mU3nQVx7QiA",
"index": "brands_1637707367610"
}
],
"type": "resource_already_exists_exception",
"reason": "index [brands_1637707367610/bvY5O_NjTm6mU3nQVx7QiA] already exists",
"index_uuid": "bvY5O_NjTm6mU3nQVx7QiA",
"index": "brands_1637707367610"
},
"status": 400
}
ES is installed in k8s using the Bitnami Helm chart, with 3 master nodes running. The client is connected to the master service URL. My thought: the client sends the request to all nodes at the same time, but I cannot prove it.
Please help.
We have experienced the same error with the Python client, but from what I see the JavaScript client is written in a similar way.
There is a retry option in the client, most likely enabled in your case (it can be reconfigured). You pass a large mapping to this.esService.indices.create, the operation takes too long and times out, then a retry happens, but the index was already created on the cluster.
You need to send a larger timeout to the Elasticsearch create-index API (default 30s), and also set the same timeout for the HTTP connection.
Those are 2 separate settings:
Client HTTP connection requestTimeout. Default: 30s
https://github.com/elastic/elasticsearch-js/blob/v7.17.0/lib/Transport.js#L435
https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/7.17/basic-config.html
Serverside timeout via Create Index API. Default: 30s
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/indices-create-index.html
this.esService = new Client({
  // ....
  maxRetries: 5,
  requestTimeout: 60000 // time in ms
})
// ....
this.esService.indices.create({
  index: index,
  body: {
    "settings": this.index_settings(),
    "mappings": mappings
  },
  timeout: "60s" // elasticsearch time string
})
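As a complementary defensive measure, the create call can be made idempotent so a retry that races the first (timed-out) attempt is not treated as a failure. A minimal sketch, assuming the elasticsearch-js error shape seen in the log above (the helper name is illustrative):

```javascript
// Sketch: treat resource_already_exists_exception as success, so a
// client retry racing an earlier timed-out request doesn't fail the
// import. Helper name is illustrative; error shape matches the log.
async function createIndexIdempotent(esService, index, body) {
  try {
    await esService.indices.create({ index, body, timeout: "60s" });
  } catch (err) {
    const type = err && err.meta && err.meta.body &&
      err.meta.body.error && err.meta.body.error.type;
    if (type !== "resource_already_exists_exception") throw err;
    // The index was created by the earlier attempt; nothing to do.
  }
}
```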

google speech api response is empty, even with this example test

When I run the API Explorer from this page I get a 200 OK response, but the response JSON doesn't have any transcription. What am I doing wrong?
API Explorer location:
https://cloud.google.com/speech/reference/rest/v1/speech/longrunningrecognize
Request parameters:
Default
Request body:
{
  "config": {
    "encoding": "FLAC",
    "languageCode": "en-US",
    "sampleRateHertz": 16000
  },
  "audio": {
    "uri": "gs://cloud-samples-tests/speech/brooklyn.flac"
  }
}
Response:
{
  "name": "3497944051092250866"
}
I figured it out.
With longrunningrecognize you get back the operation name, and then you have to send an operations.get request to retrieve the processed result.
This is explained beautifully here:
https://medium.com/towards-data-science/tutorial-asynchronous-speech-recognition-in-python-b1215d501c64
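The polling step can be sketched like this (assumes API-key auth and Node 18+ `fetch`; the key value is a placeholder):

```javascript
// Sketch: poll the long-running operation returned by
// longrunningrecognize until it is done, then return the result.
// API key is a placeholder; Node 18+ global fetch assumed.
const API_KEY = "YOUR_API_KEY";

// operations.get endpoint for the Speech API v1.
function operationUrl(name) {
  return `https://speech.googleapis.com/v1/operations/${name}?key=${API_KEY}`;
}

async function waitForTranscript(name, intervalMs = 5000) {
  for (;;) {
    const op = await (await fetch(operationUrl(name))).json();
    if (op.done) return op.response; // holds the transcription results
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```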

What does the Spring Boot /health top-level status: UP indicate?

There is a health endpoint which reports the status of the other health indicators as well as a top-level status. My questions:
Is the top-level status: "UP" just a summary of the other health indicators, or can it actually indicate "DOWN" for some other reason?
Is this the actual application health?
{
  "status": "UP",
  "jms": {
    "status": "UP",
    "provider": "ActiveMQ"
  },
  "diskSpace": {
    "status": "UP",
    "total": 255179702272,
    "free": 78310952960,
    "threshold": 10485760
  },
  "db": {
    "status": "UP",
    "database": "Oracle",
    "hello": "Hello"
  }
}
It simply aggregates (via the configured HealthAggregator) the statuses of all the configured HealthIndicators.
You can provide a custom implementation if you want it to do something else.
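The default aggregation idea can be sketched roughly like this (illustrative JavaScript, not Spring's actual implementation; the severity order mirrors the default OrderedHealthAggregator):

```javascript
// Rough sketch of the default aggregation: the most severe component
// status wins, with DOWN worst. Illustrative only.
const SEVERITY_ORDER = ["DOWN", "OUT_OF_SERVICE", "UP", "UNKNOWN"];

function aggregateStatus(components) {
  const statuses = Object.values(components).map((c) => c.status);
  for (const status of SEVERITY_ORDER) {
    if (statuses.includes(status)) return status;
  }
  return "UNKNOWN";
}
```

So the top-level "UP" in the example above simply means no configured indicator reported anything worse.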