Grafana to use subtraction of two fields in Elasticsearch data source - elasticsearch

I have two fields, called 'status_codes' and 'requests'.
I want to get the number of failed requests.
My equation is [requests - number of successful requests].
In the script I wrote something like this: _value - doc['#status_codes.200'].value
But the value returned in the graph is 'N/A'.
I'm using Elasticsearch 7.6.0 and Grafana 6.6.2.
The following is the output file I'm sending to Elasticsearch:
{ "latencies":{
"total":3981710268690,
"mean":43876078,
"50th":916913,
"90th":2217744,
"95th":5162430,
"99th":60233348,
"max":60000209373,
"min":43652
},
"#version":"1",
"latest":"2020-03-05T16:14:44.23387091Z",
"path":"test23.json",
"duration":61163899322,
"wait":552109,
"status_codes":{
"0":90624,
"200":125
},
"earliest":"2020-03-05T16:13:43.069971588Z",
"rate":1483.702004057131,
"throughput":2.0436707446156577,
"#timestamp":"2020-03-05T16:14:44.453Z",
"errors":[
"Post http://www: dial tcp 0.0.0.0:0->10.133.9.87:8688: socket: too many open files",
"Post http://www: dial tcp: lookup internal-netty-load-balancer-937469711.us-east-1.elb.amazonaws.com on 10.20.30.30: dial udp 10.20.30:45: socket: too many open files"
],
"bytes_in":{
"mean":70.90298515686123,
"total":6434375
},
"requests":90749,
"Report_Title":"test23",
"host":"ABS",
"success":0.0013774256465635985,
"end":"2020-03-05T16:14:44.234423019Z",
"bytes_out":{
"mean":70.90298515686123,
"total":6434375
}
}
Also, I have used the Singlestat plugin as #yash mentioned, but I still could not resolve the issue.
(Screenshots of the query section and the visualization section are attached.)
Can someone help me?

This is a fairly easy task. You just need to use either the 'Singlestat Math' or the 'Metaqueries' plugin. What you need to do is use the count metric in two queries in the same panel: one to get the count of successful status codes, and the other for unsuccessful ones. Then you can use either plugin to subtract the result of one query from the other.
https://grafana.com/grafana/plugins/blackmirror1-singlestat-math-panel
https://grafana.com/grafana/plugins/goshposh-metaqueries-datasource
I suggest you go with the Singlestat Math plugin; in my experience it is easier to work with.
Note: in the Singlestat Math plugin, the calculation (A-B) is done in the visualization section, not in the query section.
P.S. The singlestat-math plugin actually adds a new panel type in the visualization section; it is a different panel from the default Singlestat panel.
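For reference, the subtraction itself is trivial once both counts are available. As a sanity check on the arithmetic, here is a minimal Python sketch against the sample document posted in the question (field names are taken from that document; this is not the Grafana configuration itself):

# Abridged to the relevant fields of the posted document.
doc = {"requests": 90749, "status_codes": {"0": 90624, "200": 125}}

successful = doc["status_codes"].get("200", 0)
failed = doc["requests"] - successful
print(failed)  # 90624, which matches the count recorded under status_codes["0"]

This is the same A-B subtraction that the Singlestat Math panel performs on the two query results.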

Finally, I found the solution as follows:
(solution screenshot)
Thanks, everyone.

Related

Elasticsearch: How do you search scroll without HTTP 404 errors?

I'm trying to get all the documents from Elasticsearch for a search query that matches 600k+ documents. You can't just raise the search size parameter or the index setting that caps the number of returned documents, because both expect a much smaller number of documents.
From the Python elasticsearch module I do:
result = client.search(... scroll='1m')
scroll_id = result['_scroll_id']
while True:
    # ...
    client.scroll(... scroll_id=scroll_id)
    # ...
The console reports successful scrolls for about 200 ms, roughly 7 scrolls, and then I get HTTP 404.
How do you search scroll without HTTP 404 errors?
P.S. I'm on version 7.3 here.
I found a solution here: https://github.com/ropensci/elastic/issues/178
You have to modify your client.scroll call to include the scroll timeout parameter scroll='1m':
client.search(... scroll='1m')
while True:
    # ...
    client.scroll(... scroll_id=scroll_id, scroll='1m')
Then my scrolling works.
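For completeness, a fuller sketch of the loop in Python (the index name and query body are placeholders; adapt the hit handling to your needs):

from elasticsearch import Elasticsearch

client = Elasticsearch()

# Keep the scroll context alive for 1 minute on the initial search...
result = client.search(index="my-index", body={"query": {"match_all": {}}}, scroll="1m")
scroll_id = result["_scroll_id"]

while True:
    hits = result["hits"]["hits"]
    if not hits:
        break
    for hit in hits:
        pass  # process hit["_source"] here
    # ...and renew it on every scroll call; otherwise the context expires and returns 404.
    result = client.scroll(scroll_id=scroll_id, scroll="1m")
    scroll_id = result["_scroll_id"]

# Release the scroll context when done.
client.clear_scroll(scroll_id=scroll_id)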
P.S. I only use Python here because it makes the writing easier (the solution originally came from a GitHub issue for the R language). In my opinion, this applies to any use of the Elasticsearch REST API.

Zapier CLI Trigger - How to use defined sample data when no results returned during setup

I am trying to prototype a trigger using the Zapier CLI, and I am running into an issue with the 'Pull In Samples' section when setting up the trigger in the UI.
This tries to pull in a live sample of data to use; however, the documentation states that if no results are returned it will use the sample data that is configured for the trigger.
In most cases there will be no live data, so ideally the sample data would be used in the first instance; however, my trigger never seems to use the sample, and I have not been able to find a concrete example of a 'no results' response.
The API I am using returns XML, so I convert the result into JSON, which works fine when there is data.
When there are no results, I have tried returning [], but that just hangs, and if I check the Zapier HTTP logs it keeps looping HTTP requests until I cancel the sample check.
Returning [{}] returns an error saying that I need an 'id' field.
The definition I am using is:
module.exports = {
  key: 'getsmsinbound',
  noun: 'GetSMSInbound',
  display: {
    label: 'Get Inbound SMS',
    description: 'Check for inbound SMS'
  },
  operation: {
    inputFields: [
      { key: 'number', required: true, type: 'string', helpText: 'Enter the inbound number' },
      { key: 'keyword', required: false, type: 'string', helpText: 'Optional if you have configured a keyword and you wish to check for specific keyword messages.' },
    ],
    perform: getsmsinbound,
    sample: {
      id: 1,
      originator: '+447980123456',
      destination: '+447781484146',
      keyword: '',
      date: '2009-07-08',
      time: '10:38:55',
      body: 'hello world',
      network: 'Orange'
    }
  }
};
I'm hoping it's something obvious, as I've had no luck scouring the web and the Zapier documentation!
Sample data must be provided by your app; the configured sample payload is not used for this polling step specifically. From the docs:
Sample results will NOT be used for a user's Zap testing step. That step requires data to be received by an event or returned from a polling URL. If a user chooses to "Skip Test", then the sample result, if provided, will be used.
Personally, I have never seen "Skip Test" show up. A while back I asked support about this:
That's a great question! It's definitely one of those "chicken and egg" situations when using REST Hooks - if there isn't a sample available, then everything just stalls.
When the Zap editor tries to obtain a "sample result", there are three places where it's going to look:
1. The Polling endpoint (in Step #3 of your trigger's setup) is invoked for the current user. If that returns "nothing", then the Zap editor will try the next step.
2. The "most recent record/data" in the Zap's history. Since this is a brand new Zap, there won't be anything present.
3. The Sample result (in Step #4 of your trigger's setup). The Zap editor will tell the user that there's "nothing to show", and will give the user the option to "skip test and continue", which will use the sample JSON that you've provided here.
In reality, it will just continue to retry the request over and over and never provide the user with a "skip test and continue" option. I just emailed again asking if anything has changed since then, but it looks like existing sample data is a requirement.
Perhaps create a record in your API by default, hide it from normal use, and just send back that one?
Or send back dummy data even though Zapier says not to. I'm not sure, but I don't know how people can set up a Zap when no data has been created yet (Zapier says not many of their apps have this issue, but nearly every trigger I've created and every use case I've seen for other applications suggests otherwise).

Ruby neo4j-core mass processing data

Has anyone used Ruby neo4j-core to mass-process data? Specifically, I am looking at taking in about 500k lines from a relational database and inserting them via something like:
Neo4j::Session.current.transaction.query
  .merge(m: { Person: { token: person_token } })
  .merge(i: { IpAddress: { address: ip, country: country,
                           city: city, state: state } })
  .merge(a: { UserToken: { token: token } })
  .merge(r: { Referrer: { url: referrer } })
  .merge(c: { Country: { name: country } })
  .break # This will make sure the query is not reordered
  .create_unique("m-[:ACCESSED_FROM]->i")
  .create_unique("m-[:ACCESSED_FROM]->a")
  .create_unique("m-[:ACCESSED_FROM]->r")
  .create_unique("a-[:ACCESSED_FROM]->i")
  .create_unique("a-[:ACCESSED_FROM]->r")
  .create_unique("i-[:IN]->c")
  .exec
However, doing this locally takes hours on hundreds of thousands of events. So far, I have attempted the following:
Wrapping Neo4j::Connection in a ConnectionPool and multi-threading it - I did not see much speed improvement here.
Doing tx = Neo4j::Transaction.new and tx.close every 1000 events processed - looking at a TCP dump, I am not sure this actually does what I expected. It issues the exact same requests, with the same frequency, but gets a different response.
With Neo4j::Transaction I see a POST every time the .query(...).exec is called:
Request: {"statements":[{"statement":"MERGE (m:Person{token: {m_Person_token}}) ...{"m_Person_token":"AAA"...,"resultDataContents":["row","REST"]}]}
Response: {"commit":"http://localhost:7474/db/data/transaction/868/commit","results":[{"columns":[],"data":[]}],"transaction":{"expires":"Tue, 10 May 2016 23:19:25 +0000"},"errors":[]}
With Non-Neo4j::Transactions I see the same POST frequency, but this data:
Request: {"query":"MERGE (m:Person{token: {m_Person_token}}) ... {"m_Person_token":"AAA"..."c_Country_name":"United States"}}
Response: {"columns" : [ ], "data" : [ ]}
(Not sure if that is intended behavior, but it looks like less data is transmitted via the non-Neo4j::Transaction technique - quite possibly I am doing something incorrectly.)
Some other ideas I had:
* Post-process into a CSV, SCP it up, and then use the neo4j-import command-line utility (although that seems kind of hacky).
* Combine both of the techniques I tried above.
Has anyone else run into this / have other suggestions?
Ok!
So you're absolutely right. With neo4j-core you can only send one query at a time. With transactions all you're really getting is the ability to rollback. Neo4j does have a nice HTTP JSON API for transactions which allows you to send multiple Cypher requests in the same HTTP request, but neo4j-core doesn't currently support that (I'm working on a refactor for the next major version which will allow this). So there are a number of options:
You can submit your requests as raw HTTP JSON to the APIs. If you still want to use the Query API, you can use the to_cypher and merge_params methods to get the Cypher and params for that (merge_params is a private method currently, so you'd need to use send(:merge_params)). See the sketch after this list for what such a raw batched request can look like.
You can load via CSV as you said. You can either
use the neo4j-import command which allows you to import very fast but requires you to put your CSV in a specific format, requires that you be creating a DB from scratch, and requires that you create indexes/constraints after the fact
use the LOAD CSV command which isn't as fast, but is still pretty fast.
You can use the neo4apis gem to build a DSL to import your data. The gem will create Cypher queries under the covers and will batch them for performance. See examples of the gem in use via neo4apis-twitter and neo4apis-github
If you are a bit more adventurous, you can use the new Cypher API in neo4j-core via the new_cypher_api branch on the GitHub repo. The README in that branch has some documentation on the API, but also feel free to drop by our Gitter chat room if you have questions on this or anything else.
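To make the first option above (raw HTTP JSON) concrete: the transactional commit endpoint seen in your captures accepts many statements in one request. As an illustration only (this is plain HTTP via Python requests, not neo4j-core's API; the endpoint, lack of auth, and data are assumptions for a default local install), a minimal sketch could look like:

import requests

# Placeholder rows; in the real job these would come from the relational database.
person_tokens = ["AAA", "BBB", "CCC"]

# One parameterised Cypher statement per row, all sent in a single HTTP request.
statements = [
    {"statement": "MERGE (m:Person {token: {token}})",
     "parameters": {"token": token}}
    for token in person_tokens
]

resp = requests.post(
    "http://localhost:7474/db/data/transaction/commit",  # endpoint family seen in the captured responses
    json={"statements": statements},
)
resp.raise_for_status()
print(resp.json()["errors"])  # empty list on success

Batching a few hundred statements per request amortises the HTTP round-trip that currently happens once per event.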
If you're implementing a solution which is going to make queries like the above, where you have multiple MERGE clauses, you'll probably want to profile your queries to make sure that you are avoiding the Eager operator (that post is a bit old and newer versions of Neo4j have alleviated some of the need for care, but you can still look for Eager in your PROFILE output).
Also worth a look: Max De Marzi's post on Scaling Cypher Writes

Ethon - Valid resolve option format

Curl has an option which allows me to specify which IP a domain should resolve to,
e.g. curl --resolve example.com:443:1.2.3.4 https://example.com/foo
to make sure that a very specific server is called
(e.g. when multiple servers serve the same vhost behind a load balancer and there are multiple applications running on the same port with different vhosts).
How do I set this value when using Ethon? https://github.com/typhoeus/ethon
This is how I'd expect it to work
Ethon::Easy.new(url: "https://example.com/foo", :resolve => "example.com:443:1.2.3.4")
but I'm getting an invalid value exception (I have tried multiple different formats that came to mind)
Ethon::Errors::InvalidValue: The value: example.com:443:1.2.3.4 is invalid for option: resolve.
I took a look at the code but couldn't figure out how I'd have to provide the value, and the documentation on this is a bit scarce.
Thanks in advance for any reply that might point me in the right direction.
Thanks to i0rek on GitHub I got the answer I was looking for:
resolve = nil
# slist_append returns the new list, so capture its return value
resolve = Ethon::Curl.slist_append(resolve, "example.com:443:1.2.3.4")
e = Ethon::Easy.new(url: "https://example.com/foo", :resolve => resolve)
#=> #<Ethon::Easy:0x007faca2574f30 @url="https://example.com/foo", ...>
Further information can be found here:
https://github.com/typhoeus/ethon/issues/95#event-199961240

Drupal: debug in teaser view

As soon as I am on a single-view page, I can use this to see the whole array of my node:
dpm($content);
It also helps me get information about the activated fields through "Manage display".
But on the start page, for example, there are no single views, only lists of teasers, and because of that there's a timeout when trying to debug with dpm($content);
Is there something like
dpm($content[5360]);
where the number is the node ID, to get the output of a certain node, not on a single page but on a page with teaser views?
I assume you have the dpm($content) statement in your node template file?
You should be able to do something like this:
if ($node->nid == 5360) {
  dpm($content);
}
