Grafana scripted InfluxDB dashboard

I am using Grafana 4 and InfluxDB.
I need to show a graph of, say, CPU usage for a certain host by building the parameters into the URL, like this:
http://my_grafana:3000/dashboard/script/scripted.js?name=CPULoad&host=ussd1
I am trying to use scripted dashboards for this, but I cannot figure out how to tell scripted.js where to look for the CPULoad data.
Can anyone give me some pointers?
regards,
Martin

Well, I found out how it works, but I have to say it is weird that it is not documented anywhere, and it involves a small modification to the source code...
A little bit of context first.
I have an InfluxDB database called "Nagios". Inside this database I have several series; running show series in InfluxDB gives the following:
> show series
key
---
nagios.CPULoad,hostname=cbba.storage,state=OK
nagios.CPULoad,hostname=ussd1,state=OK
nagios.CPULoad,hostname=ussd2,state=OK
nagios.CPULoad,hostname=ussd3,state=OK
nagios.CPULoad,hostname=ussd4,state=OK
The structure of the data in the CPULoad series looks like this:
> select * from "nagios.CPULoad" limit 1
name: nagios.CPULoad
time hostname load1 load15 load5 state
---- -------- ----- ------ ----- -----
1487867813000000000 cbba.storage 0 0 0 OK
My URL to scripted.js is as follows:
http://10.72.6.220:3000/dashboard/script/scripted.js?name=CPULoad&field=load1&hostname=ussd3
name indicates the series in InfluxDB I want to graph
field indicates which field to use
hostname indicates the host to filter on
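Inside scripted.js these query-string parameters show up on the global ARGS object. A small sketch of reading them with fallback defaults (the default values here are just placeholders, not something from the original post):

// ARGS is populated by Grafana from the query string of the scripted dashboard URL.
var seriesName = ARGS.name || 'CPULoad';    // ?name=CPULoad
var fieldName  = ARGS.field || 'load1';     // &field=load1
var hostName   = ARGS.hostname || 'ussd1';  // &hostname=ussd3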
The InfluxQL query I want scripted.js to build is as follows:
SELECT mean("load1") FROM "nagios.CPULoad" WHERE "hostname" = 'ussd3' AND $timeFilter GROUP BY time($interval) fill(null)
Building this inside scripted.js involves setting the "targets" parameter in the dashboard.rows structure, and it turns out to look like this (I found this out after going through the code):
targets: [
  {
    "measurement": "nagios." + ARGS.name,
    "metric": ARGS.name,
    "tags": {
      "hostname": {
        operator: "=",
        value: ARGS.hostname
      }
    },
    "select": [[{
      type: "field",
      params: [ARGS.field]
    }, {
      type: "mean",
      params: []
    }]],
  },
],
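For context, this targets array lives inside a graph panel of the dashboard object that scripted.js returns. A minimal skeleton, loosely based on the sample scripted.js that ships with Grafana (the dashboard title and the datasource name "influxdb-nagios" are assumptions, not something from the original post):

/* global ARGS */
'use strict';

var dashboard = { rows: [] };
dashboard.title = 'Scripted ' + ARGS.name;   // assumed title
dashboard.time = { from: 'now-6h', to: 'now' };

dashboard.rows.push({
  title: ARGS.name,
  height: '300px',
  panels: [
    {
      title: ARGS.name + ' on ' + ARGS.hostname,
      type: 'graph',
      span: 12,
      datasource: 'influxdb-nagios',   // assumption: the name of your InfluxDB datasource
      targets: [ /* the targets object shown above goes here */ ]
    }
  ]
});

return dashboard;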
Now, I don't know why, but I had to modify the code in order for the key "hostname" to be taken into account, in the function renderTagCondition, which I copy here for convenience:
a.prototype.renderTagCondition = function(a, b, c) {
var d = ""
, e = a.operator
, f = a.value;
return b > 0 && (d = (a.condition || "AND") + " "),
e || (e = /^\/.*\/$/.test(f) ? "=~" : "="),
"=~" !== e && "!~" !== e ? (c && (f = this.templateSrv.replace(f, this.scopedVars)),
">" !== e && "<" !== e && (f = "'" + f.replace(/\\/g, "\\\\") + "'")) : c && (f = this.templateSrv.replace(f, this.scopedVars, "regex")),
d + '"' + a.key + '" ' + e + " " + f
}
The returned value
d + '"' + a.key + '" ' + e + " " + f
seems to be wrong... It should be
d + '"' + b + '" ' + e + " " + f
since b carries "hostname".
After all this, calling the URL I mentioned at the beginning worked out pretty well.

Adding to mquevedob's answer above, change the tags object into an array of key/operator/value entries, like this:
"tags":
[
{
key: "jobId",
operator: "=" ,
value: "340"
}
]
This should work fine in Grafana when using InfluxDB.
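For the URL from the original post, the same array form with the ARGS parameters might look like this (a sketch combining both answers):

targets: [
  {
    measurement: "nagios." + ARGS.name,
    tags: [
      {
        key: "hostname",   // the tag key now goes in a "key" field
        operator: "=",
        value: ARGS.hostname
      }
    ],
    select: [[
      { type: "field", params: [ARGS.field] },
      { type: "mean",  params: [] }
    ]]
  }
],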

Related

My Discord bot's command suddenly stopped listening to my arg hints, then stopped working altogether

I'm trying to write a command to collect free agent offers in our local basketball simulation server into CSV format so we can go through them more quickly and efficiently. I wrote this command to take the offers, but suddenly, my arguments stopped listening to the hints I gave:
async def offer(ctx, firstName: str, lastName: str, amount: float, length: int, option: str='no', ntc: str='no'):
It allows integers to be used for lastName, even though I used these hints here. It was working normally, except that it didn't throw errors if you used the wrong type of input, and then suddenly my Discord bot stopped responding to this command altogether. It still responds correctly to other commands, so something is wrong with this command specifically, and I can't figure it out...
Here's the full command:
@bot.command()
async def offer(ctx, firstName: str, lastName: str, amount: float, length: int, option: str='no', ntc: str='no'):
    team = ctx.channel.category.name
    username = ctx.author.name
    usermention = ctx.author.mention
    userid = ctx.author.id
    if option.lower() == 'po' or option.lower() == 'yes': option = 'yes'
    else: option = 'no'
    if ntc.lower() == 'ntc' or ntc.lower() == 'yes': ntc = 'yes'
    else: ntc = 'no'
    offer = (str(team) + ',' + str(username) + ',' + str(userid) + ',' + str(firstName) + ' ' + str(lastName) + ',' + str(amount) + ',' + str(length) + ',' + str(option) + ',' + str(ntc))
    offersList.append(offer)
    baseText = ('The ' + team + ' (' + usermention + ') offered ' + firstName + ' ' + lastName + ' a $' + str(amount) + ' million contract for ' + str(length) + ' years')
    if option == 'no' and ntc == 'no': text = (baseText + '.')
    if option == 'no' and ntc == 'yes': text = (baseText + ' with an NTC.')
    if option == 'yes' and ntc == 'no': text = (baseText + ' with a player option.')
    if option == 'yes' and ntc == 'yes': text = (baseText + ' with a player option and an NTC.')
    print(text)
    await ctx.send(text)
Please let me know if it's something obvious, or I have to rethink this command altogether... thanks everyone!
Ahmed asked whether it executed a print function at the beginning of the command. Once I added that print call, the whole command suddenly started working again. I'm not sure why, but it's working... so I won't touch it, lol. Thanks, everyone.
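If the command ever goes quiet again, one way to see why is to surface argument-conversion errors instead of letting them disappear. A sketch using discord.py's standard on_command_error hook (bot is assumed to be your commands.Bot instance):

from discord.ext import commands

@bot.event
async def on_command_error(ctx, error):
    # Surface converter failures (e.g. a non-numeric value for the float/int
    # parameters of the offer command) instead of failing silently.
    if isinstance(error, (commands.BadArgument, commands.MissingRequiredArgument)):
        await ctx.send('Could not run that command: {}'.format(error))
    else:
        raise error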

OpenWhisk - character sets?

I recently started using OpenWhisk and love it.
Everything seems to work really nicely, except that I have run into an issue which might be related to character sets / encoding.
For example, when I use Scandinavian characters like æ, ø, and å, and call an action / trigger from the OpenWhisk Web Editor with a payload like:
{
"station": "Rådhuset",
"no2": 8.7,
"pm10": 6.5,
"pm25": 2.2,
"time": 1461348000,
"id": "Rådhuset-1461348000"
}
I get the following result / response payload:
{
"notify": "Station R??dhuset != R���dhuset"
}
The main function in the action being called looks like this:
var payload = params.payload || params;
var station = 'Rådhuset';
if (station == payload.station) {
    ...
} else
    return whisk.done({notify : 'Station ' + station + ' != ' + payload.station});
When running the action without these characters, e.g. "Kirkeveien", everything works fine.
Has anyone else run into a similar situation?
There is a known defect with non-ASCII characters: https://github.com/openwhisk/openwhisk/issues/252
A possible workaround is to encode the string (base64, for example).
Try encoding:
var payload = params.payload || params;
var station = 'Rådhuset';
if (station == payload.station) {
    ...
} else
    return whisk.done({notify : 'Station ' + encodeURIComponent(station) + ' != ' + encodeURIComponent(payload.station)});
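If you go with the base64 workaround instead, a round-trip in a Node.js action might look like this (a sketch using Node's built-in Buffer type):

// Encode before handing the value on, decode when you need the original text back.
var encoded = new Buffer(payload.station, 'utf8').toString('base64');
var decoded = new Buffer(encoded, 'base64').toString('utf8');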

MongoDB Native Query vs C# LINQ Performance

I am using the following two options, and the MongoDB C# driver seems to be taking more time. I'm using Stopwatch to measure the timings.
Case 1: Native Mongo QueryDocument (takes 0.0011 ms to return data)
string querytext = @"{schemas:{$elemMatch:{name: " + n + ",code : " + c + "} }},{schemas:{$elemMatch:{code :" + c1 + "}}}";
string printQueryname = "Query: " + querytext;
BsonDocument query1 = MongoDB.Bson.Serialization.BsonSerializer.Deserialize<BsonDocument>(querytext);
QueryDocument queryDoc1 = new QueryDocument(query1);
var queryResponse = collection.FindAs<BsonDocument>(queryDoc1);
Case 2: Mongo C# Driver (takes more than 3.2 ms to return data)
Schema _result = new Schema();
_result = (from c in _coll.AsQueryable<Schema>()
           where c.schemas.Any(s => s.code.Equals(c) && s.name.Equals(n)) &&
                 c.schemas.Any(s => s.code.Equals(c1))
           select c).FirstOrDefault();
Any thoughts? Anything wrong here?
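One thing that may be skewing the comparison: in the legacy C# driver, FindAs returns a MongoCursor, which does not hit the server until it is enumerated, so case 1 may only be timing query construction rather than an actual round trip. A sketch of forcing both sides to materialize before stopping the Stopwatch (names follow the snippets above; the LINQ range variable is renamed to d to avoid shadowing the c string from case 1):

// using System; using System.Diagnostics; using System.Linq; using MongoDB.Bson;
var sw = Stopwatch.StartNew();
// ToList() forces the cursor to actually run the query against the server.
var nativeDocs = collection.FindAs<BsonDocument>(queryDoc1).ToList();
sw.Stop();
Console.WriteLine("Native query: {0} ms", sw.Elapsed.TotalMilliseconds);

sw.Restart();
var linqResult = (from d in _coll.AsQueryable<Schema>()
                  where d.schemas.Any(s => s.code.Equals(c) && s.name.Equals(n)) &&
                        d.schemas.Any(s => s.code.Equals(c1))
                  select d).FirstOrDefault();
sw.Stop();
Console.WriteLine("LINQ query:   {0} ms", sw.Elapsed.TotalMilliseconds);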

R: tm Textmining package: Doc-Level metadata generation is slow

I have a list of documents to process, and for each record I want to attach some metadata to the document "member" inside the "corpus" data structure that tm, the R package, generates (from reading in text files).
This for-loop works, but it is very slow; performance seems to degrade as a function f ~ 1/n_docs.
for (i in seq(from = 1, to = length(corpus), by = 1)) {
  if (opts$options$verbose == TRUE || i %% 50 == 0) {
    print(paste(i, " ", substr(corpus[[i]], 1, 140), sep = " "))
  }
  DublinCore(corpus[[i]], "title") = csv[[i, 10]]
  DublinCore(corpus[[i]], "Publisher") = csv[[i, 16]]  # institutions
}
This may do something to the corpus variable, but I don't know what.
When I put it inside tm_map() (which is similar to lapply()), it runs much faster, but the changes are not persistent:
i = 0
corpus = tm_map(corpus, function(x) {
  i <<- i + 1
  if (opts$options$verbose == TRUE) {
    print(paste(i, " ", substr(x, 1, 140), sep = " "))
  }
  meta(x, tag = "Heading") = csv[[i, 10]]
  meta(x, tag = "publisher") = csv[[i, 16]]
})
The corpus variable has empty metadata fields after tm_map returns; they should be filled. I have a few other things I still need to do with the collection.
The R documentation for the meta() function says this:
Examples:
data("crude")
meta(crude[[1]])
DublinCore(crude[[1]])
meta(crude[[1]], tag = "Topics")
meta(crude[[1]], tag = "Comment") <- "A short comment."
meta(crude[[1]], tag = "Topics") <- NULL
DublinCore(crude[[1]], tag = "creator") <- "Ano Nymous"
DublinCore(crude[[1]], tag = "Format") <- "XML"
DublinCore(crude[[1]])
meta(crude[[1]])
meta(crude)
meta(crude, type = "corpus")
meta(crude, "labels") <- 21:40
meta(crude)
I tried many of these calls (with my variable corpus instead of crude), but they do not seem to work.
Someone else seems to have run into the same problem with a similar data set (a forum post from 2009, with no response).
Here's a bit of benchmarking...
With the for loop :
expr.for <- function() {
  for (i in seq(from = 1, to = length(corpus), by = 1)) {
    DublinCore(corpus[[i]], "title") = LETTERS[round(runif(26))]
    DublinCore(corpus[[i]], "Publisher") = LETTERS[round(runif(26))]
  }
}
microbenchmark(expr.for())
# Unit: milliseconds
# expr min lq median uq max
# 1 expr.for() 21.50504 22.40111 23.56246 23.90446 70.12398
With tm_map :
corpus <- crude
expr.map <- function() {
  tm_map(corpus, function(x) {
    meta(x, "title") = LETTERS[round(runif(26))]
    meta(x, "Publisher") = LETTERS[round(runif(26))]
    x
  })
}
microbenchmark(expr.map())
# Unit: milliseconds
# expr min lq median uq max
# 1 expr.map() 5.575842 5.700616 5.796284 5.886589 8.753482
So the tm_map version, as you noticed, seems to be about 4 times faster.
In your question you say that the changes in the tm_map version are not persistent; that is because you don't return x at the end of your anonymous function. In the end it should be:
meta(x, tag = "Heading") = csv[[i,10]]
meta(x, tag = "publisher" ) = csv[[i,16]]
x
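Putting it together, the loop from your question could then be written as (a sketch based on your snippet; the csv columns are the ones from the original post):

i <- 0
corpus <- tm_map(corpus, function(x) {
  i <<- i + 1
  meta(x, tag = "Heading")   <- csv[[i, 10]]
  meta(x, tag = "publisher") <- csv[[i, 16]]
  x  # return the modified document so the metadata persists
})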

Running a mapreduce job on cloudera demo cdh3u4 (airline data example)

I'm doing Jeffrey Breen's R-Hadoop tutorial (October 2012).
At the moment I am trying to populate HDFS and then run the commands Jeffrey published in his tutorial in RStudio. Unfortunately I ran into some trouble with it:
UPDATE: I have now moved the data folder to:
/home/cloudera/data/hadoop/wordcount (and the same for the airline data)
Now when I run populate.hdfs.sh I get the following output:
[cloudera@localhost ~]$ /home/cloudera/TutorialBreen/bin/populate.hdfs.sh
mkdir: cannot create directory /user/cloudera: File exists
mkdir: cannot create directory /user/cloudera/wordcount: File exists
mkdir: cannot create directory /user/cloudera/wordcount/data: File exists
mkdir: cannot create directory /user/cloudera/airline: File exists
mkdir: cannot create directory /user/cloudera/airline/data: File exists
put: Target /user/cloudera/airline/data/20040325.csv already exists
I then tried the commands in RStudio as shown in the tutorial, but I get errors at the end. Can someone show me what I did wrong?
> if (LOCAL)
+ {
+ rmr.options.set(backend = 'local')
+ hdfs.data.root = 'data/local/airline'
+ hdfs.data = file.path(hdfs.data.root, '20040325-jfk-lax.csv')
+ hdfs.out.root = 'out/airline'
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ if (!file.exists(hdfs.out))
+ dir.create(hdfs.out.root, recursive=T)
+ } else {
+ rmr.options.set(backend = 'hadoop')
+ hdfs.data.root = 'airline'
+ hdfs.data = file.path(hdfs.data.root, 'data')
+ hdfs.out.root = hdfs.data.root
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ }
> asa.csvtextinputformat = make.input.format( format = function(con, nrecs) {
+ line = readLines(con, nrecs)
+ values = unlist( strsplit(line, "\\,") )
+ if (!is.null(values)) {
+ names(values) = c('Year','Month','DayofMonth','DayOfWeek','DepTime','CRSDepTime',
+ 'ArrTime','CRSArrTime','UniqueCarrier','FlightNum','TailNum',
+ 'ActualElapsedTime','CRSElapsedTime','AirTime','ArrDelay',
+ 'DepDelay','Origin','Dest','Distance','TaxiIn','TaxiOut',
+ 'Cancelled','CancellationCode','Diverted','CarrierDelay',
+ 'WeatherDelay','NASDelay','SecurityDelay','LateAircraftDelay')
+ return( keyval(NULL, values) )
+ }
+ }, mode='text' )
> mapper.year.market.enroute_time = function(key, val) {
+ if ( !identical(as.character(val['Year']), 'Year')
+ & identical(as.numeric(val['Cancelled']), 0)
+ & identical(as.numeric(val['Diverted']), 0) ) {
+ if (val['Origin'] < val['Dest'])
+ market = paste(val['Origin'], val['Dest'], sep='-')
+ else
+ market = paste(val['Dest'], val['Origin'], sep='-')
+ output.key = c(val['Year'], market)
+ output.val = c(val['CRSElapsedTime'], val['ActualElapsedTime'], val['AirTime'])
+ return( keyval(output.key, output.val) )
+ }
+ }
> reducer.year.market.enroute_time = function(key, val.list) {
+ if ( require(plyr) )
+ val.df = ldply(val.list, as.numeric)
+ else { # this is as close as my deficient *apply skills can come w/o plyr
+ val.list = lapply(val.list, as.numeric)
+ val.df = data.frame( do.call(rbind, val.list) )
+ }
+ colnames(val.df) = c('crs', 'actual','air')
+ output.key = key
+ output.val = c( nrow(val.df), mean(val.df$crs, na.rm=T),
+ mean(val.df$actual, na.rm=T),
+ mean(val.df$air, na.rm=T) )
+ return( keyval(output.key, output.val) )
+ }
> mr.year.market.enroute_time = function (input, output) {
+ mapreduce(input = input,
+ output = output,
+ input.format = asa.csvtextinputformat,
+ output.format='csv', # note to self: 'csv' for data, 'text' for bug
+ map = mapper.year.market.enroute_time,
+ reduce = reducer.year.market.enroute_time,
+ backend.parameters = list(
+ hadoop = list(D = "mapred.reduce.tasks=2")
+ ),
+ verbose=T)
+ }
> out = mr.year.market.enroute_time(hdfs.data, hdfs.out)
Error in file(f, if (format$mode == "text") "r" else "rb") :
cannot open the connection
In addition: Warning message:
In file(f, if (format$mode == "text") "r" else "rb") :
cannot open file 'data/local/airline/20040325-jfk-lax.csv': No such file or directory
> if (LOCAL)
+ {
+ results.df = as.data.frame( from.dfs(out, structured=T) )
+ colnames(results.df) = c('year', 'market', 'flights', 'scheduled', 'actual', 'in.air')
+ print(head(results.df))
+ }
Error in to.dfs.path(input) : object 'out' not found
Thank you so much!
First of all, it looks like the command:
/usr/bin/hadoop fs -mkdir /user/cloudera/wordcount/data
is being split into multiple lines. Make sure you're entering it as a single line.
Also, it is saying that the local directory data/hadoop/wordcount does not exist. Verify that you're running this command from the correct directory and that your local data is where you expect it to be.
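A quick way to check both assumptions, from the cloudera home directory (paths taken from the question):

# Does the local file the 'local' backend reads actually exist relative to
# the R working directory?
ls -l data/local/airline/20040325-jfk-lax.csv

# And what actually landed in HDFS for the 'hadoop' backend?
/usr/bin/hadoop fs -ls /user/cloudera/airline/data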
