InfluxDB - Getting metrics like writes per second using Chronograf

We are trying to plot metrics such as writes per second for a measurement in an InfluxDB database using the TICK stack. We are hosting InfluxDB on Ubuntu and have followed the instructions in the following link:
https://www.digitalocean.com/community/tutorials/how-to-monitor-system-metrics-with-the-tick-stack-on-ubuntu-16-04
We are trying to create a dashboard showing writes per second for a measurement in an InfluxDB database, but we could not find any corresponding documentation.
Has anyone done this? Can anyone point us to the necessary documentation?
Thanks a ton in advance.

It sounds like non_negative_derivative is what you're looking for.
The documentation for that InfluxDB function can be found at https://docs.influxdata.com/influxdb/latest/query_language/functions/#non-negative-derivative.
According to the docs there, non_negative_derivative "[r]eturns the non-negative rate of change between subsequent field values. Non-negative rates of change include positive rates of change and rates of change that equal zero."
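As a sketch of how that function turns a counter into a per-second rate: the snippet below builds an InfluxQL query and the corresponding HTTP /query URL. The database name "mydb", measurement "requests", and field "count" are placeholders; substitute your own names.

```python
from urllib.parse import urlencode

# Hypothetical names: database "mydb", measurement "requests", counter field "count".
# non_negative_derivative(..., 1s) turns a monotonically increasing counter into
# a per-second rate, which is what a "writes per second" graph needs.
query = (
    'SELECT non_negative_derivative(max("count"), 1s) '
    'FROM "requests" WHERE time > now() - 1h GROUP BY time(10s)'
)

# InfluxDB serves queries over HTTP at /query; this just builds the URL,
# which is also the query Chronograf would issue for such a graph.
url = "http://localhost:8086/query?" + urlencode({"db": "mydb", "q": query})
print(url)
```

Pasting the SELECT statement into a Chronograf cell's query editor should produce the rate graph directly.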

Related

How to fetch metric details per component (file) with Sonarqube APIs

I need to get the value of a few metrics for a specific component (a specific java class) for each analysis.
As an example, I need to get something like:
Analysis | Component         | Complexity | ncloc
f1234    | /mypackage/a.java | 10         | 150
f1235    | /mypackage/a.java | 10         | 155
I can get the details of the metrics, but I believe only for the last analysis.
Here is the API call I am using for the details. As an example, I can get
Metrics: complexity and ncloc
componentKey: org:apache:zookeper2:src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java
http://sonar63.rd.tut.fi/api/measures/component?metricKeys=complexity,ncloc&componentKey=org:apache:zookeper2:src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java
Does anybody know how to get the same metrics for all the analyses, or for a specific analysis?
I might get the list of analyses with the API http://sonar63.rd.tut.fi/api/project_analyses/search?project=org:apache:zookeper2&ps=500 but I cannot find a way to pass the analysis id to the measures API.
You should try the api/measures/search_history endpoint. You should be able to reach the documentation at <sonarqube_instance_url>/web_api/api/measures/search_history.
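As a rough illustration, the sketch below builds a search_history request for the component key from the question and pivots a sample response into one row per analysis date, like the table above. The response shape shown is an assumption based on the web_api docs; check your instance's documentation for the exact fields.

```python
from urllib.parse import urlencode

# Build a request for the measure history of one component (component key
# taken from the question; the SonarQube host is your instance's URL).
params = {
    "component": "org:apache:zookeper2:src/java/main/org/apache/zookeeper/"
                 "server/quorum/AuthFastLeaderElection.java",
    "metrics": "complexity,ncloc",
    "ps": 500,
}
url = "http://sonar63.rd.tut.fi/api/measures/search_history?" + urlencode(params)

# Assumed response shape: values grouped per metric, one entry per analysis date.
sample = {
    "measures": [
        {"metric": "complexity",
         "history": [{"date": "2017-01-01T00:00:00+0000", "value": "10"},
                     {"date": "2017-01-08T00:00:00+0000", "value": "10"}]},
        {"metric": "ncloc",
         "history": [{"date": "2017-01-01T00:00:00+0000", "value": "150"},
                     {"date": "2017-01-08T00:00:00+0000", "value": "155"}]},
    ]
}

# Pivot into {date: {metric: value}} so each analysis becomes one table row.
by_analysis = {}
for measure in sample["measures"]:
    for point in measure["history"]:
        by_analysis.setdefault(point["date"], {})[measure["metric"]] = point["value"]
print(by_analysis)
```

Matching a history date to an analysis from api/project_analyses/search then gives you the per-analysis view you asked about.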

BaseStatefulBolt (Storm Core) vs StateFactory (Storm Trident)

I am confused about using Storm. I am going to measure the status of a data source using its streamed data. The status will be calculated by combining several fields, and these fields can arrive at different time intervals. That's why I need to save the fields in order to measure the status of the data source.
Can I use BaseStatefulBolt, or is Trident the only solution for this scenario?
What is the difference between them? There is a StateFactory inside Trident too.
Thank you.
I think the difference is that Trident is higher level than BaseStatefulBolt; it has options for aggregation such as groupBy, persistentAggregate, and aggregate.
I have used Trident for counting total views per user. If we only care about the current total count, I think we can use Trident with MemoryMapState.Factory() and a class implementing the counting or summing action.
In your case, since you need to manage the status of several current fields, I think implementing BaseStatefulBolt is a good choice; it has KeyValueState for saving the current state.
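The stateful-bolt idea can be sketched outside Storm: keep a per-source map of the latest field values, update it as partial values arrive at different times, and compute a status once every required field is present. This is plain Python illustrating the pattern, not Storm's actual KeyValueState API; the field names and the status rule are made up.

```python
# Sketch of the idea behind a stateful bolt: fields for one data source arrive
# at different times; state keeps the latest value of each field, and a status
# is computed once every required field has been seen.
REQUIRED_FIELDS = {"temperature", "voltage"}  # hypothetical field names

state = {}  # source_id -> {field_name: value}, what KeyValueState would hold

def process(source_id, field, value):
    """Handle one incoming tuple; return a status dict when complete, else None."""
    fields = state.setdefault(source_id, {})
    fields[field] = value
    if REQUIRED_FIELDS <= fields.keys():
        # All fields present: combine them into a status (toy rule).
        return {"source": source_id, "ok": fields["voltage"] > 3.0}
    return None

print(process("sensor-1", "temperature", 21.5))  # incomplete -> None
print(process("sensor-1", "voltage", 3.3))       # complete -> status dict
```

In a real BaseStatefulBolt, `state` would be a KeyValueState so Storm can checkpoint and restore it; the update-then-check logic stays the same.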

Sonarqube report in graph/chart for time (weekly/daily) and number of issues

I want to display a graphical report based on time (weekly/daily) showing the status of static code analysis over a period of time. E.g. the vertical bar would denote the number of issues and the horizontal axis the day/week/month. This would make it easy to keep watch over code quality over time (something like a scrum burn-down chart). Can someone help me with this?
The 5.1.2 issues search web service includes parameters which let you query for issues by creation date. Your best bet is to use AJAX requests to get the data you need and build your widget from there.
Note that you can query iteratively across a date range using &p=1&ps=1 (page=1 and page size=1) to limit the volume of data flying around, and just mine the total value at the top level of the response to get your answer.
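A minimal sketch of one such per-week request: build the issues search URL for a creation-date window with ps=1, then read only the top-level total. The host and dates are placeholders, and the trimmed response shown is illustrative.

```python
from urllib.parse import urlencode

# Count issues created in one week by asking for page size 1 and reading only
# the top-level "total" field. Host and date window are placeholders.
params = {
    "createdAfter": "2017-01-01",
    "createdBefore": "2017-01-08",
    "p": 1,
    "ps": 1,
}
url = "https://sonarqube.example.org/api/issues/search?" + urlencode(params)

# A trimmed sample response: everything except "total" can be ignored.
sample_response = {"total": 42, "p": 1, "ps": 1, "issues": []}
weekly_count = sample_response["total"]
print(weekly_count)  # the bar height for that week
```

Repeating this per week (or per day) yields the series for the vertical bars.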
Here's an example on Nemo

Parse.com. Execute backend code before response

I need to know the relative position of an object in a list. Let's say I need to know the position of a certain wine among all wines added to the database, based on the votes received by users. The app should be able to receive the ranking position as an object property when retrieving a "wine" class object.
This should be easy to do on the backend side, but I've looked at Cloud Code and it seems it is only able to execute code before or after saving or deleting, not before reading and returning a response.
Is there any way to do this? Any workaround?
Thanks.
I think you would have to write a Cloud function to perform this calculation for a particular wine.
https://www.parse.com/docs/cloud_code_guide#functions
This would be a function you would call manually. You would have to provide the "wine" object or objectId as a parameter and then have your Cloud function return the value you need. Keep in mind there are limitations on Cloud functions; read the documentation about time limits. You also don't want to make too many API calls every time you run this. It sounds like your computation could be fairly heavy if your dataset is large and you aren't caching at least some of the information.
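The ranking computation itself is simple: a wine's position is one plus the number of wines with strictly more votes. Sketched here in plain Python over an in-memory dict; a Cloud function would express the same thing as a count query rather than scanning all objects.

```python
# Toy data standing in for the "wine" class: objectId -> vote count.
votes = {"wine-a": 120, "wine-b": 300, "wine-c": 120, "wine-d": 15}

def ranking_position(object_id):
    """1-based rank: one plus the number of wines with strictly more votes.
    In Cloud Code this would be a count query (votes greater than this wine's)."""
    mine = votes[object_id]
    return 1 + sum(1 for v in votes.values() if v > mine)

print(ranking_position("wine-b"))  # 1
print(ranking_position("wine-a"))  # 2 (ties share a position)
```

Because a count query touches only one number, this keeps the per-call work small even on a large dataset, which matters given the time limits mentioned above.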

Solution for graphing application events metrics in real time

We have an application that parses tweets and we want to see the activity in real time. We have tried several solutions without success. Our main problem is that the graphing solution (for example, Graphite) needs a continuous flow of metrics, and when the DB aggregates the metrics it performs an average, not a sum.
We recently saw Cube from Square, which would fit our requirements, but it's too new.
Any alternatives?
I found the solution in the latest version of Graphite:
http://graphite.readthedocs.org/en/latest/config-carbon.html#storage-aggregation-conf
If I understood correctly, you cannot feed Graphite in real time, for instance as soon as you discover a new tweet?
If that's the case, it looks like you can specify a Unix timestamp when updating Graphite (metric_path value timestamp\n), so you could pass in the time of discovery/publication/whatever, regardless of when you process it.
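A tiny sketch of that plaintext line format, with the send step shown only as a comment; the metric name and timestamp are made-up examples.

```python
def carbon_line(metric_path, value, timestamp):
    """Format one update in Graphite's plaintext protocol:
    'metric_path value timestamp\\n', where timestamp is a Unix epoch, so it
    can be the tweet's discovery time rather than the processing time."""
    return "%s %s %d\n" % (metric_path, value, timestamp)

# Hypothetical metric name; the timestamp is when the tweet was discovered.
line = carbon_line("tweets.parsed.count", 1, 1325347200)
print(line)

# Sending is a plain TCP write to carbon (port 2003 by default), e.g.:
#   import socket
#   sock = socket.create_connection(("graphite.example.org", 2003))
#   sock.sendall(line.encode())
```

Combined with storage-aggregation.conf set to sum instead of average, late-arriving tweets land in the right time bucket.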
