How do I filter the results for only the custom graphs in the JMeter HTML report?

I've tried using the following under the custom graph definition, but it filters the entire report:
## Custom graph definition
jmeter.reportgenerator.graph.custom_mm_hit.classname=org.apache.jmeter.report.processor.graph.impl.ResponseTimeOverTimeGraphConsumer
jmeter.reportgenerator.graph.custom_mm_hit.title=Login Response Time Comparison
jmeter.reportgenerator.graph.custom_mm_hit.property.set_Y_Axis=Response Time (ms)
jmeter.reportgenerator.graph.custom_mm_hit.property.set_X_Axis=Over Time
jmeter.reportgenerator.graph.custom_mm_hit.property.set_granularity=${jmeter.reportgenerator.overall_granularity}
jmeter.reportgenerator.graph.custom_mm_hit.property.setSampleVariableName=label
jmeter.reportgenerator.graph.custom_mm_hit.property.setContentMessage=Message for graph point label
jmeter.reportgenerator.exporter.html.series_filter=^(Run 1 Login|Run 2 Login)(-success|-failure)?$
How can I provide separate filtering for each custom graph?
For instance, if there are 3 transactions being monitored, I would like to split one of them out onto its own Response Time Over Time custom graph while keeping all 3 on the original Response Time Over Time graph in the Charts dropdown.
Thanks in advance!

As per JMeter Properties Reference:
jmeter.reportgenerator.exporter.html.series_filter
Regular Expression which Indicates which graph series are filtered in display.
Empty value means no filtering.
Defaults to empty value.
So the filter can only be applied globally, to all the HTML charts at once; there is no per-graph filter property.
The only workaround I can think of is storing the particular transaction's response time in a Sample Variable and plotting it as a custom chart.
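For example, a minimal sketch of that workaround (the variable name loginElapsed and the graph key custom_login_rt are illustrative): add a JSR223 PostProcessor under the Login transaction that runs vars.put("loginElapsed", String.valueOf(prev.getTime())), then declare the variable and a dedicated custom graph in user.properties:
## Expose the variable to the results file
sample_variables=loginElapsed
## Custom graph fed only by the sample variable
jmeter.reportgenerator.graph.custom_login_rt.classname=org.apache.jmeter.report.processor.graph.impl.CustomGraphConsumer
jmeter.reportgenerator.graph.custom_login_rt.title=Login Response Time Over Time
jmeter.reportgenerator.graph.custom_login_rt.property.set_Y_Axis=Response Time (ms)
jmeter.reportgenerator.graph.custom_login_rt.property.set_X_Axis=Over Time
jmeter.reportgenerator.graph.custom_login_rt.property.set_granularity=${jmeter.reportgenerator.overall_granularity}
jmeter.reportgenerator.graph.custom_login_rt.property.setSampleVariableName=loginElapsed
jmeter.reportgenerator.graph.custom_login_rt.property.setContentMessage=Login elapsed time:
Since this graph only plots the loginElapsed variable, it is effectively filtered to that transaction without touching series_filter.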
An alternative solution would be uploading your test results to the BM.Sense analysis service, where you can apply whatever filters you want on the Composite Timeline Analysis tab.

Related

AnyLogic Histogram - Hourly Update Triggered by Event

I am trying to get a histogram object that displays the distribution output of a timeMeasureEnd block, and have managed to get the histogram to display this output as a cumulative distribution and mean.
However, one objective of my model is to measure the hourly average and distribution of the timeMeasureEnd block, and I am unable to make the histogram object reset on an hourly basis using an event block.
At present I have the following:
An event block called HourlyReset in cyclic mode using a 1h timeout based on model time; this element is functioning correctly.
I also have a histogram provisionally called chart that is currently displaying timeMeasureEnd.distribution; this is also functioning correctly.
However, when I specify the action for the event block as chart.reset(); I get an error message:
Description: Type mismatch: cannot convert from TimeMeasureEnd to double. Location: Histogram Test/Main/data - Histogram Data
A second approach I tried was to have the timeMeasureEnd block write to a histogram data object and have the event block reset that histogram data object, but in this instance I get the same error message.
I am clearly missing something here, and I assume it is related to the agent object that is being injected into the system by the source block.
Any pointers in the right direction would be welcomed.
You can just call the resetStats() method of the timeMeasureEnd block. Put timeMeasureEnd.resetStats() inside your event's action; the statistics collected inside this block, and therefore the histogram, will be reset every time the event fires.
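A minimal sketch, assuming your blocks keep the names from the question (this goes in the Action field of the HourlyReset event):
// Runs every hour (cyclic event, 1h timeout based on model time)
timeMeasureEnd.resetStats(); // clears the statistics behind timeMeasureEnd.distribution, so the histogram starts fresh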
Good luck (and please accept this answer if it solves your problem :))

How to fetch metric details per component (file) with SonarQube APIs

I need to get the value of a few metrics for a specific component (a specific java class) for each analysis.
As an example, I need to get something like:
Analysis  Component           Complexity  ncloc
f1234     /mypackage/a.java   10          150
f1235     /mypackage/a.java   10          155
I can get the details of the metrics, which I guess relate to the last analysis.
Here is the API call I am using for the details. As an example, I can get
Metrics: complexity and ncloc
componentKey: org:apache:zookeper2:src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java
http://sonar63.rd.tut.fi/api/measures/component?metricKeys=complexity,ncloc&componentKey=org:apache:zookeper2:src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java
Does anybody know how to get the same metrics for all the analyses, or for a specific analysis?
I can get the list of analyses with the API http://sonar63.rd.tut.fi/api/project_analyses/search?project=org:apache:zookeper2&ps=500, but I cannot find a way to pass the analysis id to the measures API.
You should try the api/measures/search_history endpoint. The documentation should be available at <sonarqube_instance_url>/web_api/api/measures/search_history.
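For example (untested, reusing the component key and metrics from the question; check your instance's web_api documentation for the exact parameter names and paging options):
http://sonar63.rd.tut.fi/api/measures/search_history?component=org:apache:zookeper2:src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java&metrics=complexity,ncloc&ps=500
The response should contain, for each requested metric, one entry per analysis date, which gives the per-analysis values asked for in the question.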

Query for a cache hit rate graph with prometheus

I'm using a Caffeine cache with a Spring Boot application. All metrics are enabled, so I have them in Prometheus and Grafana.
Based on the cache_gets_total metric I want to build a hit-rate graph.
I've tried to get the cache hits:
delta(cache_gets_total{result="hit",name="myCache"}[1m])
and all gets from cache:
sum(delta(cache_gets_total{name="myCache"}[1m]))
Both of the queries work correctly and return values. But when I try to get the hit ratio, I get no data points. The query I've tried:
delta(cache_gets_total{result="hit",name="myCache"}[1m]) / sum(delta(cache_gets_total{name="myCache"}[1m]))
Why doesn't this query work, and how can I get a hit-rate graph from the information I have from Spring Boot and Caffeine?
Run both ("cache hits" and "all gets") queries individually in prometheus and compare label sets you get with results.
For "/" operation to work both sides have to have exactly the same labels (and values). Usually some aggregation is required to "drop" unwanted dimensions/labels (like: if you already have one value from both queries then just wrap them both in sum() - before dividing).
First of all, it is recommended to use increase() instead of delta for calculating the increase of the counter over the specified lookbehind window. The increase() function properly handles counter resets to zero, which may happen on service restart, while delta() would return incorrect results if the given lookbehind window covers counter resets.
Next, Prometheus searches for pairs of time series with identical sets of labels when performing / operation. Then it applies individually the given operation per each pair of time series. Time series returned from increase(cache_gets_total{result="hit",name="myCache"}[1m]) have at least two labels: result="hit" and name="myCache", while time series returned from sum(increase(cache_gets_total{name="myCache"}[1m])) have zero labels because sum removes all the labels after the aggregation.
Prometheus provides a solution to this issue: the on() and group_left() modifiers. The on() modifier limits the set of labels which should be used when searching for pairs of time series with identical label sets, while the group_left() modifier allows matching multiple time series on the left side of / with a single time series on the right side of the / operator. See these docs. So the following query should return the cache hit rate:
increase(cache_gets_total{result="hit",name="myCache"}[1m])
/ on() group_left()
sum(increase(cache_gets_total{name="myCache"}[1m]))
Alternative solutions exist:
To remove all the labels from increase(cache_gets_total{result="hit",name="myCache"}[1m]) with the sum() function:
sum(increase(cache_gets_total{result="hit",name="myCache"}[1m]))
/
sum(increase(cache_gets_total{name="myCache"}[1m]))
To wrap the right part of the query in the scalar() function. This enables the vector op scalar matching rules described here:
increase(cache_gets_total{result="hit",name="myCache"}[1m])
/
scalar(sum(increase(cache_gets_total{name="myCache"}[1m])))
It is also possible to get cache hit rate for all the caches with a single query via sum(...) by (name) template:
sum(increase(cache_gets_total{result="hit"}[1m])) by (name)
/
sum(increase(cache_gets_total[1m])) by (name)

Storm fields grouping

I have the following situation:
There are a number of bolts that calculate different values
These values are sent to a visualization bolt
The visualization bolt opens a web socket and sends the values to be visualized somehow
The thing is, the visualization bolt is always the same, but it sends a message with a different header for each type of bolt that can be its input. For example:
BoltSum calculates sum
BoltDif calculates difference
BoltMul calculates the product
All these bolts use VisualizationBolt for visualization
There are 3 instances of VisualizationBolt in this case
My question is, should I create 3 independent instances, where each instance will have one thread, e.g.
builder.setBolt("forSum", new VisualizationBolt(),1).globalGrouping("bolt-sum");
builder.setBolt("forDif", new VisualizationBolt(),1).globalGrouping("bolt-dif");
builder.setBolt("forMul", new VisualizationBolt(),1).globalGrouping("bolt-mul");
Or should I do the following
builder.setBolt("forAll", new VisualizationBolt(),3)
.fieldsGrouping("forSum", new Fields("type"))
.fieldsGrouping("forDif", new Fields("type"))
.fieldsGrouping("forMul", new Fields("type"));
And emit a type field from each of the previous bolts, so they can be grouped based on it?
What are the advantages?
Also, should I expect that every time bolt-sum will go to the first visualization bolt, bolt-dif to the second, and bolt-mul to the third? They won't be mixed?
I think that should be the case, but it currently isn't in my implementation, so I'm not sure if it's a bug or if I'm missing something.
The first approach, using three instances, is the correct one. Using fieldsGrouping does not ensure that "sum" values go to the "Sum-Visualization-Bolt", nor that sum/diff/mul values end up in distinct (i.e., different) bolt instances.
The semantics of fieldsGrouping are more relaxed: it only guarantees that all tuples of the same type will be processed by a single bolt instance, i.e., it will never be the case that two different bolt instances get the same type.
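For completeness, if you did go with the single-bolt fieldsGrouping approach, each upstream bolt would need to declare and emit the type field. A hypothetical sketch (class name, input field names, and the Storm 2.x API are assumptions):
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Upstream bolt that tags its output with a "type" field so the downstream
// VisualizationBolt can be wired with fieldsGrouping(new Fields("type")).
public class BoltSum extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        double a = input.getDoubleByField("a"); // hypothetical input fields
        double b = input.getDoubleByField("b");
        collector.emit(new Values("sum", a + b)); // "type" = "sum"
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("type", "value"));
    }
}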
I guess you can use Partial Key grouping (partialKeyGrouping). The Storm documentation on stream groupings says:
Partial Key grouping: The stream is partitioned by the fields specified in the grouping, like the Fields grouping, but are load balanced between two downstream bolts, which provides better utilization of resources when the incoming data is skewed. This paper provides a good explanation of how it works and the advantages it provides.
I implemented a simple topology using this grouping, and the chart on the Graphite server shows a better load balance compared to fieldsGrouping. The full source code is here.
topologyBuilder.setBolt(MqttSensors.BOLT_SENSOR_TYPE.getValue(), new SensorAggregateValuesWindowBolt().withTumblingWindow(Duration.seconds(5)), 2)
// .fieldsGrouping(MqttSensors.SPOUT_STATION_01.getValue(), new Fields(MqttSensors.FIELD_SENSOR_TYPE.getValue()))
// .fieldsGrouping(MqttSensors.SPOUT_STATION_02.getValue(), new Fields(MqttSensors.FIELD_SENSOR_TYPE.getValue()))
.partialKeyGrouping(MqttSensors.SPOUT_STATION_01.getValue(), new Fields(MqttSensors.FIELD_SENSOR_TYPE.getValue()))
.partialKeyGrouping(MqttSensors.SPOUT_STATION_02.getValue(), new Fields(MqttSensors.FIELD_SENSOR_TYPE.getValue()))
.setNumTasks(4) // This will create 4 Bolt instances
.addConfiguration(TagSite.SITE.getValue(), TagSite.EDGE.getValue())
;

SonarQube report in graph/chart for time (weekly/daily) and number of issues

I want to display a graphical report based on time (weekly/daily) which shows the status of static code analysis over a period of time. E.g. the vertical bar would denote the number of issues and the horizontal axis would display the time (day/month/week). This would make it easy to keep a watch on code quality over time (something like a Scrum burndown chart). Can someone help me with this?
The 5.1.2 issues search web service includes parameters which let you query for issues by creation date. Your best bet is to use AJAX requests to get the data you need and build your widget from there.
Note that you can query iteratively across a date range using &p=1&ps=1 (page=1 and page size=1) to limit the volume of data flying around, and just mine the total value in the top level of the response to get your answer.
Here's an example on Nemo
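A hypothetical weekly bucket query illustrating the pagination trick above (project key, dates, and parameter names are illustrative; check your version's web_api documentation):
https://<sonarqube_instance_url>/api/issues/search?componentKeys=my:project&createdAfter=2015-06-01&createdBefore=2015-06-08&p=1&ps=1
Read the total value at the top level of the JSON response to get the issue count for that week, then repeat per bucket.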
