We are using jmeter-prometheus-plugin to expose data at a Prometheus endpoint. I want to send the value of a specific JMeter variable to the metrics endpoint (as a metric), but according to the documentation the only way available is as a label. How can I achieve this?
While doing performance testing with JMeter, I encountered a use case where a POST request takes dynamic data from the website. When we run our script, it fails because that data is no longer available on the website.
The payload looks like the one below. It is a POST call and the payload changes every time.
{"marketId":"U-16662943","price":{"up":98,"down":100,"dec":"1.98"},"side":"HOME","line":0,"selectionids":["W2-1"]}
Could anyone suggest how we can make this payload dynamic when we create a script in JMeter?
I can think of 3 possible options:
Duplicate data is not allowed. If this is the case, you can use JMeter functions like __Random(), __RandomString(), __counter(), and so on.
The data you're sending needs to be aligned with the data in the application somehow. In this case you can use the JDBC PreProcessor to build a proper request body based on the data from the application under test's database.
The data is present in a previous response. In that case it's a matter of simple correlation: extract the dynamic values from the previous response using suitable Post-Processors and send variables instead of hard-coded parameters.
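For the correlation option, with the payload above, the extracted values are referenced as JMeter variables in the request body. The variable names below are illustrative and assume matching Post-Processors (e.g. JSON Extractors) have stored them:

```json
{"marketId":"${marketId}","price":{"up":${priceUp},"down":${priceDown},"dec":"${priceDec}"},"side":"HOME","line":0,"selectionids":["${selectionId}"]}
```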
I am using JMeter to perform load testing of a REST API. I want to feed the response data of this test (which includes duration and response size) to another REST API.
Is there any way to do so using the tool?
JMeter's .jtl result files are basically CSV files with a header and metrics for each and every Sampler.
So you can read this data using the CSV Data Set Config and feed it to whatever consumer you want.
If you want to choose which values to store, and where and how, consider the Flexible File Writer plugin.
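If the target is another REST API rather than a file, the same data can also be read programmatically. A minimal Java sketch, assuming a default-style .jtl CSV header; the JSON shape and field names are illustrative, and the actual POST to the second API is left to whatever HTTP client you use:

```java
import java.util.*;

public class JtlFeed {
    // Parse one data line of a JMeter .jtl (CSV) file against its header row.
    // The columns depend on your jmeter.save.saveservice configuration.
    static Map<String, String> parseLine(String header, String line) {
        String[] keys = header.split(",");
        String[] vals = line.split(",");
        Map<String, String> row = new LinkedHashMap<>();
        for (int i = 0; i < keys.length && i < vals.length; i++) {
            row.put(keys[i], vals[i]);
        }
        return row;
    }

    // Build a minimal JSON body with the duration and response size,
    // ready to POST to the second REST API (field names are illustrative).
    static String toJson(Map<String, String> row) {
        return String.format("{\"label\":\"%s\",\"durationMs\":%s,\"bytes\":%s}",
                row.get("label"), row.get("elapsed"), row.get("bytes"));
    }

    public static void main(String[] args) {
        String header = "timeStamp,elapsed,label,responseCode,bytes";
        String line = "1700000000000,245,HomePage,200,5120";
        System.out.println(toJson(parseLine(header, line)));
    }
}
```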
I am using Spring Boot 2 and Apache Camel 2.24 to build an API gateway which exposes REST endpoints to receive JSON/XML requests and does the following:
Validate the incoming JSON/XML.
Convert the incoming request to the format expected downstream.
Push the request to a Camel route which invokes the downstream REST endpoint and returns the response.
Convert the response to the expected format and return it to the client.
Currently I have the route configured as below:
from("direct:camel").process(preprocessor).process(httpClientProcessor).process(postprocessor);
httpClientProcessor - does the main job of invoking the downstream endpoint and returning the response.
preprocessor - splits the configured requests and puts them in a list before passing them to httpClientProcessor.
postprocessor - does the following based on content type:
XML - removes the "xml" declaration from the individual responses and combines them into one single response under one root element.
JSON - combines the JSON responses under one parent array element.
There can be cases where multiple requests need to be sent to the same endpoint, or each to a unique endpoint. Currently I have that logic in httpClientProcessor. One issue with this approach is that I can only invoke the downstream endpoints one after another rather than in parallel (unless I add a thread pool executor in httpClientProcessor).
I am new to Apache Camel and hence started with this basic route configuration. Going through the documentation, I came across Camel features like split(), parallelProcessing(), multicast, and aggregators, but I am not sure how to plug these together to achieve my requirements:
Split the incoming request using a pre-configured delimiter to create multiple requests. The incoming request may or may not contain multiple requests.
Based on the endpoint URL configuration, either post all split requests to the same endpoint or each request to a unique endpoint.
Combine the responses from all of them to form one master response (XML or JSON) as the output of the route.
Please advise.
This sounds like you should have a look at the Camel Split EIP.
Especially
How to split a request into parts and re-aggregate the parts later
How to process each part in parallel
For the dynamic downstream endpoints, you can use a dynamic to (.toD(...)) or the Recipient List EIP. The latter can even send a message to multiple endpoints, or to none at all.
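To make the shape of that flow concrete, here is a plain-Java sketch of the same fan-out/fan-in that split().parallelProcessing() with an AggregationStrategy performs in Camel; callEndpoint is a hypothetical stand-in for the real downstream HTTP call:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

public class SplitAggregate {
    // Hypothetical downstream call; in the real route this is the HTTP
    // invocation done per split part (e.g. via .toD or a recipient list).
    static String callEndpoint(String request) {
        return "resp(" + request + ")";
    }

    // Split the incoming body on a delimiter, call the endpoint for each
    // part in parallel, then aggregate the responses back into one master
    // response, preserving the original part order.
    static String handle(String incoming, String delimiter) {
        List<String> parts = Arrays.asList(incoming.split(delimiter));
        List<CompletableFuture<String>> futures = parts.stream()
                .map(p -> CompletableFuture.supplyAsync(() -> callEndpoint(p)))
                .collect(Collectors.toList());
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(handle("a|b|c", "\\|"));
    }
}
```

In the Camel DSL the same pattern is expressed declaratively: the splitter produces the parts, parallelProcessing runs them concurrently, and the AggregationStrategy (your postprocessor logic) merges the replies.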
I'm trying to ship Aerospike metrics to another node using some available methods, e.g., collectd.
For example, among the Aerospike monitoring metrics, given two fields: say X and Y, how can I define and send a derived metric like Z = X+Y or X/Y?
We could calculate it on the receiver side, but that degrades the overall performance of our application. I would appreciate your guidance.
Thanks.
It can't be done within the Aerospike collectd plugin, as the metrics are shipped more or less immediately after they are read. There's no variable that stores the metrics that have already been shipped.
If you can use the Graphite plugin, it keeps track of all gathered metrics and sends them once at the very end. You can add another stanza for your calculated metrics right before the nmsg line. You'll have to search through the msg[] array for your source metrics.
The Nagios plugin uses a very different method. It pulls a single metric at a time, so a wrapper script would be needed to run the plugin for each operand and do the calculation in the wrapper.
Or you can supplement the existing plugins with your own script(s) just for derived metrics. All of our monitoring plugins utilize the Aerospike Info Protocol, and you can use asinfo to gather metrics for your operands, similar to the previous Nagios method.
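As an illustration of that wrapper approach, a sketch that parses asinfo-style `name=value;name=value` output and computes a derived ratio; the metric names here are illustrative, not actual Aerospike statistics:

```java
import java.util.*;

public class DerivedMetric {
    // Parse "k1=v1;k2=v2;..." as returned by the Aerospike info protocol
    // (e.g. `asinfo -v statistics`), keeping only numeric values.
    static Map<String, Double> parseStats(String raw) {
        Map<String, Double> stats = new HashMap<>();
        for (String pair : raw.split(";")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                try {
                    stats.put(kv[0], Double.parseDouble(kv[1]));
                } catch (NumberFormatException ignored) {
                    // skip non-numeric stats
                }
            }
        }
        return stats;
    }

    public static void main(String[] args) {
        // Stand-in for real asinfo output; metric names are illustrative.
        Map<String, Double> stats =
                parseStats("client_read_success=300;client_read_error=100");
        // Derived metric Z = X / (X + Y), computed before shipping.
        double errorRatio = stats.get("client_read_error")
                / (stats.get("client_read_error") + stats.get("client_read_success"));
        System.out.println(errorRatio);
    }
}
```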
I need to save data sets like (timestamp, event_name, event_value1, event_value2, event_value3, ...) with StatsD. I need this to track custom events in the web app I'm working on.
The official StatsD README states that StatsD expects metrics to be sent in the format:
<metricname>:<value>|<type>
Is there any way to push multiple values, or any workaround to make this possible?
We're currently using Graphite as a backend service, but it can be changed for the sake of adding this feature.
You could use a naming convention to capture this information:
event_name-timestamp-event_value1_name:event_value1|<stat type>
event_name-timestamp-event_value2_name:event_value2|<stat type>
event_name-timestamp-event_value3_name:event_value3|<stat type>
etc.
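A sketch of building those lines, assuming illustrative event and field names and the gauge type; each line would then be sent to StatsD (typically over UDP):

```java
public class StatsdNaming {
    // Encode the event name, timestamp, and field name into the metric name,
    // since the StatsD line protocol (<metricname>:<value>|<type>) carries
    // only a single value per line.
    static String line(String event, long timestamp, String field,
                       long value, String type) {
        return String.format("%s-%d-%s:%d|%s", event, timestamp, field, value, type);
    }

    public static void main(String[] args) {
        // One StatsD line per event value; names and "g" (gauge) are illustrative.
        System.out.println(line("checkout", 1700000000L, "duration_ms", 245, "g"));
        System.out.println(line("checkout", 1700000000L, "items", 3, "g"));
    }
}
```

Note that encoding a raw timestamp into the metric name creates a new Graphite series per event, which can strain storage; a coarser bucket (or dropping the timestamp and relying on Graphite's own time axis) is usually safer.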