JMeter - Show only selected steps in Grafana

I have a situation with JMeter and Grafana: I have a script with sixteen steps, but I want to show only eight of them in Grafana.
So far I have used a Backend Listener, added it to the Thread Group and voila - everything works and the results can be read from Grafana. But now I want to show only those eight steps.
I can't break it into two different scripts... it must be one script in one Thread Group.

Put the specific eight requests and the Backend Listener under the same controller, even a plain Simple Controller, to report only them to Grafana. As the JMeter documentation puts it:
The Simple Logic Controller lets you organize your Samplers and other Logic Controllers. Unlike other Logic Controllers, this controller provides no functionality beyond that of a storage device.
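As a minimal sketch (sampler names are hypothetical), JMeter's scoping rules mean a listener only sees samples generated within its branch of the tree:

```
Test Plan
└── Thread Group
    ├── HTTP Request "Step 1"              (not reported)
    ├── ... steps 2-8 ...                  (not reported)
    └── Simple Controller "Grafana steps"
        ├── HTTP Request "Step 9"          (reported)
        ├── ... steps 10-16 ...            (reported)
        └── Backend Listener               (InfluxDB/Graphite → Grafana)
```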

Please check the Elasticsearch backend listener:
Filters
Only send the samples you want by using Filters! Simply type them as follows in the appropriate field: filter1;filter2;filter3 or sampleLabel_must_contain_this.
Reference: https://github.com/delirius325/jmeter-elasticsearch-backend-listener
In the Backend Listener there is also a field named samplersRegex or samplersList (depending on the implementation). Check whether that can help you apply the filter.
I have not tried this myself, but given the above information it seems possible. Kindly check if this helps.
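As a hedged illustration (the exact parameter name depends on the listener implementation and version, so check its README; the host and sampler labels below are hypothetical), the listener's filter field could list just the eight sampler labels you want:

```
es.host          : elastic.example.com
es.port          : 9200
es.index         : jmeter
es.sample.filter : Step01;Step02;Step03;Step04;Step05;Step06;Step07;Step08
```

Only samplers whose labels match one of the semicolon-separated filters are sent to Elasticsearch, so they are the only ones that show up in Grafana.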

Related

Micrometer tracing for Batch DataFetchers

I am implementing Micrometer for our GraphQL service. One thing I am noticing is that for @BatchMapping methods we are getting a DataFetcherObservationContext for each index in the incoming list.
Example: I am looking up a group of SKUs, and on each of those SKUs I look up the brand information using a @BatchMapping so that I make only one web-service call to our Brand microservice. However, when I look at the observability trace metrics in Grafana, I see an entry for each index (SKU) in the list that I pass to the @BatchMapping. Is there a way to combine these into a single DataFetcherObservationContext so I am not getting one per SKU that I ultimately return?
See the attached screenshot for what I see in Grafana.
I am using all of the out-of-the-box Observation contexts for GraphQL and have just started to dabble in creating my own custom implementation, but I am hoping there is an easier way.
I am expecting this to be one single observation for the entire @BatchMapping, not an individual one for each index of the incoming parent list.
Edit: One other thing I am seeing is StackOverflowErrors if I try to look at the GraphQLContext of the Observation object for all of the parent observations. It seems to walk through so many that it overflows the stack.

NiFi processor to route flows based on a changeable list of regexes

I am trying to use NiFi to act as a router for syslog, based on a list of regexes matched against the syslog body (n.b. as this is just a proof of concept, I can change any part if needed).
The thought process is that via a separate system (for now, vi and a text file 😃) an admin can define a list of criteria (regex format for each seems sensible) which, if matched, would result in syslog messages being sent to a specific separate system (for example, all critical audit data matched by the regex list is sent to the audit system, and all other data goes to the standard log store).
I know that this can be done with RouteOnContent processors, but the properties are configured before the processor starts, and an admin would have to stop the processor every time they need to make an edit.
I would like to load the list of regexes in periodically (automatically) and have the processor properties be updated.
I don't mind if this is all done natively in NiFi (that is preferable, for elegance and to save an external app being written) or via a REST API call driven by a Python script or something (or can NiFi send REST calls to itself?!).
I appreciate that a processor property cannot be updated while running, so it would have to be stopped to be updated, but that's fine as the queue will buffer for the brief period. Maybe a check to see whether the file has changed could avoid outages for no reason, rather than a periodic update regardless; I can solve that problem later.
Thanks
Chris
I think the easiest solution would be to use ScanContent, a processor that takes a dictionary file on disk containing a list of search terms and monitors the file for changes, reloading it in that event. The processor then applies the search terms to the content of incoming flowfiles and lets you route them based on matches. While this processor doesn't support regular expressions as dictionary terms, you could make a slight modification to the code or use it as a baseline for a custom processor with those changes.
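As a minimal sketch, a text-mode dictionary file for ScanContent is just one search term per line (these terms are hypothetical):

```
CRITICAL AUDIT
auth failure
kernel panic
```

Point the processor's Dictionary File property at that path, leave Dictionary Encoding as text, then route the matched relationship to the audit system and unmatched to the standard log store.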
If that doesn't work for you, there are a number of LookupService implementations which show how CSV, XML, property files, etc. can be monitored and read by the controller framework to provide an updated mapping of key/value pairs. These can also serve as a foundation for building a more complicated scan/match flow using the loaded terms/patterns.
Finally, if you have to rely on direct processor property updating, you can script this with NiFi API calls to stop, update, and restart the processors, so it can be done in near-real time. To discover these APIs, visit the API documentation, or execute the desired tasks via the UI in your browser and use the Developer Tools to capture the HTTP requests being made.
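For illustration, here is a minimal Python sketch of that stop/update/restart cycle against the NiFi 1.x REST API. The base URL, processor ID, and the dynamic property name/regex are assumptions, and an unsecured instance is assumed (no auth headers):

```python
import requests

NIFI = "http://localhost:8080/nifi-api"  # assumed unsecured NiFi instance
PROC_ID = "your-processor-uuid"          # hypothetical RouteOnContent processor id

def processor():
    """Fetch the current processor entity (a fresh revision is needed for every write)."""
    return requests.get(f"{NIFI}/processors/{PROC_ID}").json()

def set_run_status(state):
    """state is 'RUNNING' or 'STOPPED'."""
    requests.put(
        f"{NIFI}/processors/{PROC_ID}/run-status",
        json={"revision": processor()["revision"], "state": state},
    ).raise_for_status()

def update_property(name, value):
    """Change one processor property; the processor must be stopped first."""
    requests.put(
        f"{NIFI}/processors/{PROC_ID}",
        json={
            "revision": processor()["revision"],
            "component": {"id": PROC_ID, "config": {"properties": {name: value}}},
        },
    ).raise_for_status()

# Stop, update the routing regex, and restart. In practice you would poll the
# processor state until it actually reports STOPPED before updating.
set_run_status("STOPPED")
update_property("audit", "(?i)critical audit")  # illustrative dynamic property/regex
set_run_status("RUNNING")
```

For RouteOnContent, each dynamic property name becomes a relationship and its value is the matching regex, which is why the sketch updates a property named "audit".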

Connecting NiFi to ElasticSearch

I'm trying to solve a task and will appreciate any help: links to documentation, links to forums, other FAQs besides https://cwiki.apache.org/confluence/display/NIFI/FAQs, or any meaningful answer in this post =).
So, I have the following task:
The initial part of my system collects data every 5-15 minutes from different DB sources. Then I remove duplicates, remove junk, combine data from the different sources according to my logic, and redirect it to the second part of the system as several streams.
As far as I know, NiFi can do this task best =).
Currently I can successfully get information from InfluxDB with the GetHTTP processor. However, I can't configure the same kind of processor for getting information from Elasticsearch with all the necessary options. I'd like to receive data every 5-15 minutes for the time period from "now minus 5-15 minutes" to "now" (depending on the scheduler period), with several additional filters. If I understand it right, this can be achieved either by a subscription to the index or by regular requests to the DB at the desired interval.
I know that NiFi has several Processors designed specifically for Elasticsearch (FetchElasticsearch5, FetchElasticsearchHttp, QueryElasticsearchHttp, ScrollElasticsearchHttp) as well as the GetHTTP and PostHTTP Processors. However, unfortunately, I lack information, or better yet examples, on how to configure their Properties for my purposes =(.
What's the difference between FetchElasticsearchHttp and QueryElasticsearchHttp? Which one fits my task better? What's the difference between GetHTTP and QueryElasticsearchHttp besides several specific fields? Will GetHTTP perform the same way if I tune it as I need?
Any advice?
I will be grateful for any help.
The ElasticsearchHttp processors try to make it easier to interact with ES by generating the appropriate REST API call based on the properties you set. If you know the full URL you need, you could use GetHttp or InvokeHttp. However, the ESHttp processors let you put in just the stuff you're looking for, and they will generate the URL and return the results.
FetchElasticsearch (and its variants) is used to get a particular document when you know the identifier. This is sometimes used after a search/query, to return documents one at a time after you know which ones you want.
QueryElasticsearchHttp is for when you want to do a Lucene-style query of the documents, when you don't necessarily know which documents you want. It will only return up to the value of index.max_result_window for that index; to get more records, you can use ScrollElasticsearchHttp afterwards. NOTE: QueryElasticsearchHttp expects a query that will work as the "q" parameter of the URL, for example q=level:error AND host:web01. This "mini-language" does not support all fields/operators (see here for more details).
For your use case, you likely need InvokeHttp in order to issue the kind of query you describe. This article describes how to issue a query for the last 15 minutes. Once your results are returned, you might need some combination of EvaluateJsonPath and/or SplitJson to work with the individual documents; see the Elasticsearch REST API documentation (and NiFi processor documentation) for more details.
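As a hedged sketch of such a request body (assuming your documents carry a @timestamp field, and es-host/index are placeholders), InvokeHttp would POST something like this to http://es-host:9200/index/_search:

```json
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-15m",
        "lte": "now"
      }
    }
  }
}
```

Scheduling the processor every 15 minutes with a "now-15m" lower bound is what gives you the rolling "now minus interval" window described in the question.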

JMeter - Requests not getting grouped within controllers in graph output

In my test plan I have a series of steps like Login, HomePage, DoSearch, DoTask, and Logout, and each has a number of HTTP requests. I have tried using Simple Controllers, Transaction Controllers, transaction controllers within simple controllers, and vice versa, but I am unable to see the timings at a step level in any graph. At most it shows me the timings for Login, but the other requests are not grouped under either kind of controller. I tried checking "Generate Parent Sample" and "Include duration...", but no luck.
Can someone let me know what needs to be done here?
You need to add a Listener. Each listener provides different information; I usually just use the Aggregate Report.
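If it helps, here is a minimal sketch of a layout where step-level timings appear as their own rows (the step and request names are hypothetical); with "Generate parent sample" checked, each Transaction Controller reports one combined sample per step:

```
Test Plan
└── Thread Group
    ├── Transaction Controller "Login"     (Generate parent sample ✓)
    │   ├── HTTP Request "POST /login"
    │   └── HTTP Request "GET /home"
    ├── Transaction Controller "DoSearch"  (Generate parent sample ✓)
    │   └── HTTP Request "GET /search"
    └── Aggregate Report                   (one row per step)
```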

Jmeter - Response time by resource type

I am using JMeter to test a web page. Obviously it has CSS, images, JS, etc. How can I group the response times by CSS, JS, and images so that I can clearly see the response times broken down by resource type?
You pose an interesting question. The listeners in JMeter show what's at the sibling/child level. Therefore, intuitively, if you want just one type of resource in a given report, you'd need to put all the requests in a single controller and give that controller a listener.
You can avoid restructuring your test by renaming your samplers in a consistent manner, something like "CSS - actual request name".
Then, using the Aggregate Report, you can copy the results into a spreadsheet, sort by name, and get your metrics that way.
I think you can use a Parallel Controller to group each type of resource (static files, AJAX calls, etc.) and a Transaction Controller to group the page you would like to request; under the Transaction Controller you can add the Parallel Controllers in the hierarchy.
