I have a CSV file with a list of active MQ queue names. I am trying to push messages to a dynamic queue by reading the queue name from the CSV, setting it to a variable, and passing that variable to the JMS Point-to-Point sampler as ${queueName}. The messages are not getting posted to the dynamic queue; instead, they are getting posted to a queue literally named ${queueName}.
As per your reply, it's clear that your queue variable isn't getting evaluated properly.
I hope you can find the root cause of the issue by following the steps below:
Check whether the CSV file is in a readable location; it should be in the bin folder (or referenced by its full path)
Check the variable name spelling
Add a Debug Sampler in the middle of the flow and check whether the variable is getting evaluated correctly
Install a step-by-step debugger plugin and step through the JMeter flow
Just make sure the variable name is spelled the same in both the CSV Data Set Config and the request where you are using it. This happens when there is no data available in the file or when there is a spelling mistake in the variable name.
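For reference, a minimal working setup looks something like this (file and queue names are made up for illustration):

queues.csv (one queue name per line, no header row):
QUEUE.APP.ONE
QUEUE.APP.TWO

CSV Data Set Config:
Filename: queues.csv
Variable Names: queueName

JMS Point-to-Point sampler, queue field (JNDI name Request queue):
${queueName}

If the sampler still receives the literal text ${queueName}, the usual culprits are a mismatch between the Variable Names field and the reference, or the CSV Data Set Config not being in scope for the sampler.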
I have a PutGCSObject processor for which I want to capture the error into a flow file attribute.
As in the picture, when there is an error, the processor routes the flow file to failure with all the pre-existing attributes as-is.
I want the error message to be part of that same flow file as an attribute. How can I achieve that?
There is actually a way to get it.
Here is how I do it:
1: I route all ERROR connections to a main "monitoring process group"
2: Here is my "monitoring process group"
In UpdateAttribute I capture the filename as initial_filename
Then in my next step I query the bulletins
I then parse the output as individual attributes.
After I have the parsed bulletin output, I use a RouteOnAttribute proc to drop all the bulletins I don't need (some of them I have already used and notified on).
Once I only have my actual ERROR bulletin left, I use ExecuteStreamCommand to run a Python script (sketched below) that uses the nipyapi module to get more info about the error, such as where it sits in my flow, the hierarchy, a description of the processor that failed, and some proc stats; I also keep a metadata catalog about each proc/process group with their custodians and business use case.
This data is then posted to Sumo Logic for logging, and I also trigger a series of notifications (Slack + a PagerDuty hook to create an incident lifecycle).
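For the ExecuteStreamCommand step, here is a minimal sketch of the kind of script involved. It calls NiFi's REST bulletin-board endpoint directly instead of going through nipyapi, and the host, port, and argument handling are assumptions you would adapt:

import json
import sys

import requests

NIFI_API = "http://localhost:8080/nifi-api"  # assumption: point this at your NiFi instance

def get_error_bulletins(source_id=None):
    # Fetch recent bulletins from the bulletin board, optionally filtered
    # to the processor id of the failed component.
    params = {"limit": 100}
    if source_id:
        params["sourceId"] = source_id
    resp = requests.get(NIFI_API + "/flow/bulletin-board", params=params)
    resp.raise_for_status()
    entries = resp.json()["bulletinBoard"]["bulletins"]
    # Keep only ERROR-level bulletins; each one carries the source name and message.
    return [e["bulletin"] for e in entries
            if e.get("bulletin", {}).get("level") == "ERROR"]

if __name__ == "__main__":
    # ExecuteStreamCommand can pass flow file attributes as command arguments;
    # here the failed processor's id is expected as the first argument (an assumption).
    source = sys.argv[1] if len(sys.argv) > 1 else None
    print(json.dumps(get_error_bulletins(source)))

The real script goes further via nipyapi (flow hierarchy, processor descriptions, stats), but the bulletin fetch above is the core of it.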
I hope this helps
There's no universal way to append error messages as flowfile attributes. Also, we tend to strongly avoid anything like that because of the potential to bubble up error messages with sensitive data to users who might not be authorized to see those details.
I stored all the required parquet tables in a Hadoop filesystem, and each of these files has a unique path for identification. The paths are pushed into a RabbitMQ queue as JSON and consumed by the consumer (in CherryPy) for processing. After successful consumption, the first path is sent for reading, and the following paths are read once the preceding reads are done. Now, to read a specific table, I am using the following lines of code:
import pyarrow.parquet as parquet

data_table = parquet.read_table(path_to_the_file)
Let's say I have five read tasks in the message. The first read is carried out and completes successfully, and now, before the other read tasks have been performed, I manually stop my server. This stop does not send a message-execution-successful acknowledgement to the queue, as four read processes remain. Once I restart the server, the whole consumption and reading process starts again from the initial stage. And now, when the read_table method is called on the first path, it gets stuck completely.
Digging into the workflow of the read_table method, I found where it actually gets stuck.
But I need a further explanation of how this method reads a file inside a Hadoop filesystem.
path = 'hdfs://173.21.3.116:9000/tempDir/test_dataset.parquet'
data_table = parquet.read_table(path)
Can somebody please give me a picture of the internal implementation that runs after calling this method, so that I can find where the issue actually occurs and a solution to it?
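For context, my understanding is that read_table with an hdfs:// URI roughly decomposes into the steps below (a sketch using the legacy pyarrow HDFS API, not the actual source):

import pyarrow as pa
import pyarrow.parquet as parquet

# 1. Parse the URI and connect to the namenode via libhdfs (a JNI call);
#    this connection step is a likely place for an indefinite block.
fs = pa.hdfs.connect('173.21.3.116', 9000)

# 2. Open the file, read the parquet footer/metadata, then the row groups.
with fs.open('/tempDir/test_dataset.parquet', 'rb') as f:
    data_table = parquet.read_table(f)

If that picture is right, the hang is more likely in the libhdfs connection layer than in the parquet decoding itself.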
I need to create a variable in an IIB flow which has to be available throughout the flow. I have gone through variable creation in the documentation. As per my understanding, I should create a SHARED variable in the ESQL module, but the documentation says "Subsequent messages can access the data left by a previous message," which I didn't understand.
Could anyone please suggest how to create a variable whose scope is only that flow (only per each request/instance)?
For example, if I have to capture the total value of some elements in the payload, I want to store the calculated value in the created variable, which I can then use across all the nodes throughout the flow.
The Environment tree structure can be used for your use case:
The environment tree differs from the local environment tree in that a single instance of it is maintained throughout the message flow.
When the message flow processing is complete, the Environment tree is discarded.
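Note that a SHARED variable is the opposite of what you want: it persists across messages (that is what "subsequent messages can access the data left by a previous message" means), while the Environment tree is created per message and discarded when the flow ends. A sketch in ESQL (element names are placeholders; putting user data under Environment.Variables is the documented convention):

-- In an early Compute node: calculate the total and stash it for this message
SET Environment.Variables.orderTotal =
    (SELECT SUM(i.Amount) FROM InputRoot.XMLNSC.Order.Item[] AS i);

-- In any later Compute node in the same flow instance
DECLARE total DECIMAL;
SET total = Environment.Variables.orderTotal;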
I'm fairly new to NiFi and have not had much luck finding an answer, so I'm posting my question here.
In my flow, I need to be able to set a variable on each flow path so that the URL that is called is modified. I've created a crude drawing to show what I'm trying to do: I need to be able to set a variable, let's call it {target}, based on the flow that comes in.
The flows that come in are from a splitter, as I'm reading in file data.
How do I even create a var {target} to set?
How do I get each flow path to set the {target}?
What type of processor do I need to add for this to happen?
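For what it's worth, the usual pattern for this (the names below are made up) is an UpdateAttribute processor on each path setting a flow file attribute, which NiFi Expression Language can then reference in the URL:

Path A: UpdateAttribute sets target = serviceA
Path B: UpdateAttribute sets target = serviceB
Shared InvokeHTTP, Remote URL: http://example.com/api/${target}

Attributes travel with each flow file, so every path carries its own target value into the shared processor.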
I want to correlate this value, 181-418-5889, in the following statement: regSend&transferNumber=181-418-5889".
I used the regular web_reg_save_param, but it failed... any suggestions?
You are using the statement in the wrong location, e.g., placing it just before the request that sends the correlated value rather than just before the request whose response returns the value to the client
You are not receiving the correct page response, and as a result you may not be able to collect the value. The page may be an HTTP 200 page, but the content could be completely off. Always check for an appropriate expected result
Your left boundary, right boundary, and other parameters are incorrect for collecting the value you need (see the example after this list)
You have not been through training and you are being forced by your management to learn this tool via trial and error
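If the boundaries are the issue, a typical registration for the value shown would look like this (Ord and Search here are assumptions), placed just before the step whose response contains the number:

web_reg_save_param("transferNumber",
    "LB=transferNumber=",
    "RB=\"",
    "Ord=1",
    "Search=Body",
    LAST);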
1- I am not using the statement in the wrong location, since I did find the needed value I want to correlate via the Tree function and put the statement just before the step whose response holds this value
2- The page is not an HTTP 200
3- The left and right boundaries are correct, since I checked the text and it does exist twice in the response body
4- I know the tool (LoadRunner), but the application is developed on the ZK platform and I am not sure whether ZK and LoadRunner are compatible, given that I did implement the dtid function in my script to get a static desktop id each time I replay the process