How to create a variable in IIB which has scope for each single flow? - ibm-integration-bus

I need to create a variable in an IIB flow which has to be available throughout the flow. I have gone through variable creation in the documentation. As per my understanding, I should create a SHARED variable in an ESQL module. But the documentation says that "Subsequent messages can access the data left by a previous message," which I didn't understand.
Could anyone please suggest how to create a variable which has scope only for that flow (only per each request/instance)?
For example, I have to capture the total value of some elements in the payload and store the calculated value in the created variable, which I can then use across all the nodes throughout the flow.

The Environment tree structure can be used for your use case. (A SHARED variable is not what you want: SHARED variables persist between messages, which is what "Subsequent messages can access the data left by a previous message" refers to.)
The Environment tree differs from the local environment tree in that a single instance of it is maintained throughout the message flow.
When the message flow processing is complete, the Environment tree is discarded.
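For your running-total example, a minimal ESQL sketch could look like this (the field names under XMLNSC are illustrative, not from your actual message model):

```esql
-- In a Compute node near the start of the flow:
-- sum some payload elements and store the total in the Environment tree
SET Environment.Variables.TotalValue =
    InputRoot.XMLNSC.Order.Item[1].Price + InputRoot.XMLNSC.Order.Item[2].Price;

-- In any later Compute node of the same flow instance:
SET OutputRoot.XMLNSC.Result.Total = Environment.Variables.TotalValue;
```

Because each message gets its own Environment tree, the value is visible to every node in the flow for that one message, and is discarded when processing completes.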

Related

Read ruleset topic/partition by multiple kafka stream instances of the same app

I have a Kafka Streams app that does some processing on a main event topic, and I also have a side topic that is used to apply a ruleset to the main event topic.
Until now the app ran as a single instance: when a rule was applied, a static variable was set so that the other processing operator (the main-topic consumer) would continue evaluating rules as expected. This was necessary because the rule stream is written to a single partition, depending on the rule key literal, e.g. <"MODE", value>, and through the static variable all the other tasks involved were made aware of the change.
When deploying the application to multiple nodes, however, this approach cannot work: with a single consumer group (spanning, e.g., two app instances), only one instance would set its static variable to the correct value and the other instance would never consume that rule value. (Giving each instance a different group id would have the unwanted side effect of consuming the main topic twice.)
On the other hand, making the rule topic a global table would mean that the main processing operator has to query the global table every time it consumes an event, in order to retrieve the latest rules.
Is it possible to use some sort of listener on the global table, so that when a value arrives on that topic some callback code is executed and a static variable is set?
Is there a better/alternative approach to resolve this issue?
Instead of a GlobalKTable, you can fall back to addGlobalStore() that allows you to execute custom code.
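A minimal sketch of that approach, assuming a rule topic named rules-topic with string keys and values (all names here are illustrative, not prescribed by the API):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class RulesGlobalStore {
    public static StreamsBuilder attachRulesStore(StreamsBuilder builder) {
        builder.addGlobalStore(
            // Logging must be disabled: for a global store, the source
            // topic itself acts as the changelog.
            Stores.keyValueStoreBuilder(
                    Stores.inMemoryKeyValueStore("rules-store"),
                    Serdes.String(), Serdes.String())
                  .withLoggingDisabled(),
            "rules-topic",
            Consumed.with(Serdes.String(), Serdes.String()),
            () -> new Processor<String, String, Void, Void>() {
                private KeyValueStore<String, String> store;

                @Override
                public void init(ProcessorContext<Void, Void> context) {
                    store = context.getStateStore("rules-store");
                }

                @Override
                public void process(Record<String, String> record) {
                    // Custom callback: runs on the global thread of every
                    // instance whenever a rule record arrives.
                    store.put(record.key(), record.value());
                }
            });
        return builder;
    }
}
```

The process() method is effectively the listener you asked about: it executes on each instance whenever a rule arrives, so every instance's copy of the store stays current without any static variables, and downstream processors can read the store locally.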

How to pass dynamic queue name in jmeter

I have a CSV file with a list of ActiveMQ queue names. I am trying to push messages to a dynamic queue by reading the queue name from the CSV, setting it in a variable, and passing that variable to the JMS point-to-point sampler as ${queueName}. The messages are not getting posted to the dynamic queue; instead they are posted to a queue literally named ${queueName}.
From your reply, it is clear that your queue variable name is not being evaluated properly.
You should be able to find the root cause by following these steps.
Check whether the CSV file is in a readable location; relative paths are resolved against JMeter's bin folder
Check the variable name spelling
Add a Debug Sampler in the middle and check whether the variable is evaluated correctly
Install the Step-by-Step debugger plugin and step through the JMeter flow
Just make sure the variable spelling is correct in both the CSV Data Set Config and the request where you are using the variable. This happens when there is no data available in the file or when there is a spelling mistake in the variable name.

Write to GlobalStateStore on Kafka Streams

I am trying to use addGlobalStore in the Kafka Streams DSL, where I need to store a few values to which all my threads/instances need global access.
My problem is that I need to update these values periodically inside my topology and make all running threads aware of the new values.
I initialized the global store through builder.addGlobalStore, using the init() function of the Processor passed as the last argument to this function, but I cannot find a way to update the values inside the global store.
The next step in my topology is a Transformer where I can get a hook on the global store through init() and read the stored values, but unfortunately I cannot update them globally. That is, I can update the local copy for the running thread, but other threads/instances cannot see the change.
I read somewhere that this cannot be done in a Transformer, but even if I use a Processor instead, the issue remains.
So, is there a way to update a global state store in a Kafka Streams DSL topology, and if so, how? Or do I need to use the low-level Processor API in order to use a global store?
You cannot update a global store directly. Instead, you must update (= write a message to) the underlying topic of that global store.
If it fits your needs, you could probably use a GlobalKTable instead of a global store.
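A sketch of the suggested update path, assuming the global store is fed from a topic named rules-topic (an illustrative name): rather than calling put() on the store from regular processing code, publish the new value to the store's source topic, and the global thread of every instance will consume it and apply the update locally.

```java
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GlobalStoreUpdater {
    // Name of the topic backing the global store (illustrative).
    static final String RULES_TOPIC = "rules-topic";

    // Publishing to the source topic is the only supported way to change a
    // global store: each instance's global thread consumes the topic and
    // updates its own copy, so all threads/instances see the new value.
    public static void updateRule(Producer<String, String> producer,
                                  String key, String value) {
        producer.send(new ProducerRecord<>(RULES_TOPIC, key, value));
    }
}
```

In production you would pass a real KafkaProducer configured with your brokers; taking the Producer as a parameter just keeps the method easy to test.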

Many flows to one URL caller - How can I set a variable by each flow path to be read in the URL

I'm fairly new to Nifi and have not had much luck finding an answer so I'm posting my question here.
In the flow I have, I need to be able to set a variable for each flow path so that the URL that is called is modified. I've created a crude drawing to show what I'm trying to do, where I need to set a variable, let's call it {target}, based on the flow that comes in.
The flows that come in are from a Splitter as I'm reading in file data.
How do I even create a variable {target} to set?
How do I get each flow path to set {target}?
What type of processor do I need to add for this to happen?

Why do we use the TIBCO Mapper activity?

The TIBCO documentation says
The Mapper activity adds a new process variable to the process definition. This variable can be a simple datatype, a TIBCO ActiveEnterprise schema, an XML schema, or a complex structure.
So my question is: does the Mapper activity only perform this simple function? We can also create process variables in the process definition (by right-clicking on the process definition). I looked for this on Google, but nobody clearly explains why to use this activity; I also tried YouTube, where there is only one video and it does not explain it clearly. I am looking for an example of how it is used in large organizations, a real-world example. Thanks in advance.
The term "process variable" is a bit overloaded I guess:
The process variables that you define in the Process properties are stateful. You can use (read) their values anywhere in the process and you can change their values during the process using the Assign task (yellow diamond with a black equals sign).
The mapper activity produces a new output variable of that task that you can only use (read) in activities that are downstream from it. You cannot change its value after the mapper activity, as for any other activity's output.
The mapper activity is mainly useful to perform complex and reusable data mappings in it rather than in the mappers of other activities. For example, you have a process that has to map its input data into a different data structure and then has to both send this via a JMS message and log it to a file. The mapper allows you to perform the mapping only once rather than doing it twice (both in the Send JMS and Write to File activity).
You'll find that in real world projects, the mapper activity is quite often used to perform data mapping independently of other activities, it just gives a nicer structure to the processes. In contrast the Process Variables defined in the Process properties together with the Assign task are used much less frequently.
Here's a very simple example, where you use the mapper activity once to set a process variable (here the filename) and then use it in two different following activities (create CSV File and Write File). Obviously, the mapper activity becomes more interesting if the mapping is not as trivial as here (though even in this simple example, you only have one place to change how the filename is generated rather than two):
Mapper Activity
First use of the filename variable in Create File
Second use of the filename variable in Write File
Process Variable/Assign Activity vs Mapper Activity
The primary purpose of an Assign task is to store a variable at process level. Any variable in an Assign task can be modified N times in a process. A Mapper, by contrast, is used specifically to introduce a new variable; the same Mapper variable cannot be changed again later in the process.
Memory is allocated to a process variable when the process instance is created, but in the case of the TIBCO Mapper the memory is allocated only when the Mapper activity is executed in the process instance.
A process variable is allocated a single slot of memory which is updated/modified throughout the process instance execution, i.e. N Assign activities will access the same memory allocated to the variable, whereas N Mappers for the same schema will allocate N blocks of memory.
An Assign activity can also be used to accumulate the output of a TIBCO activity inside a group.
