What is the code to get the Processor Name and Processor Group Name - apache-nifi

Is there a way in Groovy code to get the name of the process group that an ExecuteScript processor is in, as well as the name of that ExecuteScript processor itself? If so, what would the code be? Any help would be greatly appreciated.

To get the processor name, use ProcessContext#getName(). The ProcessContext class is referenceable from ExecuteScript via the provided variable context, so the code would be String processorName = context.getName().
To get the process group name, I am not aware of an easy way through the framework code. You can, of course, use the Apache NiFi REST API to request the list of process groups and iterate through them, checking whether each process group contains a processor with the identifier of the current processor.

To get the names of all the processors and process groups, you can use the following code.
final EventAccess access = context.getEventAccess();
final ProcessGroupStatus procGroupStatus = access.getControllerStatus();
final Collection<ProcessGroupStatus> groupStatuses = procGroupStatus.getProcessGroupStatus();
final Collection<ProcessorStatus> processorStatuses = procGroupStatus.getProcessorStatus();
The ProcessorStatus class contains a getName() method, which can be used to get the name of each processor.
Below is the source code of the same class for your reference.
https://github.com/apache/nifi/blob/master/nifi-api/src/main/java/org/apache/nifi/controller/status/ProcessorStatus.java
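If the goal is to find the group that contains a particular processor, a rough sketch along these lines could walk the status tree recursively. It assumes the procGroupStatus object obtained above; the processor id is illustrative and would have to be supplied by you.
import org.apache.nifi.controller.status.ProcessGroupStatus;
import org.apache.nifi.controller.status.ProcessorStatus;

// Recursively search the status tree for the group that contains the given processor id.
ProcessGroupStatus findGroupContaining(ProcessGroupStatus group, String processorId) {
    for (ProcessorStatus ps : group.getProcessorStatus()) {
        if (ps.getId().equals(processorId)) {
            return group; // this group directly contains the processor
        }
    }
    for (ProcessGroupStatus child : group.getProcessGroupStatus()) {
        ProcessGroupStatus found = findGroupContaining(child, processorId);
        if (found != null) {
            return found;
        }
    }
    return null; // not found in this subtree
}

ProcessGroupStatus owningGroup = findGroupContaining(procGroupStatus, "processor-uuid-goes-here");
String groupName = (owningGroup != null) ? owningGroup.getName() : null;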

Related

Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client

I am trying to reuse the code in the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem that I met is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When this is fixed, I receive
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is getting executed on the dunit MemberVM, and the required class is actually part of the package under which the test is being executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can it be done?
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this a correct approach?
Region<?,?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList argList = new ArrayList();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance.createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY).addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem that I met is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When this is fixed, I receive
Once the Function is registered on server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the Serialization filter error. As an example, FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
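A rough sketch of that approach, reusing the name and instance variables from the question (the "create-region-function" id string is illustrative; use whatever your function's getId() returns):
import java.util.ArrayList;
import org.apache.geode.cache.execute.FunctionService;

// On the server side: register the function once so clients can invoke it by id.
FunctionService.registerFunction(new CreateRegionFunction());

// On the client side: execute by id, so the function object is never serialized over the wire.
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
FunctionService.onServers(instance)
    .setArguments(argList)
    .execute("create-region-function")
    .getResult();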
The problem is that the code is getting executed on the dunit MemberVM, and the required class is actually part of the package under which the test is being executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can it be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
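As a sketch, the filter can be passed as a member property when starting the server VM. The exact test-rule API differs between Geode versions, so treat the startServerVM call below as an assumption to adapt; the filter pattern is illustrative as well.
import java.util.Properties;

// Accept the test's own classes in the serialization filter (pattern is illustrative).
Properties props = new Properties();
props.setProperty("validate-serializable-objects", "true");
props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");

// Assumed usage with ClusterStartupRule / MemberVM; adjust to the rule API you are using.
MemberVM server = cluster.startServerVM(1, props, locator.getPort());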
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a Region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
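For reference, creating the region up front with gfsh is a one-liner (the region name and type below are just examples):
gfsh> create region --name=myDynamicRegion --type=PARTITION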

How to pass a FileWriter object between Beanshell processors

I am using the Beanshell processor in JMeter, where I define a FileWriter object in one Beanshell processor and pass the object (fstream) to another Beanshell processor.
String filename = "test.csv";
FileWriter fstream = new FileWriter(filename , true);
props.putObject("fstream", fstream);
Now I am trying to get the fstream object in another Beanshell processor:
fstream = props.getObject("fstream");
When I run the JMeter script, I get the following error message:
Error in method invocation: Method putObject( java.lang.String,
java.io.FileWriter ) not found in java.util.Properties'
I know why I am getting this error: I am trying to pass a FileWriter object via putObject, but that method is not found in the Properties class.
So how should I pass a FileWriter object between Beanshell processors? Please explain and provide sample code.
If it is within the bounds of one Thread Group you can go for vars.putObject("fstream", fstream);
If the scenario assumes different thread groups you can go for just props.put("fstream", fstream);, since that method already accepts an Object.
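As a minimal sketch of the props-based variant (Beanshell accepts plain Java syntax; the file name and data written are illustrative):
// In the first Beanshell processor:
FileWriter fstream = new FileWriter("test.csv", true);
props.put("fstream", fstream);

// In a Beanshell processor of another thread group:
FileWriter fstream = (FileWriter) props.get("fstream");
fstream.write("some,data\n");
fstream.flush();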
There is the bsh.shared namespace, so you can share an arbitrary object via it, like:
in 1st scripting element:
bsh.shared.fstream = fstream
in 2nd scripting element:
fstream = bsh.shared.fstream
Consider switching to JSR223 test elements and the Groovy language, as this approach has much better performance compared to Beanshell.

Read Nifi Counter value programmatically

I am developing a custom processor in which I want to read the value of NiFi counters. Is there a way to read the counters' values other than using the NiFi REST API "http://nifi-host:port/nifi-api/counters"?
No. Apache NiFi doesn't have any straightforward APIs available to read the counter values programmatically. An easy approach would be to use the GetHTTP processor with the NiFi REST API URL that you mentioned: http(s)://nifi-host:port/nifi-api/counters.
Then use EvaluateJsonPath to just parse and read the counter value from the response JSON received from the GetHTTP processor.
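For example, assuming the usual layout of the /nifi-api/counters response (worth verifying against your NiFi version), a JsonPath expression along these lines could pull out a single counter's value; the counter name is illustrative:
$.counters.aggregateSnapshot.counters[?(@.name == 'MyCounter')].value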
Based on Andy's suggestion, I have used reflection to read the counters as follows:
private void printCounters(ProcessSession session) throws NoSuchFieldException, SecurityException, IllegalArgumentException, IllegalAccessException, NoSuchMethodException, InvocationTargetException {
    Class<?> standardProcessSession = session.getClass();
    Field fieldContext = standardProcessSession.getDeclaredField("context");
    fieldContext.setAccessible(true);
    Object processContext = fieldContext.get(session);
    Class<?> processContextClass = processContext.getClass();
    Field fieldCounterRepo = processContextClass.getDeclaredField("counterRepo");
    fieldCounterRepo.setAccessible(true);
    Object counterRepo = fieldCounterRepo.get(processContext);
    Method declaredMethod = counterRepo.getClass().getDeclaredMethod("getCounters");
    ArrayList<Object> counters = (ArrayList<Object>) declaredMethod.invoke(counterRepo);
    for (Object obj : counters) {
        Method methodName = obj.getClass().getDeclaredMethod("getName");
        methodName.setAccessible(true);
        Method methodVal = obj.getClass().getDeclaredMethod("getValue");
        methodVal.setAccessible(true);
        System.out.println("Counter name: " + methodName.invoke(obj));
        System.out.println("Counter value: " + methodVal.invoke(obj));
    }
}
NOTE: The NiFi version is 1.5.0.
While it is not as easy to read/write counter values as it is to modify flowfile attributes, Apache NiFi does have APIs for modifying counters. However, the intent of counters is to provide information to human users, not for processors to make decisions based on their values. Depending on what you are trying to accomplish, you might be more successful using local maps or DistributedMapCacheServer and DistributedMapCacheClientService. If the values are only relevant to this processor, you can just use an in-memory map to store and retrieve the values. If you need to communicate with other processors, use the cache (example here).
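For the purely local case, a minimal sketch could be as simple as keeping a concurrent map of counters inside the processor (the field and method names here are illustrative):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// Lives inside the custom processor; values are local to this processor instance only.
private final ConcurrentMap<String, AtomicLong> localCounters = new ConcurrentHashMap<>();

private long incrementLocalCounter(String name, long delta) {
    return localCounters.computeIfAbsent(name, k -> new AtomicLong()).addAndGet(delta);
}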
Pierre Villard has written a good tutorial about using counters, and you can use ProcessSession#adjustCounter(String counter, long delta, boolean immediate) to modify counter values. Because counters were not designed for programmatic access, there is no way to retrieve the CounterRepository instance from the RepositoryContext object. You may also want to read about Reporting Tasks; depending on your goal, this may be a better way to achieve it.
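As an example, a sketch of adjusting a counter from a custom processor's onTrigger (the counter name and the REL_SUCCESS relationship are assumed to be defined by your processor):
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }
    // Increment the counter by one and make the change visible immediately.
    session.adjustCounter("records-processed", 1, true);
    session.transfer(flowFile, REL_SUCCESS);
}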

ConstraintViolationException - extract field name which caused exception

I'm using hibernate-validator with a JAX-RS service to validate query parameters using @NotNull:
@GET
public Response doSomething(@NotNull @QueryParam("myParam") String myParam) {...}
This works as expected and throws a ConstraintViolationException if myParam is null. I'd like to extract the param name associated with the violation (e.g. myParam) and return that in the response message to the client, but there does not appear to be an obvious way of extracting this from the exception. Can someone provide some insight?
As of BeanValidation 1.1 there is a ParameterNameProvider contract which makes parameter name extraction configurable. As mentioned in the other answer, with Java 8 you can get the parameter names in the byte code provided you compile with the -parameters flag. Use the ReflectionParameterNameProvider in this case. However, even with Java 7 you can get parameter names, for example by using the ParanamerParameterNameProvider. This parameter name provider is based on Paranamer and there are several ways to set it up.
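Once parameter names are available, one way to pull the offending name out of the exception is to walk each violation's property path and look for the PARAMETER node. A sketch, assuming exception is the caught ConstraintViolationException; without -parameters or a custom ParameterNameProvider the name may come back as arg0, arg1, and so on:
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.ElementKind;
import javax.validation.Path;

// Collect the names of the method parameters that failed validation.
for (ConstraintViolation<?> violation : exception.getConstraintViolations()) {
    for (Path.Node node : violation.getPropertyPath()) {
        if (node.getKind() == ElementKind.PARAMETER) {
            String paramName = node.getName(); // e.g. "myParam"
            // build the error response from paramName and violation.getMessage()
        }
    }
}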
This only works if you're using Java 8, as prior to Java 8 the actual parameter name was lost at compile time. It's now retained, assuming you compile and run on Java 8. See also http://docs.jboss.org/hibernate/validator/5.2/reference/en-US/html_single/#_java_8_support

Trying to generate JMeter Test Plan (jmx) With JMeter API : Not able to save CSVDataSet element

I am creating a JMeter jmx file dynamically by using JMeter APIs. I am able to add a ThreadGroup within a TestPlan and a JavaSampler within the ThreadGroup. But when I add a CSVDataSet element within the Java Sampler, it does not get saved properly.
The following code is used to create a new CSVDataSet element
CSVDataSet csvDataSet = new CSVDataSet();
csvDataSet.setName("CSV Data Set");
csvDataSet.setComment("Sample CSV Data Set");
csvDataSet.setDelimiter(",");
csvDataSet.setFileEncoding("");
csvDataSet.setFilename("d:\\jmeter\\data.csv"); // variable
csvDataSet.setQuotedData(true);
csvDataSet.setRecycle(true);
csvDataSet.setShareMode(shareMode.all);
csvDataSet.setStopThread(false);
csvDataSet.setVariableNames("firstname, lastname, email"); // variable
csvDataSet.setEnabled(true);
When this is saved using SaveService.saveTree, the final jmx does not contain all the values which were set.
<org.apache.jorphan.collections.HashTree>
<CSVDataSet testname="CSV Data Set Config" enabled="true">
<stringProp name="TestPlan.comments">Sample CSV Data Set Config</stringProp>
</CSVDataSet>
<org.apache.jorphan.collections.HashTree/>
As seen above, only the test name, enabled, and comments are added. The rest of the variables are completely ignored.
Is there something that needs to be set in order to get all the values as expected?
Or is this a bug in JMeter? I am using version 2.11.
The basic code is as per section 4.3 of the following link:
http://blazemeter.com/blog/5-ways-launch-jmeter-test-without-using-jmeter-gui
To that I add the code shown above. The way it is added is:
testPlanTree.add("testPlan", testPlan);
testPlanTree.add("loopController", loopController);
testPlanTree.add("threadGroup", threadGroup);
testPlanTree.add("httpSampler", httpSampler);
testPlanTree.add("csvDataSet", csvDataSet);
SaveService.saveTree(testPlanTree, new FileOutputStream("d:\\test.jmx"));
The output of the CSVDataSet block is as shown above.
After looking into the JMeter source code, it seems all the properties are set using the setProperty function rather than the individual setter functions. So using the following code does the job of creating the CSVDataSet element properly:
csvDataSet.setProperty("delimiter", ",");
csvDataSet.setProperty("fileEncoding", "");
csvDataSet.setProperty("filename", "d:\\data.csv");
csvDataSet.setProperty("quotedData", true);
csvDataSet.setProperty("recycle", true);
csvDataSet.setProperty("shareMode", "shareMode.all");
csvDataSet.setProperty("stopThread", false);
csvDataSet.setProperty("variableNames", "var1, var2, var3");
Not sure why the setters are not used in the code, but this seems to be the way to go for now.
It is clearly not a bug in JMeter, otherwise the CSV Data Set could not be saved at all.
It is probably an issue in the way you build the HashTree, but unless you show the full code, you cannot get help.
By the way, as I said in a previous answer, what you are trying to do, building different tests based on input parameters, is not a good idea IMHO; the approach will be very fragile with respect to upcoming versions of JMeter.
JMeter provides ways to do it that you should follow.