Read NiFi Counter value programmatically - apache-nifi

I am developing a custom processor in which I want to read the value of NiFi counters. Is there a way to read counter values other than using the NiFi REST API "http://nifi-host:port/nifi-api/counters"?

No. Apache NiFi doesn't have a straightforward public API for reading counter values programmatically. An easy approach would be to use the GetHTTP processor with the NiFi REST API URL you mentioned: http(s)://nifi-host:port/nifi-api/counters.
Then use EvaluateJsonPath to parse the counter value out of the response JSON received from the GetHTTP processor.
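For example, assuming the counters response keeps its usual shape (entries nested under counters.aggregateSnapshot.counters, each with name and value fields), an EvaluateJsonPath property for a counter named MyCounter could use the following JsonPath; treat this as a sketch to verify against your actual response:
counter.value = $.counters.aggregateSnapshot.counters[?(@.name == 'MyCounter')].value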

Based on Andy's suggestion, I have used reflection to read the counters as follows:
private void printCounters(ProcessSession session) throws NoSuchFieldException, SecurityException, IllegalArgumentException, IllegalAccessException, NoSuchMethodException, InvocationTargetException {
    // The concrete session is a StandardProcessSession; pull its private "context" field
    Class<?> standardProcessSession = session.getClass();
    Field fieldContext = standardProcessSession.getDeclaredField("context");
    fieldContext.setAccessible(true);
    Object processContext = fieldContext.get(session);

    // From that context, pull the private counter repository
    Class<?> processContextClass = processContext.getClass();
    Field fieldCounterRepo = processContextClass.getDeclaredField("counterRepo");
    fieldCounterRepo.setAccessible(true);
    Object counterRepo = fieldCounterRepo.get(processContext);

    // Invoke CounterRepository#getCounters() and print each counter via its getters
    Method declaredMethod = counterRepo.getClass().getDeclaredMethod("getCounters");
    List<Object> counters = (List<Object>) declaredMethod.invoke(counterRepo);
    for (Object obj : counters) {
        Method methodName = obj.getClass().getDeclaredMethod("getName");
        methodName.setAccessible(true);
        Method methodVal = obj.getClass().getDeclaredMethod("getValue");
        methodVal.setAccessible(true);
        System.out.println("Counter name: " + methodName.invoke(obj));
        System.out.println("Counter value: " + methodVal.invoke(obj));
    }
}
NOTE: NiFi version is 1.5.0.

While it is not as easy to read/write counter values as it is to modify flowfile attributes, Apache NiFi does have APIs for modifying counters. However, the intent of counters is to provide information to human users, not for processors to make decisions based on their values. Depending on what you are trying to accomplish, you might be more successful using local maps or DistributedMapCacheServer and DistributedMapCacheClientService. If the values are only relevant to this processor, you can just use an in-memory map to store and retrieve the values. If you need to communicate with other processors, use the cache (example here).
Pierre Villard has written a good tutorial about using counters, and you can use ProcessSession#adjustCounter(String counter, long delta, boolean immediate) to modify counter values. Because counters were not designed for programmatic access, there is no supported way to retrieve the CounterRepository instance from the RepositoryContext object. You may also want to read about Reporting Tasks; depending on your goal, they may be a better way to achieve it.
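As a minimal sketch of the writable side of the API, inside a processor's onTrigger (the relationship and counter name here are illustrative, not from the original answer):
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }
    // true = adjust the counter immediately rather than waiting for the session to commit
    session.adjustCounter("records.processed", 1L, true);
    session.transfer(flowFile, REL_SUCCESS);
}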

Related

Can I store sensitive data in a Vert.x context in a Quarkus application?

I am looking for a place to store some request scoped attributes such as user id using a Quarkus request filter. I later want to retrieve these attributes in a Log handler and put them in the MDC logging context.
Is Vertx.currentContext() the right place to put such request attributes? Or can the properties I set on this context be read by other requests?
If this is not the right place to store such data, where would be the right place?
Yes ... and no :-D
Vertx.currentContext() can provide two types of objects:
a root context, shared between all the concurrent processing executed on that event loop (so do NOT share data there)
duplicated contexts, which are local to one processing and its continuation (you can share data in these)
In Quarkus 2.7.2, we have done a lot of work to improve our support of duplicated contexts. Whereas before they were only used for HTTP, they are now also used for gRPC and @ConsumeEvent. Support for Kafka and AMQP is coming in Quarkus 2.8.
Also, in Quarkus 2.7.2, we introduced two new features that could be useful:
you cannot store data in a root context. We detect that for you and throw an UnsupportedOperationException. The reason is safety.
we introduced a new utility class (io.smallrye.common.vertx.ContextLocals) to access the context locals.
Here is a simple example:
AtomicInteger counter = new AtomicInteger();

public Uni<String> invoke() {
    Context context = Vertx.currentContext();
    ContextLocals.put("message", "hello");
    ContextLocals.put("id", counter.incrementAndGet());

    return invokeRemoteService()
            // Switch back to our duplicated context:
            .emitOn(runnable -> context.runOnContext(runnable))
            .map(res -> {
                // Can still access the context local data
                String msg = ContextLocals.<String>get("message").orElseThrow();
                Integer id = ContextLocals.<Integer>get("id").orElseThrow();
                return "%s - %s - %d".formatted(res, msg, id);
            });
}

What is the code to get the Processor Name and Processor Group Name

Is there a way in Groovy code to get the name of the process group that the ExecuteScript processor is in, and the name of the ExecuteScript processor itself? If so, what would the code be? Any help would be greatly appreciated.
To get the processor name, use ProcessContext#getName(). The ProcessContext class is referenceable from ExecuteScript via the provided variable context, so the code would be String processorName = context.getName().
To get the process group name, I am not aware of an easy way through the framework code. You can, of course, use the Apache NiFi REST API to request the list of process groups and iterate through, checking to see if the process group contains a processor with the identifier of the current processor.
To get the names of all the processors and process groups, you can use the following code.
final EventAccess access = context.getEventAccess();
final ProcessGroupStatus procGroupStatus = access.getControllerStatus();
final Collection<ProcessGroupStatus> groupStatuses = procGroupStatus.getProcessGroupStatus();
final Collection<ProcessorStatus> processorStatuses = procGroupStatus.getProcessorStatus();
The ProcessorStatus class contains a getName() method, which can be used to get the name of each processor.
Below is the source code of the same class for your reference.
https://github.com/apache/nifi/blob/master/nifi-api/src/main/java/org/apache/nifi/controller/status/ProcessorStatus.java
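Putting those pieces together, here is a small sketch (the helper method is mine, not framework code) that walks the status tree recursively and prints every process group and processor name:
private void printNames(final ProcessGroupStatus groupStatus) {
    System.out.println("Process group: " + groupStatus.getName());
    for (final ProcessorStatus processorStatus : groupStatus.getProcessorStatus()) {
        System.out.println("  Processor: " + processorStatus.getName());
    }
    // Recurse into the child process groups
    for (final ProcessGroupStatus child : groupStatus.getProcessGroupStatus()) {
        printNames(child);
    }
}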

AWS integration Spring: Extend Visibility Timeout

Is it possible to extend the visibility timeout of a message that is in flight?
See:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html
Section: Changing a Message's Visibility Timeout.
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQSClient.html#changeMessageVisibility-com.amazonaws.services.sqs.model.ChangeMessageVisibilityRequest-
In summary, I want to be able to extend the initially set visibility timeout for a given message that is in flight.
For example, if 15 seconds have passed, I then want to extend the timeout by another 20 seconds. There is a better example in the Java docs linked above.
From my understanding of the links above, you can do this on the Amazon side.
Below are my current settings:
SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(queue);
adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.ON_SUCCESS);
adapter.setMaxNumberOfMessages(1);
adapter.setSendTimeout(2000);
adapter.setVisibilityTimeout(200);
adapter.setWaitTimeOut(20);
Is it possible to extend this timeout?
Spring Cloud AWS supports this starting with version 2.0. Injecting a Visibility parameter into your SQS listener method does the trick:
@SqsListener(value = "my-sqs-queue")
void onMessageReceived(@Payload String payload, Visibility visibility) {
    ...
    var extension = visibility.extend(20);
    ...
}
Note that extend works asynchronously and returns a Future. So if, further down the processing, you want to be sure that the message's visibility really has been extended on the AWS side of things, either block on the Future using extension.get() or query it with extension.isDone().
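For instance, a minimal blocking variant of the listener body (same hypothetical queue as above):
@SqsListener(value = "my-sqs-queue")
void onMessageReceived(@Payload String payload, Visibility visibility) throws Exception {
    // Ask AWS for 20 more seconds and wait for the confirmation before continuing
    visibility.extend(20).get();
    // ... long-running processing that needs the extra time ...
}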
OK, I see your point.
We can change visibility for particular message using API:
AmazonSQS.changeMessageVisibility(String queueUrl, String receiptHandle, Integer visibilityTimeout)
For this purpose, in the downstream flow, you have to get access to (inject) the AmazonSQS bean and extract special headers from the Message:
@Autowired
AmazonSQS amazonSqs;

@Autowired
ResourceIdResolver resourceIdResolver;

...

MessageHeaders headers = message.getHeaders();
DestinationResolver<String> destinationResolver = new DynamicQueueUrlDestinationResolver(this.amazonSqs, this.resourceIdResolver);
String queueUrl = destinationResolver.resolveDestination((String) headers.get(AwsHeaders.QUEUE));
String receiptHandle = (String) headers.get(AwsHeaders.RECEIPT_HANDLE);
amazonSqs.changeMessageVisibility(queueUrl, receiptHandle, YOUR_DESIRED_VISIBILITY_TIMEOUT);
But eh, I agree that we should provide something on the matter as an out-of-the-box feature. That may even be something similar to QueueMessageAcknowledgment, as a new header. Or even just one more changeMessageVisibility() method delegating to this one.
Please, raise a GH issue for Spring Cloud AWS project on the matter with link to this SO topic.

Parquet-MR AvroParquetWriter - how to convert data to Parquet (with Specific Mapping)

I'm working on a tool for converting data from a homegrown format to Parquet and JSON (for use in different settings with Spark, Drill and MongoDB), using Avro with Specific Mapping as the stepping stone. I have to support conversion of new data on a regular basis and on client machines which is why I try to write my own standalone conversion tool with a (Avro|Parquet|JSON) switch instead of using Drill or Spark or other tools as converters as I probably would if this was a one time job. I'm basing the whole thing on Avro because this seems like the easiest way to get conversion to Parquet and JSON under one hood.
I used Specific Mapping to profit from static type checking, wrote an IDL, converted that to a schema.avsc, generated classes and set up a sample conversion with the specific constructor, but now I'm stuck configuring the writers. All Avro-Parquet conversion examples I could find [0] use AvroParquetWriter with deprecated signatures (mostly: Path file, Schema schema) and Generic Mapping.
AvroParquetWriter has only one non-deprecated constructor, with this signature:
AvroParquetWriter(
    Path file,
    WriteSupport<T> writeSupport,
    CompressionCodecName compressionCodecName,
    int blockSize,
    int pageSize,
    boolean enableDictionary,
    boolean enableValidation,
    WriterVersion writerVersion,
    Configuration conf
)
Most of the parameters are not hard to figure out but WriteSupport<T> writeSupport throws me off. I can't find any further documentation or an example.
Staring at the source of AvroParquetWriter I see GenericData model pop up a few times but only one line mentioning SpecificData: GenericData model = SpecificData.get();.
So I have a few questions:
1) Does AvroParquetWriter not support Avro Specific Mapping? Or does it by means of that SpecificData.get() method? The comment "Utilities for generated Java classes and interfaces." over SpecificData.class seems to suggest that, but how exactly should I proceed?
2) What's going on in the AvroParquetWriter constructor, is there an example or some documentation to be found somewhere?
3) More specifically: the signature of the WriteSupport method asks for Schema avroSchema and GenericData model. What does GenericData model refer to? Maybe I'm not seeing the forest for the trees here...
To give an example of what I'm aiming for, my central piece of Avro conversion code currently looks like this:
DatumWriter<MyData> avroDatumWriter = new SpecificDatumWriter<>(MyData.class);
DataFileWriter<MyData> dataFileWriter = new DataFileWriter<>(avroDatumWriter);
dataFileWriter.create(schema, avroOutput);
The Parquet equivalent currently looks like this:
AvroParquetWriter<SpecificRecord> parquetWriter = new AvroParquetWriter<>(parquetOutput, schema);
but this is not more than a beginning and is modeled after the examples I found, using the deprecated constructor, so will have to change anyway.
Thanks,
Thomas
[0] Hadoop - The definitive Guide, O'Reilly, https://gist.github.com/hammer/76996fb8426a0ada233e, http://www.programcreek.com/java-api-example/index.php?api=parquet.avro.AvroParquetWriter
Try AvroParquetWriter.builder:
MyData obj = ... // should be an Avro object
ParquetWriter<Object> pw = AvroParquetWriter.builder(file)
        .withSchema(obj.getSchema())
        .build();
pw.write(obj);
pw.close();
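Since the question is specifically about specific mapping, note that the builder also accepts a data model. A hedged sketch, assuming parquet-avro's Builder#withDataModel and the generated MyData class (verify the builder methods against your parquet-avro version):
Path file = new Path("/tmp/mydata.parquet");
MyData obj = ...; // generated specific-record instance

try (ParquetWriter<MyData> writer = AvroParquetWriter.<MyData>builder(file)
        .withSchema(MyData.getClassSchema())
        .withDataModel(SpecificData.get()) // use generated (specific) classes rather than GenericData
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .build()) {
    writer.write(obj);
}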
Thanks.

Hibernate search, convert byte[] to List<LuceneWork>

As of Hibernate Search 3.1.1, when one wanted to send an indexed entity to a JMS queue for further processing, it was enough, in the onMessage() method of the processing MDB, to apply a cast to obtain the list of LuceneWork, e.g.
List<LuceneWork> queue = (List<LuceneWork>) objectMessage.getObject();
But in version 4.2.0 this is no longer an option, as objectMessage.getObject() returns a byte[].
How can I deserialize this byte[] into a List<LuceneWork>?
I've inspected the message and saw that I have the value for JMSBackendQueueTask.INDEX_NAME_JMS_PROPERTY.
You could extend AbstractJMSHibernateSearchController and have it deal with these details, or have a look at its source, which contains:
indexName = objectMessage.getStringProperty(JmsBackendQueueTask.INDEX_NAME_JMS_PROPERTY);
indexManager = factory.getAllIndexesManager().getIndexManager(indexName);
if (indexManager == null) {
    log.messageReceivedForUndefinedIndex(indexName);
    return;
}
queue = indexManager.getSerializer().toLuceneWorks((byte[]) objectMessage.getObject());
indexManager.performOperations(queue, null);
Compared to the older 3.x versions, there are two main design differences to keep in mind:
The Serializer service is pluggable, so it needs to be looked up
Each index (identified by name) can have an independent backend
The serialization is now performed (by default) using Apache Avro, as newer Lucene classes are not Serializable.
