MiNiFi - How to get the processor list and number of queued flowfiles? - apache-nifi

I'd like to monitor the state of a running MiNiFi flow, in particular to get the list of processors and the number of queued flowfiles for each one. I'm trying to use the FlowStatus script query, e.g.:
$ ./minifi.sh flowStatus systemdiagnostics:processorstats
{"controllerServiceStatusList":null,"processorStatusList":null,"connectionStatusList":null,"remoteProcessGroupStatusList":null,"instanceStatus":null,"systemDiagnosticsStatus":{"garbageCollectionStatusList":null,"heapStatus":null,"contentRepositoryUsageList":null,"flowfileRepositoryUsage":null,"processorStatus":{"loadAverage":1.99,"availableProcessors":2}},"reportingTaskStatusList":null,"errorsGeneratingReport":[]}
$ ./minifi.sh flowStatus processor:all:health,stats,bulletins
{"controllerServiceStatusList":null,"processorStatusList":[],"connectionStatusList":null,"remoteProcessGroupStatusList":null,"instanceStatus":null,"systemDiagnosticsStatus":null,"reportingTaskStatusList":null,"errorsGeneratingReport":[]}
$ ./minifi.sh flowStatus processor:MyProcessorName:health,stats,bulletins
{"controllerServiceStatusList":null,"processorStatusList":[],"connectionStatusList":null,"remoteProcessGroupStatusList":null,"instanceStatus":null,"systemDiagnosticsStatus":null,"reportingTaskStatusList":null,"errorsGeneratingReport":["Unable to get status for request 'processor:MyProcessorName:health,stats,bulletins' due to:org.apache.nifi.minifi.status.StatusRequestException: No processor with key MyProcessorName to report status on"]}
but I'm receiving only nulls. What should I do to be able to retrieve the data I want (enable some option in the config?)? Is it possible using flowStatus queries? My running flow contains several processors, so why does systemdiagnostics show only two availableProcessors, and why can't I use the flowStatus processor command to get any processor data?
Unfortunately the NiFi/MiNiFi documentation is sparse, so I'm not even sure whether processor data (the number of queued elements and the processor list) can be retrieved this way. If not, do you know some other way to do it?

Do you have any processors in a flow running on this instance of MiNiFi? Each response from the queries you've submitted shows no processors. In fact, the third example says so explicitly -- "Unable to get status for request 'processor:MyProcessorName:health,stats,bulletins' due to:org.apache.nifi.minifi.status.StatusRequestException: No processor with key MyProcessorName to report status on". Note also that availableProcessors under systemdiagnostics reports the number of CPU cores visible to the JVM, not the number of processors in your flow.
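If the flow does define processors, the flowStatus query has to use the processor's name exactly as it appears in the flow configuration, and queued flowfile counts are tracked per connection rather than per processor (note the connectionStatusList field in your responses). A minimal sketch, assuming the default MiNiFi layout, a processor named TailFile in conf/config.yml, and the connection query options described in the MiNiFi admin guide (the path, the name and the options here are illustrative):

$ grep -A2 'Processors:' conf/config.yml    # confirm which processors the flow actually defines
$ ./minifi.sh flowStatus processor:TailFile:health,stats,bulletins
$ ./minifi.sh flowStatus connection:all:health    # queued counts are reported per connection

If the flow really is empty, deploy a config.yml containing at least one processor and restart MiNiFi before querying again.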

Related

How to get Job version from allocation JSON that has no job information?

I have a persistent Nomad database of jobs, allocations and evaluations (with my own cleanup settings, not in the scope of the question). I consume the Nomad event stream (https://developer.hashicorp.com/nomad/api-docs/events), listen to allocations, evaluations and jobs, and save all the JSON to a database.
Allocations from the Nomad event stream contain no Job information. I can get the evaluation for an allocation using its "EvalID" field, but I do not know how to get the Job version from the evaluation. The evaluation JSON has only "JobID"; it has neither a "JobModifyIndex" nor a "JobVersion" field that I could connect to the Job history.
How can I find which Job version is associated with an allocation? The Nomad UI shows it - how can I get that information using only the Nomad event stream? The evaluation has "ModifyIndex" - can I use it?
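For reference, a minimal sketch of consuming the event stream the question describes, using the endpoint from the linked docs (the address and topic filters are assumptions based on that page):

$ curl -s "http://localhost:4646/v1/event/stream?topic=Job&topic=Allocation&topic=Evaluation"

Each event's Payload carries the raw Job, Allocation or Evaluation JSON that the question refers to.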

How can I constrain a NiFi processor's responses? (queue) Apache NiFi

When I call an API in NiFi, more than one response comes back, and the content of these responses is the same. If I don't stop the processor, responses keep coming, so I keep turning the processor on and off quickly. Is there a way to restrict this?
Can I have the processor call the API only a certain number of times, no matter how long it runs? For example, return only 3 responses.
NiFi flows are intended to be always-on streams. If you go to the Scheduling tab of a processor's config, you'll see that, by default, it is scheduled to run continuously (0 ms).
If you don't want this style of streaming behaviour, you need to change the Scheduling of the processor.
You can change it to only schedule the processor every X seconds, or you can change it to run based on a CRON expression.
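For example, on that Scheduling tab (the values are illustrative, and NiFi's CRON fields use Quartz syntax):

Scheduling Strategy: Timer driven
Run Schedule: 30 sec    (the processor is triggered at most once every 30 seconds)

Scheduling Strategy: CRON driven
Run Schedule: 0 0 3 * * ?    (the processor is triggered once a day at 03:00)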

Why does the NiFi PutParquet processor create so many tasks?

The NiFi PutParquet processor, with a timer-driven run schedule of 0 sec and the previous processor stopped, shows ~3000 tasks for the last 5 minutes.
We are on NiFi 1.9.2.
My expectation would be that this processor only creates tasks if data is in the incoming queue for the processor. Is this some misconfiguration or a bug in the implementation?
The processor is annotated with @TriggerWhenEmpty, which lets it execute regardless of whether there is data in the incoming queue. The reason is that in a kerberized environment the processor needs a chance to refresh its credentials. This was a common problem with other processors: no data comes in for a long time, say over a weekend, the Kerberos ticket expires in the meantime, and when data starts coming in on Monday everything fails.
These empty executions shouldn't have a big impact on the system. When the processor executes and no data is available, it just calls yield and returns. The default yield duration is 1 second, but it is configurable through the UI.
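If the empty executions still bother you, the yield can be lengthened per processor. A sketch of where the setting lives in a standard NiFi install (the value is illustrative):

Processor > Configure > Settings > Yield Duration: 30 sec    (how long the processor sleeps after yielding on an empty queue)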

Does Apache NiFi support batch processing?

I need to know if Apache NiFi supports running processors until completion, i.e.
"the execution of a series of processors in a process group waits for another process group's execution to be complete".
For example:
Suppose there are three processors in the NiFi UI (P = processor):
P1-->P2-->P3
Now I need to run P1; once it has completed, run P2, and so on, so that they run in sequence, each one waiting for the previous to complete.
EDIT-1:
Just as an example, I have data at a web URL. I can download that data using the GetHTTP processor and store it with PutFile. Once the file is saved in the PutFile directory, FetchFile should run to load that file into my database, as in the workflow below.
GetHTTP-->PutFile-->FetchFile-->DB
Is this possible?
NiFi itself is not really a batch processing system, it is a data flow system more geared towards continuous processing. Having said that, there are some techniques you can use to do batch-like operations, depending on which processors you're using.
The Split processors (SplitText, SplitJSON, etc.) write attributes to the flow files that include a "fragment.identifier" which is unique for all splits created from an incoming flow file, and "fragment.count" which is the total number of those splits. Processors like MergeContent use those attributes to process a whole batch (aka fragment), so the output from those kinds of processors would occur after an entire batch/fragment has been processed.
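As a concrete illustration of that technique, MergeContent's Defragment strategy reassembles splits using exactly those attributes (property names as in stock NiFi; treat this as a sketch):

MergeContent
Merge Strategy: Defragment    (bins flow files by fragment.identifier and releases the bin once fragment.count fragments have arrived)

Whatever leaves MergeContent's merged relationship then represents one completed batch.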
Another technique is to write an empty file in a temp directory when the job is complete, then a ListFile processor (pointing at that temp directory) would issue a flow file when the file is detected.
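A sketch of that second technique, with hypothetical paths and names:

$ touch /data/done/job_12345.done    # written by the batch job as its very last step

A ListFile processor with its Input Directory property pointed at /data/done would then emit a flow file as soon as the marker appears, which can trigger the downstream part of the flow.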
Can you describe more about the processors in your flow, and how you would know when a batch was complete?

"Too many fetch-failures" while using Hive

I'm running a Hive query against a Hadoop cluster of 3 nodes and I am getting an error that says "Too many fetch-failures". My Hive query is:
insert overwrite table tablename1 partition(namep)
select id,name,substring(name,5,2) as namep from tablename2;
That's the query I'm trying to run. All I want to do is transfer data from tablename2 to tablename1. Any help is appreciated.
This can be caused by various Hadoop configuration issues. Here are a couple to look for in particular:
DNS issues: examine your /etc/hosts
Not enough http threads on the mapper side for the reducer
Some suggested fixes (from Cloudera troubleshooting; a sketch of applying them follows this list):
set mapred.reduce.slowstart.completed.maps = 0.80
tasktracker.http.threads = 80
mapred.reduce.parallel.copies = sqrt(node count), but in any case >= 10
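A sketch of applying the job-level settings from the list above in a Hive session (tasktracker.http.threads is a daemon-side property, so it has to go into mapred-site.xml on each node, followed by a TaskTracker restart):

hive> set mapred.reduce.slowstart.completed.maps=0.80;
hive> set mapred.reduce.parallel.copies=10;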
Here is a link to a troubleshooting presentation with more details:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
Update for 2020: things have changed a lot and AWS mostly rules the roost. Here is some troubleshooting material for it:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-resource-1.html
Too many fetch-failures
The presence of "Too many fetch-failures" or "Error reading task output" error messages in step or task attempt logs indicates the running task is dependent on the output of another task. This often occurs when a reduce task is queued to execute and requires the output of one or more map tasks and the output is not yet available.
There are several reasons the output may not be available:
The prerequisite task is still processing. This is often a map task.
The data may be unavailable due to poor network connectivity if the data is located on a different instance.
If HDFS is used to retrieve the output, there may be an issue with HDFS.
The most common cause of this error is that the previous task is still processing. This is especially likely if the errors are occurring when the reduce tasks are first trying to run. You can check whether this is the case by reviewing the syslog log for the cluster step that is returning the error. If the syslog shows both map and reduce tasks making progress, this indicates that the reduce phase has started while there are map tasks that have not yet completed.
One thing to look for in the logs is a map progress percentage that goes to 100% and then drops back to a lower value. When the map percentage is at 100%, this does not mean that all map tasks are completed. It simply means that Hadoop is executing all the map tasks. If this value drops back below 100%, it means that a map task has failed and, depending on the configuration, Hadoop may try to reschedule the task. If the map percentage stays at 100% in the logs, look at the CloudWatch metrics, specifically RunningMapTasks, to check whether the map task is still processing. You can also find this information using the Hadoop web interface on the master node.
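For example, a sketch of pulling that metric from the CLI (the cluster id and the time window are placeholders):

$ aws cloudwatch get-metric-statistics --namespace AWS/ElasticMapReduce \
    --metric-name RunningMapTasks --dimensions Name=JobFlowId,Value=j-XXXXXXXXXXXX \
    --start-time 2020-01-01T00:00:00Z --end-time 2020-01-01T01:00:00Z \
    --period 300 --statistics Average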
If you are seeing this issue, there are several things you can try:
Instruct the reduce phase to wait longer before starting. You can do this by altering the Hadoop configuration setting mapred.reduce.slowstart.completed.maps to a longer time. For more information, see Create Bootstrap Actions to Install Additional Software. (A sketch of applying this on a modern EMR release follows this excerpt.)
Match the reducer count to the total reducer capability of the cluster. You do this by adjusting the Hadoop configuration setting mapred.reduce.tasks for the job.
Use a combiner class code to minimize the amount of outputs that need to be fetched.
Check that there are no issues with the Amazon EC2 service that are affecting the network performance of the cluster. You can do this using the Service Health Dashboard.
Review the CPU and memory resources of the instances in your cluster to make sure that your data processing is not overwhelming the resources of your nodes. For more information, see Configure Cluster Hardware and Networking.
Check the version of the Amazon Machine Image (AMI) used in your Amazon EMR cluster. If the version is 2.3.0 through 2.4.4 inclusive, update to a later version. AMI versions in the specified range use a version of Jetty that may fail to deliver output from the map phase. The fetch error occurs when the reducers cannot obtain output from the map phase.
Jetty is an open-source HTTP server that is used for machine-to-machine communication within a Hadoop cluster.
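On a modern EMR release, the slowstart suggestion above can be applied at cluster creation through a configuration classification rather than a bootstrap action. A sketch (the release label, instance details and property value are placeholders):

$ aws emr create-cluster --release-label emr-5.30.0 \
    --use-default-roles --instance-type m5.xlarge --instance-count 3 \
    --configurations '[{"Classification":"mapred-site","Properties":{"mapred.reduce.slowstart.completed.maps":"0.90"}}]'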
