I was working on Apache NiFi and found hundreds of files in a queue, but listing the queue only displays 100 flowfiles.
My question is: is there any way to see all the files that are in the queue?
I have shared a screenshot for reference.
Generally, the NiFi UI only shows 100 queued flowfiles; that limit is hardcoded. See this discussion: Nifi API: Get ALL elements with a flowfile queue
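As a rough illustration of going through the REST API instead of the UI, here is a minimal Java sketch that submits a queue listing request; the host, port and connection UUID are placeholders, and note that the listing result itself may still be capped by the same limit:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ListQueue {
        public static void main(String[] args) throws Exception {
            // Hypothetical values: adjust host/port and use your connection's UUID.
            String base = "http://localhost:8080/nifi-api";
            String connectionId = "your-connection-uuid";

            HttpClient client = HttpClient.newHttpClient();

            // Submit an asynchronous listing request for the queue on this connection.
            HttpRequest create = HttpRequest.newBuilder()
                    .uri(URI.create(base + "/flowfile-queues/" + connectionId + "/listing-requests"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            String created = client.send(create, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println(created); // contains the listing request id to poll

            // Poll GET /flowfile-queues/{connectionId}/listing-requests/{requestId}
            // until "finished" is true, then read the flowFileSummaries array.
            // Delete the request afterwards with a DELETE on the same URI.
        }
    }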
I have a NiFi flow in which I get data from Elasticsearch and, after some processing, save each flowfile to a destination. Once all the flowfiles are saved, I want to notify the Merge processor to merge all the data into one large CSV file. My problem is how to notify the Merge processor that all the files have been saved and it can now start merging them.
Thanks for the help.. :)
NiFi's Wait/Notify processors can be used to signal when the files are saved. What I would suggest is that whichever processor you use to save the files routes its success relationship to a Notify processor, which in turn notifies a Wait processor. The Wait processor can then be used with the Merge processor to merge the files. It is indeed possible to use Wait/Notify in NiFi for complex coordination: Pierre Villard NiFi WorkFlow Wait/Notify
Although I know this isn't exactly what you're trying to do here, it might help you understand Wait/Notify a bit better: Wait/Notify Moving Zendesk
I am new to NiFi. We have a complex NiFi data flow in our organization, and we have segregated the different projects' data flows into Process Groups. I was asked to find the data volume for a specific project (a Process Group in NiFi) over a given period of time. How do I find that in the NiFi web UI?
The Apache NiFi User Guide has a section on monitoring components and data flow. By default, a processor or process group displays the total amount of data processed by that component over the last 5 minutes. There are also Reporting Tasks, which allow such monitoring data to be transmitted to external destinations.
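If you need the numbers programmatically rather than from the UI, one option is the REST API's process group status endpoints. A minimal Java sketch, assuming a local unsecured NiFi and a placeholder Process Group UUID:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ProcessGroupStatus {
        public static void main(String[] args) throws Exception {
            // Hypothetical values: adjust the host/port and supply your Process Group's UUID.
            String base = "http://localhost:8080/nifi-api";
            String processGroupId = "your-process-group-uuid";

            HttpClient client = HttpClient.newHttpClient();

            // Current 5-minute snapshot (bytes/FlowFiles in and out, read/written, etc.).
            HttpRequest status = HttpRequest.newBuilder()
                    .uri(URI.create(base + "/flow/process-groups/" + processGroupId + "/status"))
                    .GET()
                    .build();
            System.out.println(client.send(status, HttpResponse.BodyHandlers.ofString()).body());

            // Status history kept by NiFi (limited retention); for longer periods,
            // ship metrics to an external system with a Reporting Task instead.
            HttpRequest history = HttpRequest.newBuilder()
                    .uri(URI.create(base + "/flow/process-groups/" + processGroupId + "/status/history"))
                    .GET()
                    .build();
            System.out.println(client.send(history, HttpResponse.BodyHandlers.ofString()).body());
        }
    }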
We have multiple teams' NiFi applications running on the same NiFi machine... Is there any way to capture the logs specific to my application? Also, by default the nifi-app.log file is difficult to use for tracking down issues, and the bulletin board shows error messages for only 5 minutes... How can errors be captured and an email alert sent in NiFi?
Please help me to get through this. Thanks in advance!
There are a couple of ways to approach this. One is to route failure relationships from processors to a PutEmail processor, which can send an alert on errors. Another is to use a custom Reporting Task to alert a monitoring service when a certain number of flowfiles are in an error queue.
Finally, we have heard that in multitenant environments, log parsing is difficult. While NiFi aims to reduce or completely eliminate the need to visually inspect logs by providing the data provenance feature, in the event you do need to inspect the logs, we recommend searching the log by processor ID to isolate relevant messages. You can also use NiFi itself to ingest those same logs and perform parsing and filtering activities if desired. Future versions may improve this experience.
By parsing the NiFi log you can separate out the logs specific to your team's applications, using the process group ID and the NiFi REST API. See the link below for a NiFi template and Python code that address this:
https://link.medium.com/L6IY1wTimV
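As a rough sketch of the same idea (without the template from the article), you can filter nifi-app.log by the component UUIDs that belong to your Process Group. The UUIDs and log path below are placeholders, and the exact log line format depends on your logback configuration, but processor log messages typically embed the component id:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Stream;

    public class FilterTeamLogs {
        public static void main(String[] args) throws IOException {
            // Hypothetical inputs: the component UUIDs belonging to your Process Group
            // (collected beforehand, e.g. via the NiFi REST API) and the log location.
            List<String> teamComponentIds = List.of(
                    "11111111-2222-3333-4444-555555555555",
                    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee");
            Path log = Path.of("/opt/nifi/logs/nifi-app.log");

            // Processor log lines usually contain "...[id=<uuid>]...", so a simple
            // substring match is enough to pull out your team's messages.
            try (Stream<String> lines = Files.lines(log)) {
                lines.filter(line -> teamComponentIds.stream().anyMatch(line::contains))
                     .forEach(System.out::println);
            }
        }
    }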
You can route all the errors in a Process Group to the same processor, which could be a regular UpdateAttribute or a custom processor. That processor adds the path and all the relevant information, then sends the flowfile on to a general error/logging flow that inspects the error information inside the flowfile and decides whether to send an email, to whom, and so on.
With this approach the system stays simple and entirely inside NiFi, so you don't add more layers of complexity, and you have only one error-handling processor per Process Group.
This is the way we are managing errors in my company.
I want to know how to retrieve the messages already present on a Solace queue. I am able to send and receive the messages I create from my machine, but I can't receive any messages that are already sitting in the queue. I want to retrieve those messages and store them in a text file.
I am sending my messages using the Solace API in a Gradle project, writing the code in Java. Can anyone guide me regarding the same?
There's an exact tutorial for this.
If you downloaded the Solace Java JAR via the Maven links, you might have missed the entire suite, which contains all the dependent JARs distributed by Solace, the API reference docs, and a bunch of samples. The latter are in addition to what you may find on http://dev.solace.com/get-started/java-tutorials/. Get the entire ZIP file, as well as the release notes, from http://dev.solace.com/downloads/.
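For reference, here is a minimal consumer sketch in the style of the "Persistence with Queues" tutorial, using the JCSMP API to bind to the queue and append each received message to a text file. The host, VPN, username, queue name and output path are all placeholders:

    import com.solacesystems.jcsmp.*;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class QueueToFile {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details: replace with your broker host, VPN and credentials.
            JCSMPProperties props = new JCSMPProperties();
            props.setProperty(JCSMPProperties.HOST, "tcp://localhost:55555");
            props.setProperty(JCSMPProperties.VPN_NAME, "default");
            props.setProperty(JCSMPProperties.USERNAME, "default");

            JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
            session.connect();

            // Bind a consumer flow to the queue that already holds the messages.
            Queue queue = JCSMPFactory.onlyInstance().createQueue("MY_QUEUE");
            ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
            flowProps.setEndpoint(queue);
            flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);

            Path out = Path.of("messages.txt");
            FlowReceiver receiver = session.createFlow(new XMLMessageListener() {
                @Override
                public void onReceive(BytesXMLMessage msg) {
                    String text = (msg instanceof TextMessage)
                            ? ((TextMessage) msg).getText()
                            : msg.dump(); // fallback: readable dump of non-text messages
                    try {
                        Files.writeString(out, text + System.lineSeparator(),
                                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    msg.ackMessage(); // acknowledge so the message is removed from the queue
                }

                @Override
                public void onException(JCSMPException e) {
                    e.printStackTrace();
                }
            }, flowProps);

            receiver.start();        // messages already on the queue are delivered once the flow starts
            Thread.sleep(10_000);    // give the flow some time to drain the queue
            receiver.close();
            session.closeSession();
        }
    }

If nothing arrives even though the queue holds messages, check the possibilities listed in the next answer (permissions, egress state, bind count, and so on).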
There are multiple possibilities why you cannot receive messages from a queue:
Queue name is misspelt.
Queue permissions are wrong.
Queue is shut down on the egress.
Message spool is not active on the router.
Client profile is set not to receive Guaranteed Messages.
Number of egress flows has exceeded the router / message-vpn limit.
The bind count on the queue has been exceeded.
The egress flow is not active.
Client is not connected to the router.
...
Examining the error / exception will tell you why you cannot receive messages.
We are currently implementing an MQ FTE solution.
One of the projects needs the file-to-queue function because the target system reads only from MQ.
We are looking for a way not only to upload the files to the queue but also to preserve the order of the files.
We need the oldest file (by modification or creation date) to be uploaded first, followed by the next oldest file in the folder, and so on.
Has anyone had this requirement with FTE? How did you handle it?
The source system is Windows.
Thanks for the assistance.
That depends on your setup. Is there a single queue manager in your scenario? Does the source system share the same local queue manager with the target system?
The order of messages may be guaranteed by default, as the MQ v7 Infocenter states in the chapter Priority, in these cases:
If an application puts a sequence of messages on a queue, another application can retrieve those messages in the same order that they were put, provided:
The messages all have the same priority
The messages were all put within the same unit of work, or all put outside a unit of work
The queue is local to the putting application
If these conditions are not met, and the applications depend on the messages being retrieved in a certain order, the applications must either include sequencing information in the message data, or establish a means of acknowledging receipt of a message before the next one is sent.
If you do not meet these requirements (for example when the communication spans multiple queue managers), you can satisfy them by:
ensuring that the next message is put only after the recipient has confirmed receipt of the previous one (for example with an MQ reply message)
using Message Groups to retrieve messages in logical order; this requires the putting application to set GroupId and MsgSeqNumber in the MQMD, and the getting application to use the MQGMO_LOGICAL_ORDER option (see the chapter Logical and Physical ordering); a sketch of this pattern follows below
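For illustration, here is a rough sketch of the Message Groups approach using the IBM MQ classes for Java. The queue manager and queue names are placeholders, and this is not FTE-specific code, just the underlying put/get pattern, where MQ assigns the group id and sequence numbers because the put uses logical order:

    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class GroupedPutGet {
        public static void main(String[] args) throws Exception {
            // Hypothetical names: replace with your queue manager and queue.
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("FILE.QUEUE",
                    CMQC.MQOO_OUTPUT | CMQC.MQOO_INPUT_AS_Q_DEF);

            // Put: one group per file; messages are flagged as part of a group and
            // put in logical order under syncpoint so MQ assigns GroupId/MsgSeqNumber.
            MQPutMessageOptions pmo = new MQPutMessageOptions();
            pmo.options = CMQC.MQPMO_LOGICAL_ORDER | CMQC.MQPMO_SYNCPOINT;

            String[] chunks = {"part-1", "part-2", "part-3"}; // e.g. slices of one file
            for (int i = 0; i < chunks.length; i++) {
                MQMessage msg = new MQMessage();
                msg.messageFlags = (i == chunks.length - 1)
                        ? CMQC.MQMF_MSG_IN_GROUP | CMQC.MQMF_LAST_MSG_IN_GROUP
                        : CMQC.MQMF_MSG_IN_GROUP;
                msg.writeString(chunks[i]);
                queue.put(msg, pmo);
            }
            qmgr.commit();

            // Get: logical order, and only once the whole group is available.
            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_LOGICAL_ORDER | CMQC.MQGMO_ALL_MSGS_AVAILABLE
                    | CMQC.MQGMO_WAIT;
            gmo.waitInterval = 5000;

            for (int i = 0; i < chunks.length; i++) {
                MQMessage in = new MQMessage();
                queue.get(in, gmo);
                byte[] data = new byte[in.getDataLength()];
                in.readFully(data);
                System.out.println(new String(data, "UTF-8"));
            }

            queue.close();
            qmgr.disconnect();
        }
    }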