At this time, it isn't possible to interact with an iSeries IFS using NiFi. If this is something you need, I recommend heading over and voting/commenting.
I opened an enhancement request with the ASF.
Edit:
I created a dynamic property pre.cmd._____ per a suggestion below and still seem to be having the same issue. Perhaps I'm not creating this correctly? I've tried quoting and a few other variations with no success:
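For illustration, one of the variations looked roughly like this (a sketch only; the numeric suffix just orders the commands, and the value is the raw FTP command I'm assuming PutFTP sends before the transfer):

Property name: pre.cmd.1
Value: SITE NAMEFMT 1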
Looking to use Apache NiFi (1.11.4) for some near-real-time file movements, and one requirement of mine is to FTP a file to an FTP server hosted on an iSeries.
An iSeries has two naming formats when using FTP: NAMEFMT 0 when accessing libraries, physical files, and members, and NAMEFMT 1 when accessing the IFS. More on that if you're interested...
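For example, objects are addressed differently under each format (illustrative paths only):

NAMEFMT 0: MYLIB/MYFILE.MYMBR (library/file.member)
NAMEFMT 1: /home/dkorinke/My_File.csv (IFS-style path)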
I set up what I thought was a very simple process to get acquainted: detect when a file is placed on an SFTP server, and simply move it on over to our iSeries:
The problem I'm having: I'm simply trying to place the file on the IFS in my home directory, /home/dkorinke. However, PutFTP is erroring; it seems to be using NAMEFMT 0 when changing directory to /home/dkorinke.
From the logs:
2020-06-10 21:09:36,924 ERROR [Timer-Driven Process Thread-6] o.apache.nifi.processors.standard.PutFTP PutFTP[id=a084128f-0172-1000-49e1-fdd7b9d8d129] Unable to transfer StandardFlowFileRecord[uuid=72600f74-2678-4c33-870a-f2648aac034b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1591830969408-1, container=default, section=1], offset=463787, length=104801],offset=0,name=My_File.csv,size=104801] to remote host DEVQA01 due to org.apache.nifi.processor.exception.ProcessException: Unable to change working directory to /HOME/DKORINKE/: null; routing to failure
I began playing around with the Remote Path property on the PutFTP and got a different error when using "/HOME/DKORINKE/"; note the QGPL/ prepended to "/HOME/DKORINKE":
2020-06-10 21:53:35,710 ERROR [Timer-Driven Process Thread-5] o.apache.nifi.processors.standard.PutFTP PutFTP[id=a084128f-0172-1000-49e1-fdd7b9d8d129] Unable to transfer StandardFlowFileRecord[uuid=14303c7c-babb-4cbf-9728-346e0afaafba,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1591830969408-1, container=default, section=1], offset=882991, length=104801],offset=0,name=My_File.csv,size=104801] to remote host DEVQA01 due to org.apache.nifi.processor.exception.ProcessException: Unable to change working directory to QGPL/"/HOME/DKORINKE/": null; routing to failure
So far, the only solution I can think of is to update my user profile to default to NAMEFMT 1, which I'm not entirely sure is possible. I've hit Google pretty hard this evening, but I haven't been able to find anyone with a similar use case or issue.
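For what it's worth, the format switch can be exercised from a manual FTP session (commands only, assuming the server honors SITE NAMEFMT; server replies omitted):

ftp> quote SITE NAMEFMT 1
ftp> cd /home/dkorinke
ftp> put My_File.csv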
Related
I'm using Apache NiFi to ingest and preprocess some CSV files, but when it runs for a long time it always fails. The error is always the same:
FlowFile Repository failed to update
Searching the logs, I always see this error:
2018-07-11 22:42:49,913 ERROR [Timer-Driven Process Thread-10] o.a.n.p.attributes.UpdateAttribute UpdateAttribute[id=c7f45dc9-ee12-31b0-8dee-6f1746b3c544] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update: org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:405)
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:336)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot update journal file ./flowfile_repository/journals/8772495.journal because this journal has already been closed
at org.apache.nifi.wali.LengthDelimitedJournal.checkState(LengthDelimitedJournal.java:223)
at org.apache.nifi.wali.LengthDelimitedJournal.update(LengthDelimitedJournal.java:178)
at org.apache.nifi.wali.SequentialAccessWriteAheadLog.update(SequentialAccessWriteAheadLog.java:121)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:300)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:257)
This makes me believe that the root cause is that NiFi cannot update the journal file ./flowfile_repository/journals/8772495.journal because it has already been closed, as seen in the log file.
How can I solve this issue?
Thanks!
I ran into the same issue the other day.
When I inspected the disk space on the volume where "flowfile_repository" is located, I saw this:
/dev/sdc1 447G 447G 24K 100% /var/proj/data2
It's 100% full.
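A quick way to confirm, and to find what is eating the space (standard coreutils; the mount point is from the df output above):

df -h /var/proj/data2     # confirm the volume is full
du -sh /var/proj/data2/*  # find the biggest consumers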
If NiFi is having issues writing to the journal file, there are a few things to check.
Are you reading in fields from your CSV that are large (greater than 64 KB) and attempting to assign them to attributes? You may want to consider processing that particular field in your CSV as a separate flowfile and matching it with attributes later. See this mailing list discussion for more information.
Have you checked NiFi's configuration against the best practices listed in the administration guide? I also recommend understanding each of the Flowfile repository settings. It will allow you to ask more targeted questions.
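For reference, these are the flowfile repository properties in nifi.properties to get familiar with (values shown are illustrative defaults; check your own file):

nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false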
It is likely worth updating your JVM settings to allow for larger file processing. Check out this post on Hortonworks detailing best practices for high performance systems.
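For example, the heap is set in conf/bootstrap.conf (sizes here are illustrative; tune them to your host):

# JVM memory settings
java.arg.2=-Xms4g
java.arg.3=-Xmx4g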
In order to solve the problem you may need to tweak a couple of things. Does the flow handle the CSV in an efficient manner? Does NiFi have enough memory to do what it needs to with the data? Would it be more appropriate to handle the CSV files as records? If that concept is unfamiliar check out this post that introduces record processing in NiFi. I hope some of these resources help you get a little closer to a solution. If you have a follow up question let me know.
I encountered the same problem after running NiFi for 2 days on my Ubuntu system.
First, I ran du -sh ./* under the NiFi folder, and it turned out there were a lot of application logs there. Each log file is 101 MB, which I think is the default retention value.
Since I don't need to keep logs for that long right now, I updated the logback.xml file in NiFi's conf folder to set the application log to daily rollover.
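The change was roughly along these lines in conf/logback.xml (a sketch of a time-based rolling policy; adjust the file name pattern and history to taste):

<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- daily rollover, keep 7 days of logs -->
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd}.log</fileNamePattern>
        <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>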
I am trying to read the data from the following CAP files (everything from the alerts folder):
http://dd2.weather.gc.ca/alerts/cap/20180205/CWHX/14/
I am using AMQP from http://metpx.sourceforge.net, and when I try to connect to the subscriber from NiFi, I get the following error:
Failed to establish the connection with AMQP broker
This is my cap.conf file:
broker amqp://anonymous:anonymous@dd.weather.gc.ca
directory /data
subtopic alerts.cap.#
accept .*
mirror True
Over the summer, the broker migrated to SSL, so the current URL would be: amqps://anonymous:anonymous@dd.weather.gc.ca
The web page has also moved to: https://github.com/MetPX/sarracenia
Best practice is to put authentication info in ~/.config/sarra/credentials.conf, with a line like:
amqps://anonymous:anonymous@dd.weather.gc.ca
Installing a version from the past year is likely to be a much better experience. It now comes with sample configurations, one of them being ddc_cap-xml.conf, which is the same data you are trying to download.
So the work is:
blacklab% sr_subscribe add ddc_cap-xml.conf
blacklab% sr_subscribe edit ddc_cap-xml.conf
# Change the directory option to suit your case.
blacklab% sr_subscribe foreground ddc_cap-xml.conf
And it should work. It could take many hours to prove it because this particular set (severe weather warnings in Common Alerting Protocol format) is produced only when needed, rather than continuously. (Use start instead of foreground to run as a background daemon.)
To test things, it might be easier to start with dd_swob, which will be a continuous feed.
blacklab% sr_subscribe list dd_swob
broker amqp://anonymous@dd.weather.gc.ca
exchange xpublic
#msg_skip_threshold 60
#on_msg ../msg_skip_old.py
subtopic observations.swob-ml.#
accept .*
In this configuration you need to add a directory option just before the accept line, and it should start downloading data immediately. Once you know it works, switch back to the data set you actually want.
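For example (the directory value is just a placeholder; use any writable local path):

broker amqp://anonymous@dd.weather.gc.ca
exchange xpublic
subtopic observations.swob-ml.#
directory /tmp/swob
accept .*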
I just recently set up a FoIP server, which involved configuring several different applications, one of them being FreeSWITCH. I've searched and searched but haven't found a good solution for this problem, so I'll get right to it. This is the error message:
[ERR] mod_spandsp_fax.c:1367 Cannot send non-existant fax file [usr/ictcore/data/document/document_1.tiff]
I have tried several things to troubleshoot the issue, such as checking permissions, installing the correct dependencies, etc., so if that error message strikes anyone as familiar, please share. I can provide full logs on request.
Here's a hint for people who run into this stupid issue: disable SELinux before changing any configuration files, and make the change permanent; otherwise a reboot will put you right back where you started.
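For example, on a typical RHEL/CentOS box (standard commands; adjust for your distro):

# set SELinux to permissive for the current boot
setenforce 0
# make it stick across reboots: SELINUX=permissive (or disabled)
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config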
I'm attempting to write files to a server in my program using FTPClientSession. Everything starts well: the session is started and logged in with no errors, and the file I'm sending definitely exists on my local disk. But it always fails to send the data, and I think the remote path it's trying to write to is wrong no matter what I try.
I don't really know what the path should be. If I right-click on a directory within FileZilla I get something like:
ftp://"username"#ftp."subdomain".vhostall.com/public_html/
is that what I use? Or would I use
www."subdomain".vhostall.com/public_html
or if the client session already knows the domain, should I just use
public_html
(None of these work, by the way.) I always get an error message within an exception: "STOR command failed: 553 Can't open that file: No such file or directory".
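For reference, a minimal sketch of what I'm attempting, reduced to the bare relative-path variant (host and credentials are placeholders; most shared hosts drop you in your FTP root after login, so STOR wants a path relative to that root):

#include <ostream>
#include "Poco/Net/FTPClientSession.h"

int main() {
    // Hypothetical host and credentials; substitute your own.
    Poco::Net::FTPClientSession session("ftp.subdomain.vhostall.com");
    session.login("username", "password");

    // Paths are relative to the directory you land in after login,
    // so "public_html" rather than a full URL or a www.* path.
    session.setWorkingDirectory("public_html");

    std::ostream& os = session.beginUpload("test.txt"); // issues STOR test.txt
    os << "hello\n";
    session.endUpload();

    session.close();
    return 0;
}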
I've compiled and trawled around the QuickFIX (http://www.quickfixengine.org) source and the examples. I figured a good starting point would be to compile (C++) and run the 'executor' example, then use the 'tradeclient' example to connect to 'executor' and send it order requests.
I created two separate session files, one for the 'executor' as the acceptor and one for the 'tradeclient' as the initiator. They're both running on the same Win7 PC.
'executor' runs, but 'tradeclient' can't connect to it, and I can't figure out why. I downloaded Mini-fix and was able to send messages to 'executor', so I know that 'executor' is working. I figure the problem is with the 'tradeclient' session settings. I've included both below, hoping someone can point out what's keeping them from communicating. They're both running on the same computer using port 56156.
---- acceptor session.txt ----
[DEFAULT]
ConnectionType=acceptor
ReconnectInterval=5
SenderCompID=EXEC
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=SENDER
HeartBtInt=5
#SocketConnectPort=
SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileStorePath=store
---- initiator session.txt ---
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=5
SenderCompID=SENDER
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=EXEC
HeartBtInt=5
SocketConnectPort=56156
#SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileLogPath=log
FileStorePath=store
--------end------
Update: Thanks for the responses... It turns out that my log file directories didn't exist. Once I created them, both sides started communicating. It must have been some logging error that didn't throw an exception but disabled proper behavior.
Is there an error condition that I should be checking? I was relying on exceptions, but that's obviously not enough.
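For reference, a minimal sketch of my startup sequence with explicit error handling added (MyApplication is a no-op stand-in for your FIX::Application subclass; headers per a recent QuickFIX C++ distribution, where these overrides carry no exception specifications):

#include <iostream>
#include "quickfix/Application.h"
#include "quickfix/FileLog.h"
#include "quickfix/FileStore.h"
#include "quickfix/SessionSettings.h"
#include "quickfix/SocketInitiator.h"

// Minimal no-op application, just enough to satisfy FIX::Application.
class MyApplication : public FIX::Application {
    void onCreate(const FIX::SessionID&) override {}
    void onLogon(const FIX::SessionID&) override {}
    void onLogout(const FIX::SessionID&) override {}
    void toAdmin(FIX::Message&, const FIX::SessionID&) override {}
    void toApp(FIX::Message&, const FIX::SessionID&) override {}
    void fromAdmin(const FIX::Message&, const FIX::SessionID&) override {}
    void fromApp(const FIX::Message&, const FIX::SessionID&) override {}
};

int main() {
    try {
        FIX::SessionSettings settings("initiator_session.txt"); // your settings file
        MyApplication application;
        FIX::FileStoreFactory storeFactory(settings);
        FIX::FileLogFactory logFactory(settings); // FileLogPath directory must exist
        FIX::SocketInitiator initiator(application, storeFactory, settings, logFactory);

        initiator.start();
        // ... run until done ...
        initiator.stop();
    } catch (const FIX::ConfigError& e) {
        std::cerr << "Config error: " << e.what() << std::endl;
        return 1;
    } catch (const FIX::RuntimeError& e) {
        std::cerr << "Runtime error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}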
It doesn't seem to be config. Check that your message sequence numbers are in sync, especially since you've been connecting to a different server using the same settings.
Try setting the TargetCompID and SenderCompID on the acceptor to *