Apache NiFi: Recovering from FlowFile repository issue

I am currently trying to recover my flows from the below exception.
failed to process session due to Cannot update journal file
/data/disk1/nifi/flowfile_repository/journals/90620570.journal because
no header has been written yet.; Processor Administratively Yielded
for 1 sec: java.lang.IllegalStateException: Cannot update journal file
/data/disk1/nifi/flowfile_repository/journals/90620570.journal because
no header has been written yet.
I have seen some answers on best practices for handling large files in NiFi, but my question is more about how to recover from this exception. My observation is that once the exception is seen, it begins to appear in several processors across all the flows in our NiFi instance. How do we recover without a restart?

It seems like your disk is full, which is preventing the processors from updating or modifying the data.
You can either increase your disk space or delete the contents of your NiFi repositories.
First, check the logs folder. If it's the logs folder that's taking up the space, you can simply run
rm -rf logs/*
Otherwise, delete the contents of all the repositories:
rm -rf logs/* content_repository/* provenance_repository/* flowfile_repository/* database_repository/*
PS: Deleting the repository contents will also delete all the data currently on the canvas, so make sure you are not removing data that can't be reproduced.
Most likely it is the logs that are eating up the space. Also, check your log rotation interval!
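To see which directory is actually eating the disk before deleting anything, a quick check along these lines can help (the flowfile_repository path is taken from your stack trace; the other repository locations and the logs path are assumptions, so adjust them to whatever your nifi.properties points at):
# free space on the volume holding the repositories
df -h /data/disk1
# size of each repository and of the logs directory
du -sh /data/disk1/nifi/flowfile_repository \
       /data/disk1/nifi/content_repository \
       /data/disk1/nifi/provenance_repository \
       /data/disk1/nifi/logs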
Let me know if you need further assistance!

Related

NVMeOF/RDMA sync file modifications

I just set up an NVMeOF/RDMA environment to play around with. I have a target node whose NVMe SSD is accessed by some client nodes. However, when I delete a file, say test, on one client node, the other nodes cannot see this operation and can still read the contents of test as normal. I know that RDMA bypasses the kernel, so I guess this is because of the cache? I have then tried to clean up the cache using these commands:
sudo sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo sync; echo 1 | sudo tee /proc/sys/vm/drop_caches
sudo sync; echo 2 | sudo tee /proc/sys/vm/drop_caches
Unfortunately, the other nodes still see this file.
So actually I have two questions:
Is this actually due to the cache? How does it work?
What is the correct way to clean up the cache so that other nodes can see the deletion without re-mount?
Any help will be greatly appreciated!
Relatively short answer:
Like Boris said, you don't want to do that (distributed consistency on storage is a hard problem), and you need something else to do what you want. Flushing caches may not work because you've got multiple distinct views of the system plus independent caching behaviors.
Longer answer:
As Boris mentioned, NVMeoF is a block protocol. This means that at a broad level (with some hand-waving) all it can do is read and write blocks at a particular address. In practice, we usually have layers above the NVMe/NVMeoF communication layer like file systems that handle this abstraction.
I can't tell right now if you're using a file system or if you are reading/writing the device directly, but in either case you are at least partially correct that the page cache may be getting in the way, even with RDMA.
Now, if you are using local file systems on your client nodes, you quickly get inconsistent views. The filesystem (and consequently overall operating system and its view of the state of the page cache and block storage) has no idea anyone else wrote anything. So even if you write and sync on one client, you may have to bypass the page cache on another (e.g. use O_DIRECT reads, which have their own sets of complexities) and make sure you target something that eventually refers to the same block addresses that were written on the NVMe target from your other client.
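For example, a page-cache-bypassing read on a client could be as simple as the sketch below; the device name /dev/nvme1n1 is an assumption, so check nvme list for the actual NVMeoF-attached namespace on your client first:
# read 4 KiB from the start of the remote namespace with O_DIRECT,
# skipping the local page cache entirely
sudo dd if=/dev/nvme1n1 of=/dev/null bs=4096 count=1 iflag=direct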
In theory, such a direct read will let you see data written by another client if everything lines up correctly; in practice, though, this can cause confusion, especially if a file system or application on one client writes something at a location and the other client unknowingly reads or writes that same location. Now you have a consistency problem.
NVMeoF (with RDMA or any other transport) is a block level storage protocol and not a file level storage protocol. Thus, there is no guarantee to the atomicity of file operations across nodes in NVMeoF systems. Even if one node deletes a file, there is no guarantee that:
The delete operation was actually translated to block erase operations and sent to the storage server;
Even if the storage server deleted the blocks, there is no guarantee that other clients that have cached this data will not continue to read it. Moreover, another client can overwrite the deleted file.
Overall, I think that to have any guarantees at the file level, you should consider a distributed file system rather than NVMeoF.
What is the correct way to clean up the cache so that other nodes can see the deletion without re-mount?
There is no good way to do it. Flushing the cache on all nodes and only then reading may work, but it depends on the file system.

ClickHouse log shows hash of uncompressed files doesn't match

The ClickHouse logs frequently printed error messages like the one below:
2021.01.07 00:55:24.112567 [ 6418 ] {} <Error> vms.analysis_data (7056dab3-3677-455b-a07a-4d16904479b4):
Code: 40, e.displayText() = DB::Exception: Checksums of parts don't match:
hash of uncompressed files doesn't match (version 20.11.4.13 (official build)).
Data after merge is not byte-identical to data on another replicas. There could be several reasons:
1. Using newer version of compression library after server update.
2. Using another compression method.
3. Non-deterministic compression algorithm (highly unlikely).
4. Non-deterministic merge algorithm due to logical error in code.
5. Data corruption in memory due to bug in code.
6. Data corruption in memory due to hardware issue.
7. Manual modification of source data after server startup.
8. Manual modification of checksums stored in ZooKeeper.
9. Part format related settings like 'enable_mixed_granularity_parts' are different on different replicas.
We will download merged part from replica to force byte-identical result.
We use the same version (20.11.4.13) and the same compression method (LZ4) for all data nodes in the production environment, and we do not modify the data files or the values stored in ZooKeeper either.
So my questions are:
What caused the error? More specifically, in which cases will the ClickHouse server throw these exceptions?
Is there a checksum-checking mechanism among the replicas while merging parts?
I also found that on one of our data nodes there are many folders named like "ignored_20201208_23116_23116_0" in the detached folder; are these the corrupted data caused by the problem above?
Thanks.
You need to upgrade all nodes to 20.11.6.6 ASAP.
The reason for these errors is a serious bug related to AIO.
The ignored_ folders are not related to it. You can remove them.
(translated) Inactive parts are not deleted immediately, because when a new part is written, fsync is not called, i.e. for some time the new part exists only in the server's RAM (OS cache). So if the server reboots spontaneously, the new (merged) part can be lost or damaged. In that case, ClickHouse checks the integrity of the parts during startup; if it detects a problem, it returns the inactive parts to the active list and later merges them again. The broken part is then renamed (the prefix broken_ is added) and moved to the detached folder. If the integrity check finds no problems in the merged part, the original inactive parts are renamed instead (the prefix ignored_ is added) and moved to the detached folder.
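If you want to confirm what is sitting in detached before cleaning it up, something along these lines should work; the database and table names are taken from your log line, the part name is the example from your question, and dropping detached parts via ALTER needs the allow_drop_detached setting (removing the ignored_ directories from disk by hand is fine too, as said above):
# list the detached parts for the affected table
clickhouse-client --query "SELECT name, reason FROM system.detached_parts WHERE database = 'vms' AND table = 'analysis_data'"
# optionally drop one of the ignored_ parts through ClickHouse instead of rm
clickhouse-client --query "ALTER TABLE vms.analysis_data DROP DETACHED PART 'ignored_20201208_23116_23116_0' SETTINGS allow_drop_detached = 1"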

NiFi: ListFile Processor is not detecting file changes sometimes

The ListFile processor is not detecting changes to a previously processed file and reprocessing it. FYI, I have already tried the following options for reprocessing, and only the hack mentioned last works. This is on a single-node NiFi instance I am running in my development environment.
Update Scenario: the ListFile processor does not detect file content changes and trigger automatically after an update (i.e. file edits using the vim editor).
Timestamp modification Scenario: changing the file timestamp with the touch -c command changes the file timestamp, but this does not auto-trigger the ListFile processor either.
Stop-start Scenario: stopping and starting the whole process group in NiFi after changing the file as mentioned above also does not trigger the ListFile processor.
Waiting Clause: waiting long enough after the file change does not help either, just in case we assume it will auto-trigger after some delay.
HACK: the only way I am able to trigger reprocessing of the file by the ListFile processor is by changing the wildcard expression for "File Filter" in the ListFile processor in a harmless, idempotent manner, for example from .*test.*\.csv to test.*\.csv and vice versa later (i.e. going back and forth like this for repeated reprocessing).
Reprocessing files with the same old names but modified data is a requirement for us. Sometimes forced reprocessing of even an unmodified file may also be required in case of unanticipated data issues upstream or downstream. Please help!
UPDATE
Still facing this sporadic behavior! Only a restart of NiFi helps when the ListFile processor fails to respond to a file change.
This is probably a delayed answer.
The old list processors like ListFile/ListFTP/ListSFTP etc. used only a timestamp-tracking strategy to identify changed files. The processor cached the last-seen timestamp in its processor state and used it to list only files with a newer timestamp.
However, this approach was very buggy, so a much better strategy called Entity Tracking was introduced. This approach gives a broader range of monitoring of file changes. It keeps track of the following parameters for each file in the specified directory:
Name
Size
Last modified timestamp
Any change to a file is reflected in these key parameters. Since they are cached, any difference is treated as a change, and thus changed files appear on the success connection.
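On the forced-reprocessing requirement: with either strategy, clearing the processor's stored state (right-click the stopped processor, View state, then Clear state) makes ListFile forget what it has already listed, so everything matching the filter gets listed again. The same thing can be scripted against the REST API; this is only a sketch that assumes an unsecured single-node instance on the default HTTP port, and the processor id is a placeholder you would copy from the processor's settings:
# stop the ListFile processor first, then clear its stored listing state
curl -X POST "http://localhost:8080/nifi-api/processors/<listfile-processor-id>/state/clear-requests"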

atomic hadoop fs move

While building an infrastructure for one of my current projects I've faced the problem of replacement of already existing HDFS files. More precisely, I want to do the following:
We have a few machines (log-servers) which continuously generate logs. We have a dedicated machine (log-preprocessor) which is responsible for receiving log chunks (each chunk is about 30 minutes in length and 500-800 MB in size) from the log-servers, preprocessing them, and uploading them to the HDFS of our Hadoop cluster.
Preprocessing is done in 3 steps:
for each log-server: filter (in parallel) the received log chunk (the output file is about 60-80 MB)
combine (merge-sort) all output files from step 1 and do some minor filtering (additionally, the 30-minute files are combined into 1-hour files)
using the current mapping from an external DB, process the file from step 2 to obtain the final logfile and put this file into HDFS.
The final logfiles are to be used as input for several periodic Hadoop applications which run on the Hadoop cluster. In HDFS, logfiles are stored as follows:
hdfs:/spool/.../logs/YYYY-MM-DD.HH.MM.log
Problem description:
The mapping used in step 3 changes over time, and we need to reflect these changes by recomputing step 3 and replacing the old HDFS files with new ones. This update is performed with some periodicity (e.g. every 10-15 minutes), at least for the last 12 hours. Please note that if the mapping has changed, the result of applying step 3 to the same input file may be significantly different (it will not be just a superset/subset of the previous result). So we need to overwrite the existing files in HDFS.
However, we can't just do hadoop fs -rm and then hadoop fs -copyFromLocal, because if some Hadoop application is using a file which has been temporarily removed, the app may fail. The solution I use is to put a new file next to the old one; the files have the same name but different suffixes denoting the file's version. Now the layout is the following:
hdfs:/spool/.../logs/2012-09-26.09.00.log.v1
hdfs:/spool/.../logs/2012-09-26.09.00.log.v2
hdfs:/spool/.../logs/2012-09-26.09.00.log.v3
hdfs:/spool/.../logs/2012-09-26.10.00.log.v1
hdfs:/spool/.../logs/2012-09-26.10.00.log.v2
Any Hadoop application, during its start (setup), chooses the files with the most up-to-date versions and works with them. So even if an update is in progress, the application will not experience any problems because no input file is removed.
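Just to illustrate the version-selection step from the shell, a sketch like the following picks the newest version of one hourly file (the /spool/logs prefix is a placeholder for your real directory; an application would do the equivalent with the FileSystem listing API):
# sort -V orders v1 < v2 < ... < v10 correctly; keep only the newest version
hdfs dfs -ls /spool/logs/2012-09-26.09.00.log.v* | awk '{print $NF}' | sort -V | tail -1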
Questions:
Do you know of an easier approach to this problem that does not use this complicated/ugly file versioning?
Some applications may start using an HDFS file which is currently being uploaded but is not yet complete (applications see the file in HDFS but don't know whether it is consistent). In the case of gzip files, this may lead to failed mappers. Could you please advise how I could handle this issue? I know that for local file systems I can do something like:
cp infile /finaldir/outfile.tmp && mv /finaldir/outfile.tmp /finaldir/outfile
This works because mv is an atomic operation; however, I'm not sure that this is the case for HDFS. Could you please advise whether HDFS has some atomic operation like mv in conventional local file systems?
Thanks in advance!
IMO, the file rename approach is absolutely fine to go with.
HDFS, up to 1.x, lacks atomic renames (they are dirty updates, IIRC), but the operation has usually been considered 'atomic-like' and has never caused problems in the specific scenario you have in mind here. You can rely on this without worrying about a partial state, since the source file has already been created and closed.
HDFS 2.x onwards supports proper atomic renames (via a new API call) that replace the earlier version's dirty one. This is also the default behavior of rename if you use the FileContext APIs.
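For the second question, the usual pattern is the HDFS equivalent of your local cp && mv: upload under a temporary name on the same HDFS, then rename into place once the file is fully written and closed. A sketch (the /spool/logs prefix is a placeholder for your real directory):
# MapReduce input formats ignore files starting with . or _ by default,
# so the temporary name is invisible to running jobs while it uploads;
# note that, unlike POSIX mv, hdfs dfs -mv fails if the destination exists,
# which is fine here because each version name is new
hdfs dfs -put 2012-09-26.09.00.log /spool/logs/.2012-09-26.09.00.log.v3.tmp && \
  hdfs dfs -mv /spool/logs/.2012-09-26.09.00.log.v3.tmp /spool/logs/2012-09-26.09.00.log.v3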

Transaction implementation for a simple file

I'm part of a team writing an application for embedded systems. The application often suffers from data corruption caused by sudden power loss. I thought that implementing some kind of transactions would stop this from happening. One scenario would be to copy the affected area of a file to some additional storage (a transaction log) before writing to it. What are the other possibilities?
Databases use a variety of techniques to assure that the state is properly persisted.
The DBMS often retains a replicated control file: several synchronized copies on several devices. Two is enough; more if you're paranoid. The control file provides a few key parameters used to locate the other files and their expected states. The control file can include a "database version number".
Each file has a "version number" in several forms. Often it is stored in plain form plus in some XOR complement, so that the two version numbers can be trivially checked to have the correct relationship and to match the control file's version number.
All transactions are written to a transaction journal. The transaction journal is then written to the database files.
Before writing to database files, the original data block is copied to a "before image journal", or rollback segment, or some such.
When the block is written to the file, the sequence numbers are updated, and the block is removed from the transaction journal.
You can read up on RDBMS techniques for reliability.
There are a number of ways to do this; generally the only assumption required is that small writes (under 4 KB) are atomic. For example, here's how CouchDB does it:
A 4k header contains, amongst other things, the file offset of the root of the BTree containing all the data.
The file is append-only. When updates are required, write the update to the end of the file, followed by any modified BTree nodes, up to and including the root. Then, flush the data, and write the new address of the root node to the header.
If the program dies while writing an update but before writing the header, the extra data at the end of the file is discarded. If it fails after writing the header, the write is complete and all is well. Because the file is append-only, these are the only failure scenarios. This also has the advantage of providing multi-version concurrency control with no read locks.
When the file grows too long, simply read out all the 'live' data and write it to a new file, then delete the original.
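If you want to experiment with the same idea (append the data first, publish the new root last) outside of a database, here is a toy shell sketch; the file names and the $record variable are made up, sync with a file argument needs a reasonably recent GNU coreutils, and a fully robust version would also fsync the containing directory after the rename:
# append a record to the append-only data file and flush it to disk
printf '%s\n' "$record" >> data.log
sync data.log
# only after the data is durable, atomically publish the new valid length
stat -c '%s' data.log > header.tmp
sync header.tmp
mv header.tmp header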
You can avoid implementing such transaction logs yourself by using existing transaction managers around file-systems, e.g. XADisk.
The old link is no longer available; a GitHub repo is here.
