How to skip and continue file load failures in Vertica? - vertica

I'm trying to load thousands of compressed files at once over NFS into Vertica with a COPY statement using a glob expression, but the operation aborts with the following error:
ERROR 6253: Error occured during LZO header processing: expecting more than 8 bytes, possibly file corrupted
What's the right way to tell Vertica to continue loading all the good files and just report which ones failed at the end of the load?

If you are running Vertica 7.2.x, you can use a new parameter called ERROR TOLERANCE; it does not exist in earlier versions.
You can see the full list of COPY options in the Vertica documentation.
Treats each source during execution independently when loading data. The statement is not rolled back if a single source is invalid. The invalid source is skipped and the load continues. This parameter is disabled for ORC files, Parquet files, and when using a fenced User Defined Load (UDL).
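For reference, a rough sketch of issuing such a COPY with ERROR TOLERANCE from Python through the vertica_python client might look like the following. The table name, glob path, and connection details are placeholders, and the exact placement of the LZO and ERROR TOLERANCE options should be checked against the COPY reference for your version.

# Sketch only: with ERROR TOLERANCE, an invalid source is skipped instead of
# rolling back the whole statement. Placeholders throughout.
import vertica_python

conn_info = {"host": "vertica-host", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "mydb"}  # placeholder connection details

copy_sql = """
    COPY raw_logs
    FROM '/mnt/nfs/incoming/*.lzo' ON ANY NODE LZO
    ERROR TOLERANCE
"""

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(copy_sql)
    conn.commit()
finally:
    conn.close()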
The only other alternative would be to precheck the files' validity in a script or to load them separately (loading them one by one would obviously be a performance issue, so I would opt for the precheck).
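As a minimal sketch of the precheck idea (assuming the files were produced by lzop, whose output starts with a fixed 9-byte magic sequence; adjust the check if your compressor writes a different header), something like this could separate obviously truncated or corrupt files from the ones worth handing to COPY:

# precheck_lzo.py -- sketch of the precheck idea; not Vertica-specific.
import glob
import sys

LZOP_MAGIC = b"\x89LZO\x00\r\n\x1a\n"  # lzop's 9-byte magic header (assumption: files come from lzop)

good, bad = [], []
for path in sorted(glob.glob(sys.argv[1])):  # e.g. python precheck_lzo.py '/mnt/nfs/*.lzo'
    try:
        with open(path, "rb") as f:
            header = f.read(len(LZOP_MAGIC))
        (good if header == LZOP_MAGIC else bad).append(path)
    except OSError:
        bad.append(path)

print("loadable: %d, suspect: %d" % (len(good), len(bad)))
for path in bad:
    print("SKIP", path)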

Related

Errors: ORA-00604 & ORA-01578 & ORA-01110

I get ORA-00604 & ORA-01578 & ORA-01110. Does anyone have a solution?
That doesn't sound good.
ORA-01578: ORACLE data block corrupted (file # string, block # string)
Cause: The data block indicated was corrupted, mostly due to software errors.
Action: Try to restore the segment containing the block indicated.
This may involve dropping the segment and recreating it.
If there is a trace file, report the errors in it to your ORACLE representative.
The system02 database file is corrupted; it's possible the hard disk crashed, and you should check it for errors. As this is the database server, I presume it would be safer to replace the disk rather than just repair it, because once one data block gets corrupted, there's a good chance it will happen again (according to Murphy's law, at least).
Furthermore, it means that you'll have to restore the database. I hope you have a backup (one taken BEFORE the corruption happened).

ClickHouse log shows hash of uncompressed files doesn't match

ClickHouse logs printed the error messages as below frequently:
2021.01.07 00:55:24.112567 [ 6418 ] {} <Error> vms.analysis_data (7056dab3-3677-455b-a07a-4d16904479b4):
Code: 40, e.displayText() = DB::Exception: Checksums of parts don't match:
hash of uncompressed files doesn't match (version 20.11.4.13 (official build)).
Data after merge is not byte-identical to data on another replicas. There could be several reasons:
1. Using newer version of compression library after server update.
2. Using another compression method.
3. Non-deterministic compression algorithm (highly unlikely).
4. Non-deterministic merge algorithm due to logical error in code.
5. Data corruption in memory due to bug in code.
6. Data corruption in memory due to hardware issue.
7. Manual modification of source data after server startup.
8. Manual modification of checksums stored in ZooKeeper.
9. Part format related settings like 'enable_mixed_granularity_parts' are different on different replicas.
We will download merged part from replica to force byte-identical result.
We use the same version (20.11.4.13) and the same compression method (LZ4) for all data nodes in the production environment, and we do not modify the data files or the values stored in ZooKeeper.
So my questions are:
How was the error caused? More specifically, in which cases does the ClickHouse server throw this exception?
Is there a checksum-checking mechanism among the replicas while merging parts?
I also found that one of our data nodes has many folders named like "ignored_20201208_23116_23116_0" in the detached folder; are these the corrupted data caused by the problem described above?
Thanks.
You need to upgrade all nodes to 20.11.6.6 ASAP.
The cause of these errors is a serious bug related to AIO.
The ignored_ folders are not related to this bug; you can remove them.
(translated) Inactive parts are not deleted immediately because, when a new part is written, fsync is not called, so for some time the new part exists only in the server's RAM (OS cache). If the server reboots unexpectedly, the new (merged) part can be lost or damaged. During startup ClickHouse checks the integrity of the parts; if it detects a problem with a merged part, it returns the inactive parts to the active list and merges them again later. In that case the broken part is renamed (the prefix broken_ is added) and moved to the detached folder. If the integrity check finds no problems with the merged part, the original inactive parts are renamed (the prefix ignored_ is added) and moved to the detached folder.
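If you want to confirm which detached parts are the harmless ignored_ ones before removing them, a small sketch using the clickhouse-driver package could query the system.detached_parts table (host and credentials are placeholders):

# Sketch: list detached parts whose names start with 'ignored_' across all tables.
from clickhouse_driver import Client

client = Client(host="ch-node-1")  # placeholder host; add user/password as needed

rows = client.execute(
    "SELECT database, table, name "
    "FROM system.detached_parts "
    "WHERE name LIKE 'ignored_%'"
)
for database, table, name in rows:
    print("%s.%s: %s" % (database, table, name))  # per the answer above, these are safe to remove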

Azure Databricks - Receive error Zip bomb detected! The file would exceed the max. ratio of compressed file size to the size of the expanded data

I have been through many links to solve this problem. However, none have helped me, primarily because I am facing this error on Azure Databricks.
I am trying to read Excel files located in the ADLS curated zone. There are about 25 Excel files. My program loops through the Excel files and reads them into a PySpark DataFrame. However, after reading about 9 Excel files, I receive the error below:
Py4JJavaError: An error occurred while calling o1481.load.
: java.io.IOException: Zip bomb detected! The file would exceed the max. ratio of compressed file size to the size of the expanded data.
This may indicate that the file is used to inflate memory usage and thus could pose a security risk.
You can adjust this limit via ZipSecureFile.setMinInflateRatio() if you need to work with files which exceed this limit.
Uncompressed size: 6111064, Raw/compressed size: 61100, ratio: 0.009998
I installed the Maven package org.apache.poi.openxml4j, but when I try to call it using the following simple import statement, I receive the error "No module named 'org'":
import org.apache.poi.openxml4j.util.ZipSecureFile
Does anyone have any ideas about how to set ZipSecureFile.setMinInflateRatio() to 0 in Azure Databricks?
Best regards,
Sree
The "Zip bomb detected" exception will occur if the expanded file crosses the default MinInflateRatio set in the Apache jar. Apache includes a setting called MinInflateRatio which is configurable via ZipSecureFile.setMinInflateRatio() ; this will now be set to 0.0 by default to allow large files.
Checkout known issue in POI: https://bz.apache.org/bugzilla/show_bug.cgi?id=58499
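Since ZipSecureFile is a JVM class, a Java-style import cannot work in a Python notebook; one way to reach it from PySpark is through the JVM gateway exposed as spark._jvm. A minimal sketch, assuming the POI openxml4j jar that ships with your Excel reader is already on the cluster, and with a placeholder reader format and path:

# Sketch: relax the POI zip-bomb ratio check from a PySpark notebook via the JVM gateway.
# Run this before reading the Excel files. Note it only changes the setting in the JVM it
# runs in (the driver); if your reader parses workbooks on executors, it may need to be
# applied there as well.
zip_secure = spark._jvm.org.apache.poi.openxml4j.util.ZipSecureFile
zip_secure.setMinInflateRatio(0.0)  # 0.0 disables the compressed/expanded ratio check

df = (spark.read.format("com.crealytics.spark.excel")  # placeholder reader and options
      .option("header", "true")
      .load("abfss://curated@<account>.dfs.core.windows.net/some/path/file.xlsx"))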

atomic hadoop fs move

While building the infrastructure for one of my current projects, I've run into the problem of replacing already existing HDFS files. More precisely, I want to do the following:
We have a few machines (log-servers) which continuously generate logs. We have a dedicated machine (log-preprocessor) which is responsible for receiving log chunks (each chunk is about 30 minutes in length and 500-800 MB in size) from the log-servers, preprocessing them, and uploading them to the HDFS of our Hadoop cluster.
Preprocessing is done in 3 steps:
for each log-server: filter (in parallel) the received log chunk (the output file is about 60-80 MB)
combine (merge-sort) all output files from step 1 and do some minor filtering (additionally, 30-minute files are combined into 1-hour files)
using the current mapping from an external DB, process the file from step 2 to obtain the final logfile and put this file into HDFS.
Final logfiles are used as input for several periodic Hadoop applications which run on the Hadoop cluster. In HDFS, logfiles are stored as follows:
hdfs:/spool/.../logs/YYYY-MM-DD.HH.MM.log
Problem description:
The mapping used in step 3 changes over time, and we need to reflect these changes by recomputing step 3 and replacing old HDFS files with new ones. This update is performed periodically (e.g. every 10-15 minutes), at least for the last 12 hours. Please note that, if the mapping has changed, the result of applying step 3 to the same input file may be significantly different (it will not be just a superset/subset of the previous result). So we need to overwrite existing files in HDFS.
However, we can't just do hadoop fs -rm and then hadoop fs -copyFromLocal, because if some Hadoop application is using the file which is temporarily removed, the app may fail. The solution I use: put a new file near the old one; the files have the same name but different suffixes denoting their version. Now the layout is the following:
hdfs:/spool/.../logs/2012-09-26.09.00.log.v1
hdfs:/spool/.../logs/2012-09-26.09.00.log.v2
hdfs:/spool/.../logs/2012-09-26.09.00.log.v3
hdfs:/spool/.../logs/2012-09-26.10.00.log.v1
hdfs:/spool/.../logs/2012-09-26.10.00.log.v2
Any Hadoop application, during its start (setup), chooses the files with the most up-to-date versions and works with them. So even if some update is going on, the application will not experience any problems because no input file is removed.
Questions:
Do you know some easier approach to this problem which does not use this complicated/ugly file versioning?
Some applications may start using an HDFS file which is currently being uploaded but not yet complete (applications see this file in HDFS but don't know whether it is consistent). In the case of gzip files this may lead to failed mappers. Could you please advise how I could handle this issue? I know that for local file systems I can do something like:
cp infile /finaldir/outfile.tmp && mv /finaldir/outfile.tmp /finaldir/outfile
This works because mv is an atomic operation; however, I'm not sure that this is the case for HDFS. Could you please advise whether HDFS has some atomic operation like mv in conventional local file systems?
Thanks in advance!
IMO, the file rename approach is absolutely fine to go with.
HDFS, up to 1.x, lacks truly atomic renames (they are dirty updates, IIRC), but the operation has usually been considered 'atomic-like' and has never caused problems in the specific scenario you have in mind here. You can rely on it without worrying about a partial state, since the source file is already created and closed.
HDFS 2.x onwards supports proper atomic renames (via a new API call) that replace the earlier version's dirty one. This is also the default behavior of rename if you use the FileContext APIs.
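To address the second question (readers picking up a half-uploaded file), one option is to mirror the local cp-then-mv idiom: upload under a temporary name and rename into place once the file is closed. Below is a rough Python sketch driving the hadoop CLI; the paths and the temporary suffix are made up for illustration. Note that hadoop fs -mv will not overwrite an existing destination, so this complements rather than replaces the versioned-filename scheme.

# Sketch of "upload under a temporary name, then rename into place" on HDFS.
import subprocess

def hdfs_put_then_rename(local_path, final_hdfs_path):
    tmp_path = final_hdfs_path + ".__uploading__"  # hypothetical temp suffix; readers ignore it
    # Upload under the temporary name first.
    subprocess.run(["hadoop", "fs", "-put", local_path, tmp_path], check=True)
    # Rename is a metadata operation on the NameNode, so the final name appears all at once.
    subprocess.run(["hadoop", "fs", "-mv", tmp_path, final_hdfs_path], check=True)

hdfs_put_then_rename("2012-09-26.09.00.log",
                     "hdfs:/spool/logs/2012-09-26.09.00.log.v3")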

What can lead to failures in appending data to a file?

I maintain a program that is responsible for collecting data from a data acquisition system and appending that data to a very large (size > 4GB) binary file. Before appending data, the program must validate the header of this file in order to ensure that the meta-data in the file matches that which has been collected. In order to do this, I open the file as follows:
data_file = fopen(file_name, "rb+");
I then seek to the beginning of the file in order to validate the header. When this is done, I seek to the end of the file as follows:
_fseeki64(data_file, _filelengthi64(_fileno(data_file)), SEEK_SET);
At this point, I write the data that has been collected using fwrite(). I am careful to check the return values from all I/O functions.
One of the computers (Windows 7, 64-bit) on which we have been testing this program intermittently shows a condition where the data appears to have been written to the file, yet neither the file's last-changed time nor its size changes. If any of the calls to fopen(), fseek(), or fwrite() fail, my program will throw an exception, which will result in aborting the data collection process and logging the error. On this machine, none of these failures seem to be occurring. Something that makes the matter even more mysterious is that, if a restore point is set on the host file system, the problem goes away only to re-appear intermittently at some future time.
We have tried to reproduce this problem on other machines (a Vista 32-bit operating system) but have had no success in replicating the issue (this doesn't necessarily mean anything, since the problem is so intermittent in the first place).
Has anyone else encountered anything similar to this? Is there a potential remedy?
Further Information
I have now found that the failure occurs when fflush() is called on the file and that the Win32 error returned by GetLastError() is 665 (ERROR_FILE_SYSTEM_LIMITATION). Searching Google for this error leads to a bunch of reports related to "extents" for SQL Server files. I suspect the file system is exhausting some sort of journaling or metadata resource because we are growing a large file by repeatedly opening it, appending a chunk of data, and closing it. I am now looking for an understanding of this particular error, with the hope of coming up with a valid remedy.
The file append is failing because of a file system fragmentation limit. The question was answered in What factors can lead to Win32 error 665 (file system limitation)?
