I'm running into an issue using distcp to copy files - every copy fails with an IO Exception (Checksum mismatch), even when performing a simple copy within the cluster (i.e. hadoop distcp -pbugctrx /foo/bar /foo/baz).
If I force the copy to complete using -skipcrccheck, I can see that the checksums differ (hdfs dfs -checksum), but that this isn't caused by a difference in the actual source data (hdfs dfs -cat | md5sum returns matching digests for source and destination).
I'm leery of disabling a data integrity check if I don't need to. Is there a better way to address this failing check than just ignoring it?
If the source and target are in different encryption zones, the checksum comparison will also fail: HDFS checksums are computed over the raw (encrypted) block data, which differs between zones even when the decrypted content is identical.
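A commonly suggested workaround for copies between encryption zones (a sketch; the paths are placeholders) is to let DistCp read the decrypted bytes and skip the CRC comparison:
hadoop distcp -update -skipcrccheck /source_zone/data /target_zone/data
This trades away the CRC safety net, so a manual comparison such as hdfs dfs -cat | md5sum on both sides is a sensible follow-up.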
I have multiple text files.
Their total size exceeds the largest disk available to me (~1.5 TB).
A Spark program reads a single input text file from HDFS, so I need to combine those files into one. (I cannot rewrite the program code; I am given only the *.jar file for execution.)
Does HDFS have such a capability? How can I achieve this?
What I understood from your question is that you want to concatenate multiple files into one. Here is a solution which might not be the most efficient way of doing it, but it works. Suppose you have two files, file1 and file2, and you want to get a combined file named ConcatenatedFile. Here is the script for that:
hadoop fs -cat /hadoop/path/to/file/file1.txt /hadoop/path/to/file/file2.txt | hadoop fs -put - /hadoop/path/to/file/Concatenate_file_Folder/ConcatenateFile.txt
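If there are many files, the same pattern works with a glob (reusing the illustrative paths above):
hadoop fs -cat /hadoop/path/to/file/*.txt | hadoop fs -put - /hadoop/path/to/file/Concatenate_file_Folder/ConcatenateFile.txt
Keep in mind that this streams every byte through the client machine, as a later answer points out.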
Hope this helps.
HDFS by itself does not provide such a capability. All out-of-the-box features (like hdfs dfs -text * with pipes, or FileUtil's copy methods) funnel all the data through your client machine.
In my experience we have always used our own MapReduce jobs to merge many small files in HDFS in a distributed way.
So you have two solutions:
1. Write your own simple MapReduce/Spark job to combine text files with your format (see the sketch after this list).
2. Find an already-implemented solution for this kind of purpose.
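For solution #1, a minimal Spark sketch (assuming plain text input and that a single output file is acceptable; the paths and app name are placeholders):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("ConcatTextFiles").getOrCreate()

// Read all parts as text, shrink to one partition, write a single file.
// coalesce(1) funnels everything through one task, which is slow for huge
// inputs but guarantees exactly one part-00000 under /output/merged.
spark.sparkContext
  .textFile("/input/dir")
  .coalesce(1)
  .saveAsTextFile("/output/merged")

spark.stop()
The output is a directory containing one part file (/output/merged/part-00000), which can be renamed with hadoop fs -mv if the consuming jar insists on a single file path.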
About solution #2: there is a simple project, FileCrush, for combining text or sequence files in HDFS. It might be suitable for you; check it out.
Example of usage:
hadoop jar filecrush-2.0-SNAPSHOT.jar crush.Crush -Ddfs.block.size=134217728 \
--input-format=text \
--output-format=text \
--compress=none \
/input/dir /output/dir 20161228161647
I had trouble running it without these options (especially -Ddfs.block.size and the output-file date prefix 20161228161647), so make sure you run it with them.
You can do a pig job:
A = LOAD '/path/to/inputFiles' as (SCHEMA);
STORE A into '/path/to/outputFile';
Doing an hdfs cat and then putting the result back into hdfs means all of this data is processed on the client node and will degrade your network.
Do we need to verify the checksum after we move files to Hadoop (HDFS) from a Linux server through WebHDFS?
I would like to make sure the files on HDFS have no corruption after they are copied. But is checking the checksum necessary?
I read that the client does a checksum before data is written to HDFS.
Can somebody help me understand how I can make sure that a source file on the Linux system is the same as the ingested file on HDFS using WebHDFS?
If your goal is to compare two files residing on HDFS, I would not use "hdfs dfs -checksum URI" as in my case it generates different checksums for files with identical content.
In the below example I am comparing two files with the same content in different locations:
The old-school md5sum method returns the same digest for both:
$ hdfs dfs -cat /project1/file.txt | md5sum
b9fdea463b1ce46fabc2958fc5f7644a -
$ hdfs dfs -cat /project2/file.txt | md5sum
b9fdea463b1ce46fabc2958fc5f7644a -
However, the checksum generated by HDFS is different for these files with the same content:
$ hdfs dfs -checksum /project1/file.txt
0000020000000000000000003e50be59553b2ddaf401c575f8df6914
$ hdfs dfs -checksum /project2/file.txt
0000020000000000000000001952d653ccba138f0c4cd4209fbf8e2e
A bit puzzling, as I would expect identical checksums to be generated for identical content. The likely explanation: hdfs dfs -checksum returns a composite MD5-of-MD5-of-CRC value that depends on the block size and bytes-per-checksum settings in effect when each file was written, not just on the file's bytes, so the same content can yield different checksums.
The checksum for a file can be calculated using the hadoop fs command.
Usage: hadoop fs -checksum URI
Returns the checksum information of a file.
Example:
hadoop fs -checksum hdfs://nn1.example.com/file1
hadoop fs -checksum file:///path/in/linux/file1
Refer to the Hadoop documentation for more details.
So if you want to compare file1 on both Linux and HDFS, you can use the above utility.
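A practical way to do that comparison, side-stepping HDFS's composite checksum format, is to hash the raw bytes on both sides (reusing the example paths above):
hadoop fs -cat hdfs://nn1.example.com/file1 | md5sum
md5sum /path/in/linux/file1
If the two digests match, the ingested file has exactly the same bytes as the source.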
I wrote a library with which you can calculate the checksum of a local file, just the way Hadoop does it for HDFS files.
So you can compare the checksums to cross-check.
https://github.com/srch07/HDFSChecksumForLocalfile
If you are doing this check via the API:
import org.apache.hadoop.fs._
import org.apache.hadoop.io._
Option 1: for the value b9fdea463b1ce46fabc2958fc5f7644a
val md5:String = MD5Hash.digest(FileSystem.get(hadoopConfiguration).open(new Path("/project1/file.txt"))).toString
Option 2: for the value 3e50be59553b2ddaf401c575f8df6914
val md5:String = FileSystem.get(hadoopConfiguration).getFileChecksum(new Path("/project1/file.txt")).toString.split(":")(0)
HDFS does a CRC check. For each and every file it creates a .crc file to make sure there is no corruption.
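For illustration, the FS shell can download a file together with its checksum file so you can see this mechanism at work (the path is a placeholder):
hadoop fs -get -crc /some/hdfs/file.txt /tmp/
ls -a /tmp/   # shows file.txt alongside the hidden .file.txt.crc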
Is there any way to retain the ownership/permissions while copying files in hadoop?
Tried hadoop fs -cp -p <src> <dest>. It didn't work.
Yes, of course you can, but I recommend using distcp. It is an advanced tool for copying data between clusters or within the same cluster, and it has a lot of options to optimize the execution. The command runs a MapReduce job, so long copies take less time and you can preserve all attributes.
Example:
hadoop distcp /source_dir/data \
/target_dir/data
hadoop distcp /source_dir/dataA \
/source_dir/dataB \
/target_dir/
The attributes that -p can preserve:
r: replication number
b: block size
u: user
g: group
p: permission
c: checksum-type
a: ACL
x: XAttr
t: timestamp
Another example, but preserving all attributes:
hadoop distcp -p rbugpcaxt \
/source_dir/data \
/target_dir/data
You can read more about this command in the hadoop-distcp documentation.
The most important attributes are not the owner, group, or permissions, which you can easily change after the copy command. The most important ones are ACLs, block size, replication number, and sometimes the timestamp; these are properties that you cannot change so easily after a simple copy (hdfs dfs -cp).
There is not, but you can (assuming you have the appropriate permissions) change the ownership after you copy the files.
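For example (the user, group, and mode here are placeholders):
hadoop fs -cp /source_dir/data /target_dir/data
hadoop fs -chown -R someuser:somegroup /target_dir/data
hadoop fs -chmod -R 750 /target_dir/data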
It is currently not possible to create two copies of a file while copying permissions. Depending on your use case, however, an option may be to move the files instead. For instance, I had to change the location of a file and its permissions while also keeping a backup (for which permissions didn't matter), so I moved the file with its permissions to the new location and copied it back to the original without them. I know that's not very helpful, but it's the best we have in Hadoop at the moment.
I'm looking for an efficient way to sync a list of directories from one Hadoop filesystem to another with the same directory structure.
For example, let's say HDFS1 is the official source where data is created, and once a week we need to copy newly created data under all data-2 directories to HDFS2:
**HDFS1**
hdfs://namenode1:port/repo/area-1/data-1
hdfs://namenode1:port/repo/area-1/data-2
hdfs://namenode1:port/repo/area-1/data-3
hdfs://namenode1:port/repo/area-2/data-1
hdfs://namenode1:port/repo/area-2/data-2
hdfs://namenode1:port/repo/area-3/data-1
**HDFS2** (subset of HDFS1 - only data-2)
hdfs://namenode2:port/repo/area-1/data-2
hdfs://namenode2:port/repo/area-2/data-2
In this case we have 2 directories to sync:
/repo/area-1/data-2
/repo/area-2/data-2
This can be done by:
hadoop distcp hdfs://namenode1:port/repo/area-1/data-2 hdfs://namenode2:port/repo/area-1
hadoop distcp hdfs://namenode1:port/repo/area-2/data-2 hdfs://namenode2:port/repo/area-2
This will run 2 Hadoop jobs, and if the number of directories is large - say, 500 different non-overlapping directories under hdfs://namenode1:port/ - this will create 500 Hadoop jobs, which is obvious overkill.
Is there a way to inject a custom directory list into distcp?
How can I make distcp create one job copying all paths in a custom list of directories?
Not sure if this answers the problem, but I noticed you haven't used the -update option. -update will only copy over the differences between the two file systems, so repeated syncs transfer just the new data...
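On the one-job question: DistCp accepts multiple source paths in a single invocation, and also a -f flag that reads the source list from a file, so one job can cover the whole custom list. A sketch using the paths from the question (the location of the list file is an assumption):
# sources.txt, one source path per line:
#   hdfs://namenode1:port/repo/area-1/data-2
#   hdfs://namenode1:port/repo/area-2/data-2
hadoop distcp -update -f hdfs://namenode1:port/tmp/sources.txt hdfs://namenode2:port/repo
One caveat: with a single target, both sources here would land under /repo by their last path component (both are named data-2), so preserving the area-1/area-2 layout may still require per-area targets, or copying the common /repo root with an exclude file (-filters, available in newer Hadoop versions).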
I'm using the following simple code to upload files to HDFS.
FileSystem hdfs = FileSystem.get(config);
hdfs.copyFromLocalFile(src, dst);
The files are generated by a webserver Java component and rotated and closed by logback in .gz format. I've noticed that sometimes a .gz file is corrupted.
> gunzip logfile.log_2013_02_20_07.close.gz
gzip: logfile.log_2013_02_20_07.close.gz: unexpected end of file
But the following command does show me the content of the file
> hadoop fs -text /input/2013/02/20/logfile.log_2013_02_20_07.close.gz
The impact of having such files is quite disastrous, since the aggregation for the whole day fails, and several slave nodes get blacklisted in such cases.
What can I do in such a case?
Can the hadoop copyFromLocalFile() utility corrupt the file?
Has anyone met a similar problem?
It shouldn't do - this error is normally associated with gzip files which weren't closed out when originally written to local disk, or which are being copied to HDFS before they have finished being written.
You should be able to check by running md5sum on the original file and on the copy in HDFS - if they match then the original file itself is corrupt:
hadoop fs -cat /input/2013/02/20/logfile.log_2013_02_20_07.close.gz | md5sum
md5sum /path/to/local/logfile.log_2013_02_20_07.close.gz
If they don't match, check the timestamps on the two files - the one in HDFS should have been modified after the local file system one; if instead the local file's timestamp is later, the file was probably still being written when the copy started.
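If the race with log rotation turns out to be the cause, a common mitigation (a sketch; the temporary name is a placeholder convention) is to upload to a hidden temporary name and rename only after the transfer completes, so downstream jobs never read a partially uploaded file:
hadoop fs -put logfile.log_2013_02_20_07.close.gz /input/2013/02/20/._logfile.tmp
hadoop fs -mv /input/2013/02/20/._logfile.tmp /input/2013/02/20/logfile.log_2013_02_20_07.close.gz
The HDFS rename is atomic, and running the upload only after logback has finished rotating (e.g. once the .close suffix appears, as in the filenames above) avoids copying a file that is still being written.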