I can create directories in my Hadoop setup using hadoop fs -mkdir /test/input. I can check this by browsing localhost:50070, and it works:
/test
/tmp
But when I check for existence from java:
FileSystem fs = FileSystem.get(conf);
fs.exists(new Path("/tmp")); // returns true
fs.exists(new Path("/test")); // returns false
The same thing happens even when I create test inside /tmp. What's wrong?
Thanks,
FileSystem.get(conf) may be returning the local file system, where the /tmp/ folder exists and /test/ does not. Try specifying the file system that you want to get:
FileSystem fs = new Path("hdfs://localhost:8020/").getFileSystem(conf);
I'm not sure about the port; you may need 9000.
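Equivalently, you can point the configuration at HDFS explicitly, so that the plain FileSystem.get(conf) resolves to HDFS rather than the local filesystem. A minimal sketch (in Scala; the NameNode address is an assumption, check fs.defaultFS in your core-site.xml):

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
// either set the default filesystem on the configuration...
conf.set("fs.defaultFS", "hdfs://localhost:8020")   // assumed NameNode address
val fs = FileSystem.get(conf)
// ...or ask for the HDFS filesystem directly by URI
val hdfs = FileSystem.get(new URI("hdfs://localhost:8020"), conf)

println(fs.exists(new Path("/test")))   // now checks HDFS, not the local disk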
Do we need to verify the checksum after we move files to Hadoop (HDFS) from a Linux server through WebHDFS?
I would like to make sure the files on HDFS are not corrupted after they are copied. But is checking the checksum necessary?
I read that the client computes a checksum before data is written to HDFS.
Can somebody help me understand how I can make sure that the source file on the Linux system is the same as the ingested file on HDFS when using WebHDFS?
If your goal is to compare two files residing on HDFS, I would not use "hdfs dfs -checksum URI" as in my case it generates different checksums for files with identical content.
In the example below I am comparing two files with the same content in different locations.
The old-school md5sum method returns the same checksum:
$ hdfs dfs -cat /project1/file.txt | md5sum
b9fdea463b1ce46fabc2958fc5f7644a -
$ hdfs dfs -cat /project2/file.txt | md5sum
b9fdea463b1ce46fabc2958fc5f7644a -
However, the checksum generated on HDFS is different for files with the same content:
$ hdfs dfs -checksum /project1/file.txt
0000020000000000000000003e50be59553b2ddaf401c575f8df6914
$ hdfs dfs -checksum /project2/file.txt
0000020000000000000000001952d653ccba138f0c4cd4209fbf8e2e
A bit puzzling, as I would expect identical checksums to be generated for identical content. (The value returned by -checksum is an MD5-of-MD5-of-CRC that also encodes block-level parameters, so byte-identical files can end up with different checksums if they were written with different block or bytes-per-checksum settings.)
The checksum for a file can be calculated using the hadoop fs command.
Usage: hadoop fs -checksum URI
Returns the checksum information of a file.
Example:
hadoop fs -checksum hdfs://nn1.example.com/file1
hadoop fs -checksum file:///path/in/linux/file1
Refer to the Hadoop documentation for more details.
So if you want to compare file1 on both Linux and HDFS, you can use the above utility.
I wrote a library with which you can calculate the checksum of a local file, just the way Hadoop does it for HDFS files.
So you can compare the checksums to cross-check.
https://github.com/srch07/HDFSChecksumForLocalfile
If you are doing this check via the API:
import org.apache.hadoop.fs._
import org.apache.hadoop.io._
Option 1: for the value b9fdea463b1ce46fabc2958fc5f7644a
val md5:String = MD5Hash.digest(FileSystem.get(hadoopConfiguration).open(new Path("/project1/file.txt"))).toString
Option 2: for the value 3e50be59553b2ddaf401c575f8df6914
val md5:String = FileSystem.get(hadoopConfiguration).getFileChecksum(new Path("/project1/file.txt")).toString.split(":")(0)
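Option 1 can also be used to verify a local source file against its HDFS copy end to end. A minimal sketch (the local and HDFS paths here are hypothetical):

import java.io.FileInputStream
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.io.MD5Hash

val conf = new Configuration()
val fs = FileSystem.get(conf)

// MD5 of the local source file
val localMd5 = MD5Hash.digest(new FileInputStream("/data/local/file.txt")).toString
// MD5 of the same file as read back from HDFS
val hdfsMd5  = MD5Hash.digest(fs.open(new Path("/project1/file.txt"))).toString

println(localMd5 == hdfsMd5)   // true if the content matches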
HDFS does a CRC check. For each and every file it creates a .crc file to make sure there is no corruption.
Running Spark on EMR (AMI 3.8). When trying to write an RDD to a local file, I am getting no results on the name/master node.
On my previous EMR cluster (same version of Spark installed with bootstrap script instead of as an add-on to EMR), the data would write to the local dir on the name node. Now I can see it appearing in "/home/hadoop/test/_temporary/0/task*" directories on the other nodes in the cluster, but only the 'SUCCESS' file on the master node.
How can I get the file to write to the name/master node only?
Here is an example of the command I am using:
myRDD.saveAsTextFile("file:///home/hadoop/test")
I can do this in a roundabout way by pushing to HDFS first and then writing the results to the local filesystem with shell commands. But I would love to hear if others have a more elegant approach.
// rdd to local text file (goes via HDFS, then concatenates the part files locally)
import org.apache.spark.rdd.RDD
import scala.sys.process._

def rddToFile(rdd: RDD[_], filePath: String) = {
  // bash commands: concatenate the HDFS part files into a local file, and remove the HDFS dir
  val createFileStr = "hadoop fs -cat " + filePath + "/part* > " + filePath
  val removeDirStr = "hadoop fs -rm -r " + filePath
  // rm the HDFS dir in case it already exists
  Process(Seq("bash", "-c", removeDirStr)).!
  // save the data to HDFS
  rdd.saveAsTextFile(filePath)
  // write the data to a local file of the same name
  Process(Seq("bash", "-c", createFileStr)).!
  // rm the HDFS dir
  Process(Seq("bash", "-c", removeDirStr)).!
}
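With the function above, a call would look like this (using the RDD from the question); the concatenated text file then lands on the local disk of the node that runs the driver:

rddToFile(myRDD, "/home/hadoop/test")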
I am trying to clear the content of fileA and then copy the content of fileB into fileA. How can I do it?
Thanks in advance!
Just delete and copy
hadoop fs -rm URI_A
hadoop fs -cp URI_B URI_A
Programmatically: FileSystem.create(), FileSystem.delete(), FileSystem.rename(). You usually obtain the FileSystem via the static FileSystem.get(conf). FileSystem is abstract and can operate on HDFS or other filesystems (e.g. wasb://).
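A minimal sketch of that programmatic route, assuming both files live on the same HDFS (the paths are hypothetical):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.io.IOUtils

val conf = new Configuration()
val fs = FileSystem.get(conf)

val fileA = new Path("/data/fileA")    // hypothetical destination
val fileB = new Path("/data/fileB")    // hypothetical source

// drop fileA's old content
if (fs.exists(fileA)) fs.delete(fileA, false)

// recreate fileA and stream fileB's content into it
val in  = fs.open(fileB)
val out = fs.create(fileA)             // create() makes a fresh, empty file
IOUtils.copyBytes(in, out, conf, true) // last arg closes both streams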
-put and -copyFromLocal are documented as identical, while most examples use the verbose variant -copyFromLocal. Why?
Same thing for -get and -copyToLocal
-copyFromLocal is similar to -put command, except that the source is restricted to a local file reference.
So basically, you can do with -put everything that you can do with -copyFromLocal, but not vice versa.
Similarly,
-copyToLocal is similar to get command, except that the destination is restricted to a local file reference.
Hence, you can use get instead of -copyToLocal, but not the other way round.
Reference: Hadoop's documentation.
Update: For the latest as of Oct 2015, please see this answer below.
Let's take an example:
If your HDFS contains the path: /tmp/dir/abc.txt
If your local disk also contains this path, then the HDFS API won't know which one you mean unless you specify a scheme like file:// or hdfs://. Maybe it picks the path you did not want to copy.
Therefore you have -copyFromLocal, which prevents you from accidentally copying the wrong file by limiting the parameter you give to the local filesystem.
Put is for more advanced users who know which scheme to put in front.
It is always a bit confusing to new Hadoop users which filesystem they are currently in and where their files actually are.
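The same ambiguity exists in the API: the scheme on the Path decides which filesystem you end up talking to. A small sketch, reusing the hypothetical path above (the NameNode address is an assumption):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

val conf = new Configuration()

// same relative path, two different filesystems, selected by the scheme
val onHdfs  = new Path("hdfs://localhost:8020/tmp/dir/abc.txt").getFileSystem(conf)
val onLocal = new Path("file:///tmp/dir/abc.txt").getFileSystem(conf)

println(onHdfs.getUri)   // hdfs://localhost:8020
println(onLocal.getUri)  // file:///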
Despite what is claimed by the documentation, as of now (Oct. 2015), both -copyFromLocal and -put are the same.
From the online help:
[cloudera#quickstart ~]$ hdfs dfs -help copyFromLocal
-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst> :
Identical to the -put command.
And this is confirmed by looking at the sources, where you can see that the CopyFromLocal class extends the Put class, but without adding any new behavior:
public static class CopyFromLocal extends Put {
public static final String NAME = "copyFromLocal";
public static final String USAGE = Put.USAGE;
public static final String DESCRIPTION = "Identical to the -put command.";
}
public static class CopyToLocal extends Get {
public static final String NAME = "copyToLocal";
public static final String USAGE = Get.USAGE;
public static final String DESCRIPTION = "Identical to the -get command.";
}
As you might notice, it is exactly the same for get/copyToLocal.
Both are the same, except that -copyFromLocal is restricted to copying from the local filesystem, while -put can take its source from anywhere (another HDFS, the local filesystem, ...).
They're the same. This can be seen by printing usage for hdfs (or hadoop) on a command-line:
$ hadoop fs -help
# Usage: hadoop fs [generic options]
# [ . . . ]
# -copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst> :
# Identical to the -put command.
# -copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst> :
# Identical to the -get command.
Same for hdfs (the hadoop command specific for HDFS filesystems):
$ hdfs dfs -help
# [ . . . ]
# -copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst> :
# Identical to the -put command.
# -copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst> :
# Identical to the -get command.
Both -put and -copyFromLocal work exactly the same way. You cannot use the -put command to copy files from one HDFS directory to another. Let's see this with an example: say your root has two directories, named 'test1' and 'test2'. If 'test1' contains a file 'customer.txt' and you try copying it to the test2 directory:
$ hadoop fs -put /test1/customer.txt /test2
It will result in a 'no such file or directory' error, since put looks for the file in the local file system and not in HDFS.
Both commands are meant to copy files (or directories) from the local file system to HDFS only.
Is there an HDFS API that can copy an entire local directory to the HDFS? I found an API for copying files but is there one for directories?
Use the Hadoop FS shell. Specifically:
$ hadoop fs -copyFromLocal /path/to/local hdfs:///path/to/hdfs
If you want to do it programmatically, create two FileSystems (one local and one HDFS) and use the FileUtil class, as in the sketch below.
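A minimal sketch of that approach (the NameNode address and both paths are assumptions):

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

val conf = new Configuration()
val localFs = FileSystem.getLocal(conf)
val hdfs = FileSystem.get(new URI("hdfs://localhost:8020"), conf)   // assumed NameNode address

// FileUtil.copy recurses into directories; deleteSource = false keeps the local copy
FileUtil.copy(localFs, new Path("/path/to/local"), hdfs, new Path("/path/to/hdfs"), false, conf)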
I tried copying from the directory using
/hadoop/core/bin/hadoop fs -copyFromLocal /home/grad04/lopez/TPCDSkew/ /export/hadoop1/lopez/Join/TPCDSkew
It gave me an error saying "Target is a directory". I then modified it to
/hadoop/core/bin/hadoop fs -copyFromLocal /home/grad04/lopez/TPCDSkew/*.* /export/hadoop1/lopez/Join/TPCDSkew
and it works.
In Hadoop version:
Hadoop 2.4.0.2.1.1.0-390
(And probably later; I have only tested this specific version as it is the one I have)
You can copy entire directories recursively without any special notation using copyFromLocal, e.g.:
hadoop fs -copyFromLocal /path/on/disk /path/on/hdfs
which works even when /path/on/disk is a directory containing subdirectories and files.
You can also use the put command:
$ hadoop fs -put /local/path hdfs:/path
For programmers, you can also use copyFromLocalFile. Here is an example:
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
val hdfsConfig = new Configuration
val hdfsURI = "hdfs://127.0.0.1:9000/hdfsData"
val hdfs = FileSystem.get(new URI(hdfsURI), hdfsConfig)
val oriPath = new Path("#your_localpath/customer.csv")
val targetFile = new Path("hdfs://your_hdfspath/customer.csv")
hdfs.copyFromLocalFile(oriPath, targetFile)