Move zip files from one server to HDFS?

What is the best approach to move files from a Linux box to HDFS: should I use Flume or SSH?
SSH command:
cat kali.txt | ssh user@hadoopdatanode.com "hdfs dfs -put - /data/kali.txt"
The only problem with SSH is that I have to enter the password every time; I need to check how to authenticate without typing it.
Can Flume move files straight to HDFS from one server?

Maybe you can set up passwordless SSH, then transfer files without entering a password.
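A minimal sketch of the key-based setup, assuming OpenSSH on both machines (the hostname is the one from the question, paths are the defaults):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa           # generate a key pair without a passphrase
ssh-copy-id user@hadoopdatanode.com                # append the public key to the remote authorized_keys
ssh user@hadoopdatanode.com "hdfs dfs -ls /data"   # should now run without a password prompt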

Maybe you could write a script, in Python for example, that does the job for you.

You could install the Hadoop client on the Linux box that has the files. Then you could "hdfs dfs -put" your data directly from that box to the Hadoop cluster.
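A sketch of the direct put once the client is installed; the NameNode URI below is a placeholder, and with HADOOP_CONF_DIR pointing at a copy of the cluster configuration the -D option can be dropped:
hdfs dfs -D fs.defaultFS=hdfs://namenode.example.com:8020 -put /local/path/kali.zip /data/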

Related

how to copy file from remote server to HDFS

I have a remote server and an authenticated Hadoop environment.
I want to copy a file from the remote server to HDFS on the Hadoop machine.
Please advise an efficient approach/HDFS command to copy files from the remote server to HDFS.
Any example would be helpful.
The ordinary way to copy a file from a remote server to the server itself is
scp -rp file remote_server:/tmp
but this approach does not support copying directly to HDFS.
You can try this:
ssh remote-server "hadoop fs -put - /tmp/file" < file
If the remote server you mention is not on the same network as the Hadoop nodes, you could scp from the remote machine to a Hadoop node's local file system and then use the -put or -copyFromLocal command to move the data into HDFS.
Example: hadoop fs -put file-name hdfs://namenode-uri/path-to-hdfs
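Put together, the two-step route suggested above might look like this sketch (user, hadoopnode and the paths are placeholders):
scp -p /local/path/file-name user@hadoopnode:/tmp/
ssh user@hadoopnode "hadoop fs -copyFromLocal /tmp/file-name hdfs://namenode-uri/path-to-hdfs && rm /tmp/file-name"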

Hadoop fs getmerge to remote server/machine due to low disk space

I have the same question as this other post:
hadoop getmerge to another machine
but the answer does not work for me
To summarize what I want to do: getmerge (or just get) the files from the Hadoop cluster, and NOT copy them to the local machine (due to low or no disk space), but transfer them directly to a remote machine. I have my public key in the remote machine's authorized_keys list, so no password authentication is necessary.
My usual command on the local machine is (which merges and puts the file onto the local server/machine as a gzip file):
hadoop fs -getmerge folderName.on.cluster merged.files.in.that.folder.gz
I tried as in the other post:
hadoop fs -cat folderName.on.cluster/* | ssh user@remotehost.com:/storage | "cat > mergedoutput.txt"
This did not work for me. I get these kinds of errors:
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: Could not resolve hostname user@remotehost.com:/storage /: Name or service not known
and I tried it the other way
ssh user@remotehost.com:/storage "hadoop fs -cat folderName.on.cluster/*" | cat > mergedoutput.txt
Then:
-bash: cat > mergedoutput.txt: command not found
Pseudo-terminal will not be allocated because stdin is not a terminal.
-bash: line 1: syntax error near unexpected token `('
Any help is appreciated. I also don't need to do -getmerge, I could also do -get and then just merge the files once copied over to the remote machine. Another alternative is if there is a way I can run a command on the remote server to directly copy the file from the hadoop cluster server.
Thanks
Figured it out
hadoop fs -cat folderName.on.cluster/* | ssh user@remotehost.com "cd storage; cat > mergedoutput.txt"
This is what works for me. Thanks to @vefthym for the help.
This merges the files in the directory on the Hadoop cluster and writes them to the remote host without copying them to the local host, yay (it's pretty full already). Before I copy the file, I need to change to the directory I want the file in, hence the cd storage; before the cat > mergedoutput.txt.
I'm glad that you found my question useful!
I think your problem is just in the ssh, not in the solution that you describe. It worked perfectly for me. By the way, in the first command, you have an extra '|' character. What do you get if you just type ssh user@remotehost.com? Do you type a name, or an IP? If you type a name, it should exist in the /etc/hosts file.
Based on this post, I guess you are using Cygwin and have some misconfiguration. Apart from the accepted solution, check whether you have installed the openssh Cygwin package, as the second-best answer suggests.

Failed to copy file from FTP to HDFS

I have an FTP server (F [ftp]), a Linux box (S [standalone]) and a Hadoop cluster (C [cluster]). The current file flow is F->S->C. I am trying to improve performance by skipping S.
The current flow is:
wget ftp://user:password@ftpserver/absolute_path_to_file
hadoop fs -copyFromLocal path_to_file path_in_hdfs
I tried:
hadoop fs -cp ftp://user:password@ftpserver/absolute_path_to_file path_in_hdfs
and:
hadoop distcp ftp://user:password@ftpserver/absolute_path_to_file path_in_hdfs
Both hang. The distcp one, being a job, is killed by a timeout. The logs (hadoop job -logs) only say it was killed by the timeout. I tried to wget from the FTP server on one of the C nodes and it worked. What could be the reason, and any hint on how to figure it out?
Pipe it through stdin:
wget -O - ftp://user:password@ftpserver/absolute_path_to_file | hadoop fs -put - path_in_hdfs
The -O - tells wget to write the downloaded file to stdout, and the single - tells hdfs put to read from stdin.
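If you prefer curl, which writes to stdout by default, an equivalent sketch with the same placeholders would be:
curl -s ftp://user:password@ftpserver/absolute_path_to_file | hadoop fs -put - path_in_hdfs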
hadoop fs -cp ftp://user:password@ftpserver.com/absolute_path_to_file path_in_hdfs
This cannot be used: the source is expected to be a file on a file system Hadoop can resolve, and the ftp:// scheme you are trying to pass is not taken into account. Refer to the javadoc: FileSystem
distcp is only for large intra- or inter-cluster copies (to be read as Hadoop clusters, i.e. HDFS). Again, it cannot get data from FTP. The two-step process is still your best bet, or write a program that reads from FTP and writes to HDFS.

hadoop getmerge to another machine

Is it possible to store the output of the hadoop dfs -getmerge command to another machine?
The reason is that there is not enough space on my local machine. The job output is 100GB and my local storage is 60GB.
Another possible reason could be that I want to process the output locally in another program, on another machine, and I don't want to transfer it twice (HDFS -> local FS -> remote machine). I just want (HDFS -> remote machine).
I am looking for something similar to how scp works, like:
hadoop dfs -getmerge /user/hduser/Job-output user@someIP:/home/user/
Alternatively, I would also like to get the HDFS data from a remote host to my local machine.
Could Unix pipelines be used in this case?
For those who are not familiar with hadoop, I am just looking for a way to replace a local dir parameter (/user/hduser/Job-output) in this command with a directory on a remote machine.
This will do exactly what you need:
hadoop fs -cat /user/hduser/Job-output/* | ssh user@remotehost.com "cat >mergedOutput.txt"
fs -cat will read all files in sequence and output them to stdout.
ssh will pipe them into a file on the remote machine (note that scp will not accept stdin as input).
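If bandwidth between the cluster and the remote host is a concern, the same pipeline can be compressed in flight; a sketch, assuming gzip is available on both ends:
hadoop fs -cat /user/hduser/Job-output/* | gzip -c | ssh user@remotehost.com "gunzip -c > mergedOutput.txt"
(Or drop the gunzip -c and redirect to mergedOutput.txt.gz to keep it compressed on the remote side.)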

SFTP file system in hadoop

Do Hadoop version 2.0.0 and CDH4 have an SFTP file system in place? I know Hadoop has support for an FTP FileSystem. Does it have something similar for SFTP? I have seen some patches submitted for the same, though I couldn't make sense of them.
Consider using hadoop distcp.
Check here. That would be something like:
hadoop distcp \
-D fs.sftp.credfile=/user/john/credstore/private/mycreds.prop \
sftp://myHost.ibm.com/home/biadmin/myFile/part1 \
hdfs:///user/john/myfiles
After some research, I have figured out that Hadoop currently doesn't have a FileSystem written for SFTP. Hence, if you wish to read data over an SFTP channel, you have to either write an SFTP FileSystem (which is quite a big deal, extending and overriding lots of classes and methods), patches for which have already been developed though not yet integrated into Hadoop, or else get a customized InputFormat that reads from streams, which again is not implemented in Hadoop.
You need to ensure that core-site.xml has the property fs.sftp.impl set to the value org.apache.hadoop.fs.sftp.SFTPFileSystem.
After this, the Hadoop commands will work. A couple of samples are given below.
ls command
Command on hadoop
hadoop fs -ls /
equivalent for SFTP
hadoop fs -D fs.sftp.user.{hostname}={username} -D fs.sftp.password.{hostname}.{username}={password} -ls sftp://{hostname}:22/
Distcp command
Command on hadoop
hadoop distcp {sourceLocation} {destinationLocation}
equivalent for SFTP
hadoop distcp -D fs.sftp.user.{hostname}={username} -D fs.sftp.password.{hostname}.{username}={password} sftp://{hostname}:22/{sourceLocation} {destinationLocation}
Ensure you replace all the placeholders when trying these commands. I tried them on AWS EMR 5.28.1, which has Hadoop 2.8.5 installed.
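If you would rather not pass these with -D on every command, the same properties used in the commands above can presumably be persisted in core-site.xml (hostname/username/password placeholders as before):
<property>
  <name>fs.sftp.user.{hostname}</name>
  <value>{username}</value>
</property>
<property>
  <name>fs.sftp.password.{hostname}.{username}</name>
  <value>{password}</value>
</property>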
So, hopefully cleaning up these answers a bit into something more digestible: basically, Hadoop/HDFS is capable of supporting SFTP; it's just not enabled by default, nor is it documented very well in core-default.xml.
The key configuration you need to set to enable SFTP support is:
<property>
  <name>fs.sftp.impl</name>
  <value>org.apache.hadoop.fs.sftp.SFTPFileSystem</value>
</property>
Alternatively, you can set it right at the CLI, depending on your command:
hdfs dfs \
-Dfs.sftp.impl=org.apache.hadoop.fs.sftp.SFTPFileSystem \
-Dfs.sftp.keyfile=~/.ssh/java_sftp_testkey.ppk \
-ls sftp://$USER@localhost/tmp/
The biggest requirement is that your SSH keyfile needs to be passphrase-less to work. This can be done via:
cp ~/.ssh/mykeyfile.ppk ~/.ssh/mykeyfile.ppk.orig      # keep a backup of the original key
ssh-keygen -p -P MyPass -N "" -f ~/.ssh/mykeyfile.ppk  # strip the passphrase (here MyPass) in place
mv ~/.ssh/mykeyfile.ppk ~/.ssh/mykeyfile_nopass.ppk    # keep the passphrase-less copy under a new name
mv ~/.ssh/mykeyfile.ppk.orig ~/.ssh/mykeyfile.ppk      # restore the original, protected key
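To sanity-check that the stripped key really is passphrase-less, something like this should work:
ssh-keygen -y -P "" -f ~/.ssh/mykeyfile_nopass.ppk     # prints the public key only if no passphrase is required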
And finally, the biggest (and maybe neatest) use is via distcp, if you need to send/receive a large amount of data to/from an SFTP server. There's an oddity: the SSH keyfile is needed locally to generate the directory listing, as well as on the cluster for the actual workers.
Something like this should work well enough:
cd workdir
ln -s ~/.ssh/java_sftp_testkey.ppk
hadoop distcp \
--files ~/.ssh/java_sftp_testkey.ppk \
-Dfs.sftp.impl=org.apache.hadoop.fs.sftp.SFTPFileSystem \
-Dfs.sftp.keyfile=java_sftp_testkey.ppk \
hdfs:///path/to/source/ \
sftp://user@FQDN/path/to/dest
