I have an FTP server (F [ftp]), a Linux box (S [standalone]) and a Hadoop cluster (C [cluster]). The current file flow is F->S->C. I am trying to improve performance by skipping S.
The current flow is:
wget ftp://user:password@ftpserver/absolute_path_to_file
hadoop fs -copyFromLocal path_to_file path_in_hdfs
I tried:
hadoop fs -cp ftp://user:password@ftpserver/absolute_path_to_file path_in_hdfs
and:
hadoop distcp ftp://user:password@ftpserver/absolute_path_to_file path_in_hdfs
Both hang. The distcp one, being a job, is killed by a timeout. The logs (hadoop job -logs) only said it was killed by the timeout. I tried to wget from the FTP server on one of the nodes of C and it worked. What could be the reason, and any hint on how to figure it out?
Pipe it through stdin:
wget -O - ftp://user:password@ftpserver/absolute_path_to_file | hadoop fs -put - path_in_hdfs
The -O - tells wget to write the download to stdout instead of a file, and the single - tells hadoop fs -put to read from stdin.
hadoop fs -cp ftp://user:password@ftpserver.com/absolute_path_to_file path_in_hdfs
This cannot be used, as the source file is expected to be a file in the local file system; it does not take into account the scheme you are trying to pass. Refer to the javadoc: FileSystem.
distcp is only for large intra- or inter-cluster copying (cluster here means a Hadoop cluster, i.e. HDFS). Again, it cannot get data from FTP. The two-step process is still your best bet, or you could write a program to read from FTP and write to HDFS, along the lines of the sketch below.
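A rough sketch of such a program, assuming Apache Commons Net (FTPClient) and the Hadoop client libraries are on the classpath; the host, credentials, NameNode address and paths are placeholders borrowed from the question:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FtpToHdfs {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust host, credentials and paths to your environment
        String ftpHost = "ftpserver";
        String ftpUser = "user";
        String ftpPass = "password";
        String remoteFile = "/absolute_path_to_file";
        String hdfsTarget = "hdfs://namenode:8020/path_in_hdfs";

        FTPClient ftp = new FTPClient();
        ftp.connect(ftpHost);
        ftp.login(ftpUser, ftpPass);
        ftp.enterLocalPassiveMode();              // passive mode usually behaves better through firewalls
        ftp.setFileType(FTP.BINARY_FILE_TYPE);

        FileSystem fs = FileSystem.get(URI.create(hdfsTarget), new Configuration());

        try (InputStream in = ftp.retrieveFileStream(remoteFile);
             OutputStream out = fs.create(new Path(hdfsTarget))) {
            // Stream the file straight from FTP into HDFS, no local copy
            IOUtils.copyBytes(in, out, 4096);
        }
        ftp.completePendingCommand();             // finalize the FTP transfer
        ftp.logout();
        ftp.disconnect();
    }
}
The data is streamed straight from the FTP connection into HDFS, so nothing is staged on a local disk, which was the point of skipping S.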
Apologies beforehand if this turns out to be a silly question; I am new to the Hadoop environment.
I have two hadoop clusters my-prod-cluster and my-bcp-cluster.
Both are accessible over the same network.
Is there any way to configure my clusters in such a way that when I am in BCP mode, all my queries to my-prod-cluster get routed to my-bcp-cluster (on the basis of some config parameter or environment variable)?
So when flag=prod
hadoop fs -ls /my-prod-cluster/mydir translates to hadoop fs -ls /my-prod-cluster/mydir
and fetches the data in /my-prod-cluster/mydir
when the flag=bcp
hadoop fs -ls /my-prod-cluster/mydir translates to hadoop fs -ls /my-bcp-cluster/mydir
and fetches data from /my-bcp-cluster/mydir
I am using the MapR flavour of Hadoop (provided by HP), version 6.1, in case that matters.
You could easily make a shell wrapper script that prepends the NameNode address to each query
For example, a fully-qualified command would look like this
hdfs dfs -ls hdfs://my-prod-cluster.domain.com/path/to/mydir
So, refactoring that, you could have a script like
#!/bin/sh
if [ "$1" = "prod" ]; then
  NAMENODE=hdfs://my-prod-cluster.domain.com
fi
# TODO: error handling and more clusters
TARGET_PATH=$2  # use a name other than PATH, since overwriting PATH breaks command lookup
hdfs dfs -ls "${NAMENODE}${TARGET_PATH}"
Then execute something like my-hdfs-ls prod /mydir
If you need something more complex than that, like Kerberos tickets and such, then creating a separate HADOOP_CONF_DIR per cluster, each with its own core-site.xml and hdfs-site.xml, would be recommended.
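If you ever need the same prod/bcp switch from Java code rather than the shell, here is a rough sketch of the per-cluster configuration idea, assuming hypothetical config directories /etc/hadoop/conf.prod and /etc/hadoop/conf.bcp that each contain that cluster's core-site.xml and hdfs-site.xml:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClusterLs {
    public static void main(String[] args) throws Exception {
        // args[0] is the flag ("prod" or "bcp"), args[1] is the directory to list
        String confDir = "/etc/hadoop/conf." + args[0];   // hypothetical layout

        Configuration conf = new Configuration();
        // Resources added later override earlier ones, so the chosen cluster's settings win
        conf.addResource(new Path("file://" + confDir + "/core-site.xml"));
        conf.addResource(new Path("file://" + confDir + "/hdfs-site.xml"));

        FileSystem fs = FileSystem.get(conf);             // fs.defaultFS comes from the chosen core-site.xml
        for (FileStatus status : fs.listStatus(new Path(args[1]))) {
            System.out.println(status.getPath());
        }
    }
}
Usage would be something like java ClusterLs bcp /mydir, mirroring the wrapper script above.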
I have a remote server and an authenticated Hadoop environment.
I want to copy a file from the remote server to the Hadoop machine, into HDFS.
Please advise an efficient approach/HDFS command to copy files from the remote server to HDFS.
Any example will be helpful.
The ordinary way to copy a file from the remote server to the server itself is:
scp -rp file remote_server:/tmp
but this approach does not support copying directly to HDFS.
You can try this:
ssh remote-server "hadoop fs -put - /tmp/file" < file
Here I assume the remote server is not in the same network as the Hadoop nodes. If that is the case, maybe you can scp from the remote machine to a Hadoop node's local file system and then use the -put or -copyFromLocal command to move the file to HDFS.
example: hadoop fs -put file-name hdfs://namenode-uri/path-to-hdfs
What is the best approach to move files from one Linux box to HDFS: should I use Flume or SSH?
SSH Command:
cat kali.txt | ssh user@hadoopdatanode.com "hdfs dfs -put - /data/kali.txt"
The only problem with SSH is that I need to enter the password every time; I need to check how to authenticate without typing it.
Can Flume move files straight to HDFS from one server?
Maybe you can set up passwordless SSH, then transfer files without entering a password.
Maybe you can create a script, in Python for example, which does the job for you.
You could install the Hadoop client on the Linux box that has the files. Then you could "hdfs dfs -put" your data directly from that box to the Hadoop cluster.
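If you would rather do that from code running on the same box, here is a minimal sketch of the programmatic equivalent of -put; the NameNode URI is a placeholder and the paths are the ones from the example above:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutToHdfs {
    public static void main(String[] args) throws Exception {
        // "hdfs://namenode-uri" is a placeholder; point it at your cluster
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-uri"), new Configuration());
        // Programmatic equivalent of: hdfs dfs -put kali.txt /data/kali.txt
        fs.copyFromLocalFile(new Path("kali.txt"), new Path("/data/kali.txt"));
        fs.close();
    }
}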
Is it possible to store the output of the hadoop dfs -getmerge command to another machine?
The reason is that there is not enough space on my local machine. The job output is 100GB and my local storage is 60GB.
Another possible reason could be that I want to process the output in another program locally, in another machine and I don't want to transfer it twice (HDFS-> local FS -> remote machine). I just want (HDFS -> remote machine).
I am looking for something similar to how scp works, like:
hadoop dfs -getmerge /user/hduser/Job-output user@someIP:/home/user/
Alternatively, I would also like to get the HDFS data from a remote host to my local machine.
Could Unix pipelines be used on this occasion?
For those who are not familiar with hadoop, I am just looking for a way to replace a local dir parameter (/user/hduser/Job-output) in this command with a directory on a remote machine.
This will do exactly what you need:
hadoop fs -cat /user/hduser/Job-output/* | ssh user@remotehost.com "cat >mergedOutput.txt"
fs -cat will read all files in sequence and output them to stdout.
ssh will pass them to a file on the remote machine (note that scp will not accept stdin as input).
Basically, what I want is to use the Hadoop Java API to call a remote Hadoop cluster from my local machine. I want the Hadoop cluster to execute the equivalent of hadoop fs -text and return the result to me.
It should be roughly equivalent to "ssh user@remote 'hadoop fs -text .'"
First of all, if all you want is exactly what hadoop fs -text gives you, then you can certainly just install the hadoop client on your local machine, and run it there, being sure to specify the full path:
hadoop fs -text hdfs://remote.namenode.host:9000/my/file
But if you do have a reason to do it from Java, the basic answer is something like this:
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
Configuration conf = new Configuration();
Path p = new Path("hdfs://remote.namenode.host:9000/foo/bar");
FileSystem fs = p.getFileSystem(conf);  // resolves an HDFS client for the hdfs:// scheme in the path
InputStream in = fs.open(p);
You can then read from that input stream however you like: copy it to stdout or whatever.
Note that fs -text is a little bit more clever than just raw copying. It detects gzipped files and sequence files and "decodes" them into text. This is pretty tricky; you can check out the source code to see how it's done internally.
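If you want to approximate the gzip part of that yourself, here is a rough sketch using Hadoop's CompressionCodecFactory (sequence files would additionally need SequenceFile.Reader and are left out; the path argument is whatever hdfs:// file you want to dump):
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class TextLikeCat {
    public static void main(String[] args) throws Exception {
        Path p = new Path(args[0]);                       // e.g. hdfs://remote.namenode.host:9000/my/file.gz
        Configuration conf = new Configuration();
        FileSystem fs = p.getFileSystem(conf);

        // Pick a codec from the file extension (.gz, .bz2, ...), similar to what fs -text does
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        CompressionCodec codec = factory.getCodec(p);

        InputStream in = (codec == null)
                ? fs.open(p)                              // plain file: read as-is
                : codec.createInputStream(fs.open(p));    // compressed file: decode on the fly
        IOUtils.copyBytes(in, System.out, 4096, true);    // copy to stdout and close the stream
    }
}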