Decompress .deflate files as text in HDFS and copy result to local - hadoop

After running a Sqoop job I got files with the .deflate extension (compression is enabled by default). I know that I can show the file content using the following command:
hadoop fs -text <file>
How can I copy this result to my local folder?

Just redirect the output to a local file:
hadoop fs -text hdfs_path > local_file.txt
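For example, a minimal sketch assuming the Sqoop output sits in a hypothetical directory /user/me/sqoop-output/ with the usual part files:
# decode every .deflate part file and write the concatenated plain text
# to a single local file
hadoop fs -text /user/me/sqoop-output/part-m-* > /tmp/sqoop-output.txt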

Related

Hadoop error when outputting the grep results to a new file in a different directory

I'm trying to read the contents of a few files, use grep to find the lines matching my search query, and then output the results into a folder in another directory. I get an error "No such file or directory". I have created the folder structure and the text file.
hadoop fs -cat /Final_Dataset/c*.txt | grep 2015-01-* > /energydata/2015/01/01.txt
ERROR:
-bash: /energydata/2015/01/01.txt: No such file or directory
> /energydata/2015/01/01.txt means that the output is being redirected to a local file. hadoop fs -cat sends output to your local machine, and at that point you're no longer operating within Hadoop. grep simply acts on a stream of data; it doesn't care (or know) where it came from.
You need to make sure that /energydata/2015/01/ exists locally before you run this command. You can create it with mkdir -p /energydata/2015/01/.
If you're looking to pull certain records from a file on HDFS and then re-write the new file to HDFS, then I'd suggest not cat-ing the file and instead keeping the processing entirely on the cluster, by using something like Spark or Hive to transform the data efficiently. Or, failing that, just do a hadoop dfs -put <local_path> /energydata/2015/01/01.txt.
The following CLI command worked:
hadoop fs -cat /FinalDataset/c*.txt | grep 2015-01-* | hadoop fs -put - /energydata/2015/01/output.txt
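Note that the unquoted pattern 2015-01-* may also be glob-expanded by the local shell before grep sees it. A minimal sketch of the original local-file approach, assuming the intent is to keep lines containing 2015-01:
# the target directory is a local path, so create it locally first
mkdir -p /energydata/2015/01/
# quote the pattern so the shell doesn't expand it; grep keeps lines
# containing "2015-01" and the redirect writes to the local file
hadoop fs -cat /Final_Dataset/c*.txt | grep '2015-01' > /energydata/2015/01/01.txt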

How to unzip a split zip file in Hadoop

I have a split zip file (created by WinZip on Windows), which I then FTP'd to the Hadoop server.
Somehow I can't unzip it with something like the command below.
The files look like this:
file.z01, file.z02, file.z03, ..., file.zip
Then I run the command below:
hadoop fs -cat /tmp/Links.txt.gz | gzip -d | hadoop fs -put - /tmp/unzipped/Links.txt
Then this error comes up:
cat: Unable to write to output stream
What I expect is to unzip those split files into a particular Hadoop folder.
Unclear how Links.txt.gz is related to your .zip part files...
Hadoop doesn't really understand the ZIP format (especially split archives), and gzip -d wouldn't work on .zip files anyway.
Neither zip nor gzip is splittable in Hadoop processing (read: "able to be computed in parallel"), so since WinZip supports the BZ2 format, I suggest you switch to that. I also don't see a need to create split files in Windows unless it's to upload the file faster...
Sidenote: hadoop fs -cat /input | <anything> | hadoop fs -put - /output is not doing the work "in Hadoop"... You are copying the raw text of the file to your local machine, doing an operation locally, then optionally streaming it back to HDFS.
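If the goal is simply to get the archive's contents into HDFS, one option is to reassemble and extract the archive on a local/edge node and upload the result. A minimal sketch, assuming Info-ZIP's zip and unzip are installed locally and the WinZip parts (file.z01, file.z02, ..., file.zip) sit in the current directory; /tmp/unzipped/ is the target folder from the question:
# rewrite the split archive as a single, unsplit zip file
zip -s 0 file.zip --out file-joined.zip
# extract the joined archive locally
unzip file-joined.zip -d extracted/
# upload the extracted files to the target HDFS folder
hadoop fs -mkdir -p /tmp/unzipped
hadoop fs -put extracted/* /tmp/unzipped/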

getmerge command in hadoop datacopy

My aim is to read all the files that start with "trans" in a directory, merge them into a single file, and load that single file into an HDFS location.
My source directory is /user/cloudera/inputfiles/
Assume that inside the above directory there are lots of files, but I only need the ones that start with "trans".
My destination directory is /user/cloudera/transfiles/
So I tried the command below:
hadoop dfs -getmerge /user/cloudera/inputfiles/trans* /user/cloudera/transfiles/records.txt
but the above command is not working.
If I try the command below, it works:
hadoop dfs -getmerge /user/cloudera/inputfiles /user/cloudera/transfiles/records.txt
Any suggestion on how I can merge some files from an HDFS location and store the merged single file in another HDFS location?
Below is the usage of the getmerge command:
Usage: hdfs dfs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input and
concatenates files in src into the destination local file.
Optionally addnl can be set to enable adding a newline character at the
end of each file.
It expects a directory as the first parameter, and the destination has to be a local file, so getmerge cannot write the merged result directly to another HDFS location.
You can try the cat command like this:
hadoop dfs -cat /user/cloudera/inputfiles/trans* > /<local_fs_dir>/records.txt
hadoop dfs -copyFromLocal /<local_fs_dir>/records.txt /user/cloudera/transfiles/records.txt
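If the local intermediate copy isn't wanted, the same cat output can be streamed straight back into HDFS, mirroring the cat | put pattern used elsewhere on this page; a minimal sketch with the asker's paths, assuming the trans* files are plain text:
# concatenate the matching files and pipe the stream directly into a new
# HDFS file, without writing anything to the local file system
hadoop fs -cat /user/cloudera/inputfiles/trans* | hadoop fs -put - /user/cloudera/transfiles/records.txt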

How to copy file from HDFS to the local file system

How can I copy a file from HDFS to the local file system? There is no physical location of the file, not even a directory. How can I move it to my local machine for further validation? I tried using WinSCP.
bin/hadoop fs -get /hdfs/source/path /localfs/destination/path
bin/hadoop fs -copyToLocal /hdfs/source/path /localfs/destination/path
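For example, a small usage sketch with hypothetical paths, copying one file down and confirming it arrived:
# copy a single HDFS file into the local /tmp directory (paths are hypothetical)
hadoop fs -get /user/hive/warehouse/sample.txt /tmp/sample.txt
# confirm the local copy exists
ls -lh /tmp/sample.txt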
Point your web browser to the HDFS WebUI (namenode_machine:50070), browse to the file you intend to copy, scroll down the page, and click on download the file.
In Hadoop 2.0,
hdfs dfs -copyToLocal <hdfs_input_file_path> <output_path>
where,
hdfs_input_file_path may be obtained from http://<<name_node_ip>>:50070/explorer.html
output_path is the local path where the file is to be copied to.
You may also use get in place of copyToLocal.
In order to copy files from HDFS to the local file system, the following command can be run:
hadoop dfs -copyToLocal <input> <output>
<input>: the HDFS directory path (e.g. /mydata) that you want to copy
<output>: the destination directory path (e.g. ~/Documents)
Update: hadoop dfs is deprecated in Hadoop 3; use
hdfs dfs -copyToLocal <input> <output>
You can accomplish this in both of these ways:
1. hadoop fs -get <HDFS file path> <Local system directory path>
2. hadoop fs -copyToLocal <HDFS file path> <Local system directory path>
Ex:
My file is located at /sourcedata/mydata.txt
I want to copy the file to the local file system at this path: /user/ravi/mydata
hadoop fs -get /sourcedata/mydata.txt /user/ravi/mydata/
If your source "file" is split up among multiple files (maybe as the result of map-reduce) that live in the same directory tree, you can copy that to a local file with:
hadoop fs -getmerge /hdfs/source/dir_root/ local/destination
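For instance, a small sketch with a hypothetical MapReduce output directory; the -nl flag (available in recent Hadoop releases) corresponds to the addnl option mentioned in the getmerge usage earlier and adds a newline after each merged part:
# merge every part file of the job output into one local file,
# separating the parts with a newline
hadoop fs -getmerge -nl /user/me/job-output/ /tmp/job-output-merged.txt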
This worked for me on my VM instance of Ubuntu.
hdfs dfs -copyToLocal [hadoop directory] [local directory]
Remember the name you gave to the file, and instead of using hdfs dfs -put, use get. See below.
hdfs dfs -get /output-fileFolderName-In-hdfs
If you are using Docker, you have to do the following steps:
Copy the file from HDFS to the namenode container: hadoop fs -get output/part-r-00000 /out_text
/out_text will be stored inside the namenode container.
Copy the file from the namenode container to the local disk: docker cp namenode:/out_text output.txt
output.txt will then be in your current working directory.
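Put together, a minimal sketch run from the Docker host, assuming the HDFS client lives in a container named namenode (docker exec is used here so you don't need to open a shell inside the container first):
# run the HDFS fetch inside the namenode container
docker exec namenode hadoop fs -get output/part-r-00000 /out_text
# copy the fetched file out of the container onto the host
docker cp namenode:/out_text output.txt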
For the opposite direction (local file system to HDFS), use put:
bin/hadoop fs -put /localfs/source/path /hdfs/destination/path

How do I use hadoop fs -getmerge to download .deflate files?

I've tried running
hadoop fs -getmerge
on a directory of .deflate files. The result is a compressed file on my local machine.
What is the easiest way to download the entire directory in uncompressed format on to my local machine?
Try this:
hadoop fs -text /some/where/job-output/part-*
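To land the uncompressed output in a single local file instead of on the terminal, redirect it (the local filename is hypothetical):
# -text decodes the .deflate part files; the shell redirect writes the
# concatenated plain text to a local file
hadoop fs -text /some/where/job-output/part-* > /tmp/job-output.txt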
