How to copy a file from a GCS bucket in Dataproc to HDFS using Google Cloud? - hadoop

I uploaded a data file to my project's GCS bucket in Dataproc. Now I want to copy that file to HDFS. How can I do that?

For a single "small" file
You can copy a single file from Google Cloud Storage (GCS) to HDFS using the hdfs copy command. Note that you need to run this from a node within the cluster:
hdfs dfs -cp gs://<bucket>/<object> <hdfs path>
This works because hdfs://<master node> is the default filesystem. You can explicitly specify the scheme and NameNode if desired:
hdfs dfs -cp gs://<bucket>/<object> hdfs://<master node>/<hdfs path>
Note that GCS objects use the gs: scheme. Paths should appear the same as they do when you use gsutil.
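If you'd rather do the same single-file copy from Java code than from the shell, a minimal sketch using the Hadoop FileSystem API is below. It assumes it runs where the GCS connector is on the classpath (as it is on Dataproc cluster nodes); the bucket, object, and HDFS paths are hypothetical placeholders.

import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class GcsToHdfsCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hypothetical bucket/object and HDFS destination -- replace with your own.
        Path src = new Path("gs://my-bucket/data/input.csv");
        Path dst = new Path("/user/myuser/input.csv");

        // Each Path resolves to its own FileSystem implementation:
        // the GCS connector for gs:// and HDFS for the default filesystem.
        FileSystem srcFs = src.getFileSystem(conf);
        FileSystem dstFs = dst.getFileSystem(conf);

        try (InputStream in = srcFs.open(src);
             OutputStream out = dstFs.create(dst, true /* overwrite */)) {
            IOUtils.copyBytes(in, out, conf);
        }
    }
}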
For a "large" file or large directory of files
When you use hdfs dfs, data is piped through your local machine. If you have a large dataset to copy, you will likely want to do this in parallel on the cluster using DistCp:
hadoop distcp gs://<bucket>/<directory> <HDFS target directory>
Consult the DistCp documentation for details.
Consider leaving data on GCS
Finally, consider leaving your data on GCS. Because the GCS connector implements Hadoop's distributed filesystem interface, it can be used as a drop-in replacement for HDFS in most cases. Notable exceptions are when you rely on (most) atomic file/directory operations or want to use a latency-sensitive application like HBase. The Dataproc HDFS migration guide gives a good overview of data migration.
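To illustrate the drop-in point, here is a minimal sketch (hypothetical bucket and directory names) showing that the same FileSystem API call works against a gs:// path and an hdfs:// path, provided the GCS connector is on the classpath, as it is on Dataproc nodes. MapReduce and Spark jobs can likewise be pointed at gs:// input paths directly instead of copying the data into HDFS first.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListGcsLikeHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The same API handles both schemes; only the Path differs.
        // "my-bucket" and the HDFS directory are placeholders.
        for (String uri : new String[] {"gs://my-bucket/data/", "hdfs:///user/myuser/data/"}) {
            Path dir = new Path(uri);
            FileSystem fs = dir.getFileSystem(conf);
            for (FileStatus status : fs.listStatus(dir)) {
                System.out.println(status.getPath() + "\t" + status.getLen());
            }
        }
    }
}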

Related

How to run HDFS Copy commands using Airflow?

How can I execute HDFS copy commands on a Dataproc cluster using Airflow?
After the cluster is created using Airflow, I have to copy a few jar files from Google Storage to a folder on the HDFS master node.
You can execute hdfs commands on a Dataproc cluster using something like this:
gcloud dataproc jobs submit hdfs 'ls /hdfs/path/' --cluster=my-cluster --region=europe-west1
The easiest way is [1] via
gcloud dataproc jobs submit pig --execute 'fs -ls /'
or otherwise [2] as a catch-all for other shell commands.
For a single small file
You can copy a single file from Google Cloud Storage (GCS) to HDFS using the hdfs copy command. Note that you need to run this from a node within the cluster:
hdfs dfs -cp gs://<bucket>/<object> <hdfs path>
This works because hdfs://<master node> is the default filesystem. You can explicitly specify the scheme and NameNode if desired:
hdfs dfs -cp gs://<bucket>/<object> hdfs://<master node>/<hdfs path>
For a large file or large directory of files
When you use hdfs dfs, data is piped through your local machine. If you have a large dataset to copy, you will likely want to do this in parallel on the cluster using DistCp:
hadoop distcp gs://<bucket>/<directory> <HDFS target directory>
Consult [3] for details.
[1] https://pig.apache.org/docs/latest/cmds.html#fs
[2] https://pig.apache.org/docs/latest/cmds.html#sh
[3] https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html
I am not sure about your use case for doing this via Airflow, because if it is a one-time setup then I think you can run the commands directly on the Dataproc cluster. But I found some links which might be of some help. As I understand it, you can use BashOperator to run commands.
https://big-data-demystified.ninja/2019/11/04/how-to-ssh-to-a-remote-gcp-machine-and-run-a-command-via-airflow/
Airflow Dataproc operator to run shell scripts

Hadoop on Windows: how to add D:\folder1 and E:\folder1 to HDFS?

hadoop fs -put popularNames.txt /user/hadoop/dir1/popularNames.txt
My folders are huge, about 3 TB.
I don't want to copy the folders; I want to point HDFS at their existing location. How can I do that?
HDFS: Hadoop Distributed File System.
You can't add a link to point to a location, because the data must be present in HDFS (not on the local filesystem). The whole point of using Hadoop is distributed computation, which is only possible when your data is distributed across a cluster.
hadoop fs -put has to be used to move the files from your local filesystem into HDFS in order to use the Hadoop framework.
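If you prefer to do the upload from code rather than with hadoop fs -put, a minimal sketch using FileSystem.copyFromLocalFile is shown below; the drive letters come from the question, and the HDFS target directories are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutLocalFoldersIntoHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);  // the configured default filesystem (HDFS)

        // Equivalent to: hadoop fs -put D:\folder1 /user/hadoop/dir1 (and likewise for E:).
        // delSrc=false keeps the local copies; overwrite=true replaces existing files.
        hdfs.copyFromLocalFile(false, true, new Path("D:/folder1"), new Path("/user/hadoop/dir1"));
        hdfs.copyFromLocalFile(false, true, new Path("E:/folder1"), new Path("/user/hadoop/dir2"));
    }
}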

What is the equivalent of the Distributed Cache of the Hadoop Distributed File System in the Google File System?

I have deployed a 6-node Hadoop cluster on Google Compute Engine.
I am using the Google file system (GFS) instead of the Hadoop Distributed File System (HDFS).
So, I want to access files in GFS in the same way the DistributedCache method accesses files in HDFS.
Please tell me a way to access files this way.
When running Hadoop on Google Compute Engine with the Google Cloud Storage connector for Hadoop as the "default filesystem", the GCS connector is able to be treated exactly the same way HDFS is treated, including for usage in the DistributedCache. So, to access files in Google Cloud Storage, you'd use it exactly the same way you would use HDFS, no need to change anything. For example, if you had deployed your cluster with your GCS connector's CONFIGBUCKET set to foo-bucket, and you had local files you wanted to place in the DistributedCache, you'd do:
# Copies mylib.jar into gs://foo-bucket/myapp/mylib.jar
$ bin/hadoop fs -copyFromLocal mylib.jar /myapp/mylib.jar
And in your Hadoop job:
JobConf job = new JobConf();
// Retrieves gs://foo-bucket/myapp/mylib.jar as a cached file.
DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
If you want to access files in a different bucket than your CONFIGBUCKET, you just need to specify a full path, using gs:// instead of hdfs://:
# Copies mylib.jar into gs://other-bucket/myapp/mylib.jar
$ bin/hadoop fs -copyFromLocal mylib.jar gs://other-bucket/myapp/mylib.jar
and then in Java
JobConf job = new JobConf();
// Retrieves gs://other-bucket/myapp/mylib.jar as a cached file.
DistributedCache.addFileToClassPath(new Path("gs://other-bucket/myapp/mylib.jar"), job);

Explanation of the hadoop file system

Can anyone help me understand the data storage concept of Hadoop?
As I understand it, Hadoop deals with an fsimage and data blocks, and the fsimage and edit log paths are configured in hdfs-site.xml. But what about the data blocks? Can anyone help me with this? I am a little bit confused about where the /user and /tmp directories are actually present in the filesystem.
I used this link to set up a single node hadoop cluster: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
Files are split into blocks and stored in the Hadoop Distributed File System (HDFS). Consult the HDFS module of Yahoo's Hadoop Tutorial for a description of HDFS. The directories stored in HDFS can be viewed by typing the following command into a terminal: hadoop dfs -ls
The NameNode's fsimage keeps track of the filesystem namespace, i.e., which blocks make up which files; the mapping of blocks to DataNodes is rebuilt at runtime from DataNode block reports. In hdfs-site.xml, the property dfs.data.dir (dfs.datanode.data.dir in Hadoop 2 and later) defines where each DataNode stores the underlying block files on its local filesystem. This can be a comma-separated list of directories (think multiple disks).
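As a small illustration, the snippet below (run with your cluster's configuration directory on the classpath) prints the configured block-storage directories. The property defaults to a dfs/data directory under hadoop.tmp.dir.

import org.apache.hadoop.conf.Configuration;

public class ShowDataDirs {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource("hdfs-site.xml");  // picked up from the classpath, e.g. $HADOOP_CONF_DIR

        System.out.println("dfs.datanode.data.dir = " + conf.get("dfs.datanode.data.dir")); // Hadoop 2+ name
        System.out.println("dfs.data.dir          = " + conf.get("dfs.data.dir"));          // Hadoop 1.x name
        System.out.println("hadoop.tmp.dir        = " + conf.get("hadoop.tmp.dir"));
    }
}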

Copying directories in HDFS using the JAVA API

How do I copy a directory in HDFS to another directory in HDFS?
I found the copyFromLocalFile functions that copy from the local FS to HDFS, but I want both the source and the destination to be in HDFS.
Thanks
Use the distcp command.
The canonical use case for distcp is for transferring data between two HDFS clusters.
If the clusters are running identical versions of Hadoop, the hdfs scheme is appropriate:
% hadoop distcp hdfs://namenode1/foo hdfs://namenode2/bar
If you want to do it through Java code, see class org.apache.hadoop.tools.DistCp and call it appropriately.
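If you go the programmatic route, here is a minimal sketch; note that the DistCpOptions API has changed between Hadoop releases (older versions take the source and target paths directly in the DistCpOptions constructor), so this uses the builder form from recent versions, with placeholder paths.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class CopyDirWithDistCp {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Placeholder source and target directories (may be on the same cluster or different ones).
        Path source = new Path("hdfs:///user/myuser/source-dir");
        Path target = new Path("hdfs:///user/myuser/target-dir");

        // Builder API available in recent Hadoop releases (hadoop-distcp must be on the classpath).
        DistCpOptions options = new DistCpOptions.Builder(
                Collections.singletonList(source), target).build();

        // DistCp runs the copy as a parallel MapReduce job.
        DistCp distCp = new DistCp(conf, options);
        distCp.execute();
    }
}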
You can try FileUtil.copy
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileUtil.html
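For modest amounts of data, a single-process copy with FileUtil.copy looks roughly like the sketch below (placeholder paths); unlike DistCp it does not parallelize the copy across the cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyDirWithFileUtil {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);  // the default filesystem (HDFS)

        // Placeholder paths; both source and destination live in HDFS.
        Path src = new Path("/user/myuser/source-dir");
        Path dst = new Path("/user/myuser/target-dir");

        // Copies the directory recursively; deleteSource=false leaves the original in place.
        boolean ok = FileUtil.copy(fs, src, fs, dst, false /* deleteSource */, conf);
        System.out.println("copy succeeded: " + ok);
    }
}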
