How can I connect to remote Linux nodes on which Hadoop is installed, using an Apache NiFi instance installed on my local Windows box? - hadoop

I have installed Apache NiFi 1.1.1 on my local Windows system. How can I connect to remote Linux nodes on which Hadoop is installed, using the Apache NiFi instance installed on my local Windows box?
Also, how can I perform data migration on those remote Linux Hadoop nodes using this local NiFi instance?
I have enabled Kerberos on this remote Hadoop cluster.

The "Unsupported major.minor version" is because Apache NiFi 1.x requires Java 8, and you tried to start it with a Java 7 JVM. You could install a Java 8 JDK just for NiFi to use, and leave all the Hadoop stuff using Java 7, and you can set NiFi's JAVA_HOME in bin/nifi-env.sh:
export JAVA_HOME=/path/to/jdk1.8.0/
If you are trying to connect NiFi on your local Windows system to remote Hadoop nodes, you will need the core-site.xml and hdfs-site.xml from your Hadoop cluster, and since you have Kerberos enabled you will also need the krb5.conf file from one of your Hadoop servers (/etc/krb5.conf).
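A minimal sketch of that wiring, assuming a Unix-like shell for the copy commands (adjust paths for Windows); the hostnames, directories, principal, and keytab below are placeholders, not values from the original question:

```
# Copy the client configs and Kerberos config from a cluster node (placeholders)
mkdir -p /opt/nifi/hadoop-conf
scp user@hadoop-node:/etc/hadoop/conf/core-site.xml /opt/nifi/hadoop-conf/
scp user@hadoop-node:/etc/hadoop/conf/hdfs-site.xml /opt/nifi/hadoop-conf/
scp user@hadoop-node:/etc/krb5.conf                 /opt/nifi/conf/krb5.conf

# In conf/nifi.properties, point NiFi at the copied krb5.conf:
#   nifi.kerberos.krb5.file=/opt/nifi/conf/krb5.conf
# On the HDFS processors (PutHDFS/ListHDFS), set "Hadoop Configuration Resources"
# to the two *-site.xml files above, and fill in the Kerberos Principal and
# Kerberos Keytab properties for an account that can access the cluster.
```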

Related

Configure hadoop-client to connect to hadoop in other machine/server

On server A I have Hadoop and Python scripts for performing tasks on Hadoop.
On server B I have Hive/Hadoop.
Is it possible to configure hadoop-client on server A so that it connects to Hadoop on server B?
It's not clear what Python library you are using, but assuming PySpark, you can copy or configure the HADOOP_CONF_DIR on your client machine, and it can communicate with any external Hadoop system.
At the very least, you'll need to configure a core-site.xml to communicate with HDFS and a hive-site.xml to communicate with Hive.
If you are using the PyHive library, you just connect directly to HiveServer2, e.g. user@hiveserver2:10000 (the default HiveServer2 port).
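A minimal sketch of the client-side setup on server A, assuming passwordless SSH to server B and the usual /etc/hadoop/conf layout (hostname and paths are placeholders):

```
# Pull the cluster's client configs from server B
mkdir -p ~/serverB-conf
scp serverB:/etc/hadoop/conf/core-site.xml ~/serverB-conf/
scp serverB:/etc/hadoop/conf/hive-site.xml ~/serverB-conf/

# Point the Hadoop client (and PySpark) at those configs
export HADOOP_CONF_DIR=~/serverB-conf

# Quick check that HDFS on server B is reachable
hadoop fs -ls /
```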

How to check the hadoop distribution used in my cluster?

How can I know whether my cluster has been set up using Hortonworks, Cloudera, or a plain installation of Hadoop components?
Also, how can I find the port numbers of the various services?
It is difficult to identify the Hadoop distribution from port numbers, since the Apache, Hortonworks, and Cloudera distros use different port numbers.
Another option is to check for cluster management agents: Cloudera Manager's agent start-up script is /etc/init.d/cloudera-scm-agent, Hortonworks' Ambari agent start-up script is /etc/init.d/ambari-agent, and vanilla Apache Hadoop will not have any such agents on the server.
Another option is to check the Hadoop classpath; the command below prints it.
`hadoop classpath`
Most Hadoop distributions include the distro name in the classpath. If the classpath doesn't contain any of the keywords below, the setup is a plain Apache installation (see the sketch after this list).
hdp - (Hortonworks)
cdh - (Cloudera)
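As a quick check, something like this (a sketch, assuming a POSIX shell on a cluster node) greps the classpath for those keywords:

```
# List classpath entries and look for distribution keywords;
# no output suggests a plain Apache installation
hadoop classpath | tr ':' '\n' | grep -oiE 'cdh|hdp' | sort -u
```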
The simplest way is to run the hadoop version command; in its output you will see which version of Hadoop you have and also which distribution and distribution version you are running. If you find words like cdh or hdp, cdh stands for Cloudera and hdp for Hortonworks.
For example, on a Cloudera cluster the first line of the hadoop version output shows the Hadoop version followed by the Hadoop distribution and its version.
Hope this will help.
The hdfs version command will also give you the version of Hadoop and its distribution.
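For example (a sketch; the exact version string depends on your cluster), the first line printed by either command carries the distribution suffix if there is one:

```
# Print the Hadoop version; a cdh or hdp suffix in the first line indicates
# a Cloudera or Hortonworks build respectively
hadoop version | head -1
hdfs version | head -1
```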

How to use remote hadoop cluster

I have a Hadoop cluster deployed, and the client MapReduce program is running on another machine. How can I use that cluster?
If you have your job jars on a client machine, install the hadoop-client packages on that machine and put the cluster's configuration files in its conf folder, so that you can submit your jobs from the client machine to the remote cluster.
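A minimal sketch of that setup, assuming an RPM-based client machine; the jar name, main class, and paths are hypothetical:

```
# Install the Hadoop client packages (package manager varies by OS; yum is assumed here)
sudo yum install -y hadoop-client

# Put the cluster's *-site.xml files in the client's conf directory and point to it
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Submit a job from the client machine into the remote cluster
# (my-job.jar, com.example.MyJob and the input/output paths are placeholders)
hadoop jar my-job.jar com.example.MyJob /input /output
```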

Setting up Hadoop Client on Mac OS X

Currently, I have a 3-node cluster running CDH 5.0 using MRv1. I am trying to figure out how to set up Hadoop on my Mac so I can submit jobs to the cluster. According to "Managing Hadoop API Dependencies in CDH 5", you just need the files in /usr/lib/hadoop/client-0.20/*. Do I need the following files too? Does Cloudera have a hadoop-client tarball?
- core-site.xml
- hdfs-site.xml
- mapred-site.xml
Yes, I think you can make use of the Cloudera tarball for setting up a Hadoop client; it can be downloaded from the following path. The configuration files are available under the etc/hadoop/ directory of the extracted tarball; you just need to modify those files according to your environment.
http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.2.0-cdh5.0.0-beta-2.tar.gz
If the above link doesn't match your version, use the following link to see the available Hadoop versions:
http://archive-primary.cloudera.com/cdh5/cdh/5/
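A sketch of the client setup from that tarball; the install directory, extracted folder name, and NameNode address are placeholders:

```
# Extract the CDH tarball and put its bin/ on the PATH
tar -xzf hadoop-2.2.0-cdh5.0.0-beta-2.tar.gz -C /opt
export HADOOP_HOME=/opt/hadoop-2.2.0-cdh5.0.0-beta-2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

# Edit $HADOOP_CONF_DIR/core-site.xml so fs.defaultFS points at your cluster,
# e.g. hdfs://namenode.example.com:8020 (hostname and port are placeholders)
hadoop fs -ls /    # quick connectivity check
```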

How to connect mac to hadoop/hdfs cluster

I have CDH running in a cluster and I have SSH access to the machines. I need to connect my Mac to the cluster, so that if I run hadoop fs -ls, it shows me the contents of the cluster.
I have configured HADOOP_CONF to point to the cluster's configuration. I am running CDH4 in my cluster. Am I missing something here? Is it possible to connect?
Is there some SSH key setup that I need to do?
There are a few things you will need to ensure to do this (a minimal sketch follows the list):
You need to set your HADOOP_CONF_DIR environment variable to point to a directory that carries config XMLs that point to your cluster.
Your Mac should be able to directly access the hosts that form your cluster (all of them). This can be done via VPN, for example - if the cluster is secured from external networks.
Your Mac should carry the same version of Hadoop that the cluster runs.
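A minimal sketch, assuming the cluster's client configs have been copied to a local directory (the directory name is a placeholder) and a matching Hadoop client is already on the PATH:

```
# Directory holding core-site.xml, hdfs-site.xml (and mapred-site.xml) copied from the cluster
export HADOOP_CONF_DIR=~/cluster-conf

# If this lists the cluster's HDFS root, the client is wired up correctly
hadoop fs -ls /
```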
