Hadoop NameNode URL for WebHDFS

I have a clustered NameNode setup. The NameNodes are configured as active and passive.
When I make a WebHDFS call, the URL to provide is:
http://<HOST>:<PORT>/webhdfs/v1/<PATH>
Since I have two NameNodes, I have two URLs available:
http://<NN1_HOST>:<PORT>/webhdfs/v1/<PATH> - currently active
http://<NN2_HOST>:<PORT>/webhdfs/v1/<PATH> - currently passive
My question is: the NameNodes can fail over at any time, so what value do I provide for HOST? Should I give the service name? Is there a virtual IP, normally configured on the HDP platform, that takes care of the redirection?
Or should I place a load balancer or gateway in front of the NameNodes so that failover is handled without any impact to the calling application?

It's a bug; WebHDFS doesn't work transparently in HA mode.
You have to explicitly point at the active NameNode's URL every time the NameNodes change state.
https://hortonworks.jira.com/browse/BUG-30030
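For example, you can ask each NameNode for its HA state with the hdfs haadmin tool; nn1 and nn2 here are the NameNode IDs from dfs.ha.namenodes.<nameservice> in hdfs-site.xml, so substitute your own:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

Each call prints "active" or "standby", so a wrapper script can pick the right host before building the WebHDFS URL.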

You will get an exception if you're talking to a standby NameNode.
See my answer here: Any command to get active namenode for nameservice in hadoop?

You must determine the active NameNode first, then issue your WebHDFS API request to it. Issuing a WebHDFS API request to a standby NameNode results in an HTTP 403 error.
There is no automatic way to determine the active NameNode through WebHDFS yet. You can query the HA state with the hdfs command-line client (hdfs haadmin -getServiceState), or alternatively loop through the NameNodes, issue a request to the /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus endpoint on each, and parse the output.
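As a sketch (the hostnames are assumptions, and 50070 is the default NameNode HTTP port on Hadoop 2.x; use 9870 on 3.x), the loop could look like this:

for nn in nn1.example.com nn2.example.com; do
  # Ask each NameNode for its HA state via the JMX servlet
  state=$(curl -s "http://$nn:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" \
    | tr -d ' ' | grep -o '"State":"[a-z]*"' | cut -d'"' -f4)
  [ "$state" = "active" ] && active_nn=$nn && break
done
# Issue the WebHDFS call against whichever NameNode reported "active"
curl -s "http://$active_nn:50070/webhdfs/v1/tmp?op=LISTSTATUS"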

Related

Hadoop distcp: what ports are used?

If I want to use distCp on an on-prem hadoop cluster, so it can 'push' data to external cloud storage, what firewall considerations must be made in order to leverage this tool? What ports does the actual transfer of data take place on? Is it via SSH, and/or port 8020? I need to make sure network connectivity is provided for source to destination, but with the least amount of privileges ascribed to it. (i.e., only opening ports that are absolutely needed)
SSH is not used for the actual data transfer; it is only involved if you use it to log into the cluster and start the command.
At a minimum, you would need the RPC and data-transfer ports for the NameNodes and DataNodes, i.e. whatever you've configured for fs.defaultFS, dfs.namenode.rpc-address, and dfs.datanode.address.
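If it helps, you can read the effective values out of the client configuration with hdfs getconf; the ports in the comments are the stock defaults and may differ in your cluster:

hdfs getconf -confKey fs.defaultFS              # e.g. hdfs://namenode:8020
hdfs getconf -confKey dfs.namenode.rpc-address  # NameNode RPC, default 8020
hdfs getconf -confKey dfs.datanode.address      # DataNode data transfer, default 9866 (50010 on Hadoop 2.x)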

NIFI secure 3 node cluster

I am seeing some errors in my NiFi cluster. I have a 3-node secured NiFi cluster, and I am seeing the errors below on two of the nodes:
ERROR [main] org.apache.nifi.web.server.JettyServer Unable to load flow due to:
java.io.IOException: org.apache.nifi.cluster.ConnectionException:
Failed to connect node to cluster due to: java.io.IOException:
Could not begin listening for incoming connections in order to load balance data across the cluster.
Please verify the values of the 'nifi.cluster.load.balance.port' and 'nifi.cluster.load.balance.host'
properties as well as the 'nifi.security.*' properties
See the clustering configuration guide for the list of clustering options you have to configure. For load balancing, you'll need to specify ports that are open in your firewall so that the nodes can communicate. You'll also need to make sure that each host has its node hostname property set, its host ports set, and that there are no firewall restrictions between the nodes and your Apache ZooKeeper cluster.
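As a sketch, the load-balancing entries in nifi.properties on one node might look like this (the hostname and protocol port are placeholders; 6342 is the default load-balance port):

nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-node1.example.com
nifi.cluster.node.protocol.port=11443
nifi.cluster.load.balance.host=nifi-node1.example.com
nifi.cluster.load.balance.port=6342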
If you want to simplify the setup to play around, you can use the information in the clustering configuration section of the admin guide to set up an embedded ZooKeeper node within each NiFi instance. However, I would recommend setting up an external ZooKeeper cluster. A little more work, but ultimately worth it.
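For the embedded route, the admin guide's approach boils down to entries like these on every node (the hostnames are placeholders):

# nifi.properties
nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=nifi-node1.example.com:2181,nifi-node2.example.com:2181,nifi-node3.example.com:2181

# zookeeper.properties (identical server list on each node)
server.1=nifi-node1.example.com:2888:3888
server.2=nifi-node2.example.com:2888:3888
server.3=nifi-node3.example.com:2888:3888

Each node also needs a myid file under its ZooKeeper state directory containing its own server number.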

What is the communication port between Namenode and Datanode in hadoop cluster

I want to know the communication protocol, specifically the port numbers, used by the NameNode and DataNodes in Hadoop.
Say I run the following command on the NameNode:
hdfs dfsadmin -report
It shows the details of live nodes (NameNode and DataNodes), how many DataNodes there are, etc. My question is: how do the NameNode and DataNodes communicate? Via which port? I am actually getting only 1 DataNode with the above command, whereas my cluster has 8 DataNodes, so I am not sure whether some port blocking on the network is causing this. The firewall is disabled on the NameNode and all the DataNodes; I have checked this via the sudo ufw status command, which returned inactive.
From the official Hadoop documentation (link), I found this:
The Communication Protocols
All HDFS communication protocols are layered on top of the TCP/IP
protocol. A client establishes a connection to a configurable TCP port
on the NameNode machine. It talks the ClientProtocol with the
NameNode. The DataNodes talk to the NameNode using the DataNode
Protocol. A Remote Procedure Call (RPC) abstraction wraps both the
Client Protocol and the DataNode Protocol. By design, the NameNode
never initiates any RPCs. Instead, it only responds to RPC requests
issued by DataNodes or clients.
I am using Hadoop 3.1.1 on Ubuntu 16.04.
Any help is highly appreciated. Thanks.
These are all configured in hdfs-site.xml.
For example, by default, dfs.datanode.address=0.0.0.0:9866
If you search that page for "port" or "address", you can generally find what you are looking for: https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
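For instance, the stock Hadoop 3.1.x definitions for the two DataNode server endpoints look like this (override them in hdfs-site.xml if you need different ports):

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:9866</value>  <!-- block data transfer -->
</property>
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:9867</value>  <!-- DataNode IPC -->
</property>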
If that command or the NameNode UI doesn't show the DataNodes, then SSH into the individual nodes, check jps to see whether the DataNode process is running, and check the log files to find out why if it is not.
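A quick way to check a worker node (the NameNode hostname below is a placeholder):

jps | grep DataNode                                    # is the DataNode process up?
nc -zv namenode.example.com 8020                       # can this node reach the NameNode RPC port?
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log   # if it is not running, the log says why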

Can I access the NiFi REST API using localhost instead of the actual node IP address in a NiFi cluster?

For example, I have 3 NiFi nodes in a NiFi cluster. Example addresses of these nodes:
192.168.12.50:8080(primary)
192.168.54.60:8080
192.168.95.70:8080
I know that I can access the NiFi REST API from all NiFi nodes. I have a GetHTTP processor that gets the cluster summary from the REST API, and this processor runs only on the primary node. I set the "URL" property of this processor to 192.168.12.50:8080/nifi-api/controller/cluster.
But if the primary node goes down, a new primary node will be elected, and the new primary node will not be able to reach 192.168.12.50:8080, because that node is down. So I will not be able to get the cluster summary from the REST API.
In this case, can I use "localhost:8080/nifi-api/controller/cluster" instead of "192.168.12.50:8080/nifi-api/controller/cluster" on each node in the NiFi cluster?
It depends on a few things... if you are running securely, then you have certificates generated for each node, specific to its hostname; the host in the web request needs to match the host in the certificate, so you can't use localhost in that case.
It also depends on how NiFi's web server is configured. If nifi.web.http.host or nifi.web.https.host has a specific hostname specified, then the web server is bound only to that hostname and may not accept connections with a different hostname. In a default unsecured setup, if you leave nifi.web.http.host blank, it binds to all interfaces.
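For reference, the unsecured bind-to-all-interfaces case looks like this in nifi.properties:

# a blank host binds the web server to all interfaces
nifi.web.http.host=
nifi.web.http.port=8080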
You may also be able to use the hostname() expression language function to obtain the hostname of the current node, making the URL something like "http://${hostname()}:8080/nifi-api/controller/cluster".

Nifi 1.5.0 Cluster configuration

Does anyone know how to cluster NiFi 1.5.0? I want to use dataflow.mydomain.com, but I get this error when I try to hit the load balancer:
"The request contained an invalid host header [dataflow.mydomain.com] in the request [/nifi/]. Check for request manipulation or third-party intercept."
According to one post that I read, the problem was that the value of nifi.web.http.host has to match the host in the URL.
If that's true, I don't understand how a cluster would be possible.
Thanks!
(I'm using a 3-host setup in AWS; the hosts respond individually if I set nifi.web.http.host to their private IP and access http://[ip]/nifi/, but not if I put a load balancer in front of the cluster.)
It is not really an issue of clustering NiFi, it is an issue of accessing it through a load balancer. A cluster does not imply a load balancer.
In the next version of NiFi there will be a new property (nifi.web.proxy.host) where you can put dataflow.mydomain.com, and it will let the request through.
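Once you're on a version that has it (I believe it shipped in NiFi 1.6.0), the whitelist entry in nifi.properties is a single line; the hostname comes from the question:

nifi.web.proxy.host=dataflow.mydomain.com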
For now, I think you'd have to strip off or rewrite the Host header of each request at your load balancer so that the external value doesn't get passed on to the NiFi nodes, since that is what triggers the rejection: NiFi inspects the headers of the incoming request and sees that the Host header has a value that is not the host of the NiFi node.
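If your load balancer can rewrite headers, the idea is to present a host value NiFi recognizes; here is an illustrative nginx sketch (the backend hostname is a placeholder, and an AWS ELB may not support this kind of rewrite):

server {
    listen 80;
    server_name dataflow.mydomain.com;
    location /nifi {
        proxy_pass http://nifi-node1.example.com:8080;
        # send the backend's own host:port instead of dataflow.mydomain.com
        proxy_set_header Host $proxy_host;
    }
}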
