Flume agent does not contain any valid channels - hadoop

I am new to Flume. I'm trying to pull data from Twitter, but I am not succeeding. (I am using the Cloudera Quickstart VM.)
My conf file looks like this:
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
(I have also set the values taken from my Twitter account: consumerKey, consumerSecret, accessToken, accessTokenSecret, keywords, and the HDFS path.)
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
The command I am using to execute the conf file is:
flume-ng agent --conf conf --conf-file flume.conf -Dflume.root.logger=DEBUG,console -name TwitterAgent
The error I am getting is:
18/06/27 12:17:18 WARN conf.FlumeConfiguration: Agent configuration for 'TwitterAgent' does not contain any valid channels. Marking it as invalid.
18/06/27 12:17:18 WARN conf.FlumeConfiguration: Agent configuration invalid for agent 'TwitterAgent'. It will be removed.
18/06/27 12:17:18 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: []
18/06/27 12:17:18 WARN node.AbstractConfigurationProvider: No configuration found for this host:TwitterAgent
18/06/27 12:17:18 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
Please advise me.

I think you have an issue in your execution command; the error is about finding the configuration file.
The command should be:
flume-ng agent -c conf -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent
You have to specify the path to your configuration file, so try -f conf/flume.conf instead of -f flume.conf.
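Separately, the same warning ("does not contain any valid channels") also shows up when the channel itself is never defined in the file. If MemChannel has no definition of its own in your conf, the missing lines would look roughly like this (a minimal sketch; the capacity numbers are illustrative, not taken from your setup):
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000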

Related

validating Logstash configuration

I am trying to validate my logstash configuration.
Using:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings -t -f /etc/logstash/conf.d
I received the following error:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /tmp/hsperfdata_logstash/-t/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-10-09 14:56:50.240 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-10-09 14:56:50.265 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[INFO ] 2018-10-09 14:56:50.378 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-10-09 14:56:50.380 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-10-09 14:56:51.099 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-10-09 14:56:51.126 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"80207611-d5b8-47dd-b229-23c2ade385ae", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2018-10-09 14:56:51.568 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-10-09 14:56:52.021 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[ERROR] 2018-10-09 14:56:53.586 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] beats - Invalid setting for beats input plugin:

  input {
    beats {
      # This setting must be a path
      # File does not exist or cannot be opened /etc/pki/tls/certs/logstash-forwarder.crt
      ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
      ...
    }
  }

[ERROR] 2018-10-09 14:56:53.588 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] beats - Invalid setting for beats input plugin:

  input {
    beats {
      # This setting must be a path
      # File does not exist or cannot be opened /etc/pki/tls/private/logstash-forwarder.key
      ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      ...
    }
  }

[ERROR] 2018-10-09 14:56:53.644 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>[
  "/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:89:in `config_init'",
  "/usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb:62:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:89:in `plugin'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:112:in `plugin'",
  "(eval):8:in `<eval>'",
  "org/jruby/RubyKernel.java:994:in `eval'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:84:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'",
  "org/jruby/RubyArray.java:1734:in `each'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'",
  "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
I would appreciate any help with this.
Please check whether the logstash.yml file is available in /etc/logstash. If it is, stop the logstash service and kill any processes still running in the background. Save your config file as /etc/logstash/conf.d/your_file.conf. To run the config test, go to the logstash bin directory and run
./logstash -f /etc/logstash/conf.d/your_config_file.conf --config.test_and_exit
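For reference, the same check can also be run with the settings directory passed explicitly to --path.settings (the paths below assume the standard package layout; adjust them and the config file name for your install):
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/your_config_file.conf --config.test_and_exit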

Source local file to HDFS sink by using flume

I am using Flume to move local files (a spooling directory source) to an HDFS sink; below is my conf:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /usr/download/test_data/
a1.sources.r1.basenameHeader = true
a1.sources.r1.basenameHeaderKey = fileName
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://172.16.10.5/user/admin/Data/
a1.sinks.k1.hdfs.filePrefix = %{fileName}
a1.sinks.k1.hdfs.idleTimeout=60
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 5000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
I run this conf file as the user 'flume':
time bin/flume-ng agent -c conf -f conf/hdfs_sink.conf -n a1 -Dflume.root.logger=INFO,console
But it cannot read the local file and fails with a permission error:
Could not find file: /usr/download/test_data/sale_record0501.txt
java.io.FileNotFoundException: /usr/download/test_data/.flumespool/.flumespool-main.meta (Permission denied)
How to solve this?
Your flume user may not have permission under the spooling directory. The spooling directory is under /usr, which may require root permission to access.
Either become root with sudo su and then run the agent, or prefix your execution command with sudo:
sudo bin/flume-ng agent -c conf -f conf/hdfs_sink.conf -n a1 -Dflume.root.logger=INFO,console
Alternatively, you can give the flume user ownership of the directory with
cd /usr/download/
sudo chown -R flume:somegroup test_data
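To confirm the change took effect before restarting the agent, a quick check along these lines can help; note the spooling source also needs write access, since it creates the .flumespool metadata directory mentioned in the error (the test file name below is arbitrary):
ls -ld /usr/download/test_data
sudo -u flume touch /usr/download/test_data/permission_check && sudo -u flume rm /usr/download/test_data/permission_check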

Windows Hadoop install fails to format

I don't know why this error is happening in Hadoop version 2.7.1.
$ ./hadoop namenode –format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/12/31 22:26:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ZKCRJB84CNJ0ZTJ/172.29.66.77
STARTUP_MSG: args = [▒Cformat]
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath =
C:\hadoop\etc\hadoop;C:\hadoop\share\hadoop\common\lib\activation-1.1.jar;C:\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;
C:\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;
(... remaining classpath entries omitted ...)
C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-common-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.1-tests.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.1.jar;C:\cygwin\contrib\capacity-scheduler\*.jar;C;C:\hadoop\contrib\capacity-scheduler\*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.8.0_51
************************************************************/
15/12/31 22:26:34 INFO namenode.NameNode: createNameNode [▒Cformat]
Usage: java NameNode [-backup] |
[-checkpoint] |
[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-rollback] |
[-rollingUpgrade <rollback|downgrade|started> ] |
[-finalize] |
[-importCheckpoint] |
[-initializeSharedEdits] |
[-bootstrapStandby] |
[-recover [ -force] ] |
[-metadataVersion ] ]
15/12/31 22:26:34 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ZKCRJB84CNJ0ZTJ/172.29.66.77
************************************************************/
Based on your args = [▒Cformat], the dash you typed in –format is not a plain ASCII hyphen; use the correct dash sign (-format with a regular -), not – or —.
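In other words, re-run the format with a plain ASCII hyphen, preferably through the hdfs entry point that the deprecation warning above recommends:
./hdfs namenode -format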

error while executing pig script?

p.pig contains the following code:
salaries= load 'salaries' using PigStorage(',') As (gender, age,salary,zip);
salaries= load 'salaries' using PigStorage(',') As (gender:chararray,age:int,salary:double,zip:long);
salaries=load 'salaries' using PigStorage(',') as (gender:chararray,details:bag{b(age:int,salary:double,zip:long)});
highsal= filter salaries by salary > 75000;
dump highsal
salbyage= group salaries by age;
describe salbyage;
salbyage= group salaries All;
salgrp= group salaries by $3;
A= foreach salaries generate age,salary;
describe A;
salaries= load 'salaries.txt' using PigStorage(',') as (gender:chararray,age:int,salary:double,zip:int);
vivek#ubuntu:~/Applications/Hadoop_program/pip$ pig -x mapreduce p.pig
15/09/24 03:16:32 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/09/24 03:16:32 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/09/24 03:16:32 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-09-24 03:16:32,990 [main] INFO org.apache.pig.Main - Apache Pig version 0.14.0 (r1640057) compiled Nov 16 2014, 18:02:05
2015-09-24 03:16:32,991 [main] INFO org.apache.pig.Main - Logging error messages to: /home/vivek/Applications/Hadoop_program/pip/pig_1443089792987.log
2015-09-24 03:16:38,966 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-09-24 03:16:41,232 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/vivek/.pigbootup not found
2015-09-24 03:16:42,869 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-09-24 03:16:42,870 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-09-24 03:16:42,870 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000
2015-09-24 03:16:45,436 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Encountered " <PATH> "salaries=load "" at line 7, column 1.
Was expecting one of:
<EOF>
"cat" ...
"clear" ...
"fs" ...
"sh" ...
"cd" ...
"cp" ...
"copyFromLocal" ...
"copyToLocal" ...
"dump" ...
"\\d" ...
"describe" ...
"\\de" ...
"aliases" ...
"explain" ...
"\\e" ...
"help" ...
"history" ...
"kill" ...
"ls" ...
"mv" ...
"mkdir" ...
"pwd" ...
"quit" ...
"\\q" ...
"register" ...
"rm" ...
"rmf" ...
"set" ...
"illustrate" ...
"\\i" ...
"run" ...
"exec" ...
"scriptDone" ...
"" ...
"" ...
<EOL> ...
";" ...
Details at logfile: /home/vivek/Applications/Hadoop_program/pip/pig_1443089792987.log
2015-09-24 03:16:45,554 [main] INFO org.apache.pig.Main - Pig script completed in 13 seconds and 48 milliseconds (13048 ms)
vivek#ubuntu:~/Applications/Hadoop_program/pip$
The file p.pig contains the code given at the start.
I started Pig in mapreduce mode.
While executing the code above, it hits the following error:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Encountered " "salaries=load "" at line 7, column 1.
Please help me resolve the error.
You have not provided spaces between the alias name and the command.
Pig expects at least one space before or after the '=' operator.
Change this line :
salaries=load 'salaries' using PigStorage(',') as (gender:chararray,details:bag{b(age:int,salary:double,zip:long)});
TO
salaries = load 'salaries' using PigStorage(',') as (gender:chararray,details:bag{b(age:int,salary:double,zip:long)});
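After adding the space, it can be worth checking that the script at least parses before submitting a MapReduce job; if your Pig version supports the syntax-check option (it is listed in pig -help on recent releases), something along these lines works without running the job:
pig -check p.pig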

Any command to get active namenode for nameservice in hadoop?

The command:
hdfs haadmin -getServiceState machine-98
Works only if you know the machine name. Is there any command like:
hdfs haadmin -getServiceState <nameservice>
which can tell you the IP/hostname of the active namenode?
To print out the namenodes use this command:
hdfs getconf -namenodes
To print out the secondary namenodes:
hdfs getconf -secondaryNameNodes
To print out the backup namenodes:
hdfs getconf -backupNodes
Note: These commands were tested using Hadoop 2.4.0.
Update 10-31-2014:
Here is a Python script that reads the NameNodes involved in Hadoop HA from the config file and determines which of them is active by using the hdfs haadmin command. The script is not fully tested, as I do not have HA configured; I have only tested the parsing against a sample file based on the Hadoop HA documentation. Feel free to use and modify as needed.
#!/usr/bin/env python
# coding: UTF-8
import xml.etree.ElementTree as ET
import subprocess as SP

if __name__ == "__main__":
    hdfsSiteConfigFile = "/etc/hadoop/conf/hdfs-site.xml"

    tree = ET.parse(hdfsSiteConfigFile)
    root = tree.getroot()
    hasHadoopHAElement = False
    activeNameNode = None
    for property in root:
        if "dfs.ha.namenodes" in property.find("name").text:
            hasHadoopHAElement = True
            nameserviceId = property.find("name").text[len("dfs.ha.namenodes")+1:]
            nameNodes = property.find("value").text.split(",")
            for node in nameNodes:
                # get the namenode machine address then check if it is active node
                for n in root:
                    prefix = "dfs.namenode.rpc-address." + nameserviceId + "."
                    elementText = n.find("name").text
                    if prefix in elementText:
                        nodeAddress = n.find("value").text.split(":")[0]

                        args = ["hdfs haadmin -getServiceState " + node]
                        p = SP.Popen(args, shell=True, stdout=SP.PIPE, stderr=SP.PIPE)

                        for line in p.stdout.readlines():
                            if "active" in line.lower():
                                print "Active NameNode: " + node
                                break
                        for err in p.stderr.readlines():
                            print "Error executing Hadoop HA command: ", err
                            break
    if not hasHadoopHAElement:
        print "Hadoop High-Availability configuration not found!"
Found this:
https://gist.github.com/cnauroth/7ff52e9f80e7d856ddb3
This works out of the box on my CDH5 namenodes, although I'm not sure other hadoop distributions will have http://namenode:50070/jmx available - if not, I think it can be added by deploying Jolokia.
Example:
curl 'http://namenode1.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
{
"beans" : [ {
"name" : "Hadoop:service=NameNode,name=NameNodeStatus",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
"State" : "active",
"NNRole" : "NameNode",
"HostAndPort" : "namenode1.example.com:8020",
"SecurityEnabled" : true,
"LastHATransitionTime" : 1436283324548
} ]
}
So by firing off one http request to each namenode (this should be quick) we can figure out which one is the active one.
It's also worth noting that if you talk to an inactive namenode via the WebHDFS REST API, you will get a 403 Forbidden and the following JSON:
{"RemoteException":{"exception":"StandbyException","javaClassName":"org.apache.hadoop.ipc.StandbyException","message":"Operation category READ is not supported in state standby"}}
In a High Availability Hadoop cluster, there will be 2 namenodes - one active and one standby.
To find the active namenode, we can try executing the test hdfs command on each of the namenodes and find the active name node corresponding to the successful run.
Below command executes successfully if the name node is active and fails if it is a standby node.
hadoop fs -test -e hdfs://<Name node>/
Unix script
active_node=''
if hadoop fs -test -e hdfs://<NameNode-1>/ ; then
active_node='<NameNode-1>'
elif hadoop fs -test -e hdfs://<NameNode-2>/ ; then
active_node='<NameNode-2>'
fi
echo "Active Dev Name node : $active_node"
You can do it in bash with hdfs CLI calls, too, with the caveat that it takes a bit more time since it makes a few calls to the API in succession; some may still prefer this to using a Python script.
This was tested with Hadoop 2.6.0
get_active_nn(){
  ha_name=$1  # Needs the NameServiceID
  ha_ns_nodes=$(hdfs getconf -confKey dfs.ha.namenodes.${ha_name})
  active=""
  for node in $(echo ${ha_ns_nodes//,/ }); do
    state=$(hdfs haadmin -getServiceState $node)
    if [ "$state" == "active" ]; then
      active=$(hdfs getconf -confKey dfs.namenode.rpc-address.${ha_name}.${node})
      break
    fi
  done
  if [ -z "$active" ]; then
    >&2 echo "ERROR: no active namenode found for ${ha_name}"
    exit 1
  else
    echo $active
  fi
}
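Usage is simply the function name followed by the NameServiceID, i.e. the value of dfs.nameservices (mycluster here is a placeholder):
get_active_nn mycluster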
After reading all the existing answers, I found that none combined the three steps of:
Identifying the namenodes from the cluster.
Resolving the node names to host:port.
Checking the status of each node (without requiring cluster admin privs).
Solution below combines hdfs getconf calls and JMX service call for node status.
#!/usr/bin/env python

from subprocess import check_output
import urllib, json, sys

def get_name_nodes(clusterName):
    ha_ns_nodes = check_output(['hdfs', 'getconf', '-confKey',
                                'dfs.ha.namenodes.' + clusterName])
    nodes = ha_ns_nodes.strip().split(',')
    nodeHosts = []
    for n in nodes:
        nodeHosts.append(get_node_hostport(clusterName, n))
    return nodeHosts

def get_node_hostport(clusterName, nodename):
    hostPort = check_output(
        ['hdfs', 'getconf', '-confKey',
         'dfs.namenode.rpc-address.{0}.{1}'.format(clusterName, nodename)])
    return hostPort.strip()

def is_node_active(nn):
    jmxPort = 50070
    host, port = nn.split(':')
    url = "http://{0}:{1}/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus".format(
        host, jmxPort)
    nnstatus = urllib.urlopen(url)
    parsed = json.load(nnstatus)
    return parsed.get('beans', [{}])[0].get('State', '') == 'active'

def get_active_namenode(clusterName):
    for n in get_name_nodes(clusterName):
        if is_node_active(n):
            return n

clusterName = (sys.argv[1] if len(sys.argv) > 1 else None)
if not clusterName:
    raise Exception("Specify cluster name.")

print 'Cluster: {0}'.format(clusterName)
print "Nodes: {0}".format(get_name_nodes(clusterName))
print "Active Name Node: {0}".format(get_active_namenode(clusterName))
From the Java API, you can use HAUtil.getAddressOfActive(fileSystem).
You can also run a curl command against the Ambari REST API to find out the active and standby NameNodes, for example:
curl -u username -H "X-Requested-By: ambari" -X GET http://cluster-hostname:8080/api/v1/clusters//services/HDFS
I found the commands below by simply typing hdfs; they may be useful for someone who comes here looking for help.
hdfs getconf -namenodes
This command gives you the service ID of the namenode, e.g. hn1.hadoop.com.
hdfs getconf -secondaryNameNodes
This command gives you the service IDs of any available secondary namenodes, e.g. hn2.hadoop.com.
hdfs getconf -backupNodes
This command gives you the service IDs of the backup nodes, if any.
hdfs getconf -nnRpcAddresses
This command gives you the name service ID along with the RPC port number, e.g. hn1.hadoop.com:8020.
You're welcome :)
In HDFS 2.6.0, this is the approach that worked for me:
ubuntu#platform2:~$ hdfs getconf -confKey dfs.ha.namenodes.arkin-platform-cluster
nn1,nn2
ubuntu#platform2:~$ sudo -u hdfs hdfs haadmin -getServiceState nn1
standby
ubuntu#platform2:~$ sudo -u hdfs hdfs haadmin -getServiceState nn2
active
Here is an example of bash code that returns the active namenode even if you do not have a local Hadoop installation.
It also runs faster, since curl calls are usually quicker than hadoop CLI calls.
Checked on Cloudera 7.1.
#!/bin/bash
export nameNode1=myNameNode1
export nameNode2=myNameNode2
active_node=''
T1=`curl --silent --insecure --request GET https://${nameNode1}:9871/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus | grep "\"State\" : \"active\"" | wc -l`
if [ $T1 == 1 ]
then
  active_node=${nameNode1}
else
  T1=`curl --silent --insecure --request GET https://${nameNode2}:9871/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus | grep "\"State\" : \"active\"" | wc -l`
  if [ $T1 == 1 ]
  then
    active_node=${nameNode2}
  fi
fi
echo "Active Dev Name node : $active_node"
#!/usr/bin/python
import subprocess
import sys
import os, errno


def getActiveNameNode():
    # list the configured namenodes, e.g. "nn1.example.com nn2.example.com"
    cmd_string = "hdfs getconf -namenodes"
    process = subprocess.Popen(cmd_string, shell=True, stdout=subprocess.PIPE)
    out, err = process.communicate()
    NameNodes = out
    Value = NameNodes.split()
    for val in Value:
        # the test succeeds (exit code 0) only against the active namenode;
        # a standby namenode rejects the read with a StandbyException
        cmd_str = "hadoop fs -test -e hdfs://" + val
        process = subprocess.Popen(cmd_str, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = process.communicate()
        if process.returncode == 0:
            return val


def main():
    out = getActiveNameNode()
    print(out)


if __name__ == '__main__':
    main()
You can simply use the command below. I have tested this in Hadoop 3.0.
hdfs haadmin -getAllServiceState
It returns the state of all the NameNodes.
more /etc/hadoop/conf/hdfs-site.xml
<property>
<name>dfs.ha.namenodes.nameservice1</name>
<value>namenode1353,namenode1357</value>
</property>
hdfs#:/home/ubuntu$ hdfs haadmin -getServiceState namenode1353
active
hdfs#:/home/ubuntu$ hdfs haadmin -getServiceState namenode1357
standby
