I want to monitor a multi-node Hadoop cluster (Hadoop version 0.20.2) using Ganglia. My Hadoop setup is working properly. I installed Ganglia after reading the following blogs:
http://hakunamapdata.com/ganglia-configuration-for-a-small-hadoop-cluster-and-some-troubleshooting/
http://hokamblogs.blogspot.in/2013/06/ganglia-overview-and-installation-on.html
I have also studied Monitoring with Ganglia.pdf (Appendix B, Ganglia and Hadoop/HBase).
I have modified only the following lines in **hadoop-metrics.properties** (the same on all Hadoop nodes):
# Configuration of the "dfs" context for ganglia
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=192.168.1.182:8649
# Configuration of the "mapred" context for ganglia
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
mapred.period=10
mapred.servers=192.168.1.182:8649
# Configuration of the "jvm" context for ganglia
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=192.168.1.182:8649
**gmetad.conf** (only on the Hadoop master node)
data_source "Hadoop-slaves" 5 192.168.1.182:8649
RRAs "RRA:AVERAGE:0.5:1:302400" //Because i want to analyse one week data.
**gmond.conf** (on all the Hadoop slave nodes and the Hadoop master)
globals {
daemonize = yes
setuid = yes
user = ganglia
debug_level = 0
max_udp_msg_len = 1472
mute = no
deaf = no
allow_extra_data = yes
host_dmax = 0 /*secs */
cleanup_threshold = 300 /*secs */
gexec = no
send_metadata_interval = 0
}
cluster {
name = "Hadoop-slaves"
owner = "Sandeep Priyank"
latlong = "unspecified"
url = "unspecified"
}
/* The host section describes attributes of the host, like the location */
host {
location = "CASL"
}
/* Feel free to specify as many udp_send_channels as you like. Gmond
used to only support having a single channel */
udp_send_channel {
host = 192.168.1.182
port = 8649
ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
port = 8649
}
/* You can specify as many tcp_accept_channels as you like to share
an xml description of the state of the cluster */
tcp_accept_channel {
port = 8649
}
Now Ganglia is only showing system metrics (memory, disk, etc.) for all the nodes; it is not showing the Hadoop metrics (jvm, mapred, etc.) on the web interface. How can I fix this problem?
I run Hadoop with Ganglia, and yes, I see a lot of Hadoop metrics in Ganglia (containers, map tasks, vmem). In fact, Hadoop reports more than a hundred metrics to Ganglia.
The hokamblogs post was enough for this.
I edited hadoop-metrics2.properties on the master node; the content is:
namenode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
namenode.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmetad_hostname_or_ip:8649
resourcemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
resourcemanager.sink.ganglia.period=10
resourcemanager.sink.ganglia.servers=gmetad_hostname_or_ip:8649
I also edited the same file on the slaves:
datanode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
datanode.sink.ganglia.period=10
datanode.sink.ganglia.servers=gmetad_hostname_or_ip:8649
nodemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
nodemanager.sink.ganglia.period=10
nodemanager.sink.ganglia.servers=gmetad_hostname_or_ip:8649
Remember to restart Hadoop and Ganglia after changing the files.
I hope this helps you.
Thanks to everyone. If you are using an older version of Hadoop, then take the following files from a newer version of Hadoop:
GangliaContext31.java
GangliaContext.java
and put them in the path hadoop/src/core/org/apache/hadoop/metrics/ganglia of your source tree.
Then compile Hadoop using ant (and set a proper proxy while compiling).
If it gives an error such as a missing function definition, copy that function definition from the newer version into the proper Java file and compile Hadoop again. It will work.
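After the backport, you would presumably point the metrics contexts at the Ganglia 3.1 class in hadoop-metrics.properties, roughly like this (a sketch; the gmond address is the one from the question):
# Configuration of the "dfs" context for Ganglia 3.1
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=192.168.1.182:8649
# Configuration of the "mapred" context for Ganglia 3.1
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=192.168.1.182:8649
# Configuration of the "jvm" context for Ganglia 3.1
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=192.168.1.182:8649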
I am using the Flume file_roll sink type to sink a high volume of data (~10,000 events/second) via the syslogTCP source type. However, the process (a Spark Streaming job) that is pushing data to the syslogTCP port gets stuck after 15-20 minutes, having ingested around 1.5 million events. I also observed some file descriptor issues on the Linux box where the flume-ng agent is running.
Below is the Flume configuration I am using:
agent2.sources = r1
agent2.channels = c1
agent2.sinks = f1
agent2.sources.r1.type = syslogtcp
agent2.sources.r1.bind = i-170d29de.aws.amgen.com
agent2.sources.r1.port = 44442
agent2.channels.c1.type = memory
agent2.channels.c1.capacity = 1000000000
agent2.channels.c1.transactionCapacity = 40000
agent2.sinks.f1.type = file_roll
agent2.sinks.f1.sink.directory = /opt/app/svc-edl-ops-ngmp-dev/rdas/flume_output
agent2.sinks.f1.sink.rollInterval = 300
agent2.sinks.f1.sink.rollSize = 104857600
agent2.sinks.f1.sink.rollCount = 0
agent2.sources.r1.channels = c1
agent2.sinks.f1.channel = c1
Because of performance issues, mainly the high ingestion rate, I cannot use the HDFS sink type.
This was my bad. I was using console logging, and at some point the PuTTY terminal was freezing because of a connectivity issue, causing the entire Flume agent to choke.
Redirecting the Flume console output, or using a log4j.properties that writes output to a file instead of the console, resolved the freezing issue.
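For example, a minimal log4j.properties sketch that routes the agent's log to a rolling file instead of the console (the log path and sizes are examples, not from the original answer):
# send the agent's log to a file rather than stdout
log4j.rootLogger=INFO, LOGFILE
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.File=/var/log/flume-ng/flume.log
log4j.appender.LOGFILE.MaxFileSize=100MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] %m%n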
I am using Ganglia to monitor Hadoop. gmond and gmetad are running fine. When I telnet to the gmond port (8649), and when I telnet to gmetad on its XML answer port, I get no Hadoop data. How can that be?
cluster {
name = "my cluster"
owner = "Master"
latlong = "unspecified"
url = "unspecified"
}
host {
location = localhost
}
udp_send_channel {
#bind_hostname = yes
#mcast_join = 239.2.11.71
host = localhost
port = 8649
ttl = 1
}
udp_recv_channel {
#mcast_join = 239.2.11.71
port = 8649
retry_bind = true
# Size of the UDP buffer. If you are handling lots of metrics you really
# should bump it up to e.g. 10MB or even higher.
# buffer = 10485760
}
tcp_accept_channel {
port = 8649
# If you want to gzip XML output
gzip_output = no
}
I found out the issue. It was related to the Hadoop metrics properties: I had set up Ganglia in hadoop-metrics.properties, but I had to set it up in the hadoop-metrics2.properties config file instead. Now Ganglia reports the correct metrics.
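For reference, a minimal hadoop-metrics2.properties sketch (the sink class is the same GangliaSink31 shown in the answer earlier on this page; the gmond address matches the localhost setup here):
# send all metrics to the Ganglia 3.1 sink
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
namenode.sink.ganglia.servers=localhost:8649
datanode.sink.ganglia.servers=localhost:8649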
http://hadoop.apache.org/docs/r2.1.0-beta/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
I am trying to make the example from the above link work, but I can't compile the code below:
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(512);
amContainer.setResource(capability);
// Set the container launch content into the
// ApplicationSubmissionContext
appContext.setAMContainerSpec(amContainer);
amContainer is a ContainerLaunchContext, and my Hadoop version is 2.1.0-beta.
I did some investigation and found that there is no method setResource in ContainerLaunchContext.
I have three questions about this:
1) Has the method been removed or something?
2) If the method has been removed, what should I do now?
3) Is there any documentation about YARN? I found the docs on the website very thin; I hope I can get a manual or something. For example, with
capability.setMemory(512);
I don't know whether it is 512 KB or 512 MB from the comments in the code.
This is actually the proper solution to the question; the previous answer might cause incorrect execution!
@Dyin, I couldn't fit it in a comment ;) Validated for 2.2.0 and 2.3.0.
Driver setting up resources for the ApplicationMaster:
ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
ApplicationId appId = appContext.getApplicationId();
appContext.setApplicationName(this.appName);
// Set up the container launch context for the application master
ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(amMemory);
appContext.setResource(capability);
appContext.setAMContainerSpec(amContainer);
Priority pri = Records.newRecord(Priority.class);
pri.setPriority(amPriority);
appContext.setPriority(pri);
appContext.setQueue(amQueue);
// Submit the application to the applications manager
yarnClient.submitApplication(appContext); // this.yarnClient = YarnClient.createYarnClient();
In the ApplicationMaster, this is how you should specify resources for containers (workers):
private AMRMClient.ContainerRequest setupContainerAskForRM() {
// setup requirements for hosts
// using * as any host will do for the distributed shell app
// set the priority for the request
Priority pri = Records.newRecord(Priority.class);
pri.setPriority(requestPriority);
// Set up resource type requirements
// For now, only memory is supported so we set memory requirements
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(containerMemory);
AMRMClient.ContainerRequest request = new AMRMClient.ContainerRequest(capability, null, null,
pri);
return request;
}
In some run() or main() method in your ApplicationMaster:
AMRMClientAsync.CallbackHandler allocListener = new RMCallbackHandler();
resourceManager = AMRMClientAsync.createAMRMClientAsync(1000, allocListener);
resourceManager.init(conf);
resourceManager.start();
for (int i = 0; i < numTotalContainers; ++i) {
AMRMClient.ContainerRequest containerAsk = setupContainerAskForRM();
resourceManager.addContainerRequest(containerAsk); //
}
Launching containers
You can use the original answer's solution (the java command), but it's just a cherry on top; it should work anyway.
You can set the memory available to the ApplicationMaster via the command, like this:
// Set the necessary command to execute the application master
Vector<CharSequence> vargs = new Vector<CharSequence>(30);
...
vargs.add("-Xmx" + amMemory + "m"); // notice "m" indicating megabytes, you can use also -Xms combined with -Xmx
... // transform vargs to String commands
amContainer.setCommands(commands);
This should solve your problem. As for the three questions: YARN is rapidly evolving software. My advice is to forget the documentation, get the source code, and read it. That will answer a lot of your questions.
I am new to Flume NG and need help tailing a file. I have a cluster running Hadoop, with Flume running remotely. I communicate with this cluster using PuTTY. I want to tail a file on my PC and put it into HDFS on the cluster. I am using the following configuration to do this:
#flume.conf: http source, hdfs sink
# Name the components on this agent
tier1.sources = r1
tier1.sinks = k1
tier1.channels = c1
# Describe/configure the source
tier1.sources.r1.type = exec
tier1.sources.r1.command = tail -F /(Path to file on my PC)
# Describe the sink
tier1.sinks.k1.type = hdfs
tier1.sinks.k1.hdfs.path = /user/ntimbadi/flume/
tier1.sinks.k1.hdfs.filePrefix = events-
tier1.sinks.k1.hdfs.round = true
tier1.sinks.k1.hdfs.roundValue = 10
tier1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
tier1.channels.c1.type = memory
tier1.channels.c1.capacity = 1000
tier1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
tier1.sources.r1.channels = c1
tier1.sinks.k1.channel = c1
I believe the mistake is in the source. This kind of source does not take a hostname or IP to look for (which in this case should be my PC). Could someone give me a hint as to how to tail a file on my PC and upload it to the remote HDFS using Flume?
The exec source in your configuration will run on the machine where you start Flume's tier1 agent. If you want to collect data from another machine, you'll need to start a Flume agent on that machine too; to sum up, you need:
an agent (remote1) running on the remote machine that has an avro source, which will listen for events from collector agents and will act like an aggregator.
an agent (local1) running on your machine (to act like a collector) that has an exec source and sends data to the remote agent via avro sink.
Or alternatively, you can have only one flume agent running on your local machine (having the same configuration you posted) and set the hdfs path as "hdfs://REMOTE_IP/hdfs/path" (though I'm not entirely sure this will work).
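If you try that single-agent alternative, the only change to your original configuration would be something along these lines (a sketch; the NameNode host and port are placeholders):
tier1.sinks.k1.hdfs.path = hdfs://REMOTE_NAMENODE_HOST:8020/user/ntimbadi/flume/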
Edit:
Below are sample configurations for the two-agent scenario (they may not work without some modification).
remote1.channels.mem-ch-1.type = memory
remote1.sources.avro-src-1.channels = mem-ch-1
remote1.sources.avro-src-1.type = avro
remote1.sources.avro-src-1.port = 10060
# REPLACE 10.88.66.4 WITH YOUR MACHINE'S EXTERNAL IP
remote1.sources.avro-src-1.bind = 10.88.66.4
remote1.sinks.k1.channel = mem-ch-1
remote1.sinks.k1.type = hdfs
remote1.sinks.k1.hdfs.path = /user/ntimbadi/flume/
remote1.sinks.k1.hdfs.filePrefix = events-
remote1.sinks.k1.hdfs.round = true
remote1.sinks.k1.hdfs.roundValue = 10
remote1.sinks.k1.hdfs.roundUnit = minute
remote1.sources = avro-src-1
remote1.sinks = k1
remote1.channels = mem-ch-1
and
local1.channels.mem-ch-1.type = memory
local1.sources.exc-src-1.channels = mem-ch-1
local1.sources.exc-src-1.type = exec
local1.sources.exc-src-1.command = tail -F /(Path to file on my PC)
local1.sinks.avro-snk-1.channel = mem-ch-1
local1.sinks.avro-snk-1.type = avro
# REPLACE 10.88.66.4 WITH THE REMOTE MACHINE'S IP
local1.sinks.avro-snk-1.hostname = 10.88.66.4
local1.sinks.avro-snk-1.port = 10060
local1.sources = exc-src-1
local1.sinks = avro-snk-1
local1.channels = mem-ch-1
I want to develop a website that will allow analysts within the company to run Hadoop jobs (choosing from a set of predefined jobs) and see their jobs' status/progress.
Is there an easy way to do this (get running job statuses, etc.) via Ruby/Python?
How do you expose your Hadoop cluster to internal clients in your company?
I have found one way to get information about jobs on JobTracker. This is the code:
Configuration conf = new Configuration();
conf.set("mapred.job.tracker", "URL");
JobClient client = new JobClient(new JobConf(conf));
JobStatus[] jobStatuses = client.getAllJobs();
for (JobStatus jobStatus : jobStatuses) {
long lastTaskEndTime = 0L;
TaskReport[] mapReports = client.getMapTaskReports(jobStatus.getJobID());
for (TaskReport r : mapReports) {
if (lastTaskEndTime < r.getFinishTime()) {
lastTaskEndTime = r.getFinishTime();
}
}
TaskReport[] reduceReports = client.getReduceTaskReports(jobStatus.getJobID());
for (TaskReport r : reduceReports) {
if (lastTaskEndTime < r.getFinishTime()) {
lastTaskEndTime = r.getFinishTime();
}
}
client.getSetupTaskReports(jobStatus.getJobID());
client.getCleanupTaskReports(jobStatus.getJobID());
System.out.println("JobID: " + jobStatus.getJobID().toString() +
", username: " + jobStatus.getUsername() +
", startTime: " + jobStatus.getStartTime() +
", endTime: " + lastTaskEndTime +
", Durration: " + (lastTaskEndTime - jobStatus.getStartTime()));
}
Since the 'beta 2' version of Cloudera's Hadoop Distribution, you can use Hue (Hadoop User Experience), formerly called Cloudera Desktop, with almost no effort.
Since that version it has grown enormously: it comes with a job designer, a Hive interface, and much more. You should definitely check it out before deciding to build your own application.
Maybe a good place to start would be to take a look at Cloudera Desktop. It provides a web interface for cluster administration and job development tasks. It's free to download.
Nothing like this ships with Hadoop. It should be trivial to build this functionality. Some of it is available via the JobTracker's web pages, and some you will have to build yourself.
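For illustration only, here is a rough Java sketch (not from the original answers) of how you might wrap the JobClient calls shown above in an embedded HTTP server, so that a Ruby or Python web frontend can poll job statuses. The JobTracker address, listen port, and output format are all placeholders.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

public class JobStatusEndpoint {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "jobtracker-host:8021"); // placeholder JobTracker address
        final JobClient client = new JobClient(new JobConf(conf));

        // listen on port 8080 (placeholder) and serve job statuses under /jobs
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/jobs", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                // one tab-separated line per job: id, user, start time
                StringBuilder body = new StringBuilder();
                for (JobStatus s : client.getAllJobs()) {
                    body.append(s.getJobID()).append('\t')
                        .append(s.getUsername()).append('\t')
                        .append(s.getStartTime()).append('\n');
                }
                byte[] bytes = body.toString().getBytes("UTF-8");
                exchange.sendResponseHeaders(200, bytes.length);
                OutputStream os = exchange.getResponseBody();
                os.write(bytes);
                os.close();
            }
        });
        server.start(); // the analysts' web app can now GET http://thishost:8080/jobs
    }
}
A Ruby or Python client would then only need a plain HTTP GET to read the current job list, keeping the Hadoop client libraries entirely on the Java side.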