Flume NG not writing to HDFS - hadoop

I'm new to Flume and Hadoop, so I'm trying to set up the simplest (but somewhat helpful/realistic) example I can. I'm using the HortonWorks Sandbox in a VM client. After following one tutorial (which involves setting up and using Flume), everything seemed to be working correctly.
So I set up my own flume.conf that should:
Read from an Apache access log
Use a memory channel
Write to HDFS
Simple enough, right? Here's my conf file:
agent.sources=exec-source
agent.sinks=hdfs-sink
agent.channels=ch1
agent.sources.exec-source.type=exec
agent.sources.exec-source.command=tail -F /var/log/httpd/access_log
agent.sinks.hdfs-sink.type=hdfs
agent.sinks.hdfs-sink.hdfs.path=/flume/events
agent.sinks.hdfs-sink.hdfs.filePrefix=apacheaccess
agent.sinks.hdfs-sink.hdfs.rollInterval=10
agent.sinks.hdfs-sink.hdfs.rollSize=0
agent.channels.ch1.type=memory
agent.channels.ch1.capacity=1000
agent.sources.exec-source.channels=ch1
agent.sinks.hdfs-sink.channel=ch1
I've seen several people have problems writing to HDFS, and in most cases the issue was that there weren't enough logs to fill an HDFS block. However, rollInterval=10 should generate a new file every 10 seconds, as long as at least one line is written to it. I can run "tail -F /var/log/httpd/access_log" in another window and see lines being written to the log fairly consistently, so I don't think that's it.
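For reference (this is not part of my conf above), the HDFS sink rolls a file as soon as any one of its interval, size, or event-count thresholds fires, and hdfs.rollCount defaults to 10 events if left unset. A sketch of all three settings together:
# roll every 10 seconds
agent.sinks.hdfs-sink.hdfs.rollInterval=10
# 0 disables size-based rolling
agent.sinks.hdfs-sink.hdfs.rollSize=0
# 0 disables count-based rolling (defaults to 10 events; not set in my conf above)
agent.sinks.hdfs-sink.hdfs.rollCount=0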
Here's the command/output from trying to start this agent:
[root@sandbox ~]# flume-ng agent -f /etc/flume/conf/flume.conf -n apache-agent
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/lib/hadoop/libexec/../lib/slf4j-api-1.4.3.jar from classpath
Info: Excluding /usr/lib/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-api-1.6.1.jar from classpath
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-log4j12-1.6.1.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.4.3.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12-1.4.3.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-api-1.6.1.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar from classpath
Info: Excluding /usr/lib/hadoop/libexec/../lib/slf4j-api-1.4.3.jar from classpath
Info: Excluding /usr/lib/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar from classpath
+ exec /usr/jdk/jdk1.6.0_31//bin/java -Xmx20m -cp '/usr/lib/flume/lib/*:/usr/lib/hadoop/libexec/../conf:/usr/jdk/jdk1.6.0_31/lib/tools.jar:/usr/lib/hadoop/libexec/..:/usr/lib/hadoop/libexec/../hadoop-core-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/ambari-log4j-1.2.3.7.jar:/usr/lib/hadoop/libexec/../lib/asm-3.2.jar:/usr/lib/hadoop/libexec/../lib/aspectjrt-1.6.11.jar:/usr/lib/hadoop/libexec/../lib/aspectjtools-1.6.11.jar:/usr/lib/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/libexec/../lib/commons-cli-1.2.jar:/usr/lib/hadoop/libexec/../lib/commons-codec-1.4.jar:/usr/lib/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/libexec/../lib/commons-configuration-1.6.jar:/usr/lib/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop/libexec/../lib/commons-digester-1.8.jar:/usr/lib/hadoop/libexec/../lib/commons-el-1.0.jar:/usr/lib/hadoop/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/lib/hadoop/libexec/../lib/commons-io-2.1.jar:/usr/lib/hadoop/libexec/../lib/commons-lang-2.4.jar:/usr/lib/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop/libexec/../lib/commons-math-2.1.jar:/usr/lib/hadoop/libexec/../lib/commons-net-3.1.jar:/usr/lib/hadoop/libexec/../lib/core-3.1.1.jar:/usr/lib/hadoop/libexec/../lib/guava-11.0.2.jar:/usr/lib/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/hadoop-fairscheduler-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/hadoop-lzo-0.5.0.jar:/usr/lib/hadoop/libexec/../lib/hadoop-thriftfs-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/hadoop-tools.jar:/usr/lib/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop/libexec/../lib/hue-plugins-2.2.0-SNAPSHOT.jar:/usr/lib/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/lib/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop/libexec/../lib/jdeb-0.8.jar:/usr/lib/hadoop/libexec/../lib/jersey-core-1.8.jar:/usr/lib/hadoop/libexec/../lib/jersey-json-1.8.jar:/usr/lib/hadoop/libexec/../lib/jersey-server-1.8.jar:/usr/lib/hadoop/libexec/../lib/jets3t-0.6.1.jar:/usr/lib/hadoop/libexec/../lib/jetty-6.1.26.jar:/usr/lib/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/libexec/../lib/jsch-0.1.42.jar:/usr/lib/hadoop/libexec/../lib/junit-4.5.jar:/usr/lib/hadoop/libexec/../lib/kfs-0.2.2.jar:/usr/lib/hadoop/libexec/../lib/log4j-1.2.15.jar:/usr/lib/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/libexec/../lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/libexec/../lib/oro-2.0.8.jar:/usr/lib/hadoop/libexec/../lib/postgresql-9.1-901-1.jdbc4.jar:/usr/lib/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop/libexec/../lib/xmlenc-0.52.jar:/usr/lib/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/lib/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/usr/lib/hbase/bin/../conf:/usr/jdk/jdk1.6.0_31/lib/tools.jar:/usr/lib/hbase/bin/..:/usr/lib/hbase/bin/../hbase-0.94.6.1.3.0.0-107-security.jar:/usr/lib/hbase/bin/../hbase-0.94.6.1.3.0.0-107-security-tests.jar:/usr/lib/hbase/bin/../lib/activation-1.1.jar:/usr/lib/hbase/bin/../lib/asm-3.1.jar:/usr/lib/hbase/bin/../lib/avro-1.5.3.jar:/usr/lib/hbase/bin/../lib/avro-ipc-1.5.3.jar:/usr/lib/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/usr/lib/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hbase/
bin/../lib/commons-cli-1.2.jar:/usr/lib/hbase/bin/../lib/commons-codec-1.4.jar:/usr/lib/hbase/bin/../lib/commons-collections-3.2.1.jar:/usr/lib/hbase/bin/../lib/commons-configuration-1.6.jar:/usr/lib/hbase/bin/../lib/commons-digester-1.8.jar:/usr/lib/hbase/bin/../lib/commons-el-1.0.jar:/usr/lib/hbase/bin/../lib/commons-httpclient-3.1.jar:/usr/lib/hbase/bin/../lib/commons-io-2.1.jar:/usr/lib/hbase/bin/../lib/commons-lang-2.5.jar:/usr/lib/hbase/bin/../lib/commons-logging-1.1.1.jar:/usr/lib/hbase/bin/../lib/commons-math-2.1.jar:/usr/lib/hbase/bin/../lib/commons-net-1.4.1.jar:/usr/lib/hbase/bin/../lib/core-3.1.1.jar:/usr/lib/hbase/bin/../lib/guava-11.0.2.jar:/usr/lib/hbase/bin/../lib/hadoop-core.jar:/usr/lib/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/usr/lib/hbase/bin/../lib/httpclient-4.1.2.jar:/usr/lib/hbase/bin/../lib/httpcore-4.1.3.jar:/usr/lib/hbase/bin/../lib/jackson-core-asl-1.8.8.jar:/usr/lib/hbase/bin/../lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hbase/bin/../lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hbase/bin/../lib/jackson-xc-1.8.8.jar:/usr/lib/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/usr/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/usr/lib/hbase/bin/../lib/jaxb-api-2.1.jar:/usr/lib/hbase/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hbase/bin/../lib/jersey-core-1.8.jar:/usr/lib/hbase/bin/../lib/jersey-json-1.8.jar:/usr/lib/hbase/bin/../lib/jersey-server-1.8.jar:/usr/lib/hbase/bin/../lib/jettison-1.1.jar:/usr/lib/hbase/bin/../lib/jetty-6.1.26.jar:/usr/lib/hbase/bin/../lib/jetty-util-6.1.26.jar:/usr/lib/hbase/bin/../lib/jruby-complete-1.6.5.jar:/usr/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/bin/../lib/jsr305-1.3.9.jar:/usr/lib/hbase/bin/../lib/junit-4.10-HBASE-1.jar:/usr/lib/hbase/bin/../lib/libthrift-0.8.0.jar:/usr/lib/hbase/bin/../lib/log4j-1.2.16.jar:/usr/lib/hbase/bin/../lib/metrics-core-2.1.2.jar:/usr/lib/hbase/bin/../lib/netty-3.2.4.Final.jar:/usr/lib/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/usr/lib/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/usr/lib/hbase/bin/../lib/stax-api-1.0.1.jar:/usr/lib/hbase/bin/../lib/velocity-1.7.jar:/usr/lib/hbase/bin/../lib/xmlenc-0.52.jar:/usr/lib/hbase/bin/../lib/zookeeper.jar:/etc/hadoop/conf:/usr/lib/hadoop/bin:/usr/lib/hadoop/build.xml:/usr/lib/hadoop/CHANGES.txt:/usr/lib/hadoop/conf:/usr/lib/hadoop/contrib:/usr/lib/hadoop/hadoop-ant-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-client-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-client.jar:/usr/lib/hadoop/hadoop-core-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-core.jar:/usr/lib/hadoop/hadoop-examples-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-examples.jar:/usr/lib/hadoop/hadoop-minicluster-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-minicluster.jar:/usr/lib/hadoop/hadoop-test-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-test.jar:/usr/lib/hadoop/hadoop-tools-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/hadoop-tools.jar:/usr/lib/hadoop/HDP-CHANGES.txt:/usr/lib/hadoop/ivy:/usr/lib/hadoop/ivy.xml:/usr/lib/hadoop/lib:/usr/lib/hadoop/libexec:/usr/lib/hadoop/LICENSE.txt:/usr/lib/hadoop/logs:/usr/lib/hadoop/LONGWING-CHANGES.txt:/usr/lib/hadoop/NOTICE.txt:/usr/lib/hadoop/pids:/usr/lib/hadoop/README.txt:/usr/lib/hadoop/sbin:/usr/lib/hadoop/webapps:/usr/lib/hadoop/lib/ambari-log4j-1.2.3.7.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/aspectjrt-1.6.11.jar:/usr/lib/hadoop/lib/aspectjtools-1.6.11.jar:/usr/lib/ha
doop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.0.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/core-3.1.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/hadoop-capacity-scheduler-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/lib/hadoop-fairscheduler-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/lib/hadoop-lzo-0.5.0.jar:/usr/lib/hadoop/lib/hadoop-thriftfs-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/lib/hadoop-tools.jar:/usr/lib/hadoop/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop/lib/hsqldb-1.8.0.10.LICENSE.txt:/usr/lib/hadoop/lib/hue-plugins-2.2.0-SNAPSHOT.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.12.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop/lib/jdeb-0.8.jar:/usr/lib/hadoop/lib/jdiff:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-2.1:/usr/lib/hadoop/lib/junit-4.5.jar:/usr/lib/hadoop/lib/kfs-0.2.2.jar:/usr/lib/hadoop/lib/kfs-0.2.LICENSE.txt:/usr/lib/hadoop/lib/log4j-1.2.15.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/oro-2.0.8.jar:/usr/lib/hadoop/lib/hue-plugins-2.2.0-SNAPSHOT.jar:/usr/lib/hadoop/lib/hue-plugins-2.2.0-SNAPSHOT.jar:/usr/lib/hadoop/lib/*plugin*jar:/usr/lib/hadoop/lib/postgresql-9.1-901-1.jdbc4.jar:/usr/lib/hadoop/lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/zookeeper/bin:/usr/lib/zookeeper/conf:/usr/lib/zookeeper/lib:/usr/lib/zookeeper/zookeeper-3.4.5.1.3.0.0-107.jar:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/lib/ant-1.8.0.jar:/usr/lib/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/lib/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/lib/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/lib/zookeeper/lib/commons-codec-1.6.jar:/usr/lib/zookeeper/lib/commons-io-2.2.jar:/usr/lib/zookeeper/lib/commons-logging-1.1.1.jar:/usr/lib/zookeeper/lib/httpclient-4.2.3.jar:/usr/lib/zookeeper/lib/httpcore-4.2.3.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar:/usr/lib/zookeeper/lib/jsoup-1.7.1.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/lib/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/lib/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/lib/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/lib/zookeeper/lib/maven-model-2.2.1.jar:/usr/lib/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/lib/zookeeper/lib/maven-profile-2.2.1.jar:/usr/lib/zookeeper/lib/maven-project-2.2.1.jar:/usr/lib/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/lib/zookeeper/lib/maven-settings-2.2.1.jar:/usr/lib/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/lib/zookeeper/l
ib/netty-3.2.2.Final.jar:/usr/lib/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/lib/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/lib/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/lib/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/lib/zookeeper/lib/wagon-http-2.4.jar:/usr/lib/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/lib/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/lib/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/lib/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/lib/zookeeper/lib/xercesMinimal-1.9.6.2.jar:/usr/lib/hadoop/libexec/../conf:/usr/jdk/jdk1.6.0_31/lib/tools.jar:/usr/lib/hadoop/libexec/..:/usr/lib/hadoop/libexec/../hadoop-core-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/ambari-log4j-1.2.3.7.jar:/usr/lib/hadoop/libexec/../lib/asm-3.2.jar:/usr/lib/hadoop/libexec/../lib/aspectjrt-1.6.11.jar:/usr/lib/hadoop/libexec/../lib/aspectjtools-1.6.11.jar:/usr/lib/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/libexec/../lib/commons-cli-1.2.jar:/usr/lib/hadoop/libexec/../lib/commons-codec-1.4.jar:/usr/lib/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/libexec/../lib/commons-configuration-1.6.jar:/usr/lib/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop/libexec/../lib/commons-digester-1.8.jar:/usr/lib/hadoop/libexec/../lib/commons-el-1.0.jar:/usr/lib/hadoop/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/lib/hadoop/libexec/../lib/commons-io-2.1.jar:/usr/lib/hadoop/libexec/../lib/commons-lang-2.4.jar:/usr/lib/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop/libexec/../lib/commons-math-2.1.jar:/usr/lib/hadoop/libexec/../lib/commons-net-3.1.jar:/usr/lib/hadoop/libexec/../lib/core-3.1.1.jar:/usr/lib/hadoop/libexec/../lib/guava-11.0.2.jar:/usr/lib/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/hadoop-fairscheduler-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/hadoop-lzo-0.5.0.jar:/usr/lib/hadoop/libexec/../lib/hadoop-thriftfs-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/libexec/../lib/hadoop-tools.jar:/usr/lib/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop/libexec/../lib/hue-plugins-2.2.0-SNAPSHOT.jar:/usr/lib/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/lib/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop/libexec/../lib/jdeb-0.8.jar:/usr/lib/hadoop/libexec/../lib/jersey-core-1.8.jar:/usr/lib/hadoop/libexec/../lib/jersey-json-1.8.jar:/usr/lib/hadoop/libexec/../lib/jersey-server-1.8.jar:/usr/lib/hadoop/libexec/../lib/jets3t-0.6.1.jar:/usr/lib/hadoop/libexec/../lib/jetty-6.1.26.jar:/usr/lib/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/libexec/../lib/jsch-0.1.42.jar:/usr/lib/hadoop/libexec/../lib/junit-4.5.jar:/usr/lib/hadoop/libexec/../lib/kfs-0.2.2.jar:/usr/lib/hadoop/libexec/../lib/log4j-1.2.15.jar:/usr/lib/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/libexec/../lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/libexec/../lib/oro-2.0.8.jar:/usr/lib/hadoop/libexec/../lib/postgresql-9.1-901-1.jdbc4.jar:/usr/lib/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop/libexec/../lib/xmlenc-0.52.jar:/usr/lib/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/lib/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/conf' 
-Djava.library.path=:/usr/lib/hadoop/libexec/../lib/native/Linux-amd64-64:/usr/lib/hadoop/libexec/../lib/native/Linux-amd64-64:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64 org.apache.flume.node.Application -f /etc/flume/conf/flume.conf -n apache-agent
13/09/03 12:35:11 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
13/09/03 12:35:11 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/etc/flume/conf/flume.conf
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Added sinks: hdfs-sink Agent: agent
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Processing:hdfs-sink
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Processing:hdfs-sink
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Processing:hdfs-sink
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Processing:hdfs-sink
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Processing:hdfs-sink
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Processing:hdfs-sink
13/09/03 12:35:11 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent]
13/09/03 12:35:11 WARN node.AbstractConfigurationProvider: No configuration found for this host:apache-agent
13/09/03 12:35:11 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
Now at this point I realize I'm missing several things.
1) I expect to see something along the lines of "INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink started" as my last line, which I don't.
2) If I run "hadoop fs -lsr /flume" I should see new logs in HDFS, but I don't. The last logs are from 8/28/2013, when I did the tutorial.
I also don't expect to see that WARN line in there, but I'm not sure why it's there, so maybe that's my problem and someone can tell me why.
So my questions are:
1) Can anyone tell me what might be going wrong here?
2) Once I get this problem sorted out, is there anything else I should be looking at to verify that Flume is working correctly: reading what it should, and writing where and when it should?

The answer is, of course, to name the agent when you start Flume the same as the agent name in the config file. So my command line should have ended with "-n agent" and NOT "-n apache-agent", since my flume.conf file specifies "agent.X" properties.
After that everything appears to work.
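For reference, the corrected command line is sketched below; the --conf flag is an addition on my part, only to address the "No configuration directory set" warning in the output above, and the lsr simply confirms that apacheaccess* files are landing under /flume/events.
# agent name must match the "agent." prefix used in flume.conf
flume-ng agent --conf /etc/flume/conf -f /etc/flume/conf/flume.conf -n agent
# then check that new files show up
hadoop fs -lsr /flume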

In the config file you specified
agent.sources=exec-source
agent.sinks=hdfs-sink
agent.channels=ch1
so the agent name is 'agent'. Flume expects that, when running the agent, you use the same name as specified in the config file, so the command should be:
/usr/lib/flume/bin/flume-ng agent -n agent

Did you set the agent in step #3?
Check out the original blog post and the Hadoop UI Hue and its Hadoop tutorials.

Related

Apache Nifi windows unable to load NAR library bundles

I'm only attempting to launch the NiFi UI as a local instance to start playing with it. I've unzipped the package and made sure to set the JAVA_HOME variable to my Java 1.8. When I try bin/run-nifi, the error message in my nifi-app log is:
2018-05-03 15:03:50,585 INFO [main] org.apache.nifi.NiFi Launching NiFi...
2018-05-03 15:03:52,330 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Determined default nifi.properties path to be 'Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\conf\nifi.properties'
2018-05-03 15:03:52,363 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Loaded 146 properties from Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\conf\nifi.properties
2018-05-03 15:03:52,423 INFO [main] org.apache.nifi.NiFi Loaded 146 properties
2018-05-03 15:03:52,779 INFO [main] org.apache.nifi.BootstrapListener Started Bootstrap Listener, Listening for incoming requests on port 64802
2018-05-03 15:03:53,071 INFO [main] org.apache.nifi.BootstrapListener Successfully initiated communication with Bootstrap
2018-05-03 15:03:53,181 WARN [main] org.apache.nifi.nar.NarUnpacker Unable to load NAR library bundles due to java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework directory does not have read/write privilege Will proceed without loading any further Nar bundles
2018-05-03 15:03:53,242 ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created
java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created
at org.apache.nifi.util.FileUtils.ensureDirectoryExistAndCanReadAndWrite(FileUtils.java:48)
at org.apache.nifi.nar.NarClassLoaders.load(NarClassLoaders.java:155)
at org.apache.nifi.nar.NarClassLoaders.init(NarClassLoaders.java:131)
at org.apache.nifi.NiFi.<init>(NiFi.java:133)
at org.apache.nifi.NiFi.<init>(NiFi.java:71)
at org.apache.nifi.NiFi.main(NiFi.java:292)
2018-05-03 15:03:53,383 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2018-05-03 15:03:53,387 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
I've followed the installation instructions and haven't been able to troubleshoot it. How do I load these NAR files when running NiFi?
Thanks
I believe the underlying error in your output is java.io.IOException: Z:\DoE\LOCAL-~1\NIFI-1~1.0\.\work\nar\framework could not be created.
NiFi requires file permissions to create and write to several directories; there is a list in the NiFi Admin Guide: How to install and start NiFi. NiFi uses these to unpack the NAR files, write logs, and hold the various data repositories that comprise your data flow.
You have a few options:
Modify the permissions of the directory to allow NiFi read/write access. This can be done for each individual child directory.
Copy the entire NiFi distribution to a read/write location and run it from there.
Edit the conf/nifi.properties file to change the locations of these directories to read/write locations (a sketch follows this list). See NiFi Admin Guide: System Properties for help with the properties.
Symlinks are a great solution for systems that support symlinks.
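As a rough sketch of the nifi.properties option, these are the entries that control the directories involved (property names as in NiFi 1.x; the D:/ paths are just example writable locations, not values from your install):
# point the NAR working directory and the repositories at a writable location
nifi.nar.working.directory=D:/nifi/work/nar/
nifi.documentation.working.directory=D:/nifi/work/docs/components
nifi.database.directory=D:/nifi/database_repository
nifi.flowfile.repository.directory=D:/nifi/flowfile_repository
nifi.content.repository.directory.default=D:/nifi/content_repository
nifi.provenance.repository.directory.default=D:/nifi/provenance_repository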
Two things you can try:
Run NiFi with administrator privileges (not a good practice) by going to ~\<NIFI_INSTALLATION_DIR>\bin, right-clicking run-nifi.bat, and clicking Run as Administrator.
Move the NiFi directory to a location the logged-in user has full access to, e.g. C:\Users\<YOUR_USER>\Documents\, and then try to execute bin\run-nifi.bat again (or grant access in place, as sketched below).
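If moving the install is not practical, granting the logged-in user full control over the existing directory also works. A sketch using the Windows icacls tool (the path below is a placeholder for your NiFi install directory):
REM (OI)(CI)F = full control inherited by child files and folders; /T applies it recursively
icacls "Z:\path\to\nifi-1.x.0" /grant "%USERNAME%":(OI)(CI)F /T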
Similar to the resolution that James proposed, I had to do the three-step process below.
My scenario: I'm using Docker containers and had the same problem. Even changing the user of my container to root didn't work. So I did the following:
1 - Download MiNiFi: https://nifi.apache.org/minifi/download.html
2 - Untar and run the MiNiFi agent on my own laptop (I'm using a Mac) so that the necessary folders and files get created.
3 - Tar it up again and add it to the Dockerfile of my container build
Done! Everything worked fine after that.

Hadoop YARN Job Stuck After Submission and Status Remains Undefined

I am trying to run one of the example MR jobs on my pseudo-distributed cluster in a VirtualBox VM (RHEL 6.5, 8 GB RAM, 100 GB HDD), but after submission the job just gets stuck there:
INFO: mapreduce.Job: Running job: job_1437483993_001
The application tracking URL (http://localhost:8088/cluster/applicationID) shows the result like this:
User: root
Name : grep-search
Application-Type : mapreduce
Status : Accepted
FinalStatus : Undefined
What I have tried:
modified yarn-site.xml and mapred-site.xml for minimum and maximum memory allocation, following the tutorial (http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/); a sketch of the kind of settings involved follows below
ensured that there is enough free disk space to accommodate new jobs
jps shows all the services are running properly
But no luck. Please guide me through this.
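These are roughly the yarn-site.xml properties that tutorial walks through; the values are only illustrative for an 8 GB single-node VM, not taken from my actual configuration:
<!-- illustrative memory limits for an 8 GB single-node VM; adjust to your machine -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
mapred-site.xml then gets matching mapreduce.map.memory.mb and mapreduce.reduce.memory.mb values that fit inside those limits.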
Edit:
Here's the log:
[root@master ~]# hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep /user/pradeep output23 'dfs[a-z.]+'
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
16/04/27 10:21:09 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/04/27 10:21:09 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/04/27 10:21:09 INFO input.FileInputFormat: Total input paths to process : 4
16/04/27 10:21:10 INFO mapreduce.JobSubmitter: number of splits:4
16/04/27 10:21:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1461732411884_0001
16/04/27 10:21:11 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/04/27 10:21:11 INFO impl.YarnClientImpl: Submitted application application_1461732411884_0001
16/04/27 10:21:11 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1461732411884_0001/
16/04/27 10:21:11 INFO mapreduce.Job: Running job: job_1461732411884_0001
This looks like some other issue, not the infrastructure. Please paste what you are trying to do in the MapReduce code.

Job tracker is not starting up

I am installing CDH 4.6.0 with the help of this site. I am running start-all.sh to start the services:
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-hdfs-secondarynamenode start
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
bin/bash [to start bash prompt after starting services]
After executing these instructions as part of a Dockerfile, like
CMD ["start-all.sh"]
it starts all the services. When I run jps, I can see only:
jps
Namenode
Datanode
Secondary Namenode
Tasktracker
But the JobTracker has not started. The log is as follows:
2015-01-23 07:26:46,706 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-01-23 07:26:46,735 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 8021
2015-01-23 07:26:46,735 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2015-01-23 07:26:47,725 INFO org.apache.hadoop.mapred.JobTracker: Creating the system directory
2015-01-23 07:26:47,750 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://localhost:8020/var/lib/hadoop-hdfs/cache/mapred/mapred/system) because of permissions.
2015-01-23 07:26:47,750 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
2015-01-23 07:26:47,751 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
But when I start it again from the bash prompt, it works. Why is that? Any suggestions?
I can see from the log that the JobTracker comes up at port 8021, so why is it trying to operate against port 8020 (the hdfs://localhost:8020 system directory)? Is that a problem? If so, how do I tackle it?
It seems like the mapred user doesn't have the privilege to write files/directories inside the HDFS root directory.
Switch to the hdfs user and assign the necessary privilege to the mapred user before starting the MapReduce service:
sudo -su hdfs ;
hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker stop; /etc/init.d/hadoop-0.20-mapreduce-jobtracker start
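A narrower alternative, sketched from the mapred.system.dir path in the JobTracker log above, is to create that directory and hand it to mapred instead of opening up the whole root:
# run as the hdfs superuser; the path comes from the JobTracker log above
sudo -u hdfs hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/system
sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred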

Bamboo: More than one agent per EC2 elastic instance?

Is it possible to run more than one Bamboo Agent per EC2 Elastic Instance?
We use Docker in our build system and the process seems mostly network IO bound. It would be nice if we could run multiple Agents on one machine.
By running multiple agents on one machine instead of starting multiple machines, we also don't need to worry about shipping Docker images between machines as artifacts.
Update 30 Oct 2014:
I tried copying the current startup script and adding a new home:
$ cat /opt/bamboo-elastic-agent/bin/bamboo-elastic-agent2
#!/bin/sh
bambooAgentBin=$(cd -P -- $(dirname $0) && pwd)
. $bambooAgentBin/bambooShellFunctions.sh
echo Starting Elastic Bamboo Agent...
java -Dbamboo.home=/home/bamboo/bamboo-agent-home-2/ -Dimagefiles.version=3.3-SNAPSHOT -jar $bambooAgentBin/*installer*.jar 2>&1 | tee -a $(getHomeDir)/bamboo-elastic-agent.out
It seems like the elastic version sets up some kind of tunnel and crashes because a tunnel is already running:
# su -c /opt/bamboo-elastic-agent/bin/bamboo-elastic-agent2 - bamboo &
[1] 14143
Starting Elastic Bamboo Agent...
2014-10-30 08:39:31,804 INFO [main] [S3Sync] Syncing from: bamboo-agent-release-us-e1/5.6-OD-01-0070/cce16404c14f06456c6adc44181746abf8dd1206/ to /opt/bamboo-elastic-agent
2014-10-30 08:39:31,979 INFO [main] [S3Utils] Syncing s3://bamboo-agent-release-us-e1/5.6-OD-01-0070/cce16404c14f06456c6adc44181746abf8dd1206/ to /opt/bamboo-elastic-agent
2014-10-30 08:39:31,979 INFO [main] [S3Utils] Fetching the list of remote objects...
2014-10-30 08:39:33,006 INFO [main] [S3Utils] Found 579 files in s3://bamboo-agent-release-us-e1/5.6-OD-01-0070/cce16404c14f06456c6adc44181746abf8dd1206/
2014-10-30 08:39:33,059 INFO [main] [S3Utils] Found 463 files in /opt/bamboo-elastic-agent
2014-10-30 08:39:33,060 INFO [main] [S3Utils] Generating the list of files to fetch from S3...
2014-10-30 08:39:33,076 INFO [main] [S3Utils] Generating the list of files to remove...
2014-10-30 08:39:33,078 INFO [main] [S3Utils] Removing 0 files from /opt/bamboo-elastic-agent
2014-10-30 08:39:33,079 INFO [main] [S3Utils] Fetching 155 files to /opt/bamboo-elastic-agent
2014-10-30 08:39:39,969 INFO [main] [S3Utils] Fetched 113 MB from S3
2014-10-30 08:39:39,973 INFO [main] [ElasticAgentInstaller] Starting [java, -server, -Xms32m, -Xmx256m, -XX:MaxPermSize=128m, -XX:+HeapDumpOnOutOfMemoryError, -Dimagefiles.version=3.3, -Dbamboo.agent.installDir=/opt/bamboo-elastic-agent, -cp, /opt/bamboo-elastic-agent/boot/annotations-13.0.jar:/opt/bamboo-elastic-agent/boot/gson-2.2.2-atlassian-1.jar:/opt/bamboo-elastic-agent/boot/atlassian-bamboo-api-agent-bootstrap-5.6-OD-01-0070.jar:/opt/bamboo-elastic-agent/boot/commons-io-2.4.jar:/opt/bamboo-elastic-agent/boot/jackson-core-2.1.1.jar:/opt/bamboo-elastic-agent/boot/atlassian-bamboo-agent-elastic-shared-5.6-OD-01-0070.jar:/opt/bamboo-elastic-agent/boot/atlassian-tunnel-0.21.jar:/opt/bamboo-elastic-agent/boot/stax-api-1.0-2.jar:/opt/bamboo-elastic-agent/boot/guava-bridge-11.0.2-atlassian-01.jar:/opt/bamboo-elastic-agent/boot/atlassian-bamboo-agent-elastic-5.6-OD-01-0070.jar:/opt/bamboo-elastic-agent/boot/commons-codec-1.8.jar:/opt/bamboo-elastic-agent/boot/atlassian-util-concurrent-2.4.1.jar:/opt/bamboo-elastic-agent/boot/joda-time-2.3.jar:/opt/bamboo-elastic-agent/boot/log4j-1.2.15.jar:/opt/bamboo-elastic-agent/boot/guava-11.0.2-atlassian-01.jar:/opt/bamboo-elastic-agent/boot/atlassian-bamboo-agent-bootstrap-5.6-OD-01-0070.jar:/opt/bamboo-elastic-agent/boot/commons-lang-2.6.jar:/opt/bamboo-elastic-agent/boot/atlassian-aws-1.0.71.jar:/opt/bamboo-elastic-agent/boot/jackson-databind-2.1.1.jar:/opt/bamboo-elastic-agent/boot/fugue-1.1.jar:/opt/bamboo-elastic-agent/boot/aws-java-sdk-1.7.1.jar:/opt/bamboo-elastic-agent/boot/httpclient-4.2.5.jar:/opt/bamboo-elastic-agent/boot/commons-logging-1.0.4.jar:/opt/bamboo-elastic-agent/boot/jackson-annotations-2.1.1.jar:/opt/bamboo-elastic-agent/boot/bcprov-jdk15on-1.48.jar:/opt/bamboo-elastic-agent/boot/atlassian-bamboo-core-agent-bootstrap-5.6-OD-01-0070.jar:/opt/bamboo-elastic-agent/boot/bcpkix-jdk15on-1.48.jar:/opt/bamboo-elastic-agent/boot/atlassian-annotations-0.4.jar:/opt/bamboo-elastic-agent/boot/jsr305-1.3.9.jar:/opt/bamboo-elastic-agent/boot/httpcore-4.2.5.jar:, com.atlassian.bamboo.agent.elastic.client.ElasticAgentBootstrap]
2014-10-30 08:39:40,119 INFO [main] [ElasticAgentBootstrap] Starting Agent Bootstrap using Java 1.6.0_45 from Sun Microsystems Inc.
2014-10-30 08:39:40,410 INFO [main] [ElasticAgentBootstrap] Using tunnelling. Setting virtual host name to https://xxxxxxx.atlassian.net/builds/agentServer/
2014-10-30 08:39:40,410 INFO [main] [ElasticAgentBootstrap] Using tunnelling for HTTP(S). Registering 'httpt' and 'httpst' protocols.
2014-10-30 08:39:40,416 INFO [main] [ElasticAgentBootstrap] HTTP(S) tunnel: enabled
2014-10-30 08:39:40,416 INFO [main] [ElasticAgentBootstrap] JMS tunnel: enabled
2014-10-30 08:39:40,424 INFO [main] [ElasticAgentBootstrap] Starting tunnel server, waiting for 2 connections.
2014-10-30 08:39:40,425 FATAL [tunnellogger-thread] [TunnelServer] [com.atlassian.tunnel.tunnel.server.TunnelServer] Fatal error in TunnelServer.
java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
at java.net.ServerSocket.bind(ServerSocket.java:328)
at java.net.ServerSocket.<init>(ServerSocket.java:194)
at java.net.ServerSocket.<init>(ServerSocket.java:150)
at javax.net.ssl.SSLServerSocket.<init>(SSLServerSocket.java:84)
at com.sun.net.ssl.internal.ssl.SSLServerSocketImpl.<init>(SSLServerSocketImpl.java:81)
at com.sun.net.ssl.internal.ssl.SSLServerSocketFactoryImpl.createServerSocket(SSLServerSocketFactoryImpl.java:58)
at com.atlassian.tunnel.tunnel.server.TunnelServer.run(TunnelServer.java:54)
at java.lang.Thread.run(Thread.java:662)
Any idea for a workaround?
We (Atlassian Build Engineering) have created a set of plugins to run a number of Docker-based agents in a cluster (ECS) that come online, build a single job, and then exit. It should be able to do what you outlined. We've recently open-sourced the solution.
See https://bitbucket.org/atlassian/per-build-container for more details.
Amazon instances have historically been bad at providing I/O capacity (I am assuming you mean network I/O) per dollar, as they tend to be optimized more for CPU- and memory-intensive workloads. You may find that additional processes on the same node do not help.
If you install your Docker image on a node and take a snapshot to create an AMI after the image is installed, you can launch as many EC2 instances with that AMI as you want. They will have the image preinstalled.
You can also use CloudFormation and/or cloud-init to rebuild or download your image on each EC2 instance without having to worry about manually moving Docker images around.
However, if you do want to run more than one Bamboo agent on a node, you should be able to, as long as you set the bamboo.home parameter differently for each agent instance:
java -Dbamboo.home=/agent1Home -jar atlassian-bamboo-agent-installer-X.X-SNAPSHOT.jar \
http://bamboo-host-server:8085/agentServer/
java -Dbamboo.home=/agent2Home -jar atlassian-bamboo-agent-installer-X.X-SNAPSHOT.jar \
http://bamboo-host-server:8085/agentServer/

Running hadoop job using java org.apache.hadoop.util.RunJar command

I want to submit a job to the jobtracker using java (instead of the hadoop command) so that I can debug a classpath issue.
export HADOOP_CLASSPATH=hbase-util-0.0.1-SNAPSHOT.jar:/etc/hadoop/conf:hbase-util-0.0.1-SNAPSHOT.jar:/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hbase/*:/usr/lib/hadoop/etc/hadoop/mapred-site.xml:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.0.0-mr1-cdh4.0.1.jar:/usr/lib/hbase/hbase-0.92.1-cdh4.0.1-security.jar:/usr/lib/hbase/lib/zookeeper.jar:/usr/lib/hbase/lib:/etc/hbase/conf:/usr/lib/hbase/lib/guava-11.0.2.jar:/usr/lib/hbase/lib/jackson-mapper-asl-1.5.5.jar:/usr/lib/hbase/lib/jackson-core-asl-1.5.5.jar:/usr/lib/hbase:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
java -cp ${HADOOP_CLASSPATH} org.apache.hadoop.util.RunJar hbase-util-0.0.1-SNAPSHOT.jar hbase.util.RowDiffCounter SRM hdfs://dchilcmsnn01:8020/tmp/hadoop/mapred/temp/job1-temp-1491763074 /tmp/hadoop/mapred/temp/job1-temp-1491763075D SOURCE_MANAGEMENT SOURCE_MANAGEMENT
I get this error:
ERROR [main] (UserGroupInformation.java:1235) - PriviledgedActionException as:devuser (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
Adding the following properties does not help. I checked the job configuration page on the jobtracker to get the correct value.
-D mapreduce.framework.name=local
-D mapred.job.tracker=host101:8021
Do I need to pass in the user info as well?
