sqoop import from oracle got stuck at progress 0.0% for hours - sqoop

When importing from Oracle with Sqoop, we saw the following logs on YARN:
2022-05-10 04:52:13,351 INFO [IPC Server handler 12 on 44827] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1646802944907_44646_m_000000_0 is : 0.0
2022-05-10 04:52:43,421 INFO [IPC Server handler 20 on 44827] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1646802944907_44646_m_000000_0 is : 0.0
2022-05-10 04:53:13,487 INFO [IPC Server handler 18 on 44827] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1646802944907_44646_m_000000_0 is : 0.0
2022-05-10 04:53:43,556 INFO [IPC Server handler 22 on 44827] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1646802944907_44646_m_000000_0 is : 0.0
....
The job got stuck at 0.0 for nearly one hour, and then ended with the error: ORA-01555: snapshot too old.
But this does not happen every time.
Our Sqoop version is 1.4.7-cdh6.3.1. Does anyone know what happened?

Related

Hadoop job fails, Resource Manager doesn't recognize AttemptID

I'm trying to aggregate some data in an Oozie workflow. However, the aggregation step fails.
I found two points of interest in the logs. The first is an error(?) that seems to occur repeatedly:
After a container finishes, it gets killed and exits with the non-zero exit code 143.
It finishes:
2015-05-04 15:35:12,013 INFO [IPC Server handler 7 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000048_0 is : 0.7231312
2015-05-04 15:35:12,015 INFO [IPC Server handler 19 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000048_0 is : 1.0
And then, when it gets killed by the ApplicationMaster:
2015-05-04 15:35:13,831 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1430730089455_0009_m_000048_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
The second point of interest is the actual error that crashes the job completely. This happens in the reduce phase; I'm not sure if the two are related, though:
2015-05-04 15:35:28,767 INFO [IPC Server handler 20 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000051_0 is : 0.31450257
2015-05-04 15:35:29,930 INFO [IPC Server handler 10 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000052_0 is : 0.19511986
2015-05-04 15:35:31,549 INFO [IPC Server handler 1 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000050_0 is : 0.5324404
2015-05-04 15:35:31,771 INFO [IPC Server handler 28 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000051_0 is : 0.31450257
2015-05-04 15:35:31,890 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Error communicating with RM: Resource Manager doesn't recognize AttemptId: application_1430730089455_0009
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Resource Manager doesn't recognize AttemptId: application_1430730089455_0009
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:675)
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:244)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$1.run(RMCommunicator.java:282)
at java.lang.Thread.run(Thread.java:695)
Caused by: org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: Application attempt appattempt_1430730089455_0009_000001 doesn't exist in ApplicationMasterService cache.
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:436)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:394)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy36.allocate(Unknown Source)
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:188)
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:667)
... 3 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException): Application attempt appattempt_1430730089455_0009_000001 doesn't exist in ApplicationMasterService cache.
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:436)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:394)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy35.allocate(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
... 11 more
After that the oozie:launcher job and the job that got the error just sit there indefinitely with STATE:accepted, FINALSTATUS:undefined and TRACKING UI:unassigned.
Does anyone know what is causing this error and how I can fix it?
The same workflow worked before, and I couldn't say that I changed anything in between...
Just in case somebody else stumbles upon this error: it seemed to be caused by Hadoop running out of disk space... A pretty cryptic error for something as simple as that. I thought ~90GB would be enough to work on my 30GB dataset; I was wrong.
I was also facing a similar issue. In my case the task completed successfully, but it took a very long time to finish, and there were very large logs on the JobTracker; most of the log entries were progress reports such as the following:
2015-05-04 15:35:12,013 INFO [IPC Server handler 7 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000048_0 is : 0.7231312
2015-05-04 15:35:12,015 INFO [IPC Server handler 19 on 49697] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1430730089455_0009_m_000048_0 is : 1.0
While debugging my code, I found that the distributed cache was being read inside the map function of the Mapper class. After moving the logic for reading the dist-cache data from the map function to the configure method of the Mapper class, my issue was resolved.
We should use the configure method for this kind of operation: work that is mainly about configuration, or code that needs to run only once before the map function is called.
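A minimal sketch of that pattern, using the old mapred API the answer refers to (in the newer mapreduce API the equivalent hook is setup()); the LookupMapper name and the tab-separated cache-file layout are illustrative assumptions, not details from the original post:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class LookupMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {

    // Populated once per task attempt, not once per record.
    private final Map<String, String> lookup = new HashMap<String, String>();

    @Override
    public void configure(JobConf job) {
        try {
            // Read the distributed-cache file a single time, before map() is ever called.
            Path[] cacheFiles = DistributedCache.getLocalCacheFiles(job);
            if (cacheFiles != null && cacheFiles.length > 0) {
                BufferedReader reader = new BufferedReader(new FileReader(cacheFiles[0].toString()));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] parts = line.split("\t", 2);
                        if (parts.length == 2) {
                            lookup.put(parts[0], parts[1]);
                        }
                    }
                } finally {
                    reader.close();
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("Failed to load distributed cache file", e);
        }
    }

    @Override
    public void map(LongWritable key, Text value, OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // map() only consults the in-memory map; it never re-reads the cache file.
        String match = lookup.get(value.toString());
        output.collect(value, new Text(match != null ? match : "UNKNOWN"));
    }
}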

Debugging MapReduce on a Hadoop cluster

I have a 2-node Hadoop cluster and I suspect that I am getting a deadlock.
Is there a way for me to debug this and determine the root cause? Are deadlocks even possible in the Hadoop world?
The version of Hadoop is: 2.2.0-gphd-3.0.1.0
EDIT:
I am not getting any errors; the jobs are just hanging and not completing. I saw this in the logs (repeated over and over again):
2015-03-16 17:00:25,519 INFO [IPC Server handler 0 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:28,522 INFO [IPC Server handler 1 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:31,525 INFO [IPC Server handler 2 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:34,528 INFO [IPC Server handler 3 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:37,530 INFO [IPC Server handler 4 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:40,533 INFO [IPC Server handler 5 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:43,536 INFO [IPC Server handler 6 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:46,538 INFO [IPC Server handler 7 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:49,541 INFO [IPC Server handler 8 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:52,600 INFO [IPC Server handler 9 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:52,601 INFO [IPC Server handler 9 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423108525336_2110_m_000000_0 is : 1.0
2015-03-16 17:00:55,607 INFO [IPC Server handler 10 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:00:58,609 INFO [IPC Server handler 11 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:01,612 INFO [IPC Server handler 12 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:04,614 INFO [IPC Server handler 13 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:07,617 INFO [IPC Server handler 14 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:10,620 INFO [IPC Server handler 15 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:13,622 INFO [IPC Server handler 16 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:16,625 INFO [IPC Server handler 17 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:19,628 INFO [IPC Server handler 18 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:22,684 INFO [IPC Server handler 19 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from attempt_1423108525336_2110_m_000000_0
2015-03-16 17:01:22,684 INFO [IPC Server handler 19 on 18043] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423108525336_2110_m_000000_0 is : 1.0

Oozie workflow hive action stuck in RUNNING

I am running Hadoop 2.4.0, Oozie 4.0.0, Hive 0.13.0 from Hortonworks distro.
I have multiple Oozie coordinator jobs that can potentially launch workflows all around the same time. The coordinator jobs each watch different directories and when the _SUCCESS files show up in those directories, the workflow would be launched.
The workflow runs a Hive action that reads from external directory and copy stuff.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
DROP TABLE IF EXISTS ${INPUT_TABLE};
CREATE external TABLE IF NOT EXISTS ${INPUT_TABLE} (
id bigint,
data string,
creationdate timestamp,
datelastupdated timestamp)
LOCATION '${INPUT_LOCATION}';
-- Read from external table and insert into a partitioned Hive table
FROM ${INPUT_TABLE} ent
INSERT OVERWRITE TABLE mytable PARTITION(data)
SELECT ent.id, ent.data, ent.creationdate, ent.datelastupdated;
When I run only one coordinator to launch one workflow, the workflow and Hive actions complete successfully without any problems.
When multiple workflows are launched around the same time, the Hive action stays in RUNNING for a long time.
If I look at the job syslogs, I see this:
2015-02-18 17:18:26,048 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1423085109915_0223_m_000000 Task Transitioned from SCHEDULED to RUNNING
2015-02-18 17:18:26,586 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1423085109915_0223: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:32768, vCores:-3> knownNMs=1
2015-02-18 17:18:27,677 INFO [Socket Reader #1 for port 38704] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1423085109915_0223 (auth:SIMPLE)
2015-02-18 17:18:27,696 INFO [IPC Server handler 0 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1423085109915_0223_m_000002 asked for a task
2015-02-18 17:18:27,697 INFO [IPC Server handler 0 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1423085109915_0223_m_000002 given task: attempt_1423085109915_0223_m_000000_0
2015-02-18 17:18:34,951 INFO [IPC Server handler 2 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:19:05,060 INFO [IPC Server handler 11 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:19:35,161 INFO [IPC Server handler 28 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:20:05,262 INFO [IPC Server handler 2 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:20:35,358 INFO [IPC Server handler 11 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:21:02,452 INFO [IPC Server handler 23 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:21:32,545 INFO [IPC Server handler 1 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:22:02,668 INFO [IPC Server handler 12 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
It just kept printing the "Progress of TaskAttempt" over and over.
Our yarn-site.xml is configured to use this:
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
Should I be using a different scheduler instead?
At this point I am not sure if the issue is in Oozie or Hive.
It turns out this is the same issue as the HEART BEAT issue listed here:
Error on running multiple Workflow in OOZIE-4.1.0
After changing the scheduler to the FairScheduler as noted in the above post, I was able to run multiple workflows.
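For reference, this is the kind of yarn-site.xml change that post describes, replacing the CapacityScheduler entry shown above; the class name below is the stock Hadoop FairScheduler, but verify it against your distribution:
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>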

Storm worker not starting

I am trying to start a Storm topology, but the Storm worker refuses to start. When I run the java command that invokes the worker process, I get the following error:
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread "main"
I am not able to find out what is causing this. Has anyone faced a similar issue?
Edit:
When I run the worker process with the -V flag, I get the following error:
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=3.5.0-23-generic
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=storm
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/storm
588 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/home/storm/storm-0.9.0.1
797 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
PS: When I run the same topology on a local cluster it works fine; it only fails to start when I deploy in cluster mode.
Just found out the issue. The jar I created to upload to the Storm cluster was kept in the Storm base directory. This somehow created a conflict that was not shown in the log file; in fact, the log file never got created.
Make sure no external jars are present in the base Storm folder from which you start Storm. A really tricky error; I had no idea why it happened until I just worked around it.
I hope the Storm developers add this to the logs so that users facing this issue can pinpoint exactly why it is happening.

How to execute MapReduce programs in Oozie with Hadoop 2.2

I am using Hadoop 2.2.0 and Oozie 4.0.0 on Ubuntu. I am not able to execute MapReduce programs in Oozie.
I am using the ResourceManager port (8032) as the JobTracker port in Oozie.
When scheduled, the job goes to the running state in Oozie and is also running in YARN, but after some time I get an error like the one below in the Hadoop logs, while the Oozie logs still show it as running.
Error:
2014-05-30 10:38:14,322 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1401425739447_0003: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:1024, vCores:-1> knownNMs=1
2014-05-30 10:38:17,296 INFO [Socket Reader #1 for port 47412] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1401425739447_0003 (auth:SIMPLE)
2014-05-30 10:38:17,316 INFO [IPC Server handler 0 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1401425739447_0003_m_000002 asked for a task
2014-05-30 10:38:17,316 INFO [IPC Server handler 0 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1401425739447_0003_m_000002 given task: attempt_1401425739447_0003_m_000000_0
2014-05-30 10:38:22,524 INFO [IPC Server handler 1 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1401425739447_0003_m_000000_0
2014-05-30 10:38:25,996 INFO [IPC Server handler 2 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from attempt_1401425739447_0003_m_000000_0
2014-05-30 10:38:26,003 INFO [IPC Server handler 2 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1401425739447_0003_m_000000_0 is : 1.0
2014-05-30 10:38:29,066 INFO [IPC Server handler 3 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1401425739447_0003_m_000000_0
2014-05-30 10:38:32,071 INFO [IPC Server handler 4 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1401425739447_0003_m_000000_0
2014-05-30 10:38:35,075 INFO [IPC Server handler 5 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1401425739447_0003_m_000000_0
2014-05-30 10:38:38,079 INFO [IPC Server handler 6 on 47412] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1401425739447_0003_m_000000_0
This error continues...
I am able to run the Java example MapReduce program (shipped with Oozie) in Oozie.
If I try to run Pig, Hive, Sqoop, or my own Java MapReduce program, I get the above error and I don't know why. I have already given my Hadoop configuration path to Oozie, and I have started my JobHistory server.
Help me please...
The NodeManager and ResourceManager were running on the same system, so I was getting this error. I started a NodeManager on a second system and my problem was solved.
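For anyone who cannot add a second machine: the Oozie launcher is itself a one-map MapReduce job, so a single NodeManager has to have room for the launcher's container, the child job's ApplicationMaster, and its tasks at the same time. It may be worth checking the resources that NodeManager advertises in yarn-site.xml; the values below are only placeholders, not settings from the original answer:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>4</value>
</property>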
