Hadoop 2.6.0: Basic error "starting MRAppMaster" after installing

I have just started to work with Hadoop 2.
After installing it with a basic configuration, I always failed to run any of the examples. Has anyone seen this problem, and can you please help me?
The error is something like:
Error starting MRAppMaster
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
Here is the log:
2015-01-06 11:56:23,194 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1420510526926_0002_000001
2015-01-06 11:56:23,362 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:299)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1473)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1429)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
... 7 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:39)
... 12 more
2015-01-06 11:56:23,366 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1

Error messages: "Error starting MRAppMaster", "InvocationTargetException", "UnsatisfiedLinkError"
Root cause: failure to execute the native method "anchorNative" in the class "org.apache.hadoop.security.JniBasedUnixGroupsMapping"
Description: the method "anchorNative" calls into the native library "libhadoop.so". The path to this library is specified by these environment variables:
export JAVA_LIBRARY_PATH
export HADOOP_COMMON_LIB_NATIVE_DIR
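For reference, a minimal sketch of how these variables can be set in hadoop-env.sh; the paths are illustrative (not from the original post) and assume the default distribution layout:
# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh -- illustrative paths
export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export JAVA_LIBRARY_PATH="$HADOOP_COMMON_LIB_NATIVE_DIR:$JAVA_LIBRARY_PATH"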
In the Hadoop source code, you can print the Java library path with:
System.err.println(System.getProperty("java.library.path"));
# result
/home/maidinh/hadoop2/build/hadoop-2.6.0-src/hadoop-dist/target/hadoop-2.6.0/lib/native:
/usr/java/packages/lib/amd64:
/usr/lib64:
/lib64:
/lib:
/usr/lib
Different versions of the library "libhadoop.so" found in these locations conflict with each other.
Solution: keep the correct native library path (hadoop-2.6.0/lib/native) and delete "libhadoop.so" from every other directory.
Note: delete all of Hadoop's related native libraries:
rm -r libhadoop*
rm -r libhdfs*
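Before deleting anything, it can help to list every copy first; a hedged shell sketch, with the directories taken from the java.library.path output above (adjust to your own result):
# list every copy of the Hadoop native libraries on the search path
for d in /usr/java/packages/lib/amd64 /usr/lib64 /lib64 /lib /usr/lib; do
    ls "$d"/libhadoop* "$d"/libhdfs* 2>/dev/null
done
# then remove the stray copies, keeping only the ones under hadoop-2.6.0/lib/native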

Related

Phoenix-5.0.0-HBase-2.0 installation problem

As described in the installation guide, I copied the file phoenix-5.0.0-HBase-2.0-server.jar into the HBase library path /opt/hbase/lib/. HBase then crashes at startup. If I remove the jar, HBase starts and runs fine again. What is the problem?
I use hadoop version:
$ hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/hadoop/share/hadoop/common/hadoop-common-3.1.3.jar
HBase version:
hbase(main):001:0> version
2.0.6, rd65cccb5fda039217954a558c65bda423e0d6df3, Wed Aug 14 15:44:48 UTC 2019
Took 0.0003 seconds
Error details in log file /opt/hbase/logs/hbase-master.log:
2020-06-06 14:54:48,454 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.AbstractMethodError: org.apache.phoenix.trace.PhoenixMetricsSink.init(Lorg/apache/commons/configuration/SubsetConfiguration;)V
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl$DefaultMetricsSystemInitializer.init(BaseSourceImpl.java:54)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:116)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:46)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:38)
at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
at org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:35)
at org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:195)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:536)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:477)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3050)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3068)
2020-06-06 14:54:48,456 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3057)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3068)
Caused by: java.lang.AbstractMethodError: org.apache.phoenix.trace.PhoenixMetricsSink.init(Lorg/apache/commons/configuration/SubsetConfiguration;)V
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl$DefaultMetricsSystemInitializer.init(BaseSourceImpl.java:54)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:116)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:46)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:38)
at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
at org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:35)
at org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:195)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:536)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:477)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3050)
... 5 more

Apache Drill 1.6.0 failure in starting embedded Drillbit (windows)

I am unable to start an embedded Drillbit on a Windows machine and I get the following error. I have checked the jars in the 3rd party folder, where jackson-databind-2.7.1.jar is present, yet it still reports a class-not-found exception. Can you help me here?
Error: Failure in starting embedded Drillbit: UNSUPPORTED_OPERATION ERROR: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry.
[Error Id: 4e654256-f63d-434f-8f41-981892a776b5 ] (state=,code=0)
java.sql.SQLException: Failure in starting embedded Drillbit: UNSUPPORTED_OPERATION ERROR: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry.
[Error Id: 4e654256-f63d-434f-8f41-981892a776b5 ]
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:120)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:64)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry.
[Error Id: 4e654256-f63d-434f-8f41-981892a776b5 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
at org.apache.drill.common.config.DrillConfig.getInstance(DrillConfig.java:88)
at org.apache.drill.exec.server.DrillbitContext.<init>(DrillbitContext.java:85)
at org.apache.drill.exec.work.WorkManager.start(WorkManager.java:105)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:110)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:118)
... 18 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.drill.common.config.DrillConfig.getInstance(DrillConfig.java:86)
... 22 more
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
at org.apache.drill.exec.serialization.JacksonSerializer.<init>(JacksonSerializer.java:32)
at org.apache.drill.exec.store.sys.PersistentStoreConfig.newJacksonBuilder(PersistentStoreConfig.java:81)
at org.apache.drill.exec.store.StoragePluginRegistryImpl.<init>(StoragePluginRegistryImpl.java:90)
... 27 more
apache drill 1.6.0
"this isn't your grandfather's sql"
The issue is related to the HADOOP_HOME environment variable.
If it is set, the embedded Drillbit does not start properly.
My HADOOP_HOME was set because I sometimes use Spark or Hadoop MapReduce on this machine.
So, with
set HADOOP_HOME=
and then
sqlline.bat -u "jdbc:drill:zk=local"
The initialization completes and the Drillbit starts.
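If HADOOP_HOME is still needed for Spark or MapReduce work, the workaround can be scoped to a single cmd session instead of removing the variable system-wide; a minimal sketch in Windows cmd syntax:
rem show what HADOOP_HOME currently points to
echo %HADOOP_HOME%
rem clear it for this session only; the system-level setting is untouched
set HADOOP_HOME=
rem start the embedded Drillbit
sqlline.bat -u "jdbc:drill:zk=local"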

Class not found in a MapReduce job

I have a MapReduce job which takes an Avro file as input. I export it, along with all the required libraries (jar files), into a single jar. I have two different clusters: one is the HDInsight simulator and the other is the HDP sandbox. The job works fine on the HDP sandbox, but on the HDInsight simulator it fails because it cannot find the AvroInputFormat class. I tried running the job with the -libjars option, but it didn't help. Here is the error message:
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.avro.mapred.AvroInputFormat not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1927)
at org.apache.hadoop.mapred.JobConf.getInputFormat(JobConf.java:686)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:168)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:409)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.avro.mapred.AvroInputFormat not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1895)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1919)
... 9 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.avro.mapred.AvroInputFormat not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1893)
... 10 more
This looks weird because it runs fine on one cluster! Does anyone know what the problem could be?
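For what it's worth, -libjars only takes effect when the driver parses its arguments through ToolRunner/GenericOptionsParser, and the jar must also be on the client-side classpath. A hedged sketch of the usual invocation; the jar path, driver class, and arguments are illustrative:
# make the Avro jar visible to the client JVM (path and version are illustrative)
export HADOOP_CLASSPATH=/path/to/avro-mapred.jar:$HADOOP_CLASSPATH
# ship the jar to the task JVMs; the driver class must implement org.apache.hadoop.util.Tool
hadoop jar myjob.jar com.example.MyDriver -libjars /path/to/avro-mapred.jar input output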

Installing a Spark Cluster, problems with Hive

I am trying to get a Spark/Shark cluster up but keep running into the same problem.
I have followed the instructions on https://github.com/amplab/shark/wiki/Running-Shark-on-a-Cluster and addressed Hive as stated.
I think that the Shark Driver is picking up another version of Hadoop jars but am unsure why.
Here are the details, any help would be great.
Spark/Shark 0.9.0
Apache Hadoop 2.3.0
Amplabs Hive 0.11
Scala 2.10.3
Java 7
I have everything installed, but I get some deprecation warnings and then an exception:
14/03/14 11:24:47 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/03/14 11:24:47 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
Exception:
Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1072)
at shark.memstore2.TableRecovery$.reloadRdds(TableRecovery.scala:49)
at shark.SharkCliDriver.<init>(SharkCliDriver.scala:275)
at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1139)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:51)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2288)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2299)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1070)
... 4 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1137)
... 9 more
Caused by: java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
I had this same problem, and I think it's caused by incompatible versions of Hadoop/Hive and Spark/Shark.
You need to either:
remove hadoop-core-1.0.x.jar from shark/lib_managed/jars/org.apache.hadoop/hadoop-core/, or,
when building Shark, explicitly set SHARK_HADOOP_VERSION as follows:
cd shark;
SHARK_HADOOP_VERSION=2.0.0-mr1-cdh4.5.0 ./sbt/sbt clean
SHARK_HADOOP_VERSION=2.0.0-mr1-cdh4.5.0 ./sbt/sbt package
The second method solved other issues for me as well. You can also see this topic for more details: https://groups.google.com/forum/#!msg/shark-users/lTNPcxHJiOQ/EqzyByZrzQMJ
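One way to double-check which Hadoop jars Shark actually picked up after the rebuild; a sketch, assuming the lib_managed layout mentioned above:
# a stale hadoop-core 1.0.x jar here would reintroduce the conflict
find shark/lib_managed -name 'hadoop-core*.jar'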

After upgrading Hadoop clusters to Cloudera 4 b1, an invalid "mapreduce.jobtracker.address" configuration error appears

I recently upgraded to Cloudera 4 b1. Before the upgrade, the MapReduce jobs were running fine, but now when I execute any MapReduce program, the following error appears:
Command run:
hadoop jar /usr/lib/hadoop/hadoop-mapreduce-examples-0.23.0-cdh4b1.jar grep *.xml /user/out/ 'dfs'
12/04/10 19:23:15 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "dev-xxx:yyyy"
12/04/10 19:23:15 ERROR security.UserGroupInformation: PriviledgedActionException as:anchauhan (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1185)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1181)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1167)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1180)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1209)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1233)
at com.nextag.mapred.MerchantImport.doTask(MerchantImport.java:221)
at com.nextag.mapred.Main.main(Main.java:9)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
For my custom MapReduce jobs, I changed the library files to the new ones when compiling the jars, but with no success.
We had the same exception, and the reason was that we had the wrong dependencies in the pom files. We were pointing to hadoop-client version 2.0.0-cdh4.0.0, which is for YARN, but we are using MRv1, so we should have been pointing to 2.0.0-mr1-cdh4.0.0. You should not have any YARN jar files in your dependencies.
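A quick way to confirm which hadoop-client your build actually resolves, using the standard Maven dependency plugin (run from the module's directory):
# look for 2.0.0-mr1-cdh4.0.0 in the output, not 2.0.0-cdh4.0.0
mvn dependency:tree -Dincludes=org.apache.hadoop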