I'm trying to use a custom Spark serializer, configured as:
conf.set("spark.serializer", CustomSparkSerializer.class.getCanonicalName());
But when I submit the application to Spark, I hit a ClassNotFoundException while the executor environment is being created, for example:
16/04/01 18:41:11 INFO util.Utils: Successfully started service 'sparkExecutor' on port 52153.
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1643)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:149)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:250)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.ClassNotFoundException: example.CustomSparkSerializer
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.spark.util.Utils$.classForName(Utils.scala:173)
at org.apache.spark.SparkEnv$.instantiateClass$1(SparkEnv.scala:266)
at org.apache.spark.SparkEnv$.instantiateClassFromConf$1(SparkEnv.scala:287)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:290)
at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:218)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:183)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
In local standalone mode this can be solved with "spark.executor.extraClassPath=path/to/jar", but on a cluster with several nodes it does not help.
I have tried every approach known to me, such as --jars, the executor (and even driver) extra classpath and library path, and also sc.addJar... None of it helped.
I found that Spark uses a specific class loader in org.apache.spark.util.Utils$.classForName(Utils.scala:173) to load the serializer class, but I really don't understand how to make the custom serializer loadable.
The submission flow is also more complex: Oozie -> SparkSubmit -> YARN client -> Spark application.
The question is: does anybody know how to use a custom Spark serializer and how to resolve the ClassNotFoundException with it?
Thanks in advance!
The reason this happens is that I used spark.executor.extraClassPath with a path under /home/some_user. It seems Spark cannot load any class from that path because the Spark process runs as a different user; once I put the JAR under a path like /usr/lib/, everything worked fine.
So I had confused myself about which users own the Hadoop/Oozie/Spark processes, but I was not expecting such behavior from ClassLoaders =)
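For completeness, the setup that finally worked looked roughly like the sketch below (the application name and JAR path are placeholders for my real ones); the key point is that spark.executor.extraClassPath has to point at a location the executor process owner can read, not a directory under /home/<my_user>:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class CustomSerializerApp {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("custom-serializer-app")
                    // the custom serializer class shipped in my JAR
                    .set("spark.serializer", "example.CustomSparkSerializer")
                    // placeholder path, but it must be readable by the user that
                    // actually runs the executors (e.g. the YARN/Spark user)
                    .set("spark.executor.extraClassPath",
                            "/usr/lib/custom-serializer/custom-serializer.jar");

            JavaSparkContext sc = new JavaSparkContext(conf);
            // optionally also ship the JAR with the job itself
            sc.addJar("/usr/lib/custom-serializer/custom-serializer.jar");

            // ... job logic ...

            sc.stop();
        }
    }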
Thank you for the help!
Related
I got pre-built Spark 1.4.1 and I'm running HDP 2.6. When I try to run spark-shell, it gives me the following error message.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:111)
at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:111)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:111)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:97)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:107)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
What is the issue?
ClassNotFoundException occurs when the class loader cannot find the
required class on the classpath. So, basically, you should check your
classpath and add the missing class (or the JAR that contains it) to it.
Check whether hadoop-common-0.21.0.jar is added to your classpath.
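As a quick check, something along these lines (a rough sketch, run with the same classpath spark-shell uses) will tell you whether the class resolves at all and which JAR it comes from:

    public class HadoopClasspathCheck {
        public static void main(String[] args) {
            try {
                Class<?> c = Class.forName("org.apache.hadoop.fs.FSDataInputStream");
                // getCodeSource() can be null for bootstrap classes, but for
                // hadoop-common it normally points at the JAR providing the class
                System.out.println("Loaded from: "
                        + c.getProtectionDomain().getCodeSource().getLocation());
            } catch (ClassNotFoundException e) {
                System.out.println("hadoop-common is not on the classpath: " + e);
            }
        }
    }

If it reports the class as missing, the Hadoop JARs are simply not on the classpath Spark is started with.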
Is it possible that your Hadoop home is not set, as in here?
Cannot find hadoop installation: $HADOOP_HOME must be set or hadoop must be in the path
I'm using a shared instance of Fiware Cosmos (meaning I don't have root privileges). Until today I have successfully accessed and managed tables in Hive, both remotely using JDBC and via the Hive CLI.
But now I'm getting this error when starting Hive CLI:
log4j:ERROR Could not instantiate class [org.apache.hadoop.hive.shims.HiveEventCounter].
java.lang.RuntimeException: Could not load shims in class org.apache.hadoop.log.metrics.EventCounter
at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:123)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:115)
at org.apache.hadoop.hive.shims.ShimLoader.getEventCounter(ShimLoader.java:98)
at org.apache.hadoop.hive.shims.HiveEventCounter.<init>(HiveEventCounter.java:34)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:357)
at java.lang.Class.newInstance(Class.java:310)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:330)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:121)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:664)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:647)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:544)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:440)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:476)
at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:354)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:127)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:77)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:58)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:641)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:171)
at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:120)
... 27 more
log4j:ERROR Could not instantiate appender named "EventCounter".
Logging initialized using configuration in jar:file:/usr/local/apache-hive-0.13.0-bin/lib/hive-common-0.13.0.jar!/hive-log4j.properties
I can, however, run select and create statements in the Hive CLI.
If I then try to access Hive remotely, I get this:
Connecting to jdbc:hive://x.x.x.x:10000/default?user=user&password=XXXXXXXXXX
Could not establish connection: java.net.ConnectException: Connection refused
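A minimal sketch of that kind of JDBC client (host, user and password are placeholders, and this is just an illustration, not my exact code) looks like:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcSketch {
        public static void main(String[] args) throws Exception {
            // HiveServer1 driver class, matching the jdbc:hive:// URL above
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive://x.x.x.x:10000/default", "user", "XXXXXXXXXX");
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SHOW TABLES");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            } finally {
                conn.close();
            }
        }
    }

The "Connection refused" happens before any query runs, i.e. nothing is accepting connections on port 10000 at that moment.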
I didn't make any changes to code or commands before the errors appeared, and after googling around I haven't found any working solution.
If anyone can guide me to where the problem is, or how to find it, or even better how to solve it, I'd be grateful.
Thanks in advance!
HiveServer2 (the Hive JDBC service) is a very unstable piece of software. In our Prod cluster we have a CRON job to restart each instance every day, and even then it sometimes blows OutOfMemory errors and just hangs, saying Connection refused like you show. Open a ticket with your Hadoop admin so that he/she restarts the damn service.
On the other hand, the org.apache.hadoop.log.metrics.EventCounter message smells like someone changed a shared config somewhere (or upgraded some JARs) and now Hive believes it is running on a very, very old version of Hadoop.
See e.g. the comments in HIVE-4133 or that MapR support post.
The cause of these issues was the Hive upgrades in Cosmos. A more thorough explanation and solution is found here:
My Hive client stopped working with Cosmos instance
When I run Hive from the command prompt, it works absolutely fine. But when I try to start the Hive server with the "hive --service hiveserver" command, I get the following exception.
Starting Hive Thrift Server
Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hive.service.HiveServer
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I then also tried the command "hive --service hiveserver2"; still no luck.
Can anybody please suggest a solution to this problem?
Maybe another process (another hiveserver) is already listening on port 10000.
You can check with:
netstat -ntulp | grep ':10000'
and if a process is found, kill it. Otherwise, start the server on another port.
By the way, which version are you using?
This error occurred for me when hive-service-*.jar could not be found on the Hadoop classpath. Just copy hive-service-*.jar to your Hadoop lib folder, or export the classpath in hadoop-env.sh. I have described how to add the classpath below.
Add this line in hadoop-env.sh:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/hive/lib/hive-*.jar
I have used /usr/local/hive as the Hive path since I have Hive installed at that location. Change it to point to your Hive installation.
We had an Oozie workflow with simple "create" and "alter" statements in a Hive action, with the "create" statement using the "RCFILE" file format.
The problem we are facing is that this Hive action sometimes executes successfully and sometimes fails... We haven't been able to fix this.
It throws a "NoSuchMethodError" exception related to "serde".
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], main() threw exception, org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/typeinfo/TypeInfo;
java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/typeinfo/TypeInfo;
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerNumericType(FunctionRegistry.java:630)
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:636)
at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:208)
at org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:78)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:645)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:318)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:279)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:38)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Can someone help me fix this?
I had the same problem. It was in a different context, but I'm sure the underlying issue is the same. The return type of the org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(String) method was narrowed from TypeInfo in prior versions of Hive to PrimitiveTypeInfo in Hive 0.13, which broke Java binary compatibility.
It seems that org.apache.hadoop.hive.ql.exec.FunctionRegistry was compiled against a pre-0.13 hive-serde-x.x.x.jar, but the execution classpath includes a newer hive-serde-0.13.x.jar.
The solution was to make sure that hive-serde-n.n.n.jar and hive-exec-n.n.n.jar (or wherever org.apache.hadoop.hive.ql.exec.FunctionRegistry is coming from in your case) have consistent versions on the classpath.
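As a sanity check, a small program along these lines (a rough sketch; the class names are the real Hive ones, the rest is illustrative) prints which JARs the two classes are actually loaded from, so you can compare versions:

    import org.apache.hadoop.hive.ql.exec.FunctionRegistry;
    import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;

    public class HiveJarVersionCheck {
        public static void main(String[] args) {
            // If FunctionRegistry comes from a pre-0.13 hive-exec JAR while
            // TypeInfoFactory comes from hive-serde 0.13+, you get exactly the
            // NoSuchMethodError above: 0.13 narrowed the return type of
            // getPrimitiveTypeInfo(String) from TypeInfo to PrimitiveTypeInfo,
            // and the JVM links call sites by the full method descriptor,
            // return type included.
            System.out.println("hive-exec:  " + FunctionRegistry.class
                    .getProtectionDomain().getCodeSource().getLocation());
            System.out.println("hive-serde: " + TypeInfoFactory.class
                    .getProtectionDomain().getCodeSource().getLocation());
        }
    }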
I am running a Cascading (actually Scalding) Hadoop job that uses the DistributedCache for dependent JARs.
The first time it works fine (meaning the classpath is set up correctly), but then it starts failing with a ClassNotFoundException:
java.io.IOException: Split class cascading.tap.hadoop.io.MultiInputSplit not found
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:387)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:412)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: cascading.tap.hadoop.io.MultiInputSplit
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:385)
...
Did anybody else have success with Cascading and JARs in the DistributedCache?
This message seems to imply that Cascading has some internal handling of the distributed cache jars. Any light you can shed on this?
Edit: I am using Cascading 2.1.6 on Hadoop 1.0.3
Which version of Hadoop are you using? There are some problems with the distributed cache in 0.20.2. Can you try switching to a newer version?
Chris K Wensel, the author of Cascading, responded on the mailing list that Cascading does not do anything with the DistributedCache.
I looked further and it was a problem in my code -- I did not add these files to the DistributedCache properly.
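For anyone hitting the same thing, a rough sketch of what adding a dependency JAR to the DistributedCache properly looks like with the old mapred API (the HDFS path is a placeholder; the JAR must already be on HDFS):

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheJarSetup {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(CacheJarSetup.class);
            // addFileToClassPath both registers the JAR in the distributed cache
            // and puts it on the classpath of the task JVMs
            DistributedCache.addFileToClassPath(
                    new Path("/user/someuser/lib/cascading-core-2.1.6.jar"), conf);
            // ... configure and submit the job as usual ...
        }
    }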