I followed the instructions from several great posts on dealing with this error when compiling my JSP, and it seemed to go away for a while, but now it's back with a vengeance :(
First off, here are the specs:
Spring 3.1.0 release
Using WebLogic Server 10.3.5
Adjusted the STS.ini file to read as follows:
-Dosgi.requiredJavaVersion=1.5
-Xmn28m
-Xms120m
-Xmx2048m
-Xss1m
-XX:PermSize=256m
-XX:MaxPermSize=512m
and here is the error I am receiving when trying to view my page:
java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:630)
at java.lang.ClassLoader.defineClass(ClassLoader.java:614)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:305)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:246)
at com.bea.core.repackaged.jdt.internal.compiler.ClassFile.<init>(ClassFile.java:256)
at com.bea.core.repackaged.jdt.internal.compiler.ClassFilePool.acquire(ClassFilePool.java:34)
at com.bea.core.repackaged.jdt.internal.compiler.ClassFile.getNewInstance(ClassFile.java:229)
at com.bea.core.repackaged.jdt.internal.compiler.ast.TypeDeclaration.generateCode(TypeDeclaration.java:512)
at com.bea.core.repackaged.jdt.internal.compiler.ast.TypeDeclaration.generateCode(TypeDeclaration.java:611)
at com.bea.core.repackaged.jdt.internal.compiler.ast.CompilationUnitDeclaration.generateCode(CompilationUnitDeclaration.java:358)
at com.bea.core.repackaged.jdt.internal.compiler.Compiler.process(Compiler.java:770)
at com.bea.core.repackaged.jdt.internal.compiler.Compiler.compile(Compiler.java:464)
at weblogic.jsp.internal.java.JDTJavaCompiler.generateByteCode(JDTJavaCompiler.java:104)
at weblogic.jsp.internal.java.JavaSourceFile._codeGen(JavaSourceFile.java:211)
at weblogic.jsp.internal.java.JavaSourceFile.codeGen(JavaSourceFile.java:201)
at weblogic.jsp.internal.ProxySourceFile.compileGeneratedFiles(ProxySourceFile.java:310)
at weblogic.jsp.internal.ProxySourceFile.codeGen(ProxySourceFile.java:248)
at weblogic.jsp.internal.SourceFile.codeGen(SourceFile.java:327)
at weblogic.jsp.internal.client.ClientUtilsImpl$CodeGenJob.run(ClientUtilsImpl.java:599)
at weblogic.jsp.internal.client.Job.performJob(Job.java:83)
at weblogic.jsp.internal.client.ThreadPool$WorkerThread.run(ThreadPool.java:217)
The error went away a week ago after I adjusted the default PermSize (I don't remember what it was set to)... is there anything else I should be looking for here?
As a follow-on to George D's answer: for WebLogic, any changes to JVM args should go in $DOMAIN_HOME/bin/setDomainEnv.sh (that's the Linux path; on Windows it's the setDomainEnv.cmd equivalent). Note that the STS.ini settings above only size the STS/Eclipse JVM, not the WebLogic server JVM, which is why the error keeps coming back.
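For illustration, here is a minimal sketch of that change, assuming your setDomainEnv.sh honors USER_MEM_ARGS (most 10.3.x scripts do, but verify against your copy; the sizes below are examples, not recommendations):

# in $DOMAIN_HOME/bin/setDomainEnv.sh, before MEM_ARGS is computed
USER_MEM_ARGS="-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
export USER_MEM_ARGS

Restart the server afterwards and confirm the flags took effect, e.g. with ps -ef | grep MaxPermSize.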
I'm using a shared instance of Fiware Cosmos (meaning I don't have root privileges). Until today I have successfully accessed and managed tables in Hive, both remotely using JDBC and through the Hive CLI.
But now I'm getting this error when starting Hive CLI:
log4j:ERROR Could not instantiate class [org.apache.hadoop.hive.shims.HiveEventCounter].
java.lang.RuntimeException: Could not load shims in class org.apache.hadoop.log.metrics.EventCounter
at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:123)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:115)
at org.apache.hadoop.hive.shims.ShimLoader.getEventCounter(ShimLoader.java:98)
at org.apache.hadoop.hive.shims.HiveEventCounter.<init>(HiveEventCounter.java:34)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:357)
at java.lang.Class.newInstance(Class.java:310)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:330)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:121)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:664)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:647)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:544)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:440)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:476)
at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:354)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:127)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:77)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:58)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:641)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:171)
at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:120)
... 27 more
log4j:ERROR Could not instantiate appender named "EventCounter".
Logging initialized using configuration in jar:file:/usr/local/apache-hive-0.13.0-bin/lib/hive-common-0.13.0.jar!/hive-log4j.properties
I can, however, perform SELECT and CREATE statements in the Hive CLI.
If I then try to access Hive remotely, I get this:
Connecting to jdbc:hive://x.x.x.x:10000/default?user=user&password=XXXXXXXXXX
Could not establish connection: java.net.ConnectException: Connection refused
I didn't change any code or commands before the errors appeared, and after googling around I haven't found any working solutions.
If anyone can guide me to where the problem is, or how to find it, or even better how to solve it, I'd be grateful.
Thanks in advance!
HiveServer2 (the Hive JDBC service) is a very unstable piece of software. In our prod cluster we have a cron job to restart each instance every day, and even then it sometimes throws OutOfMemory errors and then just hangs, saying Connection refused exactly as you show. Open a ticket with your Hadoop admin so that they restart the service.
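Before filing that ticket, you can confirm from your side that nothing is listening on the HiveServer2 port (host and port below are taken from your connection string; neither check needs root):

# from the client: is the port reachable at all?
nc -vz x.x.x.x 10000

# on the Hive node itself:
ps -ef | grep -i hiveserver
netstat -tln | grep 10000

If the process is gone or the port is closed, the service is the problem, not your client.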
On the other hand, the org.apache.hadoop.log.metrics.EventCounter message smells like someone changed a shared config somewhere (or upgraded some JARs), and now Hive believes it is running on a very, very old version of Hadoop; see, for example, the comments in HIVE-4133 or that MapR support post.
The cause of these issues was a Hive upgrade in Cosmos. A more thorough explanation and solution can be found here:
My Hive client stopped working with Cosmos instance
I am testing Tomcat 7, showing data from the example table "malaga_plagues". I have made some changes, but when I try to test it again I get the following error while creating the .war file:
[root@host-192-168-192-78 AMS_Widhoc]# ../../../apache-maven-3.3.3/bin/mvn package
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/maven/cli/MavenCli : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromSelf(ClassRealm.java:401)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:42)
at org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:271)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:254)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:239)
at org.codehaus.plexus.classworlds.launcher.Launcher.getMainClass(Launcher.java:144)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:266)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
[root@host-192-168-192-78 AMS_Widhoc]#
One month ago this was working fine, but now that I have come back to the example it does not work. I reinstalled Orion CB in my VM, but I didn't change anything in Tomcat.
Could you help me? Thank you.
Maven 3.3.3 is only compatible with Java 7+, and it seems that you are trying to run it with Java 6 (class file version 51.0 is the Java 7 format). Please check your JAVA_HOME environment variable; it should point to a Java 7 JDK.
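A quick way to check and fix this from the shell, assuming a Java 7 JDK is already installed on the box (the JDK path below is a placeholder; adjust it to your installation):

java -version
# if this reports 1.6.x, point JAVA_HOME at a Java 7 JDK:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0
export PATH=$JAVA_HOME/bin:$PATH
../../../apache-maven-3.3.3/bin/mvn -version
# the output should now include "Java version: 1.7.x"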
We had an Oozie workflow with simple CREATE and ALTER statements in a Hive action, the CREATE statement using the RCFILE file format.
The challenge we are facing is that this Hive action sometimes executes successfully and sometimes fails, and we haven't been able to fix this.
It throws a NoSuchMethodError relating to the SerDe classes:
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], main() threw exception, org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/typeinfo/TypeInfo;
java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/typeinfo/TypeInfo;
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerNumericType(FunctionRegistry.java:630)
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:636)
at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:208)
at org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:78)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:645)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:318)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:279)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:38)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Can someone help me fix this?
I had the same problem. It was in a different context, but I'm sure the underlying issue is the same. The return type of the org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(String) method was narrowed from TypeInfo in versions of Hive prior to 0.13 down to PrimitiveTypeInfo in Hive 0.13, which broke Java binary compatibility.
It seems that org.apache.hadoop.hive.ql.exec.FunctionRegistry was compiled against a pre-0.13 hive-serde-x.x.x.jar, but the execution classpath includes the class from a newer hive-serde-0.13.x.jar.
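You can confirm which signature a given jar actually carries with javap (the jar name below is a placeholder for whatever is on your classpath):

javap -classpath hive-serde-0.13.0.jar org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory | grep getPrimitiveTypeInfo
# pre-0.13 jars declare a return type of TypeInfo;
# 0.13+ jars declare PrimitiveTypeInfo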
The solution was to make sure that hive-serde-n.n.n.jar and hive-exec-n.n.n.jar (or wherever org.apache.hadoop.hive.ql.exec.FunctionRegistry is coming from in your case) have consistent versions on the classpath.
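In an Oozie setup, a rough way to audit this, assuming default sharelib and client locations (both paths are illustrative and may differ on your cluster):

hadoop fs -ls /user/oozie/share/lib/hive | grep -E 'hive-(serde|exec)'
find /usr/lib/hive -name 'hive-serde-*.jar' -o -name 'hive-exec-*.jar'
# every hive-serde and hive-exec jar that can reach the job's classpath
# should carry the same version number; remove or replace any strays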
I'm trying to build a load test for my queues, which are WebLogic queues, but I can't figure out which WebLogic libraries I need for this. At least, that's what I think my problem is (because I get ClassNotFoundExceptions). I've already copied
weblogic.jar
wlclient.jar
wljmsclient.jar
into my JMeter/lib directory, but I still get a NoClassDefFoundError like:
Response message: java.lang.NoClassDefFoundError: weblogic/utils/collections/ConcurrentHashMap
Here is also the full stack trace:
2014/09/16 19:17:19 ERROR - jmeter.protocol.jms.sampler.JMSSampler: weblogic/utils/collections/ConcurrentHashMap java.lang.NoClassDefFoundError:
weblogic/utils/collections/ConcurrentHashMap
at weblogic.jndi.spi.EnvironmentManager.<clinit>(EnvironmentManager.java:19)
at weblogic.jndi.Environment.getContext(Environment.java:307)
at weblogic.jndi.Environment.getContext(Environment.java:277)
at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
at javax.naming.InitialContext.init(InitialContext.java:223)
at javax.naming.InitialContext.<init>(InitialContext.java:197)
at org.apache.jmeter.protocol.jms.sampler.JMSSampler.getInitialContext(JMSSampler.java:424)
at org.apache.jmeter.protocol.jms.sampler.JMSSampler.threadStarted(JMSSampler.java:319)
at org.apache.jmeter.threads.JMeterThread$ThreadListenerTraverser.addNode(JMeterThread.java:597)
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:961)
at org.apache.jorphan.collections.HashTree.traverse(HashTree.java:946)
at org.apache.jmeter.threads.JMeterThread.threadStarted(JMeterThread.java:566)
at org.apache.jmeter.threads.JMeterThread.initRun(JMeterThread.java:554)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:253)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.ClassNotFoundException: weblogic.utils.collections.ConcurrentHashMap
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 17 more
2014/09/16 19:17:19 WARN - jmeter.protocol.jms.sampler.JMSSampler: Session may not be null while creating message java.lang.IllegalStateException: Session may not be null while creating message
at org.apache.jmeter.protocol.jms.sampler.JMSSampler.createMessage(JMSSampler.java:179)
at org.apache.jmeter.protocol.jms.sampler.JMSSampler.sample(JMSSampler.java:140)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:429)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:257)
at java.lang.Thread.run(Thread.java:662)
Here also is my JMeter configuration (attached as a screenshot).
After the first run I get the following error: ConcurrentHashMap not found.
And on every run after the first one: EnvironmentManager can't be initialized
What am I not doing right here? Thanks a lot for the help!
Regards,
al
According to http://mazanatti.info/index.php?/archives/68-JMeter-configure-and-post-JMS-messages-to-Weblogic-Server.html, you probably need wlthint3client.jar instead of weblogic.jar and wlclient.jar.
A combo of wljmsclient.jar and wlthint3client.jar works for me.
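For completeness, a sketch of the setup these answers describe; the jar locations and JNDI names below are placeholders for your environment:

# copy the thin client jars into JMeter's lib directory, then restart JMeter
cp $WL_HOME/server/lib/wlthint3client.jar $JMETER_HOME/lib/
cp $WL_HOME/server/lib/wljmsclient.jar $JMETER_HOME/lib/

# typical JMS sampler / JNDI settings for WebLogic:
#   Initial Context Factory: weblogic.jndi.WLInitialContextFactory
#   Provider URL:            t3://your-weblogic-host:7001
#   Connection Factory:      jms/MyConnectionFactory   (your JNDI name)
#   Queue:                   jms/MyQueue               (your JNDI name)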
I am running a Cascading (actually Scalding) Hadoop job that uses the DistributedCache for dependent jars.
The first time it works fine (meaning the classpath is set up correctly), but then it starts failing with a ClassNotFoundException:
java.io.IOException: Split class cascading.tap.hadoop.io.MultiInputSplit not found
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:387)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:412)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: cascading.tap.hadoop.io.MultiInputSplit
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:385)
...
Did anybody else have success with Cascading and jars in the DistributedCache?
This message seems to imply that Cascading has some internal handling of the distributed cache jars. Any light you can shed on this?
Edit: I am using Cascading 2.1.6 on Hadoop 1.0.3
Which version of Hadoop are you using? There are some known problems with the distributed cache in 0.20.2. Can you try switching to a newer version?
Chris K Wensel, the author of Cascading, responded on the mailing list that Cascading does not do anything with the DistributedCache.
I looked further and it was a problem in my code -- I did not add these files to the DistributedCache properly.
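For anyone landing here with the same symptom, a minimal sketch of adding a dependent jar to the task classpath on Hadoop 1.x (the HDFS path is a placeholder, and the jar must already have been uploaded to HDFS):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

public class CacheJarSetup {
    public static void addDependentJar(Configuration conf) throws Exception {
        // placeholder path: upload the jar to HDFS first
        Path jarOnHdfs = new Path("/libs/cascading-core-2.1.6.jar");
        // addFileToClassPath both distributes the jar and appends it to the
        // task JVM's classpath; addCacheFile alone only distributes the file
        DistributedCache.addFileToClassPath(jarOnHdfs, conf);
    }
}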